LDRD Final Report: Global Optimization for Engineering Science Problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
HART,WILLIAM E.
1999-12-01
For a wide variety of scientific and engineering problems, the desired solution corresponds to an optimal set of objective function parameters, where the objective function measures a solution's quality. The main goal of the LDRD "Global Optimization for Engineering Science Problems" was the development of new robust and efficient optimization algorithms that can be used to find globally optimal solutions to complex optimization problems. This SAND report summarizes the technical accomplishments of this LDRD, discusses lessons learned, and describes open research issues.
Solving mixed integer nonlinear programming problems using spiral dynamics optimization algorithm
NASA Astrophysics Data System (ADS)
Kania, Adhe; Sidarto, Kuntjoro Adji
2016-02-01
Many engineering and practical problems can be modeled as mixed integer nonlinear programs. This paper proposes to solve such problems with a modified version of the spiral dynamics inspired optimization method of Tamura and Yasuda. Four test cases have been examined, including problems in engineering and sport. The method succeeds in obtaining the optimal result in all test cases.
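A minimal two-dimensional sketch of the spiral update at the core of the Tamura-Yasuda method follows. The paper's mixed-integer handling, parameter settings, and test problems are not reproduced; the bound handling and the toy objective are assumptions for illustration only.

```python
import numpy as np

# 2-D spiral dynamics sketch (continuous variables only): every search point is
# rotated and contracted around the current best point x*.
def spiral_optimize(f, bounds, n_points=30, iters=200, r=0.95, theta=np.pi / 4, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    X = rng.uniform(lo, hi, size=(n_points, 2))           # search points
    R = r * np.array([[np.cos(theta), -np.sin(theta)],    # contracting rotation matrix
                      [np.sin(theta),  np.cos(theta)]])
    best = min(X, key=f)                                  # current centre x*
    for _ in range(iters):
        X = (X - best) @ R.T + best                       # x <- R (x - x*) + x*
        X = np.clip(X, lo, hi)
        cand = min(X, key=f)
        if f(cand) < f(best):
            best = cand.copy()
    return best, f(best)

# toy usage: minimize a shifted sphere function
sphere = lambda x: float((x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2)
print(spiral_optimize(sphere, ([-5, -5], [5, 5])))
```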
NASA Astrophysics Data System (ADS)
Aittokoski, Timo; Miettinen, Kaisa
2008-07-01
Solving real-life engineering problems can be difficult because they often have multiple conflicting objectives, the objective functions involved are highly nonlinear and they contain multiple local minima. Furthermore, function values are often produced via a time-consuming simulation process. These facts suggest the need for an automated optimization tool that is efficient (in terms of number of objective function evaluations) and capable of solving global and multiobjective optimization problems. In this article, the requirements on a general simulation-based optimization system are discussed and such a system is applied to optimize the performance of a two-stroke combustion engine. In the example of a simulation-based optimization problem, the dimensions and shape of the exhaust pipe of a two-stroke engine are altered, and values of three conflicting objective functions are optimized. These values are derived from power output characteristics of the engine. The optimization approach involves interactive multiobjective optimization and provides a convenient tool to balance between conflicting objectives and to find good solutions.
Conceptual comparison of population based metaheuristics for engineering problems.
Adekanmbi, Oluwole; Green, Paul
2015-01-01
Metaheuristic algorithms are well-known optimization tools which have been employed for solving a wide range of optimization problems. Several extensions of differential evolution have been adopted in solving constrained and unconstrained multiobjective optimization problems, but in this study, the third version of generalized differential evolution (GDE3) is used for solving practical engineering problems. The GDE3 metaheuristic modifies the selection process of basic differential evolution and extends the DE/rand/1/bin strategy to practical applications. The performance of the metaheuristic is investigated through engineering design optimization problems and the results are reported. The comparison of the numerical results with those of other metaheuristic techniques demonstrates the promising performance of the algorithm as a robust optimization tool for practical purposes.
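A minimal sketch of the DE/rand/1/bin operation that GDE3 builds on is shown below; GDE3's Pareto-based selection and constraint handling are deliberately omitted, and the control parameters F and CR are illustrative.

```python
import numpy as np

# DE/rand/1 mutation followed by binomial (bin) crossover for one generation.
def de_rand_1_bin(pop, F=0.5, CR=0.9, rng=None):
    rng = rng or np.random.default_rng()
    n, d = pop.shape
    trials = np.empty_like(pop)
    for i in range(n):
        r1, r2, r3 = rng.choice([j for j in range(n) if j != i], 3, replace=False)
        mutant = pop[r1] + F * (pop[r2] - pop[r3])      # DE/rand/1 mutation
        cross = rng.random(d) < CR                      # binomial crossover mask
        cross[rng.integers(d)] = True                   # guarantee at least one mutant gene
        trials[i] = np.where(cross, mutant, pop[i])
    return trials

# basic DE keeps a trial only if it is no worse than its parent; GDE3 instead
# keeps both vectors when neither dominates the other (the multiobjective case)
```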
A Comparative Study of Optimization Algorithms for Engineering Synthesis.
1983-03-01
The ADS program demonstrates the flexibility a design engineer would have in selecting an optimization algorithm best suited to solve a particular problem. The ADS library of design optimization algorithms was developed by Vanderplaats.
Lessons Learned During Solutions of Multidisciplinary Design Optimization Problems
NASA Technical Reports Server (NTRS)
Patnaik, Suna N.; Coroneos, Rula M.; Hopkins, Dale A.; Lavelle, Thomas M.
2000-01-01
Optimization research at NASA Glenn Research Center has addressed the design of structures, aircraft and airbreathing propulsion engines. During solution of the multidisciplinary problems, several issues were encountered. This paper lists four issues and discusses the strategies adapted for their resolution: (1) The optimization process can lead to an inefficient local solution. This deficiency was encountered during design of an engine component. The limitation was overcome through an augmentation of animation into optimization. (2) Optimum solutions obtained were infeasible for aircraft and air-breathing propulsion engine problems. Alleviation of this deficiency required a cascading of multiple algorithms. (3) Profile optimization of a beam produced an irregular shape. Engineering intuition restored the regular shape for the beam. (4) The solution obtained for a cylindrical shell by a subproblem strategy converged to a design that can be difficult to manufacture. Resolution of this issue remains a challenge. The issues and resolutions are illustrated through six problems: (1) design of an engine component, (2) synthesis of a subsonic aircraft, (3) operation optimization of a supersonic engine, (4) design of a wave-rotor-topping device, (5) profile optimization of a cantilever beam, and (6) design of a cylindrical shell. The combined effort of designers and researchers can bring the optimization method from academia to industry.
NASA Technical Reports Server (NTRS)
Liu, Gao-Lian
1991-01-01
Advances in inverse design and optimization theory in engineering fields in China are presented. Two original approaches, the image-space approach and the variational approach, are discussed in terms of turbomachine aerodynamic inverse design. Other areas of research in turbomachine aerodynamic inverse design include the improved mean-streamline (stream surface) method and optimization theory based on optimal control. Among the additional engineering fields discussed are the following: the inverse problem of heat conduction, free-surface flow, variational cogeneration of optimal grid and flow field, and optimal meshing theory of gears.
An optimal design of wind turbine and ship structure based on neuro-response surface method
NASA Astrophysics Data System (ADS)
Lee, Jae-Chul; Shin, Sung-Chul; Kim, Soo-Young
2015-07-01
The geometry of engineering systems affects their performances. For this reason, the shape of engineering systems needs to be optimized in the initial design stage. However, engineering system design problems consist of multi-objective optimization, and the performance analysis using commercial code or numerical analysis is generally time-consuming. To solve these problems, many engineers perform the optimization using an approximation model (response surface). The Response Surface Method (RSM) is generally used to predict system performance in the engineering research field, but RSM presents some prediction errors for highly nonlinear systems. The major objective of this research is to establish an optimal design method for multi-objective problems and confirm its applicability. The proposed process is composed of three parts: definition of geometry, generation of response surface, and optimization process. To reduce the time for performance analysis and minimize the prediction errors, the approximation model is generated using the Backpropagation Artificial Neural Network (BPANN), which is referred to as the Neuro-Response Surface Method (NRSM). The optimization is done for the generated response surface by the non-dominated sorting genetic algorithm II (NSGA-II). Through case studies of a marine system and a ship structure (the substructure of a floating offshore wind turbine considering hydrodynamic performance and bulk carrier bottom stiffened panels considering structural performance), we have confirmed the applicability of the proposed method for multi-objective side constraint optimization problems.
A Cascade Optimization Strategy for Solution of Difficult Multidisciplinary Design Problems
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Coroneos, Rula M.; Hopkins, Dale A.; Berke, Laszlo
1996-01-01
A research project to comparatively evaluate 10 nonlinear optimization algorithms was recently completed. A conclusion was that no single optimizer could successfully solve all 40 problems in the test bed, even though most optimizers successfully solved at least one-third of the problems. We realized that improved search directions and step lengths, available in the 10 optimizers compared, were not likely to alleviate the convergence difficulties. For the solution of those difficult problems, we devised an alternative approach called the cascade optimization strategy. The cascade strategy uses several optimizers, one followed by another in a specified sequence, to solve a problem. A pseudorandom scheme perturbs design variables between the optimizers. The cascade strategy has been tested successfully in the design of supersonic and subsonic aircraft configurations and air-breathing engines for high-speed civil transport applications. These problems could not be successfully solved by an individual optimizer. The cascade optimization strategy, however, generated feasible optimum solutions for both aircraft and engine problems. This paper presents the cascade strategy and solutions to a number of these problems.
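A minimal sketch of the cascade idea is given below, using off-the-shelf SciPy optimizers as stand-ins; the optimizer sequence, perturbation size, and test function are assumptions, not the NASA implementation.

```python
import numpy as np
from scipy.optimize import minimize

# Cascade strategy sketch: several optimizers run in sequence, and the design
# is pseudorandomly perturbed between stages before being handed onward.
def cascade(fun, x0, methods=("Nelder-Mead", "Powell", "BFGS"), jitter=0.02, seed=0):
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, float)
    for k, method in enumerate(methods):
        x = minimize(fun, x, method=method).x                      # hand off the best design
        if k < len(methods) - 1:                                   # perturb only between stages
            x = x * (1.0 + jitter * rng.standard_normal(x.shape))
    return x

# toy usage on the Rosenbrock function
rosen = lambda x: (1.0 - x[0]) ** 2 + 100.0 * (x[1] - x[0] ** 2) ** 2
print(cascade(rosen, [-1.2, 1.0]))
```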
Optimization applications in aircraft engine design and test
NASA Technical Reports Server (NTRS)
Pratt, T. K.
1984-01-01
Starting with the NASA-sponsored STAEBL program, optimization methods based primarily upon the versatile program COPES/CONMIN were introduced over the past few years to a broad spectrum of engineering problems in structural optimization, engine design, engine test, and more recently, manufacturing processes. By automating design and testing processes, many repetitive and costly trade-off studies have been replaced by optimization procedures. Rather than taking engineers and designers out of the loop, optimization has, in fact, put them more in control by providing sophisticated search techniques. The ultimate decision whether to accept or reject an optimal feasible design still rests with the analyst. Feedback obtained from this decision process has been invaluable since it can be incorporated into the optimization procedure to make it more intelligent. On several occasions, optimization procedures have produced novel designs, such as the nonsymmetric placement of rotor case stiffener rings, not anticipated by engineering designers. In another case, a particularly difficult resonance constraint could not be satisfied using hand iterations for a compressor blade; when the STAEBL program was applied to the problem, a feasible solution was obtained in just two iterations.
NASA Technical Reports Server (NTRS)
Riehl, John P.; Sjauw, Waldy K.
2004-01-01
Trajectory, mission, and vehicle engineers concern themselves with finding the best way for an object to get from one place to another. These engineers rely upon special software to assist them in this. For a number of years, many engineers have used the OTIS program for this assistance. With OTIS, an engineer can fully optimize trajectories for airplanes, launch vehicles like the space shuttle, interplanetary spacecraft, and orbital transfer vehicles. OTIS provides four modes of operation, with each mode providing successively stronger optimization capability. The most powerful mode uses a mathematical method called implicit integration to solve what engineers and mathematicians call the optimal control problem. OTIS 3.2, which was developed at the NASA Glenn Research Center, is the latest release of this industry workhorse and features new capabilities for parameter optimization and mission design. OTIS stands for Optimal Control by Implicit Simulation, and it is implicit integration that makes OTIS so powerful at solving trajectory optimization problems. Why is this so important? The optimization process not only determines how to get from point A to point B, but it can also determine how to do this with the least amount of propellant, with the lightest starting weight, or in the fastest time possible while avoiding certain obstacles along the way. There are numerous conditions that engineers can use to define optimal, or best. OTIS provides a framework for defining the starting and ending points of the trajectory (point A and point B), the constraints on the trajectory (requirements like "avoid these regions where obstacles occur"), and what is being optimized (e.g., minimize propellant). The implicit integration method can find solutions to very complicated problems when there is not a lot of information available about what the optimal trajectory might be. The method was first developed for solving two-point boundary value problems and was adapted for use in OTIS. Implicit integration usually allows OTIS to find solutions to problems much faster than programs that use explicit integration and parametric methods. Consequently, OTIS is best suited to solving very complicated and highly constrained problems.
A General-Purpose Optimization Engine for Multi-Disciplinary Design Applications
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Hopkins, Dale A.; Berke, Laszlo
1996-01-01
A general purpose optimization tool for multidisciplinary applications, which in the literature is known as COMETBOARDS, is being developed at NASA Lewis Research Center. The modular organization of COMETBOARDS includes several analyzers and state-of-the-art optimization algorithms along with their cascading strategy. The code structure allows quick integration of new analyzers and optimizers. The COMETBOARDS code reads input information from a number of data files, formulates a design as a set of multidisciplinary nonlinear programming problems, and then solves the resulting problems. COMETBOARDS can be used to solve a large problem which can be defined through multiple disciplines, each of which can be further broken down into several subproblems. Alternatively, a small portion of a large problem can be optimized in an effort to improve an existing system. Some of the other unique features of COMETBOARDS include design variable formulation, constraint formulation, subproblem coupling strategy, global scaling technique, analysis approximation, use of either sequential or parallel computational modes, and so forth. The special features and unique strengths of COMETBOARDS assist convergence and reduce the amount of CPU time used to solve the difficult optimization problems of aerospace industries. COMETBOARDS has been successfully used to solve a number of problems, including structural design of space station components, design of nozzle components of an air-breathing engine, configuration design of subsonic and supersonic aircraft, mixed flow turbofan engines, wave rotor topped engines, and so forth. This paper introduces the COMETBOARDS design tool and its versatility, which is illustrated by citing examples from structures, aircraft design, and air-breathing propulsion engine design.
Computational alternatives to obtain time optimal jet engine control. M.S. Thesis
NASA Technical Reports Server (NTRS)
Basso, R. J.; Leake, R. J.
1976-01-01
Two computational methods are described for determining an open-loop time-optimal control sequence for a simple single-spool turbojet engine modeled by a set of nonlinear differential equations. Both methods are modifications of widely accepted algorithms which can solve fixed-time unconstrained optimal control problems with a free right end. The constrained problems to be considered have fixed right ends and free time. Dynamic programming is defined on a standard problem, and it yields a successive approximation solution to the time-optimal problem of interest. A feedback control law is obtained, and it is then used to determine the corresponding open-loop control sequence. The Fletcher-Reeves conjugate gradient method has been selected for adaptation to solve a nonlinear optimal control problem with state variable and control constraints.
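For reference, the Fletcher-Reeves update mentioned above has the standard textbook form (stated generically here, not taken from the thesis):

```latex
d_0 = -\nabla f(x_0), \qquad
\beta_k^{\mathrm{FR}} = \frac{\lVert \nabla f(x_{k+1}) \rVert^2}{\lVert \nabla f(x_k) \rVert^2}, \qquad
d_{k+1} = -\nabla f(x_{k+1}) + \beta_k^{\mathrm{FR}} \, d_k
```

with a line search performed along each direction d_k. Adapting it to handle state and control constraints, as the thesis does, requires additional machinery not shown here.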
Using Approximations to Accelerate Engineering Design Optimization
NASA Technical Reports Server (NTRS)
Torczon, Virginia; Trosset, Michael W.
1998-01-01
Optimization problems that arise in engineering design are often characterized by several features that hinder the use of standard nonlinear optimization techniques. Foremost among these features is that the functions used to define the engineering optimization problem often are computationally intensive. Within a standard nonlinear optimization algorithm, the computational expense of evaluating the functions that define the problem would necessarily be incurred for each iteration of the optimization algorithm. Faced with such prohibitive computational costs, an attractive alternative is to make use of surrogates within an optimization context since surrogates can be chosen or constructed so that they are typically much less expensive to compute. For the purposes of this paper, we will focus on the use of algebraic approximations as surrogates for the objective. In this paper we introduce the use of so-called merit functions that explicitly recognize the desirability of improving the current approximation to the objective during the course of the optimization. We define and experiment with the use of merit functions chosen to simultaneously improve both the solution to the optimization problem (the objective) and the quality of the approximation. Our goal is to further improve the effectiveness of our general approach without sacrificing any of its rigor.
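One simple way to realize such a merit function is sketched below: it rewards candidate points that either look good on the surrogate or lie far from already evaluated designs, so that new expensive evaluations also improve the approximation. The specific form and the weight rho are illustrative assumptions, not necessarily the authors' exact definition.

```python
import numpy as np

# Merit function sketch: trade off minimizing the surrogate against improving it.
def merit(x, surrogate, evaluated_points, rho=1.0):
    x = np.asarray(x, float)
    dist = min(np.linalg.norm(x - np.asarray(p, float)) for p in evaluated_points)
    return surrogate(x) - rho * dist    # low surrogate value OR large distance is good
```

Minimizing merit(...) over the design space selects the next point to evaluate with the expensive objective; rho controls the balance between exploiting the current approximation and refining it.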
Welding As Science: Applying Basic Engineering Principles to the Discipline
NASA Technical Reports Server (NTRS)
Nunes, A. C., Jr.
2010-01-01
This Technical Memorandum provides sample problems illustrating ways in which basic engineering science has been applied to the discipline of welding. Perhaps inferences may be drawn regarding optimal approaches to particular welding problems, as well as for the optimal education for welding engineers. Perhaps also some readers may be attracted to the science(s) of welding and may make worthwhile contributions to the discipline.
NASA Technical Reports Server (NTRS)
1995-01-01
The design of a High-Speed Civil Transport (HSCT) air-breathing propulsion system for multimission, variable-cycle operations was successfully optimized through a soft coupling of the engine performance analyzer NASA Engine Performance Program (NEPP) to a multidisciplinary optimization tool COMETBOARDS that was developed at the NASA Lewis Research Center. The design optimization of this engine was cast as a nonlinear optimization problem, with engine thrust as the merit function and the bypass ratios, r-values of fans, fuel flow, and other factors as important active design variables. Constraints were specified on factors including the maximum speed of the compressors, the positive surge margins for the compressors with specified safety factors, the discharge temperature, the pressure ratios, and the mixer extreme Mach number. Solving the problem by using the most reliable optimization algorithm available in COMETBOARDS would provide feasible optimum results only for a portion of the aircraft flight regime because of the large number of mission points (defined by altitudes, Mach numbers, flow rates, and other factors), diverse constraint types, and overall poor conditioning of the design space. Only the cascade optimization strategy of COMETBOARDS, which was devised especially for difficult multidisciplinary applications, could successfully solve a number of engine design problems for their flight regimes. Furthermore, the cascade strategy converged to the same global optimum solution even when it was initiated from different design points. Multiple optimizers in a specified sequence, pseudorandom damping, and reduction of the design space distortion via a global scaling scheme are some of the key features of the cascade strategy. A COMETBOARDS solution for an HSCT engine concept (Mach-2.4 mixed-flow turbofan), along with its configuration, is shown in the accompanying figure; the optimum thrust is normalized with respect to NEPP results. COMETBOARDS added value in the design optimization of the HSCT engine.
Genetic-evolution-based optimization methods for engineering design
NASA Technical Reports Server (NTRS)
Rao, S. S.; Pan, T. S.; Dhingra, A. K.; Venkayya, V. B.; Kumar, V.
1990-01-01
This paper presents the applicability of a biological model, based on genetic evolution, for engineering design optimization. Algorithms embodying the ideas of reproduction, crossover, and mutation are developed and applied to solve different types of structural optimization problems. Both continuous and discrete variable optimization problems are solved. A two-bay truss for maximum fundamental frequency is considered to demonstrate the continuous variable case. The selection of locations of actuators in an actively controlled structure, for minimum energy dissipation, is considered to illustrate the discrete variable case.
Liu, Ping; Li, Guodong; Liu, Xinggao
2015-09-01
Control vector parameterization (CVP) is an important approach to engineering optimization for industrial dynamic processes. However, its major defect, the low optimization efficiency caused by repeatedly solving the relevant differential equations in the generated nonlinear programming (NLP) problem, limits its wide application in the engineering optimization of industrial dynamic processes. A novel, highly effective control parameterization approach, fast-CVP, is first proposed to improve the optimization efficiency for industrial dynamic processes, where costate gradient formulae are employed and a fast approximate scheme is presented to solve the differential equations in dynamic process simulation. Three well-known engineering optimization benchmark problems for industrial dynamic processes are used as illustrations. The results show that the proposed fast approach achieves fine performance: at least 90% of the computation time can be saved in contrast to the traditional CVP method, which reveals the effectiveness of the proposed fast engineering optimization approach for industrial dynamic processes.
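The following sketch illustrates plain CVP: the control is piecewise constant on N intervals and the resulting NLP is solved with a generic optimizer. The dynamics, cost, and bounds are assumed for illustration, and the paper's fast approximate integration scheme and costate gradient formulae are not reproduced.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

# Evaluate the cost of a piecewise-constant control by integrating interval by interval.
def total_cost(u_pieces, t_final=1.0, x0=1.0):
    N = len(u_pieces)
    dt, x, cost = t_final / N, x0, 0.0
    for u in u_pieces:
        def rhs(t, y):                    # state x' = -x + u, running cost x^2 + u^2
            return [-y[0] + u, y[0] ** 2 + u ** 2]
        sol = solve_ivp(rhs, (0.0, dt), [x, cost])
        x, cost = sol.y[0, -1], sol.y[1, -1]
    return cost

if __name__ == "__main__":
    N = 10
    res = minimize(total_cost, np.zeros(N), bounds=[(-2.0, 2.0)] * N)
    print(res.x, res.fun)
```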
Enhanced and Conventional Project-Based Learning in an Engineering Design Module
ERIC Educational Resources Information Center
Chua, K. J.; Yang, W. M.; Leo, H. L.
2014-01-01
Engineering education focuses chiefly on students' ability to solve problems. While most engineering students are proficient in solving paper questions, they may not be proficient at providing optimal solutions to pragmatic project-based problems that require systematic learning strategy, innovation, problem-solving, and execution. The…
Inverse problems in the design, modeling and testing of engineering systems
NASA Technical Reports Server (NTRS)
Alifanov, Oleg M.
1991-01-01
Formulations, classification, areas of application, and approaches to solving different inverse problems are considered for the design of structures, modeling, and experimental data processing. Problems in the practical implementation of theoretical-experimental methods based on solving inverse problems are analyzed in order to identify mathematical models of physical processes, aid in input data preparation for design parameter optimization, help in design parameter optimization itself, and to model experiments, large-scale tests, and real tests of engineering systems.
Cascade Optimization Strategy for Aircraft and Air-Breathing Propulsion System Concepts
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Lavelle, Thomas M.; Hopkins, Dale A.; Coroneos, Rula M.
1996-01-01
Design optimization for subsonic and supersonic aircraft and for air-breathing propulsion engine concepts has been accomplished by soft-coupling the Flight Optimization System (FLOPS) and the NASA Engine Performance Program analyzer (NEPP) to the NASA Lewis multidisciplinary optimization tool COMETBOARDS. Aircraft and engine design problems, with their associated constraints and design variables, were cast as nonlinear optimization problems with aircraft weight and engine thrust as the respective merit functions. Because of the diversity of constraint types and the overall distortion of the design space, the most reliable single optimization algorithm available in COMETBOARDS could not produce a satisfactory feasible optimum solution. Some of COMETBOARDS' unique features, which include a cascade strategy, variable and constraint formulations, and scaling devised especially for difficult multidisciplinary applications, successfully optimized the performance of both aircraft and engines. The cascade method has two principal steps: in the first, the solution initiates from a user-specified design and optimizer; in the second, the optimum design obtained in the first step, with some random perturbation, is used to begin the next specified optimizer. The second step is repeated for a specified sequence of optimizers or until a successful solution of the problem is achieved. A successful solution should satisfy the specified convergence criteria and have several active constraints but no violated constraints. The cascade strategy available in the combined COMETBOARDS, FLOPS, and NEPP design tool converges to the same global optimum solution even when it starts from different design points. This reliable and robust design tool eliminates manual intervention in the design of aircraft and of air-breathing propulsion engines, where it eases the cycle analysis procedures. The combined code is also much easier to use, which is an added benefit. This paper describes COMETBOARDS and its cascade strategy and illustrates the capability of the combined design tool through the optimization of a subsonic aircraft and a high-bypass-turbofan wave-rotor-topped engine.
A new chaotic multi-verse optimization algorithm for solving engineering optimization problems
NASA Astrophysics Data System (ADS)
Sayed, Gehad Ismail; Darwish, Ashraf; Hassanien, Aboul Ella
2018-03-01
Multi-verse optimization (MVO) is one of the recent meta-heuristic optimization algorithms. The main inspiration of this algorithm came from the multi-verse theory in physics. However, like most optimization algorithms, MVO suffers from a low convergence rate and entrapment in local optima. In this paper, a new chaotic multi-verse optimization algorithm (CMVO) is proposed to overcome these problems. The proposed CMVO is applied to 13 benchmark functions and 7 well-known design problems in the engineering and mechanical field, namely three-bar truss, speed reducer design, pressure vessel, spring design, welded beam, rolling element bearing, and multiple disc clutch brake. In the current study, a modified feasibility-based mechanism is employed to handle constraints. In this mechanism, four rules are used to handle the constraints by maintaining a balance between feasible and infeasible solutions. Moreover, 10 well-known chaotic maps are used to improve the performance of MVO. The experimental results showed that CMVO outperforms other meta-heuristic optimization algorithms on most of the optimization problems. The results also reveal that the sine chaotic map is the most appropriate map to significantly boost MVO's performance.
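As an illustration of the chaotic-map ingredient, the sketch below generates a sine-map sequence of the kind that can replace uniform random numbers inside a metaheuristic such as MVO. Which MVO quantities are driven by the map, and the map constant, are assumptions made here for illustration.

```python
import numpy as np

# Sine chaotic map: x_{k+1} = (a/4) * sin(pi * x_k); stays in (0, 1) for a = 4.
def sine_map(x0=0.7, a=4.0, n=100):
    x, out = x0, []
    for _ in range(n):
        x = (a / 4.0) * np.sin(np.pi * x)
        out.append(x)
    return np.array(out)

chaos = sine_map(n=5)
print(chaos)   # e.g. substitute chaos[k] for rand() when updating candidate universes
```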
Class and Home Problems: Optimization Problems
ERIC Educational Resources Information Center
Anderson, Brian J.; Hissam, Robin S.; Shaeiwitz, Joseph A.; Turton, Richard
2011-01-01
Optimization problems suitable for all levels of chemical engineering students are available. These problems do not require advanced mathematical techniques, since they can be solved using typical software used by students and practitioners. The method used to solve these problems forces students to understand the trends for the different terms…
OPTIMIZATION OF COUNTERCURRENT STAGED PROCESSES.
Descriptors: (*Chemical engineering, optimization), (*Distillation, optimization), industrial production, industrial equipment, mathematical models, difference equations, nonlinear programming, boundary value problems, numerical integration.
Proposal of Evolutionary Simplex Method for Global Optimization Problem
NASA Astrophysics Data System (ADS)
Shimizu, Yoshiaki
To make agile decisions in a rational manner, the role of optimization engineering has been increasingly recognized under diversified customer demand. With this point of view, in this paper we propose a new evolutionary method serving as an optimization technique in the paradigm of optimization engineering. The developed method has prospects for globally solving the various complicated problems appearing in real-world applications. It is evolved from the conventional method known as Nelder and Mead's Simplex method by virtue of ideas borrowed from recent meta-heuristic methods such as PSO. After presenting an algorithm to handle linear inequality constraints effectively, we validate the effectiveness of the proposed method through comparison with other methods using several benchmark problems.
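For context, the core Nelder-Mead step that the proposed method evolves from can be sketched as follows, with textbook coefficients; the paper's PSO-inspired evolution and its linear-inequality constraint handling are not shown.

```python
import numpy as np

# One reflection/expansion/contraction step of the Nelder-Mead simplex method.
def simplex_step(f, simplex, alpha=1.0, gamma=2.0, rho=0.5):
    simplex = sorted(simplex, key=f)               # order vertices best ... worst
    worst, centroid = simplex[-1], np.mean(simplex[:-1], axis=0)
    xr = centroid + alpha * (centroid - worst)     # reflection
    if f(xr) < f(simplex[0]):
        xe = centroid + gamma * (xr - centroid)    # expansion
        simplex[-1] = xe if f(xe) < f(xr) else xr
    elif f(xr) < f(simplex[-2]):
        simplex[-1] = xr
    else:
        simplex[-1] = centroid + rho * (worst - centroid)   # contraction
    return simplex
```

The shrink step and any population-level evolution are omitted for brevity.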
Borescope Inspection Management for Engine
NASA Astrophysics Data System (ADS)
Zhongda, Yuan
2018-03-01
In this paper, we try to explain the problems that need to be improved from the two perspectives of maintenance program management and maintenance human-risk control. On the basis of an optimization analysis of the borescope inspection maintenance scheme, the defect characteristics and expansion rules of engine hot-end components are summarized, and some optimization measures are introduced. The paper analyses the human-risk problem of engine borescope inspection from the aspects of qualification management, training requirements, and perfection of the system, and puts forward some suggestions on management.
Solving bi-level optimization problems in engineering design using kriging models
NASA Astrophysics Data System (ADS)
Xia, Yi; Liu, Xiaojie; Du, Gang
2018-05-01
Stackelberg game-theoretic approaches are applied extensively in engineering design to handle distributed collaboration decisions. Bi-level genetic algorithms (BLGAs) and response surfaces have been used to solve the corresponding bi-level programming models. However, the computational costs for BLGAs often increase rapidly with the complexity of lower-level programs, and optimal solution functions sometimes cannot be approximated by response surfaces. This article proposes a new method, namely the optimal solution function approximation by kriging model (OSFAKM), in which kriging models are used to approximate the optimal solution functions. A detailed example demonstrates that OSFAKM can obtain better solutions than BLGAs and response surface-based methods, and at the same time reduce the workload of computation remarkably. Five benchmark problems and a case study of the optimal design of a thin-walled pressure vessel are also presented to illustrate the feasibility and potential of the proposed method for bi-level optimization in engineering design.
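A minimal sketch of the OSFAKM idea on an assumed toy bi-level problem is shown below: the lower-level (follower) problem is solved exactly at a few sampled upper-level designs, a kriging (Gaussian process) model of the optimal response y*(x) is fitted, and the upper level is then optimized over its own variables alone using that surrogate.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from scipy.optimize import minimize, minimize_scalar

def follower_opt(x):                     # y*(x) = argmin_y (y - x)^2 + 0.1*y^2
    return minimize_scalar(lambda y: (y - x) ** 2 + 0.1 * y ** 2).x

X = np.linspace(0.0, 5.0, 8).reshape(-1, 1)                 # sampled leader designs
Y = np.array([follower_opt(float(x[0])) for x in X])        # exact lower-level optima
gp = GaussianProcessRegressor(normalize_y=True).fit(X, Y)   # kriging surrogate of y*(x)

def leader_obj(x):                       # leader cost evaluated with the surrogate
    y_hat = float(gp.predict(np.atleast_2d(x))[0])
    return (x[0] - 3.0) ** 2 + (y_hat - 2.0) ** 2

print(minimize(leader_obj, x0=[1.0], bounds=[(0.0, 5.0)]).x)
```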
NASA Astrophysics Data System (ADS)
Savelyev, Andrey; Anisimov, Kirill; Kazhan, Egor; Kursakov, Innocentiy; Lysenkov, Alexandr
2016-10-01
The paper is devoted to the development of a methodology for optimizing the external aerodynamics of an engine. The optimization procedure is based on numerical solution of the Reynolds-averaged Navier-Stokes equations. A surrogate-based method is used for the optimization. As a test problem, the optimal shape design of a turbofan nacelle is considered. The results of the first stage, which investigates a classic airplane configuration with the engine located under the wing, are presented. The described optimization procedure is considered in the context of third-generation multidisciplinary optimization, developed in the AGILE project.
Design and multi-physics optimization of rotary MRF brakes
NASA Astrophysics Data System (ADS)
Topcu, Okan; Taşcıoğlu, Yiğit; Konukseven, Erhan İlhan
2018-03-01
Particle swarm optimization (PSO) is a popular method for solving optimization problems. However, the calculations for each particle become excessive as the number of particles and the complexity of the problem increase. As a result, the execution speed becomes too slow to reach the optimized solution. Thus, this paper proposes an automated design and optimization method for rotary MRF brakes and similar multi-physics problems. A modified PSO algorithm is developed for solving multi-physics engineering optimization problems. The difference between the proposed method and conventional PSO is that the original single population is split into several subpopulations according to a division of labor. The distribution of tasks and the transfer of information to the next party are inspired by the behavior of a hunting party. Simulation results show that the proposed modified PSO algorithm can overcome the heavy computational burden of multi-physics problems while improving accuracy. Wire type, MR fluid type, magnetic core material, and ideal current inputs have been determined by the optimization process. To the best of the authors' knowledge, this multi-physics approach is novel for optimizing rotary MRF brakes, and the developed PSO algorithm is capable of solving other multi-physics engineering optimization problems. The proposed method has shown better performance than conventional PSO and has provided small, lightweight, high-impedance rotary MRF brake designs.
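The sketch below shows a PSO loop split into independently run subpopulations, in the spirit of the division-of-labor modification described above. The paper's task-assignment and information-transfer rules, and the MRF brake model itself, are not reproduced; the coefficients, bounds, and test function are illustrative.

```python
import numpy as np

def pso_subpops(f, dim, n_subpops=3, subpop_size=10, iters=100,
                w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0), seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    best_x, best_f = None, np.inf
    for _ in range(n_subpops):                    # each subpopulation searches on its own
        X = rng.uniform(lo, hi, (subpop_size, dim))
        V = np.zeros_like(X)
        P, Pf = X.copy(), np.apply_along_axis(f, 1, X)
        g = P[np.argmin(Pf)].copy()
        for _ in range(iters):
            r1, r2 = rng.random(X.shape), rng.random(X.shape)
            V = w * V + c1 * r1 * (P - X) + c2 * r2 * (g - X)    # velocity update
            X = np.clip(X + V, lo, hi)
            F = np.apply_along_axis(f, 1, X)
            better = F < Pf
            P[better], Pf[better] = X[better], F[better]
            g = P[np.argmin(Pf)].copy()
        if f(g) < best_f:                         # hand the best design onward
            best_x, best_f = g, float(f(g))
    return best_x, best_f

print(pso_subpops(lambda x: float(np.sum(x ** 2)), dim=4))
```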
NASA Technical Reports Server (NTRS)
Hopkins, Dale A.
1998-01-01
A key challenge in designing the new High Speed Civil Transport (HSCT) aircraft is determining a good match between the airframe and engine. Multidisciplinary design optimization can be used to solve the problem by adjusting parameters of both the engine and the airframe. Earlier, an example problem was presented of an HSCT aircraft with four mixed-flow turbofan engines and a baseline mission to carry 305 passengers 5000 nautical miles at a cruise speed of Mach 2.4. The problem was solved by coupling NASA Lewis Research Center's design optimization testbed (COMETBOARDS) with NASA Langley Research Center's Flight Optimization System (FLOPS). The computing time expended in solving the problem was substantial, and the instability of the FLOPS analyzer at certain design points caused difficulties. In an attempt to alleviate both of these limitations, we explored the use of two approximation concepts in the design optimization process. The two concepts, which are based on neural network and linear regression approximation, provide the reanalysis capability and design sensitivity analysis information required for the optimization process. The HSCT aircraft optimization problem was solved by using three alternate approaches; that is, the original FLOPS analyzer and two approximate (derived) analyzers. The approximate analyzers were calibrated and used in three different ranges of the design variables; narrow (interpolated), standard, and wide (extrapolated).
On-Board Real-Time Optimization Control for Turbo-Fan Engine Life Extending
NASA Astrophysics Data System (ADS)
Zheng, Qiangang; Zhang, Haibo; Miao, Lizhen; Sun, Fengyong
2017-11-01
A real-time optimization control method is proposed to extend turbo-fan engine service life. This real-time optimization control is based on an on-board engine model devised with MRR-LSSVR (a multi-input multi-output recursive reduced least squares support vector regression method). To solve the optimization problem, an FSQP (feasible sequential quadratic programming) algorithm is utilized. Thermal mechanical fatigue is taken into account during the optimization process. Furthermore, to describe engine life decay, a thermal mechanical fatigue model of the engine acceleration process is established. The optimization objective function contains not only a term that yields fast engine response but also a term for the total mechanical strain range, which is positively related to engine fatigue life. Finally, simulations of a conventional optimization control that considers only engine acceleration performance and of the proposed optimization method have been conducted. The simulations demonstrate that the times of the two control methods from idle to 99.5% of maximum power are equal. However, engine life using the proposed optimization method could be increased by a surprising 36.17% compared with that using the conventional optimization control.
Engineering applications of heuristic multilevel optimization methods
NASA Technical Reports Server (NTRS)
Barthelemy, Jean-Francois M.
1988-01-01
Some engineering applications of heuristic multilevel optimization methods are presented and the discussion focuses on the dependency matrix that indicates the relationship between problem functions and variables. Coordination of the subproblem optimizations is shown to be typically achieved through the use of exact or approximate sensitivity analysis. Areas for further development are identified.
Optimization of a bundle divertor for FED
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hively, L.M.; Rothe, K.E.; Minkoff, M.
1982-01-01
Optimal double-T bundle divertor configurations have been obtained for the Fusion Engineering Device (FED). On-axis ripple is minimized, while satisfying a series of engineering constraints. The ensuing non-linear optimization problem is solved via a sequence of quadratic programming subproblems, using the VMCON algorithm. The resulting divertor designs are substantially improved over previous configurations.
Multiobjective optimization approach: thermal food processing.
Abakarov, A; Sushkov, Y; Almonacid, S; Simpson, R
2009-01-01
The objective of this study was to utilize a multiobjective optimization technique for the thermal sterilization of packaged foods. The multiobjective optimization approach used in this study is based on the optimization of well-known aggregating functions by an adaptive random search algorithm. The applicability of the proposed approach was illustrated by solving widely used multiobjective test problems taken from the literature. The numerical results obtained for the multiobjective test problems and for the thermal processing problem show that the proposed approach can be effectively used for solving multiobjective optimization problems arising in the food engineering field.
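A minimal sketch of optimizing a weighted-sum aggregating function with a simple adaptive random search follows; the objectives, weights, bounds, and step-adaptation rule are illustrative assumptions rather than the study's actual formulation.

```python
import numpy as np

def adaptive_random_search(objectives, weights, x0, lo, hi, iters=2000, step=0.5, seed=0):
    rng = np.random.default_rng(seed)
    agg = lambda x: sum(w * f(x) for w, f in zip(weights, objectives))
    x = np.asarray(x0, float)
    fx = agg(x)
    for _ in range(iters):
        cand = np.clip(x + step * rng.standard_normal(x.shape), lo, hi)
        fc = agg(cand)
        if fc < fx:
            x, fx, step = cand, fc, step * 1.1   # widen the step after a success
        else:
            step *= 0.99                         # otherwise shrink it slowly
    return x, fx

# sweeping the weights traces out candidate trade-off (Pareto) solutions
f1 = lambda x: float(x[0] ** 2 + x[1] ** 2)
f2 = lambda x: float((x[0] - 2.0) ** 2 + (x[1] - 2.0) ** 2)
print(adaptive_random_search([f1, f2], [0.5, 0.5], [3.0, -3.0], -4.0, 4.0))
```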
NASA Astrophysics Data System (ADS)
Ivchenko, V. M.; Prikhodko, N. A.; Grigorev, V. A.
1985-12-01
Problems associated with the development of optimal hydrojet engines and hydrojet systems with minimal irreversible losses are reviewed in the light of recent theoretical and experimental studies. In particular, attention is given to the theory of hydrojet propulsion, the hydrodynamics of supercavitating hydrojet engines, hydrojet engines with distributed water intake, and water-gas ramjets. The discussion also covers water-steam jet engines, experimental equipment and methods for testing hydrojet systems, and the principal applications of hydrojet engines.
Generating Alternative Engineering Designs by Integrating Desktop VR with Genetic Algorithms
ERIC Educational Resources Information Center
Chandramouli, Magesh; Bertoline, Gary; Connolly, Patrick
2009-01-01
This study proposes an innovative solution to the problem of multiobjective engineering design optimization by integrating desktop VR with genetic computing. Although this study considers the case of construction design as an example to illustrate the framework, the method can very much be extended to other engineering design problems as well.…
NASA Astrophysics Data System (ADS)
Gen, Mitsuo; Lin, Lin
Many combinatorial optimization problems arising in industrial engineering and operations research in the real world are very complex in nature and quite hard to solve by conventional techniques. Since the 1960s, there has been an increasing interest in imitating living beings to solve such kinds of hard combinatorial optimization problems. Simulating the natural evolutionary process of human beings results in stochastic optimization techniques called evolutionary algorithms (EAs), which can often outperform conventional optimization methods when applied to difficult real-world problems. In this survey paper, we provide a comprehensive survey of the current state of the art in the use of EAs in manufacturing and logistics systems. To demonstrate that EAs are powerful and broadly applicable stochastic search and optimization techniques, we deal with the following engineering design problems: transportation planning models, layout design models, and two-stage logistics models in logistics systems; and job-shop scheduling and resource-constrained project scheduling in manufacturing systems.
NASA Technical Reports Server (NTRS)
Carpenter, William C.
1991-01-01
Engineering optimization problems involve minimizing some function subject to constraints. In areas such as aircraft optimization, the constraint equations may come from numerous disciplines, and response surfaces ease the transfer of information between these disciplines and the optimization algorithm. They are also suited to problems which may require numerous re-optimizations, such as multi-objective function optimization, or to problems where the design space contains numerous local minima, thus requiring repeated optimizations from different initial designs. Their use has been limited, however, by the fact that response surfaces are developed from randomly selected or preselected points in the design space. Thus, they have been thought to be inefficient compared to algorithms that proceed directly to the optimum solution. A development has taken place in the last several years which may affect the desirability of using response surfaces: artificial neural nets may be more efficient in developing response surfaces than the polynomial approximations which have been used in the past. This development is the concern of this work.
DTS: Building custom, intelligent schedulers
NASA Technical Reports Server (NTRS)
Hansson, Othar; Mayer, Andrew
1994-01-01
DTS is a decision-theoretic scheduler, built on top of a flexible toolkit -- this paper focuses on how the toolkit might be reused in future NASA mission schedulers. The toolkit includes a user-customizable scheduling interface, and a 'Just-For-You' optimization engine. The customizable interface is built on two metaphors: objects and dynamic graphs. Objects help to structure problem specifications and related data, while dynamic graphs simplify the specification of graphical schedule editors (such as Gantt charts). The interface can be used with any 'back-end' scheduler, through dynamically-loaded code, interprocess communication, or a shared database. The 'Just-For-You' optimization engine includes user-specific utility functions, automatically compiled heuristic evaluations, and a postprocessing facility for enforcing scheduling policies. The optimization engine is based on BPS, the Bayesian Problem-Solver (1,2), which introduced a similar approach to solving single-agent and adversarial graph search problems.
Evolutionary and biological metaphors for engineering design
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jakiela, M.
1994-12-31
Since computing became generally available, there has been strong interest in using computers to assist and automate engineering design processes. Specifically, for design optimization and automation, nonlinear programming and artificial intelligence techniques have been extensively studied. New computational techniques, based upon the natural processes of evolution, adaptation, and learning, are showing promise because of their generality and robustness. This presentation will describe the use of two such techniques, genetic algorithms and classifier systems, for a variety of engineering design problems. Structural topology optimization, meshing, and general engineering optimization are shown as example applications.
Nonlinear Multidimensional Assignment Problems: Efficient Conic Optimization Methods and Applications
2015-06-24
Performing organization: Arizona State University, School of Mathematical & Statistical Sciences. Abstract: The major goals of this project were completed: the exact solution of previously unsolved challenging combinatorial optimization... A combinatorial optimization problem, the Directional Sensor Problem, was solved in two ways: first, heuristically in an engineering fashion, and second, exactly.
Evolutionary engineering for industrial microbiology.
Vanee, Niti; Fisher, Adam B; Fong, Stephen S
2012-01-01
Superficially, evolutionary engineering is a paradoxical field that balances competing interests. In natural settings, evolution iteratively selects and enriches subpopulations that are best adapted to a particular ecological niche using random processes such as genetic mutation. In engineering, desired approaches utilize rational prospective design to address targeted problems. When considering details of evolutionary and engineering processes, more commonality can be found. Engineering relies on detailed knowledge of the problem parameters and design properties in order to predict design outcomes that would be an optimized solution. When detailed knowledge of a system is lacking, engineers often employ algorithmic search strategies to identify empirical solutions. Evolution epitomizes this iterative optimization by continuously diversifying design options from a parental design, and then selecting the progeny designs that represent satisfactory solutions. In this chapter, the technique of applying the natural principles of evolution to engineer microbes for industrial applications is discussed to highlight the challenges and principles of evolutionary engineering.
Advanced Information Technology in Simulation Based Life Cycle Design
NASA Technical Reports Server (NTRS)
Renaud, John E.
2003-01-01
In this research a Collaborative Optimization (CO) approach for multidisciplinary systems design is used to develop a decision based design framework for non-deterministic optimization. To date, CO strategies have been developed for application to deterministic systems design problems. In this research the decision based design (DBD) framework proposed by Hazelrigg is modified for use in a collaborative optimization framework. The Hazelrigg framework as originally proposed provides a single level optimization strategy that combines engineering decisions with business decisions in a single level optimization. By transforming this framework for use in collaborative optimization, one can decompose the business and engineering decision making processes. In the new multilevel framework of Decision Based Collaborative Optimization (DBCO) the business decisions are made at the system level. These business decisions result in a set of engineering performance targets that disciplinary engineering design teams seek to satisfy as part of subspace optimizations. The Decision Based Collaborative Optimization framework more accurately models the existing relationship between business and engineering in multidisciplinary systems design.
Engineering applications of metaheuristics: an introduction
NASA Astrophysics Data System (ADS)
Oliva, Diego; Hinojosa, Salvador; Demeshko, M. V.
2017-01-01
Metaheuristic algorithms are important tools that in recent years have been used extensively in several fields. In engineering, there is a large number of problems that can be solved from an optimization point of view. This paper is an introduction to how metaheuristics can be used to solve complex engineering problems. Their use produces accurate results in problems that are computationally expensive. Experimental results support the performance obtained by the selected algorithms in such specific problems as digital filter design, image processing, and solar cell design.
Urine sampling and collection system optimization and testing
NASA Technical Reports Server (NTRS)
Fogal, G. L.; Geating, J. A.; Koesterer, M. G.
1975-01-01
A Urine Sampling and Collection System (USCS) engineering model was developed to provide for the automatic collection, volume sensing and sampling of urine from each micturition. The purpose of the engineering model was to demonstrate verification of the system concept. The objective of the optimization and testing program was to update the engineering model, to provide additional performance features and to conduct system testing to determine operational problems. Optimization tasks were defined as modifications to minimize system fluid residual and addition of thermoelectric cooling.
Aerospace applications of integer and combinatorial optimization
NASA Technical Reports Server (NTRS)
Padula, S. L.; Kincaid, R. K.
1995-01-01
Research supported by NASA Langley Research Center includes many applications of aerospace design optimization and is conducted by teams of applied mathematicians and aerospace engineers. This paper investigates the benefits from this combined expertise in solving combinatorial optimization problems. Applications range from the design of large space antennas to interior noise control. A typical problem, for example, seeks the optimal locations for vibration-damping devices on a large space structure and is expressed as a mixed-integer linear programming problem with more than 1500 design variables.
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Guptill, James D.; Hopkins, Dale A.; Lavelle, Thomas M.
2000-01-01
The NASA Engine Performance Program (NEPP) can configure and analyze almost any type of gas turbine engine that can be generated through the interconnection of a set of standard physical components. In addition, the code can optimize engine performance by changing adjustable variables under a set of constraints. However, for engine cycle problems at certain operating points, the NEPP code can encounter difficulties: nonconvergence in the currently implemented Powell's optimization algorithm and deficiencies in the Newton-Raphson solver during engine balancing. A project was undertaken to correct these deficiencies. Nonconvergence was avoided through a cascade optimization strategy, and deficiencies associated with engine balancing were eliminated through neural network and linear regression methods. An approximation-interspersed cascade strategy was used to optimize the engine's operation over its flight envelope. Replacement of Powell's algorithm by the cascade strategy improved the optimization segment of the NEPP code. The performance of the linear regression and neural network methods as alternative engine analyzers was found to be satisfactory. This report considers two examples, a supersonic mixed-flow turbofan engine and a subsonic wave-rotor-topped engine, to illustrate the results, and it discusses insights gained from the improved version of the NEPP code.
Exact solution of large asymmetric traveling salesman problems.
Miller, D L; Pekny, J F
1991-02-15
The traveling salesman problem is one of a class of difficult problems in combinatorial optimization that is representative of a large number of important scientific and engineering problems. A survey is given of recent applications and methods for solving large problems. In addition, an algorithm for the exact solution of the asymmetric traveling salesman problem is presented along with computational results for several classes of problems. The results show that the algorithm performs remarkably well for some classes of problems, determining an optimal solution even for problems with large numbers of cities, yet for other classes, even small problems thwart determination of a provably optimal solution.
Sambo, Francesco; de Oca, Marco A Montes; Di Camillo, Barbara; Toffolo, Gianna; Stützle, Thomas
2012-01-01
Reverse engineering is the problem of inferring the structure of a network of interactions between biological variables from a set of observations. In this paper, we propose an optimization algorithm, called MORE, for the reverse engineering of biological networks from time series data. The model inferred by MORE is a sparse system of nonlinear differential equations, complex enough to realistically describe the dynamics of a biological system. MORE tackles separately the discrete component of the problem, the determination of the biological network topology, and the continuous component of the problem, the strength of the interactions. This approach allows us both to enforce system sparsity, by globally constraining the number of edges, and to integrate a priori information about the structure of the underlying interaction network. Experimental results on simulated and real-world networks show that the mixed discrete/continuous optimization approach of MORE significantly outperforms standard continuous optimization and that MORE is competitive with the state of the art in terms of accuracy of the inferred networks.
NASA Astrophysics Data System (ADS)
Long, Kim Chenming
Real-world engineering optimization problems often require the consideration of multiple conflicting and noncommensurate objectives, subject to nonconvex constraint regions in a high-dimensional decision space. Further challenges occur for combinatorial multiobjective problems in which the decision variables are not continuous. Traditional multiobjective optimization methods of operations research, such as weighting and epsilon constraint methods, are ill-suited to solving these complex, multiobjective problems. This has given rise to the application of a wide range of metaheuristic optimization algorithms, such as evolutionary, particle swarm, simulated annealing, and ant colony methods, to multiobjective optimization. Several multiobjective evolutionary algorithms have been developed, including the strength Pareto evolutionary algorithm (SPEA) and the non-dominated sorting genetic algorithm (NSGA), for determining the Pareto-optimal set of non-dominated solutions. Although numerous researchers have developed a wide range of multiobjective optimization algorithms, there is a continuing need to construct computationally efficient algorithms with an improved ability to converge to globally non-dominated solutions along the Pareto-optimal front for complex, large-scale, multiobjective engineering optimization problems. This is particularly important when the multiple objective functions and constraints of the real-world system cannot be expressed in explicit mathematical representations. This research presents a novel metaheuristic evolutionary algorithm for complex multiobjective optimization problems, which combines the metaheuristic tabu search algorithm with the evolutionary algorithm (TSEA), as embodied in genetic algorithms. TSEA is successfully applied to bicriteria (i.e., structural reliability and retrofit cost) optimization of the aircraft tail structure fatigue life, which increases its reliability by prolonging fatigue life. A comparison for this application of the proposed algorithm, TSEA, with several state-of-the-art multiobjective optimization algorithms reveals that TSEA outperforms these algorithms by providing retrofit solutions with greater reliability for the same costs (i.e., closer to the Pareto-optimal front) after the algorithms are executed for the same number of generations. This research also demonstrates that TSEA competes with and, in some situations, outperforms state-of-the-art multiobjective optimization algorithms such as NSGA II and SPEA 2 when applied to classic bicriteria test problems in the technical literature and other complex, sizable real-world applications. The successful implementation of TSEA contributes to the safety of aeronautical structures by providing a systematic way to guide aircraft structural retrofitting efforts, as well as a potentially useful algorithm for a wide range of multiobjective optimization problems in engineering and other fields.
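The sketch below illustrates, generically, how a tabu list can be combined with an evolutionary loop on a bicriteria problem: offspring that revisit recently seen (rounded) designs are rejected, and survivors enter an archive of non-dominated solutions. It only illustrates the hybridization concept; it is not the TSEA algorithm, and the mutation operator, rounding, and toy objectives are assumptions.

```python
import numpy as np
from collections import deque

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def evolve(objectives, pop, generations=200, tabu_len=50, sigma=0.1, seed=0):
    rng = np.random.default_rng(seed)
    tabu = deque(maxlen=tabu_len)                      # short-term memory
    evaluate = lambda x: tuple(f(x) for f in objectives)
    archive = [(x, evaluate(x)) for x in pop]
    for _ in range(generations):
        parent = archive[rng.integers(len(archive))][0]
        child = parent + sigma * rng.standard_normal(parent.shape)   # mutation
        key = tuple(np.round(child, 2))
        if key in tabu:                                # reject recently visited designs
            continue
        tabu.append(key)
        fc = evaluate(child)
        if not any(dominates(fa, fc) for _, fa in archive):
            archive = [(x, fx) for x, fx in archive if not dominates(fc, fx)]
            archive.append((child, fc))
    return archive                                     # approximate non-dominated set

# toy bicriteria usage
f1 = lambda x: float(np.sum(x ** 2))
f2 = lambda x: float(np.sum((x - 1.0) ** 2))
rng0 = np.random.default_rng(1)
front = evolve([f1, f2], [rng0.uniform(-2.0, 2.0, 2) for _ in range(10)])
```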
Topology optimization of unsteady flow problems using the lattice Boltzmann method
NASA Astrophysics Data System (ADS)
Nørgaard, Sebastian; Sigmund, Ole; Lazarov, Boyan
2016-02-01
This article demonstrates and discusses topology optimization for unsteady incompressible fluid flows. The fluid flows are simulated using the lattice Boltzmann method, and a partial bounceback model is implemented to model the transition between fluid and solid phases in the optimization problems. The optimization problem is solved with a gradient-based method, and the design sensitivities are computed by solving the discrete adjoint problem. For moderate Reynolds number flows, it is demonstrated that topology optimization can successfully account for unsteady effects such as vortex shedding and time-varying boundary conditions. Such effects are relevant in several engineering applications, e.g. fluid pumps and control valves.
A strategy for reducing turnaround time in design optimization using a distributed computer system
NASA Technical Reports Server (NTRS)
Young, Katherine C.; Padula, Sharon L.; Rogers, James L.
1988-01-01
There is a need to explore methods for reducing the lengthy computer turnaround or clock time associated with engineering design problems. Different strategies can be employed to reduce this turnaround time. One strategy is to run validated analysis software on a network of existing smaller computers so that portions of the computation can be done in parallel. This paper focuses on the implementation of this method using two types of problems. The first type is a traditional structural design optimization problem, which is characterized by a simple data flow and a complicated analysis. The second type of problem uses an existing computer program designed to study multilevel optimization techniques. This problem is characterized by complicated data flow and a simple analysis. The paper shows that distributed computing can be a viable means for reducing computational turnaround time for engineering design problems that lend themselves to decomposition. Parallel computing can be accomplished with a minimal cost in terms of hardware and software.
Vortex generator design for aircraft inlet distortion as a numerical optimization problem
NASA Technical Reports Server (NTRS)
Anderson, Bernhard H.; Levy, Ralph
1991-01-01
Aerodynamic compatibility of aircraft/inlet/engine systems is a difficult design problem for aircraft that must operate in many different flight regimes. Takeoff, subsonic cruise, supersonic cruise, transonic maneuvering, and high altitude loiter each place different constraints on inlet design. Vortex generators, small wing-like sections mounted on the inside surfaces of the inlet duct, are used to control flow separation and engine face distortion. The design of vortex generator installations in an inlet is defined as a problem addressable by numerical optimization techniques. A performance parameter is suggested to account for both inlet distortion and total pressure loss at a series of design flight conditions. The resulting optimization problem is difficult since some of the design parameters take on integer values. If numerical procedures could be used to reduce multimillion dollar development test programs to a small set of verification tests, numerical optimization could have a significant impact on both the cost and the elapsed time to design new aircraft.
A linear decomposition method for large optimization problems. Blueprint for development
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, J.
1982-01-01
A method is proposed for decomposing large optimization problems encountered in the design of engineering systems such as an aircraft into a number of smaller subproblems. The decomposition is achieved by organizing the problem and the subordinated subproblems in a tree hierarchy and optimizing each subsystem separately. Coupling of the subproblems is accounted for by subsequent optimization of the entire system based on sensitivities of the suboptimization problem solutions at each level of the tree to variables of the next higher level. A formalization of the procedure suitable for computer implementation is developed and the state of readiness of the implementation building blocks is reviewed showing that the ingredients for the development are on the shelf. The decomposition method is also shown to be compatible with the natural human organization of the design process of engineering systems. The method is also examined with respect to the trends in computer hardware and software progress to point out that its efficiency can be amplified by network computing using parallel processors.
NASA Astrophysics Data System (ADS)
Demenev, A. G.
2018-02-01
The present work analyzes the high-performance computing (HPC) infrastructure capabilities available at Perm State University for solving aircraft engine aeroacoustics problems. We explore the ability to develop new computational aeroacoustics methods/solvers for computer-aided engineering (CAE) systems that can handle complicated industrial problems of engine noise prediction. Leading aircraft engine engineering companies, including “UEC-Aviadvigatel” JSC (our industrial partners in Perm, Russia), require such methods/solvers to optimize the geometry of aircraft engines for fan noise reduction. We analyzed how Perm State University HPC hardware resources and software services can be used efficiently for this purpose. The results demonstrate that the Perm State University HPC infrastructure is mature enough to face industrial-scale problems of developing a CAE system with HPC methods and CFD solvers.
Issues and Strategies in Solving Multidisciplinary Optimization Problems
NASA Technical Reports Server (NTRS)
Patnaik, Surya
2013-01-01
Optimization research at NASA Glenn Research Center has addressed the design of structures, aircraft, and airbreathing propulsion engines. The accumulated multidisciplinary design activity is collected under a testbed entitled COMETBOARDS. Several issues were encountered during the solution of the problems. Four issues and the strategies adopted for their resolution are discussed. This is followed by a discussion on analytical methods that is limited to structural design applications. An optimization process can lead to an inefficient local solution. This deficiency was encountered during the design of an engine component. The limitation was overcome through an augmentation of animation into optimization. Optimum solutions obtained were infeasible for aircraft and airbreathing propulsion engine problems. Alleviation of this deficiency required a cascading of multiple algorithms. Profile optimization of a beam produced an irregular shape. Engineering intuition restored the regular shape for the beam. The solution obtained for a cylindrical shell by a subproblem strategy converged to a design that can be difficult to manufacture. Resolution of this issue remains a challenge. The issues and resolutions are illustrated through a set of problems: design of an engine component, synthesis of a subsonic aircraft, operation optimization of a supersonic engine, design of a wave-rotor-topping device, profile optimization of a cantilever beam, and design of a cylindrical shell. This chapter provides a cursory account of the issues. Cited references provide detailed discussion on the topics. The design of a structure can also be generated by the traditional method and by the stochastic design concept. Merits and limitations of the three methods (traditional method, optimization method, and stochastic concept) are illustrated. In the traditional method, the constraints are manipulated to obtain the design and the weight is back calculated. In design optimization, the weight of a structure becomes the merit function, with constraints imposed on failure modes, and an optimization algorithm is used to generate the solution. The stochastic design concept accounts for uncertainties in loads, material properties, and other parameters, and the solution is obtained by solving a design optimization problem for a specified reliability. Acceptable solutions can be produced by all three methods. The variation in the weight calculated by the methods was found to be modest. Some variation was noticed in designs calculated by the methods. The variation may be attributed to structural indeterminacy. It is prudent to develop a design by all three methods prior to fabrication. The traditional design method can be improved when simplified sensitivities of the behavior constraints are used. Such sensitivities can reduce design calculations and may have the potential to unify the traditional and optimization methods. Weight versus reliability traced out an inverted-S-shaped graph. The center of the graph corresponded to the mean-valued design. A heavy design with weight approaching infinity could be produced for a near-zero rate of failure. Weight can be reduced to a small value for the most failure-prone design. Probabilistic modeling of load and material properties remained a challenge.
Optimal Trajectories for the Helicopter in One-Engine-Inoperative Terminal-Area Operations
NASA Technical Reports Server (NTRS)
Zhao, Yiyuan; Chen, Robert T. N.
1996-01-01
This paper presents a summary of a series of recent analytical studies conducted to investigate One-Engine-Inoperative (OEI) optimal control strategies and the associated optimal trajectories for a twin-engine helicopter in Category-A terminal-area operations. These studies also examine the associated heliport size requirements and the maximum gross weight capability of the helicopter. Using an eight-state, two-control augmented point-mass model representative of the study helicopter, Continued TakeOff (CTO), Rejected TakeOff (RTO), Balked Landing (BL), and Continued Landing (CL) are investigated for both Vertical-TakeOff-and-Landing (VTOL) and Short-TakeOff-and-Landing (STOL) terminal-area operations. The formulation of the nonlinear optimal control problems with considerations for realistic constraints, solution methods for the two-point boundary-value problem, a new real-time generation method for the optimal OEI trajectories, and the main results of this series of trajectory optimization studies are presented. In particular, a new balanced-weight concept for determining the takeoff decision point for VTOL Category-A operations is proposed, extending the balanced-field length concept used for STOL operations.
NASA Technical Reports Server (NTRS)
Chen, Guanrong
1991-01-01
An optimal trajectory planning problem for a single-link, flexible joint manipulator is studied. A global feedback-linearization is first applied to formulate the nonlinear inequality-constrained optimization problem in a suitable way. Then, an exact and explicit structural formula for the optimal solution of the problem is derived and the solution is shown to be unique. It turns out that the optimal trajectory planning and control can be done off-line, so that the proposed method is applicable to both theoretical analysis and real time tele-robotics control engineering.
Fundamental differences between optimization code test problems in engineering applications
NASA Technical Reports Server (NTRS)
Eason, E. D.
1984-01-01
The purpose here is to suggest that there is at least one fundamental difference between the problems used for testing optimization codes and the problems that engineers often need to solve; in particular, the level of precision that can be practically achieved in the numerical evaluation of the objective function, derivatives, and constraints. This difference affects the performance of optimization codes, as illustrated by two examples. Two classes of optimization problem were defined. Class One functions and constraints can be evaluated to a high precision that depends primarily on the word length of the computer. Class Two functions and/or constraints can only be evaluated to a moderate or a low level of precision for economic or modeling reasons, regardless of the computer word length. Optimization codes have not been adequately tested on Class Two problems. There are very few Class Two test problems in the literature, while there are literally hundreds of Class One test problems. The relative performance of two codes may be markedly different for Class One and Class Two problems. Less sophisticated direct search type codes may be less likely to be confused or to waste many function evaluations on Class Two problems. The analysis accuracy and minimization performance are related in a complex way that probably varies from code to code. On a problem where the analysis precision was varied over a range, the simple Hooke and Jeeves code was more efficient at low precision while the Powell code was more efficient at high precision.
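The distinction can be reproduced with a toy experiment: the same smooth objective is evaluated once exactly (Class One) and once through a deliberately noisy, low-precision evaluation (Class Two), and a finite-difference gradient method is compared with a simple Hooke-and-Jeeves-style pattern search. The test function, noise level, and step sizes below are arbitrary assumptions chosen only to illustrate the effect, not a reconstruction of the paper's experiments.

```python
# Toy comparison of a finite-difference gradient method and a pattern search
# on a Class One (exact) and a Class Two (noisy) evaluation of the same
# objective. All functions and settings are illustrative assumptions.
import random

def f_exact(x, y):
    return (x - 1.0) ** 2 + 10.0 * (y + 0.5) ** 2

def make_noisy(sigma, rng):
    # Low-precision evaluation: exact value plus a simulated analysis error.
    return lambda x, y: f_exact(x, y) + rng.gauss(0.0, sigma)

def fd_gradient_descent(f, x, y, h=1e-3, lr=0.02, iters=200):
    for _ in range(iters):
        gx = (f(x + h, y) - f(x - h, y)) / (2 * h)
        gy = (f(x, y + h) - f(x, y - h)) / (2 * h)
        x, y = x - lr * gx, y - lr * gy
    return f_exact(x, y)            # judge the final point with the true function

def pattern_search(f, x, y, step=0.5, iters=200):
    best = f(x, y)
    for _ in range(iters):
        moved = False
        for dx, dy in ((step, 0), (-step, 0), (0, step), (0, -step)):
            val = f(x + dx, y + dy)
            if val < best:
                best, x, y, moved = val, x + dx, y + dy, True
        if not moved:
            step *= 0.5             # contract the pattern, Hooke-and-Jeeves style
    return f_exact(x, y)

if __name__ == "__main__":
    rng = random.Random(1)
    for label, sigma in (("Class One (exact)", 0.0), ("Class Two (noisy)", 0.05)):
        f = f_exact if sigma == 0.0 else make_noisy(sigma, rng)
        print(label,
              "| gradient:", round(fd_gradient_descent(f, 3.0, 2.0), 4),
              "| pattern:", round(pattern_search(f, 3.0, 2.0), 4))
```

On the noisy evaluation the finite-difference gradient is dominated by evaluation error, while the pattern search degrades far more gracefully, which mirrors the behavior the abstract reports for the simpler direct search code at low precision.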
Optimization by nonhierarchical asynchronous decomposition
NASA Technical Reports Server (NTRS)
Shankar, Jayashree; Ribbens, Calvin J.; Haftka, Raphael T.; Watson, Layne T.
1992-01-01
Large scale optimization problems are tractable only if they are somehow decomposed. Hierarchical decompositions are inappropriate for some types of problems and do not parallelize well. Sobieszczanski-Sobieski has proposed a nonhierarchical decomposition strategy for nonlinear constrained optimization that is naturally parallel. Despite some successes on engineering problems, the algorithm as originally proposed fails on simple two dimensional quadratic programs. The algorithm is carefully analyzed for quadratic programs, and a number of modifications are suggested to improve its robustness.
Numerical Optimization Using Computer Experiments
NASA Technical Reports Server (NTRS)
Trosset, Michael W.; Torczon, Virginia
1997-01-01
Engineering design optimization often gives rise to problems in which expensive objective functions are minimized by derivative-free methods. We propose a method for solving such problems that synthesizes ideas from the numerical optimization and computer experiment literatures. Our approach relies on kriging known function values to construct a sequence of surrogate models of the objective function that are used to guide a grid search for a minimizer. Results from numerical experiments on a standard test problem are presented.
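A minimal sketch of the surrogate-guided idea is given below, assuming a simple kriging-style interpolator with a fixed squared-exponential kernel and a one-dimensional test function; the paper's actual method embeds the surrogate in a grid-search framework with convergence safeguards that are not reproduced here.

```python
# Minimal surrogate-guided minimization: fit a kriging-style interpolator to
# the evaluated points, minimize the surrogate over a grid, evaluate the
# expensive function there, and repeat. Kernel and settings are assumptions.
import numpy as np

def expensive(x):
    # Stand-in for an expensive simulation (1-D test function).
    return np.sin(3.0 * x) + 0.5 * (x - 0.6) ** 2

def kriging_predict(X, y, Xq, length=0.3, nugget=1e-8):
    # Gaussian-process interpolation with a squared-exponential kernel.
    def k(a, b):
        return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)
    K = k(X, X) + nugget * np.eye(len(X))
    return k(Xq, X) @ np.linalg.solve(K, y)

def surrogate_search(lo=-2.0, hi=2.0, n_init=4, n_iter=8, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, n_init)
    y = expensive(X)
    grid = np.linspace(lo, hi, 401)
    for _ in range(n_iter):
        pred = kriging_predict(X, y, grid)
        order = np.argsort(pred)                       # most promising grid points
        x_new = next(g for g in grid[order] if not np.any(np.isclose(g, X)))
        X = np.append(X, x_new)
        y = np.append(y, expensive(x_new))             # one expensive evaluation
    best = int(np.argmin(y))
    return X[best], y[best]

if __name__ == "__main__":
    x_star, f_star = surrogate_search()
    print(f"best point {x_star:.3f}, value {f_star:.4f} after 12 expensive evaluations")
```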
Aerospace Applications of Integer and Combinatorial Optimization
NASA Technical Reports Server (NTRS)
Padula, S. L.; Kincaid, R. K.
1995-01-01
Research supported by NASA Langley Research Center includes many applications of aerospace design optimization and is conducted by teams of applied mathematicians and aerospace engineers. This paper investigates the benefits from this combined expertise in formulating and solving integer and combinatorial optimization problems. Applications range from the design of large space antennas to interior noise control. A typical problem, for example, seeks the optimal locations for vibration-damping devices on an orbiting platform and is expressed as a mixed-integer linear programming problem with more than 1500 design variables.
A robust optimization methodology for preliminary aircraft design
NASA Astrophysics Data System (ADS)
Prigent, S.; Maréchal, P.; Rondepierre, A.; Druot, T.; Belleville, M.
2016-05-01
This article focuses on a robust optimization of an aircraft preliminary design under operational constraints. According to engineers' know-how, the aircraft preliminary design problem can be modelled as an uncertain optimization problem whose objective (the cost or the fuel consumption) is almost affine, and whose constraints are convex. It is shown that this uncertain optimization problem can be approximated in a conservative manner by an uncertain linear optimization program, which enables the use of the techniques of robust linear programming of Ben-Tal, El Ghaoui, and Nemirovski [Robust Optimization, Princeton University Press, 2009]. This methodology is then applied to two real cases of aircraft design and numerical results are presented.
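As a hedged illustration of the robust-counterpart machinery (not the aircraft model of the paper), the snippet below solves a toy linear program under interval (box) uncertainty in the constraint coefficients; with nonnegative variables, each uncertain less-than-or-equal constraint is tightened by its worst-case coefficients.

```python
# Robust counterpart of a toy LP under interval uncertainty in the constraint
# coefficients; all numbers are illustrative assumptions.
import numpy as np
from scipy.optimize import linprog

# Nominal problem: maximize c @ x  subject to  A @ x <= b,  x >= 0.
c = np.array([3.0, 2.0])
A_nom = np.array([[1.0, 2.0],
                  [4.0, 1.0]])
delta = np.array([[0.1, 0.3],
                  [0.2, 0.1]])          # half-widths of the coefficient intervals
b = np.array([10.0, 12.0])

def solve(A):
    # linprog minimizes, so negate the objective for maximization.
    res = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None), (0, None)])
    return res.x, -res.fun

x_nom, val_nom = solve(A_nom)
# With x >= 0, the worst case of each "<=" constraint over the coefficient box
# is attained at the upper endpoints, so the robust counterpart simply
# tightens the constraint matrix.
x_rob, val_rob = solve(A_nom + delta)

print("nominal solution:", np.round(x_nom, 3), "objective", round(val_nom, 3))
print("robust solution :", np.round(x_rob, 3), "objective", round(val_rob, 3))
```

The price of robustness is the drop in objective value between the two solutions; richer uncertainty sets lead to the more general counterparts developed in the cited monograph.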
Structural topology optimization with fuzzy constraint
NASA Astrophysics Data System (ADS)
Rosko, Peter
2011-12-01
The paper deals with structural topology optimization with a fuzzy constraint. The optimal topology of the structure is defined as a material distribution problem. The objective is the weight of the structure. Multifrequency dynamic loading is considered. The optimal topology design of the structure has to eliminate the danger of resonance vibration. The uncertainty of the loading is described with the help of fuzzy loading. A special fuzzy constraint is created from the exciting frequencies. The presented study is applicable in engineering, particularly civil engineering. An example demonstrates the presented theory.
Receding horizon online optimization for torque control of gasoline engines.
Kang, Mingxin; Shen, Tielong
2016-11-01
This paper proposes a model-based nonlinear receding horizon optimal control scheme for the engine torque tracking problem. The controller design directly employs a nonlinear model built on the mean-value modeling principle of engine systems, without any linearizing reformulation, and the online optimization is achieved by applying the Continuation/GMRES (generalized minimum residual) approach. Several receding horizon control schemes are designed to investigate the effects of the integral action and integral gain selection. Simulation analyses and experimental validations are implemented to demonstrate the real-time optimization performance and control effects of the proposed torque tracking controllers. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Hopkins, Dale A.; Patnaik, Surya N.
2000-01-01
A preliminary aircraft engine design methodology is being developed that utilizes a cascade optimization strategy together with neural network and regression approximation methods. The cascade strategy employs different optimization algorithms in a specified sequence. The neural network and regression methods are used to approximate solutions obtained from the NASA Engine Performance Program (NEPP), which implements engine thermodynamic cycle and performance analysis models. The new methodology is proving to be more robust and computationally efficient than the conventional optimization approach of using a single optimization algorithm with direct reanalysis. The methodology has been demonstrated on a preliminary design problem for a novel subsonic turbofan engine concept that incorporates a wave rotor as a cycle-topping device. Computations of maximum thrust were obtained for a specific design point in the engine mission profile. The results show a significant improvement in the maximum thrust obtained using the new methodology in comparison to benchmark solutions obtained using NEPP in a manual design mode.
Using Animal Instincts to Design Efficient Biomedical Studies via Particle Swarm Optimization.
Qiu, Jiaheng; Chen, Ray-Bing; Wang, Weichung; Wong, Weng Kee
2014-10-01
Particle swarm optimization (PSO) is an increasingly popular metaheuristic algorithm for solving complex optimization problems. Its popularity is due to its repeated successes in finding an optimum or a near-optimal solution for problems in many applied disciplines. The algorithm makes no assumptions about the function to be optimized, and for biomedical experiments like those presented here, PSO typically finds the optimal solutions in a few seconds of CPU time on a garden-variety laptop. We apply PSO to find various types of optimal designs for several problems in the biological sciences and compare PSO performance relative to the differential evolution algorithm, another popular metaheuristic algorithm in the engineering literature.
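PSO itself is compact enough to state in a few lines. The following is a minimal, generic global-best PSO for an unconstrained test function; the inertia and acceleration coefficients are common textbook defaults, not the settings used for the optimal-design problems in the paper.

```python
# Minimal global-best particle swarm optimization on a standard test function.
# Parameter values are common defaults, used here only for illustration.
import numpy as np

def sphere(x):
    return float(np.sum(x * x))

def pso(f, dim=5, n_particles=30, iters=200, lo=-5.0, hi=5.0,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, size=(n_particles, dim))     # positions
    v = np.zeros_like(x)                                  # velocities
    pbest = x.copy()
    pbest_val = np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_val)].copy()                # global best
    g_val = pbest_val.min()

    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val
        pbest[improved] = x[improved]
        pbest_val[improved] = vals[improved]
        if pbest_val.min() < g_val:
            g_val = pbest_val.min()
            g = pbest[np.argmin(pbest_val)].copy()
    return g, g_val

if __name__ == "__main__":
    best_x, best_val = pso(sphere)
    print("best value found:", best_val)
```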
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Y; UT Southwestern Medical Center, Dallas, TX; Tian, Z
2015-06-15
Purpose: Intensity-modulated proton therapy (IMPT) is increasingly used in proton therapy. For IMPT optimization, Monte Carlo (MC) is desired for spot dose calculations because of its high accuracy, especially in cases with a high level of heterogeneity. It is also preferred in biological optimization problems due to the capability of computing quantities related to biological effects. However, MC simulation is typically too slow to be used for this purpose. Although GPU-based MC engines have become available, the achieved efficiency is still not ideal. The purpose of this work is to develop a new optimization scheme to include GPU-based MC into IMPT. Methods: A conventional approach using MC in IMPT simply calls the MC dose engine repeatedly for each spot dose calculation. However, this is not the optimal approach, because of the unnecessary computations on some spots that turned out to have very small weights after solving the optimization problem. GPU-memory writing conflict occurring at a small beam size also reduces computational efficiency. To solve these problems, we developed a new framework that iteratively performs MC dose calculations and plan optimizations. At each dose calculation step, the particles were sampled from different spots altogether with the Metropolis algorithm, such that the particle number is proportional to the latest optimized spot intensity. Simultaneously transporting particles from multiple spots also mitigated the memory writing conflict problem. Results: We have validated the proposed MC-based optimization schemes in one prostate case. The total computation time of our method was ∼5–6 min on one NVIDIA GPU card, including both spot dose calculation and plan optimization, whereas a conventional method naively using the same GPU-based MC engine was ∼3 times slower. Conclusion: A fast GPU-based MC dose calculation method along with a novel optimization workflow is developed. The high efficiency makes it attractive for clinical usage.
NASA Astrophysics Data System (ADS)
Natarajan, S.; Pitchandi, K.; Mahalakshmi, N. V.
2018-02-01
The performance and emission characteristics of a PPCCI engine fuelled with ethanol and diesel blends were investigated on a single-cylinder, air-cooled CI engine. In order to achieve the optimal process response with a limited number of experimental cycles, multi-objective grey relational analysis was applied to solve the multiple-response optimization problem. Using the grey relational grade and the signal-to-noise ratio as a performance index, a combination of input parameters was identified so as to achieve optimum response characteristics. It was observed that a 20% premixed ratio of the blend was most suitable for use in a PPCCI engine without significantly affecting the engine performance and emission characteristics.
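The grey relational calculation referred to above follows a standard recipe: normalize each response, compute grey relational coefficients against the ideal sequence, and average them into a grade per experimental run. The sketch below applies that recipe to made-up response data; the responses, their larger/smaller-is-better directions, and the equal weighting are illustrative assumptions, not the paper's measurements.

```python
# Standard grey relational analysis on made-up experimental responses; the
# data, response directions, and equal weighting are illustrative assumptions.
import numpy as np

# Rows = experimental runs, columns = responses (e.g. efficiency, NOx, smoke).
responses = np.array([
    [30.1, 5.2, 18.0],
    [31.4, 6.0, 16.5],
    [29.8, 4.8, 19.2],
    [32.0, 5.5, 15.9],
])
larger_is_better = [True, False, False]   # maximize column 0, minimize the rest

# Step 1: grey relational normalization to [0, 1].
norm = np.empty_like(responses, dtype=float)
for j, larger in enumerate(larger_is_better):
    col = responses[:, j]
    if larger:
        norm[:, j] = (col - col.min()) / (col.max() - col.min())
    else:
        norm[:, j] = (col.max() - col) / (col.max() - col.min())

# Step 2: grey relational coefficients against the ideal sequence (all ones).
zeta = 0.5                                 # distinguishing coefficient
delta = np.abs(1.0 - norm)
coeff = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())

# Step 3: grey relational grade = mean coefficient per run (equal weights).
grade = coeff.mean(axis=1)
best_run = int(np.argmax(grade))
print("grades:", np.round(grade, 3), "-> best run:", best_run + 1)
```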
Upper Limits for Power Yield in Thermal, Chemical, and Electrochemical Systems
NASA Astrophysics Data System (ADS)
Sieniutycz, Stanislaw
2010-03-01
We consider modeling and power optimization of energy converters, such as thermal, solar and chemical engines and fuel cells. Thermodynamic principles lead to expressions for converter's efficiency and generated power. Efficiency equations serve to solve the problems of upgrading or downgrading a resource. Power yield is a cumulative effect in a system consisting of a resource, engines, and an infinite bath. While optimization of steady state systems requires using the differential calculus and Lagrange multipliers, dynamic optimization involves variational calculus and dynamic programming. The primary result of static optimization is the upper limit of power, whereas that of dynamic optimization is a finite-rate counterpart of classical reversible work (exergy). The latter quantity depends on the end state coordinates and a dissipation index, h, which is the Hamiltonian of the problem of minimum entropy production. In reacting systems, an active part of chemical affinity constitutes a major component of the overall efficiency. The theory is also applied to fuel cells regarded as electrochemical flow engines. Enhanced bounds on power yield follow, which are stronger than those predicted by the reversible work potential.
Nash equilibrium and multi criterion aerodynamic optimization
NASA Astrophysics Data System (ADS)
Tang, Zhili; Zhang, Lianhe
2016-06-01
Game theory, and in particular the Nash equilibrium (NE), has gained importance in solving Multi Criterion Optimization (MCO) engineering problems over the past decade. The solution of an MCO problem can be viewed as a NE under the concept of competitive games. This paper surveys/proposes four efficient algorithms for calculating a NE of an MCO problem. Existence and equivalence of the solution are analyzed and proved in the paper based on a fixed-point theorem. A specific virtual symmetric Nash game is also presented to set up an optimization strategy for single-objective optimization problems. Two numerical examples are presented to verify the proposed algorithms. One is the optimization of mathematical functions, which illustrates the detailed numerical procedures of the algorithms; the other is aerodynamic drag reduction of a civil transport wing-fuselage configuration using the virtual game. The successful application validates the efficiency of the algorithms in solving complex aerodynamic optimization problems.
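The simplest route to a Nash equilibrium of such a game is best-response (fixed-point) iteration, in which each player repeatedly minimizes its own criterion with the other players' variables frozen. The sketch below applies this to a toy two-criterion problem; the objectives, the variable split between the two players, and the starting point are assumptions for illustration and do not reproduce the four algorithms of the paper.

```python
# Best-response (fixed-point) iteration toward a Nash equilibrium of a toy
# two-criterion design problem; objectives and variable split are assumed.
from scipy.optimize import minimize_scalar

def f1(x, y):          # criterion controlled by player 1 through x
    return (x - 1.0) ** 2 + x * y

def f2(x, y):          # criterion controlled by player 2 through y
    return (y - 2.0) ** 2 + x * y

def nash_best_response(x=0.5, y=0.5, iters=30, tol=1e-10):
    for _ in range(iters):
        # Each player minimizes its own criterion with the other's move fixed.
        x_new = minimize_scalar(lambda u: f1(u, y)).x
        y_new = minimize_scalar(lambda v: f2(x_new, v)).x
        if abs(x_new - x) + abs(y_new - y) < tol:
            return x_new, y_new
        x, y = x_new, y_new
    return x, y

if __name__ == "__main__":
    x_star, y_star = nash_best_response()
    print(f"Nash equilibrium approx: x = {x_star:.4f}, y = {y_star:.4f}")
    # The analytic equilibrium of this toy game is (0, 2).
```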
Heinsch, Stephen C.; Das, Siba R.; Smanski, Michael J.
2018-01-01
Increasing the final titer of a multi-gene metabolic pathway can be viewed as a multivariate optimization problem. While numerous multivariate optimization algorithms exist, few are specifically designed to accommodate the constraints posed by genetic engineering workflows. We present a strategy for optimizing expression levels across an arbitrary number of genes that requires few design-build-test iterations. We compare the performance of several optimization algorithms on a series of simulated expression landscapes. We show that optimal experimental design parameters depend on the degree of landscape ruggedness. This work provides a theoretical framework for designing and executing numerical optimization on multi-gene systems. PMID:29535690
Structural optimization by multilevel decomposition
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, J.; James, B.; Dovi, A.
1983-01-01
A method is described for decomposing an optimization problem into a set of subproblems and a coordination problem which preserves coupling between the subproblems. The method is introduced as a special case of multilevel, multidisciplinary system optimization and its algorithm is fully described for two level optimization for structures assembled of finite elements of arbitrary type. Numerical results are given for an example of a framework to show that the decomposition method converges and yields results comparable to those obtained without decomposition. It is pointed out that optimization by decomposition should reduce the design time by allowing groups of engineers, using different computers to work concurrently on the same large problem.
A general optimality criteria algorithm for a class of engineering optimization problems
NASA Astrophysics Data System (ADS)
Belegundu, Ashok D.
2015-05-01
An optimality criteria (OC)-based algorithm for optimization of a general class of nonlinear programming (NLP) problems is presented. The algorithm is only applicable to problems where the objective and constraint functions satisfy certain monotonicity properties. For multiply constrained problems which satisfy these assumptions, the algorithm is attractive compared with existing NLP methods as well as prevalent OC methods, as the latter involve computationally expensive active set and step-size control strategies. The fixed point algorithm presented here is applicable not only to structural optimization problems but also to certain problems as occur in resource allocation and inventory models. Convergence aspects are discussed. The fixed point update or resizing formula is given physical significance, which brings out a strength and trim feature. The number of function evaluations remains independent of the number of variables, allowing the efficient solution of problems with large number of variables.
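The flavor of such a fixed-point resizing rule can be shown on a separable resource-allocation problem with exactly the monotonicity properties the abstract requires (a decreasing objective and an increasing constraint). The update below is the textbook Lagrangian resizing step followed by a scaling back onto the constraint; it is a generic OC iteration, not the specific algorithm of the paper.

```python
# A generic optimality-criteria (OC) fixed-point resizing on a separable
# resource-allocation problem: minimize sum_i c_i / x_i subject to
# sum_i a_i * x_i <= b, x_i > 0. Data values are illustrative assumptions.
import numpy as np

c = np.array([4.0, 1.0, 9.0])       # objective coefficients
a = np.array([1.0, 2.0, 1.5])       # constraint coefficients
b = 10.0                            # available resource

def oc_resize(x, iters=20):
    for _ in range(iters):
        lam = np.mean(c / (a * x ** 2))      # multiplier estimate from stationarity
        x = np.sqrt(c / (lam * a))           # fixed-point resizing formula
        x *= b / (a @ x)                     # scale back onto the active constraint
    return x

x_oc = oc_resize(np.ones(3))
x_exact = np.sqrt(c / a) * b / np.sum(np.sqrt(a * c))   # analytic optimum
print("OC solution:     ", np.round(x_oc, 4), "objective", round(float(np.sum(c / x_oc)), 4))
print("analytic optimum:", np.round(x_exact, 4), "objective", round(float(np.sum(c / x_exact)), 4))
```

Note that the resizing step costs one pass over the variables regardless of their number, which is the property the abstract highlights for large problems.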
An Industrial Engineering Approach to Cost Containment of Pharmacy Education.
Duncan, Wendy; Bottenberg, Michelle; Chase, Marilea; Chesnut, Renae; Clarke, Cheryl; Schott, Kathryn; Torry, Ronald; Welty, Tim
2015-11-25
A 2-semester project explored employing teams of fourth-year industrial engineering students to optimize some of our academic management processes. Results included significant cost savings and increases in efficiency, effectiveness, and student and faculty satisfaction. While we did not adopt all of the students' recommendations, we did learn some important lessons. For example, an initial investment of time in developing a mutually clear understanding of the problems, constraints, and goals maximizes the value of industrial engineering analysis and recommendations. Overall, industrial engineering was a valuable tool for optimizing certain academic management processes.
Integrated thermal and energy management of plug-in hybrid electric vehicles
NASA Astrophysics Data System (ADS)
Shams-Zahraei, Mojtaba; Kouzani, Abbas Z.; Kutter, Steffen; Bäker, Bernard
2012-10-01
In plug-in hybrid electric vehicles (PHEVs), the engine temperature declines due to reduced engine load and extended engine off period. It is proven that the engine efficiency and emissions depend on the engine temperature. Also, temperature influences the vehicle air-conditioner and the cabin heater loads. Particularly, while the engine is cold, the power demand of the cabin heater needs to be provided by the batteries instead of the waste heat of engine coolant. The existing energy management strategies (EMS) of PHEVs focus on the improvement of fuel efficiency based on hot engine characteristics neglecting the effect of temperature on the engine performance and the vehicle power demand. This paper presents a new EMS incorporating an engine thermal management method which derives the global optimal battery charge depletion trajectories. A dynamic programming-based algorithm is developed to enforce the charge depletion boundaries, while optimizing a fuel consumption cost function by controlling the engine power. The optimal control problem formulates the cost function based on two state variables: battery charge and engine internal temperature. Simulation results demonstrate that temperature and the cabin heater/air-conditioner power demand can significantly influence the optimal solution for the EMS, and accordingly fuel efficiency and emissions of PHEVs.
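A heavily simplified version of the dynamic-programming idea is sketched below with a single state (battery charge), a short made-up power-demand profile, and an affine fuel map; the paper's formulation additionally tracks engine temperature and the heater/air-conditioner load, which are omitted here, so this is only an illustrative sketch of how DP enforces the charge-depletion boundaries while minimizing fuel.

```python
# Simplified dynamic-programming energy management for a PHEV with one state
# (battery state of charge). Demand profile, fuel map, and limits are assumed.
import numpy as np

demand = [18.0, 26.0, 12.0, 30.0, 22.0, 16.0]   # kW per time step (assumed)
dt, e_batt = 0.1, 5.0                           # step length [h], battery size [kWh]
soc_grid = np.round(np.arange(0.30, 0.90 + 1e-9, 0.01), 2)
u_grid = np.arange(0.0, 40.1, 2.0)              # candidate engine powers [kW]
p_batt_max = 20.0                               # battery power limit [kW]

def fuel_cost(u):
    # Assumed affine fuel map [L/h] with an idle penalty when the engine is on.
    return 0.0 if u == 0.0 else (0.3 + 0.08 * u) * dt

def soc_index(soc):
    j = int(round((soc - 0.30) / 0.01))
    return j if 0 <= j < len(soc_grid) else None

T = len(demand)
V = np.zeros((T + 1, len(soc_grid)))            # cost-to-go table
policy = np.zeros((T, len(soc_grid)))

for t in range(T - 1, -1, -1):                  # backward DP recursion
    for i, soc in enumerate(soc_grid):
        best, best_u = np.inf, 0.0
        for u in u_grid:
            p_batt = demand[t] - u              # battery covers the remainder
            if abs(p_batt) > p_batt_max:
                continue
            j = soc_index(soc - p_batt * dt / e_batt)
            if j is None:
                continue                        # charge-depletion bounds violated
            cost = fuel_cost(u) + V[t + 1, j]
            if cost < best:
                best, best_u = cost, u
        V[t, i], policy[t, i] = best, best_u

# Forward simulation from a full battery.
soc, total_fuel = 0.90, 0.0
for t in range(T):
    u = policy[t, soc_index(soc)]
    total_fuel += fuel_cost(u)
    soc -= (demand[t] - u) * dt / e_batt
    print(f"t={t}: engine {u:4.1f} kW, battery {demand[t] - u:5.1f} kW, SOC {soc:.2f}")
print("total fuel [L]:", round(total_fuel, 3))
```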
Machine Learning Techniques in Optimal Design
NASA Technical Reports Server (NTRS)
Cerbone, Giuseppe
1992-01-01
Many important applications can be formalized as constrained optimization tasks. For example, we are studying the engineering domain of two-dimensional (2-D) structural design. In this task, the goal is to design a structure of minimum weight that bears a set of loads. A solution to a design problem in which there is a single load (L) and two stationary support points (S1 and S2), consisting of four members (E1, E2, E3, and E4) that connect the load to the support points, is discussed. In principle, optimal solutions to problems of this kind can be found by numerical optimization techniques. However, in practice [Vanderplaats, 1984] these methods are slow and they can produce different local solutions whose quality (ratio to the global optimum) varies with the choice of starting points. Hence, their applicability to real-world problems is severely restricted. To overcome these limitations, we propose to augment numerical optimization by first performing a symbolic compilation stage to produce: (a) objective functions that are faster to evaluate and that depend less on the choice of the starting point and (b) selection rules that associate problem instances with a set of recommended solutions. These goals are accomplished by successive specializations of the problem class and of the associated objective functions. In the end, this process reduces the problem to a collection of independent functions that are fast to evaluate, that can be differentiated symbolically, and that represent smaller regions of the overall search space. However, the specialization process can produce a large number of sub-problems. This is overcome by inductively deriving selection rules which associate problems with small sets of specialized independent sub-problems. Each set of candidate solutions is chosen to minimize a cost function which expresses the tradeoff between the quality of the solution that can be obtained from the sub-problem and the time it takes to produce it. The overall solution to the problem is then obtained by solving in parallel each of the sub-problems in the set and selecting the one with the minimum cost. In addition to speeding up the optimization process, our use of learning methods also relieves the expert of the burden of identifying rules that exactly pinpoint optimal candidate sub-problems. In real engineering tasks it is usually too costly for the engineers to derive such rules. Therefore, this paper also contributes a further step towards the solution of the knowledge acquisition bottleneck [Feigenbaum, 1977] which has somewhat impaired the construction of rule-based expert systems.
Formal and heuristic system decomposition methods in multidisciplinary synthesis. Ph.D. Thesis, 1991
NASA Technical Reports Server (NTRS)
Bloebaum, Christina L.
1991-01-01
The multidisciplinary interactions which exist in large scale engineering design problems provide a unique set of difficulties. These difficulties are associated primarily with unwieldy numbers of design variables and constraints, and with the interdependencies of the discipline analysis modules. Such obstacles require design techniques which account for the inherent disciplinary couplings in the analyses and optimizations. The objective of this work was to develop an efficient holistic design synthesis methodology that takes advantage of the synergistic nature of integrated design. A general decomposition approach for optimization of large engineering systems is presented. The method is particularly applicable for multidisciplinary design problems which are characterized by closely coupled interactions among discipline analyses. The advantage of subsystem modularity allows for implementation of specialized methods for analysis and optimization, computational efficiency, and the ability to incorporate human intervention and decision making in the form of an expert systems capability. The resulting approach is not a method applicable to only a specific situation, but rather, a methodology which can be used for a large class of engineering design problems in which the system is non-hierarchic in nature.
A hybrid nonlinear programming method for design optimization
NASA Technical Reports Server (NTRS)
Rajan, S. D.
1986-01-01
Solutions to engineering design problems formulated as nonlinear programming (NLP) problems usually require the use of more than one optimization technique. Moreover, the interaction between the user (analysis/synthesis) program and the NLP system can lead to interface, scaling, or convergence problems. An NLP solution system is presented that seeks to solve these problems by providing a programming system to ease the user-system interface. A simple set of rules is used to select an optimization technique or to switch from one technique to another in an attempt to detect, diagnose, and solve some potential problems. Numerical examples involving finite element based optimal design of space trusses and rotor bearing systems are used to illustrate the applicability of the proposed methodology.
Wang, Hailong; Sun, Yuqiu; Su, Qinghua; Xia, Xuewen
2018-01-01
The backtracking search optimization algorithm (BSA) is a population-based evolutionary algorithm for numerical optimization problems. BSA has a powerful global exploration capacity while its local exploitation capability is relatively poor. This affects the convergence speed of the algorithm. In this paper, we propose a modified BSA inspired by simulated annealing (BSAISA) to overcome the deficiency of BSA. In the BSAISA, the amplitude control factor (F) is modified based on the Metropolis criterion in simulated annealing. The redesigned F could be adaptively decreased as the number of iterations increases and it does not introduce extra parameters. A self-adaptive ε-constrained method is used to handle the strict constraints. We compared the performance of the proposed BSAISA with BSA and other well-known algorithms when solving thirteen constrained benchmarks and five engineering design problems. The simulation results demonstrated that BSAISA is more effective than BSA and more competitive with other well-known algorithms in terms of convergence speed. PMID:29666635
Spreter Von Kreudenstein, Thomas; Lario, Paula I; Dixit, Surjit B
2014-01-01
Computational and structure guided methods can make significant contributions to the development of solutions for difficult protein engineering problems, including the optimization of next generation of engineered antibodies. In this paper, we describe a contemporary industrial antibody engineering program, based on hypothesis-driven in silico protein optimization method. The foundational concepts and methods of computational protein engineering are discussed, and an example of a computational modeling and structure-guided protein engineering workflow is provided for the design of best-in-class heterodimeric Fc with high purity and favorable biophysical properties. We present the engineering rationale as well as structural and functional characterization data on these engineered designs. Copyright © 2013 Elsevier Inc. All rights reserved.
Data Mining and Optimization Tools for Developing Engine Parameters Tools
NASA Technical Reports Server (NTRS)
Dhawan, Atam P.
1998-01-01
This project was awarded for understanding the problem and developing a plan for data mining tools for use in designing and implementing an Engine Condition Monitoring System. Tricia Erhardt and I studied the problem domain for developing an engine condition monitoring system using the sparse and non-standardized datasets to be made available through a consortium at NASA Lewis Research Center. We visited NASA three times to discuss additional issues related to the dataset, which was not made available to us. We discussed and developed a general framework of data mining and optimization tools to extract useful information from sparse and non-standard datasets. These discussions led to the training of Tricia Erhardt to develop Genetic Algorithm (GA) based search programs, which were written in C++ and used to demonstrate the capability of the GA in searching for an optimal solution in noisy datasets. From the study and discussions with NASA LeRC personnel, we then prepared a proposal, which is being submitted to NASA, for future work on the development of data mining algorithms for engine condition monitoring. The proposed set of algorithms uses wavelet processing to create a multi-resolution pyramid of the data for GA-based multi-resolution optimal search.
Interdisciplinary optimum design. [of aerospace structures
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw; Haftka, Raphael T.
1986-01-01
Problems related to interdisciplinary interactions in the design of complex engineering systems are examined with reference to aerospace applications. The interdisciplinary optimization problems examined include those dealing with controls and structures, materials and structures, control and stability, structure and aerodynamics, and structure and thermodynamics. The discussion is illustrated by the following specific applications: integrated aerodynamic/structural optimization of a glider wing; optimization of an antenna parabolic dish structure for minimum weight and prescribed emitted signal gain; and a multilevel optimization study of a transport aircraft.
Design Optimization Toolkit: Users' Manual
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aguilo Valentin, Miguel Alejandro
The Design Optimization Toolkit (DOTk) is a stand-alone C++ software package intended to solve complex design optimization problems. The DOTk software package provides a range of solution methods that are suited for gradient/nongradient-based optimization, large-scale constrained optimization, and topology optimization. DOTk was designed to have a flexible user interface to allow easy access to DOTk solution methods from external engineering software packages. This inherent flexibility makes DOTk barely intrusive to other engineering software packages. As part of this inherent flexibility, the DOTk software package provides an easy-to-use MATLAB interface that enables users to call DOTk solution methods directly from the MATLAB command window.
Parametric optimal control of uncertain systems under an optimistic value criterion
NASA Astrophysics Data System (ADS)
Li, Bo; Zhu, Yuanguo
2018-01-01
It is well known that the optimal control of a linear quadratic model is characterized by the solution of a Riccati differential equation. In many cases, the corresponding Riccati differential equation cannot be solved exactly such that the optimal feedback control may be a complex time-oriented function. In this article, a parametric optimal control problem of an uncertain linear quadratic model under an optimistic value criterion is considered for simplifying the expression of optimal control. Based on the equation of optimality for the uncertain optimal control problem, an approximation method is presented to solve it. As an application, a two-spool turbofan engine optimal control problem is given to show the utility of the proposed model and the efficiency of the presented approximation method.
Integrated System-Level Optimization for Concurrent Engineering With Parametric Subsystem Modeling
NASA Technical Reports Server (NTRS)
Schuman, Todd; DeWeck, Oliver L.; Sobieski, Jaroslaw
2005-01-01
The introduction of concurrent design practices to the aerospace industry has greatly increased the productivity of engineers and teams during design sessions as demonstrated by JPL's Team X. Simultaneously, advances in computing power have given rise to a host of potent numerical optimization methods capable of solving complex multidisciplinary optimization problems containing hundreds of variables, constraints, and governing equations. Unfortunately, such methods are tedious to set up and require significant amounts of time and processor power to execute, thus making them unsuitable for rapid concurrent engineering use. This paper proposes a framework for Integration of System-Level Optimization with Concurrent Engineering (ISLOCE). It uses parametric neural-network approximations of the subsystem models. These approximations are then linked to a system-level optimizer that is capable of reaching a solution quickly due to the reduced complexity of the approximations. The integration structure is described in detail and applied to the multiobjective design of a simplified Space Shuttle external fuel tank model. Further, a comparison is made between the new framework and traditional concurrent engineering (without system optimization) through an experimental trial with two groups of engineers. Each method is evaluated in terms of optimizer accuracy, time to solution, and ease of use. The results suggest that system-level optimization, running as a background process during integrated concurrent engineering sessions, is potentially advantageous as long as it is judiciously implemented.
Integrated identification, modeling and control with applications
NASA Astrophysics Data System (ADS)
Shi, Guojun
This thesis deals with the integration of system design, identification, modeling and control. In particular, six interdisciplinary engineering problems are addressed and investigated. Theoretical results are established and applied to structural vibration reduction and engine control problems. First, the data-based LQG control problem is formulated and solved. It is shown that a state space model is not necessary to solve this problem; rather a finite sequence from the impulse response is the only model data required to synthesize an optimal controller. The new theory avoids unnecessary reliance on a model, required in the conventional design procedure. The infinite horizon model predictive control problem is addressed for multivariable systems. The basic properties of the receding horizon implementation strategy are investigated and the complete framework for solving the problem is established. The new theory allows the accommodation of hard input constraints and time delays. The developed control algorithms guarantee closed-loop stability. A closed-loop identification and infinite horizon model predictive control design procedure is established for engine speed regulation. The developed algorithms are tested on the Cummins Engine Simulator and the desired results are obtained. A finite signal-to-noise ratio model is considered for noise signals. An information quality index is introduced which measures the essential information precision required for stabilization. The problems of minimum variance control and covariance control are formulated and investigated. Convergent algorithms are developed for solving the problems of interest. The problem of the integrated passive and active control design is addressed in order to improve the overall system performance. A design algorithm is developed, which simultaneously finds: (i) the optimal values of the stiffness and damping ratios for the structure, and (ii) an optimal output variance constrained stabilizing controller such that the active control energy is minimized. A weighted q-Markov COVER method is introduced for identification with measurement noise. The result is used to develop an iterative closed-loop identification/control design algorithm. The effectiveness of the algorithm is illustrated by experimental results.
NASA Technical Reports Server (NTRS)
Orme, John S.
1995-01-01
The performance seeking control algorithm optimizes total propulsion system performance. This adaptive, model-based optimization algorithm has been successfully flight demonstrated on two engines with differing levels of degradation. Models of the engine, nozzle, and inlet produce reliable, accurate estimates of engine performance. But, because of an observability problem, component levels of degradation cannot be accurately determined. Depending on engine-specific operating characteristics, PSC achieves various levels of performance improvement. For example, engines with more deterioration typically operate at higher turbine temperatures than less deteriorated engines. Thus, when the PSC maximum thrust mode is applied, for example, there will be less temperature margin available to be traded for increasing thrust.
NASA Astrophysics Data System (ADS)
Rajora, M.; Zou, P.; Xu, W.; Jin, L.; Chen, W.; Liang, S. Y.
2017-12-01
With the rapidly changing demands of the manufacturing market, intelligent techniques are being used to solve engineering problems due to their ability to handle nonlinear, complex problems. For example, the conventional production of stator cores relies upon experienced engineers to make an initial plan for the number of compensation sheets to be added to achieve uniform pressure distribution throughout the laminations. Additionally, these engineers must use their experience to revise the initial plans based upon the measurements made during the production of the stator core. However, this method yields inconsistent results, as humans are incapable of storing and analysing large amounts of data. In this article, first, a Neural Network (NN), trained using a hybrid Levenberg-Marquardt (LM)-Genetic Algorithm (GA) approach, is developed to assist the engineers with the decision-making process. Next, the trained NN is used as a fitness function in an optimization algorithm to find the optimal values of the initial compensation sheet plan with the aim of minimizing the required revisions during the production of the stator core.
Least Squares Computations in Science and Engineering
1994-02-01
iterative least squares deblurring procedure. Because of the ill-posed characteristics of the deconvolution problem, in the presence of noise, direct... optimization methods. Generally, the problems are accompanied by constraints, such as bound constraints, and the observations are corrupted by noise. The... engineering. This effort has involved interaction with researchers in closed-loop active noise (vibration) control at Phillips Air Force Laboratory
Probabilistic Finite Element Analysis & Design Optimization for Structural Designs
NASA Astrophysics Data System (ADS)
Deivanayagam, Arumugam
This study focuses on implementing the probabilistic nature of material properties (Kevlar® 49) in the existing deterministic finite element analysis (FEA) of a fabric-based engine containment system through Monte Carlo simulations (MCS), and on the implementation of probabilistic analysis in engineering designs through Reliability-Based Design Optimization (RBDO). First, the emphasis is on experimental data analysis focusing on probabilistic distribution models which characterize the randomness associated with the experimental data. The material properties of Kevlar® 49 are modeled using experimental data analysis and implemented along with an existing spiral modeling scheme (SMS) and user-defined constitutive model (UMAT) for fabric-based engine containment simulations in LS-DYNA. MCS of the model are performed to observe the failure patterns and exit velocities of the models. Then the solutions are compared with NASA experimental tests and deterministic results. MCS with probabilistic material data gives a better perspective on the results than a single deterministic simulation. The next part of the research is to implement the probabilistic material properties in engineering designs. The main aim of structural design is to obtain optimal solutions. However, in a deterministic optimization problem, even though the structure is cost effective, it becomes highly unreliable if the uncertainty that may be associated with the system (material properties, loading, etc.) is not represented or considered in the solution process. A reliable and optimal solution can be obtained by performing reliability optimization along with the deterministic optimization, which is RBDO. In the RBDO problem formulation, in addition to structural performance constraints, reliability constraints are also considered. This part of the research starts with an introduction to reliability analysis, such as first-order and second-order reliability analysis, followed by simulation techniques that are performed to obtain the probability of failure and the reliability of structures. Next, a decoupled RBDO procedure is proposed with a new reliability analysis formulation with sensitivity analysis, which is performed to remove the highly reliable constraints from the RBDO, thereby reducing the computational time and the number of function evaluations. Finally, implementations of the reliability analysis concepts and RBDO in finite element 2D truss problems and a planar beam problem are presented and discussed.
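The Monte Carlo step described above reduces, in its simplest form, to sampling the random inputs and counting limit-state violations. The sketch below estimates a probability of failure for the elementary limit state g = strength − stress; the lognormal and normal models and their parameters are made-up stand-ins, not the Kevlar® 49 data or LS-DYNA simulations of the study.

```python
# Minimal Monte Carlo simulation of a probability of failure for the simple
# limit state g = strength - stress. Distributions and parameters are assumed.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)
n = 1_000_000

strength = rng.lognormal(mean=np.log(400.0), sigma=0.08, size=n)   # MPa
stress = rng.normal(loc=300.0, scale=30.0, size=n)                 # MPa

p_f = float(np.mean(strength - stress < 0.0))     # P[g(x) < 0]
beta = -norm.ppf(p_f)                             # generalized reliability index
std_err = np.sqrt(p_f * (1.0 - p_f) / n)          # sampling error of the estimate

print(f"estimated P_f = {p_f:.5f} +/- {std_err:.5f}, beta = {beta:.2f}")
```

The same failure indicator evaluated on reliability constraints is what a decoupled RBDO loop would query repeatedly, which is why cheaper first- and second-order approximations are attractive inside the optimization.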
Applications of fuzzy theories to multi-objective system optimization
NASA Technical Reports Server (NTRS)
Rao, S. S.; Dhingra, A. K.
1991-01-01
Most of the computer aided design techniques developed so far deal with the optimization of a single objective function over the feasible design space. However, there often exist several engineering design problems which require a simultaneous consideration of several objective functions. This work presents several techniques of multiobjective optimization. In addition, a new formulation, based on fuzzy theories, is also introduced for the solution of multiobjective system optimization problems. The fuzzy formulation is useful in dealing with systems which are described imprecisely using fuzzy terms such as, 'sufficiently large', 'very strong', or 'satisfactory'. The proposed theory translates the imprecise linguistic statements and multiple objectives into equivalent crisp mathematical statements using fuzzy logic. The effectiveness of all the methodologies and theories presented is illustrated by formulating and solving two different engineering design problems. The first one involves the flight trajectory optimization and the main rotor design of helicopters. The second one is concerned with the integrated kinematic-dynamic synthesis of planar mechanisms. The use and effectiveness of nonlinear membership functions in fuzzy formulation is also demonstrated. The numerical results indicate that the fuzzy formulation could yield results which are qualitatively different from those provided by the crisp formulation. It is felt that the fuzzy formulation will handle real life design problems on a more rational basis.
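A standard way to turn such fuzzy statements into a crisp program is the Bellman-Zadeh max-min formulation: assign each objective a membership function and maximize the smallest membership. The sketch below does this for a toy two-objective linear program with linear memberships built from a payoff table; the data are assumptions and are unrelated to the helicopter and mechanism applications of the paper.

```python
# Bellman-Zadeh max-min fuzzy formulation of a toy two-objective LP with
# linear membership functions. All example data are illustrative assumptions.
import numpy as np
from scipy.optimize import linprog

# Feasible set: x1 + x2 <= 4, 0 <= x1 <= 3, 0 <= x2 <= 3.
A = np.array([[1.0, 1.0]])
b = np.array([4.0])
bounds = [(0.0, 3.0), (0.0, 3.0)]

objectives = [np.array([3.0, 1.0]),     # f1 = 3*x1 + x2   (maximize)
              np.array([1.0, 2.0])]     # f2 = x1 + 2*x2   (maximize)

# Individual optima (best values) and a payoff table (to set worst values).
optimizers = [linprog(-c, A_ub=A, b_ub=b, bounds=bounds).x for c in objectives]
best = [float(c @ optimizers[i]) for i, c in enumerate(objectives)]
worst = [min(float(c @ x) for x in optimizers) for c in objectives]

# Max-min problem over (x1, x2, lam): maximize lam subject to
#   (c_i @ x - worst_i) / (best_i - worst_i) >= lam  and the original constraints.
c_lp = np.array([0.0, 0.0, -1.0])                   # minimize -lam
A_mu = [np.append(-c / (best[i] - worst[i]), 1.0) for i, c in enumerate(objectives)]
b_mu = [-worst[i] / (best[i] - worst[i]) for i in range(len(objectives))]
A_full = np.vstack([np.hstack([A, np.zeros((1, 1))])] + [np.array([row]) for row in A_mu])
b_full = np.concatenate([b, b_mu])
res = linprog(c_lp, A_ub=A_full, b_ub=b_full, bounds=bounds + [(0.0, 1.0)])

x_star, lam_star = res.x[:2], res.x[2]
print("compromise design:", np.round(x_star, 3), "membership level:", round(lam_star, 3))
for i, c in enumerate(objectives):
    print(f"f{i + 1} = {float(c @ x_star):.3f}  (best {best[i]:.1f}, worst {worst[i]:.1f})")
```

Nonlinear membership functions, as discussed in the abstract, would replace the linear membership constraints and generally require a nonlinear programming solver instead of an LP.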
Deformation effect simulation and optimization for double front axle steering mechanism
NASA Astrophysics Data System (ADS)
Wu, Jungang; Zhang, Siqin; Yang, Qinglong
2013-03-01
This paper studies the tire wear problem of heavy vehicles with a double front axle steering mechanism from the standpoint of the flexibility of the steering mechanism, and proposes a structural optimization method that uses both traditional static structural theory and a dynamic structural approach, the Equivalent Static Load (ESL) method, to optimize key parts. The good simulation and test results show that this method has high engineering practicality and reference value for the tire wear problem in double front axle steering mechanism design.
Dynamic optimization of chemical processes using ant colony framework.
Rajesh, J; Gupta, K; Kusumakar, H S; Jayaraman, V K; Kulkarni, B D
2001-11-01
The ant colony framework is illustrated by considering the dynamic optimization of six important benchmark examples. This new computational tool is simple to implement and can tackle problems with state as well as terminal constraints in a straightforward fashion. It requires fewer grid points to reach the global optimum at relatively very low computational effort. The examples analyzed here, with varying degrees of complexity, illustrate its potential for solving a large class of process optimization problems in chemical engineering.
CrossTalk, The Journal of Defense Software Engineering. Volume 27, Number 3. May/June 2014
2014-06-01
Issue contents include "The Problem of Prolific Process" (on the optimal amount of process), "Programming Will Never Be Obsolete" (on the enduring need for software developers' creativity), and an article applying a next-generation multispectral iris biometric to the complex problems of biometric database construction; contributors include Delores M. Etter, Jennifer Webb, and John Howard.
Low Complexity Models to improve Incomplete Sensitivities for Shape Optimization
NASA Astrophysics Data System (ADS)
Stanciu, Mugurel; Mohammadi, Bijan; Moreau, Stéphane
2003-01-01
The present global platform for simulation and design of multi-model configurations treats shape optimization problems in aerodynamics. Flow solvers are coupled with optimization algorithms based on CAD-free and CAD-connected frameworks. Newton methods are used together with incomplete expressions of the gradients. Such incomplete sensitivities are improved using reduced models based on physical assumptions. The validity and the application of this approach to real-life problems are presented. The numerical examples concern shape optimization for an airfoil, a business jet and a car engine cooling axial fan.
Low-thrust trajectory optimization in a full ephemeris model
NASA Astrophysics Data System (ADS)
Cai, Xing-Shan; Chen, Yang; Li, Jun-Feng
2014-10-01
Low-thrust trajectory optimization with complicated constraints must be considered in practical engineering. In most of the literature, this problem is simplified into a two-body model in which the spacecraft is subject only to the gravitational force of the central body and its own electric propulsion, and the gravity assist (GA) is modeled as an instantaneous velocity increment. This paper presents a method to solve the fuel-optimal problem of low-thrust trajectories with complicated constraints in a full ephemeris model, which is closer to practical engineering conditions. First, it introduces various perturbations, including a third body's gravity, the nonspherical perturbation and the solar radiation pressure, into the dynamic equation. Second, it builds two types of equivalent inner constraints to describe the GA. At the same time, the paper applies a series of techniques, such as a homotopic approach, to enhance the likelihood of convergence to the globally optimal solution.
High Level Rule Modeling Language for Airline Crew Pairing
NASA Astrophysics Data System (ADS)
Mutlu, Erdal; Birbil, Ş. Ilker; Bülbül, Kerem; Yenigün, Hüsnü
2011-09-01
The crew pairing problem is an airline optimization problem where a set of least costly pairings (consecutive flights to be flown by a single crew) that covers every flight in a given flight network is sought. A pairing is defined by using a very complex set of feasibility rules imposed by international and national regulatory agencies, and also by the airline itself. The cost of a pairing is also defined by using complicated rules. When an optimization engine generates a sequence of flights from a given flight network, it has to check all these feasibility rules to ensure whether the sequence forms a valid pairing. Likewise, the engine needs to calculate the cost of the pairing by using certain rules. However, the rules used for checking the feasibility and calculating the costs are usually not static. Furthermore, the airline companies carry out what-if-type analyses through testing several alternate scenarios in each planning period. Therefore, embedding the implementation of feasibility checking and cost calculation rules into the source code of the optimization engine is not a practical approach. In this work, a high level language called ARUS is introduced for describing the feasibility and cost calculation rules. A compiler for ARUS is also implemented in this work to generate a dynamic link library to be used by crew pairing optimization engines.
Robust Neighboring Optimal Guidance for the Advanced Launch System
NASA Technical Reports Server (NTRS)
Hull, David G.
1993-01-01
In recent years, optimization has become an engineering tool through the availability of numerous successful nonlinear programming codes. Optimal control problems are converted into parameter optimization (nonlinear programming) problems by assuming the control to be piecewise linear, making the unknowns the nodes or junction points of the linear control segments. Once the optimal piecewise-linear (suboptimal) control is known, a guidance law for operating near the suboptimal path is the neighboring optimal piecewise-linear control (neighboring suboptimal control). Research conducted under this grant has been directed toward the investigation of neighboring suboptimal control as a guidance scheme for an advanced launch system.
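A minimal sketch of the control parameterization idea, under assumptions: the control is represented by its values at a few nodes and linearly interpolated in between, the dynamics are a toy double integrator rather than the launch-vehicle model, and a general-purpose optimizer stands in for the nonlinear programming codes mentioned above.

```python
# Sketch: convert an optimal control problem into parameter optimization by making
# the node values of a piecewise-linear control the unknowns (illustrative model).
import numpy as np
from scipy.optimize import minimize

T, N = 1.0, 8                         # horizon and number of control nodes
t_nodes = np.linspace(0.0, T, N)

def simulate(u_nodes, dt=0.01):
    """Integrate x'' = u with u(t) linearly interpolated between the nodes."""
    x, v = 0.0, 0.0
    for t in np.arange(0.0, T, dt):
        u = np.interp(t, t_nodes, u_nodes)
        x, v = x + v * dt, v + u * dt
    return x, v

def cost(u_nodes):
    x, v = simulate(u_nodes)
    effort = np.trapz(u_nodes ** 2, t_nodes)
    # Penalize missing the target state (x = 1, v = 0) plus control effort
    return effort + 1e3 * ((x - 1.0) ** 2 + v ** 2)

res = minimize(cost, np.ones(N), method="Nelder-Mead",
               options={"maxiter": 4000, "xatol": 1e-6})
print(res.x)                          # suboptimal piecewise-linear control nodes
```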
University Policies under Varying Market Conditions: The Training of Electrical Engineers.
ERIC Educational Resources Information Center
Eckstein, Zvi; And Others
1988-01-01
Analyzes an Israeli university's problem in optimizing the quality and quantity of electrical engineers in response to fluctuating enrollment. An equilibrium model considers the effect of students' occupation choice and the university's decision on the current and future demand and supply of engineers, in order to predict the equilibrium number of…
Adaptive building skin structures
NASA Astrophysics Data System (ADS)
Del Grosso, A. E.; Basso, P.
2010-12-01
The concept of adaptive and morphing structures has gained considerable attention in recent years in many fields of engineering. In civil engineering, however, very few practical applications have been reported to date. Non-conventional structural concepts like deployable, inflatable and morphing structures may indeed provide innovative solutions to some of the problems that the construction industry is being called to face; the search for low-energy-consumption or even energy-harvesting green buildings is one example. This paper first presents a review of the above problems and technologies, which shows how their solution requires a multidisciplinary approach involving the integration of architectural and engineering disciplines. The discussion continues with the presentation of a possible application of two adaptive, dynamically morphing structures proposed for the realization of an acoustic envelope. The core of the two applications is the use of a novel optimization process which guides the search for optimal solutions by means of an evolutionary technique, while the compatibility of the resulting configurations of the adaptive envelope is ensured by the virtual force density method.
Optimization Strategies for Sensor and Actuator Placement
NASA Technical Reports Server (NTRS)
Padula, Sharon L.; Kincaid, Rex K.
1999-01-01
This paper provides a survey of actuator and sensor placement problems from a wide range of engineering disciplines and a variety of applications. Combinatorial optimization methods are recommended as a means for identifying sets of actuators and sensors that maximize performance. Several sample applications from NASA Langley Research Center, such as active structural acoustic control, are covered in detail. Laboratory and flight tests of these applications indicate that actuator and sensor placement methods are effective and important. Lessons learned in solving these optimization problems can guide future research.
Variational Trajectory Optimization Tool Set: Technical description and user's manual
NASA Technical Reports Server (NTRS)
Bless, Robert R.; Queen, Eric M.; Cavanaugh, Michael D.; Wetzel, Todd A.; Moerder, Daniel D.
1993-01-01
The algorithms that comprise the Variational Trajectory Optimization Tool Set (VTOTS) package are briefly described. The VTOTS is a software package for solving nonlinear constrained optimal control problems from a wide range of engineering and scientific disciplines. The VTOTS package was specifically designed to minimize the amount of user programming; in fact, for problems that may be expressed in terms of analytical functions, the user needs only to define the problem in terms of symbolic variables. This version of the VTOTS does not support tabular data; thus, problems must be expressed in terms of analytical functions. The VTOTS package consists of two methods for solving nonlinear optimal control problems: a time-domain finite-element algorithm and a multiple shooting algorithm. These two algorithms, under the VTOTS package, may be run independently or jointly. The finite-element algorithm generates approximate solutions, whereas the shooting algorithm provides a more accurate solution to the optimization problem. A user's manual, some examples with results, and a brief description of the individual subroutines are included.
NASA Astrophysics Data System (ADS)
Han, Xiaobao; Li, Huacong; Jia, Qiusheng
2017-12-01
For the dynamic decoupling of a polynomial linear parameter varying (PLPV) system, a robust dominance pre-compensator design method is given. The parameterized pre-compensator design problem is converted into an optimization problem constrained by parameterized linear matrix inequalities (PLMI) using the concept of a parameterized Lyapunov function (PLF). To solve the PLMI-constrained optimization problem, the pre-compensator design problem is reduced to a standard convex optimization problem with ordinary linear matrix inequality (LMI) constraints on a newly constructed convex polyhedron. Moreover, a parameter-scheduling pre-compensator is obtained which satisfies both robust performance and decoupling requirements. Finally, the feasibility and validity of the robust diagonal dominance pre-compensator design method are verified by numerical simulation on a turbofan engine PLPV model.
An Enhanced Memetic Algorithm for Single-Objective Bilevel Optimization Problems.
Islam, Md Monjurul; Singh, Hemant Kumar; Ray, Tapabrata; Sinha, Ankur
2017-01-01
Bilevel optimization, as the name reflects, deals with optimization at two interconnected hierarchical levels. The aim is to identify the optimum of an upper-level leader problem, subject to the optimality of a lower-level follower problem. Several problems from the domain of engineering, logistics, economics, and transportation have an inherent nested structure which requires them to be modeled as bilevel optimization problems. Increasing size and complexity of such problems has prompted active theoretical and practical interest in the design of efficient algorithms for bilevel optimization. Given the nested nature of bilevel problems, the computational effort (number of function evaluations) required to solve them is often quite high. In this article, we explore the use of a Memetic Algorithm (MA) to solve bilevel optimization problems. While MAs have been quite successful in solving single-level optimization problems, there have been relatively few studies exploring their potential for solving bilevel optimization problems. MAs essentially attempt to combine advantages of global and local search strategies to identify optimum solutions with low computational cost (function evaluations). The approach introduced in this article is a nested Bilevel Memetic Algorithm (BLMA). At both upper and lower levels, either a global or a local search method is used during different phases of the search. The performance of BLMA is presented on twenty-five standard test problems and two real-life applications. The results are compared with other established algorithms to demonstrate the efficacy of the proposed approach.
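The nested structure described above can be sketched as follows; this is not the BLMA itself, only a hedged illustration in which every upper-level evaluation triggers a full lower-level solve, with toy quadratic objectives standing in for real leader and follower problems.

```python
# Sketch of nested bilevel optimization: the follower problem is solved to optimality
# for each leader candidate before the leader objective is evaluated (toy objectives).
import numpy as np
from scipy.optimize import minimize, minimize_scalar

def lower_level_optimum(x):
    """Follower: min_y (y - x)^2 + 0.1*y^2 for a given leader decision x."""
    return minimize_scalar(lambda y: (y - x) ** 2 + 0.1 * y ** 2).x

def upper_level_objective(x_vec):
    x = float(x_vec[0])
    y = lower_level_optimum(x)        # nested call: one follower solve per evaluation
    return (x - 3.0) ** 2 + (y - 2.0) ** 2

res = minimize(upper_level_objective, x0=[0.0], method="Nelder-Mead")
print(res.x, lower_level_optimum(float(res.x[0])))
```

The nested call is exactly why function-evaluation counts explode in bilevel settings, which motivates the memetic combination of global and local search described in the abstract.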
NASA Astrophysics Data System (ADS)
Ma, Lin; Wang, Kexin; Xu, Zuhua; Shao, Zhijiang; Song, Zhengyu; Biegler, Lorenz T.
2018-05-01
This study presents a trajectory optimization framework for a lunar rover performing vertical takeoff vertical landing (VTVL) maneuvers in the presence of terrain using variable-thrust propulsion. First, a VTVL trajectory optimization problem with a three-dimensional kinematics and dynamics model, boundary conditions, and path constraints is formulated. Then, a finite-element approach transcribes the formulated trajectory optimization problem into a nonlinear programming (NLP) problem solved by a highly efficient NLP solver. A homotopy-based backtracking strategy is applied to enhance convergence in solving the formulated VTVL trajectory optimization problem. The optimal thrust solution typically has a "bang-bang" profile, given that bounds are imposed on the magnitude of engine thrust. An adaptive mesh refinement strategy based on a constant Hamiltonian profile is designed to address the difficulty of locating the breakpoints in the thrust profile. Four scenarios are simulated. Simulation results indicate that the proposed trajectory optimization framework has sufficient adaptability to handle VTVL missions efficiently.
DOE Office of Scientific and Technical Information (OSTI.GOV)
None, None
The Second SIAM Conference on Computational Science and Engineering was held in San Diego from February 10-12, 2003. Total conference attendance was 553, a 23% increase over the first conference. The focus of this conference was to draw attention to the tremendous range of major computational efforts on large problems in science and engineering, to promote the interdisciplinary culture required to meet these large-scale challenges, and to encourage the training of the next generation of computational scientists. Computational Science & Engineering (CS&E) is now widely accepted, along with theory and experiment, as a crucial third mode of scientific investigation and engineering design. Aerospace, automotive, biological, chemical, semiconductor, and other industrial sectors now rely on simulation for technical decision support. For federal agencies also, CS&E has become an essential support for decisions on resources, transportation, and defense. CS&E is, by nature, interdisciplinary. It grows out of physical applications and it depends on computer architecture, but at its heart are powerful numerical algorithms and sophisticated computer science techniques. From an applied mathematics perspective, much of CS&E has involved analysis, but the future surely includes optimization and design, especially in the presence of uncertainty. Another mathematical frontier is the assimilation of very large data sets through such techniques as adaptive multi-resolution, automated feature search, and low-dimensional parameterization. The themes of the 2003 conference included, but were not limited to: Advanced Discretization Methods; Computational Biology and Bioinformatics; Computational Chemistry and Chemical Engineering; Computational Earth and Atmospheric Sciences; Computational Electromagnetics; Computational Fluid Dynamics; Computational Medicine and Bioengineering; Computational Physics and Astrophysics; Computational Solid Mechanics and Materials; CS&E Education; Meshing and Adaptivity; Multiscale and Multiphysics Problems; Numerical Algorithms for CS&E; Discrete and Combinatorial Algorithms for CS&E; Inverse Problems; Optimal Design, Optimal Control, and Inverse Problems; Parallel and Distributed Computing; Problem-Solving Environments; Software and Middleware Systems; Uncertainty Estimation and Sensitivity Analysis; and Visualization and Computer Graphics.
Optimal design and installation of ultra high bypass ratio turbofan nacelle
NASA Astrophysics Data System (ADS)
Savelyev, Andrey; Zlenko, Nikolay; Matyash, Evgeniy; Mikhaylov, Sergey; Shenkin, Andrey
2016-10-01
The paper is devoted to the problem of designing and optimizing the nacelle of a high-bypass-ratio, high-thrust turbofan engine. The optimization algorithm EGO, based on surrogate models and on maximizing the probability of improvement of the objective function, has been used. The design methodology is based on the numerical solution of the Reynolds-averaged Navier-Stokes equations, with the Spalart-Allmaras turbulence model chosen for RANS closure. The effective thrust loss has been used as the objective function in optimizing the engine nacelle. As a result of the optimization, the effective thrust has been increased by 1.5%. A blended wing body aircraft configuration has been studied as a possible application, and two variants of the engine layout have been considered. It has been shown that the power plant changes the pressure distribution on the aircraft surface, which results in a substantial reduction of the configuration's lift-to-drag ratio.
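A hedged sketch of one surrogate-based design iteration of the kind the abstract describes: fit a Gaussian-process surrogate to the designs evaluated so far and pick the next design by maximizing the probability of improvement. The one-dimensional test function, kernel, and candidate grid are illustrative assumptions, not the nacelle problem or the authors' EGO implementation.

```python
# Sketch of a probability-of-improvement step with a Gaussian-process surrogate.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

f = lambda x: np.sin(3 * x) + 0.5 * x          # stand-in for the expensive CFD objective
X = np.array([[0.2], [1.0], [2.0], [2.8]])     # designs evaluated so far
y = f(X).ravel()

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), normalize_y=True).fit(X, y)

cand = np.linspace(0.0, 3.0, 301).reshape(-1, 1)
mu, sigma = gp.predict(cand, return_std=True)
f_best = y.min()
pi = norm.cdf((f_best - mu - 1e-3) / np.maximum(sigma, 1e-12))  # probability of improvement

x_next = cand[np.argmax(pi)]                   # next design to send to the flow solver
print(x_next)
```

In a full EGO-style loop the chosen point would be evaluated with the expensive solver, appended to the data set, and the surrogate refit before the next iteration.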
Optimal trajectories for an aerospace plane. Part 1: Formulation, results, and analysis
NASA Technical Reports Server (NTRS)
Miele, Angelo; Lee, W. Y.; Wu, G. D.
1990-01-01
The optimization of the trajectories of an aerospace plane is discussed. This is a hypervelocity vehicle capable of achieving orbital speed, while taking off horizontally. The vehicle is propelled by four types of engines: turbojet engines for flight at subsonic speeds/low supersonic speeds; ramjet engines for flight at moderate supersonic speeds/low hypersonic speeds; scramjet engines for flight at hypersonic speeds; and rocket engines for flight at near-orbital speeds. A single-stage-to-orbit (SSTO) configuration is considered, and the transition from low supersonic speeds to orbital speeds is studied under the following assumptions: the turbojet portion of the trajectory has been completed; the aerospace plane is controlled via the angle of attack and the power setting; the aerodynamic model is the generic hypersonic aerodynamics model example (GHAME). Concerning the engine model, three options are considered: (EM1), a ramjet/scramjet combination in which the scramjet specific impulse tends to a nearly-constant value at large Mach numbers; (EM2), a ramjet/scramjet combination in which the scramjet specific impulse decreases monotonically at large Mach numbers; and (EM3), a ramjet/scramjet/rocket combination in which, owing to stagnation temperature limitations, the scramjet operates only at M approx. less than 15; at higher Mach numbers, the scramjet is shut off and the aerospace plane is driven only by the rocket engines. Under the above assumptions, four optimization problems are solved using the sequential gradient-restoration algorithm for optimal control problems: (P1) minimization of the weight of fuel consumed; (P2) minimization of the peak dynamic pressure; (P3) minimization of the peak heating rate; and (P4) minimization of the peak tangential acceleration.
Analytical and Computational Properties of Distributed Approaches to MDO
NASA Technical Reports Server (NTRS)
Alexandrov, Natalia M.; Lewis, Robert Michael
2000-01-01
Historical evolution of engineering disciplines and the complexity of the MDO problem suggest that disciplinary autonomy is a desirable goal in formulating and solving MDO problems. We examine the notion of disciplinary autonomy and discuss the analytical properties of three approaches to formulating and solving MDO problems that achieve varying degrees of autonomy by distributing the problem along disciplinary lines. Two of the approaches-Optimization by Linear Decomposition and Collaborative Optimization-are based on bi-level optimization and reflect what we call a structural perspective. The third approach, Distributed Analysis Optimization, is a single-level approach that arises from what we call an algorithmic perspective. The main conclusion of the paper is that disciplinary autonomy may come at a price: in the bi-level approaches, the system-level constraints introduced to relax the interdisciplinary coupling and enable disciplinary autonomy can cause analytical and computational difficulties for optimization algorithms. The single-level alternative we discuss affords a more limited degree of autonomy than that of the bi-level approaches, but without the computational difficulties of the bi-level methods. Key Words: Autonomy, bi-level optimization, distributed optimization, multidisciplinary optimization, multilevel optimization, nonlinear programming, problem integration, system synthesis
A Systematic Approach to Sensor Selection for Aircraft Engine Health Estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Garg, Sanjay
2009-01-01
A systematic approach for selecting an optimal suite of sensors for on-board aircraft gas turbine engine health estimation is presented. The methodology optimally chooses the engine sensor suite and the model tuning parameter vector to minimize the Kalman filter mean squared estimation error in the engine's health parameters or other unmeasured engine outputs. This technique specifically addresses the underdetermined estimation problem where there are more unknown system health parameters representing degradation than available sensor measurements. This paper presents the theoretical estimation error equations, and describes the optimization approach that is applied to select the sensors and model tuning parameters to minimize these errors. Two different model tuning parameter vector selection approaches are evaluated: the conventional approach of selecting a subset of health parameters to serve as the tuning parameters, and an alternative approach that selects tuning parameters as a linear combination of all health parameters. Results from the application of the technique to an aircraft engine simulation are presented, and compared to those from an alternative sensor selection strategy.
Integrated design optimization research and development in an industrial environment
NASA Astrophysics Data System (ADS)
Kumar, V.; German, Marjorie D.; Lee, S.-J.
1989-04-01
An overview is given of a design optimization project that has been in progress at the GE Research and Development Center for the past few years. The objective of this project is to develop a methodology and a software system for design automation and optimization of structural/mechanical components and systems. The effort focuses on research and development issues and also on optimization applications that can be related to real-life industrial design problems. The overall technical approach is based on integration of numerical optimization techniques, finite element methods, CAE and software engineering, and artificial intelligence/expert systems (AI/ES) concepts. The role of each of these engineering technologies in the development of a unified design methodology is illustrated. A software system DESIGN-OPT has been developed for both size and shape optimization of structural components subjected to static as well as dynamic loadings. By integrating this software with an automatic mesh generator, a geometric modeler and an attribute specification computer code, a software module SHAPE-OPT has been developed for shape optimization. Details of these software packages together with their applications to some 2- and 3-dimensional design problems are described.
Model-based optimal design of experiments - semidefinite and nonlinear programming formulations
Duarte, Belmiro P.M.; Wong, Weng Kee; Oliveira, Nuno M.C.
2015-01-01
We use mathematical programming tools, such as Semidefinite Programming (SDP) and Nonlinear Programming (NLP)-based formulations to find optimal designs for models used in chemistry and chemical engineering. In particular, we employ local design-based setups in linear models and a Bayesian setup in nonlinear models to find optimal designs. In the latter case, Gaussian Quadrature Formulas (GQFs) are used to evaluate the optimality criterion averaged over the prior distribution for the model parameters. Mathematical programming techniques are then applied to solve the optimization problems. Because such methods require the design space be discretized, we also evaluate the impact of the discretization scheme on the generated design. We demonstrate the techniques for finding D–, A– and E–optimal designs using design problems in biochemical engineering and show the method can also be directly applied to tackle additional issues, such as heteroscedasticity in the model. Our results show that the NLP formulation produces highly efficient D–optimal designs but is computationally less efficient than that required for the SDP formulation. The efficiencies of the generated designs from the two methods are generally very close and so we recommend the SDP formulation in practice. PMID:26949279
Integrated design optimization research and development in an industrial environment
NASA Technical Reports Server (NTRS)
Kumar, V.; German, Marjorie D.; Lee, S.-J.
1989-01-01
An overview is given of a design optimization project that has been in progress at the GE Research and Development Center for the past few years. The objective of this project is to develop a methodology and a software system for design automation and optimization of structural/mechanical components and systems. The effort focuses on research and development issues and also on optimization applications that can be related to real-life industrial design problems. The overall technical approach is based on integration of numerical optimization techniques, finite element methods, CAE and software engineering, and artificial intelligence/expert systems (AI/ES) concepts. The role of each of these engineering technologies in the development of a unified design methodology is illustrated. A software system DESIGN-OPT has been developed for both size and shape optimization of structural components subjected to static as well as dynamic loadings. By integrating this software with an automatic mesh generator, a geometric modeler and an attribute specification computer code, a software module SHAPE-OPT has been developed for shape optimization. Details of these software packages together with their applications to some 2- and 3-dimensional design problems are described.
Model-based optimal design of experiments - semidefinite and nonlinear programming formulations.
Duarte, Belmiro P M; Wong, Weng Kee; Oliveira, Nuno M C
2016-02-15
We use mathematical programming tools, such as Semidefinite Programming (SDP) and Nonlinear Programming (NLP)-based formulations to find optimal designs for models used in chemistry and chemical engineering. In particular, we employ local design-based setups in linear models and a Bayesian setup in nonlinear models to find optimal designs. In the latter case, Gaussian Quadrature Formulas (GQFs) are used to evaluate the optimality criterion averaged over the prior distribution for the model parameters. Mathematical programming techniques are then applied to solve the optimization problems. Because such methods require the design space be discretized, we also evaluate the impact of the discretization scheme on the generated design. We demonstrate the techniques for finding D-, A- and E-optimal designs using design problems in biochemical engineering and show the method can also be directly applied to tackle additional issues, such as heteroscedasticity in the model. Our results show that the NLP formulation produces highly efficient D-optimal designs but is computationally less efficient than that required for the SDP formulation. The efficiencies of the generated designs from the two methods are generally very close and so we recommend the SDP formulation in practice.
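As a hedged illustration of the mathematical-programming formulations discussed in the two records above, the sketch below poses a D-optimal design problem on a discretized design space as a convex log-determinant program in CVXPY; the single-factor quadratic model and the grid are assumptions chosen only for illustration, not the biochemical engineering cases of the paper.

```python
# Sketch: D-optimal approximate design on a discretized design space via log-det maximization.
import numpy as np
import cvxpy as cp

grid = np.linspace(-1.0, 1.0, 21)                        # discretized design space
F = np.vstack([np.ones_like(grid), grid, grid ** 2]).T   # model: b0 + b1*x + b2*x^2

w = cp.Variable(len(grid), nonneg=True)                  # design weights
M = sum(w[i] * np.outer(F[i], F[i]) for i in range(len(grid)))  # information matrix

problem = cp.Problem(cp.Maximize(cp.log_det(M)), [cp.sum(w) == 1])
problem.solve()

mask = np.array(w.value) > 1e-4
print(grid[mask], np.round(np.array(w.value)[mask], 3))  # support points and weights
```

For this quadratic model the computed support typically concentrates near {-1, 0, 1} with roughly equal weights, which matches the classical D-optimal design and serves as a sanity check on the formulation.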
Du, Shouqiang; Chen, Miao
2018-01-01
We consider a kind of nonsmooth optimization problem with [Formula: see text]-norm minimization, which has many applications in compressed sensing, signal reconstruction, and related engineering problems. Using smoothing approximation techniques, this kind of nonsmooth optimization problem can be transformed into a general unconstrained optimization problem, which can be solved by the proposed smoothing modified three-term conjugate gradient method. The method is based on the Polak-Ribière-Polyak conjugate gradient method. Because the Polak-Ribière-Polyak conjugate gradient method has good numerical properties, the proposed method possesses the sufficient descent property without any line search and is also proved to be globally convergent. Finally, numerical experiments show the efficiency of the proposed method.
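A minimal sketch of the smoothing idea, under the assumption (for illustration only) that the nonsmooth term is an l1-type penalty: each |x_i| is replaced by the smooth surrogate sqrt(x_i^2 + mu^2), making the whole objective differentiable; plain gradient descent stands in here for the proposed modified three-term conjugate gradient iteration.

```python
# Sketch: smoothing a nonsmooth penalized least-squares objective so a gradient-type
# method can be applied (the data-fit term and problem sizes are illustrative).
import numpy as np

def smoothed_objective_and_grad(x, A, b, lam, mu=1e-3):
    r = A @ x - b
    s = np.sqrt(x ** 2 + mu ** 2)            # smooth surrogate of |x_i|
    f = 0.5 * r @ r + lam * s.sum()
    g = A.T @ r + lam * (x / s)              # gradient of the smoothed objective
    return f, g

rng = np.random.default_rng(0)
A, b, lam = rng.standard_normal((20, 50)), rng.standard_normal(20), 0.1
x = np.zeros(50)
for _ in range(500):                         # gradient descent stands in for the CG method
    f, g = smoothed_objective_and_grad(x, A, b, lam)
    x -= 1e-2 * g
print(f)
```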
ERIC Educational Resources Information Center
Nikelshpur, Dmitry O.
2014-01-01
Similar to mammalian brains, Artificial Neural Networks (ANN) are universal approximators, capable of yielding near-optimal solutions to a wide assortment of problems. ANNs are used in many fields including medicine, internet security, engineering, retail, robotics, warfare, intelligence control, and finance. "ANNs have a tendency to get…
A nonlinear bi-level programming approach for product portfolio management.
Ma, Shuang
2016-01-01
Product portfolio management (PPM) is a critical decision-making task for companies across various industries in today's competitive environment. Traditional studies on the PPM problem have been oriented toward engineering feasibility and marketing, and pay relatively little attention to competitors' actions and competitive relations, especially in the mathematical optimization domain. The key challenge lies in how to construct a mathematical optimization model that describes this Stackelberg game-based leader-follower PPM problem and the competitive relations between the players. The primary contribution of this paper is a decision framework and an optimization model that leverage the PPM problems of the leader and the follower. A nonlinear, integer bi-level programming model is developed based on the decision framework. Furthermore, a bi-level nested genetic algorithm is put forward to solve this nonlinear bi-level programming model for the leader-follower PPM problem. A case study of notebook computer product portfolio optimization is reported. Results and analyses reveal that the leader-follower bi-level optimization model is robust and can empower product portfolio optimization.
Reverse engineering and identification in systems biology: strategies, perspectives and challenges.
Villaverde, Alejandro F; Banga, Julio R
2014-02-06
The interplay of mathematical modelling with experiments is one of the central elements in systems biology. The aim of reverse engineering is to infer, analyse and understand, through this interplay, the functional and regulatory mechanisms of biological systems. Reverse engineering is not exclusive of systems biology and has been studied in different areas, such as inverse problem theory, machine learning, nonlinear physics, (bio)chemical kinetics, control theory and optimization, among others. However, it seems that many of these areas have been relatively closed to outsiders. In this contribution, we aim to compare and highlight the different perspectives and contributions from these fields, with emphasis on two key questions: (i) why are reverse engineering problems so hard to solve, and (ii) what methods are available for the particular problems arising from systems biology?
Multivariable optimization of liquid rocket engines using particle swarm algorithms
NASA Astrophysics Data System (ADS)
Jones, Daniel Ray
Liquid rocket engines are highly reliable, controllable, and efficient compared to other conventional forms of rocket propulsion. As such, they have seen wide use in the space industry and have become the standard propulsion system for launch vehicles, orbit insertion, and orbital maneuvering. Though these systems are well understood, historical optimization techniques are often inadequate due to the highly non-linear nature of the engine performance problem. In this thesis, a Particle Swarm Optimization (PSO) variant was applied to maximize the specific impulse of a finite-area combustion chamber (FAC) equilibrium flow rocket performance model by controlling the engine's oxidizer-to-fuel ratio and de Laval nozzle expansion and contraction ratios. In addition to the PSO-controlled parameters, engine performance was calculated based on propellant chemistry, combustion chamber pressure, and ambient pressure, which are provided as inputs to the program. The performance code was validated by comparison with NASA's Chemical Equilibrium with Applications (CEA) and the commercially available Rocket Propulsion Analysis (RPA) tool. Similarly, the PSO algorithm was validated by comparison with brute-force optimization, which calculates all possible solutions and subsequently determines which is the optimum. Particle Swarm Optimization was shown to be an effective optimizer capable of quick and reliable convergence for complex functions of multiple non-linear variables.
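A generic particle swarm optimization loop of the kind applied in the thesis is sketched below; the toy objective stands in for the finite-area combustion chamber performance model, and the bounds, swarm size, and coefficients are illustrative assumptions rather than the values used in the work.

```python
# Sketch of a basic PSO maximizing a black-box objective over box bounds.
import numpy as np

def objective(x):      # stand-in for Isp as a function of O/F and nozzle area ratios
    return -np.sum((x - np.array([2.3, 3.0, 50.0])) ** 2 / np.array([1.0, 1.0, 100.0]))

rng = np.random.default_rng(1)
lo, hi = np.array([1.0, 1.5, 10.0]), np.array([8.0, 6.0, 200.0])
n, d, w, c1, c2 = 30, 3, 0.7, 1.5, 1.5        # swarm size, dims, inertia, accelerations

x = rng.uniform(lo, hi, (n, d))
v = np.zeros((n, d))
pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
gbest = pbest[pbest_f.argmax()]

for _ in range(200):
    r1, r2 = rng.random((n, d)), rng.random((n, d))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = np.clip(x + v, lo, hi)                # keep particles inside the design bounds
    f = np.array([objective(p) for p in x])
    better = f > pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    gbest = pbest[pbest_f.argmax()]
print(gbest)
```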
NASA Astrophysics Data System (ADS)
Guo, Weian; Li, Wuzhao; Zhang, Qun; Wang, Lei; Wu, Qidi; Ren, Hongliang
2014-11-01
In evolutionary algorithms, elites are crucial to maintain good features in solutions. However, too many elites can make the evolutionary process stagnate and cannot enhance the performance. This article employs particle swarm optimization (PSO) and biogeography-based optimization (BBO) to propose a hybrid algorithm termed biogeography-based particle swarm optimization (BPSO) which could make a large number of elites effective in searching optima. In this algorithm, the whole population is split into several subgroups; BBO is employed to search within each subgroup and PSO for the global search. Since not all the population is used in PSO, this structure overcomes the premature convergence in the original PSO. Time complexity analysis shows that the novel algorithm does not increase the time consumption. Fourteen numerical benchmarks and four engineering problems with constraints are used to test the BPSO. To better deal with constraints, a fuzzy strategy for the number of elites is investigated. The simulation results validate the feasibility and effectiveness of the proposed algorithm.
Nguyen, A; Yosinski, J; Clune, J
2016-01-01
The Achilles Heel of stochastic optimization algorithms is getting trapped on local optima. Novelty Search mitigates this problem by encouraging exploration in all interesting directions by replacing the performance objective with a reward for novel behaviors. This reward for novel behaviors has traditionally required a human-crafted, behavioral distance function. While Novelty Search is a major conceptual breakthrough and outperforms traditional stochastic optimization on certain problems, it is not clear how to apply it to challenging, high-dimensional problems where specifying a useful behavioral distance function is difficult. For example, in the space of images, how do you encourage novelty to produce hawks and heroes instead of endless pixel static? Here we propose a new algorithm, the Innovation Engine, that builds on Novelty Search by replacing the human-crafted behavioral distance with a Deep Neural Network (DNN) that can recognize interesting differences between phenotypes. The key insight is that DNNs can recognize similarities and differences between phenotypes at an abstract level, wherein novelty means interesting novelty. For example, a DNN-based novelty search in the image space does not explore in the low-level pixel space, but instead creates a pressure to create new types of images (e.g., churches, mosques, obelisks, etc.). Here, we describe the long-term vision for the Innovation Engine algorithm, which involves many technical challenges that remain to be solved. We then implement a simplified version of the algorithm that enables us to explore some of the algorithm's key motivations. Our initial results, in the domain of images, suggest that Innovation Engines could ultimately automate the production of endless streams of interesting solutions in any domain: for example, producing intelligent software, robot controllers, optimized physical components, and art.
NASA Astrophysics Data System (ADS)
Zheng, Ling; Duan, Xuwei; Deng, Zhaoxue; Li, Yinong
2014-03-01
A novel flow-mode magneto-rheological (MR) engine mount integrating a diaphragm de-coupler and a spoiler plate is designed and developed to isolate the engine and transmission from the chassis over a wide frequency range and to overcome the stiffening at high frequencies. A lumped-parameter model of the MR engine mount in a single-degree-of-freedom system is further developed based on the bond graph method to predict the performance of the MR engine mount accurately. The optimization mathematical model is established to minimize the total force transmissibility over several frequency ranges of interest. In this mathematical model, the lumped parameters are considered as design variables, while the maximum force transmissibility and the corresponding frequency in the low frequency range, as well as the individual lumped parameters, are imposed as constraints. A multiple-interval sensitivity analysis method is developed to select the optimization variables and improve the efficiency of the optimization process. An improved non-dominated sorting genetic algorithm (NSGA-II) is used to solve the multi-objective optimization problem. The synthesized distance between individuals in the Pareto set and individuals in the engineering-feasible set is defined and calculated, and a set of real design parameters is thus obtained from the relationship between the optimal lumped parameters and the practical design parameters of the MR engine mount. The program flowchart for the improved NSGA-II is given. The obtained results demonstrate the effectiveness of the proposed optimization approach in minimizing the total force transmissibility over the frequency ranges addressed.
Framework for Multidisciplinary Analysis, Design, and Optimization with High-Fidelity Analysis Tools
NASA Technical Reports Server (NTRS)
Orr, Stanley A.; Narducci, Robert P.
2009-01-01
A plan is presented for the development of a high fidelity multidisciplinary optimization process for rotorcraft. The plan formulates individual disciplinary design problems, identifies practical high-fidelity tools and processes that can be incorporated in an automated optimization environment, and establishes statements of the multidisciplinary design problem including objectives, constraints, design variables, and cross-disciplinary dependencies. Five key disciplinary areas are selected in the development plan. These are rotor aerodynamics, rotor structures and dynamics, fuselage aerodynamics, fuselage structures, and propulsion / drive system. Flying qualities and noise are included as ancillary areas. Consistency across engineering disciplines is maintained with a central geometry engine that supports all multidisciplinary analysis. The multidisciplinary optimization process targets the preliminary design cycle where gross elements of the helicopter have been defined. These might include number of rotors and rotor configuration (tandem, coaxial, etc.). It is at this stage that sufficient configuration information is defined to perform high-fidelity analysis. At the same time there is enough design freedom to influence a design. The rotorcraft multidisciplinary optimization tool is built and substantiated throughout its development cycle in a staged approach by incorporating disciplines sequentially.
First-Order Frameworks for Managing Models in Engineering Optimization
NASA Technical Reports Server (NTRS)
Alexandrov, Natalia M.; Lewis, Robert Michael
2000-01-01
Approximation/model management optimization (AMMO) is a rigorous methodology for attaining solutions of high-fidelity optimization problems with minimal expense in high-fidelity function and derivative evaluation. First-order AMMO frameworks allow for a wide variety of models and underlying optimization algorithms. Recent demonstrations with aerodynamic optimization achieved three-fold savings in terms of high-fidelity function and derivative evaluation in the case of variable-resolution models and five-fold savings in the case of variable-fidelity physics models. The savings are problem dependent but certain trends are beginning to emerge. We give an overview of the first-order frameworks, current computational results, and an idea of the scope of the first-order framework applicability.
Simulator for multilevel optimization research
NASA Technical Reports Server (NTRS)
Padula, S. L.; Young, K. C.
1986-01-01
A computer program designed to simulate and improve multilevel optimization techniques is described. By using simple analytic functions to represent complex engineering analyses, the simulator can generate and test a large variety of multilevel decomposition strategies in a relatively short time. This type of research is an essential step toward routine optimization of large aerospace systems. The paper discusses the types of optimization problems handled by the simulator and gives input and output listings and plots for a sample problem. It also describes multilevel implementation techniques which have value beyond the present computer program. Thus, this document serves as a user's manual for the simulator and as a guide for building future multilevel optimization applications.
Henriques, David; González, Patricia; Doallo, Ramón; Saez-Rodriguez, Julio; Banga, Julio R.
2017-01-01
Background: We consider a general class of global optimization problems dealing with nonlinear dynamic models. Although this class is relevant to many areas of science and engineering, here we are interested in applying this framework to the reverse engineering problem in computational systems biology, which yields very large mixed-integer dynamic optimization (MIDO) problems. In particular, we consider the framework of logic-based ordinary differential equations (ODEs). Methods: We present saCeSS2, a parallel method for the solution of this class of problems. This method is based on a parallel cooperative scatter search metaheuristic, with new mechanisms of self-adaptation and specific extensions to handle large mixed-integer problems. We have paid special attention to the avoidance of convergence stagnation using adaptive cooperation strategies tailored to this class of problems. Results: We illustrate its performance with a set of three very challenging case studies from the domain of dynamic modelling of cell signaling. The simplest case study considers a synthetic signaling pathway and has 84 continuous and 34 binary decision variables. A second case study considers the dynamic modeling of signaling in liver cancer using high-throughput data, and has 135 continuous and 109 binary decision variables. The third case study is an extremely difficult problem related to breast cancer, involving 690 continuous and 138 binary decision variables. We report computational results obtained on different infrastructures, including a local cluster, a large supercomputer and a public cloud platform. Interestingly, the results show how the cooperation of individual parallel searches modifies the systemic properties of the sequential algorithm, achieving superlinear speedups compared to an individual search (e.g. speedups of 15 with 10 cores) and significantly improving the performance (by more than 60%) with respect to a non-cooperative parallel scheme. The scalability of the method is also good (tests were performed using up to 300 cores). Conclusions: These results demonstrate that saCeSS2 can be used to successfully reverse engineer large dynamic models of complex biological pathways. Further, these results open up new possibilities for other MIDO-based large-scale applications in the life sciences, such as metabolic engineering, synthetic biology, and drug scheduling. PMID:28813442
Penas, David R; Henriques, David; González, Patricia; Doallo, Ramón; Saez-Rodriguez, Julio; Banga, Julio R
2017-01-01
We consider a general class of global optimization problems dealing with nonlinear dynamic models. Although this class is relevant to many areas of science and engineering, here we are interested in applying this framework to the reverse engineering problem in computational systems biology, which yields very large mixed-integer dynamic optimization (MIDO) problems. In particular, we consider the framework of logic-based ordinary differential equations (ODEs). We present saCeSS2, a parallel method for the solution of this class of problems. This method is based on a parallel cooperative scatter search metaheuristic, with new mechanisms of self-adaptation and specific extensions to handle large mixed-integer problems. We have paid special attention to the avoidance of convergence stagnation using adaptive cooperation strategies tailored to this class of problems. We illustrate its performance with a set of three very challenging case studies from the domain of dynamic modelling of cell signaling. The simplest case study considers a synthetic signaling pathway and has 84 continuous and 34 binary decision variables. A second case study considers the dynamic modeling of signaling in liver cancer using high-throughput data, and has 135 continuous and 109 binary decision variables. The third case study is an extremely difficult problem related to breast cancer, involving 690 continuous and 138 binary decision variables. We report computational results obtained on different infrastructures, including a local cluster, a large supercomputer and a public cloud platform. Interestingly, the results show how the cooperation of individual parallel searches modifies the systemic properties of the sequential algorithm, achieving superlinear speedups compared to an individual search (e.g. speedups of 15 with 10 cores) and significantly improving the performance (by more than 60%) with respect to a non-cooperative parallel scheme. The scalability of the method is also good (tests were performed using up to 300 cores). These results demonstrate that saCeSS2 can be used to successfully reverse engineer large dynamic models of complex biological pathways. Further, these results open up new possibilities for other MIDO-based large-scale applications in the life sciences such as metabolic engineering, synthetic biology, and drug scheduling.
Interior search algorithm (ISA): a novel approach for global optimization.
Gandomi, Amir H
2014-07-01
This paper presents the interior search algorithm (ISA) as a novel method for solving optimization tasks. The proposed ISA is inspired by interior design and decoration. The algorithm is different from other metaheuristic algorithms and provides new insight for global optimization. The proposed method is verified using some benchmark mathematical and engineering problems commonly used in the area of optimization. ISA results are further compared with well-known optimization algorithms. The results show that the ISA is efficiently capable of solving optimization problems. The proposed algorithm can outperform the other well-known algorithms. Further, the proposed algorithm is very simple and it only has one parameter to tune. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
Optimal Full Information Synthesis for Flexible Structures Implemented on Cray Supercomputers
NASA Technical Reports Server (NTRS)
Lind, Rick; Balas, Gary J.
1995-01-01
This paper considers an algorithm for the synthesis of optimal controllers for full information feedback. The synthesis procedure reduces to a single linear matrix inequality which may be solved via established convex optimization algorithms. The computational cost of the optimization is investigated, and it is demonstrated that the problem dimension and corresponding matrices can become large for practical engineering problems, making the process impractical on standard workstations for large-order systems. A flexible structure is presented as a design example. Control synthesis requires several days on a workstation but may be completed in a reasonable amount of time using a Cray supercomputer.
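As a hedged illustration of solving a single linear matrix inequality with an established convex optimization tool, the sketch below poses a Lyapunov-type feasibility problem in CVXPY; the second-order system matrix is an assumption for illustration, not the flexible-structure model or the full-information synthesis LMI itself.

```python
# Sketch: LMI feasibility (A^T P + P A < 0, P > 0) posed as a semidefinite program.
import numpy as np
import cvxpy as cp

A = np.array([[0.0, 1.0],
              [-2.0, -0.5]])                     # toy stable system matrix

n = A.shape[0]
P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(n),             # P positive definite
               A.T @ P + P @ A << -eps * np.eye(n)]

prob = cp.Problem(cp.Minimize(0), constraints)   # pure feasibility problem
prob.solve()
print(prob.status, P.value)
```

The dimension of such LMIs grows with the state and disturbance dimensions, which is exactly why the paper's flexible-structure example strains workstation-class hardware.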
Programmed Evolution for Optimization of Orthogonal Metabolic Output in Bacteria
Eckdahl, Todd T.; Campbell, A. Malcolm; Heyer, Laurie J.; Poet, Jeffrey L.; Blauch, David N.; Snyder, Nicole L.; Atchley, Dustin T.; Baker, Erich J.; Brown, Micah; Brunner, Elizabeth C.; Callen, Sean A.; Campbell, Jesse S.; Carr, Caleb J.; Carr, David R.; Chadinha, Spencer A.; Chester, Grace I.; Chester, Josh; Clarkson, Ben R.; Cochran, Kelly E.; Doherty, Shannon E.; Doyle, Catherine; Dwyer, Sarah; Edlin, Linnea M.; Evans, Rebecca A.; Fluharty, Taylor; Frederick, Janna; Galeota-Sprung, Jonah; Gammon, Betsy L.; Grieshaber, Brandon; Gronniger, Jessica; Gutteridge, Katelyn; Henningsen, Joel; Isom, Bradley; Itell, Hannah L.; Keffeler, Erica C.; Lantz, Andrew J.; Lim, Jonathan N.; McGuire, Erin P.; Moore, Alexander K.; Morton, Jerrad; Nakano, Meredith; Pearson, Sara A.; Perkins, Virginia; Parrish, Phoebe; Pierson, Claire E.; Polpityaarachchige, Sachith; Quaney, Michael J.; Slattery, Abagael; Smith, Kathryn E.; Spell, Jackson; Spencer, Morgan; Taye, Telavive; Trueblood, Kamay; Vrana, Caroline J.; Whitesides, E. Tucker
2015-01-01
Current use of microbes for metabolic engineering suffers from loss of metabolic output due to natural selection. Rather than combat the evolution of bacterial populations, we chose to embrace what makes biological engineering unique among engineering fields – evolving materials. We harnessed bacteria to compute solutions to the biological problem of metabolic pathway optimization. Our approach is called Programmed Evolution to capture two concepts. First, a population of cells is programmed with DNA code to enable it to compute solutions to a chosen optimization problem. As analog computers, bacteria process known and unknown inputs and direct the output of their biochemical hardware. Second, the system employs the evolution of bacteria toward an optimal metabolic solution by imposing fitness defined by metabolic output. The current study is a proof-of-concept for Programmed Evolution applied to the optimization of a metabolic pathway for the conversion of caffeine to theophylline in E. coli. Introduced genotype variations included strength of the promoter and ribosome binding site, plasmid copy number, and chaperone proteins. We constructed 24 strains using all combinations of the genetic variables. We used a theophylline riboswitch and a tetracycline resistance gene to link theophylline production to fitness. After subjecting the mixed population to selection, we measured a change in the distribution of genotypes in the population and an increased conversion of caffeine to theophylline among the most fit strains, demonstrating Programmed Evolution. Programmed Evolution inverts the standard paradigm in metabolic engineering by harnessing evolution instead of fighting it. Our modular system enables researchers to program bacteria and use evolution to determine the combination of genetic control elements that optimizes catabolic or anabolic output and to maintain it in a population of cells. Programmed Evolution could be used for applications in energy, pharmaceuticals, chemical commodities, biomining, and bioremediation. PMID:25714374
Programmed evolution for optimization of orthogonal metabolic output in bacteria.
Eckdahl, Todd T; Campbell, A Malcolm; Heyer, Laurie J; Poet, Jeffrey L; Blauch, David N; Snyder, Nicole L; Atchley, Dustin T; Baker, Erich J; Brown, Micah; Brunner, Elizabeth C; Callen, Sean A; Campbell, Jesse S; Carr, Caleb J; Carr, David R; Chadinha, Spencer A; Chester, Grace I; Chester, Josh; Clarkson, Ben R; Cochran, Kelly E; Doherty, Shannon E; Doyle, Catherine; Dwyer, Sarah; Edlin, Linnea M; Evans, Rebecca A; Fluharty, Taylor; Frederick, Janna; Galeota-Sprung, Jonah; Gammon, Betsy L; Grieshaber, Brandon; Gronniger, Jessica; Gutteridge, Katelyn; Henningsen, Joel; Isom, Bradley; Itell, Hannah L; Keffeler, Erica C; Lantz, Andrew J; Lim, Jonathan N; McGuire, Erin P; Moore, Alexander K; Morton, Jerrad; Nakano, Meredith; Pearson, Sara A; Perkins, Virginia; Parrish, Phoebe; Pierson, Claire E; Polpityaarachchige, Sachith; Quaney, Michael J; Slattery, Abagael; Smith, Kathryn E; Spell, Jackson; Spencer, Morgan; Taye, Telavive; Trueblood, Kamay; Vrana, Caroline J; Whitesides, E Tucker
2015-01-01
Current use of microbes for metabolic engineering suffers from loss of metabolic output due to natural selection. Rather than combat the evolution of bacterial populations, we chose to embrace what makes biological engineering unique among engineering fields - evolving materials. We harnessed bacteria to compute solutions to the biological problem of metabolic pathway optimization. Our approach is called Programmed Evolution to capture two concepts. First, a population of cells is programmed with DNA code to enable it to compute solutions to a chosen optimization problem. As analog computers, bacteria process known and unknown inputs and direct the output of their biochemical hardware. Second, the system employs the evolution of bacteria toward an optimal metabolic solution by imposing fitness defined by metabolic output. The current study is a proof-of-concept for Programmed Evolution applied to the optimization of a metabolic pathway for the conversion of caffeine to theophylline in E. coli. Introduced genotype variations included strength of the promoter and ribosome binding site, plasmid copy number, and chaperone proteins. We constructed 24 strains using all combinations of the genetic variables. We used a theophylline riboswitch and a tetracycline resistance gene to link theophylline production to fitness. After subjecting the mixed population to selection, we measured a change in the distribution of genotypes in the population and an increased conversion of caffeine to theophylline among the most fit strains, demonstrating Programmed Evolution. Programmed Evolution inverts the standard paradigm in metabolic engineering by harnessing evolution instead of fighting it. Our modular system enables researchers to program bacteria and use evolution to determine the combination of genetic control elements that optimizes catabolic or anabolic output and to maintain it in a population of cells. Programmed Evolution could be used for applications in energy, pharmaceuticals, chemical commodities, biomining, and bioremediation.
An approach to optimal semi-active control of vibration energy harvesting based on MEMS
NASA Astrophysics Data System (ADS)
Rojas, Rafael A.; Carcaterra, Antonio
2018-07-01
In this paper the energy harvesting problem involving typical MEMS technology is reduced to an optimal control problem, where the objective function is the absorption of the maximum amount of energy from a vibrating environment in a given time interval. The interest here is to identify a physical upper bound for this energy storage. The mathematical tool is an optimal control technique known as Krotov's method, which has not yet been applied to engineering problems outside of quantum dynamics. This approach leads to new upper bounds on energy harvesting performance. Novel MEMS-based device control configurations for vibration energy harvesting are proposed, with particular emphasis on piezoelectric, electromagnetic and capacitive circuits.
Efficient computation of optimal actions.
Todorov, Emanuel
2009-07-14
Optimal choice of actions is a fundamental problem relevant to fields as diverse as neuroscience, psychology, economics, computer science, and control engineering. Despite this broad relevance the abstract setting is similar: we have an agent choosing actions over time, an uncertain dynamical system whose state is affected by those actions, and a performance criterion that the agent seeks to optimize. Solving problems of this kind remains hard, in part, because of overly generic formulations. Here, we propose a more structured formulation that greatly simplifies the construction of optimal control laws in both discrete and continuous domains. An exhaustive search over actions is avoided and the problem becomes linear. This yields algorithms that outperform Dynamic Programming and Reinforcement Learning, and thereby solve traditional problems more efficiently. Our framework also enables computations that were not possible before: composing optimal control laws by mixing primitives, applying deterministic methods to stochastic systems, quantifying the benefits of error tolerance, and inferring goals from behavioral data via convex optimization. Development of a general class of easily solvable problems tends to accelerate progress--as linear systems theory has done, for example. Our framework may have similar impact in fields where optimal choice of actions is relevant.
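A hedged sketch of the linear structure the abstract alludes to, in the linearly-solvable MDP setting: with passive dynamics P and state costs q, the exponentiated cost-to-go z = exp(-v) satisfies a linear fixed-point equation, so no explicit search over actions is needed. The small random-walk chain with an absorbing terminal state is an illustrative assumption, not an example from the paper.

```python
# Sketch: desirability iteration z = exp(-q) * (P z) for a first-exit problem on a chain.
import numpy as np

n = 5                                    # states 0..4, state 4 is terminal
q = np.array([1.0, 1.0, 1.0, 1.0, 0.0])  # state costs (terminal cost 0)
P = np.zeros((n, n))                     # passive (uncontrolled) random-walk dynamics
for s in range(n - 1):
    P[s, max(s - 1, 0)] += 0.5
    P[s, s + 1] += 0.5
P[n - 1, n - 1] = 1.0

z = np.ones(n)
for _ in range(1000):                    # fixed-point iteration on the linear equation
    z_new = np.exp(-q) * (P @ z)
    z_new[n - 1] = np.exp(-q[n - 1])     # boundary condition at the terminal state
    z = z_new

v = -np.log(z)                           # optimal cost-to-go
policy = (P * z) / (P @ z)[:, None]      # optimal transitions proportional to p(s'|s) z(s')
print(np.round(v, 3))
```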
Optimal landing of a helicopter in autorotation
NASA Technical Reports Server (NTRS)
Lee, A. Y. N.
1985-01-01
Gliding descent in autorotation is a maneuver used by helicopter pilots in case of engine failure. The landing of a helicopter in autorotation is formulated as a nonlinear optimal control problem, using the OH-58A helicopter as the example. The helicopter's vertical and horizontal velocities, vertical and horizontal displacements, and rotor angular speed were modeled, and an empirical approximation for the induced velocity in the vortex-ring state was provided. The cost function of the optimal control problem is a weighted sum of the squared horizontal and vertical components of the helicopter velocity at touchdown. Optimal trajectories are calculated for entry conditions well within the horizontal-vertical restriction curve, with the helicopter initially in hover or forward flight. The resultant two-point boundary value problem with path equality constraints was successfully solved using the Sequential Gradient Restoration Technique.
NASA Technical Reports Server (NTRS)
2001-01-01
Analytical Mechanics Associates, Inc. (AMA), of Hampton, Virginia, created the EZopt software application through Small Business Innovation Research (SBIR) funding from NASA's Langley Research Center. The new software is a user-friendly tool kit that provides quick and logical solutions to complex optimal control problems. In its most basic form, EZopt converts process data into math equations and then proceeds to utilize those equations to solve problems within control systems. EZopt successfully proved its advantage when applied to short-term mission planning and onboard flight computer implementation. The technology has also solved multiple real-life engineering problems faced in numerous commercial operations. For instance, mechanical engineers use EZopt to solve control problems with robots, while chemical plants implement the application to overcome situations with batch reactors and temperature control. In the emerging field of commercial aerospace, EZopt is able to optimize trajectories for launch vehicles and perform potential space station-keeping tasks. Furthermore, the software also helps control electromagnetic devices in the automotive industry.
Klamt, Steffen; Müller, Stefan; Regensburger, Georg; Zanghellini, Jürgen
2018-05-01
The optimization of metabolic rates (as linear objective functions) represents the methodical core of flux-balance analysis techniques which have become a standard tool for the study of genome-scale metabolic models. Besides (growth and synthesis) rates, metabolic yields are key parameters for the characterization of biochemical transformation processes, especially in the context of biotechnological applications. However, yields are ratios of rates, and hence the optimization of yields (as nonlinear objective functions) under arbitrary linear constraints is not possible with current flux-balance analysis techniques. Despite the fundamental importance of yields in constraint-based modeling, a comprehensive mathematical framework for yield optimization is still missing. We present a mathematical theory that allows one to systematically compute and analyze yield-optimal solutions of metabolic models under arbitrary linear constraints. In particular, we formulate yield optimization as a linear-fractional program. For practical computations, we transform the linear-fractional yield optimization problem to a (higher-dimensional) linear problem. Its solutions determine the solutions of the original problem and can be used to predict yield-optimal flux distributions in genome-scale metabolic models. For the theoretical analysis, we consider the linear-fractional problem directly. Most importantly, we show that the yield-optimal solution set (like the rate-optimal solution set) is determined by (yield-optimal) elementary flux vectors of the underlying metabolic model. However, yield- and rate-optimal solutions may differ from each other, and hence optimal (biomass or product) yields are not necessarily obtained at solutions with optimal (growth or synthesis) rates. Moreover, we discuss phase planes/production envelopes and yield spaces, in particular, we prove that yield spaces are convex and provide algorithms for their computation. We illustrate our findings by a small example and demonstrate their relevance for metabolic engineering with realistic models of E. coli. We develop a comprehensive mathematical framework for yield optimization in metabolic models. Our theory is particularly useful for the study and rational modification of cell factories designed under given yield and/or rate requirements. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
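As a concrete illustration of the transformation described above, the following sketch solves a yield maximization problem (a numerator rate over a denominator rate, subject to steady-state and bound constraints) as a linear program via the classical Charnes-Cooper substitution. The toy stoichiometry, objective, and bounds are placeholders, and this is not the authors' implementation.

```python
import numpy as np
from scipy.optimize import linprog

def maximize_yield(S, c, d, lb, ub):
    """maximize (c @ v) / (d @ v)  s.t.  S v = 0, lb <= v <= ub, d @ v > 0.

    Charnes-Cooper: w = t*v with t >= 0 and d @ w = 1; unknown x = [w, t]."""
    m, n = S.shape
    # Equalities: S w = 0  and  d @ w = 1
    A_eq = np.block([[S, np.zeros((m, 1))],
                     [d.reshape(1, -1), np.zeros((1, 1))]])
    b_eq = np.concatenate([np.zeros(m), [1.0]])
    # Inequalities: lb*t - w <= 0  and  w - ub*t <= 0
    A_ub = np.block([[-np.eye(n), lb.reshape(-1, 1)],
                     [np.eye(n), -ub.reshape(-1, 1)]])
    b_ub = np.zeros(2 * n)
    # linprog minimizes, so negate c; t carries no direct cost.
    res = linprog(c=np.concatenate([-c, [0.0]]),
                  A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(None, None)] * n + [(0, None)])
    w, t = res.x[:n], res.x[n]
    return w / t   # recover the yield-optimal flux vector

# Toy network: substrate uptake v1, conversion v2, product secretion v3 (assumed).
S = np.array([[1.0, -1.0, 0.0],
              [0.0, 1.0, -1.0]])
c = np.array([0.0, 0.0, 1.0])   # product secretion rate (numerator)
d = np.array([1.0, 0.0, 0.0])   # substrate uptake rate (denominator)
v_opt = maximize_yield(S, c, d, lb=np.zeros(3), ub=np.full(3, 10.0))
```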
Reverse engineering and identification in systems biology: strategies, perspectives and challenges
Villaverde, Alejandro F.; Banga, Julio R.
2014-01-01
The interplay of mathematical modelling with experiments is one of the central elements in systems biology. The aim of reverse engineering is to infer, analyse and understand, through this interplay, the functional and regulatory mechanisms of biological systems. Reverse engineering is not exclusive of systems biology and has been studied in different areas, such as inverse problem theory, machine learning, nonlinear physics, (bio)chemical kinetics, control theory and optimization, among others. However, it seems that many of these areas have been relatively closed to outsiders. In this contribution, we aim to compare and highlight the different perspectives and contributions from these fields, with emphasis on two key questions: (i) why are reverse engineering problems so hard to solve, and (ii) what methods are available for the particular problems arising from systems biology? PMID:24307566
Optimal-adaptive filters for modelling spectral shape, site amplification, and source scaling
Safak, Erdal
1989-01-01
This paper introduces some applications of optimal filtering techniques to earthquake engineering by using the so-called ARMAX models. Three applications are presented: (a) spectral modelling of ground accelerations, (b) site amplification (i.e., the relationship between two records obtained at different sites during an earthquake), and (c) source scaling (i.e., the relationship between two records obtained at a site during two different earthquakes). A numerical example for each application is presented by using recorded ground motions. The results show that the optimal filtering techniques provide elegant solutions to the above problems, and can be a useful tool in earthquake engineering.
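For readers who want to reproduce the flavour of such models, the sketch below fits an ARX model (a special case of the ARMAX family discussed above, omitting the moving-average noise terms) by ordinary least squares; the model orders and the input/output records are assumptions.

```python
import numpy as np

# Fit y[k] = -a1*y[k-1] - ... - a_na*y[k-na] + b1*u[k-1] + ... + b_nb*u[k-nb] + e[k]
# by least squares (an ARX special case of ARMAX).
def fit_arx(y, u, na=2, nb=2):
    n0 = max(na, nb)
    rows, targets = [], []
    for k in range(n0, len(y)):
        rows.append(np.concatenate([-y[k - na:k][::-1], u[k - nb:k][::-1]]))
        targets.append(y[k])
    theta, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    return theta[:na], theta[na:]   # AR and exogenous-input coefficients
```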
Statistical mechanics of budget-constrained auctions
NASA Astrophysics Data System (ADS)
Altarelli, F.; Braunstein, A.; Realpe-Gomez, J.; Zecchina, R.
2009-07-01
Finding the optimal assignment in budget-constrained auctions is a combinatorial optimization problem with many important applications, a notable example being in the sale of advertisement space by search engines (in this context the problem is often referred to as the off-line AdWords problem). On the basis of the cavity method of statistical mechanics, we introduce a message-passing algorithm that is capable of solving efficiently random instances of the problem extracted from a natural distribution, and we derive from its properties the phase diagram of the problem. As the control parameter (average value of the budgets) is varied, we find two phase transitions delimiting a region in which long-range correlations arise.
Minimum fuel trajectory for the aerospace-plane
NASA Technical Reports Server (NTRS)
Breakwell, John V.; Golan, Oded; Sauvageot, Anne
1990-01-01
An overall trajectory for a single-stage-to-orbit vehicle with an initial weight of 234 tons is calculated, and four different propulsion models including turbojet, ramjet, scramjet, and rocket are considered. First, flight in the thicker lower atmosphere is discussed, with emphasis on the trajectory optimization problem, the aerodynamic model, the propulsion model, and the initial conditions. The performance of turbojet and ramjet-scramjet engines is analyzed, and then the flight to orbit is assessed from the optimization point of view. It is shown that roll modulation saves little during the trajectory, and the combined application of airbreathing propulsion and aerodynamic lift is suggested.
ON THE PROBLEM OF CORRECTING TWISTED TURBINE BLADES
Descriptors: TURBINE BLADES (DESIGN), GAS TURBINES, STEAM TURBINES, BLADE AIRFOILS, ASPECT RATIO, FLUID DYNAMICS, SECONDARY FLOW, ANGLE OF ATTACK, INLET GUIDE VANES, CORRECTIONS, PERFORMANCE (ENGINEERING), OPTIMIZATION, USSR
Automation of reverse engineering process in aircraft modeling and related optimization problems
NASA Technical Reports Server (NTRS)
Li, W.; Swetits, J.
1994-01-01
During the year of 1994, the engineering problems in aircraft modeling were studied. The initial concern was to obtain a surface model with desirable geometric characteristics. Much of the effort during the first half of the year was to find an efficient way of solving a computationally difficult optimization model. Since the smoothing technique in the proposal 'Surface Modeling and Optimization Studies of Aerodynamic Configurations' requires solutions of a sequence of large-scale quadratic programming problems, it is important to design algorithms that can solve each quadratic program in a few iterations. This research led to three papers by Dr. W. Li, which were submitted to SIAM Journal on Optimization and Mathematical Programming. Two of these papers have been accepted for publication. Even though significant progress has been made during this phase of research and computation time was reduced from 30 min. to 2 min. for a sample problem, it was not good enough for on-line processing of digitized data points. After discussion with Dr. Robert E. Smith Jr., it was decided not to enforce shape constraints in order to simplify the model. As a consequence, P. Dierckx's nonparametric spline fitting approach was adopted, where one has only one control parameter for the fitting process - the error tolerance. At the same time the surface modeling software developed by Imageware was tested. Research indicated that a substantially improved fit of digitized data points can be achieved if a proper parameterization of the spline surface is chosen. A winning strategy is to incorporate Dierckx's surface fitting with a natural parameterization for aircraft parts. The report consists of 4 chapters. Chapter 1 provides an overview of reverse engineering related to aircraft modeling and some preliminary findings of the effort in the second half of the year. Chapters 2-4 are the research results by Dr. W. Li on penalty functions and conjugate gradient methods for quadratic programming problems.
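Dierckx's smoothing-spline routines are available today through SciPy's FITPACK wrappers, where the smoothing factor plays the role of the single error-tolerance control parameter mentioned above. A minimal sketch with placeholder scattered data:

```python
import numpy as np
from scipy.interpolate import SmoothBivariateSpline

# Dierckx-style smoothing-spline surface fit; the smoothing factor s is the
# single fit-tolerance control parameter. The (x, y, z) points are placeholders
# for digitized surface data, not actual aircraft-part measurements.
rng = np.random.default_rng(0)
x, y = rng.uniform(0, 1, 200), rng.uniform(0, 1, 200)
z = np.sin(2 * np.pi * x) * np.cos(np.pi * y) + 0.01 * rng.normal(size=200)

spline = SmoothBivariateSpline(x, y, z, s=0.05)   # s controls the error tolerance
z_eval = spline.ev(0.3, 0.7)                      # evaluate the fitted surface
```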
Van Eijk, F; Saris, D B F; Riesle, J; Willems, W J; Van Blitterswijk, C A; Verbout, A J; Dhert, W J A
2004-01-01
Anterior cruciate ligament (ACL) reconstruction surgery still has important problems to overcome, such as "donor site morbidity" and the limited choice of grafts in revision surgery. Tissue engineering of ligaments may provide a solution for these problems. Little is known about the optimal cell source for tissue engineering of ligaments. The aim of this study is to determine the optimal cell source for tissue engineering of the anterior cruciate ligament. Bone marrow stromal cells (BMSCs), ACL, and skin fibroblasts were seeded onto a resorbable suture material [poly(L-lactide/glycolide) multifilaments] at five different seeding densities, and cultured for up to 12 days. All cell types tested attached to the suture material, proliferated, and synthesized extracellular matrix rich in collagen type I. On day 12 the scaffolds seeded with BMSCs showed the highest DNA content (p < 0.01) and the highest collagen production (p < 0.05 for the two highest seeding densities). Scaffolds seeded with ACL fibroblasts showed the lowest DNA content and collagen production. Accordingly, BMSCs appear to be the most suitable cells for further study and development of tissue-engineered ligament.
NASA Astrophysics Data System (ADS)
Nguyen, Q. H.; Choi, S. B.; Lee, Y. S.; Han, M. S.
2013-11-01
This paper focuses on the optimal design of a compact and high damping force engine mount featuring magnetorheological fluid (MRF). In the mount, an MR valve structure with both annular and radial flows is employed to generate a high damping force. First, the configuration and working principle of the proposed MR mount are introduced. The MRF flows in the mount are then analyzed and the governing equations of the MR mount are derived based on the Bingham plastic behavior of the MRF. An optimal design of the MR mount is then performed to find the optimal structure of the MR valve to generate a maximum damping force under certain design constraints. In addition, the gap size of the MRF ducts is empirically chosen considering the ‘lockup’ problem of the mount at high frequency. Performance of the optimized MR mount is then evaluated based on finite element analysis, and discussions on the performance results of the optimized MR mount are given. The effectiveness of the proposed MR engine mount is demonstrated via computer simulation by presenting damping force and power consumption.
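For reference, the Bingham plastic idealization invoked above is commonly written as follows (a sketch; here the yield stress depends on the applied magnetic field B and eta is the post-yield viscosity, and the exact form used by the authors may differ):

```latex
\tau = \tau_y(B)\,\operatorname{sgn}(\dot{\gamma}) + \eta\,\dot{\gamma}
\quad \text{for } |\tau| > \tau_y(B), \qquad \dot{\gamma} = 0 \ \text{otherwise}
```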
NASA Technical Reports Server (NTRS)
Seldner, K.
1976-01-01
The development of control systems for jet engines requires a real-time computer simulation. The simulation provides an effective tool for evaluating control concepts and problem areas prior to actual engine testing. The development and use of a real-time simulation of the Pratt and Whitney F100-PW100 turbofan engine is described. The simulation was used in a multi-variable optimal controls research program using linear quadratic regulator theory. The simulation is used to generate linear engine models at selected operating points and evaluate the control algorithm. To reduce the complexity of the design, it is desirable to reduce the order of the linear model. A technique to reduce the order of the model is discussed. Selected results from the high- and low-order models are compared. The LQR control algorithms can be programmed on a digital computer. This computer will control the engine simulation over the desired flight envelope.
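The LQR design step mentioned above reduces, for a reduced-order linear model, to solving an algebraic Riccati equation. A minimal sketch using SciPy is shown below; the second-order (A, B) model and the weights Q, R are placeholders, not the F100 linearization.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Placeholder second-order linear model and LQR weights (assumed values).
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
Q = np.diag([10.0, 1.0])
R = np.array([[1.0]])

P = solve_continuous_are(A, B, Q, R)   # solve A'P + PA - P B R^{-1} B'P + Q = 0
K = np.linalg.solve(R, B.T @ P)        # optimal state-feedback gain, u = -K x
```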
NASA Astrophysics Data System (ADS)
Wang, Zheng; Wang, Zengquan; Wang, A.-na; Zhuang, Li; Wang, Jinwei
2016-10-01
As turbocharged diesel engines for vehicle applications are increasingly operated in plateau (high-altitude) areas, the environmental adaptability of these engines has drawn more attention. Existing studies of this adaptability problem focus almost exclusively on optimizing the performance match between turbocharger and engine, while the reliability of the turbocharger itself is largely ignored. This work studies the reliability of the turbocharger compressor impeller when vehicle diesel engines operate at high altitude. First, the rule governing how turbocharger rotational speed changes with altitude is presented, and the potential failure modes of the compressor impeller are analyzed. Then, failure behavior models of the compressor impeller are built, and reliability models for impellers operating at high altitude are developed. Finally, the rule governing how impeller reliability changes with altitude is studied, and measures for improving the reliability of compressor impellers of turbochargers operating at high altitude are given. The results indicate that, for a given engine operating speed, the turbocharger rotational speed increases with altitude, and the risk of impeller failure by hub fatigue and blade resonance increases. The reliability of the compressor impeller decreases with increasing altitude, and it also decreases as the number of engine mission profile cycles increases. The proposed method can be used not only to evaluate impeller reliability when diesel engines operate at high altitude, but also to guide the structural optimization of the compressor impeller.
The role of under-determined approximations in engineering and science application
NASA Technical Reports Server (NTRS)
Carpenter, William C.
1992-01-01
There is currently a great deal of interest in using response surfaces in the optimization of aircraft performance. The objective function and/or constraint equations involved in these optimization problems may come from numerous disciplines such as structures, aerodynamics, environmental engineering, etc. In each of these disciplines, the mathematical complexity of the governing equations usually dictates that numerical results be obtained from large computer programs such as a finite element method program. Thus, when performing optimization studies, response surfaces are a convenient way of transferring information from the various disciplines to the optimization algorithm as opposed to bringing all the sundry computer programs together in a massive computer code. Response surfaces offer another advantage in the optimization of aircraft structures. A characteristic of these types of optimization problems is that evaluation of the objective function and response equations (referred to as a functional evaluation) can be very expensive in a computational sense. Because of the computational expense in obtaining functional evaluations, the present study was undertaken to investigate under-determined approximations. An under-determined approximation is one in which there are fewer training pairs (pieces of information about a function) than there are undetermined parameters (coefficients or weights) associated with the approximation. Both polynomial approximations and neural net approximations were examined. Three main example problems were investigated: (1) a function of one design variable was considered; (2) a function of two design variables was considered; and (3) a 35 bar truss with 4 design variables was considered.
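The following sketch illustrates what an under-determined polynomial approximation looks like in practice: ten polynomial coefficients are fitted to only four training pairs, and the minimum-norm least-squares solution is taken. The training data and polynomial degree are illustrative assumptions.

```python
import numpy as np

# Under-determined surrogate: more undetermined coefficients (10) than training
# pairs (4). np.linalg.lstsq returns the minimum-norm solution in this case.
x_train = np.array([0.0, 0.3, 0.7, 1.0])
y_train = np.sin(np.pi * x_train)

degree = 9                                   # 10 coefficients, only 4 data points
V = np.vander(x_train, degree + 1)           # Vandermonde design matrix
coeffs, *_ = np.linalg.lstsq(V, y_train, rcond=None)

y_hat = np.vander(np.array([0.5]), degree + 1) @ coeffs   # evaluate the surrogate
```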
NASA Astrophysics Data System (ADS)
Bansal, Shonak; Singh, Arun Kumar; Gupta, Neena
2017-02-01
Real-life multi-objective engineering design problems are tough and time-consuming optimization problems due to their high degree of nonlinearity, complexity and inhomogeneity. Nature-inspired multi-objective optimization algorithms are now becoming popular for solving such problems. This paper proposes an original multi-objective Bat algorithm (MOBA) and its extended form, a novel parallel hybrid multi-objective Bat algorithm (PHMOBA), to generate shortest-length Golomb rulers, called optimal Golomb ruler (OGR) sequences, in reasonable computation time. OGRs find application in optical wavelength division multiplexing (WDM) systems as a channel-allocation algorithm to reduce four-wave mixing (FWM) crosstalk. The performance of both proposed algorithms in generating OGRs for optical WDM channel allocation is compared with that of other existing classical computing and nature-inspired algorithms, including extended quadratic congruence (EQC), search algorithm (SA), genetic algorithms (GAs), biogeography based optimization (BBO) and big bang-big crunch (BB-BC) optimization algorithms. Simulations show that the proposed parallel hybrid multi-objective Bat algorithm works more efficiently than the original multi-objective Bat algorithm and other existing algorithms in generating OGRs for optical WDM systems. PHMOBA has a higher convergence and success rate than the original MOBA. For rulers of up to 20 marks, the efficiency improvement of the proposed PHMOBA in terms of ruler length and total optical channel bandwidth (TBW) is 100 %, whereas for the original MOBA it is 85 %. Finally, implications for further research are discussed.
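For context, the feasibility test at the heart of OGR search is simple to state: a ruler is Golomb if all pairwise differences between marks are distinct, and its length is the largest mark. A small sketch (the example ruler is the known optimal 4-mark ruler, used here only for illustration):

```python
from itertools import combinations

# A ruler is a sorted tuple of integer marks starting at 0; it is Golomb if all
# pairwise mark differences are distinct. Its length is the largest mark.
def is_golomb(marks):
    diffs = [b - a for a, b in combinations(sorted(marks), 2)]
    return len(diffs) == len(set(diffs))

print(is_golomb((0, 1, 4, 6)), max((0, 1, 4, 6)))   # True, length 6
```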
NASA Technical Reports Server (NTRS)
Lucas, S. H.; Scotti, S. J.
1989-01-01
The nonlinear mathematical programming method (formal optimization) has had many applications in engineering design. A figure illustrates the use of optimization techniques in the design process. The design process begins with the design problem, such as the classic example of the two-bar truss designed for minimum weight as seen in the leftmost part of the figure. If formal optimization is to be applied, the design problem must be recast in the form of an optimization problem consisting of an objective function, design variables, and constraint function relations. The middle part of the figure shows the two-bar truss design posed as an optimization problem. The total truss weight is the objective function, the tube diameter and truss height are design variables, with stress and Euler buckling considered as constraint function relations. Lastly, the designer develops or obtains analysis software containing a mathematical model of the object being optimized, and then interfaces the analysis routine with existing optimization software such as CONMIN, ADS, or NPSOL. This final stage of software development can be both tedious and error-prone. The Sizing and Optimization Language (SOL), a special-purpose computer language whose goal is to make the software implementation phase of optimum design easier and less error-prone, is presented.
NASA Astrophysics Data System (ADS)
Vasant, P.; Ganesan, T.; Elamvazuthi, I.
2012-11-01
In the past, fairly reasonable results were obtained for non-linear engineering problems by using optimization techniques such as neural networks, genetic algorithms, and fuzzy logic independently. Increasingly, hybrid techniques are being used to solve non-linear problems to obtain better results. This paper discusses the use of a neuro-genetic hybrid technique to optimize geological structure mapping, known as seismic survey. It involves the minimization of an objective function subject to geophysical and operational constraints. In this work, the optimization was initially performed using genetic programming, followed by a hybrid neuro-genetic programming approach. Comparative studies and analysis were then carried out on the optimized results. The results indicate that the hybrid neuro-genetic technique produced better results than the stand-alone genetic programming method.
NASA Astrophysics Data System (ADS)
Vasant, Pandian; Barsoum, Nader
2008-10-01
Many real-world optimization problems in engineering, science, information technology and management can be considered non-linear programming problems in which all or some of the parameters and variables involved are uncertain in nature. These can only be quantified using intelligent computational techniques such as evolutionary computation and fuzzy logic. The main objective of this paper is to solve a non-linear fuzzy optimization problem in which the technological coefficients in the constraints are fuzzy numbers, represented by logistic membership functions, using a hybrid evolutionary optimization approach. To explore the applicability of the present study, a numerical example is considered to determine the production plan for the decision variables and the profit of the company.
NASA Technical Reports Server (NTRS)
Arnold, W.; Bowen, S.; Cohen, S.; Fine, K.; Kaplan, D.; Kolm, M.; Kolm, H.; Newman, J.; Oneill, G. K.; Snow, W.
1979-01-01
The last of a series of three papers by the Mass-Driver Group of the 1977 Ames Summer Study is presented. It develops the engineering principles required to implement the basic mass-driver. Optimum component mass trade-offs are derived from a set of four input parameters, and the program is used to design a lunar launcher. The mass optimization procedure is then incorporated into a more comprehensive mission optimization program called OPT-4, which evaluates an optimized mass-driver reaction engine and its performance in a range of specified missions. Finally, this paper discusses, to the extent that time permitted, certain peripheral problems: heating effects in buckets due to magnetic field ripple; an approximate derivation of guide force profiles; the mechanics of inserting and releasing payloads; the reaction mass orbits; and a proposed research and development plan for implementing mass drivers.
About Distributed Simulation-based Optimization of Forming Processes using a Grid Architecture
NASA Astrophysics Data System (ADS)
Grauer, Manfred; Barth, Thomas
2004-06-01
Permanently increasing complexity of products and their manufacturing processes, combined with a shorter "time-to-market", leads to more and more use of simulation and optimization software systems for product design. Finding a "good" design of a product implies the solution of computationally expensive optimization problems based on the results of simulation. Due to the computational load caused by the solution of these problems, the requirements on the Information & Telecommunication (IT) infrastructure of an enterprise or research facility are shifting from stand-alone resources towards the integration of software and hardware resources in a distributed environment for high-performance computing. Resources can either comprise software systems, hardware systems, or communication networks. An appropriate IT-infrastructure must provide the means to integrate all these resources and enable their use even across a network to cope with requirements from geographically distributed scenarios, e.g. in computational engineering and/or collaborative engineering. Integrating experts' knowledge into the optimization process is indispensable in order to reduce the complexity caused by the number of design variables and the high dimensionality of the design space. Hence, utilization of knowledge-based systems must be supported by providing data management facilities as a basis for knowledge extraction from product data. In this paper, the focus is put on a distributed problem solving environment (PSE) capable of providing access to a variety of necessary resources and services. A distributed approach integrating simulation and optimization on a network of workstations and cluster systems is presented. For geometry generation the CAD-system CATIA is used, which is coupled with the FEM-simulation system INDEED for the simulation of sheet-metal forming processes and with the problem solving environment OpTiX for distributed optimization.
Engineering tradeoff problems viewed as multiple objective optimizations and the VODCA methodology
NASA Astrophysics Data System (ADS)
Morgan, T. W.; Thurgood, R. L.
1984-05-01
This paper summarizes a rational model for making engineering tradeoff decisions. The model is a hybrid from the fields of social welfare economics, communications, and operations research. A solution methodology (Vector Optimization Decision Convergence Algorithm - VODCA) firmly grounded in the economic model is developed both conceptually and mathematically. The primary objective for developing the VODCA methodology was to improve the process for extracting relative value information about the objectives from the appropriate decision makers. This objective was accomplished by employing data filtering techniques to increase the consistency of the relative value information and decrease the amount of information required. VODCA is applied to a simplified hypothetical tradeoff decision problem. Possible use of multiple objective analysis concepts and the VODCA methodology in product-line development and market research are discussed.
NASA Astrophysics Data System (ADS)
Gruber, Ralph; Periaux, Jaques; Shaw, Richard Paul
Recent advances in computational mechanics are discussed in reviews and reports. Topics addressed include spectral superpositions on finite elements for shear banding problems, strain-based finite plasticity, numerical simulation of hypersonic viscous continuum flow, constitutive laws in solid mechanics, dynamics problems, fracture mechanics and damage tolerance, composite plates and shells, contact and friction, metal forming and solidification, coupling problems, and adaptive FEMs. Consideration is given to chemical flows, convection problems, free boundaries and artificial boundary conditions, domain-decomposition and multigrid methods, combustion and thermal analysis, wave propagation, mixed and hybrid FEMs, integral-equation methods, optimization, software engineering, and vector and parallel computing.
Li, Yongbao; Tian, Zhen; Shi, Feng; Song, Ting; Wu, Zhaoxia; Liu, Yaqiang; Jiang, Steve; Jia, Xun
2015-04-07
Intensity-modulated radiation treatment (IMRT) plan optimization needs beamlet dose distributions. Pencil-beam or superposition/convolution type algorithms are typically used because of their high computational speed. However, inaccurate beamlet dose distributions may mislead the optimization process and hinder the resulting plan quality. To solve this problem, the Monte Carlo (MC) simulation method has been used to compute all beamlet doses prior to the optimization step. The conventional approach samples the same number of particles from each beamlet. Yet this is not the optimal use of MC in this problem. In fact, there are beamlets that have very small intensities after solving the plan optimization problem. For those beamlets, it may be possible to use fewer particles in dose calculations to increase efficiency. Based on this idea, we have developed a new MC-based IMRT plan optimization framework that iteratively performs MC dose calculation and plan optimization. At each dose calculation step, the particle numbers for beamlets were adjusted based on the beamlet intensities obtained through solving the plan optimization problem in the last iteration step. We modified a GPU-based MC dose engine to allow simultaneous computations of a large number of beamlet doses. To test the accuracy of our modified dose engine, we compared the dose from a broad beam and the summed beamlet doses in this beam in an inhomogeneous phantom. Agreement within 1% for the maximum difference and 0.55% for the average difference was observed. We then validated the proposed MC-based optimization schemes in one lung IMRT case. It was found that the conventional scheme required 10^6 particles from each beamlet to achieve an optimization result that differed from the ground truth by 3% in fluence map and 1% in dose. In contrast, the proposed scheme achieved the same level of accuracy with on average 1.2 × 10^5 particles per beamlet. Correspondingly, the computation time including both MC dose calculations and plan optimizations was reduced by a factor of 4.4, from 494 to 113 s, using only one GPU card.
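The adaptive idea described above can be illustrated with a short sketch that allocates a fixed particle budget across beamlets in proportion to their current intensities, with a small floor so low-intensity beamlets are not starved. The budget and floor fraction are assumptions and this is not the authors' exact allocation rule.

```python
import numpy as np

# Allocate MC particles per beamlet roughly in proportion to the beamlet
# intensities from the last plan-optimization iteration (illustrative scheme).
def allocate_particles(intensities, total=1e7, floor_frac=0.01):
    w = np.asarray(intensities, dtype=float)
    w = np.maximum(w / w.sum(), floor_frac / len(w))   # apply a per-beamlet floor
    w = w / w.sum()                                    # renormalize
    return np.round(w * total).astype(int)
```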
Bicriteria Network Optimization Problem using Priority-based Genetic Algorithm
NASA Astrophysics Data System (ADS)
Gen, Mitsuo; Lin, Lin; Cheng, Runwei
Network optimization is an increasingly important and fundamental issue in fields such as engineering, computer science, operations research, transportation, telecommunication, decision support systems, manufacturing, and airline scheduling. In many applications, however, there are several criteria associated with traversing each edge of a network. For example, cost and flow measures are both important in the networks. As a result, there has been recent interest in solving the Bicriteria Network Optimization Problem. The Bicriteria Network Optimization Problem is known to be NP-hard. The efficient set of paths may be very large, possibly exponential in size. Thus the computational effort required to solve it can increase exponentially with the problem size in the worst case. In this paper, we propose a genetic algorithm (GA) approach that uses a priority-based chromosome for solving the bicriteria network optimization problem, including the maximum flow (MXF) model and the minimum cost flow (MCF) model. The objective is to find the set of Pareto optimal solutions that give possible maximum flow with minimum cost. This paper also incorporates the Adaptive Weight Approach (AWA), which utilizes useful information from the current population to readjust weights and obtain a search pressure toward a positive ideal point. Computer simulations on several difficult-to-solve network design problems show the effectiveness of the proposed method.
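A priority-based chromosome of the kind mentioned above can be decoded into a path by repeatedly moving to the adjacent unvisited node with the highest priority. The sketch below illustrates this decoding on a toy graph; details of the authors' decoder (for example, repair of dead ends) may differ.

```python
# Decode a priority-based chromosome into a source-sink path: at each step,
# move to the adjacent unvisited node with the highest priority value.
def decode_path(adj, priority, source, sink):
    path, node, visited = [source], source, {source}
    while node != sink:
        candidates = [n for n in adj[node] if n not in visited]
        if not candidates:                 # dead end: decoding fails
            return None
        node = max(candidates, key=lambda n: priority[n])
        path.append(node)
        visited.add(node)
    return path

adj = {0: [1, 2], 1: [2, 3], 2: [3], 3: []}
print(decode_path(adj, {0: 0.9, 1: 0.2, 2: 0.7, 3: 1.0}, 0, 3))   # [0, 2, 3]
```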
NASA Astrophysics Data System (ADS)
Sujono, A.; Santoso, B.; Juwana, W. E.
2016-03-01
The problem of detonation (knock) in Otto (petrol) engines remains unresolved, especially when attempting to improve performance. This research processed the engine's sound and vibration signal captured by a microphone sensor for the detection and identification of detonation. The microphone can be mounted without being attached to the cylinder block, which operates at high temperature, so its performance is more stable and durable and the sensor is inexpensive. However, the analysis is not straightforward because of substantial noise (interference). Therefore, a new pattern recognition method is used, based on filtering and a normalized-envelope regression function. The result is quite good, achieving a success rate of about 95%.
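One common way to obtain a band-limited, normalized envelope of the kind used above is to band-pass the microphone signal around the expected knock band and take the magnitude of the analytic signal. The sketch below illustrates this; the sampling rate, band edges, and synthetic recording are assumptions, not the authors' processing chain.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 44100.0                                   # assumed microphone sampling rate
t = np.arange(0, 0.1, 1 / fs)
signal = np.sin(2 * np.pi * 6000 * t) * np.exp(-30 * t) + 0.1 * np.random.randn(t.size)

b, a = butter(4, [5000, 8000], btype="bandpass", fs=fs)   # assumed knock band
filtered = filtfilt(b, a, signal)              # zero-phase band-pass filtering
envelope = np.abs(hilbert(filtered))           # analytic-signal envelope
envelope /= envelope.max()                     # normalize to [0, 1]
```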
Using concepts from biology to improve problem-solving methods
NASA Astrophysics Data System (ADS)
Goodman, Erik D.; Rothwell, Edward J.; Averill, Ronald C.
2011-06-01
Observing nature has been a cornerstone of engineering design. Today, engineers look not only at finished products, but imitate the evolutionary process by which highly optimized artifacts have appeared in nature. Evolutionary computation began by capturing only the simplest ideas of evolution, but today, researchers study natural evolution and incorporate an increasing number of concepts in order to evolve solutions to complex engineering problems. At the new BEACON Center for the Study of Evolution in Action, studies in the lab and field and in silico are laying the groundwork for new tools for evolutionary engineering design. This paper, which accompanies a keynote address, describes various steps in development and application of evolutionary computation, particularly as regards sensor design, and sets the stage for future advances.
A heuristic approach to optimization of structural topology including self-weight
NASA Astrophysics Data System (ADS)
Tajs-Zielińska, Katarzyna; Bochenek, Bogdan
2018-01-01
Topology optimization of structures under a design-dependent self-weight load is investigated in this paper. The problem deserves attention because of its significant importance in engineering practice, especially nowadays as topology optimization is more often applied when designing large engineering structures, for example, bridges or the carrying systems of tall buildings. It is worth noting that well-known approaches to topology optimization which have been successfully applied to structures under fixed loads cannot be directly adapted to the case of design-dependent loads, so that topology generation can be a challenge also for numerical algorithms. The paper presents the application of a simple but efficient non-gradient method to the topology optimization of elastic structures under self-weight loading. The algorithm is based on the Cellular Automata concept, the application of which can produce effective solutions with low computational cost.
NASA Astrophysics Data System (ADS)
Dhingra, Sunil; Bhushan, Gian; Dubey, Kashyap Kumar
2014-03-01
The present work studies and identifies the different variables that affect the output parameters involved in a single cylinder direct injection compression ignition (CI) engine using jatropha biodiesel. Response surface methodology based on central composite design (CCD) is used to design the experiments. Mathematical models are developed for combustion parameters (brake specific fuel consumption (BSFC) and peak cylinder pressure (Pmax)), the performance parameter brake thermal efficiency (BTE) and emission parameters (CO, NOx, unburnt HC and smoke) using regression techniques. These regression equations are further utilized for simultaneous optimization of combustion (BSFC, Pmax), performance (BTE) and emission (CO, NOx, HC, smoke) parameters. As the objective is to maximize BTE and minimize BSFC, Pmax, CO, NOx, HC and smoke, a multiobjective optimization problem is formulated. Nondominated sorting genetic algorithm-II is used in predicting the Pareto optimal sets of solution. Experiments are performed at suitable optimal solutions for predicting the combustion, performance and emission parameters to check the adequacy of the proposed model. The Pareto optimal sets of solution can be used as guidelines for the end users to select the optimal combination of engine output and emission parameters depending upon their own requirements.
Multiplexed Predictive Control of a Large Commercial Turbofan Engine
NASA Technical Reports Server (NTRS)
Richter, Hanz; Singaraju, Anil; Litt, Jonathan S.
2008-01-01
Model predictive control is a strategy well-suited to handle the highly complex, nonlinear, uncertain, and constrained dynamics involved in aircraft engine control problems. However, it has thus far been infeasible to implement model predictive control in engine control applications, because of the combination of model complexity and the time allotted for the control update calculation. In this paper, a multiplexed implementation is proposed that dramatically reduces the computational burden of the quadratic programming optimization that must be solved online as part of the model-predictive-control algorithm. Actuator updates are calculated sequentially and cyclically in a multiplexed implementation, as opposed to the simultaneous optimization taking place in conventional model predictive control. Theoretical aspects are discussed based on a nominal model, and actual computational savings are demonstrated using a realistic commercial engine model.
Launch Vehicle Design and Optimization Methods and Priority for the Advanced Engineering Environment
NASA Technical Reports Server (NTRS)
Rowell, Lawrence F.; Korte, John J.
2003-01-01
NASA's Advanced Engineering Environment (AEE) is a research and development program that will improve collaboration among design engineers for launch vehicle conceptual design and provide the infrastructure (methods and framework) necessary to enable that environment. In this paper, three major technical challenges facing the AEE program are identified, and three specific design problems are selected to demonstrate how advanced methods can improve current design activities. References are made to studies that demonstrate these design problems and methods, and these studies will provide the detailed information and check cases to support incorporation of these methods into the AEE. This paper provides background and terminology for discussing the launch vehicle conceptual design problem so that the diverse AEE user community can participate in prioritizing the AEE development effort.
Airfoil optimization for unsteady flows with application to high-lift noise reduction
NASA Astrophysics Data System (ADS)
Rumpfkeil, Markus Peer
The use of steady-state aerodynamic optimization methods in the computational fluid dynamic (CFD) community is fairly well established. In particular, the use of adjoint methods has proven to be very beneficial because their cost is independent of the number of design variables. The application of numerical optimization to airframe-generated noise, however, has not received as much attention, but with the significant quieting of modern engines, airframe noise now competes with engine noise. Optimal control techniques for unsteady flows are needed in order to be able to reduce airframe-generated noise. In this thesis, a general framework is formulated to calculate the gradient of a cost function in a nonlinear unsteady flow environment via the discrete adjoint method. The unsteady optimization algorithm developed in this work utilizes a Newton-Krylov approach since the gradient-based optimizer uses the quasi-Newton method BFGS, Newton's method is applied to the nonlinear flow problem, GMRES is used to solve the resulting linear problem inexactly, and last but not least the linear adjoint problem is solved using Bi-CGSTAB. The flow is governed by the unsteady two-dimensional compressible Navier-Stokes equations in conjunction with a one-equation turbulence model, which are discretized using structured grids and a finite difference approach. The effectiveness of the unsteady optimization algorithm is demonstrated by applying it to several problems of interest including shocktubes, pulses in converging-diverging nozzles, rotating cylinders, transonic buffeting, and an unsteady trailing-edge flow. In order to address radiated far-field noise, an acoustic wave propagation program based on the Ffowcs Williams and Hawkings (FW-H) formulation is implemented and validated. The general framework is then used to derive the adjoint equations for a novel hybrid URANS/FW-H optimization algorithm in order to be able to optimize the shape of airfoils based on their calculated far-field pressure fluctuations. Validation and application results for this novel hybrid URANS/FW-H optimization algorithm show that it is possible to optimize the shape of an airfoil in an unsteady flow environment to minimize its radiated far-field noise while maintaining good aerodynamic performance.
Optimal trajectories for an aerospace plane. Part 2: Data, tables, and graphs
NASA Technical Reports Server (NTRS)
Miele, Angelo; Lee, W. Y.; Wu, G. D.
1990-01-01
Data, tables, and graphs relative to the optimal trajectories for an aerospace plane are presented. A single-stage-to-orbit (SSTO) configuration is considered, and the transition from low supersonic speeds to orbital speeds is studied for a single aerodynamic model (GHAME) and three engine models. Four optimization problems are solved using the sequential gradient-restoration algorithm for optimal control problems: (1) minimization of the weight of fuel consumed; (2) minimization of the peak dynamic pressure; (3) minimization of the peak heating rate; and (4) minimization of the peak tangential acceleration. The above optimization studies are carried out for different combinations of constraints, specifically: initial path inclination that is either free or given; dynamic pressure that is either free or bounded; and tangential acceleration that is either free or bounded.
Data Mining and Optimization Tools for Developing Engine Parameters Tools
NASA Technical Reports Server (NTRS)
Dhawan, Atam P.
1998-01-01
This project was awarded for understanding the problem and developing a plan for Data Mining tools for use in designing and implementing an Engine Condition Monitoring System. From the total budget of $5,000, Tricia and I studied the problem domain for developing an Engine Condition Monitoring system using the sparse and non-standardized datasets to be available through a consortium at NASA Lewis Research Center. We visited NASA three times to discuss additional issues related to the dataset which was not made available to us. We discussed and developed a general framework of data mining and optimization tools to extract useful information from sparse and non-standard datasets. These discussions led to the training of Tricia Erhardt to develop Genetic Algorithm based search programs which were written in C++ and used to demonstrate the capability of the GA in searching for an optimal solution in noisy datasets. From the study and discussion with NASA LERC personnel, we then prepared a proposal, which is being submitted to NASA for future work on the development of data mining algorithms for engine condition monitoring. The proposed set of algorithms uses wavelet processing to create a multi-resolution pyramid of the data for GA based multi-resolution optimal search. Wavelet processing is proposed to create a coarse resolution representation of the data, providing two advantages in GA based search: 1. We will have less data to begin with to make search sub-spaces. 2. It will have robustness against noise because at every level of wavelet based decomposition, we will be decomposing the signal into low-pass and high-pass components.
Zu, Xianghuan; Yang, Chuanlei; Wang, Hechun; Wang, Yinyan
2018-01-01
Exhaust gas recirculation (EGR) is one of the main methods of reducing NOx emissions and has been widely used in marine diesel engines. This paper proposes an optimized comprehensive assessment method based on multi-objective grey situation decision theory, grey relation theory and grey entropy analysis to evaluate the performance of EGR and determine its optimal rate, tasks which currently lack clear theoretical guidance. First, multi-objective grey situation decision theory is used to establish the initial decision-making model according to the main EGR parameters. The optimal compromise between diesel engine combustion and emission performance is transformed into a decision-making target weight problem. After establishing the initial model and considering the characteristics of EGR under different conditions, an optimized target weight algorithm based on grey relation theory and grey entropy analysis is applied to generate the comprehensive evaluation and decision-making model. Finally, the proposed method is successfully applied to a TBD234V12 turbocharged diesel engine, and the results clearly illustrate the feasibility of the proposed method for providing theoretical support and a reference for further EGR optimization.
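The grey relational step underlying the assessment method above can be sketched as follows: normalize the indicator matrix, compare each candidate against an ideal reference sequence, and average the resulting grey relational coefficients into a grade per candidate. The toy data, the larger-is-better normalization, and the distinguishing coefficient zeta = 0.5 are assumptions.

```python
import numpy as np

# Basic grey relational analysis: one grade per candidate row, computed against
# an ideal reference sequence formed from the normalized indicator matrix.
def grey_relational_grades(X, zeta=0.5):
    X = np.asarray(X, dtype=float)
    Xn = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))  # normalize columns
    ref = Xn.max(axis=0)                                        # ideal sequence
    delta = np.abs(Xn - ref)
    coeff = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
    return coeff.mean(axis=1)

# Rows: candidate EGR rates; columns: performance/emission indicators (toy data).
print(grey_relational_grades([[0.2, 0.9, 0.4], [0.8, 0.6, 0.7], [0.5, 0.5, 0.9]]))
```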
A Matrix-Free Algorithm for Multidisciplinary Design Optimization
NASA Astrophysics Data System (ADS)
Lambe, Andrew Borean
Multidisciplinary design optimization (MDO) is an approach to engineering design that exploits the coupling between components or knowledge disciplines in a complex system to improve the final product. In aircraft design, MDO methods can be used to simultaneously design the outer shape of the aircraft and the internal structure, taking into account the complex interaction between the aerodynamic forces and the structural flexibility. Efficient strategies are needed to solve such design optimization problems and guarantee convergence to an optimal design. This work begins with a comprehensive review of MDO problem formulations and solution algorithms. First, a fundamental MDO problem formulation is defined from which other formulations may be obtained through simple transformations. Using these fundamental problem formulations, decomposition methods from the literature are reviewed and classified. All MDO methods are presented in a unified mathematical notation to facilitate greater understanding. In addition, a novel set of diagrams, called extended design structure matrices, are used to simultaneously visualize both data communication and process flow between the many software components of each method. For aerostructural design optimization, modern decomposition-based MDO methods cannot efficiently handle the tight coupling between the aerodynamic and structural states. This fact motivates the exploration of methods that can reduce the computational cost. A particular structure in the direct and adjoint methods for gradient computation motivates the idea of a matrix-free optimization method. A simple matrix-free optimizer is developed based on the augmented Lagrangian algorithm. This new matrix-free optimizer is tested on two structural optimization problems and one aerostructural optimization problem. The results indicate that the matrix-free optimizer is able to efficiently solve structural and multidisciplinary design problems with thousands of variables and constraints. On the aerostructural test problem formulated with thousands of constraints, the matrix-free optimizer is estimated to reduce the total computational time by up to 90% compared to conventional optimizers.
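The matrix-free idea underlying the optimizer above can be illustrated with SciPy's LinearOperator, which exposes an operator only through matrix-vector products so that a Krylov solver never forms the matrix explicitly. The tridiagonal operator and right-hand side below are toy assumptions, unrelated to the aerostructural problem.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

n = 1000

def matvec(v):
    # Tridiagonal [-1, 2, -1] product computed without storing the matrix.
    out = 2.0 * v
    out[:-1] -= v[1:]
    out[1:] -= v[:-1]
    return out

A = LinearOperator((n, n), matvec=matvec)
b = np.ones(n)
x, info = cg(A, b)   # Krylov solve that only ever calls matvec(v)
```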
NASA Astrophysics Data System (ADS)
Joung, InSuk; Kim, Jong Yun; Gross, Steven P.; Joo, Keehyoung; Lee, Jooyoung
2018-02-01
Many problems in science and engineering can be formulated as optimization problems. One way to solve these problems is to develop tailored problem-specific approaches. As such development is challenging, an alternative is to develop good generally-applicable algorithms. Such algorithms are easy to apply, typically function robustly, and reduce development time. Here we provide a description for one such algorithm called Conformational Space Annealing (CSA) along with its python version, PyCSA. We previously applied it to many optimization problems including protein structure prediction and graph community detection. To demonstrate its utility, we have applied PyCSA to two continuous test functions, namely the Ackley and Eggholder functions. In addition, in order to demonstrate the complete generality of PyCSA with respect to the type of objective function, we show how PyCSA can be applied to a discrete objective function, namely a parameter optimization problem. Based on the benchmarking results of the three problems, the performance of CSA is shown to be better than or similar to that of the most popular optimization method, simulated annealing. For continuous objective functions, we found that L-BFGS-B was the best-performing local optimization method, while for a discrete objective function Nelder-Mead was the best. The current version of PyCSA can be run in parallel at the coarse-grained level by calculating multiple independent local optimizations separately. The source code of PyCSA is available from http://lee.kias.re.kr.
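For reference, the Ackley test function mentioned above, in its common d-dimensional form with the usual constants (a = 20, b = 0.2, c = 2*pi), is sketched below; the global minimum is 0 at the origin. PyCSA's exact benchmark settings may differ.

```python
import numpy as np

# Standard Ackley function; highly multimodal, global minimum 0 at the origin.
def ackley(x, a=20.0, b=0.2, c=2.0 * np.pi):
    x = np.asarray(x, dtype=float)
    n = x.size
    return (-a * np.exp(-b * np.sqrt(np.sum(x**2) / n))
            - np.exp(np.sum(np.cos(c * x)) / n) + a + np.e)

print(ackley([0.0, 0.0]))   # ~0.0 up to floating-point error
```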
Topology optimization under stochastic stiffness
NASA Astrophysics Data System (ADS)
Asadpoure, Alireza
Topology optimization is a systematic computational tool for optimizing the layout of materials within a domain for engineering design problems. It allows variation of structural boundaries and connectivities. This freedom in the design space often enables discovery of new, high performance designs. However, solutions obtained by performing the optimization in a deterministic setting may be impractical or suboptimal when considering real-world engineering conditions with inherent variabilities including (for example) variabilities in fabrication processes and operating conditions. The aim of this work is to provide a computational methodology for topology optimization in the presence of uncertainties associated with structural stiffness, such as uncertain material properties and/or structural geometry. Existing methods for topology optimization under deterministic conditions are first reviewed. Modifications are then proposed to improve the numerical performance of the so-called Heaviside Projection Method (HPM) in continuum domains. Next, two approaches, perturbation and Polynomial Chaos Expansion (PCE), are proposed to account for uncertainties in the optimization procedure. These approaches are intrusive, allowing tight and efficient coupling of the uncertainty quantification with the optimization sensitivity analysis. The work herein develops a robust topology optimization framework aimed at reducing the sensitivity of optimized solutions to uncertainties. The perturbation-based approach combines deterministic topology optimization with a perturbation method for the quantification of uncertainties. The use of perturbation transforms the problem of topology optimization under uncertainty to an augmented deterministic topology optimization problem. The PCE approach combines the spectral stochastic approach for the representation and propagation of uncertainties with an existing deterministic topology optimization technique. The resulting compact representations for the response quantities allow for efficient and accurate calculation of sensitivities of response statistics with respect to the design variables. The proposed methods are shown to be successful at generating robust optimal topologies. Examples from topology optimization in continuum and discrete domains (truss structures) under uncertainty are presented. It is also shown that proposed methods lead to significant computational savings when compared to Monte Carlo-based optimization which involve multiple formations and inversions of the global stiffness matrix and that results obtained from the proposed method are in excellent agreement with those obtained from a Monte Carlo-based optimization algorithm.
Building information modelling review with potential applications in tunnel engineering of China.
Zhou, Weihong; Qin, Haiyang; Qiu, Junling; Fan, Haobo; Lai, Jinxing; Wang, Ke; Wang, Lixin
2017-08-01
Building information modelling (BIM) can be applied to tunnel engineering to address a number of problems, including complex structure, extensive design, long construction cycle and increased security risks. To promote the development of tunnel engineering in China, this paper combines actual cases, including the Xingu mountain tunnel and the Shigu Mountain tunnel, to systematically analyse BIM applications in tunnel engineering in China. The results indicate that BIM technology in tunnel engineering is currently mainly applied during the design stage rather than during the construction and operation stages. The application of BIM technology in tunnel engineering still faces many problems, such as a lack of standards, incompatibility of different software, disorganized management, complex combination with GIS (Geographic Information System), low utilization rate and poor awareness. In this study, through a summary of related research results and engineering cases, suggestions are introduced and an outlook for BIM application in tunnel engineering in China is presented, which provides guidance for design optimization, construction standards and later operation and maintenance.
Building information modelling review with potential applications in tunnel engineering of China
NASA Astrophysics Data System (ADS)
Zhou, Weihong; Qin, Haiyang; Qiu, Junling; Fan, Haobo; Lai, Jinxing; Wang, Ke; Wang, Lixin
2017-08-01
Building information modelling (BIM) can be applied to tunnel engineering to address a number of problems, including complex structure, extensive design, long construction cycle and increased security risks. To promote the development of tunnel engineering in China, this paper combines actual cases, including the Xingu mountain tunnel and the Shigu Mountain tunnel, to systematically analyse BIM applications in tunnel engineering in China. The results indicate that BIM technology in tunnel engineering is currently mainly applied during the design stage rather than during construction and operation stages. The application of BIM technology in tunnel engineering covers many problems, such as a lack of standards, incompatibility of different software, disorganized management, complex combination with GIS (Geographic Information System), low utilization rate and poor awareness. In this study, through summary of related research results and engineering cases, suggestions are introduced and an outlook for the BIM application in tunnel engineering in China is presented, which provides guidance for design optimization, construction standards and later operation maintenance.
Topology optimization for nonlinear dynamic problems: Considerations for automotive crashworthiness
NASA Astrophysics Data System (ADS)
Kaushik, Anshul; Ramani, Anand
2014-04-01
Crashworthiness of automotive structures is most often engineered after an optimal topology has been arrived at using other design considerations. This study is an attempt to incorporate crashworthiness requirements upfront in the topology synthesis process using a mathematically consistent framework. It proposes the use of equivalent linear systems from the nonlinear dynamic simulation in conjunction with a discrete-material topology optimizer. Velocity and acceleration constraints are consistently incorporated in the optimization set-up. Issues specific to crash problems due to the explicit solution methodology employed, nature of the boundary conditions imposed on the structure, etc. are discussed and possible resolutions are proposed. A demonstration of the methodology on two-dimensional problems that address some of the structural requirements and the types of loading typical of frontal and side impact is provided in order to show that this methodology has the potential for topology synthesis incorporating crashworthiness requirements.
OpenMDAO: Framework for Flexible Multidisciplinary Design, Analysis and Optimization Methods
NASA Technical Reports Server (NTRS)
Heath, Christopher M.; Gray, Justin S.
2012-01-01
The OpenMDAO project is underway at NASA to develop a framework which simplifies the implementation of state-of-the-art tools and methods for multidisciplinary design, analysis and optimization. Foremost, OpenMDAO has been designed to handle variable problem formulations, encourage reconfigurability, and promote model reuse. This work demonstrates the concept of iteration hierarchies in OpenMDAO to achieve a flexible environment for supporting advanced optimization methods which include adaptive sampling and surrogate modeling techniques. In this effort, two efficient global optimization methods were applied to solve a constrained, single-objective and constrained, multiobjective version of a joint aircraft/engine sizing problem. The aircraft model, NASA's next-generation advanced single-aisle civil transport, is being studied as part of the Subsonic Fixed Wing project to help meet simultaneous program goals for reduced fuel burn, emissions, and noise. This analysis serves as a realistic test problem to demonstrate the flexibility and reconfigurability offered by OpenMDAO.
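The sketch below illustrates the style of problem setup OpenMDAO supports. It is written against the current (3.x) Python API rather than the 2012-era framework described in the abstract, and the paraboloid objective is only a stand-in for the aircraft/engine sizing models.

```python
import openmdao.api as om

# Build a problem with a single component defined inline from an expression.
prob = om.Problem()
prob.model.add_subsystem(
    'parab',
    om.ExecComp('f = (x - 3.0)**2 + x*y + (y + 4.0)**2 - 3.0'),
    promotes=['*'])

# Attach a driver; surrogate-assisted or sampling drivers plug in the same way.
prob.driver = om.ScipyOptimizeDriver()
prob.driver.options['optimizer'] = 'SLSQP'

prob.model.add_design_var('x', lower=-50.0, upper=50.0)
prob.model.add_design_var('y', lower=-50.0, upper=50.0)
prob.model.add_objective('f')

prob.setup()
prob.set_val('x', 3.0)
prob.set_val('y', -4.0)
prob.run_driver()

print(prob.get_val('x'), prob.get_val('y'), prob.get_val('f'))
```

Swapping the driver or regrouping components changes the iteration hierarchy without touching the component code, which is the reconfigurability the abstract emphasizes.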
Transient responses' optimization by means of set-based multi-objective evolution
NASA Astrophysics Data System (ADS)
Avigad, Gideon; Eisenstadt, Erella; Goldvard, Alex; Salomon, Shaul
2012-04-01
In this article, a novel solution to multi-objective problems involving the optimization of transient responses is suggested. It is claimed that the common approach of treating such problems by introducing auxiliary objectives overlooks tradeoffs that should be presented to the decision makers. This means that, if at some time during the responses, one of the responses is optimal, it should not be overlooked. An evolutionary multi-objective algorithm is suggested in order to search for these optimal solutions. For this purpose, state-wise domination is utilized with a new crowding measure for ordered sets being suggested. The approach is tested on both artificial as well as on real life problems in order to explain the methodology and demonstrate its applicability and importance. The results indicate that, from an engineering point of view, the approach possesses several advantages over existing approaches. Moreover, the applications highlight the importance of set-based evolution.
Multimodal optimization by using hybrid of artificial bee colony algorithm and BFGS algorithm
NASA Astrophysics Data System (ADS)
Anam, S.
2017-10-01
Optimization has become one of the important fields in mathematics. Many problems in engineering and science can be formulated as optimization problems, and they may have many local optima. The optimization problem with many local optima, known as the multimodal optimization problem, poses the challenge of finding the global solution. Several metaheuristic methods have been proposed to solve multimodal optimization problems, such as Particle Swarm Optimization (PSO), the Genetic Algorithm (GA) and the Artificial Bee Colony (ABC) algorithm. The performance of the ABC algorithm is better than or similar to that of other population-based algorithms, with the advantage of employing fewer control parameters. The ABC algorithm also has the advantages of strong robustness, fast convergence and high flexibility. However, it has the disadvantage of premature convergence in the later search period, and the accuracy of the optimal value sometimes cannot meet the requirements. The Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm is a good iterative method for finding a local optimum and performs better than other local optimization methods. Based on the advantages of the ABC algorithm and the BFGS algorithm, this paper proposes a hybrid of the artificial bee colony algorithm and the BFGS algorithm to solve the multimodal optimization problem. In the first step, the ABC algorithm is run to find a point; in the second step, this point is used as the initial point of the BFGS algorithm. The results show that the hybrid method overcomes the problems of the basic ABC algorithm for almost all test functions. However, if the shape of the function is flat, the proposed method does not work well.
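A compact illustration of the two-stage idea, with a crude random-sampling phase standing in for the ABC exploration (the actual ABC employed-/onlooker-/scout-bee updates are omitted) and SciPy's BFGS used for the local polish. The Rastrigin function is a typical multimodal test case, chosen here for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def rastrigin(x):
    x = np.asarray(x)
    return 10.0 * x.size + np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x))

rng = np.random.default_rng(0)
dim, n_samples = 5, 2000

# Stage 1: global exploration (stand-in for the ABC food-source search).
candidates = rng.uniform(-5.12, 5.12, size=(n_samples, dim))
best = candidates[np.argmin([rastrigin(c) for c in candidates])]

# Stage 2: local refinement of the best point with BFGS.
result = minimize(rastrigin, best, method='BFGS')
print("global phase best :", rastrigin(best))
print("after BFGS polish :", result.fun, "at", np.round(result.x, 4))
```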
A Measure Approximation for Distributionally Robust PDE-Constrained Optimization Problems
Kouri, Drew Philip
2017-12-19
In numerous applications, scientists and engineers acquire varied forms of data that partially characterize the inputs to an underlying physical system. This data is then used to inform decisions such as controls and designs. Consequently, it is critical that the resulting control or design is robust to the inherent uncertainties associated with the unknown probabilistic characterization of the model inputs. Here in this work, we consider optimal control and design problems constrained by partial differential equations with uncertain inputs. We do not assume a known probabilistic model for the inputs, but rather we formulate the problem as a distributionally robust optimization problem where the outer minimization problem determines the control or design, while the inner maximization problem determines the worst-case probability measure that matches desired characteristics of the data. We analyze the inner maximization problem in the space of measures and introduce a novel measure approximation technique, based on the approximation of continuous functions, to discretize the unknown probability measure. Finally, we prove consistency of our approximated min-max problem and conclude with numerical results.
A new hybrid meta-heuristic algorithm for optimal design of large-scale dome structures
NASA Astrophysics Data System (ADS)
Kaveh, A.; Ilchi Ghazaan, M.
2018-02-01
In this article a hybrid algorithm based on a vibrating particles system (VPS) algorithm, multi-design variable configuration (Multi-DVC) cascade optimization, and an upper bound strategy (UBS) is presented for global optimization of large-scale dome truss structures. The new algorithm is called MDVC-UVPS in which the VPS algorithm acts as the main engine of the algorithm. The VPS algorithm is one of the most recent multi-agent meta-heuristic algorithms mimicking the mechanisms of damped free vibration of single degree of freedom systems. In order to handle a large number of variables, cascade sizing optimization utilizing a series of DVCs is used. Moreover, the UBS is utilized to reduce the computational time. Various dome truss examples are studied to demonstrate the effectiveness and robustness of the proposed method, as compared to some existing structural optimization techniques. The results indicate that the MDVC-UVPS technique is a powerful search and optimization method for optimizing structural engineering problems.
Multiobjective Multifactorial Optimization in Evolutionary Multitasking.
Gupta, Abhishek; Ong, Yew-Soon; Feng, Liang; Tan, Kay Chen
2016-05-03
In recent decades, the field of multiobjective optimization has attracted considerable interest among evolutionary computation researchers. One of the main features that makes evolutionary methods particularly appealing for multiobjective problems is the implicit parallelism offered by a population, which enables simultaneous convergence toward the entire Pareto front. While a plethora of related algorithms have been proposed till date, a common attribute among them is that they focus on efficiently solving only a single optimization problem at a time. Despite the known power of implicit parallelism, seldom has an attempt been made to multitask, i.e., to solve multiple optimization problems simultaneously. It is contended that the notion of evolutionary multitasking leads to the possibility of automated transfer of information across different optimization exercises that may share underlying similarities, thereby facilitating improved convergence characteristics. In particular, the potential for automated transfer is deemed invaluable from the standpoint of engineering design exercises where manual knowledge adaptation and reuse are routine. Accordingly, in this paper, we present a realization of the evolutionary multitasking paradigm within the domain of multiobjective optimization. The efficacy of the associated evolutionary algorithm is demonstrated on some benchmark test functions as well as on a real-world manufacturing process design problem from the composites industry.
Improving engineering system design by formal decomposition, sensitivity analysis, and optimization
NASA Technical Reports Server (NTRS)
Sobieski, J.; Barthelemy, J. F. M.
1985-01-01
A method for use in the design of a complex engineering system by decomposing the problem into a set of smaller subproblems is presented. Coupling of the subproblems is preserved by means of the sensitivity derivatives of the subproblem solution to the inputs received from the system. The method allows for the division of work among many people and computers.
Production of Engineered Fabrics Using Artificial Neural Network-Genetic Algorithm Hybrid Model
NASA Astrophysics Data System (ADS)
Mitra, Ashis; Majumdar, Prabal Kumar; Banerjee, Debamalya
2015-10-01
The process of fabric engineering which is generally practised in most textile mills is very complicated, repetitive, tedious and time consuming. To eliminate this trial-and-error approach, a new approach to fabric engineering has been attempted in this work. Data sets of construction parameters (comprising ends per inch, picks per inch, warp count and weft count) and three fabric properties (namely drape coefficient, air permeability and thermal resistance) of 25 handloom cotton fabrics have been used. The weights and biases of three artificial neural network (ANN) models developed for the prediction of drape coefficient, air permeability and thermal resistance were used to formulate the fitness or objective function and constraints of the optimization problem. The optimization problem was solved using a genetic algorithm (GA). For both fabrics attempted for engineering, the target and simulated fabric properties were very close. The GA was able to find the optimum set of fabric construction parameters with reasonably good accuracy except in the case of EPI. However, the overall result is encouraging and can be improved further by using larger data sets of handloom fabrics with the hybrid ANN-GA model.
Discrete Event Supervisory Control Applied to Propulsion Systems
NASA Technical Reports Server (NTRS)
Litt, Jonathan S.; Shah, Neerav
2005-01-01
The theory of discrete event supervisory (DES) control was applied to the optimal control of a twin-engine aircraft propulsion system and demonstrated in a simulation. The supervisory control, which is implemented as a finite-state automaton, oversees the behavior of a system and manages it in such a way that it maximizes a performance criterion, similar to a traditional optimal control problem. DES controllers can be nested such that a high-level controller supervises multiple lower level controllers. This structure can be expanded to control huge, complex systems, providing optimal performance and increasing autonomy with each additional level. The DES control strategy for propulsion systems was validated using a distributed testbed consisting of multiple computers--each representing a module of the overall propulsion system--to simulate real-time hardware-in-the-loop testing. In the first experiment, DES control was applied to the operation of a nonlinear simulation of a turbofan engine (running in closed loop using its own feedback controller) to minimize engine structural damage caused by a combination of thermal and structural loads. This enables increased on-wing time for the engine through better management of the engine-component life usage. Thus, the engine-level DES acts as a life-extending controller through its interaction with and manipulation of the engine's operation.
NASA Astrophysics Data System (ADS)
Yue, Yingchao; Fan, Wenhui; Xiao, Tianyuan; Ma, Cheng
2013-07-01
High level architecture (HLA) is the open standard in the collaborative simulation field. Scholars have been paying close attention to theoretical research on, and engineering applications of, collaborative simulation based on HLA/RTI, which extends HLA in various aspects such as functionality and efficiency. However, related study on the load balancing problem of HLA collaborative simulation is insufficient. Without load balancing, collaborative simulation under HLA/RTI may encounter performance reduction or even fatal errors. In this paper, load balancing is further divided into static problems and dynamic problems. A multi-objective model is established and the randomness of model parameters is taken into consideration for static load balancing, which makes the model more credible. A Monte Carlo based optimization algorithm (MCOA) is devised to achieve static load balance. For dynamic load balancing, a new type of dynamic load balancing problem is put forward with regard to the variable-structured collaborative simulation under HLA/RTI. In order to minimize the influence on the running collaborative simulation, an ordinal optimization based algorithm (OOA) is devised to shorten the optimization time. Furthermore, the two algorithms are adopted in simulation experiments of different scenarios, which demonstrate their effectiveness and efficiency. An engineering experiment on collaborative simulation under HLA/RTI of high-speed electric multiple units (EMU) is also conducted to verify the credibility of the proposed models and the utility of MCOA and OOA for practical engineering systems. The proposed research ensures compatibility with traditional HLA, enhances the ability to assign simulation loads onto computing units both statically and dynamically, improves the performance of the collaborative simulation system and makes full use of the hardware resources.
Generation of structural topologies using efficient technique based on sorted compliances
NASA Astrophysics Data System (ADS)
Mazur, Monika; Tajs-Zielińska, Katarzyna; Bochenek, Bogdan
2018-01-01
Topology optimization, although well recognized, is still being widely developed. It has recently gained more attention since large computational capability has become available to designers. This process is stimulated by a variety of emerging, innovative optimization methods. It is observed that traditional gradient-based mathematical programming algorithms are, in many cases, replaced by novel and efficient heuristic methods inspired by biological, chemical or physical phenomena. These methods have become useful tools for structural optimization because of their versatility and easy numerical implementation. In this paper the engineering implementation of a novel heuristic algorithm for minimum compliance topology optimization is discussed. The performance of the topology generator is based on the implementation of a special function utilizing information on the compliance distribution within the design space. To cope with engineering problems, the algorithm has been combined with the structural analysis system Ansys.
ERIC Educational Resources Information Center
Mahavier, W. Ted
2002-01-01
Describes a two-semester numerical methods course that serves as a research experience for undergraduate students without requiring external funding or the modification of current curriculum. Uses an engineering problem to introduce students to constrained optimization via a variation of the traditional isoperimetric problem of finding the curve…
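One way to pose a finite-dimensional variant of the isoperimetric theme mentioned above as a constrained optimization exercise is to maximize the area of a rectangle subject to a fixed perimeter; the sketch below solves it with SciPy's SLSQP. The numbers and the rectangular parameterization are illustrative assumptions, not taken from the course described.

```python
import numpy as np
from scipy.optimize import minimize

PERIMETER = 20.0

def neg_area(dims):
    w, h = dims
    return -(w * h)                  # maximize area by minimizing its negative

constraints = ({'type': 'eq', 'fun': lambda d: 2 * (d[0] + d[1]) - PERIMETER},)
bounds = [(0.0, None), (0.0, None)]

result = minimize(neg_area, x0=[1.0, 9.0], method='SLSQP',
                  bounds=bounds, constraints=constraints)
print("optimal sides:", np.round(result.x, 4), "area:", -result.fun)
# SLSQP recovers the square (5 x 5), the discrete analogue of the isoperimetric result.
```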
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tupek, Michael R.
2016-06-30
In recent years there has been a proliferation of modeling techniques for forward predictions of crack propagation in brittle materials, including: phase-field/gradient damage models, peridynamics, cohesive-zone models, and G/XFEM enrichment techniques. However, progress on the corresponding inverse problems has been relatively lacking. Taking advantage of key features of existing modeling approaches, we propose a parabolic regularization of Barenblatt cohesive models which borrows extensively from previous phase-field and gradient damage formulations. An efficient explicit time integration strategy for this type of nonlocal fracture model is then proposed and justified. In addition, we present a C++ computational framework for computing input parameter sensitivities efficiently for explicit dynamic problems using the adjoint method. This capability allows for solving inverse problems involving crack propagation to answer interesting engineering questions such as: 1) what is the optimal design topology and material placement for a heterogeneous structure to maximize fracture resistance, 2) what loads must have been applied to a structure for it to have failed in an observed way, 3) what are the existing cracks in a structure given various experimental observations, etc. In this work, we focus on the first of these engineering questions and demonstrate a capability to automatically and efficiently compute optimal designs intended to minimize crack propagation in structures.
A Modified Particle Swarm Optimization Technique for Finding Optimal Designs for Mixture Models
Wong, Weng Kee; Chen, Ray-Bing; Huang, Chien-Chih; Wang, Weichung
2015-01-01
Particle Swarm Optimization (PSO) is a meta-heuristic algorithm that has been shown to be successful in solving a wide variety of real and complicated optimization problems in engineering and computer science. This paper introduces a projection based PSO technique, named ProjPSO, to efficiently find different types of optimal designs, or nearly optimal designs, for mixture models with and without constraints on the components, and also for related models, like the log contrast models. We also compare the modified PSO performance with Fedorov's algorithm, a popular algorithm used to generate optimal designs, Cocktail algorithm, and the recent algorithm proposed by [1]. PMID:26091237
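A sketch of the projection idea behind a projection-based PSO, under the assumption (not stated in the abstract) that the mixture components must be nonnegative and sum to one: each particle is projected back onto the probability simplex after the standard velocity update. The objective is an arbitrary placeholder, not an actual design criterion from the paper.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto {x : x >= 0, sum(x) = 1}."""
    u = np.sort(v)[::-1]
    cssv = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (cssv - 1))[0][-1]
    theta = (cssv[rho] - 1) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def objective(x):                    # placeholder criterion for illustration only
    return np.sum((x - np.array([0.5, 0.3, 0.2]))**2)

rng = np.random.default_rng(1)
n_particles, dim = 20, 3
x = np.apply_along_axis(project_simplex, 1, rng.random((n_particles, dim)))
v = np.zeros_like(x)
pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
gbest = pbest[np.argmin(pbest_f)]

for _ in range(200):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = np.apply_along_axis(project_simplex, 1, x + v)   # keep mixtures feasible
    f = np.array([objective(p) for p in x])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)]

print("best mixture:", np.round(gbest, 4), "objective:", objective(gbest))
```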
NASA Astrophysics Data System (ADS)
Zecchin, A. C.; Simpson, A. R.; Maier, H. R.; Marchi, A.; Nixon, J. B.
2012-09-01
Evolutionary algorithms (EAs) have been applied successfully to many water resource problems, such as system design, management decision formulation, and model calibration. The performance of an EA with respect to a particular problem type is dependent on how effectively its internal operators balance the exploitation/exploration trade-off to iteratively find solutions of an increasing quality. For a given problem, different algorithms are observed to produce a variety of different final performances, but there have been surprisingly few investigations into characterizing how the different internal mechanisms alter the algorithm's searching behavior, in both the objective and decision space, to arrive at this final performance. This paper presents metrics for analyzing the searching behavior of ant colony optimization algorithms, a particular type of EA, for the optimal water distribution system design problem, which is a classical NP-hard problem in civil engineering. Using the proposed metrics, behavior is characterized in terms of three different attributes: (1) the effectiveness of the search in improving its solution quality and entering into optimal or near-optimal regions of the search space, (2) the extent to which the algorithm explores as it converges to solutions, and (3) the searching behavior with respect to the feasible and infeasible regions. A range of case studies is considered, where a number of ant colony optimization variants are applied to a selection of water distribution system optimization problems. The results demonstrate the utility of the proposed metrics to give greater insight into how the internal operators affect each algorithm's searching behavior.
Generalized networking engineering: optimal pricing and routing in multiservice networks
NASA Astrophysics Data System (ADS)
Mitra, Debasis; Wang, Qiong
2002-07-01
One of the functions of network engineering is to allocate resources optimally to forecasted demand. We generalize the mechanism by incorporating price-demand relationships into the problem formulation, and optimizing pricing and routing jointly to maximize total revenue. We consider a network, with fixed topology and link bandwidths, that offers multiple services, such as voice and data, each having characteristic price elasticity of demand, and quality of service and policy requirements on routing. Prices, which depend on service type and origin-destination, determine demands, that are routed, subject to their constraints, so as to maximize revenue. We study the basic properties of the optimal solution and prove that link shadow costs provide the basis for both optimal prices and optimal routing policies. We investigate the impact of input parameters, such as link capacities and price elasticities, on prices, demand growth, and routing policies. Asymptotic analyses, in which network bandwidth is scaled to grow, give results that are noteworthy for their qualitative insights. Several numerical examples illustrate the analyses.
Product Mix Selection Using AN Evolutionary Technique
NASA Astrophysics Data System (ADS)
Tsoulos, Ioannis G.; Vasant, Pandian
2009-08-01
This paper proposes an evolutionary technique for the solution of a real-life industrial problem, in particular the product mix selection problem. The evolutionary technique is a combination of a genetic algorithm, which preserves the feasibility of the trial solutions through penalties, and a local optimization method. The goal of this paper has been achieved by finding the best near-optimal solution for the profit fitness function with respect to the vagueness factor and the level of satisfaction. The profit values found will be very useful for decision makers in the industrial engineering sector for implementation purposes. It is possible to improve the solutions obtained in this study by employing other meta-heuristic methods such as simulated annealing, tabu search, ant colony optimization, particle swarm optimization and artificial immune systems.
Existence and Optimality Conditions for Risk-Averse PDE-Constrained Optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kouri, Drew Philip; Surowiec, Thomas M.
2018-06-05
Uncertainty is ubiquitous in virtually all engineering applications, and, for such problems, it is inadequate to simulate the underlying physics without quantifying the uncertainty in unknown or random inputs, boundary and initial conditions, and modeling assumptions. Here in this paper, we introduce a general framework for analyzing risk-averse optimization problems constrained by partial differential equations (PDEs). In particular, we postulate conditions on the random variable objective function as well as the PDE solution that guarantee existence of minimizers. Furthermore, we derive optimality conditions and apply our results to the control of an environmental contaminant. Lastly, we introduce a new risk measure, called the conditional entropic risk, that fuses desirable properties from both the conditional value-at-risk and the entropic risk measures.
The Research of Software Engineering Curriculum Reform
NASA Astrophysics Data System (ADS)
Kuang, Li-Qun; Han, Xie
Given that software engineering training cannot meet the needs of the community, this paper analyses some prominent problems in software engineering curriculum teaching, such as outdated teaching content, weak practical training and low teacher quality. We propose teaching reform methods guided by market demand: updating the teaching content, optimizing the teaching methods, reforming the teaching practice, and strengthening teacher-student exchange so that teachers and students improve together. We carried out and actively explored the reform and achieved the desired results.
NASA Astrophysics Data System (ADS)
Pozharskiy, Dmitry
In recent years a nonlinear, acoustic metamaterial, named granular crystals, has gained prominence due to its high accessibility, both experimentally and computationally. The observation of a wide range of dynamical phenomena in the system, due to its inherent nonlinearities, has suggested its importance in many engineering applications related to wave propagation. In the first part of this dissertation, we explore the nonlinear dynamics of damped-driven granular crystals. In one case, we consider a highly nonlinear setting, also known as a sonic vacuum, and derive a nonlinear analogue of a linear spectrum, corresponding to resonant periodic propagation and antiresonances. Experimental studies confirm the computational findings and the assimilation of experimental data into a numerical model is demonstrated. In the second case, global bifurcations in a precompressed granular crystal are examined, and their involvement in the appearance of chaotic dynamics is demonstrated. Both results highlight the importance of exploring the nonlinear dynamics, to gain insight into how a granular crystal responds to different external excitations. In the second part, we borrow established ideas from coarse-graining of dynamical systems and extend them to optimization problems. We combine manifold learning algorithms, such as Diffusion Maps, with stochastic optimization methods, such as Simulated Annealing, and show that we can retrieve an ensemble of a few important parameters that should be explored in detail. This framework can lead to acceleration of convergence when dealing with complex, high-dimensional optimization, and could potentially be applied to design engineered granular crystals.
Improved multi-objective ant colony optimization algorithm and its application in complex reasoning
NASA Astrophysics Data System (ADS)
Wang, Xinqing; Zhao, Yang; Wang, Dong; Zhu, Huijie; Zhang, Qing
2013-09-01
The problem of fault reasoning has aroused great concern in scientific and engineering fields. However, fault investigation and reasoning of complex systems is not a simple reasoning decision-making problem. It has become a typical multi-constraint and multi-objective reticulate optimization decision-making problem under many influencing factors and constraints. So far, little research has been carried out in this field. This paper transforms the fault reasoning problem of a complex system into a path-searching problem starting from known symptoms to fault causes. Three optimization objectives are considered simultaneously: maximum probability of average fault, maximum average importance, and minimum average complexity of test. Under the constraints of both known symptoms and the causal relationship among different components, a multi-objective optimization mathematical model is set up, taking the minimization of the cost of fault reasoning as the objective function. Since the problem is non-deterministic polynomial-hard (NP-hard), a modified multi-objective ant colony algorithm is proposed, in which a reachability matrix is set up to constrain the feasible search nodes of the ants, and a new pseudo-random-proportional rule and a pheromone adjustment mechanism are constructed to balance conflicts between the optimization objectives. At last, a Pareto optimal set is acquired. Evaluation functions based on the validity and tendency of reasoning paths are defined to refine the noninferior set, through which the final fault causes can be identified according to decision-making demands, thus realizing fault reasoning of the multi-constraint and multi-objective complex system. Reasoning results demonstrate that the improved multi-objective ant colony optimization (IMACO) can perform reasoning and locate fault positions precisely by solving the multi-objective fault diagnosis model, which provides a new method to solve the problem of multi-constraint and multi-objective fault diagnosis and reasoning of complex systems.
Progress in multidisciplinary design optimization at NASA Langley
NASA Technical Reports Server (NTRS)
Padula, Sharon L.
1993-01-01
Multidisciplinary Design Optimization refers to some combination of disciplinary analyses, sensitivity analysis, and optimization techniques used to design complex engineering systems. The ultimate objective of this research at NASA Langley Research Center is to help the US industry reduce the costs associated with development, manufacturing, and maintenance of aerospace vehicles while improving system performance. This report reviews progress towards this objective and highlights topics for future research. Aerospace design problems selected from the author's research illustrate strengths and weaknesses in existing multidisciplinary optimization techniques. The techniques discussed include multiobjective optimization, global sensitivity equations and sequential linear programming.
A knowledge-based tool for multilevel decomposition of a complex design problem
NASA Technical Reports Server (NTRS)
Rogers, James L.
1989-01-01
Although much work has been done in applying artificial intelligence (AI) tools and techniques to problems in different engineering disciplines, only recently has the application of these tools begun to spread to the decomposition of complex design problems. A new tool based on AI techniques has been developed to implement a decomposition scheme suitable for multilevel optimization and display of data in an N x N matrix format.
Efficient protocols for Stirling heat engines at the micro-scale
NASA Astrophysics Data System (ADS)
Muratore-Ginanneschi, Paolo; Schwieger, Kay
2015-10-01
We investigate the thermodynamic efficiency of sub-micro-scale Stirling heat engines operating under the conditions described by overdamped stochastic thermodynamics. We show how to construct optimal protocols such that at maximum power the efficiency attains for constant isotropic mobility the universal law η=2 ηC/(4-ηC) , where ηC is the efficiency of an ideal Carnot cycle. We show that these protocols are specified by the solution of an optimal mass transport problem. Such solution can be determined explicitly using well-known Monge-Ampère-Kantorovich reconstruction algorithms. Furthermore, we show that the same law describes the efficiency of heat engines operating at maximum work over short time periods. Finally, we illustrate the straightforward extension of these results to cases when the mobility is anisotropic and temperature dependent.
NASA Technical Reports Server (NTRS)
Rogers, J. L.; Barthelemy, J.-F. M.
1986-01-01
An expert system called EXADS has been developed to aid users of the Automated Design Synthesis (ADS) general purpose optimization program. ADS has approximately 100 combinations of strategy, optimizer, and one-dimensional search options from which to choose, and it is difficult for a nonexpert to make this choice. This expert system aids the user in choosing the best combination of options based on the user's knowledge of the problem and the expert knowledge stored in the knowledge base. The knowledge base is divided into three categories: constrained problems, unconstrained problems, and constrained problems being treated as unconstrained problems. The inference engine and rules are written in LISP; the system contains about 200 rules and executes on DEC-VAX (with Franz-LISP) and IBM PC (with IQ-LISP) computers.
Application of an Optimal Tuner Selection Approach for On-Board Self-Tuning Engine Models
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Armstrong, Jeffrey B.; Garg, Sanjay
2012-01-01
An enhanced design methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented in this paper. It specifically addresses the under-determined estimation problem, in which there are more unknown parameters than available sensor measurements. This work builds upon an existing technique for systematically selecting a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. While the existing technique was optimized for open-loop engine operation at a fixed design point, in this paper an alternative formulation is presented that enables the technique to be optimized for an engine operating under closed-loop control throughout the flight envelope. The theoretical Kalman filter mean squared estimation error at a steady-state closed-loop operating point is derived, and the tuner selection approach applied to minimize this error is discussed. A technique for constructing a globally optimal tuning parameter vector, which enables full-envelope application of the technology, is also presented, along with design steps for adjusting the dynamic response of the Kalman filter state estimates. Results from the application of the technique to linear and nonlinear aircraft engine simulations are presented and compared to the conventional approach of tuner selection. The new methodology is shown to yield a significant improvement in on-line Kalman filter estimation accuracy.
Horsetail matching: a flexible approach to optimization under uncertainty
NASA Astrophysics Data System (ADS)
Cook, L. W.; Jarrett, J. P.
2018-04-01
It is important to design engineering systems to be robust with respect to uncertainties in the design process. Often, this is done by considering statistical moments, but over-reliance on statistical moments when formulating a robust optimization can produce designs that are stochastically dominated by other feasible designs. This article instead proposes a formulation for optimization under uncertainty that minimizes the difference between a design's cumulative distribution function and a target. A standard target is proposed that produces stochastically non-dominated designs, but the formulation also offers enough flexibility to recover existing approaches for robust optimization. A numerical implementation is developed that employs kernels to give a differentiable objective function. The method is applied to algebraic test problems and a robust transonic airfoil design problem where it is compared to multi-objective, weighted-sum and density matching approaches to robust optimization; several advantages over these existing methods are demonstrated.
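A sketch of the core ingredient described above: a kernel-smoothed empirical CDF of a design's quantity of interest, compared against a target CDF through an integrated squared-difference metric. The quadratic quantity of interest and the uniform target are illustrative assumptions, not the article's test cases.

```python
import numpy as np
from scipy.special import expit

def smooth_cdf(samples, t, bandwidth=0.05):
    """Kernel-smoothed (hence differentiable) empirical CDF evaluated at points t."""
    z = (t[:, None] - samples[None, :]) / bandwidth
    return np.mean(expit(z), axis=1)

def horsetail_metric(design, n_mc=2000, seed=2):
    rng = np.random.default_rng(seed)
    u = rng.normal(0.0, 1.0, n_mc)                # uncertain input
    q = (design - 1.0)**2 + 0.5 * design * u      # quantity of interest (illustrative)
    t = np.linspace(-2.0, 6.0, 400)
    target = np.clip(t / 1.5, 0.0, 1.0)           # target CDF: QoI uniform on [0, 1.5]
    return np.trapz((smooth_cdf(q, t) - target)**2, t)

designs = np.linspace(0.0, 2.0, 21)
scores = [horsetail_metric(d) for d in designs]
print("design minimizing CDF mismatch:", designs[int(np.argmin(scores))])
```

Because the smoothed CDF is differentiable in the samples, the mismatch metric admits gradients with respect to the design variables, which is what makes the formulation usable inside gradient-based optimizers.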
Hu, Cong; Li, Zhi; Zhou, Tian; Zhu, Aijun; Xu, Chuanpei
2016-01-01
We propose a new meta-heuristic algorithm named Levy flights multi-verse optimizer (LFMVO), which incorporates Levy flights into multi-verse optimizer (MVO) algorithm to solve numerical and engineering optimization problems. The Original MVO easily falls into stagnation when wormholes stochastically re-span a number of universes (solutions) around the best universe achieved over the course of iterations. Since Levy flights are superior in exploring unknown, large-scale search space, they are integrated into the previous best universe to force MVO out of stagnation. We test this method on three sets of 23 well-known benchmark test functions and an NP complete problem of test scheduling for Network-on-Chip (NoC). Experimental results prove that the proposed LFMVO is more competitive than its peers in both the quality of the resulting solutions and convergence speed. PMID:27926946
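A sketch of the Levy-flight perturbation that the abstract describes applying around the best universe, using Mantegna's algorithm to draw heavy-tailed steps. The step scaling, the greedy acceptance rule and the sphere objective are illustrative assumptions rather than the LFMVO update itself.

```python
import numpy as np
from math import gamma, sin, pi

def levy_step(dim, beta=1.5, rng=None):
    """Draw a Levy-distributed step via Mantegna's algorithm."""
    rng = rng or np.random.default_rng()
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

def sphere(x):
    return float(np.sum(x ** 2))

rng = np.random.default_rng(3)
best = rng.uniform(-10.0, 10.0, 5)          # current best solution (illustrative)
for _ in range(500):
    candidate = best + 0.05 * levy_step(best.size, rng=rng)  # heavy-tailed jump
    if sphere(candidate) < sphere(best):
        best = candidate
print("best objective:", sphere(best))
```

The occasional very long step produced by the heavy-tailed distribution is what lets the search escape the stagnation around the incumbent best that the abstract attributes to the original MVO.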
Optimal trajectories of aircraft and spacecraft
NASA Technical Reports Server (NTRS)
Miele, A.
1990-01-01
Work done on algorithms for the numerical solutions of optimal control problems and their application to the computation of optimal flight trajectories of aircraft and spacecraft is summarized. General considerations on calculus of variations, optimal control, numerical algorithms, and applications of these algorithms to real-world problems are presented. The sequential gradient-restoration algorithm (SGRA) is examined for the numerical solution of optimal control problems of the Bolza type. Both the primal formulation and the dual formulation are discussed. Aircraft trajectories, in particular, the application of the dual sequential gradient-restoration algorithm (DSGRA) to the determination of optimal flight trajectories in the presence of windshear are described. Both take-off trajectories and abort landing trajectories are discussed. Take-off trajectories are optimized by minimizing the peak deviation of the absolute path inclination from a reference value. Abort landing trajectories are optimized by minimizing the peak drop of altitude from a reference value. The survival capability of an aircraft in a severe windshear is discussed, and the optimal trajectories are found to be superior to both constant pitch trajectories and maximum angle of attack trajectories. Spacecraft trajectories, in particular, the application of the primal sequential gradient-restoration algorithm (PSGRA) to the determination of optimal flight trajectories for aeroassisted orbital transfer are examined. Both the coplanar case and the noncoplanar case are discussed within the frame of three problems: minimization of the total characteristic velocity; minimization of the time integral of the square of the path inclination; and minimization of the peak heating rate. The solution of the second problem is called the nearly-grazing solution, and its merits are pointed out as a useful engineering compromise between energy requirements and aerodynamic heating requirements.
GA-optimization for rapid prototype system demonstration
NASA Technical Reports Server (NTRS)
Kim, Jinwoo; Zeigler, Bernard P.
1994-01-01
An application of the Genetic Algorithm (GA) is discussed. A novel scheme of hierarchical GA was developed to solve complicated engineering problems which require optimization of a large number of parameters with high precision. High-level GAs search for the few parameters which are much more sensitive to system performance, while low-level GAs search in more detail and employ a greater number of parameters for further optimization. Therefore, the complexity of the search is decreased and the computing resources are used more efficiently.
Design Optimization of Systems Governed by Partial Differential Equations. Phase 1
1989-03-01
DIFFERENTIAL EQUATIONS" SUBMITTED TO: AIR FORCE OFFICE OF SCIENTIFIC RESEARCH AFOSR/NM ATTN: Major James Crowley BUILDING 410, ROOM 209 BOLLING AFB, DC 20332...of his algorithms called DELIGHT. We consider this work to be of signal importance for the future of all engineer- ing design optimization. Prof...to be set up in a subroutine, which would be called by the optimization code. We then intended to pursue a slow and orderly progression of the problem
Astronomy Village Reaches for New Heights
NASA Astrophysics Data System (ADS)
Croft, S. K.; Pompea, S. M.
2007-12-01
We are developing a set of complex, multimedia-based instructional modules emphasizing technical and scientific issues related to the Giant Segmented Mirror Telescope (GSMT) project. The modules' pedagogy will be open-ended and problem-based to promote development of problem-solving skills. Problem-based-learning modules that emphasize work on open-ended complex real-world problems are particularly valuable in illustrating and promoting a perspective on the process of science and engineering. Research in this area shows that these kinds of learning experiences are superior to more conventional student training in terms of gains in student learning. The format for the modules will be based on the award-winning multimedia educational Astronomy Village products that present students with a simulated environment: a mountaintop community surrounded by a cluster of telescopes, satellite receivers, and telecommunication towers. A number of "buildings" are found in the Village, such as a library, a laboratory, and an auditorium. Each building contains an array of information sources and computer simulations. Students navigate through their research with a mentor via embedded video. The first module will be "Observatory Site Selection." Students will use astronomical data, basic weather information, and sky brightness data to select the best site for an observatory. Students will investigate the six GSMT sites considered by the professional site selection teams. Students will explore weather and basic site issues (e.g., roads and topography) using remote sensing images, computational fluid dynamics results, turbulence profiles, and scintillation of the different sites. Comparison of student problem solving with expert problem solving will also be done as part of the module. As part of a site selection team they will have to construct a case and present it on why they chose a particular site. The second module will address aspects of system engineering and optimization for a GSMT-like telescope. Basic system issues will be addressed and studied. These might include various controls issues and optimization issues such as mirror figure, mirror support stability, and wind loading trade-offs. Using system modeling and system optimization results from existing and early GSMT trade studies, we will create a simulation where students are part of an engineering design and optimization team. They will explore the cost/performance/schedule issues associated with the GSMT design.
Wu, Xueyun; Yang, Dong; Zhu, Xiangcheng; Feng, Zhiyang; Lv, Zhengbin; Zhang, Yaozhou; Shen, Ben; Xu, Zhinan
2011-01-01
The heterologous production of iso-migrastatin (iso-MGS) was successfully demonstrated in an engineered S. lividans SB11002 strain, which was derived from S. lividans K4–114, following introduction of pBS11001, which harbored the entire mgs biosynthetic gene cluster. However, under similar fermentation conditions, the iso-MGS titer in the engineered strain was significantly lower than that in the native producer - Streptomyces platensis NRRL 18993. To circumvent the problem of low iso-MGS titers and to expand the utility of this heterologous system for iso-MGS biosynthesis and engineering, systematic optimization of the fermentation medium was carried out. The effects of major components in the cultivation medium, including carbon, organic and inorganic nitrogen sources, were investigated using a single factor optimization method. As a result, sucrose and yeast extract were determined to be the best carbon and organic nitrogen sources, resulting in optimized iso-MGS production. Conversely, all other inorganic nitrogen sources evaluated produced various levels of inhibition of iso-MGS production. The final optimized R2YE production medium produced iso-MGS with a titer of 86.5 mg/L, about 3.6-fold higher than that in the original R2YE medium, and 1.5 fold higher than that found within the native S. platensis NRRL 18993 producer. PMID:21625393
Fuzzy linear model for production optimization of mining systems with multiple entities
NASA Astrophysics Data System (ADS)
Vujic, Slobodan; Benovic, Tomo; Miljanovic, Igor; Hudej, Marjan; Milutinovic, Aleksandar; Pavlovic, Petar
2011-12-01
Planning and production optimization of mining systems with multiple mines or several work sites (entities) using fuzzy linear programming (LP) was studied. LP is the most commonly used operations research method in mining engineering. After an introductory review of the properties and limitations of applying LP, short reviews of the general settings of deterministic and fuzzy LP models are presented. For the purpose of comparative analysis, the application of both LP models is presented using the example of the Bauxite Basin Niksic with five mines. The assessment shows that LP is an efficient mathematical modeling tool in production planning and in solving many other single-criterion optimization problems of mining engineering. After comparing the advantages and deficiencies of the deterministic and fuzzy LP models, the conclusion presents the benefits of the fuzzy LP model, while also stating that seeking the optimal production plan requires an overall analysis encompassing both LP modeling approaches.
NASA Astrophysics Data System (ADS)
Umbarkar, A. J.; Balande, U. T.; Seth, P. D.
2017-06-01
The field of nature-inspired computing and optimization techniques has evolved to solve difficult optimization problems in diverse fields of engineering, science and technology. The firefly attraction process is mimicked in the algorithm for solving optimization problems. In the Firefly Algorithm (FA), fireflies are ranked using a sorting algorithm; the original FA uses bubble sort for ranking the fireflies. In this paper, quicksort replaces bubble sort to decrease the time complexity of FA. The test set consists of unconstrained benchmark functions from CEC 2005 [22]. The comparison of FA using bubble sort and FA using quicksort is performed with respect to best, worst, mean, standard deviation, number of comparisons and execution time. The experimental results show that FA using quicksort requires fewer comparisons but more execution time. Increasing the number of fireflies helps convergence to the optimal solution, and when the dimension is varied the algorithm performs better at lower dimensions than at higher ones.
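A sketch of one FA generation showing the ranking step that the paper targets (done here with NumPy's argsort, whose default kind is a quicksort variant, rather than bubble sort or a hand-written quicksort) and the standard attraction move. The parameter values and the sphere objective are typical defaults for illustration, not those used in the study.

```python
import numpy as np

def sphere(x):
    return float(np.sum(x ** 2))

rng = np.random.default_rng(4)
n, dim = 25, 5
beta0, gamma_, alpha = 1.0, 0.5, 0.2
fireflies = rng.uniform(-2.0, 2.0, (n, dim))

for generation in range(100):
    brightness = np.array([sphere(f) for f in fireflies])
    order = np.argsort(brightness)             # ranking step (quicksort-style sort)
    fireflies = fireflies[order]
    for i in range(n):
        for j in range(i):                      # firefly j is brighter than firefly i
            r2 = np.sum((fireflies[i] - fireflies[j]) ** 2)
            beta = beta0 * np.exp(-gamma_ * r2)
            fireflies[i] += (beta * (fireflies[j] - fireflies[i])
                             + alpha * (rng.random(dim) - 0.5))

print("best value found:", min(sphere(f) for f in fireflies))
```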
Optimization in optical systems revisited: Beyond genetic algorithms
NASA Astrophysics Data System (ADS)
Gagnon, Denis; Dumont, Joey; Dubé, Louis
2013-05-01
Designing integrated photonic devices such as waveguides, beam-splitters and beam-shapers often requires optimization of a cost function over a large solution space. Metaheuristics - algorithms based on empirical rules for exploring the solution space - are specifically tailored to those problems. One of the most widely used metaheuristics is the standard genetic algorithm (SGA), based on the evolution of a population of candidate solutions. However, the stochastic nature of the SGA sometimes prevents access to the optimal solution. Our goal is to show that a parallel tabu search (PTS) algorithm is more suited to optimization problems in general, and to photonics in particular. PTS is based on several search processes using a pool of diversified initial solutions. To assess the performance of both algorithms (SGA and PTS), we consider an integrated photonics design problem, the generation of arbitrary beam profiles using a two-dimensional waveguide-based dielectric structure. The authors acknowledge financial support from the Natural Sciences and Engineering Research Council of Canada (NSERC).
A Comparison of Approximation Modeling Techniques: Polynomial Versus Interpolating Models
NASA Technical Reports Server (NTRS)
Giunta, Anthony A.; Watson, Layne T.
1998-01-01
Two methods of creating approximation models are compared through the calculation of the modeling accuracy on test problems involving one, five, and ten independent variables. Here, the test problems are representative of the modeling challenges typically encountered in realistic engineering optimization problems. The first approximation model is a quadratic polynomial created using the method of least squares. This type of polynomial model has seen considerable use in recent engineering optimization studies due to its computational simplicity and ease of use. However, quadratic polynomial models may be of limited accuracy when the response data to be modeled have multiple local extrema. The second approximation model employs an interpolation scheme known as kriging developed in the fields of spatial statistics and geostatistics. This class of interpolating model has the flexibility to model response data with multiple local extrema. However, this flexibility is obtained at an increase in computational expense and a decrease in ease of use. The intent of this study is to provide an initial exploration of the accuracy and modeling capabilities of these two approximation methods.
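A small illustration of the two surrogate types compared in the paper: a least-squares quadratic polynomial and a Gaussian radial-basis interpolant (used here as a simple stand-in for kriging), fitted to samples of a one-dimensional multimodal function. The test function, sample count and kernel width are arbitrary choices for illustration.

```python
import numpy as np

def f(x):                                    # multimodal test response
    return np.sin(3.0 * x) + 0.3 * x ** 2

x_train = np.linspace(-3.0, 3.0, 12)
y_train = f(x_train)
x_test = np.linspace(-3.0, 3.0, 200)

# Surrogate 1: quadratic polynomial by least squares (smooths over local extrema).
coeffs = np.polyfit(x_train, y_train, deg=2)
poly_pred = np.polyval(coeffs, x_test)

# Surrogate 2: Gaussian RBF interpolation (kriging-like, reproduces the samples).
def rbf(a, b, length=0.8):
    return np.exp(-((a[:, None] - b[None, :]) / length) ** 2)

weights = np.linalg.solve(rbf(x_train, x_train), y_train)
rbf_pred = rbf(x_test, x_train) @ weights

print("quadratic RMS error:", np.sqrt(np.mean((poly_pred - f(x_test)) ** 2)))
print("RBF       RMS error:", np.sqrt(np.mean((rbf_pred - f(x_test)) ** 2)))
```

On a response like this, the quadratic fit averages out the oscillations while the interpolant follows them, which mirrors the trade-off between simplicity and flexibility discussed in the abstract.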
L^1 -optimality conditions for the circular restricted three-body problem
NASA Astrophysics Data System (ADS)
Chen, Zheng
2016-11-01
In this paper, the L^1 -minimization for the translational motion of a spacecraft in the circular restricted three-body problem (CRTBP) is considered. Necessary conditions are derived by using the Pontryagin Maximum Principle (PMP), revealing the existence of bang-bang and singular controls. Singular extremals are analyzed, recalling the existence of the Fuller phenomenon according to the theories developed in (Marchal in J Optim Theory Appl 11(5):441-486, 1973; Zelikin and Borisov in Theory of Chattering Control with Applications to Astronautics, Robotics, Economics, and Engineering. Birkhäuser, Basal 1994; in J Math Sci 114(3):1227-1344, 2003). The sufficient optimality conditions for the L^1 -minimization problem with fixed endpoints have been developed in (Chen et al. in SIAM J Control Optim 54(3):1245-1265, 2016). In the current paper, we establish second-order conditions for optimal control problems with more general final conditions defined by a smooth submanifold target. In addition, the numerical implementation to check these optimality conditions is given. Finally, approximating the Earth-Moon-Spacecraft system by the CRTBP, an L^1 -minimization trajectory for the translational motion of a spacecraft is computed by combining a shooting method with a continuation method in (Caillau et al. in Celest Mech Dyn Astron 114:137-150, 2012; Caillau and Daoud in SIAM J Control Optim 50(6):3178-3202, 2012). The local optimality of the computed trajectory is asserted thanks to the second-order optimality conditions developed.
Jointly Optimal Design for MIMO Radar Frequency-Hopping Waveforms Using Game Theory
2016-04-01
Preston M. Green Department of Electrical and Systems Engineering, Washington University in St. Louis, St. Louis, MO, USA. Using a colocated multiple input/multiple output (MIMO) radar system, we consider the problem of...
An introduction to optimal power flow: Theory, formulation, and examples
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frank, Stephen; Rebennack, Steffen
The set of optimization problems in electric power systems engineering known collectively as Optimal Power Flow (OPF) is one of the most practically important and well-researched subfields of constrained nonlinear optimization. OPF has enjoyed a rich history of research, innovation, and publication since its debut five decades ago. Nevertheless, entry into OPF research is a daunting task for the uninitiated, both due to the sheer volume of literature and because OPF's ubiquity within the electric power systems community has led authors to assume a great deal of prior knowledge that readers unfamiliar with electric power systems may not possess. This article provides an introduction to OPF from an operations research perspective; it describes a complete and concise basis of knowledge for beginning OPF research. The discussion is tailored for the operations researcher who has experience with nonlinear optimization but little knowledge of electrical engineering. Topics covered include power systems modeling, the power flow equations, typical OPF formulations, and common OPF extensions.
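To make the flavor of the problem class concrete, here is a heavily simplified sketch: a linear economic dispatch that minimizes generation cost subject to a power balance and generator limits, solved with SciPy's linprog. It omits the network power flow equations entirely, so it is only a toy relative of OPF, and the cost and capacity numbers are invented for illustration.

```python
import numpy as np
from scipy.optimize import linprog

# Three generators: marginal costs ($/MWh) and capacity limits (MW).
cost = np.array([20.0, 35.0, 50.0])
p_min = np.array([10.0, 10.0, 0.0])
p_max = np.array([100.0, 80.0, 60.0])
demand = 180.0                                # total load to be served (MW)

# minimize cost @ p  subject to  sum(p) == demand,  p_min <= p <= p_max
result = linprog(c=cost,
                 A_eq=np.ones((1, 3)), b_eq=[demand],
                 bounds=list(zip(p_min, p_max)),
                 method="highs")

print("dispatch (MW):", np.round(result.x, 1), "cost ($/h):", result.fun)
```

A full OPF replaces the single balance equation with the nonlinear power flow equations and adds voltage and line-flow limits, which is what turns the problem into the constrained nonlinear program the article surveys.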
D'Elia, Marta; Perego, Mauro; Bochev, Pavel B.; ...
2015-12-21
We develop and analyze an optimization-based method for the coupling of nonlocal and local diffusion problems with mixed volume constraints and boundary conditions. The approach formulates the coupling as a control problem where the states are the solutions of the nonlocal and local equations, the objective is to minimize their mismatch on the overlap of the nonlocal and local domains, and the controls are virtual volume constraints and boundary conditions. When some assumptions on the kernel functions hold, we prove that the resulting optimization problem is well-posed and discuss its implementation using Sandia's agile software components toolkit. As a result, the latter provides the groundwork for the development of engineering analysis tools, while numerical results for nonlocal diffusion in three dimensions illustrate key properties of the optimization-based coupling method.
NASA Astrophysics Data System (ADS)
Luo, Qiankun; Wu, Jianfeng; Yang, Yun; Qian, Jiazhong; Wu, Jichun
2014-11-01
This study develops a new probabilistic multi-objective fast harmony search algorithm (PMOFHS) for optimal design of groundwater remediation systems under uncertainty associated with the hydraulic conductivity (K) of aquifers. The PMOFHS integrates the previously developed deterministic multi-objective optimization method, namely multi-objective fast harmony search algorithm (MOFHS) with a probabilistic sorting technique to search for Pareto-optimal solutions to multi-objective optimization problems in a noisy hydrogeological environment arising from insufficient K data. The PMOFHS is then coupled with the commonly used flow and transport codes, MODFLOW and MT3DMS, to identify the optimal design of groundwater remediation systems for a two-dimensional hypothetical test problem and a three-dimensional Indiana field application involving two objectives: (i) minimization of the total remediation cost through the engineering planning horizon, and (ii) minimization of the mass remaining in the aquifer at the end of the operational period, whereby the pump-and-treat (PAT) technology is used to clean up contaminated groundwater. Also, Monte Carlo (MC) analysis is employed to evaluate the effectiveness of the proposed methodology. Comprehensive analysis indicates that the proposed PMOFHS can find Pareto-optimal solutions with low variability and high reliability and is a potentially effective tool for optimizing multi-objective groundwater remediation problems under uncertainty.
NASA Astrophysics Data System (ADS)
Kassa, Semu Mitiku; Tsegay, Teklay Hailay
2017-08-01
Tri-level optimization problems are optimization problems with three nested hierarchical structures, where in most cases conflicting objectives are set at each level of the hierarchy. Such problems are common in management, engineering design and decision making situations in general, and are known to be strongly NP-hard. Existing solution methods lack universality in solving these types of problems. In this paper, we investigate a tri-level programming problem with quadratic fractional objective functions at each of the three levels. A solution algorithm has been proposed by applying a fuzzy goal programming approach and by reformulating the fractional constraints into equivalent but non-fractional non-linear constraints. Based on the transformed formulation, an iterative procedure is developed that can yield a satisfactory solution to the tri-level problem. The numerical results on various illustrative examples demonstrated that the proposed algorithm is very promising and that it can also be used to solve larger-sized as well as n-level problems of similar structure.
Reverse engineering time discrete finite dynamical systems: a feasible undertaking?
Delgado-Eckert, Edgar
2009-01-01
With the advent of high-throughput profiling methods, interest in reverse engineering the structure and dynamics of biochemical networks is high. Recently an algorithm for reverse engineering of biochemical networks was developed by Laubenbacher and Stigler. It is a top-down approach using time discrete dynamical systems. One of its key steps includes the choice of a term order, a technicality imposed by the use of Gröbner-bases calculations. The aim of this paper is to identify minimal requirements on data sets to be used with this algorithm and to characterize optimal data sets. We found minimal requirements on a data set based on how many terms the functions to be reverse engineered display. Furthermore, we identified optimal data sets, which we characterized using a geometric property called "general position". Moreover, we developed a constructive method to generate optimal data sets, provided a codimensional condition is fulfilled. In addition, we present a generalization of their algorithm that does not depend on the choice of a term order. For this method we derived a formula for the probability of finding the correct model, provided the data set used is optimal. We analyzed the asymptotic behavior of the probability formula for a growing number of variables n (i.e. interacting chemicals). Unfortunately, this probability converges to zero rapidly as n grows. Therefore, even if an optimal data set is used and the restrictions in using term orders are overcome, the reverse engineering problem remains unfeasible, unless prodigious amounts of data are available. Such large data sets are experimentally impossible to generate with today's technologies.
Towards Robust Designs Via Multiple-Objective Optimization Methods
NASA Technical Reports Server (NTRS)
Man Mohan, Rai
2006-01-01
Fabricating and operating complex systems involves dealing with uncertainty in the relevant variables. In the case of aircraft, flow conditions are subject to change during operation. Efficiency and engine noise may be different from the expected values because of manufacturing tolerances and normal wear and tear. Engine components may have a shorter life than expected because of manufacturing tolerances. In spite of the important effect of operating- and manufacturing-uncertainty on the performance and expected life of the component or system, traditional aerodynamic shape optimization has focused on obtaining the best design given a set of deterministic flow conditions. Clearly it is important both to maintain near-optimal performance levels at off-design operating conditions and to ensure that performance does not degrade appreciably when the component shape differs from the optimal shape due to manufacturing tolerances and normal wear and tear. These requirements naturally lead to the idea of robust optimal design wherein the concept of robustness to various perturbations is built into the design optimization procedure. The basic ideas involved in robust optimal design will be included in this lecture. The imposition of the additional requirement of robustness results in a multiple-objective optimization problem requiring appropriate solution procedures. Typically the costs associated with multiple-objective optimization are substantial. Therefore efficient multiple-objective optimization procedures are crucial to the rapid deployment of the principles of robust design in industry. Hence the companion set of lecture notes (Single- and Multiple-Objective Optimization with Differential Evolution and Neural Networks) deals with methodology for solving multiple-objective optimization problems efficiently, reliably and with little user intervention. Applications of the methodologies presented in the companion lecture to robust design will be included here. The evolutionary method (DE) is first used to solve a relatively difficult problem in extended surface heat transfer wherein optimal fin geometries are obtained for different safe operating base temperatures. The objective of maximizing the safe operating base temperature range is in direct conflict with the objective of maximizing fin heat transfer. This problem is a good example of achieving robustness in the context of changing operating conditions. The evolutionary method is then used to design a turbine airfoil; the two objectives being reduced sensitivity of the pressure distribution to small changes in the airfoil shape and the maximization of the trailing edge wedge angle with the consequent increase in airfoil thickness and strength. This is a relevant example of achieving robustness to manufacturing tolerances and wear and tear in the presence of other objectives.
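The core differential evolution cycle used in such studies can be sketched in a few lines. The loop below is a generic single-objective DE/rand/1/bin on the sphere test function; it is not the multi-objective machinery of the lecture notes, and the population size, scale factor F, and crossover rate CR are conventional values chosen only for illustration.

```python
# Minimal DE/rand/1/bin for minimizing a test function (sphere).
# Generic sketch of the core mutation/crossover/selection cycle only.
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    return float(np.sum(x**2))

dim, pop_size, F, CR, gens = 5, 30, 0.7, 0.9, 200
pop = rng.uniform(-5.0, 5.0, size=(pop_size, dim))
fit = np.array([sphere(x) for x in pop])

for _ in range(gens):
    for i in range(pop_size):
        # Mutation: pick three distinct individuals different from i.
        a, b, c = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
        mutant = pop[a] + F * (pop[b] - pop[c])
        # Binomial crossover with one guaranteed inherited component.
        cross = rng.random(dim) < CR
        cross[rng.integers(dim)] = True
        trial = np.where(cross, mutant, pop[i])
        # Greedy selection.
        f_trial = sphere(trial)
        if f_trial <= fit[i]:
            pop[i], fit[i] = trial, f_trial

print("best value found:", fit.min())
```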
Optimal Tuner Selection for Kalman-Filter-Based Aircraft Engine Performance Estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Garg, Sanjay
2011-01-01
An emerging approach in the field of aircraft engine controls and system health management is the inclusion of real-time, onboard models for the inflight estimation of engine performance variations. This technology, typically based on Kalman-filter concepts, enables the estimation of unmeasured engine performance parameters that can be directly utilized by controls, prognostics, and health-management applications. A challenge that complicates this practice is the fact that an aircraft engine's performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. Through Kalman-filter-based estimation techniques, the level of engine performance degradation can be estimated, given that there are at least as many sensors as health parameters to be estimated. However, in an aircraft engine, the number of sensors available is typically less than the number of health parameters, presenting an under-determined estimation problem. A common approach to address this shortcoming is to estimate a subset of the health parameters, referred to as model tuning parameters. The problem/objective is to optimally select the model tuning parameters to minimize Kalman-filter-based estimation error. A tuner selection technique has been developed that specifically addresses the under-determined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine that seeks to minimize the theoretical mean-squared estimation error of the Kalman filter. This approach can significantly reduce the error in onboard aircraft engine parameter estimation applications such as model-based diagnostic, controls, and life usage calculations. The advantage of the innovation is the significant reduction in estimation errors that it can provide relative to the conventional approach of selecting a subset of health parameters to serve as the model tuning parameter vector. Because this technique needs only to be performed during the system design process, it places no additional computation burden on the onboard Kalman filter implementation. The technique has been developed for aircraft engine onboard estimation applications, as this application typically presents an under-determined estimation problem. However, this generic technique could be applied to other industries using gas turbine engine technology.
NASA Technical Reports Server (NTRS)
Krasteva, Denitza T.
1998-01-01
Multidisciplinary design optimization (MDO) for large-scale engineering problems poses many challenges (e.g., the design of an efficient concurrent paradigm for global optimization based on disciplinary analyses, expensive computations over vast data sets, etc.). This work focuses on the application of distributed schemes for massively parallel architectures to MDO problems, as a tool for reducing computation time and solving larger problems. The specific problem considered here is configuration optimization of a high speed civil transport (HSCT), and the efficient parallelization of the embedded paradigm for reasonable design space identification. Two distributed dynamic load balancing techniques (random polling and global round robin with message combining) and two necessary termination detection schemes (global task count and token passing) were implemented and evaluated in terms of effectiveness and scalability to large problem sizes and a thousand processors. The effect of certain parameters on execution time was also inspected. Empirical results demonstrated stable performance and effectiveness for all schemes, and the parametric study showed that the selected algorithmic parameters have a negligible effect on performance.
NASA Astrophysics Data System (ADS)
Villanueva Perez, Carlos Hernan
Computational design optimization provides designers with automated techniques to develop novel and non-intuitive optimal designs. Topology optimization is a design optimization technique that allows for the evolution of a broad variety of geometries in the optimization process. Traditional density-based topology optimization methods often lack a sufficient resolution of the geometry and physical response, which prevents direct use of the optimized design in manufacturing and the accurate modeling of the physical response of boundary conditions. The goal of this thesis is to introduce a unified topology optimization framework that uses the Level Set Method (LSM) to describe the design geometry and the eXtended Finite Element Method (XFEM) to solve the governing equations and measure the performance of the design. The methodology is presented as an alternative to density-based optimization approaches, and is able to accommodate a broad range of engineering design problems. The framework presents state-of-the-art methods for immersed boundary techniques to stabilize the systems of equations and enforce the boundary conditions, and is studied with applications in 2D and 3D linear elastic structures, incompressible flow, and energy and species transport problems to test the robustness and the characteristics of the method. A comparison of the framework against density-based topology optimization approaches is studied with regards to convergence, performance, and the capability to manufacture the designs. Furthermore, the ability to control the shape of the design to operate within manufacturing constraints is developed and studied. The analysis capability of the framework is validated quantitatively through comparison against previous benchmark studies, and qualitatively through its application to topology optimization problems. The design optimization problems converge to intuitive designs and resemble well the results from previous 2D or density-based studies.
NASA Astrophysics Data System (ADS)
Noor-E-Alam, Md.; Doucette, John
2015-08-01
Grid-based location problems (GBLPs) can be used to solve location problems in business, engineering, resource exploitation, and even in the field of medical sciences. To solve these decision problems, an integer linear programming (ILP) model is designed and developed to provide the optimal solution for GBLPs considering fixed cost criteria. Preliminary results show that the ILP model is efficient in solving small to moderate-sized problems. However, this ILP model becomes intractable in solving large-scale instances. Therefore, a decomposition heuristic is proposed to solve these large-scale GBLPs, which demonstrates significant reduction of solution runtimes. To benchmark the proposed heuristic, results are compared with the exact solution via ILP. The experimental results show that the proposed method significantly outperforms the exact method in runtime with minimal (and in most cases, no) loss of optimality.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beckner, B.L.; Xong, X.
1995-12-31
A method for optimizing the net present value of a full field development by varying the placement and sequence of production wells is presented. This approach is automated and combines an economics package and Mobil's in-house simulator, PEGASUS, within a simulated annealing optimization engine. A novel framing of the well placement and scheduling problem as a classic "travelling salesman problem" is required before optimization via simulated annealing can be applied practically. An example of a full field development using this technique shows that non-uniform well spacings are optimal (from an NPV standpoint) when the effects of well interference and variable reservoir properties are considered. Examples of optimizing field NPV with variable well costs also show that non-uniform well spacings are optimal. Project NPV increases of 25 to 30 million dollars were shown using the optimal, non-uniform development versus reasonable, uniform developments. The ability of this technology to deduce these non-uniform well spacings opens up many potential applications that should materially impact the economic performance of field developments.
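The travelling-salesman framing lends itself to a compact simulated annealing illustration. The sketch below anneals a tour over random synthetic city coordinates using segment-reversal (2-opt style) moves; the instance, cooling schedule, and move rule are illustrative assumptions, and none of the reservoir-simulation or economics coupling described above is reproduced.

```python
# Simulated annealing on a small synthetic travelling salesman instance.
# Neighbor moves reverse a random tour segment (2-opt style).
import numpy as np

rng = np.random.default_rng(1)
cities = rng.random((25, 2))             # random planar city coordinates

def tour_length(order):
    pts = cities[order]
    return float(np.sum(np.linalg.norm(pts - np.roll(pts, -1, axis=0), axis=1)))

order = rng.permutation(len(cities))
best, best_len = order.copy(), tour_length(order)
temp = 1.0
for step in range(20000):
    i, j = sorted(rng.integers(0, len(cities), size=2))
    cand = order.copy()
    cand[i:j + 1] = cand[i:j + 1][::-1]   # reverse a segment of the tour
    delta = tour_length(cand) - tour_length(order)
    if delta < 0 or rng.random() < np.exp(-delta / temp):
        order = cand
    if tour_length(order) < best_len:
        best, best_len = order.copy(), tour_length(order)
    temp *= 0.9997                        # geometric cooling schedule

print("best tour length found:", round(best_len, 3))
```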
A Framework for the Optimization of Discrete-Event Simulation Models
NASA Technical Reports Server (NTRS)
Joshi, B. D.; Unal, R.; White, N. H.; Morris, W. D.
1996-01-01
With the growing use of computer modeling and simulation, in all aspects of engineering, the scope of traditional optimization has to be extended to include simulation models. Some unique aspects have to be addressed while optimizing via stochastic simulation models. The optimization procedure has to explicitly account for the randomness inherent in the stochastic measures predicted by the model. This paper outlines a general purpose framework for optimization of terminating discrete-event simulation models. The methodology combines a chance constraint approach for problem formulation, together with standard statistical estimation and analyses techniques. The applicability of the optimization framework is illustrated by minimizing the operation and support resources of a launch vehicle, through a simulation model.
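A chance-constraint formulation of the kind referred to here can be made concrete with a toy sizing problem: choose the smallest capacity x such that the probability that the simulated requirement exceeds x stays below a risk level alpha, with the probability estimated from replications. In the sketch below, a lognormal draw stands in for the output of one terminating simulation replication; the distribution, sample size, and alpha are assumptions for illustration only.

```python
# Toy chance-constrained sizing: pick the smallest capacity x such that
# P(simulated requirement > x) <= alpha, estimated from n replications.
# The lognormal draw stands in for output of a stochastic simulation model.
import numpy as np

rng = np.random.default_rng(2)
alpha, n_reps = 0.05, 5000

def simulate_requirement():
    """Placeholder for one terminating simulation replication."""
    return rng.lognormal(mean=3.0, sigma=0.4)

samples = np.array([simulate_requirement() for _ in range(n_reps)])

# The empirical (1 - alpha) quantile is the smallest capacity satisfying
# the chance constraint with respect to the observed replications.
capacity = np.quantile(samples, 1.0 - alpha)
violation_rate = np.mean(samples > capacity)
print(f"capacity: {capacity:.2f}, estimated violation probability: {violation_rate:.3f}")
```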
Mathematical logic as a mean of solving the problems of power supply for buildings and constructions
NASA Astrophysics Data System (ADS)
Pryadko, Igor; Nozdrina, Ekaterina; Boltaevsky, Andrey
2017-10-01
The article analyzes questions of applying mathematical logic to engineering design associated with machinery and construction. The aim of the work is to study the logical elaborations of the Russian electrical engineer V.I. Shestakov. These elaborations are considered in connection with the problem of analysis and synthesis of relay contact circuits of the degenerate (A) class, which the scientist solved. The article proposes to use Shestakov's elaborations for the optimization of modern high-tech buildings and constructions. The second part of the article revisits the events surrounding the development of the application of mathematical logic to the analysis and synthesis of electric circuits, both relay and bridging. The arguments concerning the priority of authorship of these elaborations among the Russian electrical engineer V. I. Shestakov, K. Shannon (one of the founders of computer science), and the Japanese engineer A. Nakashima are discussed. The disagreement between V. I. Shestakov and representatives of the school of M. A. Gavrilov is also touched on.
Weight optimization of plane truss using genetic algorithm
NASA Astrophysics Data System (ADS)
Neeraja, D.; Kamireddy, Thejesh; Santosh Kumar, Potnuru; Simha Reddy, Vijay
2017-11-01
Optimizing a structure on the basis of weight has many practical benefits in every engineering field. A structure's efficiency is proportionally related to its weight, and hence weight optimization gains prime importance. In civil engineering, weight-optimized structural elements are economical and easier to transport to the site. In this study, a genetic optimization algorithm for the weight optimization of a steel truss, considering its shape, size and topology aspects, has been developed in MATLAB. Material strength and buckling stability criteria have been adopted from the IS 800-2007 code for construction steel. The constraints considered in the present study are fabrication, basic nodes, displacements, and compatibility. Genetic programming is a natural-selection search technique intended to combine good solutions to a problem over many generations to improve the results. All solutions are generated randomly and represented individually by a binary string, in analogy with natural chromosomes; hence the term genetic programming. The outcome of the study is a MATLAB program which can optimise a steel truss and display the optimised topology along with element shapes, deflections, and stress results.
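A binary-string genetic algorithm of the kind described can be sketched compactly. The example below minimizes a simple one-variable function with a 16-bit encoding, tournament selection, one-point crossover, and bit-flip mutation; the encoding length, rates, and objective are illustrative choices, and none of the truss modeling, IS 800-2007 checks, or MATLAB implementation of the study is reproduced.

```python
# Minimal binary-encoded genetic algorithm: minimize f(x) = (x - 3.7)^2
# for x in [0, 10], with x encoded as a 16-bit string. Tournament selection,
# one-point crossover, and bit-flip mutation only; no constraint handling.
import numpy as np

rng = np.random.default_rng(3)
n_bits, pop_size, gens, p_mut = 16, 40, 100, 1.0 / 16

def decode(bits):
    return int("".join(map(str, bits)), 2) / (2**n_bits - 1) * 10.0

def fitness(bits):
    x = decode(bits)
    return -(x - 3.7) ** 2          # the GA maximizes fitness, so negate the cost

pop = rng.integers(0, 2, size=(pop_size, n_bits))
for _ in range(gens):
    new_pop = []
    for _ in range(pop_size // 2):
        # Tournament selection of two parents.
        parents = []
        for _ in range(2):
            i, j = rng.integers(0, pop_size, size=2)
            parents.append(pop[i] if fitness(pop[i]) > fitness(pop[j]) else pop[j])
        # One-point crossover.
        cut = rng.integers(1, n_bits)
        child1 = np.concatenate([parents[0][:cut], parents[1][cut:]])
        child2 = np.concatenate([parents[1][:cut], parents[0][cut:]])
        # Bit-flip mutation.
        for child in (child1, child2):
            flip = rng.random(n_bits) < p_mut
            child[flip] ^= 1
            new_pop.append(child)
    pop = np.array(new_pop)

best = max(pop, key=fitness)
print("best x found:", round(decode(best), 4))
```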
An improved harmony search algorithm for emergency inspection scheduling
NASA Astrophysics Data System (ADS)
Kallioras, Nikos A.; Lagaros, Nikos D.; Karlaftis, Matthew G.
2014-11-01
The ability of nature-inspired search algorithms to efficiently handle combinatorial problems, and their successful implementation in many fields of engineering and applied sciences, have led to the development of new, improved algorithms. In this work, an improved harmony search (IHS) algorithm is presented, while a holistic approach for solving the problem of post-disaster infrastructure management is also proposed. The efficiency of IHS is compared with that of the algorithms of particle swarm optimization, differential evolution, basic harmony search and the pure random search procedure, when solving the districting problem that is the first part of post-disaster infrastructure management. The ant colony optimization algorithm is employed for solving the associated routing problem that constitutes the second part. The comparison is based on the quality of the results obtained, the computational demands and the sensitivity to the algorithmic parameters.
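For reference, the basic harmony search loop that the IHS improves upon can be written as follows. The harmony memory size, memory-consideration rate (HMCR), pitch-adjustment rate (PAR), and bandwidth below are generic defaults, and the objective is a continuous test function rather than the districting problem.

```python
# Basic harmony search on a continuous test function (sphere).
# HMCR: probability of drawing a component from harmony memory;
# PAR: probability of pitch-adjusting a memorized component by up to +/- bw.
import numpy as np

rng = np.random.default_rng(4)

def objective(x):
    return float(np.sum(x**2))

dim, hm_size, hmcr, par, bw, iters = 5, 20, 0.9, 0.3, 0.2, 5000
lower, upper = -5.0, 5.0

memory = rng.uniform(lower, upper, size=(hm_size, dim))
scores = np.array([objective(h) for h in memory])

for _ in range(iters):
    new = np.empty(dim)
    for d in range(dim):
        if rng.random() < hmcr:
            new[d] = memory[rng.integers(hm_size), d]        # memory consideration
            if rng.random() < par:
                new[d] += bw * rng.uniform(-1.0, 1.0)        # pitch adjustment
        else:
            new[d] = rng.uniform(lower, upper)               # random selection
    new = np.clip(new, lower, upper)
    worst = int(np.argmax(scores))
    if objective(new) < scores[worst]:                       # replace the worst harmony
        memory[worst], scores[worst] = new, objective(new)

print("best objective in harmony memory:", scores.min())
```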
NASA Astrophysics Data System (ADS)
Faitar, C.; Novac, I.
2017-08-01
Today, the concept of energy efficiency, or energy optimization, in ships has become one of the main problems for engineers worldwide. Increasing the reliability of a crude oil supertanker means, among other things, improving its energy performance and optimizing the ship's fuel consumption through the development of engines and the propulsion system or the use of alternative energies. An effective and reliable Power Management System (PMS) in a vessel's operating system is also important: it reduces operational costs and keeps the machinery of the power system working under minimum stress in all operating conditions. Studying the Energy Efficiency Design Index and the Energy Efficiency Operational Indicator for a crude oil supertanker allows us to study the reconfiguration of the ship's power system by introducing new generation systems.
Minimal complexity control law synthesis
NASA Technical Reports Server (NTRS)
Bernstein, Dennis S.; Haddad, Wassim M.; Nett, Carl N.
1989-01-01
A paradigm for control law design for modern engineering systems is proposed: Minimize control law complexity subject to the achievement of a specified accuracy in the face of a specified level of uncertainty. Correspondingly, the overall goal is to make progress towards the development of a control law design methodology which supports this paradigm. Researchers achieve this goal by developing a general theory of optimal constrained-structure dynamic output feedback compensation, where here constrained-structure means that the dynamic-structure (e.g., dynamic order, pole locations, zero locations, etc.) of the output feedback compensation is constrained in some way. By applying this theory in an innovative fashion, where here the indicated iteration occurs over the choice of the compensator dynamic-structure, the paradigm stated above can, in principle, be realized. The optimal constrained-structure dynamic output feedback problem is formulated in general terms. An elegant method for reducing optimal constrained-structure dynamic output feedback problems to optimal static output feedback problems is then developed. This reduction procedure makes use of star products, linear fractional transformations, and linear fractional decompositions, and yields as a byproduct a complete characterization of the class of optimal constrained-structure dynamic output feedback problems which can be reduced to optimal static output feedback problems. Issues such as operational/physical constraints, operating-point variations, and processor throughput/memory limitations are considered, and it is shown how anti-windup/bumpless transfer, gain-scheduling, and digital processor implementation can be facilitated by constraining the controller dynamic-structure in an appropriate fashion.
Ordinal optimization and its application to complex deterministic problems
NASA Astrophysics Data System (ADS)
Yang, Mike Shang-Yu
1998-10-01
We present in this thesis a new perspective to approach a general class of optimization problems characterized by large deterministic complexities. Many problems of real-world concerns today lack analyzable structures and almost always involve high level of difficulties and complexities in the evaluation process. Advances in computer technology allow us to build computer models to simulate the evaluation process through numerical means, but the burden of high complexities remains to tax the simulation with an exorbitant computing cost for each evaluation. Such a resource requirement makes local fine-tuning of a known design difficult under most circumstances, let alone global optimization. Kolmogorov equivalence of complexity and randomness in computation theory is introduced to resolve this difficulty by converting the complex deterministic model to a stochastic pseudo-model composed of a simple deterministic component and a white-noise-like stochastic term. The resulting randomness is then dealt with by a noise-robust approach called Ordinal Optimization. Ordinal Optimization utilizes Goal Softening and Ordinal Comparison to achieve an efficient and quantifiable selection of designs in the initial search process. The approach is substantiated by a case study in the turbine blade manufacturing process. The problem involves the optimization of the manufacturing process of the integrally bladed rotor in the turbine engines of U.S. Air Force fighter jets. The intertwining interactions among the material, thermomechanical, and geometrical changes make the current FEM approach prohibitively uneconomical in the optimization process. The generalized OO approach to complex deterministic problems is applied here with great success. Empirical results indicate a saving of nearly 95% in the computing cost.
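The ordinal comparison idea can be illustrated with a small numerical experiment: evaluate many candidate designs with a cheap, noisy surrogate, keep the observed top s, and check how many of the truly best g designs the selected subset captures. The "true" and "noisy" objectives below are synthetic stand-ins chosen only to show the alignment measurement.

```python
# Ordinal optimization illustration: select designs by a cheap noisy
# evaluation and measure the alignment with the true top performers.
import numpy as np

rng = np.random.default_rng(5)
n_designs, g, s, noise_sd = 1000, 20, 50, 1.0

theta = rng.uniform(-3.0, 3.0, size=n_designs)     # candidate designs
true_cost = (theta - 1.0) ** 2                      # expensive "exact" model
noisy_cost = true_cost + rng.normal(0.0, noise_sd, size=n_designs)  # crude model

true_top_g = set(np.argsort(true_cost)[:g])         # truly good designs
selected_s = set(np.argsort(noisy_cost)[:s])        # designs kept after ordinal comparison

overlap = len(true_top_g & selected_s)
print(f"selected set of {s} contains {overlap} of the true top {g} designs")
```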
NASA Technical Reports Server (NTRS)
Pilkey, W. D.; Wang, B. P.; Yoo, Y.; Clark, B.
1973-01-01
A description and applications of a computer capability for determining the ultimate optimal behavior of a dynamically loaded structural-mechanical system are presented. This capability provides characteristics of the theoretically best, or limiting, design concept according to response criteria dictated by design requirements. Equations of motion of the system in first or second order form include incompletely specified elements whose characteristics are determined in the optimization of one or more performance indices subject to the response criteria in the form of constraints. The system is subject to deterministic transient inputs, and the computer capability is designed to operate with a large linear programming on-the-shelf software package which performs the desired optimization. The report contains user-oriented program documentation in engineering, problem-oriented form. Applications cover a wide variety of dynamics problems including those associated with such diverse configurations as a missile-silo system, impacting freight cars, and an aircraft ride control system.
Autorotation flight control system
NASA Technical Reports Server (NTRS)
Bachelder, Edward N. (Inventor); Aponso, Bimal L. (Inventor); Lee, Dong-Chan (Inventor)
2011-01-01
The present invention provides computer implemented methodology that permits the safe landing and recovery of rotorcraft following engine failure. With this invention successful autorotations may be performed from well within the unsafe operating area of the height-velocity profile of a helicopter by employing the fast and robust real-time trajectory optimization algorithm that commands control motion through an intuitive pilot display, or directly in the case of autonomous rotorcraft. The algorithm generates optimal trajectories and control commands via the direct-collocation optimization method, solved using a nonlinear programming problem solver. The control inputs computed are collective pitch and aircraft pitch, which are easily tracked and manipulated by the pilot or converted to control actuator commands for automated operation during autorotation in the case of an autonomous rotorcraft. The formulation of the optimal control problem has been carefully tailored so the solutions resemble those of an expert pilot, accounting for the performance limitations of the rotorcraft and safety concerns.
The role of optimization in the next generation of computer-based design tools
NASA Technical Reports Server (NTRS)
Rogan, J. Edward
1989-01-01
There is a close relationship between design optimization and the emerging new generation of computer-based tools for engineering design. With some notable exceptions, the development of these new tools has not taken full advantage of recent advances in numerical design optimization theory and practice. Recent work in the field of design process architecture has included an assessment of the impact of next-generation computer-based design tools on the design process. These results are summarized, and insights into the role of optimization in a design process based on these next-generation tools are presented. An example problem has been worked out to illustrate the application of this technique. The example problem - layout of an aircraft main landing gear - is one that is simple enough to be solved by many other techniques. Although the mathematical relationships describing the objective function and constraints for the landing gear layout problem can be written explicitly and are quite straightforward, an approximation technique has been used in the solution of this problem that can just as easily be applied to integrate supportability or producibility assessments using theory of measurement techniques into the design decision-making process.
NASA Astrophysics Data System (ADS)
Rocha, Ana Maria A. C.; Costa, M. Fernanda P.; Fernandes, Edite M. G. P.
2016-12-01
This article presents a shifted hyperbolic penalty function and proposes an augmented Lagrangian-based algorithm for non-convex constrained global optimization problems. Convergence to an ε-global minimizer is proved. At each iteration k, the algorithm requires the ε^k-global minimization of a bound constrained optimization subproblem, where ε^k → ε. The subproblems are solved by a stochastic population-based metaheuristic that relies on the artificial fish swarm paradigm and a two-swarm strategy. To enhance the speed of convergence, the algorithm invokes the Nelder-Mead local search with a dynamically defined probability. Numerical experiments with benchmark functions and engineering design problems are presented. The results show that the proposed shifted hyperbolic augmented Lagrangian compares favorably with other deterministic and stochastic penalty-based methods.
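The outer multiplier-update / inner subproblem-solve structure can be sketched on a classical equality-constrained test problem. The inner solver below is Nelder-Mead, echoing the local search mentioned in the abstract, but the penalty used is the standard quadratic augmented Lagrangian rather than the shifted hyperbolic penalty proposed in the article, and the test problem, starting point, and update rules are generic assumptions.

```python
# Generic augmented Lagrangian loop for: minimize f(x) subject to h(x) = 0,
# with a quadratic augmented Lagrangian and Nelder-Mead as the inner solver.
# Test problem: minimize x0 + x1 subject to x0^2 + x1^2 = 2 (optimum (-1, -1)).
import numpy as np
from scipy.optimize import minimize

def f(x):
    return x[0] + x[1]

def h(x):
    return x[0] ** 2 + x[1] ** 2 - 2.0

lam, mu = 0.0, 1.0             # multiplier estimate and penalty parameter
x = np.array([0.5, 0.5])       # starting point

for _ in range(10):
    def aug_lag(z, lam=lam, mu=mu):
        c = h(z)
        return f(z) + lam * c + 0.5 * mu * c ** 2

    x = minimize(aug_lag, x, method="Nelder-Mead").x   # inner (unconstrained) solve
    lam += mu * h(x)            # first-order multiplier update
    mu *= 2.0                   # tighten the penalty

print("solution:", np.round(x, 4), "constraint violation:", abs(h(x)))
```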
NASA Technical Reports Server (NTRS)
Jaggers, R. F.
1977-01-01
A derivation of an explicit solution to the two-point boundary-value problem of exoatmospheric guidance and trajectory optimization is presented. Fixed initial conditions and continuous-burn, multistage thrusting are assumed. Any number of end conditions from one to six (throttling is required in the case of six) can be satisfied in an explicit and practically optimal manner. The explicit equations converge for off-nominal conditions such as engine failure, abort, target switch, etc. The self-starting, predictor/corrector solution involves no Newton-Raphson iterations, numerical integration, or first-guess values, and converges rapidly if physically possible. A form of this algorithm has been chosen for onboard guidance, as well as real-time and preflight ground targeting and trajectory shaping for the NASA Space Shuttle Program.
Problems With Deployment of Multi-Domained, Multi-Homed Mobile Networks
NASA Technical Reports Server (NTRS)
Ivancic, William D.
2008-01-01
This document describes numerous problems associated with deployment of multi-homed mobile platforms consisting of multiple networks and traversing large geographical areas. The purpose of this document is to provide insight into real-world deployment issues and provide information to groups that are addressing many issues related to multi-homing, policy-based routing, route optimization and mobile security - particularly those groups within the Internet Engineering Task Force.
NASA Technical Reports Server (NTRS)
Chen, Robert T. N.; Zhao, Yi-Yuan; Aiken, Edwin W. (Technical Monitor)
1995-01-01
Engine failure represents a major safety concern to helicopter operations, especially in the critical flight phases of takeoff and landing from/to small, confined areas. As a result, the JAA and FAA both certificate a transport helicopter as either Category-A or Category-B according to the ability to continue its operations following engine failures. A Category-B helicopter must be able to land safely in the event of one or all engine failures. There is no requirement, however, for continued flight capability. In contrast, Category-A certification, which applies to multi-engine transport helicopters with independent engine systems, requires that they continue the flight with one engine inoperative (OEI). These stringent requirements, while permitting its operations from rooftops and oil rigs and flight to areas where no emergency landing sites are available, restrict the payload of a Category-A transport helicopter to a value safe for continued flight as well as for landing with one engine inoperative. The current certification process involves extensive flight tests, which are potentially dangerous, costly, and time consuming. These tests require the pilot to simulate engine failures at increasingly critical conditions. Flight manuals based on these tests tend to provide very conservative recommendations with regard to maximum takeoff weight or required runway length. There are very few theoretical studies on this subject to identify the fundamental parameters and tradeoff factors involved. Furthermore, a capability for real-time generation of OEI optimal trajectories is very desirable for providing timely cockpit display guidance to assist the pilot in reducing his workload and to increase safety in a consistent and reliable manner. A joint research program involving NASA Ames Research Center, the FAA, and the University of Minnesota is being conducted to determine OEI optimal control strategies and the associated optimal trajectories for continued takeoff (CTO), rejected takeoff (RTO), balked landing (BL), and continued landing (CL) for a twin engine helicopter in both VTOL and STOL terminal-area operations. This proposed paper will present the problem formulation, the optimal control solution methods, and the key results of the trajectory optimization studies for both STOL and VTOL OEI operations. In addition, new results concerning the recently developed methodology, which enable a real-time generation of optimal OEI trajectories, will be presented in the paper. This new real-time capability was developed to support the second piloted simulator investigation on cockpit displays for Category-A operations being scheduled for the NASA Ames Vertical Motion Simulator in June-August of 1995. The first VMS simulation was conducted in 1994 and reported.
Discussion on teaching reform of environmental planning and management
NASA Astrophysics Data System (ADS)
Zhang, Qiugen; Chen, Suhua; Xie, Yu; Wei, Li'an; Ding, Yuan
2018-05-01
The course in environmental planning and management is part of the environmental engineering curriculum established by the teaching steering committee for environmental science and engineering of the Ministry of Education, and it is a core course for Chinese engineering education professional certification. It plays an important role in cultivating the environmental planning and environmental management abilities of environmental engineering majors. The selection and optimization of the course's teaching content are discussed, including updating and optimizing the teaching content and constructing a system of teaching resources. The comprehensive application of teaching methods is discussed, including the synthesis and combination of teaching methods. The combination of assessment methods is also discussed, including formative assessment of regular coursework grades and the final course examination. Through this comprehensive teaching reform of the curriculum, students' knowledge has been broadened, their status as active subjects of learning and their autonomy have been enhanced, their interest in learning has been stimulated, and their ability to find, analyze and solve problems has been improved. Students' innovative ability and initiative have also been well cultivated.
Multiobjective hyper heuristic scheme for system design and optimization
NASA Astrophysics Data System (ADS)
Rafique, Amer Farhan
2012-11-01
As system design becomes more multifaceted, integrated, and complex, traditional single-objective approaches to optimal design are becoming less efficient and effective. Single-objective optimization methods produce a single optimal solution, whereas multiobjective methods produce a Pareto front. The foremost intent is to obtain a reasonably distributed Pareto-optimal solution set, independent of the problem instance, through a multiobjective scheme. A further objective of the intended approach is to improve the worthiness of the outputs of the complex engineering system design process at the conceptual design phase. The process is automated in order to give the system designer the ability to study and analyze a large number of possible solutions in a short time. This article presents a Multiobjective Hyper-Heuristic Optimization Scheme based on low-level meta-heuristics developed for application to engineering system design. A stochastic function is used to manage the low-level meta-heuristics and increase the likelihood of reaching a globally optimal solution. Genetic Algorithm, Simulated Annealing and Swarm Intelligence are used as low-level meta-heuristics in this study. The performance of the proposed scheme is investigated through a comprehensive empirical analysis, yielding acceptable results. One of the primary motives for performing multiobjective optimization is that current engineering systems require the simultaneous optimization of multiple, conflicting objectives. Randomized decision making makes the implementation of this scheme attractive and easy. Injecting feasible solutions significantly alters the search direction and also adds population diversity, resulting in the accomplishment of the pre-defined goals set in the proposed scheme.
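The idea of a stochastic high-level controller choosing among low-level moves can be sketched as a simple selection hyper-heuristic: at each iteration one perturbation operator is drawn with probability proportional to its accumulated credit, and operators that produce improvements are rewarded. The operators, credit rule, and test function below are generic placeholders, not the GA/SA/swarm modules or the management scheme of the cited work.

```python
# Minimal selection hyper-heuristic: choose among low-level perturbation
# operators with probabilities proportional to accumulated credit.
import numpy as np

rng = np.random.default_rng(6)

def objective(x):
    # Rastrigin-like multimodal test function.
    return float(np.sum(x**2) + 10 * np.sum(1 - np.cos(2 * np.pi * x)))

def small_gaussian(x):
    return x + rng.normal(0, 0.1, x.size)

def large_gaussian(x):
    return x + rng.normal(0, 1.0, x.size)

def single_coordinate(x):
    y = x.copy()
    y[rng.integers(x.size)] = rng.uniform(-5, 5)
    return y

operators = [small_gaussian, large_gaussian, single_coordinate]
credit = np.ones(len(operators))          # uniform prior over operators

x = rng.uniform(-5, 5, size=5)
fx = objective(x)
for _ in range(5000):
    k = rng.choice(len(operators), p=credit / credit.sum())
    y = np.clip(operators[k](x), -5, 5)
    fy = objective(y)
    if fy < fx:                           # accept improvements, reward the operator
        x, fx = y, fy
        credit[k] += 1.0

print("best objective:", round(fx, 4), "operator credits:", credit)
```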
NASA Technical Reports Server (NTRS)
Nguyen, Duc T.
1990-01-01
Practical engineering applications can often be formulated in the form of a constrained optimization problem. There are several solution algorithms for solving a constrained optimization problem. One approach is to convert a constrained problem into a series of unconstrained problems. Furthermore, unconstrained solution algorithms can be used as part of the constrained solution algorithms. Structural optimization is an iterative process where one starts with an initial design, and a finite element structural analysis is then performed to calculate the response of the system (such as displacements, stresses, eigenvalues, etc.). Based upon the sensitivity information on the objective and constraint functions, an optimizer, such as ADS or IDESIGN, can be used to find the new, improved design. For the structural analysis phase, the equation solver for the system of simultaneous, linear equations plays a key role since it is needed for either static, or eigenvalue, or dynamic analysis. For practical, large-scale structural analysis-synthesis applications, computational time can be excessively large. Thus, it is necessary to have a new structural analysis-synthesis code which employs new solution algorithms to exploit both parallel and vector capabilities offered by modern, high performance computers such as the Convex, Cray-2 and Cray-YMP computers. The objective of this research project is, therefore, to incorporate the latest development in the parallel-vector equation solver, PVSOLVE, into the widely popular finite-element production code, such as the SAP-4. Furthermore, several nonlinear unconstrained optimization subroutines have also been developed and tested under a parallel computer environment. The unconstrained optimization subroutines are not only useful in their own right, but they can also be incorporated into a more popular constrained optimization code, such as ADS.
Particle swarm optimization with recombination and dynamic linkage discovery.
Chen, Ying-Ping; Peng, Wen-Chih; Jian, Ming-Chung
2007-12-01
In this paper, we try to improve the performance of the particle swarm optimizer by incorporating the linkage concept, which is an essential mechanism in genetic algorithms, and design a new linkage identification technique called dynamic linkage discovery to address the linkage problem in real-parameter optimization problems. Dynamic linkage discovery is a costless and effective linkage recognition technique that adapts the linkage configuration by employing only the selection operator without extra judging criteria irrelevant to the objective function. Moreover, a recombination operator that utilizes the discovered linkage configuration to promote the cooperation of particle swarm optimizer and dynamic linkage discovery is accordingly developed. By integrating the particle swarm optimizer, dynamic linkage discovery, and recombination operator, we propose a new hybridization of optimization methodologies called particle swarm optimization with recombination and dynamic linkage discovery (PSO-RDL). In order to study the capability of PSO-RDL, numerical experiments were conducted on a set of benchmark functions as well as on an important real-world application. The benchmark functions used in this paper were proposed in the 2005 Institute of Electrical and Electronics Engineers Congress on Evolutionary Computation. The experimental results on the benchmark functions indicate that PSO-RDL can provide a level of performance comparable to that given by other advanced optimization techniques. In addition to the benchmark, PSO-RDL was also used to solve the economic dispatch (ED) problem for power systems, which is a real-world problem and highly constrained. The results indicate that PSO-RDL can successfully solve the ED problem for the three-unit power system and obtain the currently known best solution for the 40-unit system.
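For contrast with PSO-RDL, a plain global-best particle swarm optimizer can be written in a few lines; the inertia weight and acceleration coefficients below are common textbook values, and no recombination or linkage discovery is included.

```python
# Plain global-best PSO on a continuous test function (sphere); this is the
# baseline optimizer only, not the PSO-RDL hybrid described above.
import numpy as np

rng = np.random.default_rng(7)

def objective(x):
    return np.sum(x**2, axis=-1)

dim, n_particles, iters = 10, 30, 300
w, c1, c2 = 0.72, 1.49, 1.49               # inertia and acceleration coefficients

pos = rng.uniform(-5, 5, size=(n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = objective(pos)
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    val = objective(pos)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print("best objective found:", pbest_val.min())
```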
Design feasibility via ascent optimality for next-generation spacecraft
NASA Astrophysics Data System (ADS)
Miele, A.; Mancuso, S.
This paper deals with the optimization of the ascent trajectories for single-stage-sub-orbit (SSSO), single-stage-to-orbit (SSTO), and two-stage-to-orbit (TSTO) rocket-powered spacecraft. The maximum payload weight problem is studied for different values of the engine specific impulse and spacecraft structural factor. The main conclusions are as follows: feasibility of SSSO spacecraft is guaranteed for all the parameter combinations considered; feasibility of SSTO spacecraft depends strongly on the parameter combination chosen; and not only is feasibility of TSTO spacecraft guaranteed for all the parameter combinations considered, but the TSTO payload is several times the SSTO payload. Improvements in engine specific impulse and spacecraft structural factor are desirable and crucial for SSTO feasibility; indeed, aerodynamic improvements do not yield significant improvements in payload. For SSSO, SSTO, and TSTO spacecraft, simple engineering approximations are developed connecting the maximum payload weight to the engine specific impulse and spacecraft structural factor. With reference to the specific impulse/structural factor domain, these engineering approximations lead to the construction of zero-payload lines separating the feasibility region (positive payload) from the unfeasibility region (negative payload).
Choi, Sol; Kim, Hyun Uk; Kim, Tae Yong; Lee, Sang Yup
2016-11-01
To address climate change and environmental problems, it is becoming increasingly important to establish biorefineries for the production of chemicals from renewable non-food biomass. Here we report the development of Escherichia coli strains capable of overproducing the four-carbon platform chemical 4-hydroxybutyric acid (4-HB). Because 4-HB production is significantly affected by aeration level, genome-scale metabolic model-based engineering strategies were designed under aerobic and microaerobic conditions with emphasis on the oxidative/reductive TCA branches and the glyoxylate shunt. Several different metabolic engineering strategies were employed to develop strains suitable for fermentation both under aerobic and microaerobic conditions. It was found that the microaerobic condition was more efficient than the aerobic condition in achieving higher titer and productivity of 4-HB. The final engineered strain produced 103.4 g/L of 4-HB by microaerobic fed-batch fermentation using glycerol. The aeration-dependent optimization strategy of the TCA cycle will be useful for developing microbial strains producing other reduced derivative chemicals of TCA cycle intermediates. Copyright © 2016 International Metabolic Engineering Society. Published by Elsevier Inc. All rights reserved.
Measuring System Value in the Ares 1 Rocket Using an Uncertainty-Based Coupling Analysis Approach
NASA Astrophysics Data System (ADS)
Wenger, Christopher
Coupling of physics in large-scale complex engineering systems must be correctly accounted for during the systems engineering process to ensure no unanticipated behaviors or unintended consequences arise in the system during operation. Structural vibration of large segmented solid rocket motors, known as thrust oscillation, is a well-documented problem that can affect the health and safety of any crew onboard. Within the Ares 1 rocket, larger-than-anticipated vibrations were recorded during late-stage flight that propagated from the engine chamber to the Orion crew module. Upon investigation, engineers found the root cause to be feedback of the rocket's structure onto the fluid flow within the engine. The goal of this paper is to showcase a coupling strength analysis from the field of Multidisciplinary Design Optimization to identify the major impacts that caused the thrust oscillation event in the Ares 1. Once these are identified, an uncertainty analysis of the coupled system, using an uncertainty-based optimization technique, is used to quantify the likelihood that these strong or weak interactions take place.
NASA Technical Reports Server (NTRS)
Lyon, Jeffery A.
1995-01-01
Optimal control theory is employed to determine the performance of abort to orbit (ATO) and return to launch site (RTLS) maneuvers for a single-stage-to-orbit vehicle. The vehicle configuration examined is a seven-engine, winged-body vehicle that lifts off vertically and lands horizontally. The abort maneuvers occur as the vehicle ascends to orbit and are initiated when the vehicle suffers an engine failure. The optimal control problems are numerically solved in discretized form via a nonlinear programming (NLP) algorithm. A description highlighting the attributes of this NLP method is provided. ATO maneuver results show that the vehicle is capable of ascending to orbit with a single engine failure at lift-off. Two-engine-out ATO maneuvers are not possible from the launch pad, but are possible after launch when the thrust-to-weight ratio becomes sufficiently large. Results show that single-engine-out RTLS maneuvers can be made for up to 180 seconds after lift-off and that there are scenarios for which RTLS maneuvers should be performed instead of ATO maneuvers.
Making objective decisions in mechanical engineering problems
NASA Astrophysics Data System (ADS)
Raicu, A.; Oanta, E.; Sabau, A.
2017-08-01
The decision-making process has a great influence on the development of a given project, the goal being to select an optimal choice in a given context. Because of its great importance, decision making has been studied with various scientific methods, culminating in game theory, which is considered the foundation of the science of logical decision making in various fields. The paper presents some basic ideas from game theory in order to provide the information necessary to understand multiple-criteria decision making (MCDM) problems in engineering. The approach is to transform the multiple-criteria problem into a single-criterion decision problem using the notion of utility, together with the weighted sum model or the weighted product model. The weighted importance of the criteria is computed using the so-called Step method applied to a relation of preferences between the criteria. Two relevant examples from engineering are also presented. Future directions of research consist of the use of other types of criteria, the development of computer-based instruments for general decision-making problems, and the design of a software module based on expert-system principles to be included in the already operational Wiki software applications for polymeric materials.
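The weighted-sum step described here can be made concrete with a tiny example: normalize each criterion to a common utility scale, apply the criterion weights, and rank the alternatives by the weighted sum. The alternatives, criteria, weights, and min-max normalization below are made-up illustrations, not the Step-method weighting or the engineering examples of the article.

```python
# Weighted-sum multi-criteria ranking of three design alternatives.
# Criteria: cost (minimize), mass (minimize), reliability (maximize).
import numpy as np

alternatives = ["design A", "design B", "design C"]
#                  cost   mass   reliability
scores = np.array([[120.0, 45.0, 0.92],
                   [ 95.0, 60.0, 0.88],
                   [140.0, 38.0, 0.97]])
weights = np.array([0.5, 0.3, 0.2])        # assumed relative importance
maximize = np.array([False, False, True])  # direction of each criterion

# Min-max normalization to a [0, 1] utility for every criterion.
lo, hi = scores.min(axis=0), scores.max(axis=0)
util = (scores - lo) / (hi - lo)
util[:, ~maximize] = 1.0 - util[:, ~maximize]   # flip criteria that are minimized

overall = util @ weights
for name, u in sorted(zip(alternatives, overall), key=lambda t: -t[1]):
    print(f"{name}: weighted utility {u:.3f}")
```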
NASA Astrophysics Data System (ADS)
Nuh, M. Z.; Nasir, N. F.
2017-08-01
Biodiesel is a fuel comprised of mono-alkyl esters of long-chain fatty acids derived from renewable lipid feedstocks, such as vegetable oils and animal fats. Biodiesel production is a complex process which needs systematic design and optimization. However, no case study has applied the process system engineering (PSE) element of superstructure optimization to the batch process; this involves complex problems and uses mixed-integer nonlinear programming (MINLP). PSE offers a solution to complex engineering systems by enabling the use of viable tools and techniques to better manage and comprehend the complexity of the system. This study aims, first, to apply PSE tools to the simulation and optimization of the biodiesel process and to develop mathematical models for the components of the plant for cases A, B and C using published kinetic data; second, to determine an economic analysis for biodiesel production, focusing on heterogeneous catalysts; and finally, to develop the superstructure for biodiesel production using a heterogeneous catalyst. The mathematical models are developed from the superstructure, and the resulting mixed-integer non-linear model is solved and the economic analysis estimated using MATLAB software. The result of the optimization process, with the objective function of minimizing the annual production cost of the batch process for case C, is 23.2587 million USD. Overall, the implementation of process system engineering (PSE) has optimized the modelling, design and cost estimation, helping to resolve the complexity of batch production and processing of biodiesel.
NASA Astrophysics Data System (ADS)
Ivashkin, V. V.; Krylov, I. V.
2014-03-01
The problem of optimization of a spacecraft transfer to the Apophis asteroid is investigated. The scheme of transfer under analysis includes a geocentric stage of boosting the spacecraft with high thrust, a heliocentric stage of control by a low thrust engine, and a stage of deceleration with injection to an orbit of the asteroid's satellite. In doing this, the problem of optimal control is solved for cases of ideal and piecewise-constant low thrust, and the optimal magnitude and direction of spacecraft's hyperbolic velocity "at infinity" during departure from the Earth are determined. The spacecraft trajectories are found based on a specially developed comprehensive method of optimization. This method combines the method of dynamic programming at the first stage of analysis and the Pontryagin maximum principle at the concluding stage, together with the parameter continuation method. The estimates are obtained for the spacecraft's final mass and for the payload mass that can be delivered to the asteroid using the Soyuz-Fregat carrier launcher.
Use of multilevel modeling for determining optimal parameters of heat supply systems
NASA Astrophysics Data System (ADS)
Stennikov, V. A.; Barakhtenko, E. A.; Sokolov, D. V.
2017-07-01
The problem of finding optimal parameters of a heat-supply system (HSS) is in ensuring the required throughput capacity of a heat network by determining pipeline diameters and characteristics and location of pumping stations. Effective methods for solving this problem, i.e., the method of stepwise optimization based on the concept of dynamic programming and the method of multicircuit optimization, were proposed in the context of the hydraulic circuit theory developed at Melentiev Energy Systems Institute (Siberian Branch, Russian Academy of Sciences). These methods enable us to determine optimal parameters of various types of piping systems due to flexible adaptability of the calculation procedure to intricate nonlinear mathematical models describing features of used equipment items and methods of their construction and operation. The new and most significant results achieved in developing methodological support and software for finding optimal parameters of complex heat supply systems are presented: a new procedure for solving the problem based on multilevel decomposition of a heat network model that makes it possible to proceed from the initial problem to a set of interrelated, less cumbersome subproblems with reduced dimensionality; a new algorithm implementing the method of multicircuit optimization and focused on the calculation of a hierarchical model of a heat supply system; the SOSNA software system for determining optimum parameters of intricate heat-supply systems and implementing the developed methodological foundation. The proposed procedure and algorithm enable us to solve engineering problems of finding the optimal parameters of multicircuit heat supply systems having large (real) dimensionality, and are applied in solving urgent problems related to the optimal development and reconstruction of these systems. The developed methodological foundation and software can be used for designing heat supply systems in the Central and the Admiralty regions in St. Petersburg, the city of Bratsk, and the Magistral'nyi settlement.
Global optimization methods for engineering design
NASA Technical Reports Server (NTRS)
Arora, Jasbir S.
1990-01-01
The problem is to find a global minimum of Problem P. Necessary and sufficient conditions are available for local optimality. However, a global solution can be assured only under the assumption of convexity of the problem. If the constraint set S is compact and the cost function is continuous on it, the existence of a global minimum is guaranteed. However, since no global optimality conditions are available, a global solution can be found only by an exhaustive search that satisfies the inequality. The exhaustive search can be organized in such a way that the entire design space need not be searched for the solution, which reduces the computational burden somewhat. It is concluded that the zooming algorithm for global optimization appears to be a good alternative to stochastic methods. More testing is needed, and a general, robust, and efficient local minimizer is required. IDESIGN, which is based on a sequential quadratic programming algorithm, was used in all numerical calculations; since the feasible set keeps shrinking, a good algorithm for finding an initial feasible point is required. Such algorithms need to be developed and evaluated.
Structural optimization: Status and promise
NASA Astrophysics Data System (ADS)
Kamat, Manohar P.
Chapters contained in this book include fundamental concepts of optimum design, mathematical programming methods for constrained optimization, function approximations, approximate reanalysis methods, dual mathematical programming methods for constrained optimization, a generalized optimality criteria method, and a tutorial and survey of multicriteria optimization in engineering. Also included are chapters on the compromise decision support problem and the adaptive linear programming algorithm, sensitivity analyses of discrete and distributed systems, the design sensitivity analysis of nonlinear structures, optimization by decomposition, mixed elements in shape sensitivity analysis of structures based on local criteria, and optimization of stiffened cylindrical shells subjected to destabilizing loads. Other chapters are on applications to fixed-wing aircraft and spacecraft, integrated optimum structural and control design, modeling concurrency in the design of composite structures, and tools for structural optimization. (No individual items are abstracted in this volume)
Lin, Na; Chen, Hanning; Jing, Shikai; Liu, Fang; Liang, Xiaodan
2017-03-01
In recent years, symbiosis, as a rich source of potential engineering applications and computational models, has attracted increasing attention in the adaptive complex systems and evolutionary computing domains. Inspired by the different forms of symbiotic coevolution found in nature, this paper proposes a series of multi-swarm particle swarm optimizers called PS2Os, which extend the single-population particle swarm optimization (PSO) algorithm to an interacting multi-swarm model by constructing hierarchical interaction topologies and enhanced dynamical update equations. According to different symbiotic interrelationships, four versions of PS2O are initiated to mimic the mutualism, commensalism, predation, and competition mechanisms, respectively. In experiments with five benchmark problems, the proposed algorithms are shown to have considerable potential for solving complex optimization problems. The coevolutionary dynamics of the symbiotic species in each PS2O version are also studied to demonstrate how the heterogeneity of the different symbiotic interrelationships affects the algorithm's performance. PS2O is then used to solve the radio frequency identification (RFID) network planning (RNP) problem, which has a mixture of discrete and continuous variables. Simulation results show that the proposed algorithm outperforms the reference algorithms for planning RFID networks in terms of optimization accuracy and computational robustness.
Energy Efficient Waste Heat Recovery from an Engine Exhaust System
2016-12-01
targets. Since solar panels and wind turbines will not work for ships, the energy savings must come from making the existing power generation... achieve an approximate solution to the problem. The research for this thesis involved design by analysis of heat exchange in a gas turbine exhaust... effectiveness of a new style of heat exchanger for waste heat recovery. The new design sought to optimize heat recovery from a gas turbine engine exhaust as
Utility of coupling nonlinear optimization methods with numerical modeling software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Murphy, M.J.
1996-08-05
Results of using GLO (Global Local Optimizer), a general-purpose nonlinear optimization software package for investigating multi-parameter problems in science and engineering, are discussed. The package consists of the modular optimization control system (GLO), a graphical user interface (GLO-GUI), a pre-processor (GLO-PUT), a post-processor (GLO-GET), and the nonlinear optimization software modules GLOBAL and LOCAL. GLO is designed for controlling, and easy coupling to, any scientific software application. GLO runs the optimization module and the scientific software application in an iterative loop. At each iteration, the optimization module defines new values for the set of parameters being optimized. GLO-PUT inserts the new parameter values into the input file of the scientific application. GLO runs the application with the new parameter values. GLO-GET determines the value of the objective function by extracting the results of the analysis and comparing them to the desired result. GLO continues to run the scientific application until it finds the "best" set of parameters by minimizing (or maximizing) the objective function. An example problem showing the optimization of a material model (Taylor cylinder impact test) is presented.
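The optimize-insert-run-extract cycle described here is generic enough to sketch. The following minimal Python sketch illustrates the same loop with an off-the-shelf optimizer; the file names, the `hydrocode` executable, the single `yield_strength` parameter, and the target value are hypothetical placeholders, not the actual GLO interface.

```python
# Hypothetical sketch of the optimize-simulate loop described above; names are
# illustrative placeholders, not the real GLO/GLO-PUT/GLO-GET interface.
import subprocess
from scipy.optimize import minimize

TEMPLATE = "input_template.txt"   # assumed to contain a "{yield_strength}" placeholder

def write_input(params):
    """Pre-processor step (GLO-PUT analogue): insert candidate values into the input file."""
    with open(TEMPLATE) as f:
        text = f.read().format(yield_strength=params[0])
    with open("run.inp", "w") as f:
        f.write(text)

def extract_result():
    """Post-processor step (GLO-GET analogue): read the quantity of interest from the output."""
    with open("run.out") as f:
        return float(f.read().split()[-1])   # assumed single-number result at end of file

TARGET = 25.4   # desired (e.g., measured) value to match; illustrative

def objective(params):
    write_input(params)
    subprocess.run(["./hydrocode", "run.inp"], check=True)   # hypothetical scientific application
    return (extract_result() - TARGET) ** 2                  # misfit to be minimized

result = minimize(objective, x0=[300.0], method="Nelder-Mead")
print(result.x)
```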
A Rigorous Framework for Optimization of Expensive Functions by Surrogates
NASA Technical Reports Server (NTRS)
Booker, Andrew J.; Dennis, J. E., Jr.; Frank, Paul D.; Serafini, David B.; Torczon, Virginia; Trosset, Michael W.
1998-01-01
The goal of the research reported here is to develop rigorous optimization algorithms to apply to some engineering design problems for which the application of traditional optimization approaches is not practical. This paper presents and analyzes a framework for generating a sequence of approximations to the objective function and managing the use of these approximations as surrogates for optimization. The result is convergence to a minimizer of an expensive objective function subject to simple constraints. The approach is widely applicable because it does not require, or even explicitly approximate, derivatives of the objective. Numerical results are presented for a 31-variable helicopter rotor blade design example and for a standard optimization test example.
Software Performs Complex Design Analysis
NASA Technical Reports Server (NTRS)
2008-01-01
Designers use computational fluid dynamics (CFD) to gain greater understanding of the fluid flow phenomena involved in components being designed. They also use finite element analysis (FEA) as a tool to help gain greater understanding of the structural response of components to loads, stresses and strains, and the prediction of failure modes. Automated CFD and FEA engineering design has centered on shape optimization, which has been hindered by two major problems: 1) inadequate shape parameterization algorithms, and 2) inadequate algorithms for CFD and FEA grid modification. Working with software engineers at Stennis Space Center, a NASA commercial partner, Optimal Solutions Software LLC, was able to utilize its revolutionary, one-of-a-kind arbitrary shape deformation (ASD) capability, a major advancement in solving these two aforementioned problems, to optimize the shapes of complex pipe components that transport highly sensitive fluids. The ASD technology solves the problem of inadequate shape parameterization algorithms by allowing the CFD designers to freely create their own shape parameters, therefore eliminating the restriction of only being able to use the computer-aided design (CAD) parameters. The problem of inadequate algorithms for CFD grid modification is solved by the fact that the new software performs a smooth volumetric deformation. This eliminates the extremely costly process of having to remesh the grid for every shape change desired. The program can perform a design change in a markedly reduced amount of time, a process that would traditionally involve the designer returning to the CAD model to reshape and then remesh the shapes, something that has been known to take hours, days, even weeks or months, depending upon the size of the model.
NASA Technical Reports Server (NTRS)
Carter, Richard G.
1989-01-01
For optimization problems associated with engineering design, parameter estimation, image reconstruction, and other optimization/simulation applications, low-accuracy function and gradient values are frequently much less expensive to obtain than high-accuracy values. Here, researchers investigate the computational performance of trust region methods for nonlinear optimization when high-accuracy evaluations are unavailable or prohibitively expensive, and confirm earlier theoretical predictions that the algorithm remains convergent even with relative gradient errors of 0.5 or more. The proper choice of the amount of accuracy to use in function and gradient evaluations can result in orders-of-magnitude savings in computational cost.
I-STEM Ed Exemplar: Implementation of the PIRPOSAL Model
ERIC Educational Resources Information Center
Wells, John G.
2016-01-01
The opening pages of the first PIRPOSAL (Problem Identification, Ideation, Research, Potential Solutions, Optimization, Solution Evaluation, Alterations, and Learned Outcomes) article make the case that the instructional models currently used in K-12 Science, Technology, Engineering, and Mathematics (STEM) Education fall short of conveying their…
Knowledge engineering in volcanology: Practical claims and general approach
NASA Astrophysics Data System (ADS)
Pshenichny, Cyril A.
2014-10-01
Knowledge engineering, being a branch of artificial intelligence, offers a variety of methods for elicitation and structuring of knowledge in a given domain. Only a few of them (ontologies and semantic nets, event/probability trees, Bayesian belief networks and event bushes) are known to volcanologists. Meanwhile, the tasks faced by volcanology and the solutions found so far favor a much wider application of knowledge engineering, especially tools for handling dynamic knowledge. This raises some fundamental logical and mathematical problems and requires an organizational effort, but may strongly improve panel discussions, enhance decision support, optimize physical modeling and support scientific collaboration.
NASA Astrophysics Data System (ADS)
Yang, Huanhuan; Gunzburger, Max
2017-06-01
Simulation-based optimization of acoustic liner design in a turbofan engine nacelle for noise reduction purposes can dramatically reduce the cost and time needed for experimental designs. Because uncertainties are inevitable in the design process, a stochastic optimization algorithm is posed based on the conditional value-at-risk measure so that an ideal acoustic liner impedance is determined that is robust in the presence of uncertainties. A parallel reduced-order modeling framework is developed that dramatically improves the computational efficiency of the stochastic optimization solver for a realistic nacelle geometry. The reduced stochastic optimization solver takes less than 500 seconds to execute. In addition, well-posedness and finite element error analyses of the state system and optimization problem are provided.
Gottschlich, Carsten; Schuhmacher, Dominic
2014-01-01
Finding solutions to the classical transportation problem is of great importance, since this optimization problem arises in many engineering and computer science applications. Especially the Earth Mover's Distance is used in a plethora of applications ranging from content-based image retrieval, shape matching, fingerprint recognition, object tracking and phishing web page detection to computing color differences in linguistics and biology. Our starting point is the well-known revised simplex algorithm, which iteratively improves a feasible solution to optimality. The Shortlist Method that we propose substantially reduces the number of candidates inspected for improving the solution, while at the same time balancing the number of pivots required. Tests on simulated benchmarks demonstrate a considerable reduction in computation time for the new method as compared to the usual revised simplex algorithm implemented with state-of-the-art initialization and pivot strategies. As a consequence, the Shortlist Method facilitates the computation of large scale transportation problems in viable time. In addition we describe a novel method for finding an initial feasible solution which we coin Modified Russell's Method. PMID:25310106
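The transportation problem underlying the Earth Mover's Distance can be written as a small linear program. The sketch below only sets up and solves that LP with a generic solver to make the problem statement concrete; it does not implement the Shortlist Method or Modified Russell's Method, and the supply, demand, and cost data are illustrative.

```python
# Sketch of the classical transportation LP that the Shortlist Method accelerates;
# this uses an off-the-shelf solver, not the authors' algorithm.
import numpy as np
from scipy.optimize import linprog

supply = np.array([20.0, 30.0, 25.0])           # illustrative source masses
demand = np.array([10.0, 25.0, 15.0, 25.0])     # illustrative sink masses (balanced totals)
cost = np.random.default_rng(0).random((3, 4))  # cost[i, j]: cost of moving one unit i -> j

m, n = cost.shape
# Equality constraints: each row of the plan sums to its supply, each column to its demand.
A_eq = np.zeros((m + n, m * n))
for i in range(m):
    A_eq[i, i * n:(i + 1) * n] = 1.0
for j in range(n):
    A_eq[m + j, j::n] = 1.0
b_eq = np.concatenate([supply, demand])

res = linprog(cost.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
plan = res.x.reshape(m, n)    # optimal transport plan; res.fun is the EMD-style total cost
print(res.fun)
```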
NASA Astrophysics Data System (ADS)
Verma, H. K.; Mafidar, P.
2013-09-01
In view of growing environmental concerns, power system engineers are compelled to generate quality green energy. Hence economic dispatch (ED) aims at scheduling power generation to meet the load demand at minimum fuel cost, subject to environmental and voltage constraints along with the essential constraints on real and reactive power. Emission control, which reduces the negative impact on the environment, is achieved by including additional constraints in the ED problem. Presently, the power system mostly operates near its stability limits; therefore, with increased demand the system faces voltage problems. In the present work, the bus voltages are brought within limits by placing a static var compensator (SVC) at the weak bus, which is identified from the bus participation factor. The optimal size of the SVC is determined by the univariate search method. This paper presents the use of the Teaching Learning based Optimization (TLBO) algorithm for the voltage-stable, environmentally friendly ED problem with real and reactive power constraints. The computational effectiveness of TLBO is established through test results against the particle swarm optimization (PSO) and Big Bang-Big Crunch (BB-BC) algorithms for the ED problem.
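For readers unfamiliar with TLBO, the following minimal Python sketch shows its two phases (teacher and learner) on a generic unconstrained objective, following the commonly published form of the algorithm. The toy two-generator fuel-cost function, population size, and bounds are illustrative assumptions; the paper's full ED formulation with emission, voltage, and SVC constraints is not reproduced.

```python
# Minimal sketch of the two TLBO phases on a generic unconstrained objective.
import numpy as np

def tlbo(f, bounds, pop=20, iters=100, seed=1):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, size=(pop, len(lo)))
    F = np.array([f(x) for x in X])
    for _ in range(iters):
        # Teacher phase: move the class toward the best learner.
        teacher = X[F.argmin()]
        TF = rng.integers(1, 3)                              # teaching factor in {1, 2}
        Xn = np.clip(X + rng.random(X.shape) * (teacher - TF * X.mean(axis=0)), lo, hi)
        Fn = np.array([f(x) for x in Xn])
        better = Fn < F
        X[better], F[better] = Xn[better], Fn[better]
        # Learner phase: each learner interacts with a randomly chosen partner.
        for i in range(pop):
            j = rng.integers(pop)
            if j == i:
                continue
            step = (X[i] - X[j]) if F[i] < F[j] else (X[j] - X[i])
            xi = np.clip(X[i] + rng.random(len(lo)) * step, lo, hi)
            fi = f(xi)
            if fi < F[i]:
                X[i], F[i] = xi, fi
    return X[F.argmin()], F.min()

# Toy quadratic "fuel cost" in two generator outputs (illustrative only).
fuel_cost = lambda p: 0.01 * p[0] ** 2 + 0.02 * p[1] ** 2 + 2.0 * p[0] + 1.5 * p[1]
print(tlbo(fuel_cost, (np.array([10.0, 10.0]), np.array([100.0, 100.0]))))
```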
Self-Adaptive Stepsize Search Applied to Optimal Structural Design
NASA Astrophysics Data System (ADS)
Nolle, L.; Bland, J. A.
Structural engineering often involves the design of space frames that are required to resist predefined external forces without exhibiting plastic deformation. The weight of the structure, and hence the weight of its constituent members, has to be as low as possible for economic reasons without violating any of the load constraints. Design spaces are usually vast and the computational costs for analyzing a single design are usually high. Therefore, not every possible design can be evaluated for real-world problems. In this work, a standard structural design problem, the 25-bar problem, has been solved using self-adaptive stepsize search (SASS), a relatively new search heuristic. This algorithm has only one control parameter and therefore overcomes a drawback of modern search heuristics, i.e. the need to first find a set of optimum control parameter settings for the problem at hand. In this work, SASS outperforms simulated annealing, genetic algorithms, tabu search and ant colony optimization.
NASA Astrophysics Data System (ADS)
Massioni, Paolo; Massari, Mauro
2018-05-01
This paper describes an interesting and powerful approach to the constrained fuel-optimal control of spacecraft in close relative motion. The proposed approach is well suited to problems governed by linear dynamic equations, and therefore fits the case of spacecraft flying in close relative motion perfectly. If the solution of the optimisation is approximated as a polynomial with respect to the time variable, then the problem can be approached with a technique developed in the control engineering community, known as "Sum Of Squares" (SOS), and the constraints can be reduced to bounds on the polynomials. Such a technique allows polynomial bounding problems to be rewritten as convex optimisation problems, at the cost of a certain amount of conservatism. The principles of the technique are explained and some applications related to spacecraft flying in close relative motion are shown.
Solving the infeasible trust-region problem using approximations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Renaud, John E.; Perez, Victor M.; Eldred, Michael Scott
2004-07-01
The use of optimization in engineering design has fueled the development of algorithms for specific engineering needs. When the simulations are expensive to evaluate or the outputs contain some noise, the direct use of nonlinear optimizers is not advisable, since the optimization process will be expensive and may result in premature convergence. The use of approximations in both cases is an alternative investigated by many researchers, including the authors. When approximations are present, model management is required for proper convergence of the algorithm. In nonlinear programming, the use of trust regions for the globalization of a local algorithm has proven effective. The same approach has been used to manage the local move limits in sequential approximate optimization frameworks, as in Alexandrov et al., Giunta and Eldred, Perez et al., Rodriguez et al., etc. Experience in the mathematical community has shown that more effective algorithms can be obtained by including the constraints explicitly (SQP-type algorithms) rather than by using a penalty function, as in the augmented Lagrangian formulation. With explicit constraints, however, the local problem bounded by the trust region may have no feasible solution. To remedy this problem, the mathematical community has developed different versions of a composite-steps approach, which consists of a normal step to reduce the amount of constraint violation and a tangential step to minimize the objective function while maintaining the level of constraint violation attained at the normal step. Two of the authors have developed a different approach for a sequential approximate optimization framework that uses homotopy ideas to relax the constraints. This algorithm, called interior-point trust-region sequential approximate optimization (IPTRSAO), presents some similarities to the normal-tangential two-step algorithms. In this paper, a description of the similarities is presented and an expansion of the two-step algorithm is presented for the case of approximations.
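In generic composite-step notation (the standard textbook form, not necessarily the exact IPTRSAO subproblems), with c the constraint vector, A_k its Jacobian at the current iterate x_k, m_k a local model of the objective, Delta_k the trust-region radius, and theta a fixed fraction in (0, 1), the two steps read:

```latex
% Normal step: reduce linearized constraint violation inside a fraction of the trust region
\min_{n}\ \tfrac{1}{2}\,\bigl\| c(x_k) + A_k\, n \bigr\|_2^2
\quad \text{s.t.} \quad \| n \|_2 \le \theta\, \Delta_k ;
% Tangential step: reduce the objective model without degrading linearized feasibility
\min_{t}\ m_k(x_k + n_k + t)
\quad \text{s.t.} \quad A_k\, t = 0, \qquad \| n_k + t \|_2 \le \Delta_k .
```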
Emergence of Fundamental Limits in Spatially Distributed Dynamical Networks and Their Tradeoffs
2017-05-01
It is shown that the resulting non-convex optimization problem can be equivalently reformulated into a rank-constrained problem. We then...robustness in distributed control and dynamical systems. Our research results are highly relevant for the analysis and synthesis of engineered and natural
NASA Astrophysics Data System (ADS)
Sapunkov, Ya. G.; Chelnokov, Yu. N.
2014-11-01
The problem of the optimal rendezvous of a controllable spacecraft (SC) with an uncontrollable spacecraft moving along a Keplerian elliptic orbit in the gravitational field of the Sun is considered. Control of the SC is performed using a solar sail and a low-thrust engine. For solving the problem, the regular quaternion equations of the two-body problem in the Kustaanheimo-Stiefel variables and the Pontryagin maximum principle are used. The combined integral quality functional, which characterizes the energy consumption for the controllable SC transition from an initial to a final state and the time spent on this transition, is used as the functional to be minimized. The differential boundary-value optimization problems are formulated, and their first integrals are found. Examples of the numerical solution of these problems are presented. The paper develops the application [1-6] of quaternion regular equations with the Kustaanheimo-Stiefel variables in space flight mechanics.
Distributed Optimization of Multi-Agent Systems: Framework, Local Optimizer, and Applications
NASA Astrophysics Data System (ADS)
Zu, Yue
Convex optimization problems can be solved in a centralized or distributed manner. Compared with centralized methods based on a single-agent system, distributed algorithms rely on multi-agent systems in which information is exchanged among connected neighbors, which leads to a great improvement in system fault tolerance: a task within a multi-agent system can be completed even in the presence of partial agent failures. Through problem decomposition, a large-scale problem can be divided into a set of small-scale sub-problems that can be solved in sequence or in parallel, so the computational complexity is greatly reduced by a distributed algorithm in a multi-agent system. Moreover, distributed algorithms allow data to be collected and stored in a distributed fashion, which overcomes the drawbacks of using multicast under bandwidth limitations. Distributed algorithms have been applied to a variety of real-world problems. Our research focuses on the framework and local optimizer design in practical engineering applications. In the first application, we propose a multi-sensor and multi-agent scheme for spatial motion estimation of a rigid body; estimation performance is improved in terms of accuracy and convergence speed. Second, we develop a cyber-physical system and implement distributed computation devices to optimize the in-building evacuation path when a hazard occurs; the proposed Bellman-Ford Dual-Subgradient path planning method relieves congestion in the corridor and exit areas. In the third project, highway traffic flow is managed by adjusting speed limits to minimize fuel consumption and travel time. The optimal control strategy is designed through both centralized and distributed algorithms based on a convex problem formulation. Moreover, a hybrid control scheme is presented for highway network travel time minimization. Compared with the uncontrolled case or a conventional highway traffic control strategy, the proposed hybrid control strategy greatly reduces the total travel time on the test highway network.
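As a concrete illustration of the consensus-plus-gradient idea behind many such distributed methods (a generic sketch only, not the Bellman-Ford Dual-Subgradient planner or the traffic controller of the dissertation), consider four agents on a ring, each holding a private quadratic cost; the mixing matrix, targets, and step size below are illustrative.

```python
# Minimal sketch of a distributed (consensus + gradient) iteration for four agents.
import numpy as np

W = np.array([[0.50, 0.25, 0.00, 0.25],   # doubly stochastic mixing matrix
              [0.25, 0.50, 0.25, 0.00],   # for a 4-agent ring network
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])
targets = np.array([1.0, 2.0, 3.0, 4.0])   # agent i privately minimizes (x - targets[i])^2
grad = lambda x, i: 2.0 * (x - targets[i])

x = np.zeros(4)       # one local estimate per agent
alpha = 0.05          # constant step size (illustrative)
for _ in range(500):
    # Each agent averages with its neighbors, then takes a local gradient step.
    x = W @ x - alpha * np.array([grad(x[i], i) for i in range(4)])

# All local estimates end up close to the minimizer of the sum, mean(targets) = 2.5;
# exact consensus would require a diminishing step size.
print(x)
```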
Multidisciplinary Optimization Branch Experience Using iSIGHT Software
NASA Technical Reports Server (NTRS)
Padula, S. L.; Korte, J. J.; Dunn, H. J.; Salas, A. O.
1999-01-01
The Multidisciplinary Optimization (MDO) Branch at NASA Langley Research Center is investigating frameworks for supporting multidisciplinary analysis and optimization research. An optimization framework can improve the design process while reducing time and costs. A framework provides software and system services to integrate computational tasks and allows the researcher to concentrate more on the application and less on the programming details. A framework also provides a common working environment and a full range of optimization tools, and so increases the productivity of multidisciplinary research teams. Finally, a framework enables staff members to develop applications for use by disciplinary experts in other organizations. Since the release of version 4.0, the MDO Branch has gained experience with the iSIGHT framework developed by Engineous Software, Inc. This paper describes experiences with four aerospace applications: (1) reusable launch vehicle sizing, (2) aerospike nozzle design, (3) low-noise rotorcraft trajectories, and (4) acoustic liner design. All applications have been successfully tested using the iSIGHT framework, except for the aerospike nozzle problem, which is in progress. Brief overviews of each problem are provided. The problem descriptions include the number and type of disciplinary codes, as well as an estimate of the multidisciplinary analysis execution time. In addition, the optimization methods, objective functions, design variables, and design constraints are described for each problem. Discussions of the experience gained and lessons learned are provided for each problem. These discussions include the advantages and disadvantages of using the iSIGHT framework for each case as well as the ease of use of various advanced features. Potential areas of improvement are identified.
NASA Astrophysics Data System (ADS)
Manzanares-Filho, N.; Albuquerque, R. B. F.; Sousa, B. S.; Santos, L. G. C.
2018-06-01
This article presents a comparative study of some versions of the controlled random search algorithm (CRSA) in global optimization problems. The basic CRSA, originally proposed by Price in 1977 and improved by Ali et al. in 1997, is taken as a starting point. Then, some new modifications are proposed to improve the efficiency and reliability of this global optimization technique. The performance of the algorithms is assessed using traditional benchmark test problems commonly invoked in the literature. This comparative study points out the key features of the modified algorithm. Finally, a comparison is also made in a practical engineering application, namely the inverse aerofoil shape design.
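A minimal Python sketch of the basic CRS step in the spirit of Price's original algorithm is given below; the population size, iteration budget, and Rosenbrock test function are illustrative, and the modifications proposed in the article are not reproduced.

```python
# Sketch of the basic controlled random search (CRS) step in the spirit of Price (1977).
import numpy as np

def crs(f, lo, hi, pop_size=50, max_iter=5000, seed=0):
    rng = np.random.default_rng(seed)
    d = len(lo)
    P = rng.uniform(lo, hi, size=(pop_size, d))    # initial random population
    F = np.array([f(x) for x in P])
    for _ in range(max_iter):
        worst = F.argmax()
        idx = rng.choice(pop_size, size=d + 1, replace=False)   # pick d+1 distinct points
        centroid = P[idx[:-1]].mean(axis=0)
        trial = 2.0 * centroid - P[idx[-1]]        # reflect the last point through the centroid
        if np.any(trial < lo) or np.any(trial > hi):
            continue                               # discard trials outside the box
        ft = f(trial)
        if ft < F[worst]:                          # accept only if it beats the current worst
            P[worst], F[worst] = trial, ft
    return P[F.argmin()], F.min()

# Example on the Rosenbrock benchmark commonly used in such comparisons.
rosen = lambda x: 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2
print(crs(rosen, np.array([-2.0, -2.0]), np.array([2.0, 2.0])))
```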
Fuel-Efficient Descent and Landing Guidance Logic for a Safe Lunar Touchdown
NASA Technical Reports Server (NTRS)
Lee, Allan Y.
2011-01-01
The landing of a crewed lunar lander on the surface of the Moon will be the climax of any Moon mission. At touchdown, the landing mechanism must absorb the load imparted on the lander due to the vertical component of the lander's touchdown velocity. Also, a large horizontal velocity must be avoided because it could cause the lander to tip over, risking the life of the crew. To be conservative, the worst-case lander's touchdown velocity is always assumed in designing the landing mechanism, making it very heavy. Fuel-optimal guidance algorithms for soft planetary landing have been studied extensively. In most of these studies, the lander is constrained to touch down with zero velocity. With bounds imposed on the magnitude of the engine thrust, the optimal control solutions typically have a "bang-bang" thrust profile: the thrust magnitude "bangs" instantaneously between its maximum and minimum magnitudes. But the descent engine might not be able to throttle between its extremes instantaneously. There is also a concern about the acceptability of "bang-bang" control to the crew. In our study, the optimal control of a lander is formulated with a cost function that penalizes both the touchdown velocity and the fuel cost of the descent engine. In this formulation, there is no requirement to achieve a zero touchdown velocity; only a touchdown velocity that is consistent with the capability of the landing gear design is required. Also, since the nominal throttle level for the terminal descent sub-phase is well below the peak engine thrust, no bound on the engine thrust is used in our formulated problem. Instead of a bang-bang type solution, the optimal thrust generated is a continuous function of time. With this formulation, we can easily derive analytical expressions for the optimal thrust vector, touchdown velocity components, and other system variables. These expressions provide insights into the "physics" of the optimal landing and terminal descent maneuver. These insights could help engineers to achieve a better "balance" between the conflicting needs of achieving a safe touchdown velocity, a low-weight landing mechanism, low engine fuel cost, and other design goals. In comparing the computed optimal control results with the preflight landing trajectory design of the Apollo-11 mission, we noted interesting similarities between the two missions.
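One plausible form of the penalized cost functional described here, written in generic notation (the weighting matrices W_v and W_a and the exact terms are illustrative assumptions, not necessarily the study's formulation), is

```latex
J \;=\; \tfrac{1}{2}\,\mathbf{v}(t_f)^{\mathsf T} W_v\, \mathbf{v}(t_f)
\;+\; \tfrac{1}{2}\int_{t_0}^{t_f} \mathbf{a}_T(t)^{\mathsf T} W_a\, \mathbf{a}_T(t)\,\mathrm{d}t ,
```

where v(t_f) is the touchdown velocity and a_T(t) the thrust acceleration. Because a_T is left unbounded, minimizing such a quadratic cost yields a thrust profile that varies continuously in time rather than the bang-bang switching structure obtained with hard thrust bounds.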
Kentzoglanakis, Kyriakos; Poole, Matthew
2012-01-01
In this paper, we investigate the problem of reverse engineering the topology of gene regulatory networks from temporal gene expression data. We adopt a computational intelligence approach comprising swarm intelligence techniques, namely particle swarm optimization (PSO) and ant colony optimization (ACO). In addition, the recurrent neural network (RNN) formalism is employed for modeling the dynamical behavior of gene regulatory systems. More specifically, ACO is used for searching the discrete space of network architectures and PSO for searching the corresponding continuous space of RNN model parameters. We propose a novel solution construction process in the context of ACO for generating biologically plausible candidate architectures. The objective is to concentrate the search effort into areas of the structure space that contain architectures which are feasible in terms of their topological resemblance to real-world networks. The proposed framework is initially applied to the reconstruction of a small artificial network that has previously been studied in the context of gene network reverse engineering. Subsequently, we consider an artificial data set with added noise for reconstructing a subnetwork of the genetic interaction network of S. cerevisiae (yeast). Finally, the framework is applied to a real-world data set for reverse engineering the SOS response system of the bacterium Escherichia coli. Results demonstrate the relative advantage of utilizing problem-specific knowledge regarding biologically plausible structural properties of gene networks over conducting a problem-agnostic search in the vast space of network architectures.
The potential application of the blackboard model of problem solving to multidisciplinary design
NASA Technical Reports Server (NTRS)
Rogers, J. L.
1989-01-01
Problems associated with the sequential approach to multidisciplinary design are discussed. A blackboard model is suggested as a potential tool for implementing the multilevel decomposition approach to overcome these problems. The blackboard model serves as a global database for the solution, with each discipline acting as a knowledge source for updating the solution. With this approach, it is possible for engineers to improve coordination, communication, and cooperation in the conceptual design process, allowing them to achieve a design that is closer to optimal from an interdisciplinary standpoint.
Optimization of magnet end-winding geometry
NASA Astrophysics Data System (ADS)
Reusch, Michael F.; Weissenburger, Donald W.; Nearing, James C.
1994-03-01
A simple, almost entirely analytic, method for the optimization of stress-reduced magnet-end winding paths for ribbon-like superconducting cable is presented. This technique is based on characterization of these paths as developable surfaces, i.e., surfaces whose intrinsic geometry is flat. The method is applicable to winding mandrels of arbitrary geometry. Computational searches for optimal winding paths are easily implemented via the technique. Its application to the end configuration of cylindrical Superconducting Super Collider (SSC)-type magnets is discussed. The method may be useful for other engineering problems involving the placement of thin sheets of material.
A study of variable thrust, variable specific impulse trajectories for solar system exploration
NASA Astrophysics Data System (ADS)
Sakai, Tadashi
A study has been performed to determine the advantages and disadvantages of variable thrust and variable Isp (specific impulse) trajectories for solar system exploration. There have been several numerical research efforts for variable thrust, variable Isp, power-limited trajectory optimization problems. All of these results conclude that variable thrust, variable Isp (variable specific impulse, or VSI) engines are superior to constant thrust, constant Isp (constant specific impulse, or CSI) engines. However, most of these research efforts assume a mission from Earth to Mars, and some of them further assume that these planets are circular and coplanar. Hence they still lack generality. This research has been conducted to answer the following questions: (1) Is a VSI engine always better than a CSI engine or a high thrust engine for any mission to any planet with any time of flight, considering lower propellant mass as the sole criterion? (2) If a planetary swing-by is used for a VSI trajectory, is the fuel savings of a VSI swing-by trajectory better than that of a CSI swing-by or high thrust swing-by trajectory? To support this research, a unique, new computer-based interplanetary trajectory calculation program has been created. This program utilizes a calculus of variations algorithm to perform overall optimization of thrust, Isp, and thrust vector direction along a trajectory that minimizes fuel consumption for interplanetary travel. It is assumed that the propulsion system is power-limited, and thus the compromise between thrust and Isp is a variable to be optimized along the flight path. This program is capable of optimizing not only variable thrust trajectories but also constant thrust trajectories in 3-D space using a planetary ephemeris database. It is also capable of conducting planetary swing-bys. Using this program, various Earth-originating trajectories have been investigated and the optimized results have been compared to traditional CSI and high thrust trajectory solutions. Results show that VSI rocket engines reduce fuel requirements for any mission compared to CSI rocket engines. Fuel can be saved by applying swing-by maneuvers for VSI engines, but the effects of swing-bys with VSI engines are smaller than those of CSI or high thrust engines.
Amber Plug-In for Protein Shop
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oliva, Ricardo
2004-05-10
The Amber Plug-in for ProteinShop has two main components: an AmberEngine library to compute the protein energy models, and a module to solve the energy minimization problem using an optimization algorithm in the OPT++ library. Together, these components allow the visualization of the protein folding process in ProteinShop. AmberEngine is an object-oriented library to compute molecular energies based on the Amber model. The main class is called ProteinEnergy. Its main interface methods are (1) "init", to initialize internal variables needed to compute the energy, and (2) "eval", to evaluate the total energy given a vector of coordinates. Additional methods allow the user to evaluate the individual components of the energy model (bond, angle, dihedral, non-bonded-1-4, and non-bonded energies) and to obtain the energy of each individual atom. The AmberEngine library source code includes examples and test routines that illustrate the use of the library in stand-alone programs. The energy minimization module uses the AmberEngine library and the nonlinear optimization library OPT++. OPT++ is open source software available under the GNU Lesser General Public License. The minimization module currently makes use of the LBFGS optimization algorithm in OPT++ to perform the energy minimization. Future releases may give the user a choice of other algorithms available in OPT++.
Sustainable biorefining in wastewater by engineered extreme alkaliphile Bacillus marmarensis.
Wernick, David G; Pontrelli, Sammy P; Pollock, Alexander W; Liao, James C
2016-02-01
Contamination susceptibility, water usage, and the inability to utilize 5-carbon sugars and disaccharides are among the major obstacles to the industrialization of sustainable biorefining. Extremophilic thermophiles and acidophiles are being researched to combat these problems, but organisms that address all of the above problems have yet to emerge. Here, we present the engineering of the unexplored, extreme alkaliphile Bacillus marmarensis as a platform for new bioprocesses that meet all these challenges. With a newly developed transformation protocol and genetic tools, along with optimized RBSs and antisense RNA, we engineered B. marmarensis to produce ethanol at titers of 38 g/l and 65% yields from glucose in unsterilized media. Furthermore, ethanol titers and yields of 12 g/l and 50%, respectively, were produced from cellobiose and xylose in unsterilized seawater and algal-contaminated wastewater. As such, B. marmarensis presents a promising approach for the contamination-resistant biorefining of a wide range of carbohydrates in unsterilized, non-potable seawater.
NASA Astrophysics Data System (ADS)
Roy, Satadru
Traditional approaches to designing and optimizing a new system often use a system-centric objective and do not take into consideration how the operator will use the new system alongside other existing systems. This "hand-off" between the design of the new system and how the new system operates alongside other systems might lead to sub-optimal performance with respect to the operator-level objective. In other words, the system that is optimal for its system-level objective might not be best for the system-of-systems level objective of the operator. Among the few available references that describe attempts to address this hand-off, most follow an MDO-motivated subspace decomposition approach of first designing a very good system and then providing this system to the operator, who decides the best way to use it along with the existing systems. The motivating example in this dissertation presents one such problem that includes aircraft design, airline operations and revenue management "subspaces". The research here develops an approach that can simultaneously solve these subspaces posed as a monolithic optimization problem. The monolithic approach makes the problem a Mixed Integer/Discrete Non-Linear Programming (MINLP/MDNLP) problem, a class that is extremely difficult to solve. The presence of expensive, sophisticated engineering analyses further aggravates the problem. To tackle this challenge problem, the work here presents a new optimization framework that simultaneously solves the subspaces to capture the "synergism" in the problem that previous decomposition approaches may not have exploited, addresses mixed-integer/discrete design variables in an efficient manner, and accounts for computationally expensive analysis tools. The framework combines concepts from efficient global optimization, Kriging partial least squares, and gradient-based optimization. The approach is then demonstrated on an 11-route airline network problem consisting of 94 decision variables, including 33 integer and 61 continuous variables. This application problem is representative of an interacting group of systems and poses key challenges to the optimization framework, as reflected by the presence of a moderate number of integer and continuous design variables and an expensive analysis tool. The results indicate that simultaneously solving the subspaces can lead to significant improvement in the fleet-level objective of the airline when compared to the previously developed sequential subspace decomposition approach. In developing the approach to solve the MINLP/MDNLP challenge problem, several test problems provided the ability to explore the performance of the framework. While solving these test problems, the framework showed that it could solve other MDNLP problems, including those with categorically discrete variables, indicating that the framework could have broader application than the new aircraft design-fleet allocation-revenue management problem.
Design search and optimization in aerospace engineering.
Keane, A J; Scanlan, J P
2007-10-15
In this paper, we take a design-led perspective on the use of computational tools in the aerospace sector. We briefly review the current state-of-the-art in design search and optimization (DSO) as applied to problems from aerospace engineering, focusing on those problems that make heavy use of computational fluid dynamics (CFD). This ranges over issues of representation, optimization problem formulation and computational modelling. We then follow this with a multi-objective, multi-disciplinary example of DSO applied to civil aircraft wing design, an area where this kind of approach is becoming essential for companies to maintain their competitive edge. Our example considers the structure and weight of a transonic civil transport wing, its aerodynamic performance at cruise speed and its manufacturing costs. The goals are low drag and cost while holding weight and structural performance at acceptable levels. The constraints and performance metrics are modelled by a linked series of analysis codes, the most expensive of which is a CFD analysis of the aerodynamics using an Euler code with coupled boundary layer model. Structural strength and weight are assessed using semi-empirical schemes based on typical airframe company practice. Costing is carried out using a newly developed generative approach based on a hierarchical decomposition of the key structural elements of a typical machined and bolted wing-box assembly. To carry out the DSO process in the face of multiple competing goals, a recently developed multi-objective probability of improvement formulation is invoked along with stochastic process response surface models (Krigs). This approach both mitigates the significant run times involved in CFD computation and also provides an elegant way of balancing competing goals while still allowing the deployment of the whole range of single objective optimizers commonly available to design teams.
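For reference, the single-objective probability-of-improvement criterion on which such formulations build is commonly written, for a Kriging model with predictive mean and standard deviation at a candidate design x and best observed objective value f_min, as follows (the multi-objective extension used in the paper is not reproduced here):

```latex
\mathrm{PI}(x) \;=\; \Phi\!\left( \frac{ f_{\min} - \hat{\mu}(x) }{ \hat{s}(x) } \right),
```

where Phi is the standard normal cumulative distribution function, and hat-mu and hat-s are the Kriging mean and standard deviation. Candidate designs that maximize PI balance exploiting the surrogate's prediction against exploring regions of high predictive uncertainty, which is what makes such criteria attractive when each CFD evaluation is expensive.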
Using BIM Technology to Optimize the Traditional Interior Design Work Mode
NASA Astrophysics Data System (ADS)
Zhu, Ning Ke
2018-06-01
The development and application of BIM technology in the field of architectural design has produced results, but its application in the field of interior design is still immature because construction and decoration engineering are handled separately. The article analyzes how BIM technology can optimize the traditional interior design work mode, describing its application in interior design in terms of the 3D visualization work environment, the real-time collaborative design mode, the physical analysis design mode, and the information integration design mode.
NASA Astrophysics Data System (ADS)
Bez'iazychnyi, V. F.
The paper is concerned with the problem of optimizing the machining of aircraft engine parts in order to satisfy certain requirements for tool wear, machining precision and surface layer characteristics, and hardening depth. A generalized multiple-objective function and its computer implementation are developed which make it possible to optimize the machining process without the use of experimental data. Alternative methods of controlling the machining process are discussed.
Mental Models and Cooperative Problem Solving with Expert Systems,
1984-09-01
...the user's conceptual understanding of the basic principles of the system's problem-solving processes. An experimental study is described that strongly...design principles that lead to the optimal user engineering of future expert systems. The central theory discussed below is that the nature of the
Williams, Perry J.; Kendall, William L.
2017-01-01
Choices in ecological research and management are the result of balancing multiple, often competing, objectives. Multi-objective optimization (MOO) is a formal decision-theoretic framework for solving multiple-objective problems. MOO is used extensively in other fields, including engineering, economics, and operations research. However, its application to ecological problems has been sparse, perhaps due to a lack of widespread understanding. Thus, our objective was to provide an accessible primer on MOO, including a review of methods common in other fields, a review of their application in ecology, and a demonstration on an applied resource management problem. A large class of methods for solving MOO problems can be separated into two strategies: modelling preferences pre-optimization (the a priori strategy), or modelling preferences post-optimization (the a posteriori strategy). The a priori strategy requires describing preferences among objectives without knowledge of how the preferences affect the resulting decision. In the a posteriori strategy, the decision maker simultaneously considers a set of solutions (the Pareto optimal set) and makes a choice based on the trade-offs observed in the set. We describe several methods for modelling preferences pre-optimization, including the bounded objective function method, the lexicographic method, and the weighted-sum method; a sketch of the weighted-sum strategy follows this entry. We discuss modelling preferences post-optimization through examination of the Pareto optimal set. We applied each MOO strategy to the natural resource management problem of selecting a population target for cackling goose (Branta hutchinsii minima) abundance. Cackling geese provide food security to Native Alaskan subsistence hunters in the goose's nesting area, but depredate crops on private agricultural fields in wintering areas. We developed objective functions to represent the competing objectives related to the cackling goose population target and identified an optimal solution first using the a priori strategy, and then by examining trade-offs in the Pareto set using the a posteriori strategy. We used four approaches for selecting a final solution within the a posteriori strategy: the most common optimal solution, the most robust optimal solution, and two solutions based on maximizing a restricted portion of the Pareto set. We discuss MOO with respect to natural resource management, but MOO is sufficiently general to cover any ecological problem that contains multiple competing objectives that can be quantified using objective functions.
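As a concrete illustration of the a priori weighted-sum strategy (a generic sketch; the cackling goose objective functions and constraints are not reproduced), the following Python snippet sweeps a preference weight over two toy objectives of a scalar population target and reports the resulting trade-off points.

```python
# Generic sketch of weighted-sum scalarization for two competing objectives;
# the objectives, bounds, and weights are illustrative placeholders.
import numpy as np
from scipy.optimize import minimize_scalar

f1 = lambda x: (x - 1.0) ** 2     # e.g., shortfall from subsistence-harvest needs (toy)
f2 = lambda x: (x - 3.0) ** 2     # e.g., crop depredation pressure (toy)

trade_offs = []
for w in np.linspace(0.0, 1.0, 11):                     # sweep preference weights
    scalarized = lambda x, w=w: w * f1(x) + (1.0 - w) * f2(x)
    res = minimize_scalar(scalarized, bounds=(0.0, 5.0), method="bounded")
    trade_offs.append((w, res.x, f1(res.x), f2(res.x)))

for w, x, v1, v2 in trade_offs:
    print(f"w = {w:.1f}  target = {x:.2f}  f1 = {v1:.2f}  f2 = {v2:.2f}")
```

Sweeping the weight in this way also generates a discrete approximation of the Pareto set, which is one common bridge between the a priori and a posteriori strategies discussed above.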
Parameter assessment for virtual Stackelberg game in aerodynamic shape optimization
NASA Astrophysics Data System (ADS)
Wang, Jing; Xie, Fangfang; Zheng, Yao; Zhang, Jifa
2018-05-01
In this paper, parametric studies of the virtual Stackelberg game (VSG) are conducted to assess the impact of critical parameters on aerodynamic shape optimization, including the number of design cycles, the split of design variables, and the role assignment. Typical numerical cases, including the inverse design and drag-reduction design of an airfoil, have been carried out. The numerical results confirm the effectiveness and efficiency of VSG. Furthermore, the most significant parameters are identified; for example, increasing the number of design cycles can improve the optimization results but also adds computational burden. These studies will maximize the productivity of the effort in aerodynamic optimization for more complicated engineering problems, such as multi-element airfoil and wing-body configurations.
Bi-Level Integrated System Synthesis (BLISS) for Concurrent and Distributed Processing
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw; Altus, Troy D.; Phillips, Matthew; Sandusky, Robert
2002-01-01
The paper introduces a new version of the Bi-Level Integrated System Synthesis (BLISS) method intended for the optimization of engineering systems conducted by distributed specialty groups working concurrently and using a multiprocessor computing environment. The method decomposes the overall optimization task into subtasks associated with disciplines or subsystems, in which the local design variables are numerous, and a single system-level optimization whose design variables are relatively few. The subtasks are fully autonomous in their inner operations and decision making. Their purpose is to eliminate the local design variables and generate a wide spectrum of feasible designs whose behavior is represented by response surfaces to be accessed by the system-level optimization. It is shown that, if the problem is convex, the solution of the decomposed problem is the same as that obtained without decomposition. A simplified example of an aircraft design shows the method working as intended. The paper includes a discussion of the method's merits and demerits and recommendations for further research.
On the Use of CAD and Cartesian Methods for Aerodynamic Optimization
NASA Technical Reports Server (NTRS)
Nemec, M.; Aftosmis, M. J.; Pulliam, T. H.
2004-01-01
The objective of this paper is to present the development of an optimization capability for Cart3D, a Cartesian inviscid-flow analysis package. We present the construction of a new optimization framework and focus on the following issues: 1) a component-based geometry parameterization approach using parametric CAD models and CAPRI, in which a novel geometry server is introduced that addresses the issue of parallel efficiency while only sparingly consuming CAD resources; and 2) the use of genetic and gradient-based algorithms for three-dimensional aerodynamic design problems, including a study of the influence of noise on the optimization methods. Our goal is to create a responsive and automated framework that efficiently identifies design modifications that result in substantial performance improvements. In addition, we examine the architectural issues associated with the deployment of a CAD-based approach in a heterogeneous parallel computing environment that contains both CAD workstations and dedicated compute engines. We demonstrate the effectiveness of the framework for a design problem that features topology changes and complex geometry.
Overview: Applications of numerical optimization methods to helicopter design problems
NASA Technical Reports Server (NTRS)
Miura, H.
1984-01-01
There are a number of helicopter design problems that are well suited to applications of numerical design optimization techniques, and adequate implementation of this technology will provide high pay-offs. A number of numerical optimization programs are available, and there are many excellent response/performance analysis programs developed or being developed, but the integration of these programs in a form that is usable in the design phase should be recognized as important. It is also necessary to attract the attention of engineers engaged in the development of analysis capabilities and to make them aware that analysis capabilities are much more powerful if integrated into design-oriented codes. Frequently, the shortcomings of analysis capabilities are revealed by coupling them with an optimization code. Most of the published work has addressed problems in preliminary system design, rotor system/blade design or airframe design. Very few published results were found in acoustics, aerodynamics and control system design. Currently, major efforts are focused on vibration reduction, and aerodynamics/acoustics applications appear to be growing fast. The development of a computer program system that integrates the multiple disciplines required in helicopter design with numerical optimization techniques is needed. Activities in Britain, Germany and Poland are identified, but no published results from France, Italy, the USSR or Japan were found.
New knowledge-based genetic algorithm for excavator boom structural optimization
NASA Astrophysics Data System (ADS)
Hua, Haiyan; Lin, Shuwen
2014-03-01
Because existing genetic algorithms make insufficient use of knowledge to guide the complex optimal search, they fail to solve the excavator boom structural optimization problem effectively. To improve optimization efficiency and quality, a new knowledge-based real-coded genetic algorithm is proposed. A dual evolution mechanism combining knowledge evolution with the genetic algorithm is established to extract, handle and utilize the shallow and deep implicit constraint knowledge to guide the optimal search of the genetic algorithm cyclically. Based on this dual evolution mechanism, knowledge evolution and population evolution can be connected by knowledge influence operators to improve the configurability of knowledge and genetic operators. New knowledge-based selection, crossover and mutation operators are then proposed to integrate the optimal process knowledge and domain culture to guide the excavator boom structural optimization. Eight testing algorithms, which include different genetic operators, are taken as examples to solve the structural optimization of a medium-sized excavator boom. A comparison of the optimization results shows that the algorithm including all the new knowledge-based genetic operators improves the evolutionary rate and searching ability more markedly than the other testing algorithms, which demonstrates the effectiveness of knowledge for guiding the optimal search. The proposed knowledge-based genetic algorithm, which combines multi-level knowledge evolution with numerical optimization, provides a new effective method for solving complex engineering optimization problems.
2017-01-01
Computational scientists have designed many useful algorithms by exploring a biological process or imitating natural evolution. These algorithms can be used to solve engineering optimization problems. Inspired by the change of matter state, we propose a novel optimization algorithm called the differential cloud particles evolution algorithm based on a data-driven mechanism (CPDD). In the proposed algorithm, the optimization process is divided into two stages, namely, a fluid stage and a solid stage. The algorithm carries out a strategy of integrating global exploration with local exploitation in the fluid stage, while local exploitation is carried out mainly in the solid stage. The quality of the solution and the efficiency of the search are influenced greatly by the control parameters; therefore, the data-driven mechanism is designed to obtain better control parameters and ensure good performance on numerical benchmark problems. In order to verify the effectiveness of CPDD, numerical experiments are carried out on all of the CEC2014 contest benchmark functions. Finally, two application problems involving artificial neural networks are examined. The experimental results show that CPDD is competitive with respect to eight other state-of-the-art intelligent optimization algorithms. PMID:28761438
Box, Simon
2014-01-01
Optimal switching of traffic lights on a network of junctions is a computationally intractable problem. In this research, road traffic networks containing signallized junctions are simulated. A computer game interface is used to enable a human ‘player’ to control the traffic light settings on the junctions within the simulation. A supervised learning approach, based on simple neural network classifiers can be used to capture human player's strategies in the game and thus develop a human-trained machine control (HuTMaC) system that approaches human levels of performance. Experiments conducted within the simulation compare the performance of HuTMaC to two well-established traffic-responsive control systems that are widely deployed in the developed world and also to a temporal difference learning-based control method. In all experiments, HuTMaC outperforms the other control methods in terms of average delay and variance over delay. The conclusion is that these results add weight to the suggestion that HuTMaC may be a viable alternative, or supplemental method, to approximate optimization for some practical engineering control problems where the optimal strategy is computationally intractable. PMID:26064570
Gender approaches to evolutionary multi-objective optimization using pre-selection of criteria
NASA Astrophysics Data System (ADS)
Kowalczuk, Zdzisław; Białaszewski, Tomasz
2018-01-01
A novel idea to perform evolutionary computations (ECs) for solving highly dimensional multi-objective optimization (MOO) problems is proposed. Following the general idea of evolution, it is proposed that information about gender is used to distinguish between various groups of objectives and identify the (aggregate) nature of optimality of individuals (solutions). This identification is drawn out of the fitness of individuals and applied during parental crossover in the processes of evolutionary multi-objective optimization (EMOO). The article introduces the principles of the genetic-gender approach (GGA) and virtual gender approach (VGA), which are not just evolutionary techniques, but constitute a completely new rule (philosophy) for use in solving MOO tasks. The proposed approaches are validated against principal representatives of the EMOO algorithms of the state of the art in solving benchmark problems in the light of recognized EC performance criteria. The research shows the superiority of the gender approach in terms of effectiveness, reliability, transparency, intelligibility and MOO problem simplification, resulting in the great usefulness and practicability of GGA and VGA. Moreover, an important feature of GGA and VGA is that they alleviate the 'curse' of dimensionality typical of many engineering designs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jonkman, Jason; Annoni, Jennifer; Hayman, Greg
2017-01-01
This paper presents the development of FAST.Farm, a new multiphysics tool applicable to engineering problems in research and industry involving wind farm performance and cost optimization, which is needed to address the underperformance, failures, and expenses currently plaguing the wind industry. Achieving wind cost-of-energy targets - which requires improvements in wind farm performance and reliability, together with reduced uncertainty and expenditures - has been hindered by the complicated nature of the wind farm design problem, especially the complex interaction among atmospheric phenomena, wake dynamics, and array effects. FAST.Farm aims to balance the need for accurate modeling of the relevant physics for predicting power performance and loads against the need for low computational cost, so as to support a highly iterative and probabilistic design process and system-wide optimization. FAST.Farm makes use of FAST to model the aero-hydro-servo-elastics of distinct turbines in the wind farm, and it is based on some of the principles of the Dynamic Wake Meandering (DWM) model, but avoids many of the limitations of existing DWM implementations.
Hybrid intelligent optimization methods for engineering problems
NASA Astrophysics Data System (ADS)
Pehlivanoglu, Yasin Volkan
The purpose of optimization is to obtain the best solution under given conditions. There are numerous optimization methods because different problems require different solution methodologies; it is therefore difficult to construct general patterns. Moreover, mathematical modeling of natural phenomena is almost always based on differentials: differential equations are constructed from relative increments among the factors related to the yield, so the gradients of these increments are essential for searching the yield space. However, the yield landscape is rarely simple and is mostly multi-modal. Another issue is differentiability. Engineering design problems are usually nonlinear and they sometimes exhibit discontinuous derivatives for the objective and constraint functions. Because of these difficulties, non-gradient-based algorithms have become more popular in recent decades. Genetic algorithms (GA) and particle swarm optimization (PSO) are popular non-gradient-based algorithms. Both are population-based search algorithms with multiple initiation points. A significant difference from gradient-based methods is the nature of the search methodology: randomness, for example, is essential to the search in GA or PSO, which is why they are also called stochastic optimization methods. These algorithms are simple, robust, and have high fidelity. However, they suffer from similar defects, such as premature convergence, limited accuracy, or large computational time. Premature convergence is sometimes inevitable due to a lack of diversity: as the generations of particles or individuals evolve, they may lose their diversity and become similar to each other. To overcome this issue, we studied the diversity concept in GA and PSO algorithms. Diversity is essential for a healthy search, and mutations are the basic operators that provide the necessary variety within a population. After a close scrutiny of the diversity concept based on qualification and quantification studies, we developed new mutation strategies and operators to provide beneficial diversity within the population. We call this new approach multi-frequency vibrational GA or PSO. These algorithms were applied to different aeronautical engineering problems in order to study their efficiency: selected benchmark test functions, inverse design of a two-dimensional (2D) airfoil in subsonic flow, optimization of a 2D airfoil in transonic flow, path planning of an autonomous unmanned aerial vehicle (UAV) over a 3D terrain environment, 3D radar cross section minimization for a 3D air vehicle, and active flow control over a 2D airfoil. In these test cases, the new algorithms outperformed the current popular algorithms. The principal role of the multi-frequency approach was to determine which individuals or particles should be mutated, when they should be mutated, and which ones should be merged into the population. The new mutation operators, when combined with a mutation strategy and an artificial intelligence method such as neural networks or fuzzy logic, provided both local and global diversity during the reproduction phases of the generations. Additionally, the new approach also introduced random and controlled diversity. Since they remain population-based techniques, these methods were as robust as the plain GA or PSO algorithms.
Based on the results obtained, it was concluded that the variants of the present multi-frequency vibrational GA and PSO were efficient algorithms, since they successfully avoided all local optima within relatively short optimization cycles.
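As a rough illustration of the diversity-driven mutation idea (our reading, not the thesis's implementation), the sketch below applies a periodic, wave-like perturbation to the whole population and increases its amplitude whenever a simple diversity measure collapses; all thresholds and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def diversity(pop):
    # Mean distance of individuals from the population centroid.
    centroid = pop.mean(axis=0)
    return np.mean(np.linalg.norm(pop - centroid, axis=1))

def vibrational_mutation(pop, amplitude=0.1, period=10, generation=0):
    # Every `period` generations, perturb the whole population with a
    # wave-like random factor to restore diversity (global mutation).
    if generation % period == 0:
        wave = 1.0 + amplitude * (2.0 * rng.random(pop.shape) - 1.0)
        return pop * wave
    return pop

# Toy usage inside a GA loop: mutate more aggressively when diversity collapses.
pop = rng.random((30, 5))
for gen in range(100):
    # ... selection and crossover would go here ...
    amp = 0.3 if diversity(pop) < 0.1 else 0.05   # controlled diversity injection
    pop = vibrational_mutation(pop, amplitude=amp, period=10, generation=gen)
```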
Optimization techniques applied to passive measures for in-orbit spacecraft survivability
NASA Technical Reports Server (NTRS)
Mog, Robert A.; Price, D. Marvin
1991-01-01
Spacecraft designers have always been concerned about the effects of meteoroid impacts on mission safety. The engineering solution to this problem has generally been to erect a bumper or shield placed outboard from the spacecraft wall to disrupt/deflect the incoming projectiles. Spacecraft designers have a number of tools at their disposal to aid in the design process. These include hypervelocity impact testing, analytic impact predictors, and hydrodynamic codes. Analytic impact predictors generally provide the best quick-look estimate of design tradeoffs. The most complete way to determine the characteristics of an analytic impact predictor is through optimization of the protective structures design problem formulated with the predictor of interest. Space Station Freedom protective structures design insight is provided through the coupling of design/material requirements, hypervelocity impact phenomenology, meteoroid and space debris environment sensitivities, optimization techniques and operations research strategies, and mission scenarios. Major results are presented.
When more of the same is better
NASA Astrophysics Data System (ADS)
Fontanari, José F.
2016-01-01
Problem solving (e.g., drug design, traffic engineering, software development) by task forces represents a substantial portion of the economy of developed countries. Here we use an agent-based model of cooperative problem-solving systems to study the influence of diversity on the performance of a task force. We assume that agents cooperate by exchanging information on their partial success and use that information to imitate the most successful agent in the system, the model. The agents differ only in their propensities to copy the model. We find that, for easy tasks, the optimal organization is a homogeneous system composed of agents with the highest possible copy propensities. For difficult tasks, we find that diversity can prevent the system from being trapped in sub-optimal solutions. However, when the system size is adjusted to maximize performance, the homogeneous systems outperform the heterogeneous systems, i.e., for optimal performance, sameness should be preferred to diversity.
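A toy rendering of the imitation dynamics may help; the bit-string task, parameter values, and copy rule below are illustrative stand-ins for the model in the paper, not a reproduction of it.

```python
import random

random.seed(1)
N_BITS, N_AGENTS, MAX_STEPS = 20, 10, 20000

# Agents search for a hidden target bit string; at each step an agent either
# flips a random bit (independent trial and error) or, with its own copy
# propensity p, imitates one differing bit of the current best agent (the "model").
TARGET = [random.randint(0, 1) for _ in range(N_BITS)]

def score(s):
    return sum(1 for a, b in zip(s, TARGET) if a == b)

def step(agents, propensities):
    model = max(agents, key=score)
    for agent, p in zip(agents, propensities):
        if agent is model or random.random() > p:
            i = random.randrange(N_BITS)
            agent[i] ^= 1                          # blind exploration
        else:
            diff = [i for i in range(N_BITS) if agent[i] != model[i]]
            if diff:
                i = random.choice(diff)
                agent[i] = model[i]                # imitation of the model
    return max(score(a) for a in agents)

agents = [[random.randint(0, 1) for _ in range(N_BITS)] for _ in range(N_AGENTS)]
propensities = [0.9] * N_AGENTS      # a homogeneous, strongly imitative task force
for t in range(MAX_STEPS):
    best = step(agents, propensities)
    if best == N_BITS:
        break
print("best score", best, "after", t + 1, "steps")
```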
NASA Astrophysics Data System (ADS)
Liu, Shibing; Yang, Bingen
2017-10-01
Flexible multistage rotor systems with water-lubricated rubber bearings (WLRBs) have a variety of engineering applications. Filling a technical gap in the literature, this effort proposes a method of optimal bearing placement that minimizes the vibration amplitude of a WLRB-supported flexible rotor system with a minimum number of bearings. In the development, a new model of WLRBs and a distributed transfer function formulation are used to define a mixed continuous-and-discrete optimization problem. To deal with the case of an uncertain number of WLRBs in rotor design, a virtual bearing method is devised. Solution of the optimization problem by a real-coded genetic algorithm yields the locations and lengths of the water-lubricated rubber bearings by which the prescribed operational requirements for the rotor system are satisfied. The proposed method is applicable either to the preliminary design of a new rotor system in which the number of bearings is not known in advance or to the redesign of an existing rotor system with a given number of bearings. Numerical examples show that the proposed optimal bearing placement is efficient, accurate and versatile in different design cases.
Resolvent analysis of shear flows using One-Way Navier-Stokes equations
NASA Astrophysics Data System (ADS)
Rigas, Georgios; Schmidt, Oliver; Towne, Aaron; Colonius, Tim
2017-11-01
For three-dimensional flows, questions of stability, receptivity, secondary flows, and coherent structures require the solution of large partial-derivative eigenvalue problems. Reduced-order approximations are thus required for engineering prediction since these problems are often computationally intractable or prohibitively expensive. For spatially slowly evolving flows, such as jets and boundary layers, the One-Way Navier-Stokes (OWNS) equations permit a fast spatial marching procedure that results in a huge reduction in computational cost. Here, an adjoint-based optimization framework is proposed and demonstrated for calculating optimal boundary conditions and optimal volumetric forcing. The corresponding optimal response modes are validated against modes obtained in terms of global resolvent analysis. For laminar base flows, the optimal modes reveal modal and non-modal transition mechanisms. For turbulent base flows, they predict the evolution of coherent structures in a statistical sense. Results from the application of the method to three-dimensional laminar wall-bounded flows and turbulent jets will be presented. This research was supported by the Office of Naval Research (N00014-16-1-2445) and Boeing Company (CT-BA-GTA-1).
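For reference, resolvent (input-output) analysis of the linearized dynamics, written generically as $\dot{q}' = Aq' + Bf'$ with output $y' = Cq'$, evaluates at each frequency (notation here is generic and not taken from the paper):
\[
\hat{y} = C\,(\mathrm{i}\omega I - A)^{-1} B\,\hat{f} \equiv R(\omega)\,\hat{f},
\qquad
\sigma_1(\omega) = \max_{\hat{f}\neq 0}\frac{\|R(\omega)\hat{f}\|}{\|\hat{f}\|},
\]
with the optimal forcing and response modes given by the leading right and left singular vectors of the resolvent operator $R(\omega)$; the OWNS marching and adjoint framework in the abstract approximate these quantities without forming the full global operator.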
Essays on variational approximation techniques for stochastic optimization problems
NASA Astrophysics Data System (ADS)
Deride Silva, Julio A.
This dissertation presents five essays on approximation and modeling techniques, based on variational analysis, applied to stochastic optimization problems. It is divided into two parts, where the first is devoted to equilibrium problems and maxinf optimization, and the second corresponds to two essays in statistics and uncertainty modeling. Stochastic optimization lies at the core of this research as we were interested in relevant equilibrium applications that contain an uncertain component, and the design of a solution strategy. In addition, every stochastic optimization problem relies heavily on the underlying probability distribution that models the uncertainty. We studied these distributions, in particular, their design process and theoretical properties such as their convergence. Finally, the last aspect of stochastic optimization that we covered is the scenario creation problem, in which we described a procedure based on a probabilistic model to create scenarios for the applied problem of power estimation of renewable energies. In the first part, Equilibrium problems and maxinf optimization, we considered three Walrasian equilibrium problems: from economics, we studied a stochastic general equilibrium problem in a pure exchange economy, described in Chapter 3, and a stochastic general equilibrium with financial contracts, in Chapter 4; finally from engineering, we studied an infrastructure planning problem in Chapter 5. We stated these problems as belonging to the maxinf optimization class and, in each instance, we provided an approximation scheme based on the notion of lopsided convergence and non-concave duality. This strategy is the foundation of the augmented Walrasian algorithm, whose convergence is guaranteed by lopsided convergence, that was implemented computationally, obtaining numerical results for relevant examples. The second part, Essays about statistics and uncertainty modeling, contains two essays covering a convergence problem for a sequence of estimators, and a problem for creating probabilistic scenarios on renewable energies estimation. In Chapter 7 we re-visited one of the "folk theorems" in statistics, where a family of Bayes estimators under 0-1 loss functions is claimed to converge to the maximum a posteriori estimator. This assertion is studied under the scope of the hypo-convergence theory, and the density functions are included in the class of upper semicontinuous functions. We conclude this chapter with an example in which the convergence does not hold true, and we provided sufficient conditions that guarantee convergence. The last chapter, Chapter 8, addresses the important topic of creating probabilistic scenarios for solar power generation. Scenarios are a fundamental input for the stochastic optimization problem of energy dispatch, especially when incorporating renewables. We proposed a model designed to capture the constraints induced by physical characteristics of the variables based on the application of an epi-spline density estimation along with a copula estimation, in order to account for partial correlations between variables.
NASA Technical Reports Server (NTRS)
Rubbert, P. E.
1978-01-01
The commercial airplane builder's viewpoint on the important issues involved in the development of improved computational aerodynamics tools, such as powerful computers optimized for fluid flow problems, is presented. The primary user of computational aerodynamics in a commercial aircraft company is the design engineer, who is concerned with solving practical engineering problems. From his viewpoint, the development of program interfaces and pre- and post-processing capability for new computational methods is just as important as the algorithms and machine architecture. As more and more details of the entire flow field are computed, the visibility of the output data becomes a major problem, which is then doubled when a design capability is added. The user must be able to see, understand, and interpret the results calculated. Enormous costs are expended because of the need to work with programs having only primitive user interfaces.
Structural Optimization Methodology for Rotating Disks of Aircraft Engines
NASA Technical Reports Server (NTRS)
Armand, Sasan C.
1995-01-01
In support of the preliminary evaluation of various engine technologies, a methodology has been developed for structurally designing the rotating disks of an aircraft engine. The structural design methodology, along with a previously derived methodology for predicting low-cycle fatigue life, was implemented in a computer program. An interface computer program was also developed that gathers the required data from a flowpath analysis program (WATE) being used at NASA Lewis. The computer program developed for this study requires minimum interaction with the user, thus allowing engineers with varying backgrounds in aeropropulsion to successfully execute it. The stress analysis portion of the methodology and the computer program were verified by employing the finite element analysis method. The 10th-stage, high-pressure-compressor disk of the Energy Efficient Engine Program (E3) engine was used to verify the stress analysis; the differences between the stresses and displacements obtained from the computer program developed for this study and from the finite element analysis were all below 3 percent for the problem solved. The computer program developed for this study was employed to structurally optimize the rotating disks of the E3 high-pressure compressor. The rotating disks designed by the computer program in this study were approximately 26 percent lighter than calculated from the E3 drawings. The methodology is presented herein.
Active stability augmentation of large space structures: A stochastic control problem
NASA Technical Reports Server (NTRS)
Balakrishnan, A. V.
1987-01-01
A problem in SCOLE is that of slewing an offset antenna on a long flexible beam-like truss attached to the space shuttle, with rather stringent pointing accuracy requirements. The relevant methodology aspects in robust feedback-control design for stability augmentation of the beam using on-board sensors is examined. It is framed as a stochastic control problem, boundary control of a distributed parameter system described by partial differential equations. While the framework is mathematical, the emphasis is still on an engineering solution. An abstract mathematical formulation is developed as a nonlinear wave equation in a Hilbert space. That the system is controllable is shown and a feedback control law that is robust in the sense that it does not require quantitative knowledge of system parameters is developed. The stochastic control problem that arises in instrumenting this law using appropriate sensors is treated. Using an engineering first approximation which is valid for small damping, formulas for optimal choice of the control gain are developed.
Optimizing Dynamical Network Structure for Pinning Control
NASA Astrophysics Data System (ADS)
Orouskhani, Yasin; Jalili, Mahdi; Yu, Xinghuo
2016-04-01
Controlling dynamics of a network from any initial state to a final desired state has many applications in different disciplines from engineering to biology and social sciences. In this work, we optimize the network structure for pinning control. The problem is formulated as four optimization tasks: i) optimizing the locations of driver nodes, ii) optimizing the feedback gains, iii) optimizing simultaneously the locations of driver nodes and feedback gains, and iv) optimizing the connection weights. A newly developed population-based optimization technique (cat swarm optimization) is used as the optimization method. In order to verify the methods, we use both real-world networks, and model scale-free and small-world networks. Extensive simulation results show that the optimal placement of driver nodes significantly outperforms heuristic methods including placing drivers based on various centrality measures (degree, betweenness, closeness and clustering coefficient). The pinning controllability is further improved by optimizing the feedback gains. We also show that one can significantly improve the controllability by optimizing the connection weights.
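One common surrogate for pinning controllability (assumed here; the paper's exact objective and its cat swarm optimizer are not reproduced) is the smallest eigenvalue of the "grounded" Laplacian L + diag(g d), where d marks the pinned driver nodes and g is the feedback gain: larger values indicate easier pinning. The sketch below, assuming networkx and numpy are available, exhaustively scores small driver sets with that index in place of a metaheuristic.

```python
import itertools
import networkx as nx
import numpy as np

def pinning_index(L, drivers, gain=10.0):
    # Smallest eigenvalue of the grounded Laplacian: a pinning-controllability proxy.
    d = np.zeros(L.shape[0])
    d[list(drivers)] = gain
    return np.linalg.eigvalsh(L + np.diag(d))[0]

G = nx.barabasi_albert_graph(30, 2, seed=0)          # model scale-free network
L = nx.laplacian_matrix(G).toarray().astype(float)

# Exhaustive search over 3-node driver sets (a metaheuristic would replace this).
best = max(itertools.combinations(range(30), 3),
           key=lambda drivers: pinning_index(L, drivers))
print("best driver set:", best, "index:", round(pinning_index(L, best), 4))
```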
Optimal Sensor Layouts in Underwater Locomotory Systems
NASA Astrophysics Data System (ADS)
Colvert, Brendan; Kanso, Eva
2015-11-01
Retrieving and understanding global flow characteristics from local sensory measurements is a challenging but extremely relevant problem in fields such as defense, robotics, and biomimetics. It is an inverse problem in that the goal is to translate local information into global flow properties. In this talk we present techniques for optimization of sensory layouts within the context of an idealized underwater locomotory system. Using techniques from fluid mechanics and control theory, we show that, under certain conditions, local measurements can inform the submerged body about its orientation relative to the ambient flow, and allow it to recognize local properties of shear flows. We conclude by commenting on the relevance of these findings to underwater navigation in engineered systems and live organisms.
Back analysis of geomechanical parameters in underground engineering using artificial bee colony.
Zhu, Changxing; Zhao, Hongbo; Zhao, Ming
2014-01-01
Accurate geomechanical parameters are critical in tunnel excavation, design, and support. In this paper, a displacement back analysis based on the artificial bee colony (ABC) algorithm is proposed to identify geomechanical parameters from monitored displacements. ABC is used as a global optimization algorithm to search for the unknown geomechanical parameters in problems for which an analytical solution is available. For problems without an analytical solution, optimal back analysis is time-consuming, so a least squares support vector machine (LSSVM) is used to build the relationship between the unknown geomechanical parameters and the displacements and thereby improve the efficiency of the back analysis. The proposed method was applied to a tunnel with an analytical solution and to a tunnel without one. The results show that the proposed method is feasible.
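A compact version of the ABC-based back analysis is sketched below; the closed-form "forward model" (a made-up two-parameter displacement expression), the measurement points, and all algorithm settings are our own illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical closed-form displacement "forward model" with two lumped
# geomechanical parameters; a real study would use the tunnel's analytical
# solution or an LSSVM surrogate of a numerical model in its place.
def forward(params):
    c1, c2 = params
    r = np.array([1.0, 1.5, 2.0, 3.0])          # monitoring points
    return c1 / r + c2 / r**2

observed = forward(np.array([2.0, 0.5])) + rng.normal(0.0, 1e-3, 4)
lb, ub = np.array([0.1, 0.1]), np.array([10.0, 10.0])

def misfit(params):
    return float(np.sum((forward(params) - observed) ** 2))

# Minimal artificial bee colony: employed, onlooker, and scout phases.
n_food, limit, iters = 20, 30, 200
foods = lb + rng.random((n_food, 2)) * (ub - lb)
costs = np.array([misfit(f) for f in foods])
trials = np.zeros(n_food)

def neighbour(i):
    k, j = rng.integers(n_food), rng.integers(2)
    v = foods[i].copy()
    v[j] += rng.uniform(-1.0, 1.0) * (foods[i, j] - foods[k, j])
    return np.clip(v, lb, ub)

for _ in range(iters):
    for phase in ("employed", "onlooker"):
        if phase == "employed":
            candidates = np.arange(n_food)
        else:                                     # onlookers prefer fitter sources
            fit = 1.0 / (1.0 + costs)
            candidates = rng.choice(n_food, size=n_food, p=fit / fit.sum())
        for i in candidates:
            v = neighbour(i)
            c = misfit(v)
            if c < costs[i]:
                foods[i], costs[i], trials[i] = v, c, 0
            else:
                trials[i] += 1
    worst = int(np.argmax(trials))                # scout: abandon exhausted sources
    if trials[worst] > limit:
        foods[worst] = lb + rng.random(2) * (ub - lb)
        costs[worst], trials[worst] = misfit(foods[worst]), 0

print("identified parameters:", foods[np.argmin(costs)])
```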
Optimization design of hydroturbine rotors according to the efficiency-strength criteria
NASA Astrophysics Data System (ADS)
Bannikov, D. V.; Yesipov, D. V.; Cherny, S. G.; Chirkov, D. V.
2010-12-01
The hydroturbine runner design procedure [1] is optimized by efficient methods for calculating the head loss in the entire flow-through part of the turbine and the deformation state of the blade. Energy losses are found by modelling the spatial turbulent flow and by engineering semi-empirical formulae. The state of deformation is determined from the solution of the linear elasticity problem for the isolated blade under hydrodynamic pressure using the boundary element method. With the use of the proposed system, the design problem for a turbine runner with a capacity of 640 MW providing a prescribed dependence of efficiency on the turbine operating mode (efficiency criterion) is solved. The arising stresses do not exceed the critical value (strength criterion).
New Approaches to HSCT Multidisciplinary Design and Optimization
NASA Technical Reports Server (NTRS)
Schrage, D. P.; Craig, J. I.; Fulton, R. E.; Mistree, F.
1996-01-01
The successful development of a capable and economically viable high speed civil transport (HSCT) is perhaps one of the most challenging tasks in aeronautics for the next two decades. At its heart it is fundamentally the design of a complex engineered system that has significant societal, environmental and political impacts. As such it presents a formidable challenge to all areas of aeronautics, and it is therefore a particularly appropriate subject for research in multidisciplinary design and optimization (MDO). In fact, it is starkly clear that without the availability of powerful and versatile multidisciplinary design, analysis and optimization methods, the design, construction and operation of an HSCT simply cannot be achieved. The present research project is focused on the development and evaluation of MDO methods that, while broader and more general in scope, are particularly appropriate to the HSCT design problem. The research aims not only to develop the basic methods but also to apply them to relevant examples from the NASA HSCT R&D effort. The research involves a three-year effort aimed first at the description of the HSCT MDO problem, next at the development of the problem formulation, and finally at the solution of a significant portion of the problem.
NASA Astrophysics Data System (ADS)
Qi, Bin; Guo, Linli; Zhang, Zhixian
2016-07-01
Space life science and life support engineering are prominent problems in manned deep space exploration missions. Some typical problems are discussed in this paper, including the long-term life support problem and the physiological effects of, and defenses against, the varying extraterrestrial environment. The causes of these problems are analyzed. To solve these problems, research on space life science and space medical engineering should be conducted. In the aspect of space life science, the study of space gravity biology should focus on the character of physiological effects under long-term zero gravity, the co-regulation of physiological systems, the impact on stem cells in space, etc. The study of space radiation biology should focus on the target and non-target effects of radiation, the carcinogenicity of radiation, the spread of radiation damage in living systems, etc. The study of the basic biology of space life support systems should focus on the theoretical basis and simulation modes for constructing the life support system, the filtration and combination of species, regulation and optimization methods for the life support system, etc. In the aspect of space medical engineering, the study of bio-regenerative life support technology should focus on plant cultivation technology, animal-protein production technology, waste treatment technology, etc. The study of varying-gravity defense technology should focus on biological and medical measures to defend against varying-gravity effects, the generation and evaluation of artificial gravity, etc. The study of extraterrestrial environment defense technology should focus on risk evaluation of radiation, monitoring and defending against radiation, compound prevention and removal technology for dust, etc. Finally, the case of a manned lunar base is analyzed, for which effective schemes for the life support system, defense against varying gravity, and defense against the extraterrestrial environment are advanced. The points in this paper can be used as references for intensive study of these key technologies.
Solving fuzzy shortest path problem by genetic algorithm
NASA Astrophysics Data System (ADS)
Syarif, A.; Muludi, K.; Adrian, R.; Gen, M.
2018-03-01
Shortest Path Problem (SPP) is known as one of the well-studied fields in the area of Operations Research and Mathematical Optimization. It has been applied in many engineering and management designs. The objective is usually to determine the path(s) in a network with minimum total cost or traveling time. In the past, the cost value for each arc was usually assigned or estimated as a deterministic value. For some real world applications, however, it is often difficult to determine the cost value properly. One way of handling such uncertainty in decision making is by introducing a fuzzy approach. In this situation, it becomes difficult to solve the problem optimally. This paper presents investigations on the application of a Genetic Algorithm (GA) to a new SPP model in which the cost values are represented as Triangular Fuzzy Numbers (TFN). We adopt the concept of ranking fuzzy numbers to determine how good the solutions are. Here, by specifying his or her degree of optimism, the decision maker can determine the range of the objective value. This would be very valuable for decision support systems in real world applications. Simulation experiments were carried out by modifying several test problems with 10-25 nodes. It is noted that the proposed approach is capable of attaining a good solution with different degrees of optimism for the tested problems.
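One widely used way to rank triangular fuzzy numbers with a decision maker's degree of optimism is a total-integral-value index; we assume something of that kind here since the abstract does not spell out its ranking. The sketch below only illustrates how the preferred path can change with the optimism degree.

```python
from dataclasses import dataclass

@dataclass
class TFN:
    """Triangular fuzzy number with lower (a), modal (b) and upper (c) values."""
    a: float
    b: float
    c: float

    def __add__(self, other):
        return TFN(self.a + other.a, self.b + other.b, self.c + other.c)

    def rank(self, optimism=0.5):
        # Total-integral-value style crisp score; `optimism` in [0, 1] weights
        # the upper part of the membership function (Liou-Wang style index).
        return 0.5 * (optimism * (self.b + self.c)
                      + (1.0 - optimism) * (self.a + self.b))

# Two candidate paths with fuzzy total costs: which one is "shorter" depends
# on the decision maker's degree of optimism.
path1 = TFN(8, 10, 15)   # e.g. the sum of the arc TFNs along path 1
path2 = TFN(9, 11, 13)
for w in (0.0, 0.5, 1.0):
    better = "path1" if path1.rank(w) < path2.rank(w) else "path2"
    print(f"optimism={w}: prefer {better}")
```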
NASA Astrophysics Data System (ADS)
Yin, Hui; Yu, Dejie; Yin, Shengwen; Xia, Baizhan
2018-03-01
Conventional engineering optimization problems considering uncertainties are based on a probabilistic model. However, the probabilistic model may be unavailable because of a lack of sufficient objective information to construct precise probability distributions of the uncertainties. This paper proposes a possibility-based robust design optimization (PBRDO) framework for the uncertain structural-acoustic system based on the fuzzy set model, which can be constructed from expert opinions. The objective of robust design is to optimize the expectation and variability of system performance with respect to uncertainties simultaneously. In the proposed PBRDO, the entropy of the fuzzy system response is used as the variability index; the weighted sum of the entropy and expectation of the fuzzy response is used as the objective function, and the constraints are established in the possibility context. The computations for the constraints and objective function of PBRDO are a triple-loop and a double-loop nested problem, respectively, whose computational costs are considerable. To improve the computational efficiency, the target performance approach is introduced to transform the calculation of the constraints into a double-loop nested problem. To further improve the computational efficiency, a Chebyshev fuzzy method (CFM) based on the Chebyshev polynomials is proposed to estimate the objective function, and the Chebyshev interval method (CIM) is introduced to estimate the constraints, thereby transforming the optimization problem into a single-loop one. Numerical results on a shell structural-acoustic system verify the effectiveness and feasibility of the proposed methods.
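In symbols, the robust design problem described above can be sketched (with generic notation that is ours, not the paper's) as
\[
\min_{\mathbf{d}}\; w_1\,\mathbb{E}\!\left[\tilde f(\mathbf{d},\tilde{\boldsymbol\xi})\right]
 + w_2\, H\!\left[\tilde f(\mathbf{d},\tilde{\boldsymbol\xi})\right]
\quad\text{s.t.}\quad
\operatorname{Pos}\!\left\{\tilde g_j(\mathbf{d},\tilde{\boldsymbol\xi})\le 0\right\}\ge \alpha_j,\quad j=1,\dots,m,
\]
where the expectation and entropy of the fuzzy response play the roles of the performance and variability terms and the constraints are possibility (Pos) requirements; the Chebyshev expansions then stand in for the inner fuzzy-propagation loops.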
Intrinsic optimization using stochastic nanomagnets
Sutton, Brian; Camsari, Kerem Yunus; Behin-Aein, Behtash; Datta, Supriyo
2017-01-01
This paper draws attention to a hardware system which can be engineered so that its intrinsic physics is described by the generalized Ising model and can encode the solution to many important NP-hard problems as its ground state. The basic constituents are stochastic nanomagnets which switch randomly between the ±1 Ising states and can be monitored continuously with standard electronics. Their mutual interactions can be short or long range, and their strengths can be reconfigured as needed to solve specific problems and to anneal the system at room temperature. The natural laws of statistical mechanics guide the network of stochastic nanomagnets at GHz speeds through the collective states with an emphasis on the low energy states that represent optimal solutions. As proof-of-concept, we present simulation results for standard NP-complete examples including a 16-city traveling salesman problem using experimentally benchmarked models for spin-transfer torque driven stochastic nanomagnets. PMID:28295053
Intrinsic optimization using stochastic nanomagnets
NASA Astrophysics Data System (ADS)
Sutton, Brian; Camsari, Kerem Yunus; Behin-Aein, Behtash; Datta, Supriyo
2017-03-01
This paper draws attention to a hardware system which can be engineered so that its intrinsic physics is described by the generalized Ising model and can encode the solution to many important NP-hard problems as its ground state. The basic constituents are stochastic nanomagnets which switch randomly between the ±1 Ising states and can be monitored continuously with standard electronics. Their mutual interactions can be short or long range, and their strengths can be reconfigured as needed to solve specific problems and to anneal the system at room temperature. The natural laws of statistical mechanics guide the network of stochastic nanomagnets at GHz speeds through the collective states with an emphasis on the low energy states that represent optimal solutions. As proof-of-concept, we present simulation results for standard NP-complete examples including a 16-city traveling salesman problem using experimentally benchmarked models for spin-transfer torque driven stochastic nanomagnets.
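The behavioral model used in the authors' related work for such stochastic units ("p-bits") is m_i = sgn(tanh(I_i) - r), with r drawn uniformly from (-1, 1) and I_i the weighted input from the other units. The sketch below emulates that update in software and anneals a small max-cut instance; the coupling matrix, annealing schedule, and problem are illustrative choices, not the paper's benchmark.

```python
import numpy as np

rng = np.random.default_rng(3)

# Behavioural model of a network of stochastic units: each takes the value
# sign(tanh(I_i) - r) with r uniform in (-1, 1). The couplings J encode the
# optimization problem; low-energy Ising states are visited most often.
n = 8
J = np.zeros((n, n))
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 4), (1, 5), (2, 6), (3, 7), (4, 6), (5, 7)]
for i, j in edges:
    J[i, j] = J[j, i] = -1.0          # antiferromagnetic coupling favours a cut

def energy(m):
    return -0.5 * m @ J @ m

m = rng.choice([-1.0, 1.0], size=n)
best, best_e = m.copy(), energy(m)
for step in range(5000):
    beta = 0.1 + 3.0 * step / 5000            # anneal by increasing the gain
    i = rng.integers(n)
    I_i = beta * (J[i] @ m)
    m[i] = np.sign(np.tanh(I_i) - rng.uniform(-1.0, 1.0))
    if energy(m) < best_e:
        best, best_e = m.copy(), energy(m)

cut = sum(1 for i, j in edges if best[i] != best[j])
print("cut edges:", cut, "of", len(edges), "energy:", best_e)
```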
NASA Astrophysics Data System (ADS)
Modgil, Girish A.
Gas turbine engines for aerospace applications have evolved dramatically over the last 50 years through the constant pursuit for better specific fuel consumption, higher thrust-to-weight ratio, lower noise and emissions all while maintaining reliability and affordability. An important step in enabling these improvements is a forced response aeromechanics analysis involving structural dynamics and aerodynamics of the turbine. It is well documented that forced response vibration is a very critical problem in aircraft engine design, causing High Cycle Fatigue (HCF). Pushing the envelope on engine design has led to increased forced response problems and subsequently an increased risk of HCF failure. Forced response analysis is used to assess design feasibility of turbine blades for HCF using a material limit boundary set by the Goodman Diagram envelope that combines the effects of steady and vibratory stresses. Forced response analysis is computationally expensive, time consuming and requires multi-domain experts to finalize a result. As a consequence, high-fidelity aeromechanics analysis is performed deterministically and is usually done at the end of the blade design process when it is very costly to make significant changes to geometry or aerodynamic design. To address uncertainties in the system (engine operating point, temperature distribution, mistuning, etc.) and variability in material properties, designers apply conservative safety factors in the traditional deterministic approach, which leads to bulky designs. Moreover, using a deterministic approach does not provide a calculated risk of HCF failure. This thesis describes a process that begins with the optimal aerodynamic design of a turbomachinery blade developed using surrogate models of high-fidelity analyses. The resulting optimal blade undergoes probabilistic evaluation to generate aeromechanics results that provide a calculated likelihood of failure from HCF. An existing Rolls-Royce High Work Single Stage (HWSS) turbine blisk provides a baseline to demonstrate the process. The generalized polynomial chaos (gPC) toolbox which was developed includes sampling methods and constructs polynomial approximations. The toolbox provides not only the means for uncertainty quantification of the final blade design, but also facilitates construction of the surrogate models used for the blade optimization. This paper shows that gPC, with a small number of samples, achieves very fast rates of convergence and high accuracy in describing probability distributions without loss of detail in the tails. First, an optimization problem maximizes stage efficiency using turbine aerodynamic design rules as constraints; the function evaluations for this optimization are surrogate models from detailed 3D steady Computational Fluid Dynamics (CFD) analyses. The resulting optimal shape provides a starting point for the 3D high-fidelity aeromechanics (unsteady CFD and 3D Finite Element Analysis (FEA)) UQ study assuming three uncertain input parameters. This investigation seeks to find the steady and vibratory stresses associated with the first torsion mode for the HWSS turbine blisk near maximum operating speed of the engine. Using gPC to provide uncertainty estimates of the steady and vibratory stresses enables the creation of a Probabilistic Goodman Diagram, which - to the authors' best knowledge - is the first of its kind using high fidelity aeromechanics for turbomachinery blades.
The Probabilistic Goodman Diagram enables turbine blade designers to make more informed design decisions and it allows the aeromechanics expert to assess quantitatively the risk associated with HCF for any mode crossing based on high fidelity simulations.
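The gPC toolbox itself is not available to us, but the core regression-based construction can be illustrated with a one-dimensional stand-in: fit a probabilists' Hermite expansion to a few samples of an "expensive" model, then query the cheap surrogate for statistics and tail probabilities. The model function and truncation order below are arbitrary choices for illustration.

```python
import numpy as np
from numpy.polynomial import hermite_e as He

rng = np.random.default_rng(4)

# Stand-in for the expensive aeromechanics analysis: a smooth function of one
# standard-normal input (e.g. a normalized temperature uncertainty).
def model(xi):
    return np.exp(0.3 * xi) + 0.1 * xi**2

# Build a degree-5 gPC surrogate by least-squares regression on a few samples.
degree, n_samples = 5, 40
xi = rng.standard_normal(n_samples)
Psi = He.hermevander(xi, degree)              # probabilists' Hermite basis
coeffs, *_ = np.linalg.lstsq(Psi, model(xi), rcond=None)

# The surrogate is cheap, so output statistics and tail probabilities can be
# estimated from a very large number of evaluations.
xi_mc = rng.standard_normal(1_000_000)
y_mc = He.hermevander(xi_mc, degree) @ coeffs
print("surrogate mean:", y_mc.mean(), " P(y > 2):", (y_mc > 2.0).mean())
print("direct Monte Carlo mean for comparison:", model(xi_mc).mean())
```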
New Approaches to HSCT Multidisciplinary Design and Optimization
NASA Technical Reports Server (NTRS)
Schrage, Daniel P.; Craig, James I.; Fulton, Robert E.; Mistree, Farrokh
1999-01-01
New approaches to MDO have been developed and demonstrated during this project on a particularly challenging aeronautics problem: HSCT Aeroelastic Wing Design. Tackling this problem required the integration of resources and collaboration from three Georgia Tech laboratories: ASDL, SDL, and PPRL, along with close coordination and participation from industry. Its success can also be attributed to the close interaction and involvement of fellows from the NASA Multidisciplinary Analysis and Optimization (MAO) program, which was going on in parallel and provided additional resources to work the very complex, multidisciplinary problem, along with the methods being developed. The development of the Integrated Design Engineering Simulator (IDES) and its initial demonstration is a necessary first step in transitioning the methods and tools developed to larger industrial-sized problems of interest. It also provides a framework for the implementation and demonstration of the methodology. Attachment: Appendix A - List of publications. Appendix B - Year 1 report. Appendix C - Year 2 report. Appendix D - Year 3 report. Appendix E - accompanying CDROM.
Development of a Composite Tailoring Procedure for Airplane Wings
NASA Technical Reports Server (NTRS)
Chattopadhyay, Aditi
2000-01-01
The quest for finding optimum solutions to engineering problems has existed for a long time. In modern times, the development of optimization as a branch of applied mathematics is regarded to have originated in the works of Newton, Bernoulli and Euler. Venkayya has presented a historical perspective on optimization in [1]. The term 'optimization' is defined by Ashley [2] as a procedure "...which attempts to choose the variables in a design process so as formally to achieve the best value of some performance index while not violating any of the associated conditions or constraints". Ashley presented an extensive review of practical applications of optimization in the aeronautical field till about 1980 [2]. It was noted that there existed an enormous amount of published literature in the field of optimization, but its practical applications in industry were very limited. Over the past 15 years, though, optimization has been widely applied to address practical problems in aerospace design [3-5]. The design of high performance aerospace systems is a complex task. It involves the integration of several disciplines such as aerodynamics, structural analysis, dynamics, and aeroelasticity. The problem involves multiple objectives and constraints pertaining to the design criteria associated with each of these disciplines. Many important trade-offs exist between the parameters involved which are used to define the different disciplines. Therefore, the development of multidisciplinary design optimization (MDO) techniques, in which different disciplines and design parameters are coupled into a closed loop numerical procedure, seems appropriate to address such a complex problem. The importance of MDO in successful design of aerospace systems has been long recognized. Recent developments in this field have been surveyed by Sobieszczanski-Sobieski and Haftka [6].
Systems metabolic engineering of microorganisms for natural and non-natural chemicals.
Lee, Jeong Wook; Na, Dokyun; Park, Jong Myoung; Lee, Joungmin; Choi, Sol; Lee, Sang Yup
2012-05-17
Growing concerns over limited fossil resources and associated environmental problems are motivating the development of sustainable processes for the production of chemicals, fuels and materials from renewable resources. Metabolic engineering is a key enabling technology for transforming microorganisms into efficient cell factories for these compounds. Systems metabolic engineering, which incorporates the concepts and techniques of systems biology, synthetic biology and evolutionary engineering at the systems level, offers a conceptual and technological framework to speed the creation of new metabolic enzymes and pathways or the modification of existing pathways for the optimal production of desired products. Here we discuss the general strategies of systems metabolic engineering and examples of its application and offer insights as to when and how each of the different strategies should be used. Finally, we highlight the limitations and challenges to be overcome for the systems metabolic engineering of microorganisms at more advanced levels.
NASA Astrophysics Data System (ADS)
Zhao, Yingru; Chen, Jincan
A theoretical modeling approach is presented, which describes the behavior of a typical fuel cell-heat engine hybrid system in steady-state operating conditions based on an existing solid oxide fuel cell model, in order to provide useful fundamental design characteristics and to reveal potential critical problems. The different sources of irreversible losses, such as the electrochemical reaction, electric resistances, finite-rate heat transfer between the fuel cell and the heat engine, and heat leak from the fuel cell to the environment, are specified and investigated. Energy and entropy analyses are used to indicate the multiple irreversible losses and to assess the work potentials of the hybrid system. Expressions for the power output and efficiency of the hybrid system are derived, and the performance characteristics of the system are presented and discussed in detail. The effects of the design parameters and operating conditions on the system performance are studied numerically. It is found that there exist certain optimum criteria for some important parameters. The results obtained here may provide a theoretical basis for both the optimal design and operation of real fuel cell-heat engine hybrid systems. This new approach can be easily extended to other fuel cell hybrid systems to develop irreversible models suitable for the investigation and optimization of similar energy conversion settings and electrochemistry systems.
The Applied Mathematics for Power Systems (AMPS)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chertkov, Michael
2012-07-24
Increased deployment of new technologies, e.g., renewable generation and electric vehicles, is rapidly transforming electrical power networks by crossing previously distinct spatiotemporal scales and invalidating many traditional approaches for designing, analyzing, and operating power grids. This trend is expected to accelerate over the coming years, bringing the disruptive challenge of complexity, but also opportunities to deliver unprecedented efficiency and reliability. Our Applied Mathematics for Power Systems (AMPS) Center will discover, enable, and solve emerging mathematics challenges arising in power systems and, more generally, in complex engineered networks. We will develop foundational applied mathematics resulting in rigorous algorithms and simulation toolboxes for modern and future engineered networks. The AMPS Center deconstruction/reconstruction approach 'deconstructs' complex networks into sub-problems within non-separable spatiotemporal scales, a missing step in 20th century modeling of engineered networks. These sub-problems are addressed within the appropriate AMPS foundational pillar - complex systems, control theory, and optimization theory - and merged or 'reconstructed' at their boundaries into more general mathematical descriptions of complex engineered networks where important new questions are formulated and attacked. These two steps, iterated multiple times, will bridge the growing chasm between the legacy power grid and its future as a complex engineered network.
NASA Astrophysics Data System (ADS)
Lamont, L. A.; Chaar, L.; Toms, C.
2010-03-01
Interactive learning is beneficial to students in that it allows the continual development and testing of many skills. An interactive approach enables students to improve their technical capabilities, as well as developing both verbal and written communicative ability. Problem solving and communication skills are vital for engineering students; in the workplace they will be required to communicate with people of varying technical abilities and from different linguistic and engineering backgrounds. In this paper, a case study is presented that discusses how the traditional method of teaching control systems can be improved. 'Control systems' is a complex engineering topic requiring students to process an extended amount of mathematical formulae. MATLAB software, which enables students to interactively compare a range of possible combinations and analyse the optimal solution, is used to this end. It was found that students became more enthusiastic and interested when given ownership of their learning objectives. As well as improving the students' technical knowledge, other important engineering skills are also improved by introducing an interactive method of teaching.
Deterministic Design Optimization of Structures in OpenMDAO Framework
NASA Technical Reports Server (NTRS)
Coroneos, Rula M.; Pai, Shantaram S.
2012-01-01
Nonlinear programming algorithms play an important role in structural design optimization. Several such algorithms have been implemented in the OpenMDAO framework developed at NASA Glenn Research Center (GRC). OpenMDAO is an open source engineering analysis framework, written in Python, for analyzing and solving Multi-Disciplinary Analysis and Optimization (MDAO) problems. It provides a number of solvers and optimizers, referred to as components and drivers, which users can leverage to build new tools and processes quickly and efficiently. Users may download, use, modify, and distribute the OpenMDAO software at no cost. This paper summarizes the process involved in analyzing and optimizing structural components by utilizing the framework's structural solvers and several gradient-based optimizers along with a multi-objective genetic algorithm. For comparison purposes, the same structural components were analyzed and optimized using CometBoards, a NASA GRC developed code. The reliability and efficiency of the OpenMDAO framework were compared and are reported herein.
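A minimal sketch of the component/driver pattern the report relies on, assuming a recent OpenMDAO 3.x API and a generic one-variable sizing problem rather than the structural test cases of the report:

```python
import openmdao.api as om

# Wrap algebraic models in ExecComps, attach a gradient-based driver, and
# declare the design variable, objective, and constraint.
prob = om.Problem()
prob.model.add_subsystem(
    "sizing",
    om.ExecComp("mass = rho * a * t", rho=2700.0, a=0.5, t=0.01),
    promotes=["*"],
)
prob.model.add_subsystem(
    "stress_con",
    om.ExecComp("margin = sigma_allow - load / (a * t)",
                sigma_allow=2.5e8, load=1.0e5, a=0.5, t=0.01),
    promotes=["*"],
)

prob.driver = om.ScipyOptimizeDriver(optimizer="SLSQP")
prob.model.add_design_var("t", lower=1e-4, upper=0.05)
prob.model.add_objective("mass")
prob.model.add_constraint("margin", lower=0.0)

prob.setup()
prob.run_driver()
print("optimal thickness:", prob.get_val("t"), "mass:", prob.get_val("mass"))
```

Swapping in another driver (for example a genetic-algorithm driver) follows the same pattern, which is what allows the gradient-based and multi-objective studies in the report to share one model definition.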
A modular approach to large-scale design optimization of aerospace systems
NASA Astrophysics Data System (ADS)
Hwang, John T.
Gradient-based optimization and the adjoint method form a synergistic combination that enables the efficient solution of large-scale optimization problems. Though the gradient-based approach struggles with non-smooth or multi-modal problems, the capability to efficiently optimize up to tens of thousands of design variables provides a valuable design tool for exploring complex tradeoffs and finding unintuitive designs. However, the widespread adoption of gradient-based optimization is limited by the implementation challenges for computing derivatives efficiently and accurately, particularly in multidisciplinary and shape design problems. This thesis addresses these difficulties in two ways. First, to deal with the heterogeneity and integration challenges of multidisciplinary problems, this thesis presents a computational modeling framework that solves multidisciplinary systems and computes their derivatives in a semi-automated fashion. This framework is built upon a new mathematical formulation developed in this thesis that expresses any computational model as a system of algebraic equations and unifies all methods for computing derivatives using a single equation. The framework is applied to two engineering problems: the optimization of a nanosatellite with 7 disciplines and over 25,000 design variables; and simultaneous allocation and mission optimization for commercial aircraft involving 330 design variables, 12 of which are integer variables handled using the branch-and-bound method. In both cases, the framework makes large-scale optimization possible by reducing the implementation effort and code complexity. The second half of this thesis presents a differentiable parametrization of aircraft geometries and structures for high-fidelity shape optimization. Existing geometry parametrizations are not differentiable, or they are limited in the types of shape changes they allow. This is addressed by a novel parametrization that smoothly interpolates aircraft components, providing differentiability. An unstructured quadrilateral mesh generation algorithm is also developed to automate the creation of detailed meshes for aircraft structures, and a mesh convergence study is performed to verify that the quality of the mesh is maintained as it is refined. As a demonstration, high-fidelity aerostructural analysis is performed for two unconventional configurations with detailed structures included, and aerodynamic shape optimization is applied to the truss-braced wing, which finds and eliminates a shock in the region bounded by the struts and the wing.
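For context, the adjoint relation that makes such large design spaces tractable can be written, in generic notation for residuals R(x, y(x)) = 0 and objective F(x, y), as
\[
\frac{\mathrm{d}F}{\mathrm{d}x}=\frac{\partial F}{\partial x}-\boldsymbol{\psi}^{T}\frac{\partial R}{\partial x},
\qquad
\left(\frac{\partial R}{\partial y}\right)^{T}\boldsymbol{\psi}=\left(\frac{\partial F}{\partial y}\right)^{T},
\]
so a single adjoint solve yields the gradient with respect to all design variables, independent of their number; the unifying derivative equation mentioned in the abstract generalizes this idea across coupled disciplines.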
NASA Astrophysics Data System (ADS)
Bouter, Anton; Alderliesten, Tanja; Bosman, Peter A. N.
2017-02-01
Taking a multi-objective optimization approach to deformable image registration has recently gained attention, because such an approach removes the requirement of manually tuning the weights of all the involved objectives. Especially for problems that require large complex deformations, this is a non-trivial task. From the resulting Pareto set of solutions one can then much more insightfully select a registration outcome that is most suitable for the problem at hand. To serve as an internal optimization engine, currently used multi-objective algorithms are competent, but rather inefficient. In this paper we largely improve upon this by introducing a multi-objective real-valued adaptation of the recently introduced Gene-pool Optimal Mixing Evolutionary Algorithm (GOMEA) for discrete optimization. In this work, GOMEA is tailored specifically to the problem of deformable image registration to obtain substantially improved efficiency. This improvement is achieved by exploiting a key strength of GOMEA: iteratively improving small parts of solutions, allowing to faster exploit the impact of such updates on the objectives at hand through partial evaluations. We performed experiments on three registration problems. In particular, an artificial problem containing a disappearing structure, a pair of pre- and post-operative breast CT scans, and a pair of breast MRI scans acquired in prone and supine position were considered. Results show that compared to the previously used evolutionary algorithm, GOMEA obtains a speed-up of up to a factor of 1600 on the tested registration problems while achieving registration outcomes of similar quality.
Padhi, Radhakant; Unnikrishnan, Nishant; Wang, Xiaohua; Balakrishnan, S N
2006-12-01
Even though dynamic programming offers an optimal control solution in a state feedback form, the method is overwhelmed by computational and storage requirements. Approximate dynamic programming implemented with an Adaptive Critic (AC) neural network structure has evolved as a powerful alternative technique that obviates the need for excessive computations and storage requirements in solving optimal control problems. In this paper, an improvement to the AC architecture, called the "Single Network Adaptive Critic (SNAC)" is presented. This approach is applicable to a wide class of nonlinear systems where the optimal control (stationary) equation can be explicitly expressed in terms of the state and costate variables. The selection of this terminology is guided by the fact that it eliminates the use of one neural network (namely the action network) that is part of a typical dual network AC setup. As a consequence, the SNAC architecture offers three potential advantages: a simpler architecture, lesser computational load and elimination of the approximation error associated with the eliminated network. In order to demonstrate these benefits and the control synthesis technique using SNAC, two problems have been solved with the AC and SNAC approaches and their computational performances are compared. One of these problems is a real-life Micro-Electro-Mechanical-system (MEMS) problem, which demonstrates that the SNAC technique is applicable to complex engineering systems.
A Simple Method for Amplifying RNA Targets (SMART)
McCalla, Stephanie E.; Ong, Carmichael; Sarma, Aartik; Opal, Steven M.; Artenstein, Andrew W.; Tripathi, Anubhav
2012-01-01
We present a novel and simple method for amplifying RNA targets (named by its acronym, SMART), and for detection, using engineered amplification probes that overcome existing limitations of current RNA-based technologies. This system amplifies and detects optimal engineered ssDNA probes that hybridize to target RNA. The amplifiable probe-target RNA complex is captured on magnetic beads using a sequence-specific capture probe and is separated from unbound probe using a novel microfluidic technique. Hybridization sequences are not constrained as they are in conventional target-amplification reactions such as nucleic acid sequence amplification (NASBA). Our engineered ssDNA probe was amplified both off-chip and in a microchip reservoir at the end of the separation microchannel using isothermal NASBA. Optimal solution conditions for ssDNA amplification were investigated. Although KCl and MgCl2 are typically found in NASBA reactions, replacing 70 mmol/L of the 82 mmol/L total chloride ions with acetate resulted in optimal reaction conditions, particularly for low but clinically relevant probe concentrations (≤100 fmol/L). With the optimal probe design and solution conditions, we also successfully removed the initial heating step of NASBA, thus achieving a true isothermal reaction. The SMART assay using a synthetic model influenza DNA target sequence served as a fundamental demonstration of the efficacy of the capture and microfluidic separation system, thus bridging our system to a clinically relevant detection problem. PMID:22691910
The potential application of the blackboard model of problem solving to multidisciplinary design
NASA Technical Reports Server (NTRS)
Rogers, James L.
1989-01-01
The potential application of the blackboard model of problem solving to multidisciplinary design is discussed. Multidisciplinary design problems are complex, poorly structured, and lack a predetermined decision path from the initial starting point to the final solution. The final solution is achieved using data from different engineering disciplines. Ideally, for the final solution to be the optimum solution, there must be a significant amount of communication among the different disciplines plus intradisciplinary and interdisciplinary optimization. In reality, this is not what happens in today's sequential approach to multidisciplinary design. Therefore it is highly unlikely that the final solution is the true optimum solution from an interdisciplinary optimization standpoint. A multilevel decomposition approach is suggested as a technique to overcome the problems associated with the sequential approach, but no tool currently exists with which to fully implement this technique. A system based on the blackboard model of problem solving appears to be an ideal tool for implementing this technique because it offers an incremental problem solving approach that requires no a priori determined reasoning path. Thus it has the potential of finding a more optimum solution for the multidisciplinary design problems found in today's aerospace industries.
Defining a region of optimization based on engine usage data
Jiang, Li; Lee, Donghoon; Yilmaz, Hakan; Stefanopoulou, Anna
2015-08-04
Methods and systems for engine control optimization are provided. One or more operating conditions of a vehicle engine are detected. A value for each of a plurality of engine control parameters is determined based on the detected one or more operating conditions of the vehicle engine. A range of the most commonly detected operating conditions of the vehicle engine is identified and a region of optimization is defined based on the range of the most commonly detected operating conditions of the vehicle engine. The engine control optimization routine is initiated when the one or more operating conditions of the vehicle engine are within the defined region of optimization.
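A rough software analogue of the claimed procedure (the synthetic operating data, bin counts, and thresholds are all hypothetical) might bin logged speed/load points and gate the optimization routine on the most frequently visited region:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical logged operating points (engine speed in rpm, load fraction).
speed = rng.normal(2200, 400, 10_000).clip(800, 6000)
load = rng.beta(2, 3, 10_000)

# Bin the operating map and keep the most frequently visited cells; their
# bounding box defines the region in which the optimization routine may run.
counts, s_edges, l_edges = np.histogram2d(speed, load, bins=(20, 10))
hot = counts >= np.quantile(counts[counts > 0], 0.90)   # top ~10% of visited cells
rows, cols = np.where(hot)
region = {
    "speed": (s_edges[rows.min()], s_edges[rows.max() + 1]),
    "load": (l_edges[cols.min()], l_edges[cols.max() + 1]),
}

def may_optimize(current_speed, current_load):
    # Gate the optimization routine on the current operating condition.
    return (region["speed"][0] <= current_speed <= region["speed"][1]
            and region["load"][0] <= current_load <= region["load"][1])

print(region, may_optimize(2300, 0.4))
```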
2013-01-01
Background: Optimization procedures to identify gene knockouts for targeted biochemical overproduction have been widely used in modern metabolic engineering. The flux balance analysis (FBA) framework has provided conceptual simplifications for genome-scale dynamic analysis at steady states. Based on FBA, many current optimization methods for targeted bio-productions have been developed under the maximum cell growth assumption. The optimization problem to derive gene knockout strategies recently has been formulated as a bi-level programming problem in OptKnock for maximum targeted bio-productions with maximum growth rates. However, it has been shown that knockout mutants in fact reach the steady states with the minimization of metabolic adjustment (MOMA) from the corresponding wild-type strains instead of having maximal growth rates after genetic or metabolic intervention. In this work, we propose a new bi-level computational framework--MOMAKnock--which can derive robust knockout strategies under the MOMA flux distribution approximation. Methods: In this new bi-level optimization framework, we aim to maximize the production of targeted chemicals by identifying candidate knockout genes or reactions under phenotypic constraints approximated by the MOMA assumption. Hence, the targeted chemical production is the primary objective of MOMAKnock while the MOMA assumption is formulated as the inner problem of constraining the knockout metabolic flux to be as close as possible to the steady-state phenotypes of wild-type strains. As this new inner problem becomes a quadratic programming problem, a novel adaptive piecewise linearization algorithm is developed in this paper to obtain the exact optimal solution to this new bi-level integer quadratic programming problem for MOMAKnock. Results: Our new MOMAKnock model and the adaptive piecewise linearization solution algorithm are tested with a small E. coli core metabolic network and a large-scale iAF1260 E. coli metabolic network. The derived knockout strategies are compared with those from OptKnock. Our preliminary experimental results show that MOMAKnock can provide improved targeted productions with more robust knockout strategies. PMID:23368729
Ren, Shaogang; Zeng, Bo; Qian, Xiaoning
2013-01-01
Optimization procedures to identify gene knockouts for targeted biochemical overproduction have been widely in use in modern metabolic engineering. Flux balance analysis (FBA) framework has provided conceptual simplifications for genome-scale dynamic analysis at steady states. Based on FBA, many current optimization methods for targeted bio-productions have been developed under the maximum cell growth assumption. The optimization problem to derive gene knockout strategies recently has been formulated as a bi-level programming problem in OptKnock for maximum targeted bio-productions with maximum growth rates. However, it has been shown that knockout mutants in fact reach the steady states with the minimization of metabolic adjustment (MOMA) from the corresponding wild-type strains instead of having maximal growth rates after genetic or metabolic intervention. In this work, we propose a new bi-level computational framework--MOMAKnock--which can derive robust knockout strategies under the MOMA flux distribution approximation. In this new bi-level optimization framework, we aim to maximize the production of targeted chemicals by identifying candidate knockout genes or reactions under phenotypic constraints approximated by the MOMA assumption. Hence, the targeted chemical production is the primary objective of MOMAKnock while the MOMA assumption is formulated as the inner problem of constraining the knockout metabolic flux to be as close as possible to the steady-state phenotypes of wide-type strains. As this new inner problem becomes a quadratic programming problem, a novel adaptive piecewise linearization algorithm is developed in this paper to obtain the exact optimal solution to this new bi-level integer quadratic programming problem for MOMAKnock. Our new MOMAKnock model and the adaptive piecewise linearization solution algorithm are tested with a small E. coli core metabolic network and a large-scale iAF1260 E. coli metabolic network. The derived knockout strategies are compared with those from OptKnock. Our preliminary experimental results show that MOMAKnock can provide improved targeted productions with more robust knockout strategies.
Engineering calculations for communications satellite systems planning
NASA Technical Reports Server (NTRS)
Reilly, C. H.; Levis, C. A.; Mount-Campbell, C.; Gonsalvez, D. J.; Wang, C. W.; Yamamura, Y.
1985-01-01
Computer-based techniques for optimizing communications-satellite orbit and frequency assignments are discussed. A gradient-search code was tested against a BSS scenario derived from the RARC-83 data. Improvement was obtained, but each iteration requires about 50 minutes of IBM-3081 CPU time. Gradient-search experiments on a small FSS test problem, consisting of a single service area served by 8 satellites, showed quickest convergence when the satellites were all initially placed near the center of the available orbital arc with moderate spacing. A transformation technique is proposed for investigating the surface topography of the objective function used in the gradient-search method. A new synthesis approach is based on transforming single-entry interference constraints into corresponding constraints on satellite spacings. These constraints are used with linear objective functions to formulate the co-channel orbital assignment task as a linear-programming (LP) problem or mixed integer programming (MIP) problem. Globally optimal solutions are always found with the MIP problems, but not necessarily with the LP problems. The MIP solutions can be used to evaluate the quality of the LP solutions. The initial results are very encouraging.
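As a rough illustration of how spacing constraints and a linear objective can be combined into an LP of the kind described above (a generic sketch, not the RARC-83 or FSS formulation; the arc limits, satellite count, and the use of scipy.optimize.linprog are all illustrative assumptions), consider maximizing the smallest adjacent spacing of n satellites on a single orbital arc:

import numpy as np
from scipy.optimize import linprog

n = 8                            # satellites serving one area (illustrative)
arc_lo, arc_hi = -30.0, 30.0     # available orbital arc in degrees (assumed)

# Variables: x[0..n-1] = orbital positions, x[n] = s = minimum adjacent spacing.
# Maximize s  <=>  minimize -s.
c = np.zeros(n + 1)
c[-1] = -1.0

# Ordering/spacing constraints: x[i] - x[i+1] + s <= 0 for i = 0..n-2,
# a stand-in for single-entry interference limits expressed as spacings.
A_ub = np.zeros((n - 1, n + 1))
for i in range(n - 1):
    A_ub[i, i] = 1.0
    A_ub[i, i + 1] = -1.0
    A_ub[i, -1] = 1.0
b_ub = np.zeros(n - 1)

bounds = [(arc_lo, arc_hi)] * n + [(0.0, None)]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
positions, s_opt = res.x[:n], res.x[-1]
print("positions (deg):", np.round(positions, 2), " min spacing (deg):", round(s_opt, 2))

Adding integer assignment variables (e.g., channel choices) would turn this LP into a MIP of the kind the abstract contrasts against the LP relaxation.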
Multidisciplinary design integration system for a supersonic transport aircraft
NASA Technical Reports Server (NTRS)
Dovi, A. R.; Wrenn, G. A.; Barthelemy, J.-F. M.; Coen, P. G.; Hall, L. E.
1992-01-01
An aircraft preliminary design system which provides the multidisciplinary communications and couplings between several engineering disciplines is described. A primary benefit of this system is to demonstrate advanced technology multidisciplinary design integration methodologies. The current version includes the disciplines of aerodynamics and structures. Contributing engineering disciplines are coupled using the Global Sensitivity Equation approach to influence the global design optimization problem. A high speed civil transport configuration is used for configuration trade studies. Forty-four independent design variables are used to control the cross-sectional areas of wing rib and spar caps and the thicknesses of wing skin cover panels. A total of 300 stress, strain, buckling and displacement behavioral constraints and minimum gages on the design variables were used to optimize the idealized wing structure. The goal of the design is to resize the wing cover panels and internal structure for minimum mass.
An application of object-oriented knowledge representation to engineering expert systems
NASA Technical Reports Server (NTRS)
Logie, D. S.; Kamil, H.; Umaretiya, J. R.
1990-01-01
The paper describes an object-oriented knowledge representation and its application to engineering expert systems. The object-oriented approach promotes efficient handling of the problem data by allowing knowledge to be encapsulated in objects and organized by defining relationships between the objects. An Object Representation Language (ORL) was implemented as a tool for building and manipulating the object base. Rule-based knowledge representation is then used to simulate engineering design reasoning. Using a common object base, very large expert systems can be developed, comprised of small, individually processed, rule sets. The integration of these two schemes makes it easier to develop practical engineering expert systems. The general approach to applying this technology to the domain of the finite element analysis, design, and optimization of aerospace structures is discussed.
A reduced successive quadratic programming strategy for errors-in-variables estimation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tjoa, I.-B.; Biegler, L. T.; Carnegie-Mellon Univ.
Parameter estimation problems in process engineering represent a special class of nonlinear optimization problems, because the maximum likelihood structure of the objective function can be exploited. Within this class, the errors in variables method (EVM) is particularly interesting. Here we seek a weighted least-squares fit to the measurements with an underdetermined process model. Thus, both the number of variables and degrees of freedom available for optimization increase linearly with the number of data sets. Large optimization problems of this type can be particularly challenging and expensive to solve because, for general-purpose nonlinear programming (NLP) algorithms, the computational effort increases at least quadratically with problem size. In this study we develop a tailored NLP strategy for EVM problems. The method is based on a reduced Hessian approach to successive quadratic programming (SQP), but with the decomposition performed separately for each data set. This leads to the elimination of all variables but the model parameters, which are determined by a QP coordination step. In this way the computational effort remains linear in the number of data sets. Moreover, unlike previous approaches to the EVM problem, global and superlinear properties of the SQP algorithm apply naturally. Also, the method directly incorporates inequality constraints on the model parameters (although not on the fitted variables). This approach is demonstrated on five example problems with up to 102 degrees of freedom. Compared to general-purpose NLP algorithms, large improvements in computational performance are observed.
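As a rough sketch of the problem class described above (generic EVM notation, not the authors' exact formulation), the errors-in-variables estimation over N data sets can be written as

\min_{\theta,\, z_1,\dots,z_N} \; \sum_{i=1}^{N} (y_i - z_i)^{\top} W_i \,(y_i - z_i)
\quad \text{subject to} \quad f(z_i,\theta) = 0, \;\; i = 1,\dots,N, \qquad \theta_{\mathrm{L}} \le \theta \le \theta_{\mathrm{U}},

where y_i are the measurements of data set i, z_i the corresponding fitted variables, W_i the weighting matrices, and theta the shared model parameters. Because each z_i appears only in its own block of constraints, a reduced SQP decomposition can eliminate the z_i data set by data set, leaving a small coordination QP in theta alone, which is why the computational effort grows only linearly with N.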
Sustainable biorefining in wastewater by engineered extreme alkaliphile Bacillus marmarensis
Wernick, David G.; Pontrelli, Sammy P.; Pollock, Alexander W.; Liao, James C.
2016-01-01
Contamination susceptibility, water usage, and inability to utilize 5-carbon sugars and disaccharides are among the major obstacles in industrialization of sustainable biorefining. Extremophilic thermophiles and acidophiles are being researched to combat these problems, but organisms which answer all the above problems have yet to emerge. Here, we present engineering of the unexplored, extreme alkaliphile Bacillus marmarensis as a platform for new bioprocesses which meet all these challenges. With a newly developed transformation protocol and genetic tools, along with optimized RBSs and antisense RNA, we engineered B. marmarensis to produce ethanol at titers of 38 g/l and 65% yields from glucose in unsterilized media. Furthermore, ethanol titers and yields of 12 g/l and 50%, respectively, were produced from cellobiose and xylose in unsterilized seawater and algal-contaminated wastewater. As such, B. marmarensis presents a promising approach for the contamination-resistant biorefining of a wide range of carbohydrates in unsterilized, non-potable seawater. PMID:26831574
NASA Astrophysics Data System (ADS)
Takemiya, Tetsushi
In modern aerospace engineering, the physics-based computational design method is becoming more important, as it is more efficient than experiments and because it is more suitable for designing new types of aircraft (e.g., unmanned aerial vehicles or supersonic business jets) than the conventional design method, which heavily relies on historical data. To enhance the reliability of the physics-based computational design method, researchers have made tremendous efforts to improve the fidelity of models. However, high-fidelity models require longer computational time, so the advantage of efficiency is partially lost. This problem has been overcome with the development of variable fidelity optimization (VFO). In VFO, different fidelity models are simultaneously employed in order to improve the speed and the accuracy of convergence in an optimization process. Among the various types of VFO methods, one of the most promising methods is the approximation management framework (AMF). In the AMF, objective and constraint functions of a low-fidelity model are scaled at a design point so that the scaled functions, which are referred to as "surrogate functions," match those of a high-fidelity model. Since the scaling functions and the low-fidelity model constitute the surrogate functions, evaluating the surrogate functions is faster than evaluating the high-fidelity model. Therefore, in the optimization process, in which gradient-based optimization is implemented and thus many function calls are required, the surrogate functions are used instead of the high-fidelity model to obtain a new design point. The best feature of the AMF is that it may converge to a local optimum of the high-fidelity model in much less computational time than optimization with the high-fidelity model alone. However, through literature surveys and implementations of the AMF, the author found that (1) the AMF is very vulnerable when the computational analysis models have numerical noise, which is very common in high-fidelity models, and that (2) the AMF terminates optimization erroneously when the optimization problems have constraints. The first problem is due to inaccuracy in computing derivatives in the AMF, and the second problem is due to erroneous treatment of the trust region ratio, which sets the size of the domain for an optimization in the AMF. In order to solve the first problem of the AMF, the automatic differentiation (AD) technique, which reads the codes of analysis models and automatically generates new derivative codes based on some mathematical rules, is applied. If derivatives are computed with the generated derivative code, they are analytical, and the required computational time is independent of the number of design variables, which is very advantageous for realistic aerospace engineering problems. However, if analysis models implement iterative computations such as computational fluid dynamics (CFD), which solves systems of partial differential equations iteratively, computing derivatives through the AD requires a massive memory size. The author solved this deficiency by modifying the AD approach and developing a more efficient implementation with CFD, and successfully applied the AD to general CFD software. In order to solve the second problem of the AMF, the governing equation of the trust region ratio, which is very strict against the violation of constraints, is modified so that it can accept the violation of constraints within some tolerance.
By accepting violations of constraints during the optimization process, the AMF can continue optimization without terminating prematurely and eventually find the true optimum design point. With these modifications, the AMF is referred to as "Robust AMF," and it is applied to airfoil and wing aerodynamic design problems using Euler CFD software. The former problem has 21 design variables, and the latter 64. In both problems, derivatives computed with the proposed AD method are first compared with those computed with the finite-difference (FD) method, and then the Robust AMF is implemented along with the sequential quadratic programming (SQP) optimization method with only high-fidelity models. The proposed AD method computes derivatives more accurately and faster than the FD method, and the Robust AMF successfully optimizes the shapes of the airfoil and the wing in a much shorter time than SQP with only high-fidelity models. These results clearly show the effectiveness of the Robust AMF. Finally, the feasibility of reducing computational time for calculating derivatives and the necessity of an AMF with the optimum design point always in the feasible region are discussed as future work.
Zijah, Vahid; Salehi, Roya; Aghazadeh, Marziyeh; Samiei, Mohammad; Alizadeh, Effat; Davaran, Soodabeh
2017-06-01
Tissue engineering has emerged as a potential therapeutic option for dental problems in recent years. One of the strategies in tissue engineering is to use both scaffolds and additive factors for enhancing cell responses. This study aims to evaluate and compare the effect of three types of biofactors on a poly-caprolactone-poly-ethylene glycol-poly-caprolactone (PCL-PEG-PCL) nanofibrous scaffold for human dental pulp stem cell (hDPSC) engineering. The PCL-PEG-PCL copolymer was synthesized by the ring-opening polymerization method, and its nanofiber scaffold was prepared by electrospinning. Nanofibrous scaffold-seeded hDPSCs were treated with sodium fluoride (NaF), melanocyte-stimulating hormone (MSH), or simvastatin (SIM). Non-treated nanofiber-seeded cells were used as the control. The viability, biocompatibility, adhesion, proliferation rate, morphology, osteo/odontogenic potential, and the expression of tissue-specific genes were studied. The results demonstrated significantly higher adhesive behavior, viability, alizarin red activity, and dentin-specific gene expression in MSH- and SIM-treated cells (p < 0.05). This study is unique in that it compares the effects of different treatments for the optimization of dental tissue engineering.
Co-Optimization of Fuels and Engines | Transportation Research | NREL
NREL works with eight other national laboratories and industry on the Co-Optimization of Fuels & Engines (Co-Optima) initiative; the page summarizes the research activities and accomplishments of the Co-Optima initiative.
CSOLNP: Numerical Optimization Engine for Solving Non-linearly Constrained Problems.
Zahery, Mahsa; Maes, Hermine H; Neale, Michael C
2017-08-01
We introduce the optimizer CSOLNP, which is a C++ implementation of the R package RSOLNP (Ghalanos & Theussl, 2012, Rsolnp: General non-linear optimization using augmented Lagrange multiplier method. R package version, 1) alongside some improvements. CSOLNP solves non-linearly constrained optimization problems using a Sequential Quadratic Programming (SQP) algorithm. CSOLNP, NPSOL (a very popular implementation of the SQP method in FORTRAN (Gill et al., 1986, User's guide for NPSOL (version 4.0): A Fortran package for nonlinear programming (No. SOL-86-2). Stanford, CA: Stanford University Systems Optimization Laboratory)), and SLSQP (another SQP implementation available as part of the NLOPT collection (Johnson, 2014, The NLopt nonlinear-optimization package. Retrieved from http://ab-initio.mit.edu/nlopt)) are three optimizers available in the OpenMx package. These optimizers are compared in terms of runtimes, final objective values, and memory consumption. A Monte Carlo analysis of the performance of the optimizers was performed on ordinal and continuous models with five variables and one or two factors. While the relative difference between the objective values is less than 0.5%, CSOLNP is in general faster than NPSOL and SLSQP for ordinal analysis. As for continuous data, none of the optimizers performs consistently faster than the others. In terms of memory usage, we used Valgrind's heap profiler tool, called Massif, on one-factor threshold models. CSOLNP and NPSOL consume the same amount of memory, while SLSQP uses 71 MB more memory than the other two optimizers.
Quantum speedup in solving the maximal-clique problem
NASA Astrophysics Data System (ADS)
Chang, Weng-Long; Yu, Qi; Li, Zhaokai; Chen, Jiahui; Peng, Xinhua; Feng, Mang
2018-03-01
The maximal-clique problem, to find the maximally sized clique in a given graph, is classically an NP-complete computational problem, which has potential applications ranging from electrical engineering, computational chemistry, and bioinformatics to social networks. Here we develop a quantum algorithm to solve the maximal-clique problem for any graph G with n vertices with quadratic speedup over its classical counterparts, where the time and spatial complexities are reduced to O(√(2^n)) and O(n^2), respectively. With respect to oracle-related quantum algorithms for the NP-complete problems, we identify our algorithm as optimal. To justify the feasibility of the proposed quantum algorithm, we successfully solve a typical clique problem for a graph G with two vertices and one edge by carrying out a nuclear magnetic resonance experiment involving four qubits.
NASA Astrophysics Data System (ADS)
Moghaddam, Kamran S.; Usher, John S.
2011-07-01
In this article, a new multi-objective optimization model is developed to determine the optimal preventive maintenance and replacement schedules in a repairable and maintainable multi-component system. In this model, the planning horizon is divided into discrete and equally-sized periods in which three possible actions must be planned for each component, namely maintenance, replacement, or do nothing. The objective is to determine a plan of actions for each component in the system while minimizing the total cost and maximizing overall system reliability simultaneously over the planning horizon. Because of the complex, combinatorial, and highly nonlinear structure of the mathematical model, two metaheuristic solution methods, a generational genetic algorithm and simulated annealing, are applied to tackle the problem. The Pareto optimal solutions that provide good tradeoffs between the total cost and the overall reliability of the system can be obtained by the solution approach. Such a modeling approach should be useful for maintenance planners and engineers tasked with the problem of developing recommended maintenance plans for complex systems of components.
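One reusable building block behind the Pareto-based comparison described above is the non-dominated filter itself. A short sketch is given below; the random cost/reliability pairs stand in for evaluated maintenance plans and are purely illustrative, not the article's data or model.

import numpy as np

def pareto_front(points):
    """Return indices of non-dominated points when every objective is minimized
    (here: total cost and 1 - overall reliability)."""
    pts = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(pts):
        # p is dominated if some other point is no worse in all objectives
        # and strictly better in at least one.
        dominated = np.any(np.all(pts <= p, axis=1) & np.any(pts < p, axis=1))
        if not dominated:
            keep.append(i)
    return keep

# Illustrative evaluated plans: (total cost, 1 - overall reliability), both minimized.
rng = np.random.default_rng(0)
plans = np.column_stack([rng.uniform(100, 500, 50), rng.uniform(0.0, 0.3, 50)])
print("non-dominated plan indices:", pareto_front(plans))

In a GA or simulated annealing wrapper, this filter would be applied to the archive of evaluated action plans to report the cost-reliability trade-off curve.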
Turbomachinery Airfoil Design Optimization Using Differential Evolution
NASA Technical Reports Server (NTRS)
Madavan, Nateri K.; Biegel, Bryan A. (Technical Monitor)
2002-01-01
An aerodynamic design optimization procedure that is based on an evolutionary algorithm known as Differential Evolution is described. Differential Evolution is a simple, fast, and robust evolutionary strategy that has been proven effective in determining the global optimum for several difficult optimization problems, including highly nonlinear systems with discontinuities and multiple local optima. The method is combined with a Navier-Stokes solver that evaluates the various intermediate designs and provides inputs to the optimization procedure. An efficient constraint handling mechanism is also incorporated. Results are presented for the inverse design of a turbine airfoil from a modern jet engine. The capability of the method to search large design spaces and obtain the optimal airfoils in an automatic fashion is demonstrated. Substantial reductions in the overall computing time requirements are achieved by using the algorithm in conjunction with neural networks.
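Since this abstract (and the GDE3 summary earlier in this collection) leans on the classical DE/rand/1/bin operator, a minimal sketch of that operator may help. The sphere objective, population size, and the control parameters F and CR below are illustrative defaults, not the settings used for the airfoil design, and no flow solver is involved.

import numpy as np

def de_rand_1_bin(f, bounds, pop_size=30, F=0.8, CR=0.9, gens=200, seed=0):
    """Minimal DE/rand/1/bin: mutation from three distinct random vectors,
    binomial crossover, greedy one-to-one selection."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = len(lo)
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.array([f(x) for x in pop])
    for _ in range(gens):
        for i in range(pop_size):
            # pick r1, r2, r3 distinct from i and from each other
            idx = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
            r1, r2, r3 = pop[idx]
            mutant = np.clip(r1 + F * (r2 - r3), lo, hi)
            # binomial crossover, forcing at least one mutant component
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True
            trial = np.where(cross, mutant, pop[i])
            f_trial = f(trial)
            if f_trial <= fit[i]:          # greedy selection
                pop[i], fit[i] = trial, f_trial
    best = int(np.argmin(fit))
    return pop[best], fit[best]

# Illustrative use on a 5-dimensional sphere function.
bounds = np.array([[-5.0, 5.0]] * 5)
x_best, f_best = de_rand_1_bin(lambda x: float(np.sum(x**2)), bounds)
print(x_best, f_best)

In a design setting, f would wrap the expensive solver (or a neural-network surrogate of it, as the abstract suggests) and return the penalized objective.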
NASA Technical Reports Server (NTRS)
Thareja, R.; Haftka, R. T.
1986-01-01
There has been recent interest in multidisciplinary multilevel optimization applied to large engineering systems. The usual approach is to divide the system into a hierarchy of subsystems with ever increasing detail in the analysis focus. Equality constraints are usually placed on various design quantities at every successive level to ensure consistency between levels. In many previous applications these equality constraints were eliminated by reducing the number of design variables. In complex systems this may not be possible and these equality constraints may have to be retained in the optimization process. In this paper the impact of such a retention is examined for a simple portal frame problem. It is shown that the equality constraints introduce numerical difficulties, and that the numerical solution becomes very sensitive to optimization parameters for a wide range of optimization algorithms.
NASA Astrophysics Data System (ADS)
Yoshida, Hiroaki; Yamaguchi, Katsuhito; Ishikawa, Yoshio
Conventional optimization methods are based on a deterministic approach, since their purpose is to find an exact solution. However, these methods depend on the initial conditions and risk falling into local solutions. In this paper, we propose a new optimization method based on the concept of the path integral method used in quantum mechanics. The method obtains a solution as an expected value (stochastic average) using a stochastic process. The advantages of this method are that it is not affected by initial conditions and does not require experience-based tuning. We applied the new optimization method to the design of a hang glider. In this problem, not only the hang glider design but also its flight trajectory was optimized. The numerical calculation results showed that the method performs adequately.
A real-time guidance algorithm for aerospace plane optimal ascent to low earth orbit
NASA Technical Reports Server (NTRS)
Calise, A. J.; Flandro, G. A.; Corban, J. E.
1989-01-01
Problems of onboard trajectory optimization and synthesis of suitable guidance laws for ascent to low Earth orbit of an air-breathing, single-stage-to-orbit vehicle are addressed. A multimode propulsion system is assumed which incorporates turbojet, ramjet, scramjet, and rocket engines. An algorithm for generating fuel-optimal climb profiles is presented. This algorithm results from the application of the minimum principle to a low-order dynamic model that includes angle-of-attack effects and the normal component of thrust. Maximum dynamic pressure and maximum aerodynamic heating rate constraints are considered. Switching conditions are derived which, under appropriate assumptions, govern optimal transition from one propulsion mode to another. A nonlinear transformation technique is employed to derive a feedback controller for tracking the computed trajectory. Numerical results illustrate the nature of the resulting fuel-optimal climb paths.
A State-Space Approach to Optimal Level-Crossing Prediction for Linear Gaussian Processes
NASA Technical Reports Server (NTRS)
Martin, Rodney Alexander
2009-01-01
In many complex engineered systems, the ability to give an alarm prior to impending critical events is of great importance. These critical events may have varying degrees of severity, and in fact they may occur during normal system operation. In this article, we investigate approximations to theoretically optimal methods of designing alarm systems for the prediction of level-crossings by a zero-mean stationary linear dynamic system driven by Gaussian noise. An optimal alarm system is designed to elicit the fewest false alarms for a fixed detection probability. This work introduces the use of Kalman filtering in tandem with the optimal level-crossing problem. It is shown that there is a negligible loss in overall accuracy when using approximations to the theoretically optimal predictor, at the advantage of greatly reduced computational complexity.
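The pairing of a Kalman predictor with a level-crossing alarm can be illustrated on a toy scalar AR(1) model; the model parameters, the critical level, and the simple rule of alarming when the predicted exceedance probability passes a threshold are illustrative simplifications, not the optimal alarm design analyzed in the article.

import numpy as np
from scipy.stats import norm

# Toy scalar system x_{k+1} = a x_k + w_k, measurement y_k = x_k + v_k (assumed).
a, q, r = 0.95, 0.5, 1.0           # dynamics, process and measurement noise variances
level, p_alarm = 3.0, 0.5          # critical level and alarm threshold (illustrative)

rng = np.random.default_rng(0)
steps = 300
x_true = np.zeros(steps)
for k in range(1, steps):
    x_true[k] = a * x_true[k - 1] + rng.normal(scale=np.sqrt(q))
y = x_true + rng.normal(scale=np.sqrt(r), size=steps)

# Kalman filter plus one-step-ahead prediction of the level-crossing probability.
x_hat, P = 0.0, 1.0
alarms = 0
for k in range(steps):
    # measurement update
    K = P / (P + r)
    x_hat += K * (y[k] - x_hat)
    P *= (1.0 - K)
    # one-step-ahead prediction and its uncertainty
    x_pred = a * x_hat
    P_pred = a * a * P + q
    # probability that the next state exceeds the critical level
    p_exceed = norm.sf(level, loc=x_pred, scale=np.sqrt(P_pred))
    if p_exceed > p_alarm:
        alarms += 1
print("alarms raised:", alarms)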
Adaptive critic learning techniques for engine torque and air-fuel ratio control.
Liu, Derong; Javaherian, Hossein; Kovalenko, Olesia; Huang, Ting
2008-08-01
A new approach for engine calibration and control is proposed. In this paper, we present our research results on the implementation of adaptive critic designs for self-learning control of automotive engines. A class of adaptive critic designs that can be classified as (model-free) action-dependent heuristic dynamic programming is used in this research project. The goals of the present learning control design for automotive engines include improved performance, reduced emissions, and maintained optimum performance under various operating conditions. Using the data from a test vehicle with a V8 engine, we developed a neural network model of the engine and neural network controllers based on the idea of approximate dynamic programming to achieve optimal control. We have developed and simulated self-learning neural network controllers for both engine torque (TRQ) and exhaust air-fuel ratio (AFR) control. The goal of TRQ control and AFR control is to track the commanded values. For both control problems, excellent neural network controller transient performance has been achieved.
A modified conjugate gradient coefficient with inexact line search for unconstrained optimization
NASA Astrophysics Data System (ADS)
Aini, Nurul; Rivaie, Mohd; Mamat, Mustafa
2016-11-01
The conjugate gradient (CG) method is a line search algorithm mostly known for its wide application in solving unconstrained optimization problems. Its low memory requirements and global convergence properties make it one of the most preferred methods in real-life applications such as engineering and business. In this paper, we present a new CG method based on the AMR* and CD methods for solving unconstrained optimization problems. The resulting algorithm is proven to have both the sufficient descent and global convergence properties under inexact line search. Numerical tests are conducted to assess the effectiveness of the new method in comparison to some previous CG methods. The results obtained indicate that our method is indeed superior.
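For context, a bare-bones nonlinear CG loop with the classical conjugate descent (CD) coefficient and a backtracking Armijo inexact line search is sketched below. The paper's AMR*-based hybrid coefficient is not reproduced; the test function, steepest-descent restart safeguard, and parameter choices are illustrative assumptions.

import numpy as np

def cg_cd(f, grad, x0, tol=1e-6, max_iter=500):
    """Nonlinear CG with the CD coefficient beta = ||g_new||^2 / (-d.g_old)
    and a backtracking Armijo line search (illustrative settings)."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # Armijo backtracking: accept t when f(x + t d) <= f(x) + c1 t g.d
        t, c1 = 1.0, 1e-4
        while f(x + t * d) > f(x) + c1 * t * g.dot(d) and t > 1e-12:
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        beta = g_new.dot(g_new) / (-d.dot(g))    # CD formula; denominator > 0 for descent d
        d_new = -g_new + beta * d
        if d_new.dot(g_new) >= 0:                # safeguard: restart with steepest descent
            d_new = -g_new
        x, g, d = x_new, g_new, d_new
    return x

# Illustrative use on a simple quadratic test function.
f = lambda x: (x[0] - 1)**2 + 10 * (x[1] + 2)**2
grad = lambda x: np.array([2 * (x[0] - 1), 20 * (x[1] + 2)])
print(cg_cd(f, grad, [0.0, 0.0]))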
DE and NLP Based QPLS Algorithm
NASA Astrophysics Data System (ADS)
Yu, Xiaodong; Huang, Dexian; Wang, Xiong; Liu, Bo
As a novel evolutionary computing technique, Differential Evolution (DE) has been considered an effective optimization method for complex optimization problems and has achieved many successful applications in engineering. In this paper, a new algorithm for Quadratic Partial Least Squares (QPLS) based on Nonlinear Programming (NLP) is presented, in which DE is used to solve the NLP so as to calculate the optimal input weights and the parameters of the inner relationship. Simulation results based on the soft measurement of the diesel oil solidifying point on a real crude distillation unit demonstrate the superiority of the proposed algorithm over linear PLS and SQP-based QPLS in terms of fitting accuracy and computational cost.
An optimal control method for fluid structure interaction systems via adjoint boundary pressure
NASA Astrophysics Data System (ADS)
Chirco, L.; Da Vià, R.; Manservisi, S.
2017-11-01
In recent years, in spite of the computational complexity, fluid-structure interaction (FSI) problems have been widely studied due to their applicability in science and engineering. Fluid-structure interaction systems consist of one or more solid structures that deform by interacting with a surrounding fluid flow. FSI simulations evaluate the tensional state of the mechanical component and take into account the effects of the solid deformations on the motion of the interior fluids. The inverse FSI problem can be described as the achievement of a certain objective by changing some design parameters such as forces, boundary conditions and geometrical domain shapes. In this paper we study the inverse FSI problem by using an optimal control approach. In particular we propose a pressure boundary optimal control method based on Lagrange multipliers and adjoint variables. The objective is the minimization of a solid domain displacement matching functional obtained by finding the optimal pressure on the inlet boundary. The optimality system is derived from the first-order necessary conditions by taking the Fréchet derivatives of the Lagrangian with respect to all the variables involved. The optimal solution is then obtained through a standard steepest descent algorithm applied to the optimality system. The approach presented in this work is general and could be used to assess other objective functionals and controls. In order to support the proposed approach we perform a few numerical tests where the fluid pressure on the domain inlet controls the displacement that occurs in a well-defined region of the solid domain.
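The forward-solve / adjoint-solve / descent-update loop described above can be illustrated on a much simpler discrete analogue; the linear state equation, random matrices, Tikhonov term, and exact line search below are stand-ins for the FSI optimality system, not the authors' model.

import numpy as np

# Discrete analogue of an adjoint-based optimal control loop (illustrative):
# state equation A y = B u, objective J(u) = 0.5||C y - y_t||^2 + 0.5*alpha*||u||^2.
rng = np.random.default_rng(1)
n_state, n_ctrl, n_obs = 20, 3, 5
A = np.eye(n_state) + 0.1 * rng.standard_normal((n_state, n_state))
B = rng.standard_normal((n_state, n_ctrl))
C = rng.standard_normal((n_obs, n_state))
y_t = rng.standard_normal(n_obs)      # target "displacement" to be matched (stand-in)
alpha = 1e-3                          # regularization weight on the control

u = np.zeros(n_ctrl)
for _ in range(200):
    y = np.linalg.solve(A, B @ u)                        # forward (state) solve
    lam = np.linalg.solve(A.T, C.T @ (C @ y - y_t))      # adjoint solve
    g = B.T @ lam + alpha * u                            # gradient dJ/du from the adjoint
    if np.linalg.norm(g) < 1e-10:
        break
    # exact line search along -g (J is quadratic in u for this linear analogue)
    w = np.linalg.solve(A, B @ g)
    Hg = B.T @ np.linalg.solve(A.T, C.T @ (C @ w)) + alpha * g
    u -= (g @ g) / (g @ Hg) * g
print("final misfit:", float(np.linalg.norm(C @ np.linalg.solve(A, B @ u) - y_t)))

In the PDE setting the two linear solves become the FSI forward problem and its adjoint, and the control u becomes the inlet pressure.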
A predictive machine learning approach for microstructure optimization and materials design
NASA Astrophysics Data System (ADS)
Liu, Ruoqian; Kumar, Abhishek; Chen, Zhengzhang; Agrawal, Ankit; Sundararaghavan, Veera; Choudhary, Alok
2015-06-01
This paper addresses an important materials engineering question: How can one identify the complete space (or as much of it as possible) of microstructures that are theoretically predicted to yield the desired combination of properties demanded by a selected application? We present a problem involving design of magnetoelastic Fe-Ga alloy microstructure for enhanced elastic, plastic and magnetostrictive properties. While theoretical models for computing properties given the microstructure are known for this alloy, inversion of these relationships to obtain microstructures that lead to desired properties is challenging, primarily due to the high dimensionality of microstructure space, multi-objective design requirement and non-uniqueness of solutions. These challenges render traditional search-based optimization methods incompetent in terms of both searching efficiency and result optimality. In this paper, a route to address these challenges using a machine learning methodology is proposed. A systematic framework consisting of random data generation, feature selection and classification algorithms is developed. Experiments with five design problems that involve identification of microstructures that satisfy both linear and nonlinear property constraints show that our framework outperforms traditional optimization methods with the average running time reduced by as much as 80% and with optimality that would not be achieved otherwise.
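A toy version of the random-generation / feature-selection / classification loop described above might look like the following; the synthetic "microstructure descriptors," the hypothetical property model used to label samples, and the particular scikit-learn estimators are stand-ins, not the authors' models or data.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# 1) Random data generation: synthetic descriptors and an assumed property model
#    that labels which samples meet the design target.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(2000, 30))
prop = 2.0 * X[:, 0] - 1.5 * X[:, 3] + 0.5 * X[:, 7] ** 2   # hypothetical property
y = (prop > np.median(prop)).astype(int)                    # 1 = meets the target

# 2) Feature selection and 3) classification, mirroring the framework's stages.
model = make_pipeline(SelectKBest(f_classif, k=5),
                      RandomForestClassifier(n_estimators=200, random_state=0))
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model.fit(X_tr, y_tr)
print("held-out accuracy:", round(model.score(X_te, y_te), 3))

# The trained classifier then screens new random candidates for ones predicted
# to satisfy the property constraints (the inverse-design step).
candidates = rng.uniform(0.0, 1.0, size=(10000, 30))
hits = candidates[model.predict(candidates) == 1]
print("candidates predicted feasible:", len(hits))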
Carnot cycle at finite power: attainability of maximal efficiency.
Allahverdyan, Armen E; Hovhannisyan, Karen V; Melkikh, Alexey V; Gevorkian, Sasun G
2013-08-02
We want to understand whether and to what extent the maximal (Carnot) efficiency for heat engines can be reached at a finite power. To this end we generalize the Carnot cycle so that it is not restricted to slow processes. We show that for realistic (i.e., not purposefully designed) engine-bath interactions, the work-optimal engine performing the generalized cycle close to the maximal efficiency has a long cycle time and hence vanishing power. This aspect is shown to relate to the theory of computational complexity. A physical manifestation of the same effect is Levinthal's paradox in the protein folding problem. The resolution of this paradox for realistic proteins allows one to construct engines that can extract, at finite power, 40% of the maximally possible work while reaching 90% of the maximal efficiency. For purposefully designed engine-bath interactions, the Carnot efficiency is achievable at a large power.
Service Modeling for Service Engineering
NASA Astrophysics Data System (ADS)
Shimomura, Yoshiki; Tomiyama, Tetsuo
Intensification of service and knowledge contents within product life cycles is considered crucial for dematerialization, in particular to design optimal product-service systems from the viewpoint of environmentally conscious design and manufacturing in advanced post-industrial societies. In addition to environmental limitations, we are facing social limitations, which include the limited capacity of markets to accept increasing numbers of mass-produced artifacts, and such environmental and social limitations are restraining economic growth. To address these problems, we need to reconsider the current mass production paradigm and to give products more added value, largely from knowledge and service contents, to compensate for volume reduction under the concept of dematerialization. Namely, dematerialization of products requires enriching their service contents. However, service has mainly been discussed within marketing and has been mostly neglected within traditional engineering. Therefore, we need new engineering methods that look at services, rather than just functions, called "Service Engineering." To establish service engineering, this paper proposes a technique for modeling services.
Expert System for Automated Design Synthesis
NASA Technical Reports Server (NTRS)
Rogers, James L., Jr.; Barthelemy, Jean-Francois M.
1987-01-01
Expert-system computer program EXADS developed to aid users of Automated Design Synthesis (ADS) general-purpose optimization program. EXADS aids engineer in determining best combination based on knowledge of specific problem and expert knowledge stored in knowledge base. Available in two interactive machine versions. IBM PC version (LAR-13687) written in IQ-LISP. DEC VAX version (LAR-13688) written in Franz-LISP.
Problem decomposition by mutual information and force-based clustering
NASA Astrophysics Data System (ADS)
Otero, Richard Edward
The scale of engineering problems has sharply increased over the last twenty years. Larger coupled systems, increasing complexity, and limited resources create a need for methods that automatically decompose problems into manageable sub-problems by discovering and leveraging problem structure. The ability to learn the coupling (inter-dependence) structure and reorganize the original problem could lead to large reductions in the time to analyze complex problems. Such decomposition methods could also provide engineering insight into the fundamental physics driving problem solution. This work advances the current state of the art in engineering decomposition through the application of techniques originally developed within computer science and information theory. The work describes the current state of automatic problem decomposition in engineering and utilizes several promising ideas to advance the state of the practice. Mutual information is a novel metric for data dependence and works on both continuous and discrete data. Mutual information can measure both the linear and non-linear dependence between variables without the limitations of linear dependence measured through covariance. Mutual information is also able to handle data that does not have derivative information, unlike other metrics that require it. The value of mutual information to engineering design work is demonstrated on a planetary entry problem. This study utilizes a novel tool developed in this work for planetary entry system synthesis. A graphical method, force-based clustering, is used to discover related sub-graph structure as a function of problem structure and links ranked by their mutual information. This method does not require the stochastic use of neural networks and could be used with any link ranking method currently utilized in the field. Application of this method is demonstrated on a large, coupled low-thrust trajectory problem. Mutual information also serves as the basis for an alternative global optimizer, called MIMIC, which is unrelated to Genetic Algorithms. The work advances current practice by demonstrating the use of MIMIC as a global method that explicitly models problem structure with mutual information, providing an alternate method for globally searching multi-modal domains. By leveraging discovered problem inter-dependencies, MIMIC may be appropriate for highly coupled problems or those with large function evaluation cost. This work introduces a useful addition to the MIMIC algorithm that enables its use on continuous input variables. By leveraging automatic decision tree generation methods from Machine Learning and a set of randomly generated test problems, decision trees for which method to apply are also created, quantifying decomposition performance over a large region of the design space.
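To make the coupling metric concrete, a histogram-based mutual information estimate between two variables can be computed as below; the binning choice and the sample data are illustrative and are not tied to the planetary entry tool or the trajectory problem described above.

import numpy as np

def mutual_information(x, y, bins=20):
    """Histogram estimate of MI in nats: I(X;Y) = sum p(x,y) log(p(x,y)/(p(x)p(y)))."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# Illustrative check: a purely quadratic dependence is nearly invisible to
# covariance/correlation but clearly visible to mutual information.
rng = np.random.default_rng(0)
x = rng.normal(size=50000)
y = x**2 + 0.1 * rng.normal(size=50000)
z = rng.normal(size=50000)
print("corr(x, y) ~", round(float(np.corrcoef(x, y)[0, 1]), 3))   # near 0
print("MI(x, y)  ~", round(mutual_information(x, y), 3))          # clearly > 0
print("MI(x, z)  ~", round(mutual_information(x, z), 3))          # near 0

In a decomposition setting, such pairwise MI scores would serve as the link weights fed to the force-based clustering step.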
Further Development, Support and Enhancement of CONDUIT
NASA Technical Reports Server (NTRS)
Veronica, Moldoveanu; Levine, William S.
1999-01-01
From the first airplanes steered by handles, wheels, and pedals to today's advanced aircraft, there has been a century of revolutionary inventions, all of them contributing to flight quality. The stability and controllability of aircraft as they appear to a pilot are called flying or handling qualities. Many years after the first airplanes flew, flying qualities were identified and ranked from desirable to unsatisfactory. Later on, engineers developed design methods to satisfy these practical criteria. CONDUIT, which stands for Control Designer's Unified Interface, is a modern software package that provides a methodology for optimization of flight control systems in order to improve the flying qualities. CONDUIT is dependent on an optimization engine called CONSOL-OPTCAD (C-O). C-O performs multicriterion parametric optimization and was successfully tested on a variety of control problems. The optimization-based computational system, C-O, requires a particular control system description as a MATLAB file and can modify the vector of design parameters in an attempt to satisfy the performance objectives and constraints specified by the designer in a C-type file. After the first optimization attempts on the UH-60A control system, an early interface system named GIFCORCODE (Graphical Interface for CONSOL-OPTCAD for Rotorcraft Controller Design) was created.
Systems Engineering News | Wind | NREL
The Wind Plant Optimization and Systems Engineering newsletter covers topics ranging from multidisciplinary design analysis and optimization of wind turbine sub-components to wind plant optimization and uncertainty analysis to concurrent engineering and financial engineering.
Recent Experiences in Multidisciplinary Analysis and Optimization, part 1
NASA Technical Reports Server (NTRS)
Sobieski, J. (Compiler)
1984-01-01
Papers presented at the NASA Symposium on Recent Experiences in Multidisciplinary Analysis and Optimization held at NASA Langley Research Center, Hampton, Virginia April 24 to 26, 1984 are given. The purposes of the symposium were to exchange information about the status of the application of optimization and associated analyses in industry or research laboratories to real life problems and to examine the directions of future developments. Information exchange has encompassed the following: (1) examples of successful applications; (2) attempt and failure examples; (3) identification of potential applications and benefits; (4) synergistic effects of optimized interaction and trade-offs occurring among two or more engineering disciplines and/or subsystems in a system; and (5) traditional organization of a design process as a vehicle for or an impediment to the progress in the design methodology.
Mass-based design and optimization of wave rotors for gas turbine engine enhancement
NASA Astrophysics Data System (ADS)
Chan, S.; Liu, H.
2017-03-01
An analytic method aiming at mass properties was developed for the preliminary design and optimization of wave rotors. In the present method, we introduce the mass balance principle into the design and thus can predict and optimize the mass qualities as well as the performance of wave rotors. A dedicated least-square method with artificial weighting coefficients was developed to solve the over-constrained system in the mass-based design. This method and the adoption of the coefficients were validated by numerical simulation. Moreover, the problem of fresh air exhaustion (FAE) was put forward and analyzed, and exhaust gas recirculation (EGR) was investigated. Parameter analyses and optimization elucidated which designs would not only achieve the best performance, but also operate with minimum EGR and no FAE.
Modified Newton-Raphson GRAPE methods for optimal control of spin systems
NASA Astrophysics Data System (ADS)
Goodwin, D. L.; Kuprov, Ilya
2016-05-01
Quadratic convergence throughout the active space is achieved for the gradient ascent pulse engineering (GRAPE) family of quantum optimal control algorithms. We demonstrate in this communication that the Hessian of the GRAPE fidelity functional is unusually cheap, having the same asymptotic complexity scaling as the functional itself. This leads to the possibility of using very efficient numerical optimization techniques. In particular, the Newton-Raphson method with a rational function optimization (RFO) regularized Hessian is shown in this work to require fewer system trajectory evaluations than any other algorithm in the GRAPE family. This communication describes algebraic and numerical implementation aspects (matrix exponential recycling, Hessian regularization, etc.) for the RFO Newton-Raphson version of GRAPE and reports benchmarks for common spin state control problems in magnetic resonance spectroscopy.
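A compact numerical illustration of an RFO-regularized Newton step is given below, written for minimization of a generic smooth function rather than the GRAPE fidelity ascent; the Rosenbrock test problem and the Armijo backtracking safeguard are illustrative choices, and none of the spin-dynamics machinery (propagators, matrix-exponential recycling) is reproduced.

import numpy as np

def rfo_newton_minimize(f, grad, hess, x0, tol=1e-8, max_iter=200):
    """Newton iteration with a rational function optimization (RFO) shift:
    the shift is the lowest eigenvalue of the bordered matrix [[H, g], [g, 0]],
    which keeps H - lam*I positive definite so every step is a descent step."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        H = hess(x)
        n = len(x)
        aug = np.zeros((n + 1, n + 1))
        aug[:n, :n] = H
        aug[:n, -1] = g
        aug[-1, :n] = g
        lam = np.linalg.eigvalsh(aug)[0]                 # RFO level shift
        p = np.linalg.solve(H - lam * np.eye(n), -g)     # shifted Newton step
        t = 1.0                                          # simple Armijo safeguard
        while f(x + t * p) > f(x) + 1e-4 * t * g.dot(p) and t > 1e-12:
            t *= 0.5
        x = x + t * p
    return x

# Illustrative test on the Rosenbrock function (minimum at [1, 1]).
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                           200 * (x[1] - x[0]**2)])
hess = lambda x: np.array([[2 - 400 * (x[1] - 3 * x[0]**2), -400 * x[0]],
                           [-400 * x[0], 200.0]])
print(rfo_newton_minimize(f, grad, hess, [-1.2, 1.0]))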
Optimal thrust level for orbit insertion
NASA Astrophysics Data System (ADS)
Cerf, Max
2017-07-01
The minimum-fuel orbital transfer is analyzed in the case of a launcher upper stage using a constantly thrusting engine. The thrust level is assumed to be constant and its value is optimized together with the thrust direction. A closed-loop solution for the thrust direction is derived from the extremal analysis for a planar orbital transfer. The optimal control problem reduces to two unknowns, namely the thrust level and the final time. Guessing and propagating the costates is no longer necessary and the optimal trajectory is easily found from a rough initialization. On the other hand the initial costates are assessed analytically from the initial conditions and they can be used as initial guess for transfers at different thrust levels. The method is exemplified on a launcher upper stage targeting a geostationary transfer orbit.
Potential applications of skip SMV with thrust engine
NASA Astrophysics Data System (ADS)
Wang, Weilin; Savvaris, Al
2016-11-01
This paper investigates the potential applications of Space Maneuver Vehicles (SMVs) with skip trajectories. Owing to soaring space operations over the past decades, the risk posed by space debris has considerably increased, including collision risks with space assets, human property on the ground, and even aviation. Many active debris removal methods have been investigated, and in this paper a debris remediation method based on a skip SMV is first proposed. The key point is to perform a controlled re-entry. These vehicles are expected to achieve a trans-atmospheric maneuver with a thrust engine. If debris is released at an altitude below 80 km, the debris can be captured by atmospheric drag and the re-entry interface prediction accuracy is improved. Moreover, if the debris is released in a cargo at a much lower altitude, this technique protects high-value space assets from break-up by the atmosphere and improves landing accuracy. To demonstrate the feasibility of this concept, the present paper presents simulation results for two specific mission profiles: (1) descent to a predetermined altitude; (2) descent to a predetermined point (altitude, longitude and latitude). The evolutionary collocation method is adopted for skip trajectory optimization due to its global optimality and high accuracy. This method is a two-step optimization approach based on a heuristic algorithm and the collocation method. The optimal-control problem is transformed into a nonlinear programming problem (NLP), which can be efficiently and accurately solved by the sequential quadratic programming (SQP) procedure. However, such a method is sensitive to initial values. To reduce this sensitivity, a genetic algorithm (GA) is adopted to refine the grids and provide near-optimum initial values. By comparing the simulation data from different scenarios, it is found that the skip SMV is feasible for active debris removal and the evolutionary collocation method gives a faithful re-entry trajectory that satisfies the path and boundary constraints.
Faruque, Imraan A; Muijres, Florian T; Macfarlane, Kenneth M; Kehlenbeck, Andrew; Humbert, J Sean
2018-06-01
This paper presents "optimal identification," a framework for using experimental data to identify the optimality conditions associated with the feedback control law implemented in the measurements. The technique compares closed loop trajectory measurements against a reduced order model of the open loop dynamics, and uses linear matrix inequalities to solve an inverse optimal control problem as a convex optimization that estimates the controller optimality conditions. In this study, the optimal identification technique is applied to two examples, that of a millimeter-scale micro-quadrotor with an engineered controller on board, and the example of a population of freely flying Drosophila hydei maneuvering about forward flight. The micro-quadrotor results show that the performance indices used to design an optimal flight control law for a micro-quadrotor may be recovered from the closed loop simulated flight trajectories, and the Drosophila results indicate that the combined effect of the insect longitudinal flight control sensing and feedback acts principally to regulate pitch rate.
Portable inference engine: An extended CLIPS for real-time production systems
NASA Technical Reports Server (NTRS)
Le, Thach; Homeier, Peter
1988-01-01
The present C-Language Integrated Production System (CLIPS) architecture has not been optimized to deal with the constraints of real-time production systems. Matching in CLIPS is based on the Rete Net algorithm, whose assumption of working memory stability might fail to be satisfied in a system subject to real-time dataflow. Further, the CLIPS forward-chaining control mechanism with a predefined conflict resolution strategy may not effectively focus the system's attention on situation-dependent current priorities, or appropriately address different kinds of knowledge which might appear in a given application. Portable Inference Engine (PIE) is a production system architecture based on CLIPS which attempts to create a more general tool while addressing the problems of real-time expert systems. Features of the PIE design include a modular knowledge base, a modified Rete Net algorithm, a bi-directional control strategy, and multiple user-defined conflict resolution strategies. Problems associated with real-time applications are analyzed and an explanation is given for how the PIE architecture addresses these problems.
NASA Technical Reports Server (NTRS)
Merrill, W. C.
1978-01-01
The Routh approximation technique for reducing the complexity of system models was applied in the frequency domain to a 16th-order state variable model of the F100 engine and to a 43rd-order transfer function model of a launch vehicle boost pump pressure regulator. The results motivate extending the frequency domain formulation of the Routh method to the time domain in order to handle the state variable formulation directly. The time domain formulation was derived and a characterization that specifies all possible Routh similarity transformations was given. The characterization was computed by solving two eigenvalue-eigenvector problems. The application of the time domain Routh technique to the state variable engine model is described, and some results are given. Additional computational problems are discussed, including an optimization procedure that can improve the approximation accuracy by taking advantage of the transformation characterization.
Passive motion paradigm: an alternative to optimal control.
Mohan, Vishwanathan; Morasso, Pietro
2011-01-01
In recent years, optimal control theory (OCT) has emerged as the leading approach for investigating neural control of movement and motor cognition along two complementary research lines: behavioral neuroscience and humanoid robotics. In both cases, there are general problems that need to be addressed, such as the "degrees of freedom (DoFs) problem," the common core of production, observation, reasoning, and learning of "actions." OCT, directly derived from engineering design techniques of control systems, quantifies task goals as "cost functions" and uses the sophisticated formal tools of optimal control to obtain desired behavior (and predictions). We propose an alternative "softer" approach, the passive motion paradigm (PMP), that we believe is closer to the biomechanics and cybernetics of action. The basic idea is that actions (overt as well as covert) are the consequences of an internal simulation process that "animates" the body schema with the attractor dynamics of force fields induced by the goal and task-specific constraints. This internal simulation offers the brain a way to dynamically link motor redundancy with task-oriented constraints "at runtime," hence solving the "DoFs problem" without explicit kinematic inversion and cost function computation. We argue that the function of such computational machinery is not restricted to shaping motor output during action execution but also provides the self with information on the feasibility, consequence, understanding and meaning of "potential actions." In this sense, taking into account recent developments in neuroscience (motor imagery, simulation theory of covert actions, mirror neuron system) and in embodied robotics, PMP offers a novel framework for understanding motor cognition that goes beyond the engineering control paradigm provided by OCT. Therefore, the paper is at the same time a review of the PMP rationale, as a computational theory, and a perspective presentation of how to develop it for designing better cognitive architectures.
The Topology Optimization Design Research for Aluminum Inner Panel of Automobile Engine Hood
NASA Astrophysics Data System (ADS)
Li, Minhao; Hu, Dongqing; Liu, Xiangzheng; Yuan, Huanquan
2017-11-01
This article discusses topology optimization methods for automobile engine hood design. The aluminum inner panel of the engine hood and the adhesive (mucilage glue) regions are set as the design areas, and the static performances of the engine hood, including modal frequency, lateral stiffness, torsional stiffness and indentation stiffness, are set as the optimization objectives. The topology optimization results for different objective functions are compared and analyzed. Based on the most reasonable topology optimization result, a suitable automobile engine hood design is proposed for further study. Finally, an automobile engine hood that performs well on all of the static performance measures is designed, and a favorable topology optimization method is put forward for discussion.
A single-layer platform for Boolean logic and arithmetic through DNA excision in mammalian cells
Weinberg, Benjamin H.; Hang Pham, N. T.; Caraballo, Leidy D.; Lozanoski, Thomas; Engel, Adrien; Bhatia, Swapnil; Wong, Wilson W.
2017-01-01
Genetic circuits engineered for mammalian cells often require extensive fine-tuning to perform their intended functions. To overcome this problem, we present a generalizable biocomputing platform that can engineer genetic circuits which function in human cells with minimal optimization. We used our Boolean Logic and Arithmetic through DNA Excision (BLADE) platform to build more than 100 multi-input-multi-output circuits. We devised a quantitative metric to evaluate the performance of the circuits in human embryonic kidney and Jurkat T cells. Of 113 circuits analysed, 109 functioned (96.5%) with the correct specified behavior without any optimization. We used our platform to build a three-input, two-output Full Adder and six-input, one-output Boolean Logic Look Up Table. We also used BLADE to design circuits with temporal small molecule-mediated inducible control and circuits that incorporate CRISPR/Cas9 to regulate endogenous mammalian genes. PMID:28346402
Optical circulator analysis and optimization: a mini-project for physical optics
NASA Astrophysics Data System (ADS)
Wan, Zhujun
2017-08-01
One of the mini-projects for the course of physical optics is reported. The project is designed to increase comprehension on the basics and applications of polarized light and birefringent crystal. Firstly, the students are required to analyze the basic principle of an optical circulator based on birefringent crystal. Then, they need to consider the engineering optimization problems. The key tasks include analyzing the polarization transforming unit (composed of a half-waveplate and a Faraday rotator) based on Jones matrix, maximizing the walk-off angle between e-ray and o-ray in birefringent crystal, separating e-ray and o-ray symmetrically, employment of a transformed Wollaston prism for input/output coupling of optical beams to fibers. Three years' practice shows that the project is of moderate difficulty, while it covers most of the related knowledge required for the course and helps to train the engineering thinking.
Reducing neural network training time with parallel processing
NASA Technical Reports Server (NTRS)
Rogers, James L., Jr.; Lamarsh, William J., II
1995-01-01
Obtaining optimal solutions for engineering design problems is often expensive because the process typically requires numerous iterations involving analysis and optimization programs. Previous research has shown that a near optimum solution can be obtained in less time by simulating a slow, expensive analysis with a fast, inexpensive neural network. A new approach has been developed to further reduce this time. This approach decomposes a large neural network into many smaller neural networks that can be trained in parallel. Guidelines are developed to avoid some of the pitfalls when training smaller neural networks in parallel. These guidelines allow the engineer: to determine the number of nodes on the hidden layer of the smaller neural networks; to choose the initial training weights; and to select a network configuration that will capture the interactions among the smaller neural networks. This paper presents results describing how these guidelines are developed.
Particulate emissions from diesel engines: correlation between engine technology and emissions.
Fiebig, Michael; Wiartalla, Andreas; Holderbaum, Bastian; Kiesow, Sebastian
2014-03-07
In the last 30 years, diesel engines have made rapid progress to increased efficiency, environmental protection and comfort for both light- and heavy-duty applications. The technical developments include all issues from fuel to combustion process to exhaust gas aftertreatment. This paper provides a comprehensive summary of the available literature regarding technical developments and their impact on the reduction of pollutant emission. This includes emission legislation, fuel quality, diesel engine- and exhaust gas aftertreatment technologies, as well as particulate composition, with a focus on the mass-related particulate emission of on-road vehicle applications. Diesel engine technologies representative of real-world on-road applications will be highlighted. Internal engine modifications now make it possible to minimize particulate and nitrogen oxide emissions with nearly no reduction in power. Among these modifications are cooled exhaust gas recirculation, optimized injection systems, adapted charging systems and optimized combustion processes with high turbulence. With introduction and optimization of exhaust gas aftertreatment systems, such as the diesel oxidation catalyst and the diesel particulate trap, as well as NOx-reduction systems, pollutant emissions have been significantly decreased. Today, sulfur poisoning of diesel oxidation catalysts is no longer considered a problem due to the low-sulfur fuel used in Europe. In the future, there will be an increased use of bio-fuels, which generally have a positive impact on the particulate emissions and do not increase the particle number emissions. Since the introduction of the EU emissions legislation, all emission limits have been reduced by over 90%. Further steps can be expected in the future. Retrospectively, the particulate emissions of modern diesel engines with respect to quality and quantity cannot be compared with those of older engines. Internal engine modifications lead to a clear reduction of the particulate emissions without a negative impact on the particulate-size distribution towards smaller particles. The residual particles can be trapped in a diesel particulate trap independent of their size or the engine operating mode. The usage of a wall-flow diesel particulate filter leads to an extreme reduction of the emitted particulate mass and number, approaching 100%. A reduced particulate mass emission is always connected to a reduced particle number emission. PMID:24606725
Optimization in the systems engineering process
NASA Technical Reports Server (NTRS)
Lemmerman, Loren A.
1993-01-01
The essential elements of the design process consist of the mission definition phase that provides the system requirements, the conceptual design, the preliminary design and finally the detailed design. Mission definition is performed largely by operations analysts in conjunction with the customer. The result of their study is handed off to the systems engineers for documentation as the systems requirements. The document that provides these requirements is the basis for the further design work of the design engineers at the Lockheed-Georgia Company. The design phase actually begins with conceptual design, which is generally conducted by a small group of engineers using multidisciplinary design programs. Because of the complexity of the design problem, the analyses are relatively simple and generally dependent on parametric analyses of the configuration. The result of this phase is a baseline configuration from which preliminary design may be initiated.
The Contribution of Particle Swarm Optimization to Three-Dimensional Slope Stability Analysis
Kalatehjari, Roohollah; Rashid, Ahmad Safuan A; Ali, Nazri; Hajihassani, Mohsen
2014-01-01
Over the last few years, particle swarm optimization (PSO) has been extensively applied to various geotechnical engineering problems, including slope stability analysis. However, this contribution was limited to two-dimensional (2D) slope stability analysis. This paper applied PSO to the three-dimensional (3D) slope stability problem to determine the critical slip surface (CSS) of soil slopes. A detailed description of the adopted PSO was presented to provide a good basis for further contributions of this technique to the field of 3D slope stability problems. A general rotating ellipsoid shape was introduced as the specific particle for 3D slope stability analysis. A detailed sensitivity analysis was designed and performed to find the optimum values of the parameters of PSO. Example problems were used to evaluate the applicability of PSO in determining the CSS of 3D slopes. The first example presented a comparison between the results of PSO and the PLAXIS 3D finite element software, and the second example compared the ability of PSO to determine the CSS of 3D slopes with other optimization methods from the literature. The results demonstrated the efficiency and effectiveness of PSO in determining the CSS of 3D soil slopes. PMID:24991652
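The entry above applies PSO to 3D slip surfaces; a generic, minimal PSO loop on a simple test function is sketched below to show the velocity and position updates the method relies on. The parameter values and objective are illustrative, not the ones found by the paper's sensitivity analysis.

```python
# Minimal particle swarm optimization sketch (generic, not the 3D slope formulation).
import numpy as np

rng = np.random.default_rng(1)

def objective(x):
    # Placeholder cost; the paper minimizes a factor of safety over ellipsoid parameters.
    return np.sum((x - 0.5) ** 2)

n_particles, dim, iters = 30, 5, 200
w, c1, c2 = 0.7, 1.5, 1.5                      # illustrative inertia/acceleration values

pos = rng.uniform(-1.0, 1.0, (n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([objective(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(iters):
    r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([objective(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print("best point:", gbest, "cost:", objective(gbest))
```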
Engineering calculations for communications systems planning
NASA Technical Reports Server (NTRS)
Levis, C. A.; Martin, C. H.; Wang, C. W.; Gonsalvez, D.
1982-01-01
The single entry interference problem is treated for frequency sharing between the broadcasting satellite and intersatellite services near 23 GHz. It is recommended that very long (more than 120° longitude difference) intersatellite hops be relegated to the unshared portion of the band. When this is done, it is found that suitable orbit assignments can be determined easily with the aid of a set of universal curves. An attempt to develop synthesis procedures for optimally assigning frequencies and orbital slots for the broadcasting satellite service in Region 2 was initiated. Several discrete programming and continuous optimization techniques are discussed.
Evolutionary computing for the design search and optimization of space vehicle power subsystems
NASA Technical Reports Server (NTRS)
Kordon, Mark; Klimeck, Gerhard; Hanks, David; Hua, Hook
2004-01-01
Evolutionary computing has proven to be a straightforward and robust approach for optimizing a wide range of difficult analysis and design problems. This paper discusses the application of these techniques to an existing space vehicle power subsystem resource and performance analysis simulation in a parallel processing environment. Our preliminary results demonstrate that this approach has the potential to improve the space system trade study process by allowing engineers to statistically weight subsystem goals of mass, cost and performance and then automatically size power elements based on anticipated performance of the subsystem rather than on worst-case estimates.
Application of Artificial Neural Networks to the Design of Turbomachinery Airfoils
NASA Technical Reports Server (NTRS)
Rai, Man Mohan; Madavan, Nateri
1997-01-01
Artificial neural networks are widely used in engineering applications, such as control, pattern recognition, plant modeling and condition monitoring, to name just a few. In this seminar we will explore the possibility of applying neural networks to aerodynamic design, in particular, the design of turbomachinery airfoils. The principal idea behind this effort is to represent the design space using a neural network (within some parameter limits), and then to employ an optimization procedure to search this space for a solution that exhibits optimal performance characteristics. Results obtained for design problems in two spatial dimensions will be presented.
Sizing of complex structure by the integration of several different optimal design algorithms
NASA Technical Reports Server (NTRS)
Sobieszczanski, J.
1974-01-01
Practical design of large-scale structures can be accomplished with the aid of the digital computer by bringing together in one computer program algorithms of nonlinear mathematical programming and optimality criteria with weight-strength and other so-called engineering methods. Applications of this approach to aviation structures are discussed with a detailed description of how the total problem of structural sizing can be broken down into subproblems for best utilization of each algorithm and for efficient organization of the program into iterative loops. Typical results are examined for a number of examples.
Analysis of problems with dry fermentation process for biogas production
NASA Astrophysics Data System (ADS)
Pilát, Peter; Patsch, Marek; Jandačka, Jozef
2012-04-01
The technology of dry anaerobic fermentation is still met with some scepticism, and therefore most biogas plants use wet fermentation technology. The fermentation process cannot be completed without optimally controlled conditions: dry matter content, density, pH and, in particular, the reaction temperature. To determine whether this distrust of dry fermentation is justified, an experimental small-scale biogas station was built at the Department of Power Engineering at the University of Zilina; it allows analysis of the optimal parameters of dry anaerobic fermentation and, in particular, of the effect of the reaction temperature on the yield and quality of biogas.
Wang, Jiqiang
2016-03-01
Restricted sensing and actuation control represents an important area of research that has been overlooked in most design methodologies. In many practical control engineering problems, it is necessary to implement the design through a single sensor and single actuator for multivariate performance variables. In this paper, a novel approach is proposed for the solution of the single sensor and single actuator control problem in which performance over any prescribed frequency band can also be tailored. The results are obtained for the broad band control design based on the formulation for discrete frequency control. It is shown that the single sensor and single actuator control problem over a frequency band can be cast into a Nevanlinna-Pick interpolation problem. An optimal controller can then be obtained via convex optimization over LMIs. Even more remarkable is that robustness issues can also be tackled in this framework. A numerical example is provided for the broad band attenuation of rotor blade vibration to illustrate the proposed design procedures. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
Drainage network optimization for inundation mitigation case study of ITS Surabaya
NASA Astrophysics Data System (ADS)
Savitri, Yang Ratri; Lasminto, Umboro
2017-06-01
Institut Teknologi Sepuluh Nopember (ITS) Surabaya is one of the engineering campuses in Surabaya, with an area of ± 187 ha consisting of buildings and campus facilities. The campus is served by a drainage system planned according to the 2002 ITS Master Plan. The drainage system was planned with a number of retention and detention ponds based on the city's Zero Delta Q concept. However, in the rainy season, inundation problems frequently occur in several locations. The problems can be traced to two major sources, namely the internal campus facilities and the external conditions connected with the city drainage system. This paper describes the capability of drainage network optimization to mitigate local urban drainage problems. The hydrology-hydraulic investigation was done by utilizing the Storm Water Management Model (SWMM) developed by the US Environmental Protection Agency (EPA). The mitigation is based on several alternatives derived from the existing conditions and social considerations. The study results showed that managing the flow from the external source could reduce the final stored volume of the campus main channel by 31.75%.
Evaluating a Web-Based Interface for Internet Telemedicine
NASA Technical Reports Server (NTRS)
Lathan, Corinna E.; Newman, Dava J.; Sebrechts, Marc M.; Doarn, Charles R.
1997-01-01
The objective is to introduce the usability engineering methodology of heuristic evaluation to the design and development of a web-based telemedicine system. Using a set of usability criteria, or heuristics, one evaluator examined the Spacebridge to Russia web site for usability problems. Thirty-four usability problems were found in this preliminary study and all were assigned a severity rating. The value of heuristic analysis in the iterative design of a system is shown because the problems can be fixed before deployment of the system, and the problems found are of a different nature than those found by actual users of the system. It was therefore determined that there is potential value in heuristic evaluation paired with user testing as a strategy for designing systems with optimal performance.
Multi-objective engineering design using preferences
NASA Astrophysics Data System (ADS)
Sanchis, J.; Martinez, M.; Blasco, X.
2008-03-01
System design is a complex task when design parameters have to satisfy a number of specifications and objectives which often conflict with one another. This challenging problem is called multi-objective optimization (MOO). The most common approach consists of optimizing a single cost index formed by a weighted sum of objectives. However, once the weights are chosen, the solution does not guarantee the best compromise among specifications, because there is an infinite number of solutions. A new approach can be stated, based on the designer's experience regarding the required specifications and the associated problems. This valuable information can be translated into preferences for design objectives, and will lead the search process to the best solution in terms of these preferences. This article presents a new method, which enumerates these a priori objective preferences. As a result, a single objective is built automatically and no weight selection need be performed. Problems occurring because of the multimodal nature of the generated single cost index are managed with genetic algorithms (GAs).
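A very small sketch of folding several conflicting objectives into one cost index is given below. A simple preference-normalized aggregation and a multi-start local search stand in for the paper's preference enumeration and genetic algorithm; the objective functions and "desirable" levels are invented for illustration.

```python
# Sketch: build one cost index from several objectives and search it from many starts.
# The aggregation rule and the objectives are illustrative, not the paper's method.
import numpy as np
from scipy.optimize import minimize

def f1(x):  # e.g., tracking error
    return (x[0] - 1.0) ** 2 + x[1] ** 2

def f2(x):  # e.g., control effort
    return x[0] ** 2 + (x[1] - 2.0) ** 2

# Designer preferences expressed as "desirable" levels; objectives are normalized
# against them, so no manual trade-off weights between raw scales are tuned.
desired = {"f1": 0.5, "f2": 1.0}

def aggregate(x):
    # Worst normalized objective: pushing it down improves the least satisfied goal.
    return max(f1(x) / desired["f1"], f2(x) / desired["f2"])

rng = np.random.default_rng(2)
best = min(
    (minimize(aggregate, rng.uniform(-3, 3, size=2), method="Nelder-Mead") for _ in range(20)),
    key=lambda r: r.fun,
)
print("design:", best.x, "aggregate cost:", best.fun)
```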
Pre-Hardware Optimization of Spacecraft Image Processing Algorithms and Hardware Implementation
NASA Technical Reports Server (NTRS)
Kizhner, Semion; Petrick, David J.; Flatley, Thomas P.; Hestnes, Phyllis; Jentoft-Nilsen, Marit; Day, John H. (Technical Monitor)
2002-01-01
Spacecraft telemetry rates and telemetry product complexity have steadily increased over the last decade presenting a problem for real-time processing by ground facilities. This paper proposes a solution to a related problem for the Geostationary Operational Environmental Spacecraft (GOES-8) image data processing and color picture generation application. Although large super-computer facilities are the obvious heritage solution, they are very costly, making it imperative to seek a feasible alternative engineering solution at a fraction of the cost. The proposed solution is based on a Personal Computer (PC) platform and synergy of optimized software algorithms, and reconfigurable computing hardware (RC) technologies, such as Field Programmable Gate Arrays (FPGA) and Digital Signal Processors (DSP). It has been shown that this approach can provide superior inexpensive performance for a chosen application on the ground station or on-board a spacecraft.
Neutron optics concept for the materials engineering diffractometer at the ESS
NASA Astrophysics Data System (ADS)
Šaroun, J.; Fenske, J.; Rouijaa, M.; Beran, P.; Navrátil, J.; Lukáš, P.; Schreyer, A.; Strobl, M.
2016-09-01
The Beamline for European Materials Engineering Research (BEER) has recently been proposed to be built at the European Spallation Source (ESS). The presented concept of neutron delivery optics for this instrument addresses the problems of bi-spectral beam extraction from a small moderator, optimization of the neutron guide profile for long-range neutron transport, and focusing at the sample under various constraints. These include free space before and after the guides, a narrow guide section with gaps for choppers, closing of the direct line of sight, and cost reduction by optimization of the guide cross-section and coating. A system of slits and exchangeable focusing optics is proposed in order to match the various wavelength resolution options provided by the pulse shaping and modulation choppers, which permits resolution to be traded efficiently for intensity over a wide range. Simulated performance characteristics such as the brilliance transfer ratio are complemented by an analysis of the histories of “useful” neutrons obtained by back-tracing neutrons hitting the sample, which helps to optimize some of the neutron guide parameters such as the supermirror coating.
Advanced launch system trajectory optimization using suboptimal control
NASA Technical Reports Server (NTRS)
Shaver, Douglas A.; Hull, David G.
1993-01-01
The maximum-final mass trajectory of a proposed configuration of the Advanced Launch System is presented. A model for the two-stage rocket is given; the optimal control problem is formulated as a parameter optimization problem; and the optimal trajectory is computed using a nonlinear programming code called VF02AD. Numerical results are presented for the controls (angle of attack and velocity roll angle) and the states. After the initial rotation, the angle of attack goes to a positive value to keep the trajectory as high as possible, returns to near zero to pass through the transonic regime and satisfy the dynamic pressure constraint, returns to a positive value to keep the trajectory high and to take advantage of minimum drag at positive angle of attack due to aerodynamic shading of the booster, and then rolls off to negative values to satisfy the constraints. Because the engines cannot be throttled, the maximum dynamic pressure occurs at a single point; there is no maximum dynamic pressure subarc. To test approximations for obtaining analytical solutions for guidance, two additional optimal trajectories are computed: one using untrimmed aerodynamics and one using no atmospheric effects except for the dynamic pressure constraint. It is concluded that untrimmed aerodynamics has a negligible effect on the optimal trajectory and that approximate optimal controls should be able to be obtained by treating atmospheric effects as perturbations.
Optimizing separate phase light hydrocarbon recovery from contaminated unconfined aquifers
NASA Astrophysics Data System (ADS)
Cooper, Grant S.; Peralta, Richard C.; Kaluarachchi, Jagath J.
A modeling approach is presented that optimizes separate phase recovery of light non-aqueous phase liquids (LNAPL) for a single dual-extraction well in a homogeneous, isotropic unconfined aquifer. A simulation/regression/optimization (S/R/O) model is developed to predict, analyze, and optimize the oil recovery process. The approach combines detailed simulation, nonlinear regression, and optimization. The S/R/O model utilizes nonlinear regression equations describing system response to time-varying water pumping and oil skimming. Regression equations are developed for residual oil volume and free oil volume. The S/R/O model determines optimized time-varying (stepwise) pumping rates which minimize residual oil volume and maximize free oil recovery while causing free oil volume to decrease a specified amount. This S/R/O modeling approach implicitly immobilizes the free product plume by reversing the water table gradient while achieving containment. Application to a simple representative problem illustrates the S/R/O model utility for problem analysis and remediation design. When compared with the best steady pumping strategies, the optimal stepwise pumping strategy improves free oil recovery by 11.5% and reduces the amount of residual oil left in the system due to pumping by 15%. The S/R/O model approach offers promise for enhancing the design of free phase LNAPL recovery systems and to help in making cost-effective operation and management decisions for hydrogeologists, engineers, and regulators.
NASA Astrophysics Data System (ADS)
Aktan, A. Emin
2003-08-01
Although the interconnected systems nature of the infrastructures, and the complexity of interactions between their engineered, socio-technical and natural constituents have been recognized for some time, the principles of effectively operating, protecting and preserving such systems by taking full advantage of "modeling, simulations, optimization, control and decision making" tools developed by the systems engineering and operations research community have not been adequately studied or discussed by many engineers including the writer. Differential and linear equation systems, numerical and finite element modeling techniques, statistical and probabilistic representations are universal, however, different disciplines have developed their distinct approaches to conceptualizing, idealizing and modeling the systems they commonly deal with. The challenge is in adapting and integrating deterministic and stochastic, geometric and numerical, physics-based and "soft (data-or-knowledge based)", macroscopic or microscopic models developed by various disciplines for simulating infrastructure systems. There is a lot to be learned by studying how different disciplines have studied, improved and optimized the systems relating to various processes and products in their domains. Operations research has become a fifty-year old discipline addressing complex systems problems. Its mathematical tools range from linear programming to decision processes and game theory. These tools are used extensively in management and finance, as well as by industrial engineers for optimizing and quality control. Progressive civil engineering academic programs have adopted "systems engineering" as a focal area. However, most of the civil engineering systems programs remain focused on constructing and analyzing highly idealized, often generic models relating to the planning or operation of transportation, water or waste systems, maintenance management, waste management or general infrastructure hazards risk management. We further note that in the last decade there have been efforts for "agent-based" modeling of synthetic infrastructure systems by taking advantage of supercomputers at various DOE Laboratories. However, whether there is any similitude between such synthetic and actual systems needs investigating further.
Aerodynamic Shape Optimization Using Hybridized Differential Evolution
NASA Technical Reports Server (NTRS)
Madavan, Nateri K.
2003-01-01
An aerodynamic shape optimization method that uses an evolutionary algorithm known as Differential Evolution (DE) in conjunction with various hybridization strategies is described. DE is a simple and robust evolutionary strategy that has proven effective in determining the global optimum for several difficult optimization problems. Various hybridization strategies for DE are explored, including the use of neural networks as well as traditional local search methods. A Navier-Stokes solver is used to evaluate the various intermediate designs and provide inputs to the hybrid DE optimizer. The method is implemented on distributed parallel computers so that new designs can be obtained within reasonable turnaround times. Results are presented for the inverse design of a turbine airfoil from a modern jet engine. (The final paper will include at least one other aerodynamic design application.) The capability of the method to search large design spaces and obtain the optimal airfoils in an automatic fashion is demonstrated.
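A stripped-down DE/rand/1/bin loop with an occasional local polish of the best member is sketched below to illustrate the hybridization idea; the cheap analytic objective and all parameter values stand in for the Navier-Stokes evaluations and settings used in the paper.

```python
# Sketch of differential evolution (DE/rand/1/bin) hybridized with a local search step.
# The objective is a cheap multimodal placeholder for the CFD-based design evaluation.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)

def objective(x):
    return np.sum(x ** 2) + 10 * np.sum(1 - np.cos(2 * np.pi * x))

pop_size, dim, F, CR, gens = 20, 4, 0.8, 0.9, 100
pop = rng.uniform(-5, 5, (pop_size, dim))
fit = np.array([objective(x) for x in pop])

for g in range(gens):
    for i in range(pop_size):
        a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)]
        mutant = a + F * (b - c)                 # DE/rand/1 mutation
        cross = rng.random(dim) < CR             # binomial crossover
        cross[rng.integers(dim)] = True
        trial = np.where(cross, mutant, pop[i])
        f_trial = objective(trial)
        if f_trial < fit[i]:                     # greedy selection
            pop[i], fit[i] = trial, f_trial
    if g % 25 == 0:                              # hybridization: refine the current best locally
        i_best = int(np.argmin(fit))
        res = minimize(objective, pop[i_best], method="Nelder-Mead")
        if res.fun < fit[i_best]:
            pop[i_best], fit[i_best] = res.x, res.fun

print("best design:", pop[np.argmin(fit)], "cost:", fit.min())
```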
Optimization study on the primary mirror lightweighting of a remote sensing instrument
NASA Astrophysics Data System (ADS)
Chan, Chia-Yen; Huang, Bo-Kai; You, Zhen-Ting; Chen, Yi-Cheng; Huang, Ting-Ming
2015-07-01
A remote sensing instrument (RSI) is used to take images for ground surface observation and is exposed to high vacuum, large temperature differences, gravity, 15 g forces, random vibration and other harsh environments during operation. When designing an RSI optical system, not only the optical quality but also the strength of the mechanical structure should be considered. As a result, an optimization method is adopted to solve this engineering problem. In this study, a ZERODUR® mirror with a diameter of 466 mm has been chosen as the model and the optimization has been executed by combining computer-aided design, finite element analysis, and parameter optimization software. The optimization aims to obtain the most lightweight mirror while maintaining structural rigidity and good optical quality. Finally, an optimized mirror with a lightweight ratio of 0.55 is attained successfully.
Ascent performance feasibility for next-generation spacecraft
NASA Astrophysics Data System (ADS)
Mancuso, Salvatore Massimo
This thesis deals with the optimization of the ascent trajectories for single-stage suborbital (SSSO), single-stage-to-orbit (SSTO), and two-stage-to-orbit (TSTO) rocket-powered spacecraft. The maximum payload weight problem has been solved using the sequential gradient-restoration algorithm. For the TSTO case, some modifications to the original version of the algorithm have been necessary in order to deal with discontinuities due to staging and the fact that the functional being minimized depends on interface conditions. The optimization problem is studied for different values of the initial thrust-to-weight ratio in the range 1.3 to 1.6, engine specific impulse in the range 400 to 500 sec, and spacecraft structural factor in the range 0.08 to 0.12. For the TSTO configuration, two subproblems are studied: uniform structural factor between stages and nonuniform structural factor between stages. Due to the regular behavior of the results obtained, engineering approximations have been developed which connect the maximum payload weight to the engine specific impulse and spacecraft structural factor; in turn, this leads to useful design considerations. Also, performance sensitivity to the scale of the aerodynamic drag is studied, and it is shown that its effect on payload weight is relatively small, even for drag changes approaching ± 50%. The main conclusions are that: the design of a SSSO configuration appears to be feasible; the design of a SSTO configuration might be comfortably feasible, marginally feasible, or unfeasible, depending on the parameter values assumed; the design of a TSTO configuration is not only feasible, but its payload appears to be considerably larger than that of a SSTO configuration. Improvements in engine specific impulse and spacecraft structural factor are desirable and crucial for SSTO feasibility; indeed, it appears that aerodynamic improvements do not yield significant improvements in payload weight.
Trajectory Optimization for Missions to Small Bodies with a Focus on Scientific Merit.
Englander, Jacob A; Vavrina, Matthew A; Lim, Lucy F; McFadden, Lucy A; Rhoden, Alyssa R; Noll, Keith S
2017-01-01
Trajectory design for missions to small bodies is tightly coupled both with the selection of targets for a mission and with the choice of spacecraft power, propulsion, and other hardware. Traditional methods of trajectory optimization have focused on finding the optimal trajectory for an a priori selection of destinations and spacecraft parameters. Recent research has expanded the field of trajectory optimization to multidisciplinary systems optimization that includes spacecraft parameters. The logical next step is to extend the optimization process to include target selection based not only on engineering figures of merit but also scientific value. This paper presents a new technique to solve the multidisciplinary mission optimization problem for small-bodies missions, including classical trajectory design, the choice of spacecraft power and propulsion systems, and also the scientific value of the targets. This technique, when combined with modern parallel computers, enables a holistic view of the small body mission design process that previously required iteration among several different design processes.
Optimization of new magnetorheological fluid mount for vibration control of start/stop engine mode
NASA Astrophysics Data System (ADS)
Chung, Jye Ung; Phu, Do Xuan; Choi, Seung-Bok
2015-04-01
Technologies related to saving energy and green vehicles are being actively researched. As part of this trend, the problem of reducing exhaust gas is being addressed in various ways, and these efforts are directly related to the operation of the engine, which emits the exhaust gas. Automatically stopping and restarting the engine when a vehicle stops on the road has become mainstream in the vehicle industry as a way of reducing exhaust gas. However, this technology turns the engine on and off frequently, which transmits large-displacement engine vibration and torsional impact to the chassis. These vibrations, which cause discomfort to passengers, are transmitted through the steering wheel and the gear knob. In this work, in order to resolve this vibration issue, a new magnetorheological (MR) fluid based engine mount (MR mount in short) is proposed. The proposed MR mount is designed to provide a large damping force over various frequency ranges. It is shown that the proposed mount can produce a large damping force and a large force ratio, which is enough to control the unwanted vibrations of the engine start/stop mode.
Optimizing Department of Defense Acquisition Development Test and Evaluation Scheduling
2015-06-01
CPM Critical Path Method; DOD Department of Defense; DT&E development test and evaluation; EMD engineering and manufacturing development; GAMS...these, including the Program Evaluation Review Technique (PERT), the Critical Path Method (CPM), and the resource-constrained project-scheduling...problem (RCPSP). These are of particular interest to this thesis as the current scheduling method uses elements of the PERT/CPM, and the test
An Approximate Dynamic Programming Mode for Optimal MEDEVAC Dispatching
2015-03-26
over the myopic policy. This indicates the ADP policy is efficiently managing resources by not immediately sending the nearest available MEDEVAC...DISPATCHING THESIS Presented to the Faculty Department of Operational Sciences Graduate School of Engineering and Management Air Force Institute of Technology...medical evacuation (MEDEVAC) dispatch policies. To solve the MDP, we apply an approximate dynamic programming (ADP) technique. The problem of deciding
NASA Astrophysics Data System (ADS)
Recent advances in the analytical and numerical treatment of physical and engineering problems are discussed in reviews and reports. Topics addressed include fluid mechanics, numerical methods for differential equations, FEM approaches, and boundary-element methods. Consideration is given to optimization, decision theory, stochastics, actuarial mathematics, applied mathematics and mathematical physics, and numerical analysis.
Robust Design Optimization via Failure Domain Bounding
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2007-01-01
This paper extends and applies the strategies recently developed by the authors for handling constraints under uncertainty to robust design optimization. For the scope of this paper, robust optimization is a methodology aimed at problems for which some parameters are uncertain and are only known to belong to some uncertainty set. This set can be described by either a deterministic or a probabilistic model. In the methodology developed herein, optimization-based strategies are used to bound the constraint violation region using hyper-spheres and hyper-rectangles. By comparing the resulting bounding sets with any given uncertainty model, it can be determined whether the constraints are satisfied for all members of the uncertainty model (i.e., constraints are feasible) or not (i.e., constraints are infeasible). If constraints are infeasible and a probabilistic uncertainty model is available, upper bounds to the probability of constraint violation can be efficiently calculated. The tools developed enable approximating not only the set of designs that make the constraints feasible but also, when required, the set of designs for which the probability of constraint violation is below a prescribed admissible value. When constraint feasibility is possible, several design criteria can be used to shape the uncertainty model of performance metrics of interest. Worst-case, least-second-moment, and reliability-based design criteria are considered herein. Since the problem formulation is generic and the tools derived only require standard optimization algorithms for their implementation, these strategies are easily applicable to a broad range of engineering problems.
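As a rough illustration of the hyper-sphere bounding idea described above, the sketch below finds the radius of the largest sphere of parameter values around a nominal design that contains no constraint violation; the constraint function and nominal values are invented and this is not the authors' implementation.

```python
# Sketch: bound the constraint-violation region with a hyper-sphere by finding the
# closest violating parameter point to the nominal value. The constraint g(p) <= 0
# and the nominal parameters are illustrative placeholders.
import numpy as np
from scipy.optimize import minimize

p_nominal = np.array([1.0, 0.5])

def g(p):
    # Design requirement: g(p) <= 0 means the constraint is satisfied.
    return p[0] ** 2 + 2.0 * p[1] - 3.0

def distance_sq(p):
    return np.sum((p - p_nominal) ** 2)

# Closest point where the constraint is violated (g >= 0): its distance to the
# nominal point is the radius of the largest violation-free hyper-sphere.
res = minimize(distance_sq, p_nominal + 0.1,
               constraints=[{"type": "ineq", "fun": g}],  # enforces g(p) >= 0
               method="SLSQP")
radius = np.sqrt(res.fun)
print("violation-free hyper-sphere radius around nominal:", radius)

# Comparing this radius with the size of a spherical uncertainty set indicates
# whether the constraint is feasible for all members of the uncertainty model.
```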
Robust stochastic optimization for reservoir operation
NASA Astrophysics Data System (ADS)
Pan, Limeng; Housh, Mashor; Liu, Pan; Cai, Ximing; Chen, Xin
2015-01-01
Optimal reservoir operation under uncertainty is a challenging engineering problem. Application of classic stochastic optimization methods to large-scale problems is limited due to computational difficulty. Moreover, classic stochastic methods assume that the estimated distribution function or the sample inflow data accurately represents the true probability distribution, which may be invalid and the performance of the algorithms may be undermined. In this study, we introduce a robust optimization (RO) approach, Iterative Linear Decision Rule (ILDR), so as to provide a tractable approximation for a multiperiod hydropower generation problem. The proposed approach extends the existing LDR method by accommodating nonlinear objective functions. It also provides users with the flexibility of choosing the accuracy of ILDR approximations by assigning a desired number of piecewise linear segments to each uncertainty. The performance of the ILDR is compared with benchmark policies including the sampling stochastic dynamic programming (SSDP) policy derived from historical data. The ILDR solves both the single and multireservoir systems efficiently. The single reservoir case study results show that the RO method is as good as SSDP when implemented on the original historical inflows and it outperforms SSDP policy when tested on generated inflows with the same mean and covariance matrix as those in history. For the multireservoir case study, which considers water supply in addition to power generation, numerical results show that the proposed approach performs as well as in the single reservoir case study in terms of optimal value and distributional robustness.
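A heavily simplified sketch of the linear-decision-rule idea is given below: the release in each period is an affine function of the current inflow, and the rule coefficients are tuned over synthetic inflow scenarios. The piecewise-linear refinement of ILDR and the actual reservoir data are not reproduced; all numbers and the power proxy are illustrative.

```python
# Sketch of a linear decision rule for reservoir release: release = a + b * inflow.
# Inflow scenarios, storage bounds, and the hydropower proxy are invented.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
T, n_scen = 12, 50
inflows = rng.gamma(shape=3.0, scale=10.0, size=(n_scen, T))   # synthetic monthly inflows
s0, s_min, s_max, r_max = 100.0, 20.0, 200.0, 60.0

def negative_benefit(theta):
    a, b = theta
    benefit, penalty = 0.0, 0.0
    for inflow in inflows:
        s = s0
        for t in range(T):
            r = np.clip(a + b * inflow[t], 0.0, r_max)          # rule-based release
            s = s + inflow[t] - r                               # storage balance
            benefit += np.sqrt(r) * max(s, 0.0)                 # crude head*flow power proxy
            penalty += 1e3 * (max(0.0, s_min - s) + max(0.0, s - s_max)) ** 2
    return -(benefit / n_scen) + penalty / n_scen

res = minimize(negative_benefit, x0=np.array([10.0, 0.5]), method="Nelder-Mead")
print("decision rule coefficients (a, b):", res.x)
```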
A technique for integrating engine cycle and aircraft configuration optimization
NASA Technical Reports Server (NTRS)
Geiselhart, Karl A.
1994-01-01
A method for conceptual aircraft design that incorporates the optimization of major engine design variables for a variety of cycle types was developed. The methodology should improve the lengthy screening process currently involved in selecting an appropriate engine cycle for a given application or mission. The new capability will allow environmental concerns such as airport noise and emissions to be addressed early in the design process. The ability to rapidly perform optimization and parametric variations using both engine cycle and aircraft design variables, and to see the impact on the aircraft, should provide insight and guidance for more detailed studies. A brief description of the aircraft performance and mission analysis program and the engine cycle analysis program that were used is given. A new method of predicting propulsion system weight and dimensions using thermodynamic cycle data, preliminary design, and semi-empirical techniques is introduced. Propulsion system performance and weights data generated by the program are compared with industry data and data generated using well established codes. The ability of the optimization techniques to locate an optimum is demonstrated and some of the problems that had to be solved to accomplish this are illustrated. Results from the application of the program to the analysis of three supersonic transport concepts installed with mixed flow turbofans are presented. The results from the application to a Mach 2.4, 5000 n.mi. transport indicate that the optimum bypass ratio is near 0.45 with less than 1 percent variation in minimum gross weight for bypass ratios ranging from 0.3 to 0.6. In the final application of the program, a low sonic boom fix a takeoff gross weight concept that would fly at Mach 2.0 overwater and at Mach 1.6 overland is compared with a baseline concept of the same takeoff gross weight that would fly Mach 2.4 overwater and subsonically overland. The results indicate that for the design mission, the low boom concept has a 5 percent total range penalty relative to the baseline. Additional cycles were optimized for various design overland distances and the effect of flying off-design overland distances is illustrated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goodwin, D. L.; Kuprov, Ilya, E-mail: i.kuprov@soton.ac.uk
Quadratic convergence throughout the active space is achieved for the gradient ascent pulse engineering (GRAPE) family of quantum optimal control algorithms. We demonstrate in this communication that the Hessian of the GRAPE fidelity functional is unusually cheap, having the same asymptotic complexity scaling as the functional itself. This leads to the possibility of using very efficient numerical optimization techniques. In particular, the Newton-Raphson method with a rational function optimization (RFO) regularized Hessian is shown in this work to require fewer system trajectory evaluations than any other algorithm in the GRAPE family. This communication describes algebraic and numerical implementation aspects (matrix exponential recycling, Hessian regularization, etc.) for the RFO Newton-Raphson version of GRAPE and reports benchmarks for common spin state control problems in magnetic resonance spectroscopy.
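The rational function optimization (RFO) regularization mentioned above amounts to taking the Newton step from an augmented Hessian eigenproblem. A bare numpy sketch of that step for a generic gradient and Hessian is shown below; it is not the GRAPE implementation itself, and the toy gradient and Hessian are invented.

```python
# Sketch of a single RFO-regularized Newton-Raphson step for an ascent problem such
# as fidelity maximization: build the augmented Hessian, take its extreme eigenpair,
# and read off the step. Generic numpy only; not the actual GRAPE/Spinach code.
import numpy as np

def rfo_step(gradient, hessian, maximize=True):
    n = gradient.size
    aug = np.zeros((n + 1, n + 1))
    aug[:n, :n] = hessian
    aug[:n, n] = gradient
    aug[n, :n] = gradient
    vals, vecs = np.linalg.eigh(aug)
    # For ascent use the eigenvector of the largest eigenvalue, for descent the smallest.
    v = vecs[:, -1] if maximize else vecs[:, 0]
    if abs(v[n]) < 1e-12:
        raise RuntimeError("degenerate RFO step")
    return v[:n] / v[n]          # scale so the augmented coordinate equals one

# Toy usage on a quadratic model of a fidelity surface near a maximum.
g = np.array([0.3, -0.2])
H = np.array([[-1.0, 0.1], [0.1, -0.5]])
print("RFO step:", rfo_step(g, H))
```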
A Standard Platform for Testing and Comparison of MDAO Architectures
NASA Technical Reports Server (NTRS)
Gray, Justin S.; Moore, Kenneth T.; Hearn, Tristan A.; Naylor, Bret A.
2012-01-01
The Multidisciplinary Design Analysis and Optimization (MDAO) community has developed a multitude of algorithms and techniques, called architectures, for performing optimizations on complex engineering systems which involve coupling between multiple discipline analyses. These architectures seek to efficiently handle optimizations with computationally expensive analyses including multiple disciplines. We propose a new testing procedure that can provide a quantitative and qualitative means of comparison among architectures. The proposed test procedure is implemented within the open source framework, OpenMDAO, and comparative results are presented for five well-known architectures: MDF, IDF, CO, BLISS, and BLISS-2000. We also demonstrate how using open source software development methods can allow the MDAO community to submit new problems and architectures to keep the test suite relevant.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zitney, S.E.; McCorkle, D.; Yang, C.
Process modeling and simulation tools are widely used for the design and operation of advanced power generation systems. These tools enable engineers to solve the critical process systems engineering problems that arise throughout the lifecycle of a power plant, such as designing a new process, troubleshooting a process unit or optimizing operations of the full process. To analyze the impact of complex thermal and fluid flow phenomena on overall power plant performance, the Department of Energy’s (DOE) National Energy Technology Laboratory (NETL) has developed the Advanced Process Engineering Co-Simulator (APECS). The APECS system is an integrated software suite that combines process simulation (e.g., Aspen Plus) and high-fidelity equipment simulations such as those based on computational fluid dynamics (CFD), together with advanced analysis capabilities including case studies, sensitivity analysis, stochastic simulation for risk/uncertainty analysis, and multi-objective optimization. In this paper we discuss the initial phases of the integration of the APECS system with the immersive and interactive virtual engineering software, VE-Suite, developed at Iowa State University and Ames Laboratory. VE-Suite uses the ActiveX (OLE Automation) controls in the Aspen Plus process simulator wrapped by the CASI library developed by Reaction Engineering International to run process/CFD co-simulations and query for results. This integration represents a necessary step in the development of virtual power plant co-simulations that will ultimately reduce the time, cost, and technical risk of developing advanced power generation systems.
NASA Astrophysics Data System (ADS)
Ogren, Ryan M.
For this work, Hybrid PSO-GA and Artificial Bee Colony Optimization (ABC) algorithms are applied to the optimization of experimental diesel engine performance, to meet Environmental Protection Agency, off-road, diesel engine standards. This work is the first to apply ABC optimization to experimental engine testing. All trials were conducted at partial load on a four-cylinder, turbocharged, John Deere engine using neat biodiesel for PSO-GA and regular pump diesel for ABC. Key variables were altered throughout the experiments, including fuel pressure, intake gas temperature, exhaust gas recirculation flow, fuel injection quantity for two injections, pilot injection timing and main injection timing. Both forms of optimization proved effective for optimizing engine operation. The PSO-GA hybrid was able to find a superior solution to that of ABC within fewer engine runs. Both solutions call for high exhaust gas recirculation to reduce oxides of nitrogen (NOx) emissions while also moving pilot and main fuel injections to near top dead center for improved tradeoffs between NOx and particulate matter.
Multidisciplinary design optimization using genetic algorithms
NASA Technical Reports Server (NTRS)
Unal, Resit
1994-01-01
Multidisciplinary design optimization (MDO) is an important step in the conceptual design and evaluation of launch vehicles since it can have a significant impact on performance and life cycle cost. The objective is to search the system design space to determine values of design variables that optimize the performance characteristic subject to system constraints. Gradient-based optimization routines have been used extensively for aerospace design optimization. However, one limitation of gradient-based optimizers is their need for gradient information. Therefore, design problems which include discrete variables cannot be studied. Such problems are common in launch vehicle design. For example, the number of engines and material choices must be integer values or assume only a few discrete values. In this study, genetic algorithms are investigated as an approach to MDO problems involving discrete variables and discontinuous domains. Optimization by genetic algorithms (GA) uses a search procedure which is fundamentally different from those gradient-based methods. Genetic algorithms seek to find good solutions in an efficient and timely manner rather than finding the best solution. GA are designed to mimic evolutionary selection. A population of candidate designs is evaluated at each iteration, and each individual's probability of reproduction (existence in the next generation) depends on its fitness value (related to the value of the objective function). Progress toward the optimum is achieved by the crossover and mutation operations. GA is attractive since it uses only objective function values in the search process, so gradient calculations are avoided. Hence, GA are able to deal with discrete variables. Studies report success in the use of GA for aircraft design optimization studies, trajectory analysis, space structure design and control systems design. In these studies reliable convergence was achieved, but the number of function evaluations was large compared with efficient gradient methods. Application of GA is underway for a cost optimization study of a launch-vehicle fuel tank and the structural design of a wing. The strengths and limitations of GA for launch vehicle design optimization are studied.
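A toy GA over a mixed design vector, with one discrete gene (a stand-in for number of engines) and two continuous genes, is sketched below to make the selection, crossover and mutation loop described above concrete. The encoding, fitness function and constraint are invented and are not the study's launch-vehicle model.

```python
# Toy genetic algorithm over a mixed discrete/continuous design vector.
# Gene 0 is an integer "number of engines"; the fitness function is invented.
import numpy as np

rng = np.random.default_rng(5)
pop_size, gens = 40, 100

def fitness(ind):
    n_engines, tank_len, wall_t = ind
    weight = 500 * n_engines + 80 * tank_len + 900 * wall_t
    thrust_ok = n_engines * 100 >= 350          # crude constraint: enough thrust
    return -weight if thrust_ok else -1e9       # maximize; penalize infeasible designs

def random_individual():
    return np.array([rng.integers(1, 9), rng.uniform(5, 20), rng.uniform(0.1, 1.0)])

pop = [random_individual() for _ in range(pop_size)]

for _ in range(gens):
    scores = np.array([fitness(ind) for ind in pop])

    def select():
        # Tournament selection: reproduction probability grows with fitness.
        i, j = rng.integers(pop_size, size=2)
        return pop[i] if scores[i] > scores[j] else pop[j]

    new_pop = []
    for _ in range(pop_size):
        p1, p2 = select(), select()
        mask = rng.random(3) < 0.5               # uniform crossover
        child = np.where(mask, p1, p2).copy()
        if rng.random() < 0.2:                   # mutation keeps gene 0 integer-valued
            child[0] = rng.integers(1, 9)
            child[1:] += rng.normal(0, 0.5, size=2)
        new_pop.append(child)
    pop = new_pop

best = max(pop, key=fitness)
print("best design (engines, tank length, wall thickness):", best)
```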
A centre-free approach for resource allocation with lower bounds
NASA Astrophysics Data System (ADS)
Obando, Germán; Quijano, Nicanor; Rakoto-Ravalontsalama, Naly
2017-09-01
Since the complexity and scale of systems are continuously increasing, there is a growing interest in developing distributed algorithms that are capable of addressing information constraints, especially for solving optimisation and decision-making problems. In this paper, we propose a novel method to solve distributed resource allocation problems that include lower bound constraints. The optimisation process is carried out by a set of agents that use a communication network to coordinate their decisions. Convergence and optimality of the method are guaranteed under some mild assumptions related to the convexity of the problem and the connectivity of the underlying graph. Finally, we compare our approach with other techniques reported in the literature, and we present some engineering applications.
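The sketch below shows a classic centre-free resource allocation update of the kind this line of work builds on: neighbouring agents exchange marginal costs over the communication graph, and symmetric weights keep the total allocation constant. The graph, quadratic costs and step size are invented, and the paper's handling of lower bounds is not reproduced here.

```python
# Sketch of a classic centre-free resource allocation update: each agent shifts
# resource toward neighbours with higher marginal cost. Symmetric weights preserve
# the total allocation. The paper's lower-bound handling is not reproduced.
import numpy as np

# Ring graph of 4 agents; a_ij > 0 only for communicating neighbours (symmetric).
A = 0.1 * np.array([[0, 1, 0, 1],
                    [1, 0, 1, 0],
                    [0, 1, 0, 1],
                    [1, 0, 1, 0]], dtype=float)

cost_coeff = np.array([1.0, 2.0, 0.5, 1.5])      # quadratic costs c_i * x_i^2

def marginal(x):
    return 2.0 * cost_coeff * x

x = np.array([10.0, 0.0, 0.0, 0.0])              # initial allocation, total = 10
for _ in range(500):
    m = marginal(x)
    # x_i <- x_i + sum_j a_ij * (m_j - m_i); at the fixed point all marginals agree.
    x = x + A @ m - A.sum(axis=1) * m

print("allocation:", x, "total preserved:", x.sum())
```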
NASA Technical Reports Server (NTRS)
Bair, E. K.
1986-01-01
The System Trades Study and Design Methodology Plan is used to conduct trade studies to define the combination of Space Shuttle Main Engine features that will optimize candidate engine configurations. This is accomplished by using vehicle sensitivities and engine parametric data to establish engine chamber pressure and area ratio design points for candidate engine configurations. Engineering analyses are to be conducted to refine and optimize the candidate configurations at their design points. The optimized engine data and characteristics are then evaluated and compared against other candidates being considered. The Evaluation Criteria Plan is then used to compare and rank the optimized engine configurations on the basis of cost.
Low-Thrust Trajectory Optimization with Simplified SQP Algorithm
NASA Technical Reports Server (NTRS)
Parrish, Nathan L.; Scheeres, Daniel J.
2017-01-01
The problem of low-thrust trajectory optimization in highly perturbed dynamics is a stressing case for many optimization tools. Highly nonlinear dynamics and continuous thrust are each, separately, non-trivial problems in the field of optimal control, and when combined, the problem is even more difficult. This paper describes a fast, robust method to design a trajectory in the CRTBP (circular restricted three body problem), beginning with no or very little knowledge of the system. The approach is inspired by the SQP (sequential quadratic programming) algorithm, in which a general nonlinear programming problem is solved via a sequence of quadratic problems. A few key simplifications make the algorithm presented fast and robust to initial guess: a quadratic cost function, neglecting the line search step when the solution is known to be far away, judicious use of endpoint constraints, and mesh refinement on multiple shooting with fixed-step integration. In comparison to the traditional approach of plugging the problem into a “black-box” NLP solver, the methods shown converge even when given no knowledge of the solution at all. It was found that the only piece of information that the user needs to provide is a rough guess for the time of flight, as the transfer time guess will dictate which set of local solutions the algorithm could converge on. This robustness to initial guess is a compelling feature, as three-body orbit transfers are challenging to design with intuition alone. Of course, if a high-quality initial guess is available, the methods shown are still valid. We have shown that endpoints can be efficiently constrained to lie on 3-body repeating orbits, and that time of flight can be optimized as well. When optimizing the endpoints, we must make a trade between converging quickly on sub-optimal endpoints or converging more slowly on endpoints that are arbitrarily close to optimal. It is easy for the mission design engineer to adjust this trade based on the problem at hand. The biggest limitation to the algorithm at this point is that multi-revolution transfers (greater than 2 revolutions) do not work nearly as well. This restriction comes in because the relationship between node 1 and node N becomes increasingly nonlinear as the angular distance grows. Transfers with more than about 1.5 complete revolutions generally require the line search to improve convergence. Future work includes: comparison of this algorithm with other established tools; improvements to how multiple-revolution transfers are handled; parallelization of the Jacobian computation; increased efficiency for the line search; and optimization of many more trajectories between a variety of 3-body orbits.
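To illustrate the sequence-of-quadratic-problems idea described above, the sketch below runs a toy equality-constrained SQP iteration with a quadratic cost and no line search, taking the full step each time. The algebraic constraint stands in for shooting defects; this is not the CRTBP multiple-shooting implementation of the paper.

```python
# Toy illustration of the SQP idea: at each iteration solve the KKT system of a
# quadratic subproblem and take the full step (no line search). The constraint is
# a trivial algebraic stand-in for the multiple-shooting defect constraints.
import numpy as np

def cost_grad_hess(x):
    # Quadratic cost: 0.5 * ||x - x_ref||^2 (stands in for control effort).
    x_ref = np.array([1.0, 2.0, 0.5])
    return x - x_ref, np.eye(3)

def constraint(x):
    # Nonlinear "defect" constraint c(x) = 0.
    return np.array([x[0] ** 2 + x[1] ** 2 - 4.0])

def constraint_jac(x):
    return np.array([[2.0 * x[0], 2.0 * x[1], 0.0]])

x = np.array([0.5, 0.5, 0.0])
for _ in range(20):
    g, H = cost_grad_hess(x)
    c, A = constraint(x), constraint_jac(x)
    n, m = x.size, c.size
    # KKT system of the quadratic subproblem: [[H, A^T], [A, 0]] [d, lam] = [-g, -c]
    kkt = np.block([[H, A.T], [A, np.zeros((m, m))]])
    rhs = -np.concatenate([g, c])
    sol = np.linalg.solve(kkt, rhs)
    x = x + sol[:n]               # full step, as in the simplified scheme

print("solution:", x, "constraint residual:", constraint(x))
```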
2012-01-01
Background: Elementary mode (EM) analysis is ideally suited for metabolic engineering as it allows for an unbiased decomposition of metabolic networks in biologically meaningful pathways. Recently, constrained minimal cut sets (cMCS) have been introduced to derive optimal design strategies for strain improvement by using the full potential of EM analysis. However, this approach does not allow for the inclusion of regulatory information. Results: Here we present an alternative, novel and simple method for the prediction of cMCS, which allows boolean transcriptional regulation to be taken into account. We use binary linear programming and show that the design of a regulated, optimal metabolic network of minimal functionality can be formulated as a standard optimization problem, where EM and regulation show up as constraints. We validated our tool by optimizing ethanol production in E. coli. Our study showed that up to 70% of the predicted cMCS contained non-enzymatic, non-annotated reactions, which are difficult to engineer. These cMCS are automatically excluded by our approach utilizing simple weight functions. Finally, due to efficient preprocessing, the binary program remains computationally feasible. Conclusions: We used integer programming to predict efficient deletion strategies to metabolically engineer a production organism. Our formulation utilizes the full potential of cMCS but adds additional flexibility to the design process. In particular, our method allows regulatory information to be integrated into the metabolic design process and explicitly favors experimentally feasible deletions. Our method remains manageable even if millions or potentially billions of EM enter the analysis. We demonstrated that our approach is able to correctly predict the most efficient designs for ethanol production in E. coli. PMID:22898474
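A tiny binary linear program in the spirit of the weighted formulation above is sketched below: choose the cheapest set of reaction deletions that hits every undesired elementary mode. The mode/reaction incidence matrix and weights are invented, and scipy.optimize.milp (SciPy >= 1.9) is used as a generic solver rather than the authors' tool.

```python
# Sketch: choose a minimal-weight set of reaction deletions that hits every
# undesired elementary mode (a set-cover-style binary program). The incidence
# matrix and weights are invented; requires SciPy >= 1.9 for milp.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# rows = undesired elementary modes, columns = reactions; 1 if the reaction
# participates in the mode (deleting the reaction disables that mode).
participation = np.array([[1, 0, 1, 0, 0],
                          [0, 1, 1, 0, 0],
                          [0, 0, 0, 1, 1],
                          [1, 0, 0, 0, 1]])

# Weights penalize deletions that are hard to engineer (e.g., non-annotated reactions).
weights = np.array([1.0, 1.0, 2.0, 1.0, 5.0])

# Each undesired mode must contain at least one deleted reaction.
cover = LinearConstraint(participation, lb=np.ones(participation.shape[0]), ub=np.inf)

res = milp(c=weights,
           constraints=[cover],
           integrality=np.ones_like(weights),        # all decision variables binary
           bounds=Bounds(0, 1))

print("delete reactions:", np.flatnonzero(res.x > 0.5), "total weight:", res.fun)
```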
Optimization in the systems engineering process
NASA Technical Reports Server (NTRS)
Lemmerman, L. A.
1984-01-01
The objective is to look at optimization as it applies to the design process at a large aircraft company. The design process at Lockheed-Georgia is described. Some examples of the impact that optimization has had on that process are given, and then some areas that must be considered if optimization is to be successful and supportive in the total design process are indicated. Optimization must continue to be sold and this selling is best done by consistent good performance. For this good performance to occur, the future approaches must be clearly thought out so that the optimization methods solve the problems that actually occur during design. The visibility of the design process must be maintained as further developments are proposed. Careful attention must be given to the management of data in the optimization process, both for technical reasons and for administrative purposes. Finally, to satisfy program needs, provisions must be included to supply data to support program decisions, and to communicate with design processes outside of the optimization process. If designers fail to adequately consider all of these needs, the future acceptance of optimization will be impeded.
Geng, Steven B.; Cheung, Jason K.; Narasimhan, Chakravarthy; Shameem, Mohammed; Tessier, Peter M.
2014-01-01
A limitation of using monoclonal antibodies as therapeutic molecules is their propensity to associate with themselves and/or with other molecules via non-affinity (colloidal) interactions. This can lead to a variety of problems ranging from low solubility and high viscosity to off-target binding and fast antibody clearance. Measuring such colloidal interactions is challenging given that they are weak and potentially involve diverse target molecules. Nevertheless, assessing these weak interactions – especially during early antibody discovery and lead candidate optimization – is critical to preventing problems that can arise later in the development process. Here we review advances in developing and implementing sensitive methods for measuring antibody colloidal interactions as well as using these measurements for guiding antibody selection and engineering. These systematic efforts to minimize non-affinity interactions are expected to yield more effective and stable monoclonal antibodies for diverse therapeutic applications. PMID:25209466
A predictive machine learning approach for microstructure optimization and materials design
Liu, Ruoqian; Kumar, Abhishek; Chen, Zhengzhang; ...
2015-06-23
This paper addresses an important materials engineering question: How can one identify the complete space (or as much of it as possible) of microstructures that are theoretically predicted to yield the desired combination of properties demanded by a selected application? We present a problem involving design of magnetoelastic Fe-Ga alloy microstructure for enhanced elastic, plastic and magnetostrictive properties. While theoretical models for computing properties given the microstructure are known for this alloy, inversion of these relationships to obtain microstructures that lead to desired properties is challenging, primarily due to the high dimensionality of microstructure space, multi-objective design requirement and non-uniqueness of solutions. These challenges render traditional search-based optimization methods incompetent in terms of both searching efficiency and result optimality. In this paper, a route to address these challenges using a machine learning methodology is proposed. A systematic framework consisting of random data generation, feature selection and classification algorithms is developed. In conclusion, experiments with five design problems that involve identification of microstructures that satisfy both linear and nonlinear property constraints show that our framework outperforms traditional optimization methods with the average running time reduced by as much as 80% and with optimality that would not be achieved otherwise.
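A compact version of the generate, feature-select and classify workflow described above is sketched below with scikit-learn; synthetic features stand in for microstructure descriptors, and a synthetic binary label stands in for "meets the property targets". It illustrates the screening idea only and is not the authors' framework.

```python
# Sketch of the random-generation / feature-selection / classification workflow,
# using synthetic data in place of real microstructure descriptors and targets.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# 1. Random data generation: candidate microstructures with 30 descriptors each;
#    the binary label marks whether a candidate meets the property targets.
X, y = make_classification(n_samples=2000, n_features=30, n_informative=8,
                           random_state=0)

# 2-3. Feature selection followed by classification.
model = make_pipeline(SelectKBest(f_classif, k=10),
                      RandomForestClassifier(n_estimators=200, random_state=0))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))

# Screening: propose new random candidates and keep those predicted to satisfy
# the property constraints, instead of running a search-based optimizer.
candidates = np.random.default_rng(0).normal(size=(10000, 30))
keep = candidates[model.predict(candidates) == 1]
print("candidates predicted to meet the targets:", len(keep))
```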
Production and engineering methods for CARB: TEK (trade name) batteries in fork lift trucks
NASA Astrophysics Data System (ADS)
Schaefer, J. C.
1975-03-01
The purpose of this program is to develop the manufacturing technology of the Carb Tek molten salt Li/Cl battery to the prototype level. This purpose is being accomplished by actually constructing cells on a pilot line, optimizing process steps, establishing quality control procedures, and engineering appropriate changes. The majority of the cell work is performed in a controlled argon atmosphere. Results show that the carbon selected for the cell cathode can develop the required 5 Whr/cubic inch even when damaged by stress cracks. Anode contamination and fabrication problems have been reduced by a new alloying technique. Cell yields are dependent on weld quality.
Automatic Differentiation as a tool in engineering design
NASA Technical Reports Server (NTRS)
Barthelemy, Jean-Francois M.; Hall, Laura E.
1992-01-01
Automatic Differentiation (AD) is a tool that systematically implements the chain rule of differentiation to obtain the derivatives of functions calculated by computer programs. In this paper, it is assessed as a tool for engineering design. The paper discusses the forward and reverse modes of AD, their computing requirements, and approaches to implementing AD. It continues with the application of two different AD tools to two medium-size structural analysis problems to generate the sensitivity information typically required in an optimization or design situation. The paper concludes with the observation that AD is to be preferred to finite differencing in most cases, as long as sufficient computer storage is available.
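As a concrete reminder of what forward-mode AD computes, here is a minimal dual-number sketch in Python; it illustrates the chain-rule mechanics discussed above and is not one of the tools evaluated in the paper.

```python
# Minimal forward-mode AD via dual numbers: each value carries (value, derivative).
from dataclasses import dataclass
import math

@dataclass
class Dual:
    val: float   # function value
    dot: float   # derivative with respect to the chosen independent variable

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other, 0.0)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other, 0.0)
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)  # product rule
    __rmul__ = __mul__

def sin(x: Dual) -> Dual:
    return Dual(math.sin(x.val), math.cos(x.val) * x.dot)         # chain rule

def f(x):
    # Example response-like expression
    return 3.0 * x * x + sin(x)

x = Dual(2.0, 1.0)        # seed dx/dx = 1
y = f(x)
print(y.val, y.dot)       # f(2) and the exact derivative f'(2) = 12 + cos(2)
```

Unlike finite differencing, the derivative propagated this way is exact to machine precision and costs only a small constant factor more than the function evaluation itself.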
Improvement of ecological characteristics of the hydrogen diesel engine
NASA Astrophysics Data System (ADS)
Natriashvili, T.; Kavtaradze, R.; Glonti, M.
2018-02-01
The article considers the influence of the in-cylinder swirl intensity and the injector design on the ecological indices of the hydrogen diesel engine, questions that have received little investigation until now. The need to solve these problems arises when converting a serial diesel engine into a hydrogen diesel. The mathematical model consists of the three-dimensional non-stationary transport equations together with turbulence and combustion models. The numerical experiments were carried out using the program code FIRE. The optimal values of the working-process parameters that improve the effective and ecological indices of the hydrogen diesel are determined.
Design considerations and challenges for mechanical stretch bioreactors in tissue engineering.
Lei, Ying; Ferdous, Zannatul
2016-05-01
With the increase in average life expectancy and growing aging population, lack of functional grafts for replacement surgeries has become a severe problem. Engineered tissues are a promising alternative to this problem because they can mimic the physiological function of the native tissues and be cultured on demand. Cyclic stretch is important for developing many engineered tissues such as hearts, heart valves, muscles, and bones. Thus a variety of stretch bioreactors and corresponding scaffolds have been designed and tested to study the underlying mechanism of tissue formation and to optimize the mechanical conditions applied to the engineered tissues. In this review, we look at various designs of stretch bioreactors and common scaffolds and offer insights for future improvements in tissue engineering applications. First, we summarize the requirements and common configuration of stretch bioreactors. Next, we present the features of different actuating and motion transforming systems and their applications. Since most bioreactors must measure detailed distributions of loads and deformations on engineered tissues, techniques with high accuracy, precision, and frequency have been developed. We also cover the key points in designing culture chambers, nutrition exchanging systems, and regimens used for specific tissues. Since scaffolds are essential for providing biophysical microenvironments for residing cells, we discuss materials and technologies used in fabricating scaffolds to mimic anisotropic native tissues, including decellularized tissues, hydrogels, biocompatible polymers, electrospinning, and 3D bioprinting techniques. Finally, we present the potential future directions for improving stretch bioreactors and scaffolds. © 2016 American Institute of Chemical Engineers Biotechnol. Prog., 32:543-553, 2016. © 2016 American Institute of Chemical Engineers.
History matching through dynamic decision-making
Maschio, Célio; Santos, Antonio Alberto; Schiozer, Denis; Rocha, Anderson
2017-01-01
History matching is the process of modifying the uncertain attributes of a reservoir model to reproduce the real reservoir performance. It is a classical reservoir engineering problem and plays an important role in reservoir management since the resulting models are used to support decisions in other tasks such as economic analysis and production strategy. This work introduces a dynamic decision-making optimization framework for history matching problems in which new models are generated based on, and guided by, the dynamic analysis of the data of available solutions. The optimization framework follows a ‘learning-from-data’ approach, and includes two optimizer components that use machine learning techniques, such as unsupervised learning and statistical analysis, to uncover patterns of input attributes that lead to good output responses. These patterns are used to support the decision-making process while generating new, and better, history matched solutions. The proposed framework is applied to a benchmark model (UNISIM-I-H) based on the Namorado field in Brazil. Results show the potential of the dynamic decision-making optimization framework to improve the quality of history matching solutions using a substantially smaller number of simulations than a previous work on the same benchmark. PMID:28582413
NASA Astrophysics Data System (ADS)
Wang, Fengwen
2018-05-01
This paper presents a systematic approach for designing 3D auxetic lattice materials, which exhibit constant negative Poisson's ratios over large strain intervals. A unit cell model mimicking tensile tests is established and based on the proposed model, the secant Poisson's ratio is defined as the negative ratio between the lateral and the longitudinal engineering strains. The optimization problem for designing a material unit cell with a target Poisson's ratio is formulated to minimize the average lateral engineering stresses under the prescribed deformations. Numerical results demonstrate that 3D auxetic lattice materials with constant Poisson's ratios can be achieved by the proposed optimization formulation and that two sets of material architectures are obtained by imposing different symmetry on the unit cell. Moreover, inspired by the topology-optimized material architecture, a subsequent shape optimization is proposed by parametrizing material architectures using super-ellipsoids. By designing two geometrical parameters, simple optimized material microstructures with different target Poisson's ratios are obtained. By interpolating these two parameters as polynomial functions of Poisson's ratios, material architectures for any Poisson's ratio in the interval of ν ∈ [ - 0.78 , 0.00 ] are explicitly presented. Numerical evaluations show that interpolated auxetic lattice materials exhibit constant Poisson's ratios in the target strain interval of [0.00, 0.20] and that 3D auxetic lattice material architectures with programmable Poisson's ratio are achievable.
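For reference, the secant Poisson's ratio defined above can be written explicitly in terms of engineering strains; the symbols are generic, and the target value and strain interval simply restate the quantities discussed in the abstract.

```latex
\nu_{\mathrm{secant}}(\varepsilon_{\mathrm{long}})
  \;=\; -\,\frac{\varepsilon_{\mathrm{lat}}}{\varepsilon_{\mathrm{long}}},
\qquad
\varepsilon \;=\; \frac{\Delta L}{L_{0}},
\qquad
\nu_{\mathrm{secant}}(\varepsilon_{\mathrm{long}}) \approx \nu^{*}
\;\;\text{for all }\varepsilon_{\mathrm{long}} \in [0.00,\,0.20].
```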
Optimized tuner selection for engine performance estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L. (Inventor); Garg, Sanjay (Inventor)
2013-01-01
A methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented. This technique specifically addresses the underdetermined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine which seeks to minimize the theoretical mean-squared estimation error. Theoretical Kalman filter estimation error bias and variance values are derived at steady-state operating conditions, and the tuner selection routine is applied to minimize these values. The new methodology yields an improvement in on-line engine performance estimation accuracy.
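To make the underdetermined-estimation idea concrete, the toy sketch below uses an exhaustive search to pick the subset of health parameters whose steady-state Kalman error (trace of the Riccati solution) is smallest; the sensitivity matrix and noise covariances are invented, and the actual methodology is more general than simple subset selection, so treat this only as an illustration of the concept.

```python
# Hedged sketch: choose m "tuner" parameters out of n candidates so that a Kalman
# filter driven by m sensors has small steady-state estimation error. Invented numbers.
import itertools
import numpy as np
from scipy.linalg import solve_discrete_are

def steady_state_error_trace(C, Q, R):
    A = np.eye(C.shape[1])                    # random-walk model for the tuner states
    P = solve_discrete_are(A.T, C.T, Q, R)    # steady-state (a priori) error covariance
    return np.trace(P)

rng = np.random.default_rng(0)
m, n = 4, 8                                   # m sensors, n candidate health parameters
H = rng.normal(size=(m, n))                   # assumed sensor sensitivity matrix
Q_full = 0.01 * np.eye(n)                     # process-noise covariance (hypothetical)
R = 0.1 * np.eye(m)                           # measurement-noise covariance (hypothetical)

best = None
for subset in itertools.combinations(range(n), m):   # tuner vector has dimension m
    idx = list(subset)
    score = steady_state_error_trace(H[:, idx], Q_full[np.ix_(idx, idx)], R)
    if best is None or score < best[0]:
        best = (score, idx)

print("selected tuner subset:", best[1], " trace(P):", round(best[0], 4))
```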
Cankorur-Cetinkaya, Ayca; Dias, Joao M L; Kludas, Jana; Slater, Nigel K H; Rousu, Juho; Oliver, Stephen G; Dikicioglu, Duygu
2017-06-01
Multiple interacting factors affect the performance of engineered biological systems in synthetic biology projects. The complexity of these biological systems means that experimental design should often be treated as a multiparametric optimization problem. However, the available methodologies are either impractical, due to a combinatorial explosion in the number of experiments to be performed, or are inaccessible to most experimentalists due to the lack of publicly available, user-friendly software. Although evolutionary algorithms may be employed as alternative approaches to optimize experimental design, the lack of simple-to-use software again restricts their use to specialist practitioners. In addition, the lack of subsidiary approaches to further investigate critical factors and their interactions prevents the full analysis and exploitation of the biotechnological system. We have addressed these problems and, here, provide a simple-to-use and freely available graphical user interface to empower a broad range of experimental biologists to employ complex evolutionary algorithms to optimize their experimental designs. Our approach exploits a Genetic Algorithm to discover the subspace containing the optimal combination of parameters, and Symbolic Regression to construct a model to evaluate the sensitivity of the experiment to each parameter under investigation. We demonstrate the utility of this method using an example in which the culture conditions for the microbial production of a bioactive human protein are optimized. CamOptimus is available through: (https://doi.org/10.17863/CAM.10257).
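The Genetic Algorithm component described above can be pictured with a very small sketch; the fitness function, encoding and parameter ranges below are hypothetical placeholders and do not reproduce the CamOptimus implementation.

```python
# Toy genetic algorithm over 4 culture-condition parameters (all names hypothetical).
import numpy as np

rng = np.random.default_rng(0)
LOW  = np.array([20.0, 5.0, 0.1, 0.0])   # e.g. temperature, pH, feed rate, inducer dose
HIGH = np.array([40.0, 8.0, 2.0, 1.0])

def fitness(x):
    # Stand-in for the measured protein titre of one experiment.
    return -np.sum((x - (LOW + HIGH) / 2) ** 2)

def evolve(pop_size=20, generations=30, mut_sigma=0.05):
    pop = rng.uniform(LOW, HIGH, size=(pop_size, 4))
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]       # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            child = np.where(rng.random(4) < 0.5, a, b)          # uniform crossover
            child += rng.normal(0, mut_sigma, 4) * (HIGH - LOW)  # Gaussian mutation
            children.append(np.clip(child, LOW, HIGH))
        pop = np.vstack([parents, children])
    return pop[np.argmax([fitness(ind) for ind in pop])]

print("best conditions found:", evolve())
```

In practice each "fitness evaluation" is a wet-lab experiment rather than a function call, which is why keeping the population and generation counts small, as the tool described above does, matters so much.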
NASA Astrophysics Data System (ADS)
Buyuk, Ersin; Karaman, Abdullah
2017-04-01
We estimated transmissivity and storage coefficient values from single-well water-level measurements taken ahead of the mining face by using the particle swarm optimization (PSO) technique. The water-level response to the advancing mining face is described by a semi-analytical function that is not suitable for conventional inversion schemes because its partial derivatives are difficult to calculate. Moreover, the logarithmic behaviour of the model makes it difficult to find an initial model that leads to stable convergence. PSO appears to obtain a reliable solution that produces a reasonable fit between the water-level data and the model response. Optimization methods are used to find optimum conditions, that is, the minimum or maximum of a given objective function with respect to some criteria. Unlike PSO, traditional non-linear optimization methods have been applied to many hydrogeologic and geophysical engineering problems; these methods suffer from difficulties such as dependence on the initial model, evaluation of the partial derivatives required to linearize the model, and trapping at local optima. Particle swarm optimization has recently become a prominent global optimization method; it is inspired by the social behaviour of bird swarms and appears to be a reliable and powerful algorithm for complex engineering applications. Because PSO does not depend on an initial model and is a derivative-free stochastic process, it is capable of searching all possible solutions in the model space, around both local and global optimum points.
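A bare-bones PSO loop of the kind used above looks as follows; the two-parameter misfit function is only a placeholder for the semi-analytical water-level model, which is not reproduced here.

```python
# Minimal particle swarm optimizer for a two-parameter (T, S) misfit (placeholder model).
import numpy as np

rng = np.random.default_rng(42)

def misfit(params):
    # Placeholder objective: squared error of a hypothetical model against data.
    T, S = params
    return (np.log10(T) + 3.0) ** 2 + (np.log10(S) + 4.0) ** 2

low, high = np.array([1e-6, 1e-7]), np.array([1e-1, 1e-2])
n, iters, w, c1, c2 = 30, 200, 0.7, 1.5, 1.5

x = rng.uniform(low, high, size=(n, 2))        # particle positions (T, S)
v = np.zeros_like(x)                           # particle velocities
pbest, pbest_val = x.copy(), np.array([misfit(p) for p in x])
g = pbest[np.argmin(pbest_val)]                # global best

for _ in range(iters):
    r1, r2 = rng.random((n, 2)), rng.random((n, 2))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
    x = np.clip(x + v, low, high)
    vals = np.array([misfit(p) for p in x])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = x[improved], vals[improved]
    g = pbest[np.argmin(pbest_val)]

print("estimated T, S:", g)
```

Note that nothing in the loop requires derivatives of the misfit or a carefully chosen starting model, which is exactly the property exploited above.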
1984-07-01
engineers use technology to design optimal information systems that will provide clear, correct data to help managers solve current problems (Keen...is in a changing environment (Hedberg and Jonsson, 1978; Mowshowitz, 1976; Hedberg, 1981; Hedberg, Nystrom, and Starbuck , 1976). Technology based...organizations learn and unlearn." In P. C. Nystrom, and W. H. Starbuck (eds.), Handbook of Organizational Design, Vol. 1. Amsterdam: Elsevier Scientific
[Simulation and Design of Infant Incubator Assembly Line].
Ke, Huqi; Hu, Xiaoyong; Ge, Xia; Hu, Yanhai; Chen, Zaihong
2015-11-01
Based on the current assembly situation of the infant incubator at company A, basic industrial-engineering methods such as time study were used to analyze the actual product assembly process, and an assembly line was designed. The assembly line was modeled and simulated with the software Flexsim. Problems in the assembly line were identified by comparing simulation results with actual data, and the line was then optimized to obtain a high-efficiency assembly line.
2014-04-01
surrogate model generation is difficult for high -dimensional problems, due to the curse of dimensionality. Variable screening methods have been...a variable screening model was developed for the quasi-molecular treatment of ion-atom collision [16]. In engineering, a confidence interval of...for high -level radioactive waste [18]. Moreover, the design sensitivity method can be extended to the variable screening method because vital
Reliability based design optimization: Formulations and methodologies
NASA Astrophysics Data System (ADS)
Agarwal, Harish
Modern products ranging from simple components to complex systems should be designed to be optimal and reliable. The challenge of modern engineering is to ensure that manufacturing costs are reduced and design cycle times are minimized while achieving requirements for performance and reliability. If the market for the product is competitive, improved quality and reliability can generate very strong competitive advantages. Simulation based design plays an important role in designing almost any kind of automotive, aerospace, and consumer products under these competitive conditions. Single discipline simulations used for analysis are being coupled together to create complex coupled simulation tools. This investigation focuses on the development of efficient and robust methodologies for reliability based design optimization in a simulation based design environment. Original contributions of this research are the development of a novel efficient and robust unilevel methodology for reliability based design optimization, the development of an innovative decoupled reliability based design optimization methodology, the application of homotopy techniques in unilevel reliability based design optimization methodology, and the development of a new framework for reliability based design optimization under epistemic uncertainty. The unilevel methodology for reliability based design optimization is shown to be mathematically equivalent to the traditional nested formulation. Numerical test problems show that the unilevel methodology can reduce computational cost by at least 50% as compared to the nested approach. The decoupled reliability based design optimization methodology is an approximate technique to obtain consistent reliable designs at lesser computational expense. Test problems show that the methodology is computationally efficient compared to the nested approach. A framework for performing reliability based design optimization under epistemic uncertainty is also developed. A trust region managed sequential approximate optimization methodology is employed for this purpose. Results from numerical test studies indicate that the methodology can be used for performing design optimization under severe uncertainty.
a Fully Automated Pipeline for Classification Tasks with AN Application to Remote Sensing
NASA Astrophysics Data System (ADS)
Suzuki, K.; Claesen, M.; Takeda, H.; De Moor, B.
2016-06-01
Deep learning is currently in the spotlight owing to its victories at major competitions, which has undeservedly pushed 'shallow' machine learning methods (relatively simple, handy algorithms commonly used by industrial engineers) into the background, despite advantages such as the small amount of time and data they require for training. From a practical point of view, we used shallow learning algorithms to construct a learning pipeline that operators can use without special knowledge, an expensive computing environment, or a large amount of labelled data. The proposed pipeline automates the whole classification process, namely feature selection, feature weighting, and selection of the most suitable classifier with optimized hyperparameters. The configuration employs particle swarm optimization, a well-known metaheuristic algorithm chosen for its generally fast and fine-grained optimization, which enables us not only to optimize (hyper)parameters but also to determine the appropriate features and classifier for the problem, choices that have conventionally been made a priori from domain knowledge or handled with naive algorithms such as grid search. Through experiments with the MNIST and CIFAR-10 datasets, common computer-vision datasets for character recognition and object recognition respectively, our automated learning approach provides high performance considering its simple, non-specialized setting, small amount of training data, and practical learning time. Moreover, compared with deep learning, the performance remains robust, almost without modification, on a remote sensing object recognition problem, which in turn indicates a high possibility that our approach contributes to general classification problems.
Pareto-front shape in multiobservable quantum control
NASA Astrophysics Data System (ADS)
Sun, Qiuyang; Wu, Re-Bing; Rabitz, Herschel
2017-03-01
Many scenarios in the sciences and engineering require simultaneous optimization of multiple objective functions, which are usually conflicting or competing. In such problems the Pareto front, where none of the individual objectives can be further improved without degrading some others, shows the tradeoff relations between the competing objectives. This paper analyzes the Pareto-front shape for the problem of quantum multiobservable control, i.e., optimizing the expectation values of multiple observables in the same quantum system. Analytic and numerical results demonstrate that with two commuting observables the Pareto front is a convex polygon consisting of flat segments only, while with noncommuting observables the Pareto front includes convexly curved segments. We also assess the capability of a weighted-sum method to continuously capture the points along the Pareto front. Illustrative examples with realistic physical conditions are presented, including NMR control experiments on a 1H-13C two-spin system with two commuting or noncommuting observables.
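The weighted-sum method assessed above scalarizes the multiobservable problem in the standard way; in the generic notation below (not the paper's), the control field $\varepsilon(t)$ is chosen to maximize a convex combination of expectation values:

```latex
J_{w}[\varepsilon] \;=\; \sum_{i=1}^{N} w_{i}\,\langle \hat{O}_{i} \rangle_{\varepsilon},
\qquad w_{i} \ge 0,\qquad \sum_{i=1}^{N} w_{i} = 1.
```

Sweeping the weights traces out Pareto-optimal points, but, as a standard property of weighted-sum scalarization, only points on convex portions of the front can be reached this way, which is why the curvature of the front matters for the capture question studied above.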
NASA Technical Reports Server (NTRS)
Ibrahim, A. H.; Tiwari, S. N.; Smith, R. E.
1997-01-01
Variational methods (VM) sensitivity analysis is employed to derive the costate (adjoint) equations, the transversality conditions, and the functional sensitivity derivatives. In the derivation of the sensitivity equations, the variational methods use the generalized calculus of variations, in which the variable boundary is treated as the design function. The converged solution of the state equations, together with the converged solution of the costate equations, is integrated along the domain boundary to uniquely determine the functional sensitivity derivatives with respect to the design function. The application of the variational methods to aerodynamic shape optimization problems is demonstrated for internal flow problems in the supersonic Mach number range. The study shows that, while maintaining the accuracy of the functional sensitivity derivatives within a range reasonable for engineering prediction purposes, the variational methods offer a substantial gain in computational efficiency, i.e., computer time and memory, when compared with finite-difference sensitivity analysis.
McNerney, Monica P.; Watstein, Daniel M.; Styczynski, Mark P.
2015-01-01
Metabolic engineering is generally focused on static optimization of cells to maximize production of a desired product, though recently dynamic metabolic engineering has explored how metabolic programs can be varied over time to improve titer. However, these are not the only types of applications where metabolic engineering could make a significant impact. Here, we discuss a new conceptual framework, termed “precision metabolic engineering,” involving the design and engineering of systems that make different products in response to different signals. Rather than focusing on maximizing titer, these types of applications typically have three hallmarks: sensing signals that determine the desired metabolic target, completely directing metabolic flux in response to those signals, and producing sharp responses at specific signal thresholds. In this review, we will first discuss and provide examples of precision metabolic engineering. We will then discuss each of these hallmarks and identify which existing metabolic engineering methods can be applied to accomplish those tasks, as well as some of their shortcomings. Ultimately, precise control of metabolic systems has the potential to enable a host of new metabolic engineering and synthetic biology applications for any problem where flexibility of response to an external signal could be useful. PMID:26189665
Software Tools to Support the Assessment of System Health
NASA Technical Reports Server (NTRS)
Melcher, Kevin J.
2013-01-01
This presentation provides an overview of three software tools that were developed by the NASA Glenn Research Center to support the assessment of system health: the Propulsion Diagnostic Method Evaluation Strategy (ProDIMES), the Systematic Sensor Selection Strategy (S4), and the Extended Testability Analysis (ETA) tool. Originally developed to support specific NASA projects in aeronautics and space, these software tools are currently available to U.S. citizens through the NASA Glenn Software Catalog. The ProDIMES software tool was developed to support a uniform comparison of propulsion gas path diagnostic methods. Methods published in the open literature are typically applied to dissimilar platforms with different levels of complexity. They often address different diagnostic problems and use inconsistent metrics for evaluating performance. As a result, it is difficult to perform a one-to-one comparison of the various diagnostic methods. ProDIMES solves this problem by serving as a theme problem to aid in propulsion gas path diagnostic technology development and evaluation. The overall goal is to provide a tool that will serve as an industry standard and will truly facilitate the development and evaluation of significant Engine Health Management (EHM) capabilities. ProDIMES has been developed under a collaborative project of The Technical Cooperation Program (TTCP) based on feedback provided by individuals within the aircraft engine health management community. The S4 software tool provides a framework that supports the optimal selection of sensors for health management assessments. S4 is structured to accommodate user-defined applications, diagnostic systems, search techniques, and system requirements/constraints. It identifies one or more sensor suites that maximize diagnostic performance while meeting other user-defined system requirements. S4 provides a systematic approach for evaluating combinations of sensors to determine the set or sets of sensors that optimally meet the performance goals and the constraints. It identifies optimal sensor suite solutions by utilizing a merit (i.e., cost) function with one of several available optimization approaches. As part of its analysis, S4 can expose fault conditions that are difficult to diagnose due to an incomplete diagnostic philosophy and/or a lack of sensors. S4 was originally developed and applied to liquid rocket engines. It was subsequently used to study the optimized selection of sensors for a simulation-based aircraft engine diagnostic system. The ETA Tool is a software-based analysis tool that augments the testability analysis and reporting capabilities of a commercial-off-the-shelf (COTS) package. An initial diagnostic assessment is performed by the COTS software using a user-developed, qualitative, directed-graph model of the system being analyzed. The ETA Tool accesses system design information captured within the model and the associated testability analysis output to create a series of six reports for various system engineering needs. These reports are highlighted in the presentation. The ETA Tool was developed by NASA to support the verification of fault management requirements early in the Launch Vehicle process. Due to their early development during the design process, the TEAMS-based diagnostic model and the ETA Tool were able to positively influence the system design by highlighting gaps in failure detection, fault isolation, and failure recovery.
Exact parallel algorithms for some members of the traveling salesman problem family
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pekny, J.F.
1989-01-01
The traveling salesman problem and its many generalizations comprise one of the best known combinatorial optimization problem families. Most members of the family are NP-complete problems so that exact algorithms require an unpredictable and sometimes large computational effort. Parallel computers offer hope for providing the power required to meet these demands. A major barrier to applying parallel computers is the lack of parallel algorithms. The contributions presented in this thesis center around new exact parallel algorithms for the asymmetric traveling salesman problem (ATSP), prize collecting traveling salesman problem (PCTSP), and resource constrained traveling salesman problem (RCTSP). The RCTSP is a particularly difficult member of the family since finding a feasible solution is an NP-complete problem. An exact sequential algorithm is also presented for the directed hamiltonian cycle problem (DHCP). The DHCP algorithm is superior to current heuristic approaches and represents the first exact method applicable to large graphs. Computational results presented for each of the algorithms demonstrate the effectiveness of combining efficient algorithms with parallel computing methods. Performance statistics are reported for randomly generated ATSPs with 7,500 cities, PCTSPs with 200 cities, RCTSPs with 200 cities, DHCPs with 3,500 vertices, and assignment problems of size 10,000. Sequential results were collected on a Sun 4/260 engineering workstation, while parallel results were collected using a 14 and 100 processor BBN Butterfly Plus computer. The computational results represent the largest instances ever solved to optimality on any type of computer.
Synthetic Vortex Generator Jets Used to Control Separation on Low-Pressure Turbine Airfoils
NASA Technical Reports Server (NTRS)
Ashpis, David E.; Volino, Ralph J.
2005-01-01
Low-pressure turbine (LPT) airfoils are subject to increasingly stronger pressure gradients as designers impose higher loading in an effort to improve efficiency and lower cost by reducing the number of airfoils in an engine. When the adverse pressure gradient on the suction side of these airfoils becomes strong enough, the boundary layer will separate. Separation bubbles, particularly those that fail to reattach, can result in a significant loss of lift and a subsequent degradation of engine efficiency. The problem is particularly relevant in aircraft engines. Airfoils optimized to produce maximum power under takeoff conditions may still experience boundary layer separation at cruise conditions because of the thinner air and lower Reynolds numbers at altitude. Component efficiency can drop significantly between takeoff and cruise conditions. The decrease is about 2 percent in large commercial transport engines, and it could be as large as 7 percent in smaller engines operating at higher altitudes. Therefore, it is very beneficial to eliminate, or at least reduce, the separation bubble.
System Engineering of Autonomous Space Vehicles
NASA Technical Reports Server (NTRS)
Watson, Michael D.; Johnson, Stephen B.; Trevino, Luis
2014-01-01
Human exploration of the solar system requires fully autonomous systems when travelling more than 5 light minutes from Earth. This autonomy is necessary to manage a large, complex spacecraft with limited crew members and skills available. The communication latency requires the vehicle to deal with events with only limited crew interaction in most cases. The engineering of these systems requires an extensive knowledge of the spacecraft systems, information theory, and autonomous algorithm characteristics. The characteristics of the spacecraft systems must be matched with the autonomous algorithm characteristics to reliably monitor and control the system. This presents a large system engineering problem. Recent work on product-focused, elegant system engineering will be applied to this application, looking at the full autonomy stack, the matching of autonomous systems to spacecraft systems, and the integration of different types of algorithms. Each of these areas will be outlined and a general approach defined for system engineering to provide the optimal solution to the given application context.
Relevance of graduate programs - university viewpoint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guerrero, E.T.
1978-01-01
Graduate programs in engineering evolved in the early 1900's and saw rapid expansion during and after World War II, particularly after Sputnik I. The Master's Degree is an extension of Bachelor's work, tending toward specialization and an introduction into inquiry and research. The Doctoral Degree represents considerably more and signifies mastery of a field of learning and training for independent inquiry. The upper one-third of Bachelor's graduates in petroleum engineering should obtain a Master's Degree and the upper 10 to 20% of these should study for the Ph.D. Drilling and production operations involve a rapidly changing and ever more complex technology. Recent innovations of computerized drilling optimization, numerical simulation of production and reservoir engineering problems, and introduction of a host of enhanced oil recovery methods require more petroleum engineers with Master's Degrees, as well as some with Ph.D Degrees. Engineers can receive valuable education through company continuing education programs; however, advanced education is best obtained in an accredited university.
NASA Technical Reports Server (NTRS)
Hawkins, J. E.
1980-01-01
A 0.15 scale model of a proposed conformal variable-ramp inlet for the Multirole Fighter was tested from Mach 0.8 to 2.2 at a wide range of angles of attack and sideslip. Inlet ramp angle was varied to optimize ramp angle as a function of engine airflow, Mach number, angle of attack, and angle of sideslip. Several inlet configuration options were investigated to study their effects on inlet operation and to establish the final flight configuration. These variations were cowl sidewall cutback, cowl lip bluntness, boundary layer bleed, and first-ramp leading edge shape. Diagnostic and engine face instrumentation were used to evaluate inlet operation at various inlet stations and at the inlet/engine interface. Pressure recovery and stability of the inlet were satisfactory for the proposed application. On the basis of an engine stability audit of the worst-case instantaneous distortion patterns, no inlet/engine compatibility problems are expected for normal operations.
Galvanic Liquid Applied Coating System For Protection of Embedded Steel Surfaces from Corrosion
NASA Technical Reports Server (NTRS)
Curran, Joseph; Curran, Jerome; Voska, N. (Technical Monitor)
2002-01-01
Corrosion of reinforcing steel in concrete is an insidious problem facing Kennedy Space Center (KSC), other Government Agencies, and the general public. These problems include KSC launch support structures, highway bridge infrastructure, and building structures such as condominium balconies. Due to these problems, the development of a Galvanic Liquid Applied Coating System would be a breakthrough technology having great commercial value for the following industries: Transportation, Infrastructure, Marine Infrastructure, Civil Engineering, and the Construction Industry. This sacrificial coating system consists of a paint matrix that may include metallic components, conducting agents, and moisture attractors. Similar systems have been used in the past with varying degrees of success. These systems have no proven history of effectiveness over the long term. In addition, these types of systems have had limited success overcoming the initial resistance between the concrete/coating interface. The coating developed at KSC incorporates methods proven to overcome the barriers that previous systems could not achieve. Successful development and continued optimization of this breakthrough system would produce great interest in NASA/KSC for corrosion engineering technology and problem solutions. Commercial patents on this technology would enhance KSC's ability to attract industry partners for similar corrosion control applications.
A Gradient Taguchi Method for Engineering Optimization
NASA Astrophysics Data System (ADS)
Hwang, Shun-Fa; Wu, Jen-Chih; He, Rong-Song
2017-10-01
To balance the robustness and the convergence speed of optimization, a novel hybrid algorithm consisting of the Taguchi method and the steepest descent method is proposed in this work. The Taguchi method, using orthogonal arrays, can quickly find the optimum combination of the levels of various factors, even when the number of levels and/or factors is quite large. This algorithm is applied to the inverse determination of the elastic constants of three composite plates by combining a numerical method with vibration testing. For these problems, the proposed algorithm finds better elastic constants at lower computational cost. Therefore, the proposed algorithm offers good robustness and fast convergence speed compared with some hybrid genetic algorithms.
NASA Astrophysics Data System (ADS)
Ivanov, A.; Chikishev, E.
2017-01-01
The paper addresses the problem of environmental pollution by emissions of hazardous substances in the exhaust gases of internal combustion engines. It is found that the application of water-fuel emulsions yields the best results in diesels, where producing a high-quality mixture is the main problem in organizing the working process. During pilot studies, a patented water-fuel emulsion composition was developed. The developed composition remains stable for 14-18 months, depending on the mass content of its components, whereas the stability of comparable emulsions is 8-12 months. The operating mode of the pilot unit is described, and the methodology and results of a pilot study of diesel-engine operation on a water-fuel emulsion are presented. Shortening the droplet combustion time of the water-fuel emulsion improves combustion efficiency and reduces carbon (varnish) deposition on working surfaces. Partial dismantling of the engine after 60 engine-hours of operation showed visually observable removal of carbon deposits in the cylinder-piston group. It is found that, for steady operation of the diesel and reduced emission of hazardous substances, a water-fuel emulsion with a water concentration of 18-20% is optimal.
[Strategies to choose scaffold materials for tissue engineering].
Gao, Qingdong; Zhu, Xulong; Xiang, Junxi; Lü, Yi; Li, Jianhui
2016-02-01
Current therapies for organ failure or large tissue defects are often not ideal, and transplantation is the only effective route to long-term survival. However, donor shortage, immune rejection and other problems make it hard to meet the huge demand from patients. Tissue engineering is a potential alternative, and choosing a suitable scaffold material is an essential part of it. According to their sources, tissue engineering scaffold materials can be divided into three types: natural materials and their modifications, artificial materials, and composites. The purpose of a tissue engineering scaffold is to repair damaged tissues or organs and, ideally, to restore their function and structure. A scaffold should therefore be as close as possible to the original tissue or organ in both function and structure. We call this an "organic scaffold", and this strategy might provide a nearly perfect substitute for the tissues or organs of concern. Optimized combination of different kinds of scaffold materials can approximate the biomimetic structure and function of the target tissue or organ. Surface modification of the scaffold material, optimization of the preparation procedure, and the addition of cytokine sustained-release microspheres should be considered together. This strategy is expected to open new perspectives for tissue engineering. A multidisciplinary approach including materials science, molecular biology, and engineering may identify the most suitable tissue engineering scaffold. Using this strategy of combining the strengths of each kind of scaffold material to prepare a multifunctional biomimetic scaffold may be a good way to choose tissue engineering scaffold materials. Our research group has differentiated bone marrow mesenchymal stem cells into bile canaliculi-like cells and prepared a poly(L-lactic acid)/poly(ε-caprolactone) biliary stent. The interior of the scaffold provides long-term release of cytokines from sustained-release nano-microspheres containing growth factors, while the inner surface of the stent is coated with a glue/collagen matrix layer containing bFGF and EGF to supply early release of these two cytokines. Finally, the induced cells were combined with the poly(L-lactic acid)/poly(ε-caprolactone) biliary stent to prepare a tissue-engineered bile duct. This review surveys the existing tissue engineering scaffold materials, briefly introduces the factors that affect their characteristics, such as the preparation procedure and surface modification, and explores strategies for choosing the desired tissue engineering scaffold materials.
Enhanced method of fast re-routing with load balancing in software-defined networks
NASA Astrophysics Data System (ADS)
Lemeshko, Oleksandr; Yeremenko, Oleksandra
2017-11-01
A two-level method of fast re-routing with load balancing in a software-defined network (SDN) is proposed. The novelty of the method consists, firstly, in the introduction of a two-level hierarchy for calculating the routing variables responsible for the formation of the primary and backup paths, and secondly, in ensuring a balanced load on the communication links of the network, which meets the requirements of the traffic engineering concept. The method provides implementation of link, node, path, and bandwidth protection schemes for fast re-routing in SDN. Separating the calculation of the primary (lower level) and backup (upper level) routes across two hierarchical levels, in accordance with the interaction prediction principle, makes it possible to replace the initial large, nonlinear optimization problem with the iterative solution of linear optimization problems of half the dimension. The analysis of the proposed method confirmed its efficiency and effectiveness in terms of obtaining optimal solutions for ensuring balanced load of communication links and implementing the required network element protection schemes for fast re-routing in SDN.
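A minimal example of the kind of linear load-balancing subproblem mentioned above (split one demand over a primary and a backup path so that the worst link utilization is minimized) can be written with scipy's linprog; the topology, demand and capacities are invented for illustration, and this toy formulation is not the authors' two-level model.

```python
# Toy traffic-engineering LP: split demand d over two disjoint paths to minimize
# the maximum link utilization u (variables: x1, x2 = path fractions, u).
from scipy.optimize import linprog

d = 80.0                 # demand, Mbit/s (hypothetical)
c1, c2 = 100.0, 60.0     # capacities of the bottleneck links of paths 1 and 2

# minimize u  ->  objective coefficients for [x1, x2, u]
c = [0.0, 0.0, 1.0]
# d*x1/c1 - u <= 0 ;  d*x2/c2 - u <= 0
A_ub = [[d / c1, 0.0, -1.0],
        [0.0, d / c2, -1.0]]
b_ub = [0.0, 0.0]
# x1 + x2 = 1 (all traffic is routed)
A_eq = [[1.0, 1.0, 0.0]]
b_eq = [1.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, 1), (0, 1), (0, None)])
x1, x2, u = res.x
print(f"path split: {x1:.2f}/{x2:.2f}, max link utilization: {u:.2f}")
```

Because every constraint is linear, problems of this form scale to many demands and links and can be re-solved quickly when a failure triggers re-routing.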
NASA Astrophysics Data System (ADS)
Umam, M. I. H.; Santosa, B.
2018-04-01
Combinatorial optimization is frequently used to solve problems in science, engineering, and commercial applications. One combinatorial problem in the field of transportation is finding the shortest travel route from an initial point of departure to a point of destination while minimizing travel cost and travel time. When the distance from one (initial) node to another (destination) node equals the distance of the return trip, the problem is known as the Traveling Salesman Problem (TSP); otherwise it is called the Asymmetric Traveling Salesman Problem (ATSP). One of the most recent optimization techniques is Symbiotic Organisms Search (SOS). This paper discusses how to hybridize the SOS algorithm with variable neighborhood search (SOS-VNS) so that it can be applied to the ATSP. The proposed mechanism adds variable neighborhood search as a local search to generate a better initial solution, and then modifies the parasitism phase by adapting a mutation mechanism. After modification, the performance of the SOS-VNS algorithm is evaluated on several data sets and the results are compared with the best known solutions and with other algorithms such as PSO and the original SOS. The SOS-VNS algorithm shows better results in terms of convergence, divergence and computing time.
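The local-search ingredient can be illustrated in a few lines of Python; the two-city swap neighborhood and random instance below are illustrative only and do not reproduce the SOS-VNS operators of the paper.

```python
# Simple local search on an asymmetric TSP instance using a two-city swap neighborhood.
import numpy as np

rng = np.random.default_rng(7)
n = 12
D = rng.uniform(1.0, 10.0, size=(n, n))      # asymmetric: D[i, j] != D[j, i] in general
np.fill_diagonal(D, 0.0)

def tour_cost(tour):
    return sum(D[tour[i], tour[(i + 1) % n]] for i in range(n))

def swap_local_search(tour):
    tour = list(tour)
    improved = True
    while improved:
        improved = False
        best = tour_cost(tour)
        for i in range(n - 1):
            for j in range(i + 1, n):
                cand = tour.copy()
                cand[i], cand[j] = cand[j], cand[i]      # swap two cities
                c = tour_cost(cand)
                if c < best - 1e-12:
                    tour, best, improved = cand, c, True
    return tour, best

start = list(rng.permutation(n))
tour, cost = swap_local_search(start)
print("local-optimum cost:", round(cost, 2))
```

A variable neighborhood search simply cycles through several such neighborhoods (swap, insertion, segment relocation, and so on) whenever one of them stops improving, which helps the population-based outer loop escape local optima.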
Tidal Turbine Array Optimization Based on the Discrete Particle Swarm Algorithm
NASA Astrophysics Data System (ADS)
Wu, Guo-wei; Wu, He; Wang, Xiao-yong; Zhou, Qing-wei; Liu, Xiao-man
2018-06-01
In consideration of the resources wasted by an unreasonable layout of tidal current turbines, which influences the ratio of cost to power output, the particle swarm optimization algorithm is introduced and improved in this paper. In order to solve the problem of optimally arraying tidal turbines, a discrete particle swarm optimization (DPSO) algorithm is developed by re-defining the updating strategies for particle velocity and position. This paper analyzes the micrositing optimization problem of tidal current turbines by adjusting each turbine's position, where the maximum total electric power is obtained at the maximum speeds of the flood and ebb tides. Firstly, the best number of installed turbines is determined by maximizing the output energy in the given tidal farm using the Farm/Flux and empirical methods. Secondly, considering the wake effect, the reasonable distance between turbines, and the factors influencing tidal velocities in the tidal farm, the Jensen wake model and an elliptic distribution model are selected to calculate the turbines' total generating capacity at the maximum speeds of the flood and ebb tides. Finally, the total generating capacity, taken as the objective function, is calculated in the final simulation, so that the DPSO can guide the individuals to the feasible region and the optimal positions. The results show that the optimization algorithm, which yields 6.19% more resource output than the empirical method, can be regarded as a good tool for the engineering design of tidal energy demonstrations.
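For reference, a common form of the Jensen (top-hat) wake model used above gives the downstream velocity deficit as a function of distance; the decay constant and thrust coefficient below are placeholders, and the exact parameterization in the paper may differ.

```python
# Jensen (top-hat) wake model: flow speed behind a turbine at downstream distance x.
import numpy as np

def jensen_wake_velocity(u0, x, r0, Ct=0.8, k=0.04):
    """u0: free-stream speed, x: downstream distance, r0: rotor radius,
    Ct: thrust coefficient, k: wake decay constant (all values are placeholders)."""
    deficit = (1.0 - np.sqrt(1.0 - Ct)) / (1.0 + k * x / r0) ** 2
    return u0 * (1.0 - deficit)

# Example: flood-tide speed 2.5 m/s, 10 m rotor radius, downstream turbine 8 radii away
print(jensen_wake_velocity(2.5, x=80.0, r0=10.0))
```

In a layout optimization, each candidate arrangement is scored by summing the power extracted by every turbine from its locally waked flow speed, which is exactly the objective the DPSO described above searches over.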
Space Shuttle Main Engine performance analysis
NASA Technical Reports Server (NTRS)
Santi, L. Michael
1993-01-01
For a number of years, NASA has relied primarily upon periodically updated versions of Rocketdyne's power balance model (PBM) to provide space shuttle main engine (SSME) steady-state performance prediction. A recent computational study indicated that PBM predictions do not satisfy fundamental energy conservation principles. More recently, SSME test results provided by the Technology Test Bed (TTB) program have indicated significant discrepancies between PBM flow and temperature predictions and TTB observations. Results of these investigations have diminished confidence in the predictions provided by PBM, and motivated the development of new computational tools for supporting SSME performance analysis. A multivariate least squares regression algorithm was developed and implemented during this effort in order to efficiently characterize TTB data. This procedure, called the 'gains model,' was used to approximate the variation of SSME performance parameters such as flow rate, pressure, temperature, speed, and assorted hardware characteristics in terms of six assumed independent influences. These six influences were engine power level, mixture ratio, fuel inlet pressure and temperature, and oxidizer inlet pressure and temperature. A BFGS optimization algorithm provided the base procedure for determining regression coefficients for both linear and full quadratic approximations of parameter variation. Statistical information relative to data deviation from regression derived relations was also computed. A new strategy for integrating test data with theoretical performance prediction was also investigated. The current integration procedure employed by PBM treats test data as pristine and adjusts hardware characteristics in a heuristic manner to achieve engine balance. Within PBM, this integration procedure is called 'data reduction.' By contrast, the new data integration procedure, termed 'reconciliation,' uses mathematical optimization techniques, and requires both measurement and balance uncertainty estimates. The reconciler attempts to select operational parameters that minimize the difference between theoretical prediction and observation. Selected values are further constrained to fall within measurement uncertainty limits and to satisfy fundamental physical relations (mass conservation, energy conservation, pressure drop relations, etc.) within uncertainty estimates for all SSME subsystems. The parameter selection problem described above is a traditional nonlinear programming problem. The reconciler employs a mixed penalty method to determine optimum values of SSME operating parameters associated with this problem formulation.
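The 'gains model' described above is, at its core, a least-squares response-surface fit of each performance parameter against the six influences and, in the full quadratic case, their squares and cross-products; the sketch below shows that structure on synthetic data, with all variable names and coefficients invented.

```python
# Quadratic response-surface fit of one performance parameter against six influences.
import numpy as np
from itertools import combinations_with_replacement

rng = np.random.default_rng(3)
n_obs, n_inf = 200, 6
X = rng.normal(size=(n_obs, n_inf))   # six influences (power level, mixture ratio, inlet P/T, ...)
y = 1.0 + 2.0 * X[:, 0] - 0.5 * X[:, 1] * X[:, 4] + 0.05 * rng.normal(size=n_obs)

def quadratic_design_matrix(X):
    cols = [np.ones(len(X))]                                          # constant term
    cols += [X[:, i] for i in range(X.shape[1])]                      # linear terms
    cols += [X[:, i] * X[:, j]                                        # quadratic terms
             for i, j in combinations_with_replacement(range(X.shape[1]), 2)]
    return np.column_stack(cols)

A = quadratic_design_matrix(X)
coeffs, residuals, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coeffs
print("RMS residual:", np.sqrt(np.mean((pred - y) ** 2)))
```

The residual statistics from such a fit play the same role as the deviation statistics mentioned above: they quantify how well the regression characterizes the test data before any reconciliation against the physics-based model is attempted.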
Improving Free-Piston Stirling Engine Specific Power
NASA Technical Reports Server (NTRS)
Briggs, Maxwell Henry
2014-01-01
This work uses analytical methods to demonstrate the potential benefits of optimizing piston and/or displacer motion in a Stirling Engine. Isothermal analysis was used to show the potential benefits of ideal motion in ideal Stirling engines. Nodal analysis is used to show that ideal piston and displacer waveforms are not optimal in real Stirling engines. Constrained optimization was used to identify piston and displacer waveforms that increase Stirling engine specific power.
Improving Free-Piston Stirling Engine Specific Power
NASA Technical Reports Server (NTRS)
Briggs, Maxwell H.
2015-01-01
This work uses analytical methods to demonstrate the potential benefits of optimizing piston and/or displacer motion in a Stirling engine. Isothermal analysis was used to show the potential benefits of ideal motion in ideal Stirling engines. Nodal analysis is used to show that ideal piston and displacer waveforms are not optimal in real Stirling engines. Constrained optimization was used to identify piston and displacer waveforms that increase Stirling engine specific power.
Resonator reset in circuit QED by optimal control for large open quantum systems
NASA Astrophysics Data System (ADS)
Boutin, Samuel; Andersen, Christian Kraglund; Venkatraman, Jayameenakshi; Ferris, Andrew J.; Blais, Alexandre
2017-10-01
We study an implementation of the open GRAPE (gradient ascent pulse engineering) algorithm well suited for large open quantum systems. While typical implementations of optimal control algorithms for open quantum systems rely on explicit matrix exponential calculations, our implementation avoids these operations, leading to a polynomial speedup of the open GRAPE algorithm in cases of interest. This speedup, as well as the reduced memory requirements of our implementation, is illustrated by comparison to a standard implementation of open GRAPE. As a practical example, we apply this open-system optimization method to active reset of a readout resonator in circuit QED. In this problem, the shape of a microwave pulse is optimized so as to empty the cavity of measurement photons as fast as possible. Using our open GRAPE implementation, we obtain pulse shapes leading to a reset time over 4 times faster than passive reset.
ADS: A FORTRAN program for automated design synthesis: Version 1.10
NASA Technical Reports Server (NTRS)
Vanderplaats, G. N.
1985-01-01
A new general-purpose optimization program for engineering design is described. ADS (Automated Design Synthesis - Version 1.10) is a FORTRAN program for solution of nonlinear constrained optimization problems. The program is segmented into three levels: strategy, optimizer, and one-dimensional search. At each level, several options are available so that a total of over 100 possible combinations can be created. Examples of available strategies are sequential unconstrained minimization, the Augmented Lagrange Multiplier method, and Sequential Linear Programming. Available optimizers include variable metric methods and the Method of Feasible Directions as examples, and one-dimensional search options include polynomial interpolation and the Golden Section method as examples. Emphasis is placed on ease of use of the program. All information is transferred via a single parameter list. Default values are provided for all internal program parameters such as convergence criteria, and the user is given a simple means to over-ride these, if desired.
Di Benedetto, Raffaele; Fanti, Michele
2012-01-01
This paper presents an integrated approach to Line Balancing and Risk Assessment, together with a software tool named ErgoAnalysis that makes it easy to control the whole production process and produces a Risk Index for the actual work tasks in an Assembly Line. Assembly Line Balancing, or simply Line Balancing, is the problem of assigning operations to workstations along an assembly line in such a way that the assignment is optimal in some sense. Assembly lines are characterized by production constraints and restrictions due to several aspects such as the nature of the product and the flow of orders. To respond effectively to the needs of production, companies need to change the workload and production models frequently, and each manufacturing process might be quite different from another. To optimize very specific operations, assembly line balancing may draw on a number of methods, and the engineer must consider ergonomic constraints in order to reduce the risk of WMSDs. Risk Assessment can be very expensive because the engineer must repeat the evaluation at every change. ErgoAnalysis can reduce cost and improve effectiveness in Risk Assessment during Line Balancing.
Optimization of Turbine Engine Cycle Analysis with Analytic Derivatives
NASA Technical Reports Server (NTRS)
Hearn, Tristan; Hendricks, Eric; Chin, Jeffrey; Gray, Justin; Moore, Kenneth T.
2016-01-01
A new engine cycle analysis tool, called Pycycle, was built using the OpenMDAO framework. Pycycle provides analytic derivatives allowing for an efficient use of gradient-based optimization methods on engine cycle models, without requiring the use of finite difference derivative approximation methods. To demonstrate this, a gradient-based design optimization was performed on a turbofan engine model. Results demonstrate very favorable performance compared to an optimization of an identical model using finite-difference approximated derivatives.
An investigation of using an RQP based method to calculate parameter sensitivity derivatives
NASA Technical Reports Server (NTRS)
Beltracchi, Todd J.; Gabriele, Gary A.
1989-01-01
Estimation of the sensitivity of problem functions with respect to problem variables forms the basis for many of our modern day algorithms for engineering optimization. The most common application of problem sensitivities has been in the calculation of objective function and constraint partial derivatives for determining search directions and optimality conditions. A second form of sensitivity analysis, parameter sensitivity, has also become an important topic in recent years. By parameter sensitivity, researchers refer to the estimation of changes in the modeling functions and current design point due to small changes in the fixed parameters of the formulation. Methods for calculating these derivatives have been proposed by several authors (Armacost and Fiacco 1974, Sobieski et al 1981, Schmit and Chang 1984, and Vanderplaats and Yoshida 1985). Two drawbacks to estimating parameter sensitivities by current methods have been: (1) the need for second order information about the Lagrangian at the current point, and (2) the estimates assume no change in the active set of constraints. The first of these two problems is addressed here and a new algorithm is proposed that does not require explicit calculation of second order information.
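For context, the standard parameter-sensitivity relation that such methods approximate follows from differentiating the first-order optimality conditions, under the same assumption noted above that the active constraint set $g_a$ does not change; the notation here is generic rather than the paper's.

```latex
\begin{bmatrix}
\nabla_{xx}^{2} L & \nabla_{x} g_a^{\mathsf T} \\
\nabla_{x} g_a & 0
\end{bmatrix}
\begin{bmatrix}
\frac{\mathrm{d}x^{*}}{\mathrm{d}p} \\
\frac{\mathrm{d}\lambda^{*}}{\mathrm{d}p}
\end{bmatrix}
=
-\begin{bmatrix}
\nabla_{xp}^{2} L \\
\nabla_{p} g_a
\end{bmatrix}
```

This system makes explicit why second-order information about the Lagrangian $L$ is normally required, which is the first of the two drawbacks the proposed RQP-based algorithm sets out to remove.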
Business process re-engineering a cardiology department.
Bakshi, Syed Murtuza Hussain
2014-01-01
The health care sector is the world's third largest industry and is facing several problems such as excessive waiting times for patients, lack of access to information, high costs of delivery and medical errors. Health care managers seek the help of process re-engineering methods to discover the best processes and to re-engineer existing processes to optimize productivity without compromising on quality. Business process re-engineering refers to the fundamental rethinking and radical redesign of business processes to achieve dramatic improvements in critical, contemporary measures of performance, such as cost, quality and speed. The present study was carried out at a tertiary care corporate hospital with a 1000-plus-bed facility. A descriptive case study method was used, with intensive, careful and complete observation of patient flow, delays, and shortcomings in patient movement and workflow. Data were collected through observations and informal interviews and analyzed by matrix analysis. Flowcharts were drawn for the various work activities of the cardiology department, including the admission process, workflow in the ward and ICCU, the patient's path through the catheterization laboratory procedure, and the billing and discharge process. The problems of the existing system were studied, and improvements to the cardiology department module were recommended and illustrated with flowcharts.
Surawski, Nicholas C; Miljevic, Branka; Bodisco, Timothy A; Brown, Richard J; Ristovski, Zoran D; Ayoko, Godwin A
2013-02-19
Compression ignition (CI) engine design is subject to many constraints, which present a multicriteria optimization problem that the engine researcher must solve. In particular, the modern CI engine must not only be efficient but must also deliver low gaseous, particulate, and life cycle greenhouse gas emissions so that its impact on urban air quality, human health, and global warming is minimized. Consequently, this study undertakes a multicriteria analysis, which seeks to identify alternative fuels, injection technologies, and combustion strategies that could potentially satisfy these CI engine design constraints. Three data sets are analyzed with the Preference Ranking Organization Method for Enrichment Evaluations and Geometrical Analysis for Interactive Aid (PROMETHEE-GAIA) algorithm to explore the impact of (1) an ethanol fumigation system, (2) alternative fuels (20% biodiesel and synthetic diesel) and alternative injection technologies (mechanical direct injection and common rail injection), and (3) various biodiesel fuels made from 3 feedstocks (i.e., soy, tallow, and canola) tested at several blend percentages (20-100%) on the resulting emissions and efficiency profile of the various test engines. The results show that moderate ethanol substitutions (~20% by energy) at moderate load, high percentage soy blends (60-100%), and alternative fuels (biodiesel and synthetic diesel) provide an efficiency and emissions profile that yields the most "preferred" solutions to this multicriteria engine design problem. Further research is, however, required to reduce reactive oxygen species (ROS) emissions with alternative fuels and to deliver technologies that do not significantly reduce the median diameter of particle emissions.
Scheeline, Alexander
2017-10-01
Designing a spectrometer requires knowledge of the problem to be solved, the molecules whose properties will contribute to a solution of that problem and skill in many subfields of science and engineering. A seemingly simple problem, design of an ultraviolet, visible, and near-infrared spectrometer, is used to show the reasoning behind the trade-offs in instrument design. Rather than reporting a fully optimized instrument, the Yin and Yang of design choices, leading to decisions about financial cost, materials choice, resolution, throughput, aperture, and layout are described. To limit scope, aspects such as grating blaze, electronics design, and light sources are not presented. The review illustrates the mixture of mathematical rigor, rule of thumb, esthetics, and availability of components that contribute to the art of spectrometer design.
Li, Yongbao; Tian, Zhen; Song, Ting; Wu, Zhaoxia; Liu, Yaqiang; Jiang, Steve; Jia, Xun
2017-01-07
Monte Carlo (MC)-based spot dose calculation is highly desired for inverse treatment planning in proton therapy because of its accuracy. Recent studies on biological optimization have also indicated the use of MC methods to compute relevant quantities of interest, e.g. linear energy transfer. Although GPU-based MC engines have been developed to address inverse optimization problems, their efficiency still needs to be improved. Also, the use of a large number of GPUs in MC calculation is not favorable for clinical applications. The previously proposed adaptive particle sampling (APS) method can improve the efficiency of MC-based inverse optimization by using the computationally expensive MC simulation more effectively. This method is more efficient than the conventional approach that performs spot dose calculation and optimization in two sequential steps. In this paper, we propose a computational library to perform MC-based spot dose calculation on GPU with the APS scheme. The implemented APS method performs a non-uniform sampling of the particles from pencil beam spots during the optimization process, favoring those from the high intensity spots. The library also conducts two computationally intensive matrix-vector operations frequently used when solving an optimization problem. This library design allows a streamlined integration of the MC-based spot dose calculation into an existing proton therapy inverse planning process. We tested the developed library in a typical inverse optimization system with four patient cases. The library achieved the targeted functions by supporting inverse planning in various proton therapy schemes, e.g. single field uniform dose, 3D intensity modulated proton therapy, and distal edge tracking. The efficiency was 41.6 ± 15.3% higher than the use of a GPU-based MC package in a conventional calculation scheme. The total computation time ranged between 2 and 50 min on a single GPU card depending on the problem size.
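The adaptive particle sampling idea is to spend MC histories where they matter most, i.e. on the pencil-beam spots whose intensities are currently large in the optimizer. A minimal sketch of that allocation step is given below; the spot intensities, the minimum-history floor, and the function name are illustrative assumptions, not the library's API.

```python
import numpy as np

def allocate_particles(spot_intensities, n_total, n_min=100):
    """Allocate MC histories across pencil-beam spots, favoring high-intensity spots.

    Each spot receives at least n_min histories; the remainder is distributed
    in proportion to the current spot intensities from the optimizer.
    """
    w = np.maximum(np.asarray(spot_intensities, dtype=float), 0.0)
    n_spots = w.size
    base = np.full(n_spots, n_min)
    remaining = n_total - n_min * n_spots
    if remaining > 0 and w.sum() > 0:
        extra = np.floor(remaining * w / w.sum()).astype(int)
    else:
        extra = np.zeros(n_spots, dtype=int)
    return base + extra

intensities = np.array([0.1, 2.5, 0.0, 1.2])   # hypothetical spot weights at some iteration
print(allocate_particles(intensities, n_total=100000))
```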
Enhanced Fuel-Optimal Trajectory-Generation Algorithm for Planetary Pinpoint Landing
NASA Technical Reports Server (NTRS)
Acikmese, Behcet; Blackmore, James C.; Scharf, Daniel P.
2011-01-01
An enhanced algorithm is developed that builds on a previous innovation of fuel-optimal powered-descent guidance (PDG) for planetary pinpoint landing. The PDG problem is to compute constrained, fuel-optimal trajectories to land a craft at a prescribed target on a planetary surface, starting from a parachute cut-off point and using a throttleable descent engine. The previous innovation showed the minimal-fuel PDG problem can be posed as a convex optimization problem, in particular, as a Second-Order Cone Program, which can be solved to global optimality with deterministic convergence properties, and hence is a candidate for onboard implementation. To increase the speed and robustness of this convex PDG algorithm for possible onboard implementation, the following enhancements are incorporated: 1) Fast detection of infeasibility (i.e., control authority is not sufficient for soft-landing) for subsequent fault response. 2) The use of a piecewise-linear control parameterization, providing smooth solution trajectories and increasing computational efficiency. 3) An enhanced line-search algorithm for optimal time-of-flight, providing quicker convergence and bounding the number of path-planning iterations needed. 4) An additional constraint that analytically guarantees inter-sample satisfaction of glide-slope and non-sub-surface flight constraints, allowing larger discretizations and, hence, faster optimization. 5) Explicit incorporation of Mars rotation rate into the trajectory computation for improved targeting accuracy. These enhancements allow faster convergence to the fuel-optimal solution and, more importantly, remove the need for a "human-in-the-loop," as constraints will be satisfied over the entire path-planning interval independent of step-size (as opposed to just at the discrete time points) and infeasible initial conditions are immediately detected. Finally, while the PDG stage is typically only a few minutes, ignoring the rotation rate of Mars can introduce 10s of meters of error. By incorporating it, the enhanced PDG algorithm becomes capable of pinpoint targeting.
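For readers unfamiliar with the convex formulation, the toy sketch below poses a drastically simplified powered-descent problem as a second-order cone program using the cvxpy modeling package (assumed installed): constant mass, no lower thrust bound, no planetary rotation, and a crude glide-slope cone. It only illustrates why the problem is convex and is not the flight algorithm or its enhancements described above; all numbers are illustrative.

```python
import cvxpy as cp
import numpy as np

g = np.array([0.0, 0.0, -3.71])   # Mars gravity, m/s^2
m, T_max, N, dt = 1900.0, 24000.0, 60, 1.0

r = cp.Variable((N + 1, 3))       # position
v = cp.Variable((N + 1, 3))       # velocity
u = cp.Variable((N, 3))           # thrust acceleration commands

cons = [r[0] == np.array([450.0, -330.0, 2400.0]),
        v[0] == np.array([-40.0, 10.0, -10.0]),
        r[N] == 0.0, v[N] == 0.0]
for k in range(N):
    cons += [v[k + 1] == v[k] + dt * (u[k] + g),
             r[k + 1] == r[k] + dt * v[k] + 0.5 * dt ** 2 * (u[k] + g),
             cp.norm(u[k]) <= T_max / m,            # thrust magnitude (second-order cone)
             r[k][2] >= 0.5 * cp.norm(r[k][:2])]    # simple glide-slope cone

fuel = dt * cp.sum(cp.norm(u, axis=1))              # proxy for propellant use
prob = cp.Problem(cp.Minimize(fuel), cons)
prob.solve()
print(prob.status, prob.value)
```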
Integrator Windup Protection-Techniques and a STOVL Aircraft Engine Controller Application
NASA Technical Reports Server (NTRS)
KrishnaKumar, K.; Narayanaswamy, S.
1997-01-01
Integrators are included in the feedback loop of a control system to eliminate the steady state errors in the commanded variables. The integrator windup problem arises if the control actuators encounter operational limits before the steady state errors are driven to zero by the integrator. The typical effects of windup are large system oscillations, high steady state error, and a delayed system response following the windup. In this study, methods to prevent the integrator windup are examined to provide Integrator Windup Protection (IW) for an engine controller of a Short Take-Off and Vertical Landing (STOVL) aircraft. An unified performance index is defined to optimize the performance of the Conventional Anti-Windup (CAW) and the Modified Anti-Windup (MAW) methods. A modified Genetic Algorithm search procedure with stochastic parameter encoding is implemented to obtain the optimal parameters of the CAW scheme. The advantages and drawbacks of the CAW and MAW techniques are discussed and recommendations are made for the choice of the IWP scheme, given some characteristics of the system.
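The specific CAW and MAW schemes of the study are not reproduced here; the sketch below shows a generic back-calculation anti-windup on a PI loop, one common "conventional" arrangement: when the actuator saturates, the gap between the saturated and unsaturated commands is fed back to bleed off the integrator. The gains and the toy plant are illustrative assumptions.

```python
def pi_step_antiwindup(error, state, kp=2.0, ki=1.0, kb=1.0, dt=0.01, u_min=-1.0, u_max=1.0):
    """One step of a PI controller with back-calculation anti-windup.

    When the actuator saturates, the difference between the saturated and the
    unsaturated command is fed back (gain kb) to limit integrator windup and
    the large overshoot and delayed response that follow it.
    """
    integ = state["integ"]
    u_unsat = kp * error + ki * integ
    u = min(max(u_unsat, u_min), u_max)            # actuator limits
    integ += dt * (error + kb * (u - u_unsat))     # back-calculation term
    state["integ"] = integ
    return u

# Usage: drive a first-order toy plant toward a setpoint that initially saturates the actuator
state, y, setpoint = {"integ": 0.0}, 0.0, 1.8
for _ in range(2000):
    u = pi_step_antiwindup(setpoint - y, state)
    y += 0.01 * (-0.5 * y + u)                     # toy plant: y' = -0.5 y + u
print(round(y, 3))                                 # settles near the setpoint without windup overshoot
```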
Research on the optimization of quota design in real estate
NASA Astrophysics Data System (ADS)
Sun, Chunling; Ma, Susu; Zhong, Weichao
2017-11-01
Quota design is one of the effective methods of cost control in real estate development projects and is widely used in current projects to control engineering construction costs, but quota design has many deficiencies in the design process. For this purpose, this paper puts forward a method for achieving investment control of real estate development projects that combines quota design and value engineering (VE) at the design stage. Specifically, it is an optimization of the structure of quota design. First, the design limits are determined from the investment estimate; then VE is used to carry out an initial allocation of the design limits and obtain the functional target cost; finally, the whole life cycle cost (LCC) and operational problems encountered in practical application are considered to complete the correction of the functional target cost. The improved process can control the project cost more effectively: it not only keeps the investment within a certain range, but also allows the project to realize maximum value within that investment.
Desired Precision in Multi-Objective Optimization: Epsilon Archiving or Rounding Objectives?
NASA Astrophysics Data System (ADS)
Asadzadeh, M.; Sahraei, S.
2016-12-01
Multi-objective optimization (MO) aids in supporting the decision making process in water resources engineering and design problems. One of the main goals of solving a MO problem is to archive a set of solutions that is well-distributed across a wide range of all the design objectives. Modern MO algorithms use the epsilon dominance concept to define a mesh with pre-defined grid-cell size (often called epsilon) in the objective space and archive at most one solution at each grid-cell. Epsilon can be set to the desired precision level of each objective function to make sure that the difference between each pair of archived solutions is meaningful. This epsilon archiving process is computationally expensive in problems that have quick-to-evaluate objective functions. This research explores the applicability of a similar but computationally more efficient approach to respect the desired precision level of all objectives in the solution archiving process. In this alternative approach each objective function is rounded to the desired precision level before comparing any new solution to the set of archived solutions that already have rounded objective function values. This alternative solution archiving approach is compared to the epsilon archiving approach in terms of efficiency and quality of archived solutions for solving mathematical test problems and hydrologic model calibration problems.
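To make the two archiving ideas concrete, the sketch below contrasts a simplified epsilon-dominance archive (at most one solution per epsilon grid cell) with the alternative of rounding objectives to the desired precision and then applying plain Pareto dominance. Both routines assume minimization and are simplified illustrations, not the exact procedures compared in the study.

```python
import numpy as np

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return np.all(a <= b) and np.any(a < b)

def eps_archive_insert(archive, f, eps):
    """Simplified epsilon-dominance archiving: at most one solution per epsilon grid cell."""
    box = tuple(np.floor(np.asarray(f) / eps).astype(int))
    boxes = {tuple(np.floor(np.asarray(g) / eps).astype(int)): g for g in archive}
    if box in boxes:                       # same cell: keep the dominating one
        if dominates(np.asarray(f), np.asarray(boxes[box])):
            boxes[box] = f
        return list(boxes.values())
    if any(dominates(np.asarray(g), np.asarray(f)) for g in archive):
        return archive                     # f is dominated, archive unchanged
    return [g for g in archive if not dominates(np.asarray(f), np.asarray(g))] + [f]

def rounded_archive_insert(archive, f, eps):
    """Alternative: round objectives to the desired precision, then apply plain dominance."""
    fr = np.round(np.asarray(f) / eps) * eps
    if any(dominates(np.asarray(g), fr) or np.all(np.asarray(g) == fr) for g in archive):
        return archive
    return [g for g in archive if not dominates(fr, np.asarray(g))] + [fr]

eps = np.array([0.1, 0.1])
a1, a2 = [], []
for f in [(0.52, 0.31), (0.49, 0.33), (0.90, 0.10), (0.48, 0.36)]:
    a1 = eps_archive_insert(a1, f, eps)
    a2 = rounded_archive_insert(a2, f, eps)
print(len(a1), len(a2))    # the two schemes can retain different numbers of solutions
```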
Optimal Solution for an Engineering Applications Using Modified Artificial Immune System
NASA Astrophysics Data System (ADS)
Padmanabhan, S.; Chandrasekaran, M.; Ganesan, S.; patan, Mahamed Naveed Khan; Navakanth, Polina
2017-03-01
Engineering optimization plays an essential role in several application areas such as process design, product design, re-engineering and new product development. In engineering, a near-optimal solution is typically reached by comparing many different candidate solutions using knowledge gained from previous problems. Optimization algorithms provide systematic and economical ways of constructing and comparing new design solutions so as to arrive at the best design, improving solution efficiency and achieving the greatest design impact. In this paper, a new evolutionary Modified Artificial Immune System (MAIS) algorithm is used to optimize an engineering application, the design of a gear drive. The results are compared with the existing design.
NASA Astrophysics Data System (ADS)
Chaillat, Stéphanie; Desiderio, Luca; Ciarlet, Patrick
2017-12-01
In this work, we study the accuracy and efficiency of hierarchical matrix (H-matrix) based fast methods for solving dense linear systems arising from the discretization of the 3D elastodynamic Green's tensors. It is well known in the literature that standard H-matrix based methods, although very efficient tools for asymptotically smooth kernels, are not optimal for oscillatory kernels. H2-matrix and directional approaches have been proposed to overcome this problem. However the implementation of such methods is much more involved than the standard H-matrix representation. The central questions we address are twofold. (i) What is the frequency-range in which the H-matrix format is an efficient representation for 3D elastodynamic problems? (ii) What can be expected of such an approach to model problems in mechanical engineering? We show that even though the method is not optimal (in the sense that more involved representations can lead to faster algorithms) an efficient solver can be easily developed. The capabilities of the method are illustrated on numerical examples using the Boundary Element Method.
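The compressibility that H-matrices exploit can be illustrated with a smooth static 1/r kernel: the interaction block between two well-separated point clusters has rapidly decaying singular values and can be stored as two thin factors. The sketch below is only a toy demonstration of that low-rank property; for the oscillatory elastodynamic kernel at higher frequencies the numerical rank grows, which is precisely the limitation discussed in the abstract.

```python
import numpy as np

# Interaction block between two well-separated point clusters, built from a smooth
# 1/r kernel (a stand-in for the Green's tensor), compressed by a truncated SVD.
rng = np.random.default_rng(0)
X = rng.random((400, 3))                                 # source cluster near the origin
Y = rng.random((400, 3)) + np.array([5.0, 0.0, 0.0])     # well-separated target cluster

r = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)
A = 1.0 / r                                              # dense 400 x 400 interaction block

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = int(np.sum(s > s[0] * 1e-6))                         # rank needed for a 1e-6 relative tolerance
A_k = (U[:, :k] * s[:k]) @ Vt[:k, :]

rel_err = np.linalg.norm(A - A_k) / np.linalg.norm(A)
compression = k * (A.shape[0] + A.shape[1]) / A.size
print(k, rel_err, compression)                           # low rank, small error, large storage savings
```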
CrossTalk. The Journal of Defense Software Engineering. Volume 13, Number 6, June 2000
2000-06-01
Techniques for Efficiently Generating and Testing Software This paper presents a proven process that uses advanced tools to design, develop and test... optimal software. by Keith R. Wegner Large Software Systems—Back to Basics Development methods that work on small problems seem to not scale well to...Ability Requirements for Teamwork: Implications for Human Resource Management, Journal of Management, Vol. 20, No. 2, 1994. 11. Ferguson, Pat, Watts S
Algorithms and Object-Oriented Software for Distributed Physics-Based Modeling
NASA Technical Reports Server (NTRS)
Kenton, Marc A.
2001-01-01
The project seeks to develop methods to more efficiently simulate aerospace vehicles. The goals are to reduce model development time, increase accuracy (e.g., by allowing the integration of multidisciplinary models), facilitate collaboration by geographically-distributed groups of engineers, support uncertainty analysis and optimization, reduce hardware costs, and increase execution speeds. These problems are the subject of considerable contemporary research (e.g., Biedron et al. 1999; Heath and Dick, 2000).
The Shock and Vibration Bulletin. Part 1. Summaries of Presented Papers
1973-10-01
Mechanical Engineering Department, University of the Negev, Israel. It was recently observed [1] that during metal deformation a transient e.m.f. is...turn radiate noise. In many cases, significant contributions can be made toward solution of the overall problem by using properly optimized damping...delineate the role of these resonant sources of secondary sound radiation in a USAF MAC HH-53 helicopter as regards internal cabin noise level (b
2012-02-01
a public service of the RAND Corporation. ...agencies use capability portfolio management to optimize capability investments and minimize risk in meeting the DoD needs across the defense...broadly expose potential problem areas—that is, which requirements (or areas of demand) are at risk of not being met by that particular portfolio
Optimal Paths in Gliding Flight
NASA Astrophysics Data System (ADS)
Wolek, Artur
Underwater gliders are robust and long endurance ocean sampling platforms that are increasingly being deployed in coastal regions. This new environment is characterized by shallow waters and significant currents that can challenge the mobility of these efficient (but traditionally slow moving) vehicles. This dissertation aims to improve the performance of shallow water underwater gliders through path planning. The path planning problem is formulated for a dynamic particle (or "kinematic car") model. The objective is to identify the path which satisfies specified boundary conditions and minimizes a particular cost. Several cost functions are considered. The problem is addressed using optimal control theory. The length scales of interest for path planning are within a few turn radii. First, an approach is developed for planning minimum-time paths, for a fixed speed glider, that are sub-optimal but are guaranteed to be feasible in the presence of unknown time-varying currents. Next the minimum-time problem for a glider with speed controls, that may vary between the stall speed and the maximum speed, is solved. Last, optimal paths that minimize change in depth (equivalently, maximize range) are investigated. Recognizing that path planning alone cannot overcome all of the challenges associated with significant currents and shallow waters, the design of a novel underwater glider with improved capabilities is explored. A glider with a pneumatic buoyancy engine (allowing large, rapid buoyancy changes) and a cylindrical moving mass mechanism (generating large pitch and roll moments) is designed, manufactured, and tested to demonstrate potential improvements in speed and maneuverability.
NASA Astrophysics Data System (ADS)
Norlina, M. S.; Diyana, M. S. Nor; Mazidah, P.; Rusop, M.
2016-07-01
In the RF magnetron sputtering process, the desirable layer properties are largely influenced by the process parameters and conditions. If the quality of the thin film has not reached its intended level, the experiments have to be repeated until the desired quality has been met. This research proposes the Gravitational Search Algorithm (GSA) as the optimization model to reduce the time and cost spent in thin film fabrication. The optimization model's engine has been developed using Java. The model is developed based on the GSA concept, which is inspired by the Newtonian laws of gravity and motion. In this research, the model is expected to optimize four deposition parameters, which are RF power, deposition time, oxygen flow rate and substrate temperature. The results have turned out to be promising and it can be concluded that the performance of the model is satisfactory for this parameter optimization problem. Future work could compare GSA with other nature-based algorithms and test them with various sets of data.
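As a rough illustration of how GSA works, the sketch below implements the basic update loop (fitness-derived masses, a decaying gravitational constant, force-driven accelerations, stochastic velocity updates) on a stand-in objective over four normalized parameters. It omits refinements such as a shrinking Kbest set, and the objective is not the sputtering-process model used in the study.

```python
import numpy as np

def gsa_minimize(f, bounds, n_agents=30, iters=200, g0=100.0, alpha=20.0, seed=1):
    """Minimal Gravitational Search Algorithm sketch (all agents attract; no Kbest schedule)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    x = rng.uniform(lo, hi, size=(n_agents, lo.size))
    v = np.zeros_like(x)
    best_x, best_f = None, np.inf
    for t in range(iters):
        fit = np.array([f(xi) for xi in x])
        if fit.min() < best_f:
            best_f, best_x = fit.min(), x[fit.argmin()].copy()
        worst, best = fit.max(), fit.min()
        m = (worst - fit) / (worst - best + 1e-12)       # better agents are heavier
        M = m / (m.sum() + 1e-12)
        G = g0 * np.exp(-alpha * t / iters)              # gravitational "constant" decays over time
        acc = np.zeros_like(x)
        for i in range(n_agents):
            diff = x - x[i]
            dist = np.linalg.norm(diff, axis=1) + 1e-12
            acc[i] = np.sum(rng.random((n_agents, 1)) * G * (M[:, None] * diff) / dist[:, None], axis=0)
        v = rng.random(x.shape) * v + acc                # stochastic velocity update
        x = np.clip(x + v, lo, hi)
    return best_x, best_f

# Hypothetical stand-in objective for four normalized deposition parameters
sphere = lambda z: float(np.sum((z - 0.3) ** 2))
print(gsa_minimize(sphere, bounds=(np.zeros(4), np.ones(4))))
```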
Distributed computer system enhances productivity for SRB joint optimization
NASA Technical Reports Server (NTRS)
Rogers, James L., Jr.; Young, Katherine C.; Barthelemy, Jean-Francois M.
1987-01-01
Initial calculations of a redesign of the solid rocket booster joint that failed during the shuttle tragedy showed that the design had a weight penalty associated with it. Optimization techniques were to be applied to determine if there was any way to reduce the weight while keeping the joint opening closed and limiting the stresses. To allow engineers to examine as many alternatives as possible, a system was developed consisting of existing software that coupled structural analysis with optimization which would execute on a network of computer workstations. To increase turnaround, this system took advantage of the parallelism offered by the finite difference technique of computing gradients to allow several workstations to contribute to the solution of the problem simultaneously. The resulting system reduced the amount of time to complete one optimization cycle from two hours to one-half hour with a potential of reducing it to 15 minutes. The current distributed system, which contains numerous extensions, requires one hour turnaround per optimization cycle. This would take four hours for the sequential system.
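The parallelism described here comes from the fact that each finite-difference gradient component needs its own independent analysis run. A minimal sketch of that pattern on a single multicore machine is shown below, with an invented stand-in analysis function in place of the structural analysis code; the process pool plays the role of the workstation network.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def structural_analysis(x):
    """Stand-in for an expensive analysis code (e.g., a finite-element run)."""
    return float(np.sum(x ** 2) + 0.1 * np.sum(np.cos(5 * x)))

def parallel_fd_gradient(f, x, h=1e-6, workers=4):
    """Forward-difference gradient with each perturbed analysis run in parallel.

    Each gradient component requires an independent call to the analysis, so the
    n+1 evaluations can be farmed out to separate workers, which is the same
    parallelism the distributed-workstation system exploited.
    """
    x = np.asarray(x, float)
    points = [x] + [x + h * e for e in np.eye(x.size)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        vals = list(pool.map(f, points))
    f0, f_pert = vals[0], np.array(vals[1:])
    return (f_pert - f0) / h

if __name__ == "__main__":
    print(parallel_fd_gradient(structural_analysis, np.array([0.5, -0.2, 1.0])))
```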
McNerney, Monica P; Watstein, Daniel M; Styczynski, Mark P
2015-09-01
Metabolic engineering is generally focused on static optimization of cells to maximize production of a desired product, though recently dynamic metabolic engineering has explored how metabolic programs can be varied over time to improve titer. However, these are not the only types of applications where metabolic engineering could make a significant impact. Here, we discuss a new conceptual framework, termed "precision metabolic engineering," involving the design and engineering of systems that make different products in response to different signals. Rather than focusing on maximizing titer, these types of applications typically have three hallmarks: sensing signals that determine the desired metabolic target, completely directing metabolic flux in response to those signals, and producing sharp responses at specific signal thresholds. In this review, we will first discuss and provide examples of precision metabolic engineering. We will then discuss each of these hallmarks and identify which existing metabolic engineering methods can be applied to accomplish those tasks, as well as some of their shortcomings. Ultimately, precise control of metabolic systems has the potential to enable a host of new metabolic engineering and synthetic biology applications for any problem where flexibility of response to an external signal could be useful. Copyright © 2015 International Metabolic Engineering Society. Published by Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Womersley, J.; DiGiacomo, N.; Killian, K.
1990-04-01
Detailed detector design has traditionally been divided between engineering optimization for structural integrity and subsequent physicist evaluation. The availability of CAD systems for engineering design enables the tasks to be integrated by providing tools for particle simulation within the CAD system. We believe this will speed up detector design and avoid problems due to the late discovery of shortcomings in the detector. This could occur because of the slowness of traditional verification techniques (such as detailed simulation with GEANT). One such new particle simulation tool is described. It is being used with the I-DEAS CAD package for SSC detector design at Martin-Marietta Astronautics and is to be released through the SSC Laboratory.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Townsend, D.W.; Linnhoff, B.
In Part I, criteria for heat engine and heat pump placement in chemical process networks were derived, based on the "temperature interval" (T.I.) analysis of the heat exchanger network problem. Using these criteria, this paper gives a method for identifying the best outline design for any combined system of chemical process, heat engines, and heat pumps. The method eliminates inferior alternatives early, and positively leads on to the most appropriate solution. A graphical procedure based on the T.I. analysis forms the heart of the approach, and the calculations involved are simple enough to be carried out on, say, a programmable calculator. Application to a case study is demonstrated. Optimization methods based on this procedure are currently under research.
Itaconic acid production in microorganisms.
Zhao, Meilin; Lu, Xinyao; Zong, Hong; Li, Jinyang; Zhuge, Bin
2018-03-01
Itaconic acid, 2-methylidenebutanedioic acid, is a precursor of polymers, chemicals, and fuels. Many fungi can synthesize itaconic acid; Aspergillus terreus and Ustilago maydis produce up to 85 and 53 g l⁻¹, respectively. Other organisms, including Aspergillus niger and yeasts, have been engineered to produce itaconic acid. However, the titer of itaconic acid is low compared with the analogous major fermentation product, citric acid, for which the yield is > 200 g l⁻¹. Here, we review two types of pathway for itaconic acid biosynthesis as well as recent advances in metabolic engineering strategies and process optimization to enhance itaconic acid productivity in native producers and heterologous hosts. We also propose further improvements to overcome existing problems.
[Mathematical model of technical equipment of a clinical-diagnostic laboratory].
Bukin, S I; Busygin, D V; Tilevich, M E
1990-01-01
The paper is concerned with the problems of technical equipment of standard clinical-diagnostic laboratories (CDL) in this country. The authors suggest a mathematical model that may minimize expenditures for laboratory studies. The model enables the following problems to be solved: to issue scientifically based recommendations for the technical equipment of CDL; to validate the medico-technical requirements for newly devised items; to select the optimum types of uniform items; to define optimal technical decisions at the design stage; to determine the laboratory assistant's labour productivity and the cost of individual investigations; and to compute the medical laboratory engineering requirements of treatment and prophylactic institutions of this country.
Distance majorization and its applications.
Chi, Eric C; Zhou, Hua; Lange, Kenneth
2014-08-01
The problem of minimizing a continuously differentiable convex function over an intersection of closed convex sets is ubiquitous in applied mathematics. It is particularly interesting when it is easy to project onto each separate set, but nontrivial to project onto their intersection. Algorithms based on Newton's method such as the interior point method are viable for small to medium-scale problems. However, modern applications in statistics, engineering, and machine learning are posing problems with potentially tens of thousands of parameters or more. We revisit this convex programming problem and propose an algorithm that scales well with dimensionality. Our proposal is an instance of a sequential unconstrained minimization technique and revolves around three ideas: the majorization-minimization principle, the classical penalty method for constrained optimization, and quasi-Newton acceleration of fixed-point algorithms. The performance of our distance majorization algorithms is illustrated in several applications.
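A minimal sketch of the idea for a quadratic objective is given below: the squared distances to the constraint sets are majorized by squared distances to the current projections, giving a closed-form update, while the penalty parameter is slowly increased. The quasi-Newton acceleration described in the paper is omitted, and the two constraint sets (a unit ball and a halfspace) are chosen purely for illustration.

```python
import numpy as np

def proj_ball(x, radius=1.0):
    n = np.linalg.norm(x)
    return x if n <= radius else x * (radius / n)

def proj_halfspace(x, a, b):
    """Project onto the halfspace {x : a.x >= b}."""
    s = a @ x - b
    return x if s >= 0 else x - s * a / (a @ a)

def distance_majorization(y, a, b, mu0=1.0, rho=1.2, iters=200):
    """Minimize 0.5||x - y||^2 over (unit ball) intersected with (halfspace a.x >= b).

    Each iteration majorizes the penalized objective
        0.5||x - y||^2 + (mu/2) [dist(x, C1)^2 + dist(x, C2)^2]
    by replacing the squared distances with squared distances to the current
    projections, which yields a closed-form update; mu is slowly increased as
    in the classical penalty method.
    """
    x, mu = y.copy(), mu0
    for _ in range(iters):
        p1, p2 = proj_ball(x), proj_halfspace(x, a, b)
        x = (y + mu * (p1 + p2)) / (1.0 + 2.0 * mu)
        mu *= rho
    return x

y = np.array([2.0, 2.0])
a, b = np.array([0.0, 1.0]), 0.5             # halfspace: x2 >= 0.5
x_star = distance_majorization(y, a, b)
print(x_star, np.linalg.norm(x_star))        # lies (approximately) on the unit ball with x2 >= 0.5
```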
An accurate, fast, and scalable solver for high-frequency wave propagation
NASA Astrophysics Data System (ADS)
Zepeda-Núñez, L.; Taus, M.; Hewett, R.; Demanet, L.
2017-12-01
In many science and engineering applications, solving time-harmonic high-frequency wave propagation problems quickly and accurately is of paramount importance. For example, in geophysics, particularly in oil exploration, such problems can be the forward problem in an iterative process for solving the inverse problem of subsurface inversion. It is important to solve these wave propagation problems accurately in order to efficiently obtain meaningful solutions of the inverse problems: low order forward modeling can hinder convergence. Additionally, due to the volume of data and the iterative nature of most optimization algorithms, the forward problem must be solved many times. Therefore, a fast solver is necessary to make solving the inverse problem feasible. For time-harmonic high-frequency wave propagation, obtaining both speed and accuracy is historically challenging. Recently, there have been many advances in the development of fast solvers for such problems, including methods which have linear complexity with respect to the number of degrees of freedom. While most methods scale optimally only in the context of low-order discretizations and smooth wave speed distributions, the method of polarized traces has been shown to retain optimal scaling for high-order discretizations, such as hybridizable discontinuous Galerkin methods and for highly heterogeneous (and even discontinuous) wave speeds. The resulting fast and accurate solver is consequently highly attractive for geophysical applications. To date, this method relies on a layered domain decomposition together with a preconditioner applied in a sweeping fashion, which has limited straight-forward parallelization. In this work, we introduce a new version of the method of polarized traces which reveals more parallel structure than previous versions while preserving all of its other advantages. We achieve this by further decomposing each layer and applying the preconditioner to these new components separately and in parallel. We demonstrate that this produces an even more effective and parallelizable preconditioner for a single right-hand side. As before, additional speed can be gained by pipelining several right-hand-sides.
Optimization of In-Cylinder Pressure Filter for Engine Research
2017-06-01
ARL-TR-8034, June 2017, US Army Research Laboratory. Optimization of In-Cylinder Pressure Filter for Engine Research, by Kenneth S Kim, Michael T Szedlmayer, Kurt M Kruger, and Chol-Bum M...
Optimizing communication satellites payload configuration with exact approaches
NASA Astrophysics Data System (ADS)
Stathakis, Apostolos; Danoy, Grégoire; Bouvry, Pascal; Talbi, El-Ghazali; Morelli, Gianluigi
2015-12-01
The satellite communications market is competitive and rapidly evolving. The payload, which is in charge of applying frequency conversion and amplification to the signals received from Earth before their retransmission, is made of various components. These include reconfigurable switches that permit the re-routing of signals based on market demand or because of some hardware failure. In order to meet modern requirements, the size and the complexity of current communication payloads are increasing significantly. Consequently, the optimal payload configuration, which was previously done manually by the engineers with the use of computerized schematics, is now becoming a difficult and time consuming task. Efficient optimization techniques are therefore required to find the optimal set(s) of switch positions to optimize some operational objective(s). In order to tackle this challenging problem for the satellite industry, this work proposes two Integer Linear Programming (ILP) models. The first one is single-objective and focuses on the minimization of the length of the longest channel path, while the second one is bi-objective and additionally aims at minimizing the number of switch changes in the payload switch matrix. Experiments are conducted on a large set of instances of realistic payload sizes using the CPLEX® solver and two well-known exact multi-objective algorithms. Numerical results demonstrate the efficiency and limitations of the ILP approach on this real-world problem.
Performance of discrete heat engines and heat pumps in finite time
Feldmann; Kosloff
2000-05-01
The performance in finite time of a discrete heat engine with internal friction is analyzed. The working fluid of the engine is composed of an ensemble of noninteracting two level systems. External work is applied by changing the external field and thus the internal energy levels. The friction induces a minimal cycle time. The power output of the engine is optimized with respect to time allocation between the contact time with the hot and cold baths as well as the adiabats. The engine's performance is also optimized with respect to the external fields. By reversing the cycle of operation a heat pump is constructed. The performance of the engine as a heat pump is also optimized. By varying the time allocation between the adiabats and the contact time with the reservoir a universal behavior can be identified. The optimal performance of the engine when the cold bath is approaching absolute zero is studied. It is found that the optimal cooling rate converges linearly to zero when the temperature approaches absolute zero.
Design and optimization of interplanetary spacecraft trajectories
NASA Astrophysics Data System (ADS)
McConaghy, Thomas Troy
Scientists involved in space exploration are always looking for ways to accomplish more with their limited budgets. Mission designers can decrease operational costs by crafting trajectories with low launch costs, short time-of-flight, or low propellant requirements. Gravity-assist maneuvers and low-thrust, high-efficiency ion propulsion can be of great help. This dissertation describes advances in methods to design and optimize interplanetary spacecraft trajectories, particularly for missions using gravity-assist maneuvers or low-thrust engines (or both). The first part of this dissertation describes a new, efficient, two-step methodology to design and optimize low-thrust gravity-assist trajectories. Models for the launch vehicle, solar arrays, and engines are introduced and several examples of optimized trajectories are presented. For example, a 3.7-year Earth-Venus-Earth-Mars-Jupiter flyby trajectory with maximized final mass is described. The way that the parameterization of the optimization problem affects convergence speed and reliability is also investigated. The choice of coordinate system is shown to make a significant difference. The second part of this dissertation describes a way to construct Earth-Mars cycler trajectories---periodic orbits that repeatedly encounter Earth and Mars, yet require little or no propellant. We find that well-known cyclers, such as the Aldrin cycler, are special cases of a much larger family of cyclers. In fact, so many new cyclers are found that a comprehensive naming system (nomenclature) is proposed. One particularly promising new cycler, the "ballistic S1L1 cycler", is analyzed in greater detail.
Neural Network and Response Surface Methodology for Rocket Engine Component Optimization
NASA Technical Reports Server (NTRS)
Vaidyanathan, Rajkumar; Papita, Nilay; Shyy, Wei; Tucker, P. Kevin; Griffin, Lisa W.; Haftka, Raphael; Fitz-Coy, Norman; McConnaughey, Helen (Technical Monitor)
2000-01-01
The goal of this work is to compare the performance of response surface methodology (RSM) and two types of neural networks (NN) to aid preliminary design of two rocket engine components. A data set of 45 training points and 20 test points, obtained from a semi-empirical model based on three design variables, is used for a shear coaxial injector element. Data for supersonic turbine design are based on six design variables, with 76 training data and 18 test data obtained from simplified aerodynamic analysis. Several RS and NN are first constructed using the training data. The test data are then employed to select the best RS or NN. Quadratic and cubic response surfaces, radial basis neural networks (RBNN) and back-propagation neural networks (BPNN) are compared. Two-layered RBNN are generated using two different training algorithms, namely solverbe and solverb. A two-layered BPNN is generated with a Tan-Sigmoid transfer function. Various issues related to the training of the neural networks are addressed, including the number of neurons, error goals, spread constants and the accuracy of different models in representing the design space. A search for the optimum design is carried out using a standard gradient-based optimization algorithm over the response surfaces represented by the polynomials and trained neural networks. Usually a cubic polynomial performs better than the quadratic polynomial, but exceptions have been noticed. Among the NN choices, the RBNN designed using solverb yields more consistent performance for both engine components considered. The training of RBNN is easier as it requires linear regression. This, coupled with the consistency in performance, promises the possibility of it being used as an optimization strategy for engineering design problems.
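The response-surface half of the comparison follows a familiar pattern: fit a low-order polynomial to the training points, then run a gradient-based optimizer on the cheap surrogate. The sketch below reproduces that pattern with an invented stand-in "simulation" in place of the semi-empirical injector model; the point count mirrors the abstract but the data are not the study's.

```python
import numpy as np
from itertools import combinations_with_replacement
from scipy.optimize import minimize

def quadratic_features(X):
    """[1, x_i, x_i * x_j] feature expansion for a full quadratic response surface."""
    X = np.atleast_2d(X)
    cols = [np.ones(len(X))] + [X[:, i] for i in range(X.shape[1])]
    cols += [X[:, i] * X[:, j] for i, j in combinations_with_replacement(range(X.shape[1]), 2)]
    return np.column_stack(cols)

rng = np.random.default_rng(0)
X_train = rng.uniform(-1, 1, size=(45, 3))                  # 45 training points, 3 design variables
y_train = np.array([np.sin(x[0]) + x[1] ** 2 - 0.5 * x[2] for x in X_train])  # stand-in "simulation"

coef, *_ = np.linalg.lstsq(quadratic_features(X_train), y_train, rcond=None)  # fit the response surface

surrogate = lambda x: (quadratic_features(x) @ coef)[0]     # cheap-to-evaluate approximation
res = minimize(surrogate, x0=np.zeros(3), bounds=[(-1, 1)] * 3)  # gradient-based search on the surrogate
print(res.x, res.fun)
```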
NASA Astrophysics Data System (ADS)
Martins, T. M.; Kelman, R.; Metello, M.; Ciarlini, A.; Granville, A. C.; Hespanhol, P.; Castro, T. L.; Gottin, V. M.; Pereira, M. V. F.
2015-12-01
The hydroelectric potential of a river is proportional to its head and water flows. Selecting the best development alternative for greenfield watershed projects is a difficult task, since it must balance demands for infrastructure, especially in the developing world where a large potential remains unexplored, with environmental conservation. Discussions usually diverge into antagonistic views, as in recent projects in the Amazon forest, for example. This motivates the construction of a computational tool that will support a more qualified debate regarding development/conservation options. HERA provides the optimal head-partition division of a river considering technical, economic and environmental aspects. HERA has three main components: (i) GIS pre-processing of topographic and hydrologic data; (ii) automatic engineering and equipment design and budget estimation for candidate projects; (iii) translation of the head-partition problem into a mathematical programming model. By integrating automatic calculation with geoprocessing tools, cloud computation and optimization techniques, HERA makes it possible for countless head-partition alternatives to be intrinsically compared - a great advantage with respect to traditional field surveys followed by engineering design methods. Based on optimization techniques, HERA determines which hydro plants should be built, including location, design and technical data (e.g. water head, reservoir area and volume), engineering design (dam, spillways, etc.) and costs. The results can be visualized in the HERA interface, exported to GIS software, Google Earth or CAD systems. HERA has a global scope of application since the main input data are a Digital Terrain Model and water inflows at gauging stations. The objective is to contribute to an increased rationality of decisions by presenting to the stakeholders a clear and quantitative view of the alternatives, their opportunities and threats.
Modelling of a Solar Thermal Power Plant for Benchmarking Blackbox Optimization Solvers
NASA Astrophysics Data System (ADS)
Lemyre Garneau, Mathieu
A new family of problems is provided to serve as a benchmark for blackbox optimization solvers. The problems are single or bi-objective and vary in complexity in terms of the number of variables used (from 5 to 29), the type of variables (integer, real, category), the number of constraints (from 5 to 17) and their types (binary or continuous). In order to provide problems exhibiting dynamics that reflect real engineering challenges, they are extracted from an original numerical model of a concentrated solar power (CSP) power plant with molten salt thermal storage. The model simulates the performance of the power plant by using a high-level model of each of its main components, namely, a heliostat field, a central cavity receiver, a molten salt heat storage, a steam generator and an idealized powerblock. The heliostat field layout is determined through a simple automatic strategy that finds the best individual positions on the field by considering their respective cosine efficiency, atmospheric scattering and spillage losses as a function of the design parameters. A Monte-Carlo integral method is used to evaluate the heliostat field's optical performance throughout the day so that shadowing effects between heliostats are considered, and the results of this evaluation provide the inputs to simulate the levels and temperatures of the thermal storage. The molten salt storage inventory is used to transfer thermal energy to the powerblock, which simulates a simple Rankine cycle with a single steam turbine. Auxiliary models are used to provide additional optimization constraints on the investment cost, parasitic losses or component failure. The results of preliminary optimizations performed with the NOMAD software using default settings are provided to show the validity of the problems.
Cankorur-Cetinkaya, Ayca; Dias, Joao M. L.; Kludas, Jana; Slater, Nigel K. H.; Rousu, Juho; Dikicioglu, Duygu
2017-01-01
Multiple interacting factors affect the performance of engineered biological systems in synthetic biology projects. The complexity of these biological systems means that experimental design should often be treated as a multiparametric optimization problem. However, the available methodologies are either impractical, due to a combinatorial explosion in the number of experiments to be performed, or are inaccessible to most experimentalists due to the lack of publicly available, user-friendly software. Although evolutionary algorithms may be employed as alternative approaches to optimize experimental design, the lack of simple-to-use software again restricts their use to specialist practitioners. In addition, the lack of subsidiary approaches to further investigate critical factors and their interactions prevents the full analysis and exploitation of the biotechnological system. We have addressed these problems and, here, provide a simple‐to‐use and freely available graphical user interface to empower a broad range of experimental biologists to employ complex evolutionary algorithms to optimize their experimental designs. Our approach exploits a Genetic Algorithm to discover the subspace containing the optimal combination of parameters, and Symbolic Regression to construct a model to evaluate the sensitivity of the experiment to each parameter under investigation. We demonstrate the utility of this method using an example in which the culture conditions for the microbial production of a bioactive human protein are optimized. CamOptimus is available through: (https://doi.org/10.17863/CAM.10257). PMID:28635591
NASA Astrophysics Data System (ADS)
Vasilkin, Andrey
2018-03-01
The more design solutions an engineer can synthesize at the search stage of high-rise building design, the more likely it is that the finally adopted version will be the most efficient and economical. However, in modern market conditions, and given the complexity and responsibility of high-rise buildings, the designer does not have the time needed to develop, analyze and compare any significant number of options. To solve this problem, it is expedient to exploit the high potential of computer-aided design. To implement an automated search for design solutions, it is proposed to develop computing facilities whose application will significantly increase the productivity of the designer and reduce the labour of designing. Methods of structural and parametric optimization have been adopted as the basis of these computing facilities. Their efficiency in the synthesis of design solutions is shown, and schemes that illustrate and explain the introduction of structural optimization into the traditional design of steel frames are constructed. To solve the problem of synthesizing and comparing design solutions for steel frames, it is proposed to develop computing facilities that significantly reduce the labour of search design, based on the methods of structural and parametric optimization.
Optimisation multi-objectif des systemes energetiques
NASA Astrophysics Data System (ADS)
Dipama, Jean
The increasing demand for energy and the environmental concerns related to greenhouse gas emissions are leading more and more private and public utilities to turn to nuclear energy as an alternative for the future. Nuclear power plants are therefore expected to undergo a large expansion in the coming years, and improved technologies will be put in place to support the development of these plants. This thesis considers the optimization of the thermodynamic cycle of the secondary loop of the Gentilly-2 nuclear power plant in terms of output power and thermal efficiency. In this thesis, investigations are carried out to determine the optimal operating conditions of steam power cycles through the judicious use of steam extractions at the different stages of the turbines. Whether in the case of superheating or regeneration, we are in all cases confronted with an optimization problem involving two conflicting objectives, since increasing the efficiency implies a decrease of the mechanical work and vice versa. Solving this kind of problem does not lead to a unique solution, but to a set of solutions that are trade-offs between the conflicting objectives. To find all of these solutions, called Pareto optimal solutions, the use of an appropriate optimization algorithm is required. Before starting the optimization of the secondary loop, we developed a thermodynamic model of the secondary loop which includes models of the main thermal components (e.g., turbine, moisture separator-superheater, condenser, feedwater heater and deaerator). This model is used to calculate the thermodynamic state of the steam and water at the different points of the installation. The thermodynamic model was developed in Matlab and validated by comparing its predictions with the operating data provided by the engineers of the power plant. The optimizer, developed in VBA (Visual Basic for Applications), uses an optimization algorithm based on the principle of genetic algorithms, a stochastic optimization method that is very robust and widely used to solve problems usually difficult to handle by traditional methods. Genetic algorithms (GAs) have been used in previous research and proved to be efficient in optimizing heat exchanger networks (HEN) (Dipama et al., 2008), where HEN were synthesized to recover the maximum heat in an industrial process; the optimization problem formulated in that context consisted of a single objective, namely the maximization of energy recovery. The optimization algorithm developed in this thesis extends the capability of GAs by taking several objectives into account simultaneously. This algorithm provides an innovation in the method of finding optimal solutions, by using a technique that consists of partitioning the solution space into parallel grids called "watching corridors". These corridors make it possible to specify the areas (the observation corridors) in which the most promising feasible solutions are found and used to guide the search towards optimal solutions. A measure of the progress of the search is incorporated into the optimization algorithm to make it self-adaptive, through the use of appropriate genetic operators at each stage of the optimization process. The proposed method allows fast convergence and ensures a diversity of solutions. Moreover, this method gives the algorithm the ability to overcome difficulties associated with optimization problems having complex Pareto front landscapes (e.g., discontinuity, disjunction, etc.).
The multi-objective optimization algorithm was first validated using numerical test problems from the literature as well as energy systems optimization problems. Finally, the proposed optimization algorithm was applied to the optimization of the secondary loop of the Gentilly-2 nuclear power plant, and a set of solutions was found that allows the power plant to operate under optimal conditions. (Abstract shortened by UMI.)
Optimization of Turbine Engine Cycle Analysis with Analytic Derivatives
NASA Technical Reports Server (NTRS)
Hearn, Tristan; Hendricks, Eric; Chin, Jeffrey; Gray, Justin; Moore, Kenneth T.
2016-01-01
A new engine cycle analysis tool, called Pycycle, was recently built using the OpenMDAO framework. This tool uses equilibrium chemistry based thermodynamics, and provides analytic derivatives. This allows for stable and efficient use of gradient-based optimization and sensitivity analysis methods on engine cycle models, without requiring the use of finite difference derivative approximation methods. To demonstrate this, a gradient-based design optimization was performed on a multi-point turbofan engine model. Results demonstrate very favorable performance compared to an optimization of an identical model using finite-difference approximated derivatives.
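The benefit of analytic derivatives can be seen on a one-line example. Below, the hand-derived derivative of an ideal Brayton-cycle efficiency relation is compared with central finite differences at several step sizes; the relation is a textbook stand-in, not part of Pycycle.

```python
import math

GAMMA = 1.4   # ratio of specific heats for air

def efficiency(pr):
    """Ideal Brayton-cycle thermal efficiency as a function of pressure ratio."""
    return 1.0 - pr ** (-(GAMMA - 1.0) / GAMMA)

def d_efficiency(pr):                       # analytic derivative
    k = (GAMMA - 1.0) / GAMMA
    return k * pr ** (-k - 1.0)

def fd_efficiency(pr, h):                   # central finite difference
    return (efficiency(pr + h) - efficiency(pr - h)) / (2.0 * h)

pr = 20.0
for h in (1e-2, 1e-6, 1e-12):
    print(h, abs(fd_efficiency(pr, h) - d_efficiency(pr)))
# The error first shrinks with h, then grows again as round-off dominates --
# the step-size dilemma that analytic derivatives avoid entirely.
```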
SOL - SIZING AND OPTIMIZATION LANGUAGE COMPILER
NASA Technical Reports Server (NTRS)
Scotti, S. J.
1994-01-01
SOL is a computer language which is geared to solving design problems. SOL includes the mathematical modeling and logical capabilities of a computer language like FORTRAN but also includes the additional power of non-linear mathematical programming methods (i.e. numerical optimization) at the language level (as opposed to the subroutine level). The language-level use of optimization has several advantages over the traditional, subroutine-calling method of using an optimizer: first, the optimization problem is described in a concise and clear manner which closely parallels the mathematical description of optimization; second, a seamless interface is automatically established between the optimizer subroutines and the mathematical model of the system being optimized; third, the results of an optimization (objective, design variables, constraints, termination criteria, and some or all of the optimization history) are output in a form directly related to the optimization description; and finally, automatic error checking and recovery from an ill-defined system model or optimization description is facilitated by the language-level specification of the optimization problem. Thus, SOL enables rapid generation of models and solutions for optimum design problems with greater confidence that the problem is posed correctly. The SOL compiler takes SOL-language statements and generates the equivalent FORTRAN code and system calls. Because of this approach, the modeling capabilities of SOL are extended by the ability to incorporate existing FORTRAN code into a SOL program. In addition, SOL has a powerful MACRO capability. The MACRO capability of the SOL compiler effectively gives the user the ability to extend the SOL language and can be used to develop easy-to-use shorthand methods of generating complex models and solution strategies. The SOL compiler provides syntactic and semantic error-checking, error recovery, and detailed reports containing cross-references to show where each variable was used. The listings summarize all optimizations, listing the objective functions, design variables, and constraints. The compiler offers error-checking specific to optimization problems, so that simple mistakes will not cost hours of debugging time. The optimization engine used by and included with the SOL compiler is a version of Vanderplatt's ADS system (Version 1.1) modified specifically to work with the SOL compiler. SOL allows the use of the over 100 ADS optimization choices such as Sequential Quadratic Programming, Modified Feasible Directions, interior and exterior penalty function and variable metric methods. Default choices of the many control parameters of ADS are made for the user, however, the user can override any of the ADS control parameters desired for each individual optimization. The SOL language and compiler were developed with an advanced compiler-generation system to ensure correctness and simplify program maintenance. Thus, SOL's syntax was defined precisely by a LALR(1) grammar and the SOL compiler's parser was generated automatically from the LALR(1) grammar with a parser-generator. Hence unlike ad hoc, manually coded interfaces, the SOL compiler's lexical analysis insures that the SOL compiler recognizes all legal SOL programs, can recover from and correct for many errors and report the location of errors to the user. This version of the SOL compiler has been implemented on VAX/VMS computer systems and requires 204 KB of virtual memory to execute. 
Since the SOL compiler produces FORTRAN code, it requires the VAX FORTRAN compiler to produce an executable program. The SOL compiler consists of 13,000 lines of Pascal code. It was developed in 1986 and last updated in 1988. The ADS and other utility subroutines amount to 14,000 lines of FORTRAN code and were also updated in 1988.
PDE Nozzle Optimization Using a Genetic Algorithm
NASA Technical Reports Server (NTRS)
Billings, Dana; Turner, James E. (Technical Monitor)
2000-01-01
Genetic algorithms, which simulate evolution in natural systems, have been used to find solutions to optimization problems that seem intractable to standard approaches. In this study, the feasibility of using a GA to find an optimum, fixed profile nozzle for a pulse detonation engine (PDE) is demonstrated. The objective was to maximize impulse during the detonation wave passage and blow-down phases of operation. Impulse of each profile variant was obtained by using the CFD code Mozart/2.0 to simulate the transient flow. After 7 generations, the method has identified a nozzle profile that certainly is a candidate for optimum solution. The constraints on the generality of this possible solution remain to be clarified.
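A bare-bones GA of the kind described (selection, crossover, mutation and elitism over a parameterized nozzle profile) is sketched below. The fitness function is an invented smooth surrogate standing in for the Mozart/2.0 CFD impulse evaluation, and all GA settings are illustrative.

```python
import numpy as np

def impulse(profile):
    """Stand-in objective rewarding a gently expanding profile; in the study this
    would be a transient CFD simulation of the detonation and blow-down phases."""
    expansion = np.diff(profile)
    return float(np.sum(expansion) - 5.0 * np.sum((expansion - 0.05) ** 2))

def genetic_algorithm(fitness, n_genes=10, pop_size=40, generations=60, p_mut=0.1, seed=2):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(1.0, 2.0, size=(pop_size, n_genes))    # nozzle radii at fixed axial stations
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        # Tournament selection
        parents = np.array([pop[max(rng.integers(pop_size, size=2), key=lambda i: scores[i])]
                            for _ in range(pop_size)])
        # One-point crossover
        children = parents.copy()
        for i in range(0, pop_size - 1, 2):
            cut = rng.integers(1, n_genes)
            children[i, cut:], children[i + 1, cut:] = parents[i + 1, cut:], parents[i, cut:].copy()
        # Gaussian mutation
        mask = rng.random(children.shape) < p_mut
        children[mask] += rng.normal(0.0, 0.05, size=mask.sum())
        # Elitism: carry over the best individual of this generation
        children[0] = pop[scores.argmax()]
        pop = np.clip(children, 1.0, 2.0)
    scores = np.array([fitness(ind) for ind in pop])
    return pop[scores.argmax()], scores.max()

best_profile, best_impulse = genetic_algorithm(impulse)
print(round(best_impulse, 3))
```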
Topology Optimization - Engineering Contribution to Architectural Design
NASA Astrophysics Data System (ADS)
Tajs-Zielińska, Katarzyna; Bochenek, Bogdan
2017-10-01
The idea of topology optimization is to find, within a considered design domain, the distribution of material that is optimal in some sense. During the optimization process, material is redistributed and parts that are not necessary from the objective's point of view are removed. The result is a solid/void structure for which an objective function is minimized. This paper presents an application of topology optimization to multi-material structures. The design domain, defined by the shape of a structure, is divided into sub-regions to which different materials are assigned. During the design process material is relocated, but only within the selected region. The proposed idea has been inspired by architectural designs such as multi-material facades of buildings. The effectiveness of topology optimization is determined by the proper choice of the numerical optimization algorithm. This paper utilises a very efficient heuristic method called Cellular Automata. Cellular Automata are a mathematical, discrete idealization of physical systems. An engineering implementation of Cellular Automata requires decomposition of the design domain into a uniform lattice of cells. It is assumed that the interaction between cells takes place only within the neighbouring cells. The interaction is governed by simple, local update rules, which are based on heuristics or physical laws. The numerical studies show that this method can be an attractive alternative to traditional gradient-based algorithms. The proposed approach is evaluated on selected numerical examples of multi-material bridge structures, for which various material configurations are examined. The numerical studies demonstrate a significant influence of the material sub-region locations on the final topologies. The influence of the assumed volume fraction on the final topologies of multi-material structures is also observed and discussed. The results of the numerical calculations show that this approach produces different results compared with classical one-material problems.
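As a toy illustration of the local update rules mentioned above, the sketch below applies a cellular-automaton step to a grid of material densities: each cell adjusts its density according to a locally averaged response measure. In the actual method that measure would come from a finite-element analysis of the structure; here a synthetic field is used, so this is only a demonstration of the update mechanics, not a working topology optimizer.

```python
import numpy as np

def cellular_automata_step(density, cell_measure, move=0.05):
    """One local CA update on a grid of material densities.

    Each cell looks only at its von Neumann neighbourhood: it averages a local
    response measure over itself and its neighbours (in the real method this
    would be, e.g., local strain energy from an FE analysis) and adds material
    where the measure is above the grid mean, removes it where it is below.
    """
    padded = np.pad(cell_measure, 1, mode="edge")
    neigh = (padded[1:-1, 1:-1] + padded[:-2, 1:-1] + padded[2:, 1:-1]
             + padded[1:-1, :-2] + padded[1:-1, 2:]) / 5.0
    step = move * np.sign(neigh - neigh.mean())
    return np.clip(density + step, 0.0, 1.0)

# Toy run: a synthetic "strain energy" field concentrated along a diagonal band
nx, ny = 40, 20
xx, yy = np.meshgrid(np.linspace(0, 1, nx), np.linspace(0, 1, ny))
measure = np.exp(-30.0 * (yy - 0.5 * xx) ** 2)
rho = np.full((ny, nx), 0.5)
for _ in range(20):
    rho = cellular_automata_step(rho, measure)
print(rho.mean(), rho.max(), rho.min())    # material migrates toward the high-response band
```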