Evaluation of Frameworks for HSCT Design Optimization
NASA Technical Reports Server (NTRS)
Krishnan, Ramki
1998-01-01
This report is an evaluation of engineering frameworks that could be used to augment, supplement, or replace the existing FIDO 3.5 (Framework for Interdisciplinary Design and Optimization Version 3.5) framework. The report begins with the motivation for this effort, followed by a description of an "ideal" multidisciplinary design and optimization (MDO) framework. The discussion then turns to how each candidate framework stacks up against this ideal. This report ends with recommendations as to the "best" frameworks that should be down-selected for detailed review.
Automated Design Framework for Synthetic Biology Exploiting Pareto Optimality.
Otero-Muras, Irene; Banga, Julio R
2017-04-12
In this work we consider Pareto optimality for automated design in synthetic biology. We present a generalized framework based on a mixed-integer dynamic optimization formulation that, given design specifications, allows the computation of Pareto optimal sets of designs, that is, the set of best trade-offs for the metrics of interest. We show how this framework can be used for (i) forward design, that is, finding the Pareto optimal set of synthetic designs for implementation, and (ii) reverse design, that is, analyzing and inferring motifs and/or design principles of gene regulatory networks from the Pareto set of optimal circuits. Finally, we illustrate the capabilities and performance of this framework considering four case studies. In the first problem we consider the forward design of an oscillator. In the remaining problems, we illustrate how to apply the reverse design approach to find motifs for stripe formation, rapid adaption, and fold-change detection, respectively.
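The Pareto optimal sets described above rest on the notion of non-domination. As a generic illustration (not the paper's mixed-integer dynamic optimization formulation), here is a minimal sketch of extracting the non-dominated designs from a finite candidate pool, assuming all metrics are to be minimized:

```python
def pareto_front(designs):
    """Return the non-dominated designs from a finite pool.

    Each design is a tuple of objective values, all minimized.  A design
    is dominated if some other design is no worse in every objective and
    strictly better in at least one.
    """
    front = []
    for i, d in enumerate(designs):
        dominated = any(
            all(o <= v for o, v in zip(other, d))
            and any(o < v for o, v in zip(other, d))
            for j, other in enumerate(designs)
            if j != i
        )
        if not dominated:
            front.append(d)
    return front

# Hypothetical two-metric trade-off (e.g. response time vs. protein cost)
candidates = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0)]
front = pareto_front(candidates)  # (3.0, 4.0) is dominated by (2.0, 3.0)
```

This brute-force filter is quadratic in the pool size; problems like those in the paper instead generate the Pareto set directly with a multiobjective solver rather than enumerating candidates.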
A Framework for Designing Optimal Spacecraft Formations
2002-09-01
[Excerpt from table of contents and text] 1. Reference Frame; B. Solving Optimal Control Problems. "...spacecraft state. Depending on the model, there may be additional variables in the state, but there will be a minimum of these six."
Deterministic Design Optimization of Structures in OpenMDAO Framework
NASA Technical Reports Server (NTRS)
Coroneos, Rula M.; Pai, Shantaram S.
2012-01-01
Nonlinear programming algorithms play an important role in structural design optimization. Several such algorithms have been implemented in the OpenMDAO framework developed at NASA Glenn Research Center (GRC). OpenMDAO is an open-source engineering analysis framework, written in Python, for analyzing and solving Multi-Disciplinary Analysis and Optimization (MDAO) problems. It provides a number of solvers and optimizers, referred to as components and drivers, which users can leverage to build new tools and processes quickly and efficiently. Users may download, use, modify, and distribute the OpenMDAO software at no cost. This paper summarizes the process involved in analyzing and optimizing structural components by utilizing the framework's structural solvers and several gradient-based optimizers along with a multi-objective genetic algorithm. For comparison purposes, the same structural components were analyzed and optimized using CometBoards, a code developed at NASA GRC. The reliability and efficiency of the OpenMDAO framework were compared and are reported here.
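As a toy illustration of the kind of constrained structural sizing such optimizers handle (a sketch only, not OpenMDAO or CometBoards code), consider an axially loaded bar: weight decreases monotonically with cross-sectional area, while the stress constraint bounds the area from below, so the constraint is active at the optimum:

```python
def size_bar(P, L, rho, sigma_allow):
    """Minimum-weight sizing of an axially loaded bar (toy problem).

    Weight = rho * A * L decreases with area A, while the stress
    constraint P / A <= sigma_allow bounds A from below, so the
    constraint is active at the optimum: A* = P / sigma_allow.
    """
    A = P / sigma_allow
    return A, rho * A * L

# e.g. a 100 kN load, 2 m bar, aluminum (2700 kg/m^3), 200 MPa allowable
A_opt, weight = size_bar(100e3, 2.0, 2700.0, 200e6)
```

Real structural problems couple many such constraints across members, which is why general nonlinear programming algorithms are needed instead of closed-form reasoning.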
A FRAMEWORK TO DESIGN AND OPTIMIZE CHEMICAL FLOODING PROCESSES
Mojdeh Delshad; Gary A. Pope; Kamy Sepehrnoori
2005-07-01
The goal of this proposed research is to provide an efficient and user-friendly simulation framework for screening and optimizing chemical/microbial enhanced oil recovery processes. The framework will include (1) a user-friendly interface to identify the variables that have the most impact on oil recovery using the concept of experimental design and response surface maps, (2) the UTCHEM reservoir simulator to perform the numerical simulations, and (3) an economic model that automatically imports the simulation production data to evaluate the profitability of a particular design. Such a reservoir simulation framework is not currently available to the oil industry. The objective of Task 1 is to develop three primary modules representing reservoir, chemical, and well data; the modules will be interfaced with an already available experimental design model. The objective of Task 2 is to incorporate the UTCHEM reservoir simulator and the modules with the strategic variables and to develop response surface maps that identify the significant variables from each module. The objective of Task 3 is to develop an economic model designed specifically for the chemical processes targeted in this proposal and to interface it with UTCHEM production output. Task 4 covers validation of the framework and simulations of oil reservoirs to screen, design, and optimize the chemical processes.
A Framework to Design and Optimize Chemical Flooding Processes
Mojdeh Delshad; Gary A. Pope; Kamy Sepehrnoori
2006-08-31
The goal of this proposed research is to provide an efficient and user-friendly simulation framework for screening and optimizing chemical/microbial enhanced oil recovery processes. The framework will include (1) a user-friendly interface to identify the variables that have the most impact on oil recovery using the concept of experimental design and response surface maps, (2) the UTCHEM reservoir simulator to perform the numerical simulations, and (3) an economic model that automatically imports the simulation production data to evaluate the profitability of a particular design. Such a reservoir simulation framework is not currently available to the oil industry. The objective of Task 1 is to develop three primary modules representing reservoir, chemical, and well data; the modules will be interfaced with an already available experimental design model. The objective of Task 2 is to incorporate the UTCHEM reservoir simulator and the modules with the strategic variables and to develop response surface maps that identify the significant variables from each module. The objective of Task 3 is to develop an economic model designed specifically for the chemical processes targeted in this proposal and to interface it with UTCHEM production output. Task 4 covers validation of the framework and simulations of oil reservoirs to screen, design, and optimize the chemical processes.
A FRAMEWORK TO DESIGN AND OPTIMIZE CHEMICAL FLOODING PROCESSES
Mojdeh Delshad; Gary A. Pope; Kamy Sepehrnoori
2004-11-01
The goal of this proposed research is to provide an efficient and user-friendly simulation framework for screening and optimizing chemical/microbial enhanced oil recovery processes. The framework will include (1) a user-friendly interface to identify the variables that have the most impact on oil recovery using the concept of experimental design and response surface maps, (2) the UTCHEM reservoir simulator to perform the numerical simulations, and (3) an economic model that automatically imports the simulation production data to evaluate the profitability of a particular design. Such a reservoir simulation framework is not currently available to the oil industry. The objective of Task 1 is to develop three primary modules representing reservoir, chemical, and well data; the modules will be interfaced with an already available experimental design model. The objective of Task 2 is to incorporate the UTCHEM reservoir simulator and the modules with the strategic variables and to develop response surface maps that identify the significant variables from each module. The objective of Task 3 is to develop an economic model designed specifically for the chemical processes targeted in this proposal and to interface it with UTCHEM production output. Task 4 covers validation of the framework and simulations of oil reservoirs to screen, design, and optimize the chemical processes.
A framework for robust flight control design using constrained optimization
NASA Technical Reports Server (NTRS)
Palazoglu, A.; Yousefpor, M.; Hess, R. A.
1992-01-01
An analytical framework is described for the design of feedback control systems to meet specified performance criteria in the presence of structured and unstructured uncertainty. Attention is focused upon the linear time invariant, single-input, single-output problem for the purposes of exposition. The framework provides for control of the degree of the stabilizing compensator or controller.
Optimal Aeroacoustic Shape Design Using the Surrogate Management Framework
2004-02-09
wish to thank the IMA for providing a forum for collaboration, as well as Charles Audet and Petros Koumoutsakos for valuable discussions. The authors...[17] N. Hansen, D. Müller, and P. Koumoutsakos. Reducing the time complexity of the derandomized evolution strategy with covariance matrix adaptation...P. Koumoutsakos. Optimal aeroacoustic shape design using approximation modeling. Annual Research Briefs, Center for Turbulence Research, Stanford
ROSE: The Design of a General Tool for the Independent Optimization of Object-Oriented Frameworks
Davis, K.; Philip, B.; Quinlan, D.
1999-05-18
ROSE represents a programmable preprocessor for the highly aggressive optimization of C++ object-oriented frameworks. A fundamental feature of ROSE is that it preserves the semantics, the implicit meaning, of the object-oriented framework's abstractions throughout the optimization process, permitting the framework's abstractions to be recognized and optimizations to capitalize upon the added value of the framework's true meaning. In contrast, a C++ compiler sees only the semantics of the C++ language and is thus severely limited in what optimizations it can introduce. The use of the semantics of the framework's abstractions avoids program analysis that would be incapable of recapturing the framework's full semantics from those of the C++ language implementation of the application or framework; for example, no level of program analysis within a C++ compiler could be expected to recognize the use of adaptive mesh refinement and introduce optimizations based upon such information. Since ROSE is programmable, additional specialized program analysis is possible, which then complements the semantics of the framework's abstractions. Enabling an optimization mechanism to use the high-level semantics of the framework's abstractions together with a programmable level of program analysis (e.g., dependence analysis), at the level of the framework's abstractions, allows for the design of high-performance object-oriented frameworks with uniquely tailored sophisticated optimizations far beyond the limits of contemporary serial FORTRAN 77, C, or C++ compiler technology. In short, faster, more highly aggressive optimizations are possible. The resulting optimizations are literally driven by the framework's definition of its abstractions. Since the abstractions within a framework are of third-party design, the optimizations are similarly of third-party design, specifically independent of the compiler and the applications that use the framework. The interface to ROSE is
NASA Astrophysics Data System (ADS)
Pavese, Christian; Tibaldi, Carlo; Larsen, Torben J.; Kim, Taeseong; Thomsen, Kenneth
2016-09-01
The aim is to provide a fast and reliable approach to estimate ultimate blade loads for a multidisciplinary design optimization (MDO) framework. For blade design purposes, the standards require a large amount of computationally expensive simulations, which cannot be efficiently run at each cost function evaluation of an MDO process. This work describes a method that allows integrating the calculation of the blade load envelopes inside an MDO loop. Ultimate blade load envelopes are calculated for a baseline design and a design obtained after an iteration of an MDO. These envelopes are computed for a full standard design load basis (DLB) and a deterministic reduced DLB. Ultimate loads extracted from the two DLBs for each of the two blade designs are compared and analyzed. Although the reduced DLB supplies ultimate loads of different magnitude, the shapes of the estimated envelopes are similar to those computed using the full DLB. This observation is used to propose a scheme that is computationally cheap and that can be integrated inside an MDO framework, providing a sufficiently reliable estimation of the blade's ultimate loading. The latter aspect is of key importance when design variables implementing passive control methodologies are included in the formulation of the optimization problem. An MDO of a 10 MW wind turbine blade is presented as an applied case study to show the efficacy of the reduced DLB concept.
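The load envelopes discussed above are channel-wise extremes over all simulated load cases. A minimal sketch of envelope extraction (illustrative only; the standard DLB processing involves far more, e.g. partial safety factors and case-dependent statistics):

```python
def load_envelope(load_cases):
    """Channel-wise ultimate load envelope across a set of load cases.

    load_cases maps a case name to a list of per-time-step load vectors
    (tuples of channel values, e.g. flapwise/edgewise moments); the
    envelope records the extreme values seen in each channel over all
    cases and time steps.
    """
    n_channels = len(next(iter(load_cases.values()))[0])
    lo = [float("inf")] * n_channels
    hi = [float("-inf")] * n_channels
    for samples in load_cases.values():
        for vec in samples:
            for i, v in enumerate(vec):
                lo[i] = min(lo[i], v)
                hi[i] = max(hi[i], v)
    return list(zip(lo, hi))

# Two hypothetical load cases, two load channels each
envelope = load_envelope({"dlc1": [(1.0, -2.0), (3.0, 0.0)],
                          "dlc2": [(-1.0, 5.0)]})
```

The reduced-DLB idea in the abstract amounts to computing this envelope over a much smaller, deterministic set of cases and correcting for the magnitude difference.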
A multiobjective optimization framework for multicontaminant industrial water network design.
Boix, Marianne; Montastruc, Ludovic; Pibouleau, Luc; Azzaro-Pantel, Catherine; Domenech, Serge
2011-07-01
The optimal design of multicontaminant industrial water networks according to several objectives is carried out in this paper. The general formulation of the water allocation problem (WAP) is given as a set of nonlinear equations with binary variables representing the presence of interconnections in the network. For optimization purposes, three antagonistic objectives are considered: F(1), the freshwater flow-rate at the network entrance; F(2), the water flow-rate at the inlet of regeneration units; and F(3), the number of interconnections in the network. The multiobjective problem is solved via a lexicographic strategy, where a mixed-integer nonlinear programming (MINLP) procedure is used at each step. The approach is illustrated by a numerical example taken from the literature involving five processes, one regeneration unit, and three contaminants. The set of potential network solutions is provided in the form of a Pareto front. Finally, the strategy for choosing the best network solution among those given by Pareto fronts is presented. This Multiple Criteria Decision Making (MCDM) problem is tackled by means of two approaches: a classical TOPSIS analysis is first implemented, and then an innovative strategy based on the global equivalent cost (GEC) in freshwater, which turns out to be more efficient for choosing a good network from a practical point of view.
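The lexicographic strategy can be illustrated on a finite candidate set: minimize the highest-priority objective, keep only the (near-)optimal candidates, then repeat with the next objective. A sketch with hypothetical network candidates (the actual WAP requires an MINLP solve at each stage, not enumeration):

```python
def lexicographic_select(candidates, objectives, tol=1e-9):
    """Lexicographic multiobjective selection over a finite candidate set.

    Objectives are minimized in priority order: each stage keeps only
    the candidates optimal (within tol) for the current objective and
    passes the survivors to the next objective.
    """
    pool = list(candidates)
    for f in objectives:
        best = min(f(c) for c in pool)
        pool = [c for c in pool if f(c) <= best + tol]
    return pool

# Hypothetical networks: freshwater flow F1, regeneration inflow F2, link count F3
nets = [
    {"F1": 10.0, "F2": 5.0, "F3": 7},
    {"F1": 10.0, "F2": 3.0, "F3": 9},
    {"F1": 12.0, "F2": 1.0, "F3": 2},
]
chosen = lexicographic_select(
    nets, [lambda c: c["F1"], lambda c: c["F2"], lambda c: c["F3"]]
)
```

Varying which objective values are held fixed (rather than strictly minimized) at each stage is what traces out the Pareto front mentioned in the abstract.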
A KBE-enabled design framework for cost/weight optimization study of aircraft composite structures
NASA Astrophysics Data System (ADS)
Wang, H.; La Rocca, G.; van Tooren, M. J. L.
2014-10-01
Traditionally, minimum weight is the objective when optimizing airframe structures. This optimization, however, does not consider the manufacturing cost, which actually determines the profit of the airframe manufacturer. To this purpose, a design framework has been developed that is able to perform cost/weight multi-objective optimization of an aircraft component, including large topology variations of the structural configuration. The key element of the proposed framework is a dedicated knowledge based engineering (KBE) application, called the multi-model generator, which enables modelling very different product configurations and variants and extracting all data required to feed the weight and cost estimation modules, in a fully automated fashion. The weight estimation method developed in this research work uses Finite Element Analysis to calculate the internal stresses of the structural elements and an analytical composite plate sizing method to determine their minimum required thicknesses. The manufacturing cost estimation module was developed on the basis of a cost model available in the literature. The capability of the framework was successfully demonstrated by designing and optimizing the composite structure of a business jet rudder. The case study indicates that the design framework is able to find the Pareto optimal set for minimum structural weight and manufacturing cost very quickly. Based on the Pareto set, the rudder manufacturer is in a position to conduct internal trade-off studies between minimum weight and minimum cost solutions, as well as to offer the OEM a full set of optimized options to choose from, rather than one feasible design.
A Robust and Reliability-Based Optimization Framework for Conceptual Aircraft Wing Design
NASA Astrophysics Data System (ADS)
Paiva, Ricardo Miguel
A robustness- and reliability-based multidisciplinary analysis and optimization framework for aircraft design is presented. Robust Design Optimization and Reliability Based Design Optimization are merged into a unified formulation which streamlines the setup of optimization problems and aims at preventing foreseeable implementation issues in uncertainty based design. Surrogate models are evaluated to circumvent the intensive computations resulting from using direct evaluation in nondeterministic optimization. Three types of models are implemented in the framework: quadratic interpolation, regression Kriging, and artificial neural networks. Regression Kriging presents the best compromise between performance and accuracy in deterministic wing design problems. The performance of the simultaneous implementation of robustness and reliability is evaluated using simple analytic problems and more complex wing design problems, revealing that performance benefits can still be achieved while satisfying probabilistic constraints rather than the simpler (and not as computationally intensive) robust constraints. The latter are proven to be unable to follow a reliability constraint as uncertainty in the input variables increases. The computational effort of the reliability analysis is further reduced through the implementation of a coordinate change in the respective optimization sub-problem. The computational tool developed is a stand-alone application and presents a user-friendly graphical user interface. The multidisciplinary analysis and design optimization tool includes modules for aerodynamic, structural, aeroelastic, and cost analysis that can be used either individually or coupled.
A general framework of marker design with optimal allocation to assess clinical utility.
Tang, Liansheng; Zhou, Xiao-Hua
2013-02-20
This paper proposes a general framework of marker validation designs, which includes most existing validation designs. The sample size calculation formulas for the proposed general design are derived on the basis of the optimal allocation that minimizes the expected number of treatment failures. The optimal allocation is especially important in the targeted design, which is often motivated by preliminary evidence that marker-positive patients respond to one treatment better than the other. Our sample size calculation also takes into account the classification error of a marker. Numerical studies are conducted to investigate the expected reduction in treatment failures and the relative efficiency between the targeted design and the traditional design based on the optimal ratios. We illustrate the calculation of the optimal allocation and sample sizes through a hypothetical stage II colon cancer trial.
A Hybrid Optimization Framework with POD-based Order Reduction and Design-Space Evolution Scheme
NASA Astrophysics Data System (ADS)
Ghoman, Satyajit S.
The main objective of this research is to develop an innovative multi-fidelity multi-disciplinary design, analysis and optimization suite that integrates certain solution generation codes and newly developed innovative tools to improve the overall optimization process. The research performed herein is divided into two parts: (1) the development of an MDAO framework by integration of variable fidelity physics-based computational codes, and (2) enhancements to such a framework by incorporating innovative features extending its robustness. The first part of this dissertation describes the development of a conceptual Multi-Fidelity Multi-Strategy and Multi-Disciplinary Design Optimization Environment (M3 DOE), in the context of aircraft wing optimization. M3 DOE provides the user a capability to optimize configurations with a choice of (i) the level of fidelity desired, (ii) the use of a single-step or multi-step optimization strategy, and (iii) a combination of a series of structural and aerodynamic analyses. The modularity of M3 DOE allows it to be a part of other inclusive optimization frameworks. M3 DOE is demonstrated within the context of shape and sizing optimization of the wing of a Generic Business Jet aircraft. Two different optimization objectives, viz. dry weight minimization and cruise range maximization, are studied by conducting one low-fidelity and two high-fidelity optimization runs to demonstrate the application scope of M3 DOE. The second part of this dissertation describes the development of an innovative hybrid optimization framework that extends the robustness of M3 DOE by employing a proper orthogonal decomposition-based design-space order reduction scheme combined with the evolutionary algorithm technique. The POD method of extracting dominant modes from an ensemble of candidate configurations is used for the design-space order reduction. The snapshot of candidate population is updated iteratively using evolutionary algorithm technique of
NASA Astrophysics Data System (ADS)
Fink, Wolfgang
2008-04-01
Many systems and processes, both natural and artificial, may be described by parameter-driven mathematical and physical models. We introduce a generally applicable Stochastic Optimization Framework (SOF) that can be interfaced to or wrapped around such models to optimize model outcomes by effectively "inverting" them. The Visual and Autonomous Exploration Systems Research Laboratory (http://autonomy.caltech.edu) at the California Institute of Technology (Caltech) has long-term experience in the optimization of multi-dimensional systems and processes. Several examples of successful application of a SOF are reviewed and presented, including biochemistry, robotics, device performance, mission design, parameter retrieval, and fractal landscape optimization. Applications of a SOF are manifold, such as in science, engineering, industry, defense & security, and reconnaissance/exploration. Keywords: multi-parameter optimization, design/performance optimization, gradient-based steepest-descent methods, local minima, global minimum, degeneracy, overlap parameter distribution, fitness function, stochastic optimization framework, Simulated Annealing, Genetic Algorithms, Evolutionary Algorithms, Genetic Programming, Evolutionary Computation, multi-objective optimization, Pareto-optimal front, trade studies
OpenMDAO: Framework for Flexible Multidisciplinary Design, Analysis and Optimization Methods
NASA Technical Reports Server (NTRS)
Heath, Christopher M.; Gray, Justin S.
2012-01-01
The OpenMDAO project is underway at NASA to develop a framework which simplifies the implementation of state-of-the-art tools and methods for multidisciplinary design, analysis and optimization. Foremost, OpenMDAO has been designed to handle variable problem formulations, encourage reconfigurability, and promote model reuse. This work demonstrates the concept of iteration hierarchies in OpenMDAO to achieve a flexible environment for supporting advanced optimization methods which include adaptive sampling and surrogate modeling techniques. In this effort, two efficient global optimization methods were applied to solve a constrained, single-objective and a constrained, multiobjective version of a joint aircraft/engine sizing problem. The aircraft model, NASA's next-generation advanced single-aisle civil transport, is being studied as part of the Subsonic Fixed Wing project to help meet simultaneous program goals for reduced fuel burn, emissions, and noise. This analysis serves as a realistic test problem to demonstrate the flexibility and reconfigurability offered by OpenMDAO.
SBROME: a scalable optimization and module matching framework for automated biosystems design.
Huynh, Linh; Tsoukalas, Athanasios; Köppe, Matthias; Tagkopoulos, Ilias
2013-05-17
The development of a scalable framework for biodesign automation is a formidable challenge given the expected increase in part availability and the ever-growing complexity of synthetic circuits. To allow for (a) the use of previously constructed and characterized circuits or modules and (b) the implementation of designs that can scale up to hundreds of nodes, we here propose a divide-and-conquer Synthetic Biology Reusable Optimization Methodology (SBROME). An abstract user-defined circuit is first transformed and matched against a module database that incorporates circuits that have previously been experimentally characterized. Then the resulting circuit is decomposed to subcircuits that are populated with the set of parts that best approximate the desired function. Finally, all subcircuits are subsequently characterized and deposited back to the module database for future reuse. We successfully applied SBROME toward two alternative designs of a modular 3-input multiplexer that utilize pre-existing logic gates and characterized biological parts.
Materials design by evolutionary optimization of functional groups in metal-organic frameworks
Collins, Sean P.; Daff, Thomas D.; Piotrkowski, Sarah S.; Woo, Tom K.
2016-01-01
A genetic algorithm that efficiently optimizes a desired physical or functional property in metal-organic frameworks (MOFs) by evolving the functional groups within the pores has been developed. The approach has been used to optimize the CO2 uptake capacity of 141 experimentally characterized MOFs under conditions relevant for postcombustion CO2 capture. A total search space of 1.65 trillion structures was screened, and 1035 derivatives of 23 different parent MOFs were identified as having exceptional CO2 uptakes of >3.0 mmol/g (at 0.15 atm and 298 K). Many well-known MOF platforms were optimized, with some, such as MIL-47, having their CO2 adsorption increase by more than 400%. The structures of the high-performing MOFs are provided as potential targets for synthesis. PMID:28138523
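As a rough illustration of the evolutionary approach (with made-up functional groups and a simple additive toy fitness standing in for computed CO2 uptake), here is a minimal elitist genetic algorithm over site-by-site group assignments:

```python
import random

def evolve(groups, n_sites, fitness, pop_size=30, generations=40, p_mut=0.1, seed=0):
    """Elitist GA over assignments of one functional group per site.

    A candidate is a tuple assigning a group (from `groups`) to each of
    n_sites substitution sites; the best half of each generation is kept
    and the rest replaced by mutated one-point crossovers of parents.
    """
    rng = random.Random(seed)
    pop = [tuple(rng.choice(groups) for _ in range(n_sites)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]            # elitism: keep the top half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)         # one-point crossover
            cut = rng.randrange(1, n_sites)
            child = list(a[:cut] + b[cut:])
            for i in range(n_sites):              # per-site mutation
                if rng.random() < p_mut:
                    child[i] = rng.choice(groups)
            children.append(tuple(child))
        pop = parents + children
    return max(pop, key=fitness)

# Hypothetical per-group contributions standing in for a computed CO2 uptake
SCORES = {"NH2": 3.0, "F": 2.0, "CH3": 1.0}
toy_fitness = lambda cand: sum(SCORES[g] for g in cand)
best = evolve(list(SCORES), 4, toy_fitness)
```

In the paper, fitness evaluation is the expensive step (a simulated gas uptake per candidate MOF), which is exactly why an evolutionary search over a trillion-structure space beats exhaustive screening.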
Framework for Multidisciplinary Analysis, Design, and Optimization with High-Fidelity Analysis Tools
NASA Technical Reports Server (NTRS)
Orr, Stanley A.; Narducci, Robert P.
2009-01-01
A plan is presented for the development of a high fidelity multidisciplinary optimization process for rotorcraft. The plan formulates individual disciplinary design problems, identifies practical high-fidelity tools and processes that can be incorporated in an automated optimization environment, and establishes statements of the multidisciplinary design problem including objectives, constraints, design variables, and cross-disciplinary dependencies. Five key disciplinary areas are selected in the development plan. These are rotor aerodynamics, rotor structures and dynamics, fuselage aerodynamics, fuselage structures, and propulsion / drive system. Flying qualities and noise are included as ancillary areas. Consistency across engineering disciplines is maintained with a central geometry engine that supports all multidisciplinary analysis. The multidisciplinary optimization process targets the preliminary design cycle where gross elements of the helicopter have been defined. These might include number of rotors and rotor configuration (tandem, coaxial, etc.). It is at this stage that sufficient configuration information is defined to perform high-fidelity analysis. At the same time there is enough design freedom to influence a design. The rotorcraft multidisciplinary optimization tool is built and substantiated throughout its development cycle in a staged approach by incorporating disciplines sequentially.
Accounting for Proof Test Data in a Reliability Based Design Optimization Framework
NASA Technical Reports Server (NTRS)
Venter, Gerhard; Scotti, Stephen J.
2012-01-01
This paper investigates the use of proof (or acceptance) test data during the reliability based design optimization of structural components. It is assumed that every component will be proof tested and that the component will only enter into service if it passes the proof test. The goal is to reduce the component weight, while maintaining high reliability, by exploiting the proof test results during the design process. The proposed procedure results in the simultaneous design of the structural component and the proof test itself and provides the designer with direct control over the probability of failing the proof test. The procedure is illustrated using two analytical example problems and the results indicate that significant weight savings are possible when exploiting the proof test results during the design process.
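The effect exploited here, that passing a proof test truncates the low tail of the strength distribution and thus raises in-service reliability, can be sketched with a small Monte Carlo experiment (hypothetical normal strength model, not the paper's procedure):

```python
import random

def service_reliability(n, proof_load, service_load, strength, seed=0):
    """Monte Carlo sketch of the proof-test effect on reliability.

    Components whose strength falls below the proof load never enter
    service, so the in-service failure probability conditions on
    strength > proof_load.  `strength(rng)` draws one strength sample.
    """
    rng = random.Random(seed)
    passed = failed = 0
    for _ in range(n):
        s = strength(rng)
        if s < proof_load:
            continue  # rejected by the proof test, never fielded
        passed += 1
        if s < service_load:
            failed += 1
    return 1.0 - failed / passed

# Hypothetical strength ~ Normal(10, 1); proof test at 9, service load 9.5
r = service_reliability(100_000, 9.0, 9.5, lambda rng: rng.gauss(10.0, 1.0))
```

Raising the proof load improves the conditional reliability but fails more components on the test stand; the paper's contribution is designing the component and the proof test level simultaneously to manage that trade.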
Lorenz, Romy; Monti, Ricardo Pio; Violante, Inês R.; Anagnostopoulos, Christoforos; Faisal, Aldo A.; Montana, Giovanni; Leech, Robert
2016-01-01
Functional neuroimaging typically explores how a particular task activates a set of brain regions. Importantly though, the same neural system can be activated by inherently different tasks. To date, there is no approach available that systematically explores whether and how distinct tasks probe the same neural system. Here, we propose and validate an alternative framework, the Automatic Neuroscientist, which turns the standard fMRI approach on its head. We use real-time fMRI in combination with modern machine-learning techniques to automatically design the optimal experiment to evoke a desired target brain state. In this work, we present two proof-of-principle studies involving perceptual stimuli. In both studies, optimization algorithms of varying complexity were employed; the first involved a stochastic approximation method while the second incorporated a more sophisticated Bayesian optimization technique. In the first study, we achieved convergence for the hypothesized optimum in 11 out of 14 runs in less than 10 min. Results of the second study showed how our closed-loop framework accurately and with high efficiency estimated the underlying relationship between stimuli and neural responses for each subject in one to two runs, with each run lasting 6.3 min. Moreover, we demonstrate that using only the first run produced a reliable solution at the group level. Supporting simulation analyses provided evidence on the robustness of the Bayesian optimization approach for scenarios with low contrast-to-noise ratio. This framework is generalizable to numerous applications, ranging from optimizing stimuli in neuroimaging pilot studies to tailoring clinical rehabilitation therapy to patients, and can be used with multiple imaging modalities in humans and animals. PMID:26804778
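The first study's stochastic approximation method can be sketched as a Kiefer-Wolfowitz loop: estimate the gradient of a noisy response by finite differences and ascend with decaying gain sequences (toy response model; the real system optimizes an fMRI-derived brain-state measure):

```python
import random

def stochastic_approximation(response, x0=0.0, iters=200, a=0.5, c=0.1, seed=0):
    """Kiefer-Wolfowitz scheme: maximize a noisy response by
    finite-difference gradient estimates with decaying step sizes."""
    rng = random.Random(seed)
    x = x0
    for k in range(1, iters + 1):
        ak = a / k                 # decaying step-size sequence
        ck = c / k ** (1.0 / 3.0)  # decaying finite-difference width
        g = (response(x + ck, rng) - response(x - ck, rng)) / (2.0 * ck)
        x += ak * g
    return x

# Toy "neural response": unimodal in the stimulus parameter, peak at 1.5
def toy_response(x, rng):
    return -(x - 1.5) ** 2 + rng.gauss(0.0, 0.05)

x_opt = stochastic_approximation(toy_response)
```

Bayesian optimization, the second study's method, replaces these local gradient probes with a global surrogate model of the stimulus-response map, which is what allowed convergence in one to two runs.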
Multi-Disciplinary Analysis and Optimization Frameworks
NASA Technical Reports Server (NTRS)
Naiman, Cynthia Gutierrez
2009-01-01
Since July 2008, the Multidisciplinary Analysis & Optimization Working Group (MDAO WG) of the Systems Analysis Design & Optimization (SAD&O) discipline in the Fundamental Aeronautics Program's Subsonic Fixed Wing (SFW) project completed one major milestone, "Define Architecture & Interfaces for Next Generation Open Source MDAO Framework" (9/30/08), and is completing the Generation 1 Framework validation milestone, which is due December 2008. Included in the presentation are details of progress on developing the Open MDAO framework, modeling and testing of the Generation 1 Framework, progress toward establishing partnerships with external parties, and discussion of additional potential collaborations.
A framework for simultaneous aerodynamic design optimization in the presence of chaos
NASA Astrophysics Data System (ADS)
Günther, Stefanie; Gauger, Nicolas R.; Wang, Qiqi
2017-01-01
Integrating existing solvers for unsteady partial differential equations into a simultaneous optimization method is challenging due to the forward-in-time information propagation of classical time-stepping methods. This paper applies the simultaneous single-step one-shot optimization method to a reformulated unsteady constraint that allows for both forward- and backward-in-time information propagation. Especially in the presence of chaotic and turbulent flow, solving the initial value problem simultaneously with the optimization problem often scales poorly with the time domain length. The new formulation relaxes the initial condition and instead solves a least squares problem for the discrete partial differential equations. This enables efficient one-shot optimization that is independent of the time domain length, even in the presence of chaos.
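The one-shot idea, advancing state, adjoint, and design together instead of fully converging the state solve before every design update, can be sketched on a toy steady constraint (illustrative only; the paper's contribution is the least-squares reformulation that makes this viable for chaotic unsteady problems):

```python
def one_shot(u0=1.0, lam0=0.0, p0=1.0, steps=500, lr=0.05):
    """Toy simultaneous ("one-shot") optimization.

    Minimize J(u, p) = u**2 subject to the steady fixed-point constraint
    u = g(u, p) = 0.5*u + p.  State u, adjoint lam, and design p are each
    advanced by a single step per iteration rather than converging the
    state solve before every design update.
    """
    u, lam, p = u0, lam0, p0
    for _ in range(steps):
        u_next = 0.5 * u + p            # one fixed-point step on the state
        lam_next = 2.0 * u + 0.5 * lam  # one adjoint step: dJ/du + lam * dg/du
        p -= lr * lam                   # reduced-gradient design step (dg/dp = 1)
        u, lam = u_next, lam_next
    return u, p

u_star, p_star = one_shot()  # the toy problem's optimum is u = p = 0
```

For a chaotic constraint the analogous coupled iteration diverges with time-domain length, which motivates relaxing the initial condition into a least-squares problem as the abstract describes.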
A Virtual Reality Framework to Optimize Design, Operation and Refueling of GEN-IV Reactors.
Rizwan-uddin; Nick Karancevic; Stefano Markidis; Joel Dixon; Cheng Luo; Jared Reynolds
2008-04-23
Many GEN-IV candidate designs are currently under investigation. Technical issues related to materials, safety, and economics are being addressed at research laboratories, in industry, and in academia. After safety, economic feasibility is likely to be the most important criterion in the success of GEN-IV design(s). Lessons learned from the designers and operators of GEN-II (and GEN-III) reactors must play a vital role in achieving both safety and economic feasibility goals.
NASA Technical Reports Server (NTRS)
Karman, Steve L., Jr.
2011-01-01
The Aeronautics Research Mission Directorate (ARMD) issued a NASA Research Announcement (NRA) soliciting proposals for research and technical development. The proposed research program was aimed at addressing the desired milestones and outcomes of ROA (ROA-2006) Subtopic A.4.1.1, Advanced Computational Methods. The second milestone, SUP.1.06.02, "Robust, validated mesh adaptation and error quantification for near-field Computational Fluid Dynamics (CFD)," was addressed by the proposed research. Additional research utilizing the direct links to geometry through a CAD interface enabled by this work will allow geometric constraints to be applied and will address the final milestone, SUP2.07.06, "Constrained low-drag supersonic aerodynamic design capability." The original product of the proposed research program was an integrated system of tools for the mesh mechanics required for rapid high-fidelity analysis and for design of supersonic cruise vehicles. These Euler and Navier-Stokes volume grid manipulation tools were proposed to make efficient use of parallel processing. The mesh adaptation provides a systematic approach for achieving demonstrated levels of accuracy in the solutions. NASA chose to fund only the mesh generation/adaptation portion of the proposal, so this report describes the completion of the proposed tasks for mesh creation, manipulation, and adaptation as they pertain to sonic boom prediction of supersonic configurations.
Eldred, Michael Scott; Vigil, Dena M.; Dalbey, Keith R.; Bohnhoff, William J.; Adams, Brian M.; Swiler, Laura Painton; Lefantzi, Sophia; Hough, Patricia Diane; Eddy, John P.
2011-12-01
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a theoretical manual for selected algorithms implemented within the DAKOTA software. It is not intended as a comprehensive theoretical treatment, since a number of existing texts cover general optimization theory, statistical analysis, and other introductory topics. Rather, this manual is intended to summarize a set of DAKOTA-related research publications in the areas of surrogate-based optimization, uncertainty quantification, and optimization under uncertainty that provide the foundation for many of DAKOTA's iterative analysis capabilities.
ERIC Educational Resources Information Center
Konstantopoulos, Spyros
2013-01-01
Large-scale experiments that involve nested structures may assign treatment conditions either to subgroups such as classrooms or to individuals such as students within subgroups. Key aspects of the design of such experiments include knowledge of the variance structure in higher levels and the sample sizes necessary to reach sufficient power to…
Panja, Surajit; Patra, Sourav; Mukherjee, Anirban; Basu, Madhumita; Sengupta, Sanghamitra; Dutta, Pranab K
2013-02-01
A robust synthesis technique is devised for synergism and saturation systems, commonly known as S-systems, for controlling the steady states of the glycolysis-glycogenolysis pathway. The development of the robust biochemical network is essential owing to the fragile response to the perturbation of intrinsic and extrinsic parameters of the nominal S-system. The synthesis problem is formulated in a computationally attractive convex optimization framework. The linear matrix inequalities are framed to aim at the minimization of steady-state error, improvement of robustness, and utilization of minimum control input to the biochemical network.
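As a minimal related illustration (not the paper's LMI synthesis), a Lyapunov equation can certify local asymptotic stability of a nominal steady state of a linearized network. The Jacobian below is hypothetical, chosen only so the sketch runs.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hypothetical Jacobian of a linearized S-system at its steady state
# (illustrative numbers; the paper's synthesis instead solves LMIs that
# trade off steady-state error, robustness, and control effort).
A = np.array([[-2.0, 0.5],
              [0.3, -1.5]])
Q = np.eye(2)

# Solve A^T P + P A = -Q; a symmetric positive definite P certifies local
# asymptotic stability of the steady state.
P = solve_continuous_lyapunov(A.T, -Q)
eigs = np.linalg.eigvalsh(P)
stable = bool(np.all(eigs > 0))
```

The LMI synthesis in the paper generalizes exactly this kind of matrix-inequality condition from analysis (checking P > 0) to design (searching for controller parameters that make such a P exist).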
Adams, Brian M.; Ebeida, Mohamed Salah; Eldred, Michael S.; Jakeman, John Davis; Swiler, Laura Painton; Stephens, John Adam; Vigil, Dena M.; Wildey, Timothy Michael; Bohnhoff, William J.; Eddy, John P.; Hu, Kenneth T.; Dalbey, Keith R.; Bauman, Lara E; Hough, Patricia Diane
2014-05-01
The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the Dakota software and provides capability overviews and procedures for software execution, as well as a variety of example studies.
An Optimization Framework for Driver Feedback Systems
Malikopoulos, Andreas; Aguilar, Juan P.
2013-01-01
Modern vehicles have sophisticated electronic control units that can control engine operation with discretion to balance fuel economy, emissions, and power. These control units are designed for specific driving conditions (e.g., different speed profiles for highway and city driving). However, individual driving styles are different and rarely match the specific driving conditions for which the units were designed. In the research reported here, we investigate driving-style factors that have a major impact on fuel economy and construct an optimization framework to optimize individual driving styles with respect to these driving factors. In this context, we construct a set of polynomial metamodels to reflect the responses produced in fuel economy by changing the driving factors. Then, we compare the optimized driving styles to the original driving styles and evaluate the effectiveness of the optimization framework. Finally, we use this proposed framework to develop a real-time feedback system, including visual instructions, to enable drivers to alter their driving styles in response to actual driving conditions to improve fuel efficiency.
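The metamodel step can be sketched as follows: fit a low-order polynomial to observed fuel-economy responses over one driving factor, then minimize it over the admissible range. The data, the factor, and the bounds are invented for illustration; the reported framework fits multivariate polynomial metamodels over several driving-style factors.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical observations: fuel consumption (L/100 km) versus one driving
# factor (mean acceleration, m/s^2). Illustrative values only.
accel = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
fuel = np.array([7.9, 7.2, 6.9, 7.1, 7.8, 8.9])

# Quadratic polynomial metamodel of the response surface.
coeffs = np.polyfit(accel, fuel, 2)
meta = np.poly1d(coeffs)

# Optimize the driving factor over its admissible range.
res = minimize_scalar(meta, bounds=(0.5, 3.0), method="bounded")
best_accel = res.x
```

Optimizing the cheap metamodel rather than the vehicle itself is what makes real-time feedback feasible; the metamodel is refit as new driving data arrives.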
Winglet design using multidisciplinary design optimization techniques
NASA Astrophysics Data System (ADS)
Elham, Ali; van Tooren, Michel J. L.
2014-10-01
A quasi-three-dimensional aerodynamic solver is integrated with a semi-analytical structural weight estimation method inside a multidisciplinary design optimization framework to design and optimize a winglet for a passenger aircraft. The winglet is optimized for minimum drag and minimum structural weight. The Pareto front between those two objective functions is found applying a genetic algorithm. The aircraft minimum take-off weight and the aircraft minimum direct operating cost are used to select the best winglets among those on the Pareto front.
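Extracting the Pareto front from a set of evaluated designs reduces to a non-dominated filter, sketched below with invented drag/weight models standing in for the paper's aerodynamic and structural solvers (the study itself uses a genetic algorithm to generate the front).

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical winglet candidates; the design variables, ranges, and the
# algebraic drag/weight models are illustrative only.
cant = rng.uniform(0.0, 90.0, 200)      # cant angle, deg
span = rng.uniform(0.5, 2.0, 200)       # winglet span, m
drag = 100.0 - 0.1 * cant - 8.0 * span + 0.05 * span * cant
weight = 50.0 + 0.2 * cant + 30.0 * span

def pareto_mask(f1, f2):
    # A design is Pareto-optimal if no other design is at least as good in
    # both objectives and strictly better in one.
    pts = np.column_stack([f1, f2])
    mask = np.ones(len(pts), dtype=bool)
    for i, p in enumerate(pts):
        dominated = np.any(np.all(pts <= p, axis=1) & np.any(pts < p, axis=1))
        mask[i] = not dominated
    return mask

front = pareto_mask(drag, weight)
```

Designs on `front` are the trade-off set from which a scalar criterion (here, take-off weight or direct operating cost) selects the final winglet.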
Conceptual design optimization study
NASA Technical Reports Server (NTRS)
Hollowell, S. J.; Beeman, E. R., II; Hiyama, R. M.
1990-01-01
The feasibility of applying multilevel functional decomposition and optimization techniques to conceptual design of advanced fighter aircraft was investigated. Applying the functional decomposition techniques to the conceptual design phase appears to be feasible. The initial implementation of the modified design process will optimize wing design variables. A hybrid approach, combining functional decomposition techniques for generation of aerodynamic and mass properties linear sensitivity derivatives with existing techniques for sizing mission performance and optimization, is proposed.
FRANOPP: Framework for analysis and optimization problems user's guide
NASA Technical Reports Server (NTRS)
Riley, K. M.
1981-01-01
Framework for analysis and optimization problems (FRANOPP) is a software aid for the study and solution of design (optimization) problems which provides the driving program and plotting capability for a user-generated programming system. In addition to FRANOPP, the programming system also contains the optimization code CONMIN and two user-supplied codes, one for analysis and one for output. With FRANOPP the user is provided with five options for studying a design problem. Three of the options utilize the plot capability and present an in-depth study of the design problem. The study can be focused on a history of the optimization process or on the interaction of variables within the design problem.
Structural Analysis in a Conceptual Design Framework
NASA Technical Reports Server (NTRS)
Padula, Sharon L.; Robinson, Jay H.; Eldred, Lloyd B.
2012-01-01
Supersonic aircraft designers must shape the outer mold line of the aircraft to improve multiple objectives, such as mission performance, cruise efficiency, and sonic-boom signatures. Conceptual designers have demonstrated an ability to assess these objectives for a large number of candidate designs. Other critical objectives and constraints, such as weight, fuel volume, aeroelastic effects, and structural soundness, are more difficult to address during the conceptual design process. The present research adds both static structural analysis and sizing to an existing conceptual design framework. The ultimate goal is to include structural analysis in the multidisciplinary optimization of a supersonic aircraft. Progress towards that goal is discussed and demonstrated.
An Optimization Framework for Dynamic Hybrid Energy Systems
Wenbo Du; Humberto E Garcia; Christiaan J.J. Paredis
2014-03-01
A computational framework for the efficient analysis and optimization of dynamic hybrid energy systems (HES) is developed. A microgrid system with multiple inputs and multiple outputs (MIMO) is modeled using the Modelica language in the Dymola environment. The optimization loop is implemented in MATLAB, with the FMI Toolbox serving as the interface between the computational platforms. Two characteristic optimization problems are selected to demonstrate the methodology and gain insight into the system performance. The first is an unconstrained optimization problem that optimizes the dynamic properties of the battery, reactor and generator to minimize variability in the HES. The second problem takes operating and capital costs into consideration by imposing linear and nonlinear constraints on the design variables. The preliminary optimization results obtained in this study provide an essential step towards the development of a comprehensive framework for designing HES.
Multidisciplinary design and optimization
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw
1992-01-01
Mutual couplings among the mathematical models of physical phenomena and parts of a system such as an aircraft complicate the design process because each contemplated design change may have a far reaching consequence throughout the system. This paper outlines techniques for computing these influences as system design derivatives useful to both judgmental and formal optimization purposes. The techniques facilitate decomposition of the design process into smaller, more manageable tasks and they form a methodology that can easily fit into existing engineering optimizations and incorporate their design tools.
NASA Technical Reports Server (NTRS)
Townsend, James C.; Weston, Robert P.; Eidson, Thomas M.
1993-01-01
The Framework for Interdisciplinary Design Optimization (FIDO) is a general programming environment for automating the distribution of complex computing tasks over a networked system of heterogeneous computers. For example, instead of manually passing a complex design problem between its diverse specialty disciplines, the FIDO system provides for automatic interactions between the discipline tasks and facilitates their communications. The FIDO system networks all the computers involved into a distributed heterogeneous computing system, so they have access to centralized data and can work on their parts of the total computation simultaneously in parallel whenever possible. Thus, each computational task can be done by the most appropriate computer. Results can be viewed as they are produced and variables changed manually for steering the process. The software is modular in order to ease migration to new problems: different codes can be substituted for each of the current code modules with little or no effect on the others. The potential for commercial use of FIDO rests in the capability it provides for automatically coordinating diverse computations on a networked system of workstations and computers. For example, FIDO could provide the coordination required for the design of vehicles or electronics or for modeling complex systems.
Optimization of digital designs
NASA Technical Reports Server (NTRS)
Whitaker, Sterling R. (Inventor); Miles, Lowell H. (Inventor)
2009-01-01
An application specific integrated circuit is optimized by translating a first representation of its digital design to a second representation. The second representation includes multiple syntactic expressions that admit a representation of a higher-order function of base Boolean values. The syntactic expressions are manipulated to form a third representation of the digital design.
Optimizing medical data quality based on multiagent web service framework.
Wu, Ching-Seh; Khoury, Ibrahim; Shah, Hemant
2012-07-01
One of the most important issues in e-healthcare information systems is to optimize the medical data quality extracted from distributed and heterogeneous environments, which can greatly improve diagnostic and treatment decision making. This paper proposes a multiagent web service framework based on service-oriented architecture for the optimization of medical data quality in the e-healthcare information system. Based on the design of the multiagent web service framework, an evolutionary algorithm (EA) for the dynamic optimization of the medical data quality is proposed. The framework consists of two main components; first, an EA will be used to dynamically optimize the composition of medical processes into an optimal task sequence according to specific quality attributes. Second, a multiagent framework will be proposed to discover, monitor, and report any inconsistency between the optimized task sequence and the actual medical records. To demonstrate the proposed framework, experimental results for a breast cancer case study are provided. Furthermore, to show the unique performance of our algorithm, a comparison with other works in the literature is presented.
A New Mathematical Framework for Design Under Uncertainty
2016-05-05
DARPA HR0011-14-1-0060 (Final Report): A new mathematical framework for design under uncertainty (9/8/14-12/7/15). PI: George Karniadakis, Brown... mathematically rigorous methods to combine these disparate information sources into a viable framework for the purpose of design and optimization. The
An Algorithmic Framework for Multiobjective Optimization
Ganesan, T.; Elamvazuthi, I.; Shaari, Ku Zilati Ku; Vasant, P.
2013-01-01
Multiobjective (MO) optimization is an emerging area that is increasingly encountered in many fields globally. Various metaheuristic techniques such as differential evolution (DE), genetic algorithms (GA), the gravitational search algorithm (GSA), and particle swarm optimization (PSO) have been used in conjunction with scalarization techniques such as the weighted-sum approach and the normal-boundary intersection (NBI) method to solve MO problems. Nevertheless, many challenges still arise, especially when dealing with problems with more than two objectives. In addition, extensive computational overhead emerges when dealing with hybrid algorithms. This paper addresses these issues by proposing an alternative framework that utilizes algorithmic concepts related to the problem structure to generate efficient and effective algorithms, with the aim of producing new high-performance algorithms with minimal computational overhead for MO optimization. PMID:24470795
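The weighted-sum scalarization mentioned above can be sketched in a few lines: sweeping the weight traces points on the Pareto front of two convex test objectives (invented here, not from the paper). Each scalarized problem is a standard single-objective minimization.

```python
import numpy as np
from scipy.optimize import minimize

# Two convex objectives with a genuine trade-off (illustrative test
# functions only).
f1 = lambda x: (x[0] - 1.0) ** 2 + x[1] ** 2
f2 = lambda x: x[0] ** 2 + (x[1] - 1.0) ** 2

# Sweep the weight to trace points on the Pareto front.
front = []
for w in np.linspace(0.05, 0.95, 10):
    res = minimize(lambda x: w * f1(x) + (1 - w) * f2(x), x0=[0.0, 0.0])
    front.append((f1(res.x), f2(res.x)))
front = np.array(front)
```

The weighted sum recovers only the convex portion of a Pareto front, which is one motivation for methods like NBI on nonconvex problems.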
Constrained Aeroacoustic Shape Optimization Using the Surrogate Management Framework
2003-12-01
412 A. L. Marsden, M. Wang & J. E. Dennis, Jr. rum for collaboration, as well as Charles Audet and Petros Koumoutsakos for valuable discussions...2003 Optimal aeroacoustic shape design using the surrogate management framework. Submitted for review. MARSDEN, A. L., WANG, M. & KOUMOUTSAKOS, P
Griffin, Joshua D. (Sandia National Labs, Livermore, CA); Eldred, Michael Scott; Martinez-Canales, Monica L.; Watson, Jean-Paul; Kolda, Tamara Gibson; Adams, Brian M.; Swiler, Laura Painton; Williams, Pamela J.; Hough, Patricia Diane; Gay, David M.; Dunlavy, Daniel M.; Eddy, John P.; Hart, William Eugene; Giunta, Anthony A.; Brown, Shannon L.
2006-10-01
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a reference manual for the commands specification for the DAKOTA software, providing input overviews, option descriptions, and example specifications.
Eldred, Michael Scott; Dalbey, Keith R.; Bohnhoff, William J.; Adams, Brian M.; Swiler, Laura Painton; Hough, Patricia Diane; Gay, David M.; Eddy, John P.; Haskell, Karen H.
2010-05-01
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a reference manual for the commands specification for the DAKOTA software, providing input overviews, option descriptions, and example specifications.
Reliability based design optimization: Formulations and methodologies
NASA Astrophysics Data System (ADS)
Agarwal, Harish
Modern products ranging from simple components to complex systems should be designed to be optimal and reliable. The challenge of modern engineering is to ensure that manufacturing costs are reduced and design cycle times are minimized while achieving requirements for performance and reliability. If the market for the product is competitive, improved quality and reliability can generate very strong competitive advantages. Simulation-based design plays an important role in designing almost any kind of automotive, aerospace, and consumer product under these competitive conditions. Single-discipline simulations used for analysis are being coupled together to create complex coupled simulation tools. This investigation focuses on the development of efficient and robust methodologies for reliability based design optimization in a simulation based design environment. Original contributions of this research are the development of a novel efficient and robust unilevel methodology for reliability based design optimization, the development of an innovative decoupled reliability based design optimization methodology, the application of homotopy techniques in the unilevel reliability based design optimization methodology, and the development of a new framework for reliability based design optimization under epistemic uncertainty. The unilevel methodology for reliability based design optimization is shown to be mathematically equivalent to the traditional nested formulation. Numerical test problems show that the unilevel methodology can reduce computational cost by at least 50% as compared to the nested approach. The decoupled reliability based design optimization methodology is an approximate technique for obtaining consistent reliable designs at lower computational expense. Test problems show that the methodology is computationally efficient compared to the nested approach. A framework for performing reliability based design optimization under epistemic uncertainty is also developed.
Adams, Brian M.; Ebeida, Mohamed Salah; Eldred, Michael S; Jakeman, John Davis; Swiler, Laura Painton; Stephens, John Adam; Vigil, Dena M.; Wildey, Timothy Michael; Bohnhoff, William J.; Eddy, John P.; Hu, Kenneth T.; Dalbey, Keith R.; Bauman, Lara E; Hough, Patricia Diane
2014-05-01
The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a theoretical manual for selected algorithms implemented within the Dakota software. It is not intended as a comprehensive theoretical treatment, since a number of existing texts cover general optimization theory, statistical analysis, and other introductory topics. Rather, this manual is intended to summarize a set of Dakota-related research publications in the areas of surrogate-based optimization, uncertainty quantification, and optimization under uncertainty that provide the foundation for many of Dakota's iterative analysis capabilities.
A Rigorous Framework for Optimization of Expensive Functions by Surrogates
NASA Technical Reports Server (NTRS)
Booker, Andrew J.; Dennis, J. E., Jr.; Frank, Paul D.; Serafini, David B.; Torczon, Virginia; Trosset, Michael W.
1998-01-01
The goal of the research reported here is to develop rigorous optimization algorithms to apply to some engineering design problems for which direct application of traditional optimization approaches is not practical. This paper presents and analyzes a framework for generating a sequence of approximations to the objective function and managing the use of these approximations as surrogates for optimization. The result is to obtain convergence to a minimizer of an expensive objective function subject to simple constraints. The approach is widely applicable because it does not require, or even explicitly approximate, derivatives of the objective. Numerical results are presented for a 31-variable helicopter rotor blade design example and for a standard optimization test example.
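A stripped-down version of the surrogate idea: fit a cheap model to all evaluations of the expensive function, minimize the surrogate, evaluate the truth there, and refit. The quadratic surrogate, the 1-D test function, and the fixed iteration budget are simplifications; the paper's framework manages surrogates inside a provably convergent derivative-free pattern-search method.

```python
import numpy as np

# Stand-in for the expensive objective (in the paper: a 31-variable
# helicopter rotor blade design code). Illustrative only.
def expensive(x):
    return (x - 0.3) ** 2 + 0.1 * np.sin(8 * x)

# Surrogate loop: fit a cheap quadratic to all samples, minimize it on a
# dense grid, evaluate the true function there, and refit.
X = list(np.linspace(0.0, 1.0, 3))
Y = [expensive(x) for x in X]
grid = np.linspace(0.0, 1.0, 1001)
for _ in range(8):
    c = np.polyfit(X, Y, 2)            # derivative-free: function values only
    x_new = grid[np.argmin(np.polyval(c, grid))]
    X.append(x_new)
    Y.append(expensive(x_new))

x_best = X[int(np.argmin(Y))]
```

The key property, shared with the rigorous framework, is that every surrogate-suggested point is verified with a true function evaluation before it is trusted.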
Topology Optimization for Architected Materials Design
NASA Astrophysics Data System (ADS)
Osanov, Mikhail; Guest, James K.
2016-07-01
Advanced manufacturing processes provide a tremendous opportunity to fabricate materials with precisely defined architectures. To fully leverage these capabilities, however, materials architectures must be optimally designed according to the target application, base material used, and specifics of the fabrication process. Computational topology optimization offers a systematic, mathematically driven framework for navigating this new design challenge. The design problem is posed and solved formally as an optimization problem with unit cell and upscaling mechanics embedded within this formulation. This article briefly reviews the key requirements to apply topology optimization to materials architecture design and discusses several fundamental findings related to optimization of elastic, thermal, and fluidic properties in periodic materials. Emerging areas related to topology optimization for manufacturability and manufacturing variations, nonlinear mechanics, and multiscale design are also discussed.
Gandolfi, F; Malleret, L; Sergent, M; Doumenq, P
2015-08-07
The water framework directives (WFD 2000/60/EC and 2013/39/EU) require European countries to monitor the quality of their aquatic environment. Among the priority hazardous substances targeted by the WFD, short-chain chlorinated paraffins C10-C13 (SCCPs) still represent an analytical challenge, because few laboratories are currently able to analyze them. Moreover, an annual average quality standard as low as 0.4 μg L(-1) was set for SCCPs in surface water. Therefore, to test for compliance, sensitive and reliable methods for the analysis of SCCPs in water are required. The aim of this work was to address this issue by evaluating automated solid-phase micro-extraction (SPME) combined on line with gas chromatography-electron capture negative ionization mass spectrometry (GC/ECNI-MS). Fiber polymer, extraction mode, ionic strength, extraction temperature, and extraction time were the most significant thermodynamic and kinetic parameters studied. To determine suitable working ranges for these factors, the extraction conditions were first studied using a classical one-factor-at-a-time approach. A mixed-level factorial 3×2(3) design was then performed to identify the most influential parameters and to estimate potential interaction effects between them. The most influential factors, i.e., extraction temperature and duration, were optimized using a second experimental design in order to maximize the chromatographic response. At the close of the study, a method involving headspace SPME (HS-SPME) coupled to GC/ECNI-MS is proposed. The optimum extraction conditions were sample temperature 90°C, extraction time 80 min, with the 100 μm PDMS fiber and desorption at 250°C for 2 min. A linear response from 0.2 ng mL(-1) to 10 ng mL(-1) with r(2)=0.99 and limits of detection and quantification of 4 pg mL(-1) and 120 pg mL(-1), respectively, in MilliQ water were achieved. The method proved to be applicable in different types of waters and show key advantages, such
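The mixed-level 3×2(3) screening design amounts to enumerating all level combinations, 3·2·2·2 = 24 runs. The factor names and levels below are illustrative placeholders, not the paper's exact settings.

```python
from itertools import product

# Mixed-level 3x2^3 factorial design, mirroring the screening design used
# for the SPME factors (factor names and levels are illustrative).
levels = {
    "extraction_temp": [40, 60, 90],   # 3-level factor
    "fiber": ["PDMS", "PDMS/DVB"],     # 2-level factors below
    "mode": ["direct", "headspace"],
    "salt": ["no", "yes"],
}
runs = [dict(zip(levels, combo)) for combo in product(*levels.values())]
```

Each entry of `runs` is one experimental condition; responses measured at these 24 runs support estimation of main effects and the interaction effects mentioned in the abstract.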
Griffin, Joshua D. (Sandia National Labs, Livermore, CA); Eldred, Michael Scott; Martinez-Canales, Monica L.; Watson, Jean-Paul; Kolda, Tamara Gibson; Giunta, Anthony Andrew; Adams, Brian M.; Swiler, Laura Painton; Williams, Pamela J. (Sandia National Labs, Livermore, CA); Hough, Patricia Diane (Sandia National Labs, Livermore, CA); Gay, David M.; Dunlavy, Daniel M.; Eddy, John P.; Hart, William Eugene; Brown, Shannon L.
2006-10-01
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the DAKOTA software and provides capability overviews and procedures for software execution, as well as a variety of example studies.
Eldred, Michael Scott; Dalbey, Keith R.; Bohnhoff, William J.; Adams, Brian M.; Swiler, Laura Painton; Hough, Patricia Diane; Gay, David M.; Eddy, John P.; Haskell, Karen H.
2010-05-01
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the DAKOTA software and provides capability overviews and procedures for software execution, as well as a variety of example studies.
Eldred, Michael Scott; Dalbey, Keith R.; Bohnhoff, William J.; Adams, Brian M.; Swiler, Laura Painton; Hough, Patricia Diane; Gay, David M.; Eddy, John P.; Haskell, Karen H.
2010-05-01
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a developers manual for the DAKOTA software and describes the DAKOTA class hierarchies and their interrelationships. It derives directly from annotation of the actual source code and provides detailed class documentation, including all member functions and attributes.
Griffin, Joshua D. (Sandia National Laboratories, Livermore, CA); Eldred, Michael Scott; Martinez-Canales, Monica L.; Watson, Jean-Paul; Kolda, Tamara Gibson (Sandia National Laboratories, Livermore, CA); Giunta, Anthony Andrew; Adams, Brian M.; Swiler, Laura Painton; Williams, Pamela J.; Hough, Patricia Diane (Sandia National Laboratories, Livermore, CA); Gay, David M.; Dunlavy, Daniel M.; Eddy, John P.; Hart, William Eugene; Brown, Shannon L.
2006-10-01
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a developers manual for the DAKOTA software and describes the DAKOTA class hierarchies and their interrelationships. It derives directly from annotation of the actual source code and provides detailed class documentation, including all member functions and attributes.
OPTIMAL NETWORK TOPOLOGY DESIGN
NASA Technical Reports Server (NTRS)
Yuen, J. H.
1994-01-01
This program was developed as part of a research study on topology design and performance analysis for the Space Station Information System (SSIS) network. It uses an efficient algorithm to generate candidate network designs (consisting of subsets of the set of all network components) in increasing order of their total costs, and checks each design to see if it forms an acceptable network. This technique yields the true cost-optimal network and is particularly useful when the network has many constraints and not too many components. The design technique is intended to consider all important performance measures explicitly and to take into account the constraints imposed by various technical feasibilities. In the current program, technical constraints are handled by the user, who forms the starting set of candidate components appropriately (e.g., infeasible links are not included). As subsets are generated, they are tested to see if they form an acceptable network by checking that all requirements are satisfied. Thus the first acceptable subset encountered gives the cost-optimal topology satisfying all given constraints. The user must sort the set of "feasible" link elements in increasing order of their costs. The program prompts the user for the following information for each link: 1) cost, 2) connectivity (number of stations connected by the link), and 3) the stations connected by that link. Unless instructed to stop, the program generates all possible acceptable networks in increasing order of their total costs. The program is written only to generate topologies that are simply connected. Tests on reliability, delay, and other performance measures are discussed in the documentation but have not been incorporated into the program. This program is written in PASCAL for interactive execution and has been implemented on an IBM PC series computer operating under PC DOS. The disk contains source code only. This program was developed in 1985.
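The search strategy this abstract describes (enumerate candidate link subsets in nondecreasing total cost; the first subset that forms an acceptable network is cost-optimal) can be sketched compactly. The link data and the connectivity-only acceptance test below are illustrative stand-ins for the program's PASCAL implementation and its fuller requirement checks.

```python
# Enumerate link subsets in nondecreasing total cost; the first subset
# that connects all stations is the cost-optimal topology.
import heapq

def connects_all(links, subset, n_stations):
    """Union-find check that the chosen links form one component."""
    parent = list(range(n_stations))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for i in subset:
        u, v = links[i][1], links[i][2]
        parent[find(u)] = find(v)
    return len({find(s) for s in range(n_stations)}) == 1

def optimal_topology(links, n_stations):
    """links: list of (cost, station_a, station_b), sorted by cost."""
    heap = [(links[0][0], (0,))]          # start with the cheapest link
    while heap:
        cost, subset = heapq.heappop(heap)
        if connects_all(links, subset, n_stations):
            return cost, subset
        last = subset[-1]
        if last + 1 < len(links):
            nxt = links[last + 1][0]
            # extend the subset, or replace its last (costliest) link
            heapq.heappush(heap, (cost + nxt, subset + (last + 1,)))
            heapq.heappush(heap, (cost - links[last][0] + nxt,
                                  subset[:-1] + (last + 1,)))
    return None
```

The heap-based expansion visits every nonempty subset exactly once, in cost order, which matches the abstract's claim that the first acceptable design encountered is cost-optimal.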
NASA Astrophysics Data System (ADS)
Bumberger, Jan; Paasche, Hendrik; Dietrich, Peter
2015-11-01
Systematic decomposition and evaluation of existing sensor systems, as well as the optimal design of future generations of direct push probes, are of high importance for optimized geophysical experiments, since the employed equipment constrains the data space. Direct push technologies have become established methods in the geophysical, geotechnical, hydrogeological, and environmental sciences for investigating the near subsurface. Direct push sensor systems make it possible to measure in-situ parameters with high vertical resolution. Such information is frequently used for quantitative geophysical model calibration or for the interpretation of geotechnical and hydrological subsurface conditions. Most of the available direct push sensor systems were developed largely through empirical testing and subsequently evaluated under field conditions. Approaches suitable for identifying specific characteristics and problems of direct push sensor systems have not yet been established. We develop a general systematic approach for the classification, analysis, and optimization of direct push sensor systems. First, a classification is presented for the different existing sensor systems. The subsequent systematic description, based on the conceptual decomposition of an existing sensor system into subsystems, is a suitable way to analyze and explore the transfer behavior of the system components and therefore of the complete system. This approach may also serve as a guideline for the synthesis and design of new, optimized direct push sensor systems.
Optimal optoacoustic detector design
NASA Technical Reports Server (NTRS)
Rosengren, L.-G.
1975-01-01
Optoacoustic detectors are used to measure pressure changes occurring in enclosed gases, liquids, or solids being excited by intensity or frequency modulated electromagnetic radiation. Radiation absorption spectra, collisional relaxation rates, substance compositions, and reactions can be determined from the time behavior of these pressure changes. Very successful measurements of gaseous air pollutants have, for instance, been performed by using detectors of this type together with different lasers. The measuring instrument consisting of radiation source, modulator, optoacoustic detector, etc. is often called spectrophone. In the present paper, a thorough optoacoustic detector optimization analysis based upon a review of its theory of operation is introduced. New quantitative rules and suggestions explaining how to design detectors with maximal pressure responsivity and over-all sensitivity and minimal background signal are presented.
ELPSA as a Lesson Design Framework
ERIC Educational Resources Information Center
Lowrie, Tom; Patahuddin, Sitti Maesuri
2015-01-01
This paper offers a framework for a mathematics lesson design that is consistent with the way we learn about, and discover, most things in life. In addition, the framework provides a structure for identifying how mathematical concepts and understanding are acquired and developed. This framework is called ELPSA and represents five learning…
Structural Optimization in automotive design
NASA Technical Reports Server (NTRS)
Bennett, J. A.; Botkin, M. E.
1984-01-01
Although mathematical structural optimization has been an active research area for twenty years, there has been relatively little penetration into the design process. Experience indicates that often this is due to the traditional layout-analysis design process. In many cases, optimization efforts have been outgrowths of analysis groups which are themselves appendages to the traditional design process. As a result, optimization is often introduced into the design process too late to have a significant effect because many potential design variables have already been fixed. A series of examples are given to indicate how structural optimization has been effectively integrated into the design process.
NASA Technical Reports Server (NTRS)
Axdahl, Erik L.
2015-01-01
Removing human interaction from design processes through automation may yield gains in both productivity and design precision. This memorandum describes efforts to incorporate high-fidelity numerical analysis tools into an automated framework and to apply that framework to applications of practical interest. The purpose of this effort was to integrate VULCAN-CFD into an automated, DAKOTA-enabled framework, with a proof-of-concept application being the optimization of supersonic test facility nozzles. It was shown that the optimization framework could be deployed on a high performance computing cluster with the flow of information handled effectively to guide the optimization process. Furthermore, the application of the framework to supersonic test facility nozzle flowpath design and optimization was demonstrated using multiple optimization algorithms.
Design of Optimally Robust Control Systems.
1980-01-01
approach is that the optimization framework is an artificial device. While some design constraints can easily be incorporated into a single cost function...indicating that that point was indeed the solution. Also, an intelligent initial guess for k was important in order to avoid being hung up at the double
NASA Technical Reports Server (NTRS)
Allan, Brian; Owens, Lewis
2010-01-01
In support of the Blended-Wing-Body aircraft concept, a new flow control hybrid vane/jet design has been developed for use in a boundary-layer-ingesting (BLI) offset inlet in transonic flows. This inlet flow control is designed to minimize the engine fan-face distortion levels and the first five Fourier harmonic half amplitudes while maximizing the inlet pressure recovery. This concept represents a potentially enabling technology for quieter and more environmentally friendly transport aircraft. An optimum vane design was found by minimizing the engine fan-face distortion, DC60, and the first five Fourier harmonic half amplitudes, while maximizing the total pressure recovery. The optimal vane design was then used in a BLI inlet wind tunnel experiment at NASA Langley's 0.3-meter transonic cryogenic tunnel. The experimental results demonstrated an 80-percent decrease in DPCPavg, the average circumferential distortion level, at an inlet mass flow rate corresponding to the middle of the operational range at the cruise condition. Even though the vanes were designed at a single inlet mass flow rate, they performed very well over the entire inlet mass flow range tested in the wind tunnel experiment with the addition of a small amount of jet flow control. While the circumferential distortion was decreased, the radial distortion on the outer rings at the aerodynamic interface plane (AIP) increased. This was a result of the large boundary layer being redistributed from the bottom of the AIP in the baseline case to the outer edges of the AIP when using the vortex generator (VG) vane flow control. The hybrid approach leverages the strengths of vane and jet flow control devices, increasing inlet performance over a broader operational range with a significant reduction in mass flow requirements. Minimal distortion level requirements
NASA Astrophysics Data System (ADS)
Hamza, Karim; Shalaby, Mohamed
2014-09-01
This article presents a framework for simulation-based design optimization of computationally expensive problems, where economizing the generation of sample designs is highly desirable. One popular approach for such problems is efficient global optimization (EGO), where an initial set of design samples is used to construct a kriging model, which is then used to generate new 'infill' sample designs at regions of the search space where there is high expectancy of improvement. This article attempts to address one of the limitations of EGO, namely that the generation of infill samples can become a difficult optimization problem in its own right, as well as to allow the generation of multiple samples at a time in order to take advantage of parallel computing in the evaluation of the new samples. The proposed approach is tested on analytical functions and then applied to the vehicle crashworthiness design of a full Geo Metro model undergoing frontal crash conditions.
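The expected-improvement infill criterion at the core of EGO can be written compactly. The sketch below assumes the surrogate's predictive mean and standard deviation are supplied directly (a kriging model would normally produce them), and the top-k selection is only a crude stand-in for the article's multiple-infill generation.

```python
# Expected improvement (EI) for a minimization problem, given surrogate
# predictions; a kriging model would supply (mu, sigma) in practice.
import math

def normal_pdf(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def normal_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def expected_improvement(mu, sigma, f_best):
    """EI = E[max(f_best - Y, 0)] with Y ~ N(mu, sigma^2)."""
    if sigma <= 0.0:
        return max(f_best - mu, 0.0)      # deterministic prediction
    z = (f_best - mu) / sigma
    return (f_best - mu) * normal_cdf(z) + sigma * normal_pdf(z)

def top_k_infill(candidates, f_best, k):
    """Rank candidate points (mu, sigma) by EI and return the k best
    indices -- a crude stand-in for generating multiple infill samples
    for parallel evaluation."""
    scored = sorted(range(len(candidates)),
                    key=lambda i: -expected_improvement(*candidates[i], f_best))
    return scored[:k]
```

EI balances exploitation (low predicted mean) against exploration (high predictive uncertainty), which is what makes maximizing it over the search space a nontrivial optimization problem in its own right.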
Talking about Multimedia: A Layered Design Framework.
ERIC Educational Resources Information Center
Taylor, Josie; Sumner, Tamara; Law, Andrew
1997-01-01
Describes a layered analytical framework for discussing design and educational issues that can be shared by the many different stakeholders involved in educational multimedia design and deployment. Illustrates the framework using a detailed analysis of the Galapagos Pilot project of the Open University science faculty which examines the processes…
Design Optimization Toolkit: Users' Manual
Aguilo Valentin, Miguel Alejandro
2014-07-01
The Design Optimization Toolkit (DOTk) is a stand-alone C++ software package intended to solve complex design optimization problems. The DOTk software package provides a range of solution methods suited for gradient/nongradient-based optimization, large-scale constrained optimization, and topology optimization. DOTk was designed with a flexible user interface to allow easy access to DOTk solution methods from external engineering software packages. This inherent flexibility makes DOTk barely intrusive to other engineering software packages. As part of this flexibility, the DOTk software package provides an easy-to-use MATLAB interface that enables users to call DOTk solution methods directly from the MATLAB command window.
Hansborough, L.; Hamm, R.; Stovall, J.; Swenson, D.
1980-01-01
PIGMI (Pion Generator for Medical Irradiations) is a compact linear proton accelerator design, optimized for pion production and cancer treatment use in a hospital environment. Technology developed during a four-year PIGMI Prototype experimental program allows the design of smaller, less expensive, and more reliable proton linacs. A new type of low-energy accelerating structure, the radio-frequency quadrupole (RFQ), has been tested; it produces an exceptionally good-quality beam and allows the use of a simple 30-kV injector. Average axial electric-field gradients of over 9 MV/m have been demonstrated in a drift-tube linac (DTL) structure. Experimental work is underway to test the disk-and-washer (DAW) structure, another new type of accelerating structure for use in the high-energy coupled-cavity linac (CCL). Sufficient experimental and developmental progress has been made to closely define an actual PIGMI. It will consist of a 30-kV injector, an RFQ linac to a proton energy of 2.5 MeV, a DTL linac to 125 MeV, and a CCL linac to the final energy of 650 MeV. The total length of the accelerator is 133 meters. The RFQ and DTL will be driven by a single 440-MHz klystron; the CCL will be driven by six 1320-MHz klystrons. The peak beam current is 28 mA. The beam pulse length is 60 μs at a 60-Hz repetition rate, resulting in a 100-μA average beam current. The total cost of the accelerator is estimated to be approximately $10 million.
Integrated controls design optimization
Lou, Xinsheng; Neuschaefer, Carl H.
2015-09-01
A control system (207) for optimizing a chemical looping process of a power plant includes an optimizer (420), an income algorithm (230), a cost algorithm (225), and chemical looping process models. The process models are used to predict the process outputs from process input variables. Some of the process input and output variables are related to the income of the plant; others are related to the cost of the plant operations. The income algorithm (230) provides an income input to the optimizer (420) based on a plurality of input parameters (215) of the power plant. The cost algorithm (225) provides a cost input to the optimizer (420) based on a plurality of output parameters (220) of the power plant. The optimizer (420) determines an optimized operating parameter solution based on at least one of the income input and the cost input, and supplies the optimized operating parameter solution to the power plant.
Comparison of Optimal Design Methods in Inverse Problems.
Banks, H T; Holm, Kathleen; Kappel, Franz
2011-07-01
Typical optimal design methods for inverse or parameter estimation problems are designed to choose optimal sampling distributions through minimization of a specific cost function related to the resulting error in parameter estimates. It is hoped that the inverse problem will produce parameter estimates with increased accuracy using data collected according to the optimal sampling distribution. Here we formulate the classical optimal design problem in the context of general optimization problems over distributions of sampling times. We present a new Prohorov metric based theoretical framework that permits one to treat succinctly and rigorously any optimal design criterion based on the Fisher Information Matrix (FIM). A fundamental approximation theory is also included in this framework. A new optimal design, SE-optimal design (standard error optimal design), is then introduced in the context of this framework. We compare this new design criterion with the more traditional D-optimal and E-optimal designs. The optimal sampling distributions from each design are used to compute and compare standard errors; the standard errors for parameters are computed using asymptotic theory or bootstrapping and the optimal mesh. We use three examples to illustrate the ideas: the Verhulst-Pearl logistic population model [13], the standard harmonic oscillator model [13] and a popular glucose regulation model [16, 19, 29].
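A minimal sketch of the FIM machinery the abstract refers to, applied to the Verhulst-Pearl logistic model it cites: the parameter values, Gaussian noise model, and finite-difference sensitivities below are illustrative assumptions, not the authors' formulation.

```python
# Fisher Information Matrix for the logistic model x(t) = K*x0 / (x0 +
# (K - x0) exp(-r t)), parameters (K, r), via finite-difference
# sensitivities; parameter values and noise level are made up.
import math

def logistic(t, K, r, x0=1.0):
    return K * x0 / (x0 + (K - x0) * math.exp(-r * t))

def fim(times, K, r, sigma=1.0, h=1e-6):
    """2x2 FIM over the sampling times, i.i.d. Gaussian noise."""
    m = [[0.0, 0.0], [0.0, 0.0]]
    for t in times:
        dK = (logistic(t, K + h, r) - logistic(t, K - h, r)) / (2 * h)
        dr = (logistic(t, K, r + h) - logistic(t, K, r - h)) / (2 * h)
        g = (dK, dr)
        for i in range(2):
            for j in range(2):
                m[i][j] += g[i] * g[j] / sigma ** 2
    return m

def d_criterion(m):
    """D-optimality score: determinant of the 2x2 FIM."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]
```

Comparing `d_criterion` across candidate time grids selects a D-optimal schedule; an SE-style criterion would instead work from the standard errors, i.e., the diagonal of the inverse FIM.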
Thompson, Kimberly M; Duintjer Tebbens, Radboud J
2016-07-01
Managing the dynamics of vaccine supply and demand represents a significant challenge with very high stakes. Insufficient vaccine supplies can necessitate rationing, lead to preventable adverse health outcomes, delay the achievements of elimination or eradication goals, and/or pose reputation risks for public health authorities and/or manufacturers. This article explores the dynamics of global vaccine supply and demand to consider the opportunities to develop and maintain optimal global vaccine stockpiles for universal vaccines, characterized by large global demand (for which we use measles vaccines as an example), and nonuniversal (including new and niche) vaccines (for which we use oral cholera vaccine as an example). We contrast our approach with other vaccine stockpile optimization frameworks previously developed for the United States pediatric vaccine stockpile to address disruptions in supply and global emergency response vaccine stockpiles to provide on-demand vaccines for use in outbreaks. For measles vaccine, we explore the complexity that arises due to different formulations and presentations of vaccines, consideration of rubella, and the context of regional elimination goals. We conclude that global health policy leaders and stakeholders should procure and maintain appropriate global vaccine rotating stocks for measles and rubella vaccine now to support current regional elimination goals, and should probably also do so for other vaccines to help prevent and control endemic or epidemic diseases. This work suggests the need to better model global vaccine supplies to improve efficiency in the vaccine supply chain, ensure adequate supplies to support elimination and eradication initiatives, and support progress toward the goals of the Global Vaccine Action Plan.
Multidisciplinary Design Optimization on Conceptual Design of Aero-engine
NASA Astrophysics Data System (ADS)
Zhang, Xiao-bo; Wang, Zhan-xue; Zhou, Li; Liu, Zeng-wen
2016-06-01
In order to obtain better integrated performance of an aero-engine during the conceptual design stage, multiple disciplines such as aerodynamics, structures, weight, and aircraft mission must be considered. Unfortunately, the couplings between these disciplines make the problem difficult to model or solve by conventional methods. MDO (Multidisciplinary Design Optimization) methodology, which deals well with coupled disciplines, is adopted to solve this coupled problem. Approximation methods, optimization methods, coordination methods, and modeling methods for the MDO framework are analyzed in depth. To obtain a more efficient MDO framework, an improved CSSO (Concurrent Subspace Optimization) strategy based on DOE (Design of Experiments) and RSM (Response Surface Model) methods is proposed in this paper, and an improved DE (Differential Evolution) algorithm is recommended for solving the system-level and discipline-level optimization problems in the MDO framework. The improved CSSO strategy and DE algorithm are evaluated on a numerical test problem. The results show that the efficiency of the improved methods proposed in this paper is significantly increased. The coupled problem of VCE (Variable Cycle Engine) conceptual design is solved using the improved CSSO strategy, and the design parameters it produces are better than the original ones. The integrated performance of the VCE is significantly improved.
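For reference, a baseline DE/rand/1/bin differential-evolution loop (the kind of optimizer the paper improves on) can be sketched as follows; the settings and test function are illustrative, not the authors' improved DE.

```python
# Baseline DE/rand/1/bin: mutate with a scaled difference of two random
# members, crossover per coordinate, keep the trial if it is no worse.
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9,
                           generations=200, seed=1):
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(pop_size)]
    cost = [f(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)        # force one mutated gene
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == j_rand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)    # clip to bounds
                else:
                    v = pop[i][j]
                trial.append(v)
            fc = f(trial)
            if fc <= cost[i]:                  # greedy selection
                pop[i], cost[i] = trial, fc
    best = min(range(pop_size), key=cost.__getitem__)
    return pop[best], cost[best]
```

On a smooth test function such as the sphere, this baseline converges reliably; improvements of the kind the paper proposes typically target convergence speed and robustness on harder, coupled problems.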
NASA Astrophysics Data System (ADS)
Mikhalchenko, V. V.; Rubanik, Yu T.
2016-10-01
This work addresses the problem of cost-effective adaptation of coal mines to volatile and uncertain market conditions. Conceptually, this can be achieved by aligning the dynamic characteristics of the coal mining system with the power spectrum of market demand for the coal product. In practical terms, this ensures the viability and competitiveness of coal mines. The transformation of dynamic characteristics is to be accomplished by changing the structure of the production system as well as the corporate, logistics, and management processes. The proposed control methods and algorithms are aimed at developing the theoretical foundations of adaptive optimization as a basic methodology for coal mine enterprise management under high variability and uncertainty of the economic and natural environment. Implementation of the proposed methodology requires a revision of the basic principles by which open coal mining enterprises are designed.
NASA Technical Reports Server (NTRS)
Santala, T.; Sabol, R.; Carbajal, B. G.
1978-01-01
The minimum cost per unit of power output from flat plate solar modules can most likely be achieved through efficient packaging of higher efficiency solar cells. This paper outlines a module optimization method that is broadly applicable and illustrates the potential results achievable with a specific high-efficiency tandem junction (TJ) cell. A mathematical model is used to assess the impact of various factors influencing the encapsulated cell and packing efficiency. The optimization of the packing efficiency is demonstrated. The effect of encapsulated cell and packing efficiency on the module add-on cost is shown in nomograph form.
Multidisciplinary Optimization Methods for Preliminary Design
NASA Technical Reports Server (NTRS)
Korte, J. J.; Weston, R. P.; Zang, T. A.
1997-01-01
An overview of multidisciplinary optimization (MDO) methodology and two applications of this methodology to the preliminary design phase are presented. These applications are being undertaken to improve, develop, validate and demonstrate MDO methods. Each is presented to illustrate different aspects of this methodology. The first application is an MDO preliminary design problem for defining the geometry and structure of an aerospike nozzle of a linear aerospike rocket engine. The second application demonstrates the use of the Framework for Interdisciplinary Design Optimization (FIDO), which is a computational environment system, by solving a preliminary design problem for a High-Speed Civil Transport (HSCT). The two sample problems illustrate the advantages to performing preliminary design with an MDO process.
Dynamic optimization and adaptive controller design
NASA Astrophysics Data System (ADS)
Inamdar, S. R.
2010-10-01
In this work I present a new type of adaptive tracking controller that employs dynamic optimization to compute the current value of the control action for temperature control of a nonisothermal continuous stirred tank reactor (CSTR). We begin with a two-state model of the nonisothermal CSTR, consisting of the mass and heat balance equations, and then add the cooling system dynamics to eliminate input multiplicity. The initial design value is obtained using local stability of steady states, where the approach temperature for cooling action is specified both as a steady state and as a design specification. Later, we make a correction in the dynamics in which the material balance is manipulated so that feed concentration serves as a system parameter, providing an adaptive control measure that avoids actuator saturation in the main control loop. The analysis leading to the design of the dynamic-optimization-based parameter-adaptive controller is presented. An important component of this mathematical framework is the generation of a reference trajectory to form an adaptive control measure.
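The two-state CSTR model the abstract starts from can be simulated directly. The sketch below uses textbook-style parameter values for an exothermic CSTR and a plain proportional coolant law as a stand-in for the author's dynamic-optimization-based adaptive controller; all numbers are assumptions for illustration.

```python
# Two-state nonisothermal CSTR (mass and energy balances), forward-Euler
# integration, with a simple proportional coolant-temperature law.
import math

def simulate_cstr(T_set=330.0, t_end=20.0, dt=0.005):
    # Assumed, textbook-style parameters (per minute, Kelvin)
    q_V, Caf, Tf = 1.0, 1.0, 350.0        # throughput and feed
    k0, E_R = 7.2e10, 8750.0              # Arrhenius kinetics
    dH_rhoCp, UA_VrhoCp = 209.2, 2.09     # heat generation / removal
    Kc, Tc_nom = 1.0, 300.0               # proportional coolant law
    Ca, T = 0.5, 350.0                    # initial state
    for _ in range(int(t_end / dt)):
        Tc = min(max(Tc_nom + Kc * (T_set - T), 250.0), 350.0)
        k = k0 * math.exp(-E_R / T)       # reaction rate constant
        dCa = q_V * (Caf - Ca) - k * Ca                   # mass balance
        dT = (q_V * (Tf - T) + dH_rhoCp * k * Ca          # heat balance
              + UA_VrhoCp * (Tc - T))
        Ca += dt * dCa
        T += dt * dT
    return Ca, T
```

A proportional law leaves a steady-state offset from `T_set`; the point of the article's dynamic-optimization approach is precisely to compute a better control action at each step than such a fixed-gain rule provides.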
Multifidelity Analysis and Optimization for Supersonic Design
NASA Technical Reports Server (NTRS)
Kroo, Ilan; Willcox, Karen; March, Andrew; Haas, Alex; Rajnarayan, Dev; Kays, Cory
2010-01-01
Supersonic aircraft design is a computationally expensive optimization problem, and multifidelity approaches offer a significant opportunity to reduce design time and computational cost. This report presents tools developed to improve supersonic aircraft design capabilities, including aerodynamic tools for supersonic aircraft configurations, a systematic way to manage model uncertainty, and multifidelity model management concepts that incorporate uncertainty. The aerodynamic analysis tools developed are appropriate for use in a multifidelity optimization framework and include four analysis routines to estimate the lift and drag of a supersonic airfoil, as well as a multifidelity supersonic drag code that estimates the drag of aircraft configurations with three different methods: an area rule method, a panel method, and an Euler solver. In addition, five multifidelity optimization methods are developed, including local and global methods as well as gradient-based and gradient-free techniques.
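One common multifidelity model-management idea in this setting is to correct a cheap model so it matches the expensive one locally (value and slope), then search the corrected model inside a trust region. The 1-D sketch below, with toy functions and a grid search, is illustrative of that idea rather than of the report's actual methods.

```python
# Trust-region search on a first-order additive correction: the cheap
# model f_lo is shifted and tilted to match f_hi at the current point.
def fd_grad(f, x, h=1e-5):
    """Central finite-difference derivative."""
    return (f(x + h) - f(x - h)) / (2 * h)

def trust_region_multifidelity(f_hi, f_lo, x0, radius=1.0, iters=40):
    x = x0
    for _ in range(iters):
        a = f_hi(x) - f_lo(x)                  # value mismatch
        b = fd_grad(f_hi, x) - fd_grad(f_lo, x)  # slope mismatch
        corrected = lambda y, x=x, a=a, b=b: f_lo(y) + a + b * (y - x)
        # crude 1-D search: sample the trust region on a grid
        candidates = [x + radius * (i - 10) / 10.0 for i in range(21)]
        x_new = min(candidates, key=corrected)
        if f_hi(x_new) < f_hi(x):              # verified improvement
            x, radius = x_new, radius * 2.0
        else:
            radius *= 0.5                      # shrink and retry
    return x
```

Because most trial points are evaluated only on the corrected cheap model, the expensive model is consulted just once per accepted step, which is the economy multifidelity methods exploit.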
Intelligent Frameworks for Instructional Design.
ERIC Educational Resources Information Center
Spector, J. Michael; And Others
Many researchers are attempting to develop automated instructional development systems to guide subject matter experts through the lengthy and difficult process of courseware development. Because the targeted users often lack instructional design expertise, a great deal of emphasis has been placed on the use of artificial intelligence (AI) to…
Intelligent Frameworks for Instructional Design.
ERIC Educational Resources Information Center
Spector, J. Michael; And Others
1992-01-01
Presents a taxonomy describing various uses of artificial intelligence techniques in automated instructional development systems. Instructional systems development is discussed in relation to the design of computer-based instructional courseware; two systems being developed at the Air Force Armstrong Laboratory are reviewed; and further research…
Design optimization of transonic airfoils
NASA Technical Reports Server (NTRS)
Joh, C.-Y.; Grossman, B.; Haftka, R. T.
1991-01-01
Numerical optimization procedures were considered for the design of airfoils in transonic flow based on the transonic small disturbance (TSD) and Euler equations. A sequential approximation optimization technique was implemented with an accurate approximation of the wave drag based on Nixon's coordinate straining approach. A modification of the Euler surface boundary conditions was implemented in order to compute design sensitivities efficiently without remeshing the grid. Two effective design procedures, each producing converged designs in approximately 10 global iterations, were developed: one interchanges the roles of the objective function and constraint, and the other performs direct lift maximization with move limits defined as fixed absolute changes in the design variables.
Optimizing exchanger design early
Lacunza, M.; Vaschetti, G.; Campana, H.
1987-08-01
It is not practical for process engineers and designers to make a rigorous economic evaluation for each component of a process, because of the time and money required. It is, however, very helpful to have a method for quick design evaluation of heat exchangers, considering their important contribution to the total fixed investment in a process plant. This article is devoted to that subject, and the authors present a method that has been proven in several design cases. Linking rigorous design procedures with a quick cost-estimation method provides a good technique for obtaining the right heat exchanger. The cost will be appropriate, sometimes not the lowest because of design restrictions, but a good approach to the optimum at an earlier process design stage. The authors show the influence of the design variables of a shell-and-tube heat exchanger on capital investment, taking into account the general limiting factors of the process, such as thermodynamics, operability, and corrosion, and/or constraints arising from the mechanical design of the calculated unit. The latter is a special consideration for countries with no access to industrial technology or with difficulties in obtaining certain construction materials or equipment.
Habitat Design Optimization and Analysis
NASA Technical Reports Server (NTRS)
SanSoucie, Michael P.; Hull, Patrick V.; Tinker, Michael L.
2006-01-01
Long-duration surface missions to the Moon and Mars will require habitats for the astronauts. The materials chosen for the habitat walls play a direct role in the protection against the harsh environments found on the surface. Choosing the best materials, their configuration, and the amount required is extremely difficult due to the immense size of the design region. Advanced optimization techniques are necessary for habitat wall design. Standard optimization techniques are not suitable for problems with such large search spaces; therefore, a habitat design optimization tool utilizing genetic algorithms has been developed. Genetic algorithms use a "survival of the fittest" philosophy, where the most fit individuals are more likely to survive and reproduce. This habitat design optimization tool is a multi-objective formulation of structural analysis, heat loss, radiation protection, and meteoroid protection. This paper presents the research and development of this tool.
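The Pareto-dominance ranking at the heart of such a multi-objective genetic algorithm can be sketched as follows; the objective surrogates, mutation scheme, and settings below are illustrative assumptions, not the habitat tool's actual formulation.

```python
# Multi-objective GA sketch: keep the nondominated (Pareto) designs as
# parents each generation and mutate them to form the next population.
import random

def dominates(a, b):
    """a dominates b: no worse in all objectives, better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

def evolve(objectives, bounds, pop_size=40, generations=60, seed=3):
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(pop_size)]
    for _ in range(generations):
        scored = [(objectives(x), x) for x in pop]
        front = pareto_front([s for s, _ in scored])
        parents = [x for s, x in scored if s in front] or pop
        # offspring: Gaussian mutation of a random nondominated parent
        pop = [[min(max(rng.choice(parents)[j] + rng.gauss(0.0, 0.05),
                        bounds[j][0]), bounds[j][1])
                for j in range(len(bounds))]
               for _ in range(pop_size)]
    return pareto_front([objectives(x) for x in pop])
```

The returned set approximates the trade-off surface (e.g., wall mass versus shielding performance), leaving the final selection among nondominated designs to the engineer.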
Initial Multidisciplinary Design and Analysis Framework
NASA Technical Reports Server (NTRS)
Ozoroski, L. P.; Geiselhart, K. A.; Padula, S. L.; Li, W.; Olson, E. D.; Campbell, R. L.; Shields, E. W.; Berton, J. J.; Gray, J. S.; Jones, S. M.; Naiman, C. G.; Seidel, J. A.; Moore, K. T.; Naylor, B. A.; Townsend, S.
2010-01-01
Within the Supersonics (SUP) Project of the Fundamental Aeronautics Program (FAP), an initial multidisciplinary design & analysis framework has been developed. A set of low- and intermediate-fidelity discipline design and analysis codes were integrated within a multidisciplinary design and analysis framework and demonstrated on two challenging test cases. The first test case demonstrates an initial capability to design for low boom and performance. The second test case demonstrates rapid assessment of a well-characterized design. The current system has been shown to greatly increase the design and analysis speed and capability, and many future areas for development were identified. This work has established a state-of-the-art capability for immediate use by supersonic concept designers and systems analysts at NASA, while also providing a strong base to build upon for future releases as more multifidelity capabilities are developed and integrated.
Design optimization of rod shaped IPMC actuator
NASA Astrophysics Data System (ADS)
Ruiz, S. A.; Mead, B.; Yun, H.; Yim, W.; Kim, K. J.
2013-04-01
Ionic polymer-metal composites (IPMCs) are some of the most well-known electro-active polymers. This is due to their large deformation provided a relatively low voltage source. IPMCs have been acknowledged as a potential candidate for biomedical applications such as cardiac catheters and surgical probes; however, there is still no existing mass manufacturing of IPMCs. This study intends to provide a theoretical framework which could be used to design practical purpose IPMCs depending on the end users interest. By explicitly coupling electrostatics, transport phenomenon, and solid mechanics, design optimization is conducted on a simulation in order to provide conceptual motivation for future designs. Utilizing a multi-physics analysis approach on a three dimensional cylinder and tube type IPMC provides physically accurate results for time dependent end effector displacement given a voltage source. Simulations are conducted with the finite element method and are also validated with empirical evidences. Having an in-depth understanding of the physical coupling provides optimal design parameters that cannot be altered from a standard electro-mechanical coupling. These parameters are altered in order to determine optimal designs for end-effector displacement, maximum force, and improved mobility with limited voltage magnitude. Design alterations are conducted on the electrode patterns in order to provide greater mobility, electrode size for efficient bending, and Nafion diameter for improved force. The results of this study will provide optimal design parameters of the IPMC for different applications.
Optimally designing games for behavioural research.
Rafferty, Anna N; Zaharia, Matei; Griffiths, Thomas L
2014-07-08
Computer games can be motivating and engaging experiences that facilitate learning, leading to their increasing use in education and behavioural experiments. For these applications, it is often important to make inferences about the knowledge and cognitive processes of players based on their behaviour. However, designing games that provide useful behavioural data is a difficult task that typically requires significant trial and error. We address this issue by creating a new formal framework that extends optimal experiment design, used in statistics, to apply to game design. In this framework, we use Markov decision processes to model players' actions within a game, and then make inferences about the parameters of a cognitive model from these actions. Using a variety of concept learning games, we show that in practice, this method can predict which games will result in better estimates of the parameters of interest. The best games require only half as many players to attain the same level of precision.
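The core inference step, estimating a cognitive-model parameter from players' in-game choices, can be sketched as follows, using a simple softmax choice model with rationality parameter beta as a stand-in for the paper's MDP-based models. The two "games" below differ only in how well their action values discriminate beta, which is the property an optimal design would maximize.

```python
import math
import random

random.seed(1)

def softmax_probs(values, beta):
    exps = [math.exp(beta * v) for v in values]
    z = sum(exps)
    return [e / z for e in exps]

def simulate_choices(values, beta, n):
    """Sample n choices from a simulated softmax player with rationality beta."""
    probs = softmax_probs(values, beta)
    choices = []
    for _ in range(n):
        r, acc, pick = random.random(), 0.0, len(probs) - 1
        for i, p in enumerate(probs):
            acc += p
            if r < acc:
                pick = i
                break
        choices.append(pick)
    return choices

def mle_beta(values, choices, grid):
    """Grid-search maximum-likelihood estimate of beta from observed choices."""
    def loglik(beta):
        probs = softmax_probs(values, beta)
        return sum(math.log(probs[c]) for c in choices)
    return max(grid, key=loglik)

true_beta = 2.0
grid = [0.1 * k for k in range(1, 61)]
game_a = [0.0, 1.0, 2.0]     # action values that discriminate beta well
game_b = [0.0, 0.05, 0.1]    # nearly flat values: weakly informative design
est_a = mle_beta(game_a, simulate_choices(game_a, true_beta, 500), grid)
est_b = mle_beta(game_b, simulate_choices(game_b, true_beta, 500), grid)
```

Game A recovers beta accurately from 500 simulated plays; game B's near-flat payoffs make the likelihood nearly uninformative, which is the kind of design the framework would reject.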
Parallel Object-Oriented Framework Optimization
Quinlan, D
2001-05-01
Object-oriented libraries arise naturally from the increasing complexity of developing related scientific applications. The optimization of the use of libraries within scientific applications is one of many kinds of high-performance optimization and is the subject of this paper. This type of optimization can have significant potential because it can reduce the overhead of calls to a library, specialize the library calls given the context of their use within the application, or use the semantics of the library calls to locally rewrite sections of the application. This type of optimization is only now becoming an active area of research. The optimization of the use of libraries within scientific applications is particularly attractive because it maps to the extensive use of libraries within numerous large existing scientific applications sharing common problem domains. This paper presents an approach toward the optimization of parallel object-oriented libraries. ROSE [1] is a tool for building source-to-source preprocessors; ROSETTA is a tool for defining the grammars used within ROSE. The definition of the grammars directly determines what can be recognized at compile time. ROSETTA permits grammars to be generated automatically which are specific to the identification of abstractions introduced within object-oriented libraries. Thus the semantics of complex abstractions defined outside of the C++ language can be leveraged at compile time to introduce library-specific optimizations. The details of the optimizations performed are not a part of this paper and are up to the library developer to define using ROSETTA and ROSE to build such an optimizing preprocessor. If performance optimizations are to be automated, the problems of automatically locating where such optimizations can be done are significant and most often overlooked. Note that a novel part of this work is the degree of automation. Thus library developers can be expected to be able to build their
Neural Meta-Memes Framework for Combinatorial Optimization
NASA Astrophysics Data System (ADS)
Song, Li Qin; Lim, Meng Hiot; Ong, Yew Soon
In this paper, we present a Neural Meta-Memes Framework (NMMF) for combinatorial optimization. NMMF is a framework which models basic optimization algorithms as memes and manages them dynamically when solving combinatorial problems. NMMF encompasses neural networks which serve as the overall planner/coordinator to balance the workload between memes. We show the efficacy of the proposed NMMF through an empirical study on a class of combinatorial problems, the quadratic assignment problem (QAP).
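To make the setting concrete, here is a minimal sketch of one candidate meme, a pairwise-swap local search, applied to a tiny QAP instance. The flow and distance matrices are invented, and NMMF's neural coordination layer, which would schedule several such memes, is not modeled.

```python
import itertools
import random

random.seed(3)

# Invented 4-facility QAP instance: flow between facilities, distance between sites.
FLOW = [[0, 3, 1, 2], [3, 0, 1, 4], [1, 1, 0, 2], [2, 4, 2, 0]]
DIST = [[0, 1, 2, 3], [1, 0, 1, 2], [2, 1, 0, 1], [3, 2, 1, 0]]

def qap_cost(perm):
    """Total cost of placing facility i at location perm[i]."""
    n = len(perm)
    return sum(FLOW[i][j] * DIST[perm[i]][perm[j]]
               for i in range(n) for j in range(n))

def swap_meme(perm):
    """One 'meme': first-improvement pairwise-swap local search."""
    perm = list(perm)
    improved = True
    while improved:
        improved = False
        base = qap_cost(perm)
        for i, j in itertools.combinations(range(len(perm)), 2):
            perm[i], perm[j] = perm[j], perm[i]
            if qap_cost(perm) < base:
                improved = True
                break                              # accept the improving swap
            perm[i], perm[j] = perm[j], perm[i]    # undo and try the next pair
    return perm

start = random.sample(range(4), 4)
local_opt = swap_meme(start)
best_cost = min(qap_cost(list(p)) for p in itertools.permutations(range(4)))
```

At n = 4 the exhaustive minimum is cheap to compute for comparison; NMMF's contribution is deciding, for large instances, which meme to run and when.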
Advanced transport design using multidisciplinary design optimization
NASA Technical Reports Server (NTRS)
Barnum, Jennifer; Bathras, Curt; Beene, Kirk; Bush, Michael; Kaupin, Glenn; Lowe, Steve; Sobieski, Ian; Tingen, Kelly; Wells, Douglas
1991-01-01
This paper describes the results of the first implementation of multidisciplinary design optimization (MDO) techniques by undergraduates in a design course. The objective of the work was to design a civilian transport aircraft of the Boeing 777 class. The first half of the two-semester design course consisted of applying traditional sizing methods and techniques to form a baseline aircraft. MDO techniques were then applied to this baseline design. This paper describes the evolution of the design with special emphasis on the application of MDO techniques, and presents the results of four iterations through the design space. Minimization of take-off gross weight was the goal of the optimization process. The resultant aircraft derived from the MDO procedure weighed approximately 13,382 lbs (2.57 percent) less than the baseline aircraft.
Design optimization of space structures
NASA Astrophysics Data System (ADS)
Felippa, Carlos
1991-11-01
The topology-shape-size optimization of space structures is investigated through Kikuchi's homogenization method. The method starts from a 'design domain block,' which is a region of space into which the structure is to materialize. This domain is initially filled with a finite element mesh, typically regular. Force and displacement boundary conditions corresponding to applied loads and supports are applied at specific points in the domain. An optimal structure is to be 'carved out' of the design under two conditions: (1) a cost function is to be minimized, and (2) equality or inequality constraints are to be satisfied. The 'carving' process is accomplished by letting microstructure holes develop and grow in elements during the optimization process. These holes have a rectangular shape in two dimensions and a cubical shape in three dimensions, and may also rotate with respect to the reference axes. The properties of the perforated element are obtained through a homogenization procedure. Once a hole reaches the volume of the element, that element effectively disappears. The project has two phases. In the first phase the method was implemented as the combination of two computer programs: a finite element module, and an optimization driver. In the second phase, the focus is on the application of this technique to planetary structures. The finite element part of the method was programmed for the two-dimensional case using four-node quadrilateral elements to cover the design domain. An element homogenization technique different from that of Kikuchi and coworkers was implemented. The optimization driver is based on an augmented Lagrangian optimizer, with the volume constraint treated as a Courant penalty function. The optimizer has to be especially tuned to this type of optimization because the number of design variables can reach into the thousands. The driver is presently under development.
Optimal designs for copula models
Perrone, E.; Müller, W.G.
2016-01-01
Copula modelling has in the past decade become a standard tool in many areas of applied statistics. However, a largely neglected aspect concerns the design of related experiments, particularly whether the estimation of copula parameters can be enhanced by optimizing experimental conditions and how robust the parameter estimates are with respect to the type of copula employed. In this paper an equivalence theorem for (bivariate) copula models is provided that allows formulation of efficient design algorithms and quick checks of whether designs are optimal or at least efficient. Some examples illustrate that in practical situations considerable gains in design efficiency can be achieved. A natural comparison between different copula models with respect to design efficiency is provided as well. PMID:27453616
Comparison of optimal design methods in inverse problems
NASA Astrophysics Data System (ADS)
Banks, H. T.; Holm, K.; Kappel, F.
2011-07-01
Typical optimal design methods for inverse or parameter estimation problems are designed to choose optimal sampling distributions through minimization of a specific cost function related to the resulting error in parameter estimates. It is hoped that the inverse problem will produce parameter estimates with increased accuracy using data collected according to the optimal sampling distribution. Here we formulate the classical optimal design problem in the context of general optimization problems over distributions of sampling times. We present a new Prohorov metric-based theoretical framework that permits one to treat succinctly and rigorously any optimal design criteria based on the Fisher information matrix. A fundamental approximation theory is also included in this framework. A new optimal design, SE-optimal design (standard error optimal design), is then introduced in the context of this framework. We compare this new design criterion with the more traditional D-optimal and E-optimal designs. The optimal sampling distributions from each design are used to compute and compare standard errors; the standard errors for parameters are computed using asymptotic theory or bootstrapping and the optimal mesh. We use three examples to illustrate ideas: the Verhulst-Pearl logistic population model (Banks H T and Tran H T 2009 Mathematical and Experimental Modeling of Physical and Biological Processes (Boca Raton, FL: Chapman and Hall/CRC)), the standard harmonic oscillator model (Banks H T and Tran H T 2009) and a popular glucose regulation model (Bergman R N, Ider Y Z, Bowden C R and Cobelli C 1979 Am. J. Physiol. 236 E667-77; De Gaetano A and Arino O 2000 J. Math. Biol. 40 136-68; Toffolo G, Bergman R N, Finegood D T, Bowden C R and Cobelli C 1980 Diabetes 29 979-90).
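The Fisher-information comparison of sampling distributions can be sketched in the simplest setting: a linear model y = a + b·t with unit noise variance, where the information matrix is X'X and the D-criterion is its determinant. The two candidate designs below are illustrative; for this model, placing observations at the interval endpoints is the classical D-optimal choice.

```python
def fisher_information(times):
    """X'X for the linear model y = a + b*t with unit noise variance."""
    n = len(times)
    s1 = sum(times)
    s2 = sum(t * t for t in times)
    return [[n, s1], [s1, s2]]

def d_criterion(fim):
    """D-optimality criterion: determinant of the (2x2) information matrix."""
    return fim[0][0] * fim[1][1] - fim[0][1] * fim[1][0]

clustered = [0.4, 0.5, 0.5, 0.6]    # sampling times bunched mid-interval
spread = [0.0, 0.0, 1.0, 1.0]       # endpoints: D-optimal on [0, 1]
```

A larger determinant means a smaller confidence ellipsoid for (a, b); SE-, D-, and E-optimality in the paper differ only in which scalar functional of this matrix is optimized.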
A computational molecular design framework for crosslinked polymer networks
Eslick, J.C.; Ye, Q.; Park, J.; Topp, E.M.; Spencer, P.; Camarda, K.V.
2013-01-01
Crosslinked polymers are important in a very wide range of applications including dental restorative materials. However, currently used polymeric materials experience limited durability in the clinical oral environment. Researchers in the dental polymer field have generally used a time-consuming experimental trial-and-error approach to the design of new materials. The application of computational molecular design (CMD) to crosslinked polymer networks has the potential to facilitate development of improved polymethacrylate dental materials. CMD uses quantitative structure property relations (QSPRs) and optimization techniques to design molecules possessing desired properties. This paper describes a mathematical framework which provides tools necessary for the application of CMD to crosslinked polymer systems. The novel parts of the system include the data structures used, which allow for simple calculation of structural descriptors, and the formulation of the optimization problem. A heuristic optimization method, Tabu Search, is used to determine candidate monomers. Use of a heuristic optimization algorithm makes the system more independent of the types of QSPRs used, and more efficient when applied to combinatorial problems. A software package has been created which provides polymer researchers access to the design framework. A complete example of the methodology is provided for polymethacrylate dental materials. PMID:23904665
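A minimal sketch of the Tabu Search step described above, applied to a toy QSPR: a candidate is a bit-vector of structural descriptors, the property model is linear, and the search seeks a candidate whose predicted property matches a target. The weights and target are invented; real QSPRs and monomer descriptor sets are far richer.

```python
import random

random.seed(7)

# Toy QSPR: property = weighted sum of structural-descriptor bits (invented).
WEIGHTS = [4.0, -2.0, 1.5, 3.0, -1.0, 2.5]
TARGET = 6.0

def score(bits):
    """Smaller is better: |predicted property - target|."""
    return abs(sum(w * b for w, b in zip(WEIGHTS, bits)) - TARGET)

def flipped(bits, i):
    c = list(bits)
    c[i] ^= 1
    return c

def tabu_search(iters=50, tenure=3):
    current = [random.randint(0, 1) for _ in WEIGHTS]
    best = list(current)
    tabu = []                                    # recently flipped positions
    for _ in range(iters):
        moves = [i for i in range(len(WEIGHTS)) if i not in tabu]
        i = min(moves, key=lambda k: score(flipped(current, k)))
        current = flipped(current, i)            # take best non-tabu flip
        tabu.append(i)
        if len(tabu) > tenure:
            tabu.pop(0)                          # expire the oldest move
        if score(current) < score(best):
            best = list(current)
    return best

best = tabu_search()
```

Because the search only ranks candidates by a scalar score, swapping in a different QSPR (or a group-contribution model) requires no change to the search itself, which is the independence property the paper exploits.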
Optimal stochastic control in natural resource management: Framework and examples
Williams, B.K.
1982-01-01
A framework is presented for the application of optimal control methods to natural resource problems. An expression of the optimal control problem appropriate for renewable natural resources is given and its application to Markovian systems is presented in some detail. Three general approaches are outlined for determining optimal control of infinite time horizon systems and three examples from the natural resource literature are used for illustration.
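For the Markovian case, the optimal-control computation can be sketched with value iteration on a toy harvest model: states are discretized stock levels, actions are harvest quotas, and post-harvest growth is stochastic. The dynamics, reward, and discount factor are invented for illustration.

```python
# Toy renewable-stock model: states are stock levels 0..5, actions are harvest
# quotas 0..2; after harvest the stock stays put or grows one unit (p = 0.5).
STATES = range(6)
ACTIONS = range(3)
GAMMA = 0.95

def step(s, a):
    """Return (immediate harvest reward, [(prob, next state), ...])."""
    h = min(a, s)                      # cannot harvest more than the stock
    left = s - h
    return h, [(0.5, left), (0.5, min(left + 1, max(STATES)))]

def value_iteration(iters=300):
    """Bellman backups for the discounted infinite-horizon problem."""
    v = [0.0] * len(STATES)
    for _ in range(iters):
        v = [max(step(s, a)[0]
                 + GAMMA * sum(p * v[s2] for p, s2 in step(s, a)[1])
                 for a in ACTIONS)
             for s in STATES]
    return v

values = value_iteration()
```

The converged values are weakly increasing in stock, and the greedy policy they induce is the stationary optimal harvest rule; the paper's three solution approaches are alternatives to this brute-force backup.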
First-Order Frameworks for Managing Models in Engineering Optimization
NASA Technical Reports Server (NTRS)
Alexandrov, Natalia M.; Lewis, Robert Michael
2000-01-01
Approximation/model management optimization (AMMO) is a rigorous methodology for attaining solutions of high-fidelity optimization problems with minimal expense in high-fidelity function and derivative evaluation. First-order AMMO frameworks allow for a wide variety of models and underlying optimization algorithms. Recent demonstrations with aerodynamic optimization achieved three-fold savings in terms of high-fidelity function and derivative evaluation in the case of variable-resolution models and five-fold savings in the case of variable-fidelity physics models. The savings are problem dependent but certain trends are beginning to emerge. We give an overview of the first-order frameworks, current computational results, and an idea of the scope of the first-order framework applicability.
Framework for computationally efficient optimal irrigation scheduling using ant colony optimization
Technology Transfer Automated Retrieval System (TEKTRAN)
A general optimization framework is introduced with the overall goal of reducing search space size and increasing the computational efficiency of evolutionary algorithm application for optimal irrigation scheduling. The framework achieves this goal by representing the problem in the form of a decisi...
Eslick, John C.; Ng, Brenda; Gao, Qianwen; Tong, Charles H.; Sahinidis, Nikolaos V.; Miller, David C.
2014-12-31
Under the auspices of the U.S. Department of Energy’s Carbon Capture Simulation Initiative (CCSI), a Framework for Optimization and Quantification of Uncertainty and Sensitivity (FOQUS) has been developed. This tool enables carbon capture systems to be rapidly synthesized and rigorously optimized, in an environment that accounts for and propagates uncertainties in parameters and models. FOQUS currently enables (1) the development of surrogate algebraic models utilizing the ALAMO algorithm, which can be used for superstructure optimization to identify optimal process configurations, (2) simulation-based optimization utilizing derivative free optimization (DFO) algorithms with detailed black-box process models, and (3) rigorous uncertainty quantification through PSUADE. FOQUS utilizes another CCSI technology, the Turbine Science Gateway, to manage the thousands of simulated runs necessary for optimization and UQ. Thus, this computational framework has been demonstrated for the design and analysis of a solid sorbent based carbon capture system.
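Simulation-based optimization with a derivative-free optimizer, item (2) above, can be sketched generically: a black-box process model is minimized by coordinate pattern search with step halving. The objective here is a stand-in quadratic, not a capture-system model, and FOQUS's actual DFO algorithms differ.

```python
def expensive_simulation(x):
    """Stand-in for a black-box process-model evaluation (not a CCSI model)."""
    a, b = x
    return (a - 1.5) ** 2 + (b + 0.5) ** 2 + 3.0

def pattern_search(f, x0, step=1.0, tol=1e-4):
    """Derivative-free coordinate pattern search with step halving."""
    x, fx = list(x0), f(x0)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for d in (step, -step):       # poll both directions per coordinate
                cand = list(x)
                cand[i] += d
                fc = f(cand)
                if fc < fx:
                    x, fx, improved = cand, fc, True
        if not improved:
            step /= 2.0                   # refine the mesh when polling fails
    return x, fx

xbest, fbest = pattern_search(expensive_simulation, [0.0, 0.0])
```

Each poll is one simulation run; in FOQUS these runs are what the Turbine Science Gateway farms out in their thousands, which is why surrogate models (ALAMO) are attractive when single evaluations are expensive.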
Heat Sink Design and Optimization
2015-12-01
hot surfaces to cooler ambient air. Typically, the fins are oriented in a way to permit a natural convection air draft to flow upward through... main objective. Heat transfer from the heat sink consists of radiation and convection from both the intra-fin passages and the unshielded... Subject terms: natural convection, radiation, design, modeling, optimization.
Singularities in Optimal Structural Design
NASA Technical Reports Server (NTRS)
Patnaik, S. N.; Guptill, J. D.; Berke, L.
1992-01-01
Singularity conditions that arise during structural optimization can seriously degrade the performance of the optimizer. The singularities are intrinsic to the formulation of the structural optimization problem and are not associated with the method of analysis. Certain conditions that give rise to singularities have been identified in earlier papers, encompassing the entire structure. Further examination revealed more complex sets of conditions in which singularities occur. Some of these singularities are local in nature, being associated with only a segment of the structure. Moreover, the likelihood that one of these local singularities may arise during an optimization procedure can be much greater than that of the global singularity identified earlier. Examples are provided of these additional forms of singularities. A framework is also given in which these singularities can be recognized. In particular, the singularities can be identified by examination of the stress displacement relations along with the compatibility conditions and/or the displacement stress relations derived in the integrated force method of structural analysis.
A Framework for Optimizing the Placement of Tidal Turbines
NASA Astrophysics Data System (ADS)
Nelson, K. S.; Roberts, J.; Jones, C.; James, S. C.
2013-12-01
Power generation with marine hydrokinetic (MHK) current energy converters (CECs), often in the form of underwater turbines, is receiving growing global interest. Because of reasonable investment, maintenance, reliability, and environmental friendliness, this technology can contribute to national (and global) energy markets and is worthy of research investment. Furthermore, in remote areas, small-scale MHK energy from river, tidal, or ocean currents can provide a local power supply. However, little is known about the potential environmental effects of CEC operation in coastal embayments, estuaries, or rivers, or of the cumulative impacts of these devices on aquatic ecosystems over years or decades of operation. There is an urgent need for practical, accessible tools and peer-reviewed publications to help industry and regulators evaluate environmental impacts and mitigation measures, while establishing best siting and design practices. Sandia National Laboratories (SNL) and Sea Engineering, Inc. (SEI) have investigated the potential environmental impacts and performance of individual tidal energy converters (TECs) in Cobscook Bay, ME; TECs are a subset of CECs that are specifically deployed in tidal channels. Cobscook Bay is the first deployment location of Ocean Renewable Power Company's (ORPC) TidGen™ unit. One unit is currently in place with four more to follow. Together, SNL and SEI built a coarse-grid, regional-scale model that included Cobscook Bay and all other landward embayments using the modeling platform SNL-EFDC. Within SNL-EFDC tidal turbines are represented using a unique set of momentum extraction, turbulence generation, and turbulence dissipation equations at TEC locations. The global model was then coupled to a local-scale model that was centered on the proposed TEC deployment locations. An optimization framework was developed that used the refined model to determine optimal device placement locations that maximized array performance. Within the
A Nonconvex Optimization Framework for Low Rank Matrix Estimation*
Zhao, Tuo; Wang, Zhaoran; Liu, Han
2016-01-01
We study the estimation of low rank matrices via nonconvex optimization. Compared with convex relaxation, nonconvex optimization exhibits superior empirical performance for large scale instances of low rank matrix estimation. However, the understanding of its theoretical guarantees is limited. In this paper, we define the notion of projected oracle divergence, based on which we establish sufficient conditions for the success of nonconvex optimization. We illustrate the consequences of this general framework for matrix sensing. In particular, we prove that a broad class of nonconvex optimization algorithms, including alternating minimization and gradient-type methods, geometrically converge to the global optimum and exactly recover the true low rank matrices under standard conditions. PMID:28316458
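Alternating minimization, one of the algorithm classes the framework covers, can be sketched for the noiseless rank-1 case: with one factor fixed, the problem is least squares in the other, so each update has a closed form. The instance below is invented and exactly rank 1, so the geometric convergence the paper proves is visible after a handful of iterations.

```python
# Exactly rank-1 "observations": M[i][j] = u_true[i] * v_true[j].
u_true = [1.0, 2.0, -1.0]
v_true = [3.0, 0.5, 1.0, -2.0]
M = [[ui * vj for vj in v_true] for ui in u_true]

def als_rank1(M, iters=25):
    """Alternate closed-form least-squares updates for the two rank-1 factors."""
    m, n = len(M), len(M[0])
    v = [1.0] * n                       # nonconvex: initialization matters
    u = [0.0] * m
    for _ in range(iters):
        vv = sum(x * x for x in v)
        u = [sum(M[i][j] * v[j] for j in range(n)) / vv for i in range(m)]
        uu = sum(x * x for x in u)
        v = [sum(M[i][j] * u[i] for i in range(m)) / uu for j in range(n)]
    return u, v

u, v = als_rank1(M)
residual = max(abs(u[i] * v[j] - M[i][j])
               for i in range(len(M)) for j in range(len(M[0])))
```

The factors are recovered only up to a scale exchange between u and v, which is why the check is on the reconstructed product rather than on the factors themselves.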
Design framework for entanglement-distribution switching networks
NASA Astrophysics Data System (ADS)
Drost, Robert J.; Brodsky, Michael
2016-09-01
The distribution of quantum entanglement appears to be an important component of applications of quantum communications and networks. The ability to centralize the sourcing of entanglement in a quantum network can provide for improved efficiency and enable a variety of network structures. A necessary feature of an entanglement-sourcing network node comprising several sources of entangled photons is the ability to reconfigurably route the generated pairs of photons to network neighbors depending on the desired entanglement sharing of the network users at a given time. One approach to such routing is the use of a photonic switching network. The requirements for an entanglement distribution switching network are less restrictive than for typical conventional applications, leading to design freedom that can be leveraged to optimize additional criteria. In this paper, we present a mathematical framework defining the requirements of an entanglement-distribution switching network. We then consider the design of such a switching network using a number of 2 × 2 crossbar switches, addressing the interconnection of these switches and efficient routing algorithms. In particular, we define a worst-case loss metric and consider 6 × 6, 8 × 8, and 10 × 10 network designs that optimize both this metric and the number of crossbar switches composing the network. We pay particular attention to the 10 × 10 network, detailing novel results proving the optimality of the proposed design. These optimized network designs have great potential for use in practical quantum networks, thus advancing the concept of quantum networks toward reality.
An Evolutionary Optimization Framework for Neural Networks and Neuromorphic Architectures
Schuman, Catherine D; Plank, James; Disney, Adam; Reynolds, John
2016-01-01
As new neural network and neuromorphic architectures are being developed, new training methods that operate within the constraints of the new architectures are required. Evolutionary optimization (EO) is a convenient training method for new architectures. In this work, we review a spiking neural network architecture and a neuromorphic architecture, and we describe an EO training framework for these architectures. We present the results of this training framework on four classification data sets and compare those results to other neural network and neuromorphic implementations. We also discuss how this EO framework may be extended to other architectures.
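A minimal sketch of evolutionary optimization as a training method: evolving the nine weights of a tiny 2-2-1 tanh network on XOR by truncation selection and Gaussian mutation. This illustrates only the EO loop itself; the paper's spiking and neuromorphic architectures, and their constraints, are not modeled.

```python
import math
import random

random.seed(5)

XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def forward(w, x):
    """2-2-1 tanh network; w packs all 9 weights and biases."""
    h1 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
    h2 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
    return math.tanh(w[6] * h1 + w[7] * h2 + w[8])

def loss(w):
    return sum((forward(w, x) - y) ** 2 for x, y in XOR)

def evolve(pop_size=30, gens=400):
    pop = [[random.uniform(-2.0, 2.0) for _ in range(9)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=loss)
        pop = pop[: pop_size // 2]                 # truncation selection
        while len(pop) < pop_size:
            parent = random.choice(pop[:5])        # breed from the elite
            pop.append([wi + random.gauss(0.0, 0.3) for wi in parent])
    return min(pop, key=loss)

best = evolve()
```

Because the loop only needs a scalar fitness per candidate, the same code shape applies when `forward` is replaced by a spiking or hardware-constrained network, which is the convenience the paper highlights.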
Optimizing Trial Designs for Targeted Therapies
Beckman, Robert A.; Burman, Carl-Fredrik; König, Franz; Stallard, Nigel; Posch, Martin
2016-01-01
An important objective in the development of targeted therapies is to identify the populations where the treatment under consideration has positive benefit risk balance. We consider pivotal clinical trials, where the efficacy of a treatment is tested in an overall population and/or in a pre-specified subpopulation. Based on a decision theoretic framework we derive optimized trial designs by maximizing utility functions. Features to be optimized include the sample size and the population in which the trial is performed (the full population or the targeted subgroup only) as well as the underlying multiple test procedure. The approach accounts for prior knowledge of the efficacy of the drug in the considered populations using a two dimensional prior distribution. The considered utility functions account for the costs of the clinical trial as well as the expected benefit when demonstrating efficacy in the different subpopulations. We model utility functions from a sponsor’s as well as from a public health perspective, reflecting actual civil interests. Examples of optimized trial designs obtained by numerical optimization are presented for both perspectives. PMID:27684573
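The utility-maximization idea can be sketched in one dimension, optimizing only the sample size of a single-population trial under a normal-approximation power curve. The gain and per-patient cost figures are invented, and the paper's two-dimensional priors, subgroup choice, and multiple-testing structure are omitted.

```python
import math

def power(n, effect, z_alpha=1.96):
    """Normal-approximation power of a two-arm trial with n patients total."""
    z = effect * math.sqrt(n) / 2.0 - z_alpha
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def expected_utility(n, effect, gain=1_000_000.0, cost_per_patient=500.0):
    """Utility = P(success) * value of a positive trial - trial cost (invented figures)."""
    return power(n, effect) * gain - cost_per_patient * n

# Optimize the sample size for a standardized effect of 0.3.
best_n = max(range(10, 1001, 10), key=lambda n: expected_utility(n, 0.3))
```

The optimum balances diminishing power gains against linear cost growth; replacing the sponsor utility with a public-health one simply changes `gain` and `cost_per_patient` into benefit and burden terms, shifting the optimal n.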
Optimal design of solidification processes
NASA Technical Reports Server (NTRS)
Dantzig, Jonathan A.; Tortorelli, Daniel A.
1991-01-01
An optimal design algorithm is presented for the analysis of general solidification processes, and is demonstrated for the growth of GaAs crystals in a Bridgman furnace. The system is optimal in the sense that the prespecified temperature distribution in the solidifying materials is obtained to maximize product quality. The optimization uses traditional numerical programming techniques which require the evaluation of cost and constraint functions and their sensitivities. The finite element method is incorporated to analyze the crystal solidification problem, evaluate the cost and constraint functions, and compute the sensitivities. These techniques are demonstrated in the crystal growth application by determining an optimal furnace wall temperature distribution to obtain the desired temperature profile in the crystal, and hence to maximize the crystal's quality. Several numerical optimization algorithms are studied to determine the proper convergence criteria, effective 1-D search strategies, appropriate forms of the cost and constraint functions, etc. In particular, we incorporate the conjugate gradient and quasi-Newton methods for unconstrained problems. The efficiency and effectiveness of each algorithm is presented in the example problem.
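Of the unconstrained methods mentioned, the conjugate gradient iteration is easy to sketch on a small symmetric positive-definite system, the kind of linearized subproblem such optimizers solve repeatedly; the matrix here is illustrative, not a furnace model.

```python
def cg_solve(A, b, iters=100, tol=1e-12):
    """Conjugate gradient for A x = b, with A symmetric positive definite."""
    n = len(b)
    x = [0.0] * n
    r = list(b)                 # residual b - A x for the zero start
    p = list(r)
    rs = sum(ri * ri for ri in r)
    for _ in range(iters):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]  # new search direction
        rs = rs_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]    # illustrative SPD system
b = [1.0, 2.0]
x = cg_solve(A, b)
```

For an n-dimensional SPD system CG converges in at most n iterations in exact arithmetic, which is why it pairs well with the repeated cost- and sensitivity-evaluations described above.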
Research on optimization-based design
NASA Technical Reports Server (NTRS)
Balling, R. J.; Parkinson, A. R.; Free, J. C.
1989-01-01
Research on optimization-based design is discussed. Illustrative examples are given for cases involving continuous optimization with discrete variables and optimization with tolerances. Approximation of computationally expensive and noisy functions, electromechanical actuator/control system design using decomposition and application of knowledge-based systems and optimization for the design of a valve anti-cavitation device are among the topics covered.
When Playing Meets Learning: Methodological Framework for Designing Educational Games
NASA Astrophysics Data System (ADS)
Linek, Stephanie B.; Schwarz, Daniel; Bopp, Matthias; Albert, Dietrich
Game-based learning builds upon the idea of using the motivational potential of video games in an educational context. The design of educational games must therefore optimize enjoyment as well as learning. Within the EC project ELEKTRA, a methodological framework for the conceptual design of educational games was developed, combining state-of-the-art psycho-pedagogical approaches with insights from media psychology as well as best-practice game design. This science-based interdisciplinary approach was enriched by accompanying empirical research to answer open questions in educational game design, and several evaluation cycles were implemented to achieve further improvements. The psycho-pedagogical core of the methodology can be summarized by ELEKTRA's 4Ms: Macroadaptivity, Microadaptivity, Metacognition, and Motivation. The conceptual framework is structured in eight phases with several interconnections and feedback cycles that enable close interdisciplinary collaboration between game design, pedagogy, cognitive science, and media psychology.
Synergy: A language and framework for robot design
NASA Astrophysics Data System (ADS)
Katragadda, Lalitesh Kumar
Due to escalation in complexity, capability and application, robot design is increasingly difficult. A design environment can automate many design tasks, relieving the designer's burden. Prior to robot development, designers compose a robot from existing or custom developed components, simulate performance, optimize configuration and parameters, and write software for the robot. Robot designers customize these facets to the robot using a variety of software ranging from spreadsheets to C code to CAD tools. Valuable resources are expended, and very little of this expertise and development is reusable. This research begins with the premise that a language to comprehensively represent robots is lacking and that the aforementioned design tasks can be automated once such a language exists. This research proposes and demonstrates the following thesis: "A language to represent robots, along with a framework to generate simulations, optimize designs and generate control software, increases the effectiveness of design." Synergy is the software developed in this research to reflect this philosophy. Synergy was prototyped and demonstrated in the context of lunar rover design, a challenging real-world problem with multiple requirements and a broad design space. Synergy was used to automatically optimize robot parameters and select parts to generate effective designs, while meeting constraints of the embedded components and sub-systems. The generated designs are superior in performance and consistency when compared to designs by teams of designers using the same knowledge. Using a single representation, multiple designs are generated for four distinct lunar exploration objectives. Synergy uses the same representation to auto-generate landing simulations and simultaneously generate control software for the landing. Synergy consists of four software agents. A database and spreadsheet agent compiles the design and component information, generating component interconnections and
Multidisciplinary optimization in aircraft design using analytic technology models
NASA Technical Reports Server (NTRS)
Malone, Brett; Mason, W. H.
1991-01-01
An approach to multidisciplinary optimization is presented which combines the Global Sensitivity Equation method, parametric optimization, and analytic technology models. The result is a powerful yet simple procedure for identifying key design issues. It can be used both to investigate technology integration issues very early in the design cycle and to establish the information flow framework between disciplines for use in multidisciplinary optimization projects that use much more computationally intensive representations of each technology. To illustrate the approach, the optimization of a short-takeoff heavy transport aircraft is examined for numerous combinations of performance and technology constraints.
A duality framework for stochastic optimal control of complex systems
Malikopoulos, Andreas A.
2016-01-01
In this study, we address the problem of minimizing the long-run expected average cost of a complex system consisting of interactive subsystems. We formulate a multiobjective optimization problem of the one-stage expected costs of the subsystems and provide a duality framework to prove that the control policy yielding the Pareto optimal solution minimizes the average cost criterion of the system. We provide the conditions of existence and a geometric interpretation of the solution. For practical situations having constraints consistent with those studied here, our results imply that the Pareto control policy may be of value when we seek to derive online the optimal control policy in complex systems.
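The Pareto optimal set referred to in the abstract is the set of non-dominated cost vectors. A minimal illustration of extracting it from candidate policies (this is only the definition, not the paper's duality machinery):

```python
def pareto_set(points):
    """Return the non-dominated points, minimizing every objective.
    p dominates q if p <= q in every component and p < q in at least one."""
    def dominates(p, q):
        return all(pi <= qi for pi, qi in zip(p, q)) and \
               any(pi < qi for pi, qi in zip(p, q))
    return [q for q in points if not any(dominates(p, q) for p in points)]

# Hypothetical one-stage expected costs of two subsystems under four policies.
costs = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0)]
front = pareto_set(costs)   # (3.0, 4.0) is dominated by (2.0, 3.0)
```

The paper's contribution is proving which point on such a front minimizes the system's long-run average cost; the sketch above only shows what membership in the front means.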
Optimal design of airlift fermenters
Moresi, M.
1981-11-01
In this article, a model of a draft-tube airlift fermenter (ALF), based on perfect back-mixing of the liquid phase and plug flow of the gas bubbles, is developed to optimize the design and operation of fermentation units at different working capacities. For a whey fermentation by yeasts, economic optimization leads to a slim ALF with an aspect ratio of about 15. As far as power expended per unit of oxygen transferred is concerned, the responses of the model are highly influenced by kLa. A more conservative use of the model is therefore suggested for assessing the feasibility of the fermentation process under study.
Globally optimal trial design for local decision making.
Eckermann, Simon; Willan, Andrew R
2009-02-01
Value of information methods allow decision makers to identify efficient trial designs by maximizing the expected value to decision makers of information from potential trial designs relative to their expected cost. However, in health technology assessment (HTA) the restrictive assumption has been made that, prospectively, there is expected value of sample information only from research commissioned within the jurisdiction. This paper extends the framework for optimal trial design and decision making within a jurisdiction to allow for optimal trial design across jurisdictions. This is illustrated by identifying an optimal trial design for decision making across the US, the UK and Australia for early versus late external cephalic version for pregnant women presenting in the breech position. The expected net gain from locally optimal trial designs of US$0.72M is shown to increase to US$1.14M with a globally optimal trial design. In general, the proposed method of globally optimal trial design improves on optimal trial design within jurisdictions by: (i) reflecting the global value of non-rival information; (ii) allowing optimal allocation of the trial sample across jurisdictions; and (iii) avoiding the market failure associated with free-rider effects, sub-optimal spreading of fixed costs, and heterogeneity of trial information with multiple trials.
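The fixed-cost-sharing and non-rival-information arguments can be made concrete with a toy calculation. All figures below are hypothetical and are not the paper's values; they only illustrate why one global trial can out-perform separate locally optimal trials.

```python
# All figures are hypothetical, for illustration only (not the paper's numbers).
value = {"US": 1.0, "UK": 0.4, "AU": 0.3}   # expected value of trial information, $M
fixed_cost = 0.5                             # fixed cost of mounting a trial, $M
variable_cost = 0.1                          # per-jurisdiction recruitment cost, $M

# Locally optimal: each jurisdiction runs its own trial only if it pays off,
# so the fixed cost is duplicated and some jurisdictions simply do nothing.
local_gain = sum(max(0.0, v - fixed_cost - variable_cost) for v in value.values())

# Globally optimal: information is non-rival, so a single trial serves all
# jurisdictions and the fixed cost is paid only once.
global_gain = sum(value.values()) - fixed_cost - len(value) * variable_cost
```

With these numbers only the US trial is worthwhile locally (net $0.4M), while the pooled trial nets $0.9M, mirroring the qualitative gap between the paper's $0.72M and $1.14M figures.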
A Framework for Optimal Control Allocation with Structural Load Constraints
NASA Technical Reports Server (NTRS)
Frost, Susan A.; Taylor, Brian R.; Jutte, Christine V.; Burken, John J.; Trinh, Khanh V.; Bodson, Marc
2010-01-01
Conventional aircraft generally employ mixing algorithms or lookup tables to determine control surface deflections needed to achieve moments commanded by the flight control system. Control allocation is the problem of converting desired moments into control effector commands. Next generation aircraft may have many multipurpose, redundant control surfaces, adding considerable complexity to the control allocation problem. These issues can be addressed with optimal control allocation. Most optimal control allocation algorithms have control surface position and rate constraints. However, these constraints are insufficient to ensure that the aircraft's structural load limits will not be exceeded by commanded surface deflections. In this paper, a framework is proposed to enable a flight control system with optimal control allocation to incorporate real-time structural load feedback and structural load constraints. A proof of concept simulation that demonstrates the framework in a simulation of a generic transport aircraft is presented.
Effect of framework design on crown failure.
Bonfante, Estevam A; da Silva, Nelson R F A; Coelho, Paulo G; Bayardo-González, Daniel E; Thompson, Van P; Bonfante, Gerson
2009-04-01
This study evaluated the effect of core-design modification on the characteristic strength and failure modes of glass-infiltrated alumina (In-Ceram) (ICA) crowns compared with porcelain fused to metal (PFM). Premolar crowns of a standard design (PFMs and ICAs) or with a modified framework design (PFMm and ICAm) were fabricated, cemented on dies, and loaded until failure. The crowns were loaded at 0.5 mm min(-1) using a 6.25 mm tungsten-carbide ball at the central fossa. Fracture load values were recorded, and representative samples were examined by scanning electron microscopy for fracture analysis. Probability Weibull curves with two-sided 90% confidence limits were calculated for each group, and a contour plot of the characteristic strength was obtained. Design modification increased the characteristic strength of the PFMm and ICAm groups, with the PFM groups showing higher characteristic strength than the ICA groups. The PFMm group showed the highest characteristic strength of all groups. Fracture modes of PFMs and PFMm frequently reached the core interface at the lingual cusp, whereas ICA exhibited bulk fracture through the alumina core. Core-design modification significantly improved the characteristic strength of both PFM and ICA. The PFM groups demonstrated higher characteristic strength than both ICA groups combined.
ERIC Educational Resources Information Center
Triantafyllakos, George; Palaigeorgiou, George; Tsoukalas, Ioannis A.
2011-01-01
In this paper, we present a framework for the development of collaborative design games that can be employed in participatory design sessions with students for the design of educational applications. The framework is inspired by idea generation theory and the design games literature, and guides the development of board games which, through the use…
Optimizing SRF Gun Cavity Profiles in a Genetic Algorithm Framework
Alicia Hofler, Pavel Evtushenko, Frank Marhauser
2009-09-01
Automation of DC photoinjector design using genetic algorithm (GA) based optimization is an accepted practice in accelerator physics. Allowing the gun cavity field profile shape to be varied can extend the utility of this optimization methodology to superconducting and normal conducting radio frequency (SRF/RF) gun based injectors. Finding optimal field and cavity geometry configurations can provide guidance for cavity design choices and verify existing designs. We have considered two approaches for varying the electric field profile. The first is to determine the optimal field profile shape independent of the cavity geometry; the other is to vary the geometry of the gun cavity structure to produce an optimal field profile. The first method can provide a theoretical optimum and can illuminate where gains can be made in field shaping. The second method can produce more realistically achievable designs that can be compared to existing designs. In this paper, we discuss the design and implementation of these two methods for generating field profiles for SRF/RF guns in a GA-based injector optimization scheme and provide preliminary results.
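A minimal genetic-algorithm sketch of the kind of optimization described above. The objective, parameters, and "field profile" here are toy stand-ins, not the actual injector code or beam-dynamics figure of merit.

```python
import random

def genetic_optimize(fitness, n_genes, pop_size=30, generations=60,
                     mutation=0.2, seed=1):
    """Minimal real-coded genetic algorithm (minimization): elitism,
    tournament-free truncation selection, uniform crossover, Gaussian mutation."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(n_genes)] for _ in range(pop_size)]
    for _ in range(generations):
        elite = sorted(pop, key=fitness)[: pop_size // 2]   # keep the best half
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            child = [ai if rng.random() < 0.5 else bi for ai, bi in zip(a, b)]
            if rng.random() < mutation:
                i = rng.randrange(n_genes)
                child[i] += rng.gauss(0, 0.1)               # small random perturbation
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)

# Toy objective: distance of a 3-point "field profile" from a target shape.
target = [0.5, -0.25, 0.75]
def profile_error(genes):
    return sum((g - t) ** 2 for g, t in zip(genes, target))

best = genetic_optimize(profile_error, n_genes=3)
```

In the paper's setting, `profile_error` would be replaced by a beam-dynamics simulation scoring the field profile, which is why GA-style gradient-free search is attractive there.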
Optimal Designs for the Rasch Model
ERIC Educational Resources Information Center
Grasshoff, Ulrike; Holling, Heinz; Schwabe, Rainer
2012-01-01
In this paper, optimal designs will be derived for estimating the ability parameters of the Rasch model when difficulty parameters are known. It is well established that a design is locally D-optimal if the ability and difficulty coincide. But locally optimal designs require that the ability parameters to be estimated are known. To attenuate this…
An optimal structural design algorithm using optimality criteria
NASA Technical Reports Server (NTRS)
Taylor, J. E.; Rossow, M. P.
1976-01-01
An algorithm for optimal design is given which incorporates several of the desirable features of both mathematical programming and optimality criteria, while avoiding some of the undesirable features. The algorithm proceeds by approaching the optimal solution through the solutions of an associated set of constrained optimal design problems. The solutions of the constrained problems are recognized at each stage through the application of optimality criteria based on energy concepts. Two examples are described in which the optimal member sizes and layout of a truss are predicted, given the joint locations and loads.
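Energy-based optimality criteria of the kind the abstract mentions include, in the simplest statically determinate case, the classic fully-stressed-design rule. The sketch below uses hypothetical member forces and is not the paper's algorithm, only the textbook special case.

```python
def fully_stressed_areas(member_forces, allowable_stress):
    """Fully-stressed-design rule: size each member so its stress equals the
    allowable stress. For a statically determinate truss the member forces do
    not depend on the areas, so the optimality-criteria update converges in
    one step to area = |force| / allowable stress."""
    return [abs(f) / allowable_stress for f in member_forces]

# Hypothetical member forces (kN) and allowable stress (kN/cm^2).
forces = [120.0, -80.0, 45.0]
areas = fully_stressed_areas(forces, 16.0)   # member areas in cm^2
```

For indeterminate structures the forces change as the areas change, and the update must be iterated, which is where the staged constrained subproblems described in the abstract come in.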
Analysis and System Design Framework for Infrared Spatial Heterodyne Spectrometers
Cooke, B.J.; Smith, B.W.; Laubscher, B.E.; Villeneuve, P.V.; Briles, S.D.
1999-04-05
The authors present a preliminary analysis and design framework developed for the evaluation and optimization of infrared, imaging Spatial Heterodyne Spectrometer (SHS) electro-optic systems. As with conventional interferometric spectrometers, SHS modeling requires an integrated analysis environment for rigorous evaluation of system error propagation due to the detection process, detection noise, system motion, retrieval algorithm, and calibration algorithm. The analysis tools provide for optimization of critical system parameters and components, including: (1) optical aperture, f-number, and spectral transmission; (2) SHS interferometer grating and Littrow parameters; (3) image plane requirements as well as cold shield, optical filtering, focal-plane dimensions, pixel dimensions, and quantum efficiency; (4) SHS spatial and temporal sampling parameters; and (5) retrieval and calibration algorithm issues.
A Communication-Optimal Framework for Contracting Distributed Tensors
Rajbhandari, Samyam; Nikam, Akshay; Lai, Pai-Wei; Stock, Kevin; Krishnamoorthy, Sriram; Sadayappan, Ponnuswamy
2014-11-16
Tensor contractions are extremely compute-intensive generalized matrix multiplication operations encountered in many computational science fields, such as quantum chemistry and nuclear physics. Unlike distributed matrix multiplication, which has been extensively studied, limited work has been done in understanding distributed tensor contractions. In this paper, we characterize distributed tensor contraction algorithms on torus networks. We develop a framework with three fundamental communication operators to generate communication-efficient contraction algorithms for arbitrary tensor contractions. We show that, for a given amount of memory per processor, our framework is communication optimal for all tensor contractions. We demonstrate the performance and scalability of our framework on up to 262,144 cores of a BG/Q supercomputer using five tensor contraction examples.
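A tensor contraction is, as the abstract says, a generalized matrix multiplication. A serial sketch of a single contraction follows; the paper's contribution is the distributed communication pattern, which is not shown here.

```python
def contract_last_first(A, B):
    """C[i][j][l] = sum_k A[i][j][k] * B[k][l]: contract the last index of a
    3-index tensor A with the first index of a matrix B (nested-list tensors)."""
    I, J, K, L = len(A), len(A[0]), len(A[0][0]), len(B[0])
    return [[[sum(A[i][j][k] * B[k][l] for k in range(K))
              for l in range(L)]
             for j in range(J)]
            for i in range(I)]

A = [[[1, 2], [3, 4]]]         # shape (1, 2, 2)
B = [[1, 0], [0, 1]]           # 2x2 identity
C = contract_last_first(A, B)  # contracting with the identity reproduces A
```

Setting B to the identity makes the expected result obvious; with general operands the same triple loop is the arithmetic that the distributed algorithms in the paper partition across the torus network.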
Optimal design of compact spur gear reductions
NASA Technical Reports Server (NTRS)
Savage, M.; Lattime, S. B.; Kimmel, J. A.; Coe, H. H.
1992-01-01
The optimal design of compact spur gear reductions includes the selection of bearing and shaft proportions in addition to gear mesh parameters. Designs for single mesh spur gear reductions are based on optimization of system life, system volume, and system weight including gears, support shafts, and the four bearings. The overall optimization allows component properties to interact, yielding the best composite design. A modified feasible directions search algorithm directs the optimization through a continuous design space. Interpolated polynomials expand the discrete bearing properties and proportions into continuous variables for optimization. After finding the continuous optimum, the designer can analyze near optimal designs for comparison and selection. Design examples show the influence of the bearings on the optimal configurations.
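The expansion of discrete bearing properties into continuous design variables can be sketched as follows. The catalog numbers are hypothetical, and simple linear interpolation stands in for the interpolated polynomials the abstract describes.

```python
def linear_interp(xs, ys):
    """Return a function interpolating the discrete samples (xs, ys) linearly,
    so a continuous optimizer can search between catalog entries."""
    def f(x):
        if x <= xs[0]:
            return ys[0]
        if x >= xs[-1]:
            return ys[-1]
        for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])):
            if x0 <= x <= x1:
                return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    return f

# Hypothetical catalog: bearing bore diameter (mm) vs. dynamic load rating (kN).
bore = [20.0, 25.0, 30.0, 35.0]
rating = [12.7, 14.0, 19.5, 25.5]
load_rating = linear_interp(bore, rating)
```

After the continuous search converges, the designer rounds back to the nearest real catalog entries and compares the resulting near-optimal designs, as the abstract describes.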
Pathway Design, Engineering, and Optimization.
Garcia-Ruiz, Eva; HamediRad, Mohammad; Zhao, Huimin
2016-09-16
The microbial metabolic versatility found in nature has inspired scientists to create microorganisms capable of producing value-added compounds. Many endeavors have been made to transfer and/or combine pathways, and existing or even engineered enzymes with new functions, in tractable microorganisms to generate new metabolic routes for drug, biofuel, and specialty chemical production. However, the success of these pathways can be impeded by complications ranging from inherent pathway failure to cell perturbations. To overcome these shortcomings, a wide variety of strategies have been developed. This chapter reviews the computational algorithms and experimental tools used to design efficient metabolic routes, and to construct and optimize biochemical pathways to produce chemicals of high interest.
Optimized IR synchrotron beamline design.
Moreno, Thierry
2015-09-01
Synchrotron infrared beamlines are powerful tools for performing spectroscopy on microscopic length scales but require working with large bending-magnet source apertures in order to provide intense photon beams to the experiments. Many infrared beamlines use a single toroidal-shaped mirror to focus the source emission, which for large apertures generates beams with significant geometrical aberrations resulting from the shape of the source and the beamline optics. In this paper, an optical layout optimized for synchrotron infrared beamlines, which almost totally removes the geometrical aberrations of the source, is presented and analyzed. This layout is already operational on the IR beamline of the Brazilian synchrotron. An infrared beamline design based on a SOLEIL bending-magnet source is given as an example, which could be useful for future IR beamline improvements at this facility.
Design sensitivity analysis and optimization tool (DSO) for sizing design applications
NASA Technical Reports Server (NTRS)
Chang, Kuang-Hua; Choi, Kyung K.; Perng, Jyh-Hwa
1992-01-01
The DSO tool, a structural design software system that provides the designer with a graphics-based, menu-driven design environment to perform easy design optimization for general applications, is presented. Three design stages, preprocessing, design sensitivity analysis, and postprocessing, are implemented in the DSO to allow the designer to carry out the design process systematically. A framework, including database, user interface, foundation class, and remote module, has been designed and implemented to facilitate software development for the DSO. A number of dedicated commercial software packages have been integrated in the DSO to support the design procedures. Instead of parameterizing an FEM, design parameters are defined on a geometric model associated with physical quantities, and the continuum design sensitivity analysis theory is implemented to compute design sensitivity coefficients using postprocessing data from the analysis codes. A tracked vehicle road wheel is given as a sizing design application to demonstrate the DSO's easy and convenient design optimization process.
Handling Qualities Optimization for Rotorcraft Conceptual Design
NASA Technical Reports Server (NTRS)
Lawrence, Ben; Theodore, Colin R.; Berger, Tom
2016-01-01
Over the past decade, NASA, under a succession of rotary-wing programs, has been moving towards coupling multiple discipline analyses in a rigorous, consistent manner to evaluate rotorcraft conceptual designs. Handling qualities is one of the component analyses to be included in a future NASA Multidisciplinary Analysis and Optimization framework for conceptual design of VTOL aircraft. Similarly, the future vision for the capability of the Concept Design and Assessment Technology Area (CD&A-TA) of the U.S. Army Aviation Development Directorate also includes a handling qualities component. SIMPLI-FLYD is a tool jointly developed by NASA and the U.S. Army to perform modeling and analysis for the assessment of the flight dynamics and control aspects of the handling qualities of rotorcraft conceptual designs. An exploration of handling qualities analysis has been carried out using SIMPLI-FLYD in illustrative scenarios of a tiltrotor in forward flight and a single-main-rotor helicopter in hover. Using SIMPLI-FLYD and the conceptual design tool NDARC integrated into a single process, the effects of variations of design parameters such as tail or rotor size were evaluated in the form of margins to fixed- and rotary-wing handling qualities metrics as well as the vehicle empty weight. The handling qualities design margins are shown to vary across the flight envelope due to both changing flight dynamics and control characteristics and changing handling qualities specification requirements. The current SIMPLI-FLYD capability and future developments are discussed in the context of an overall rotorcraft conceptual design process.
McCorkle, Douglas S.; Bryden, Kenneth M.
2011-01-01
Several recent reports and workshops have identified integrated computational engineering as an emerging technology with the potential to transform engineering design. The goal is to integrate geometric models, analyses, simulations, optimization and decision-making tools, and all other aspects of the engineering process into a shared, interactive computer-generated environment that facilitates multidisciplinary and collaborative engineering. While integrated computational engineering environments can be constructed from scratch with high-level programming languages, the complexity of these proposed environments makes this type of approach prohibitively slow and expensive. Rather, a high-level software framework is needed to provide the user with the capability to construct an application in an intuitive manner using existing models and engineering tools with minimal programming. In this paper, we present an exploratory open source software framework that can be used to integrate the geometric models, computational fluid dynamics (CFD), and optimization tools needed for shape optimization of complex systems. This framework is demonstrated using the multiphase flow analysis of a complete coal transport system for an 800 MW pulverized coal power station. The framework uses engineering objects and three-dimensional visualization to enable the user to interactively design and optimize the performance of the coal transport system.
Program Aids Analysis And Optimization Of Design
NASA Technical Reports Server (NTRS)
Rogers, James L., Jr.; Lamarsh, William J., II
1994-01-01
NETS/PROSSS (NETS Coupled With Programming System for Structural Synthesis) is a computer program developed to combine NETS (MSC-21588), a neural-network application program, with CONMIN (Constrained Function Minimization, ARC-10836), an optimization program. It enables the user to obtain a nearly optimal design, which is then used as the starting point in the normal optimization process, possibly enabling the user to converge to the optimal solution in significantly fewer iterations. NETS/PROSSS is written in the C language and FORTRAN 77.
Integrated multidisciplinary design optimization of rotorcraft
NASA Technical Reports Server (NTRS)
Adelman, Howard M.; Mantay, Wayne R.
1989-01-01
The NASA/Army research plan for developing the logic elements for helicopter rotor design optimization by integrating appropriate disciplines and accounting for important interactions among the disciplines is discussed. The paper describes the optimization formulation in terms of the objective function, design variables, and constraints. The analysis aspects are discussed, and an initial effort at defining the interdisciplinary coupling is summarized. Results are presented on the achievements made in the rotor aerodynamic performance optimization for minimum hover horsepower, rotor dynamic optimization for vibration reduction, rotor structural optimization for minimum weight, and integrated aerodynamic load/dynamics optimization for minimum vibration and weight.
Multi-Disciplinary Design Optimization Using WAVE
NASA Technical Reports Server (NTRS)
Irwin, Keith
2000-01-01
The objective of this project was to develop an associative control structure (framework) in the UG WAVE environment enabling multi-disciplinary design of turbine propulsion systems. The capabilities of WAVE were evaluated to assess its use as a rapid optimization and productivity tool. This project also identified future WAVE product enhancements that will make the tool still more beneficial for product development.
Optimal control concepts in design sensitivity analysis
NASA Technical Reports Server (NTRS)
Belegundu, Ashok D.
1987-01-01
A close link is established between open loop optimal control theory and optimal design by noting certain similarities in the gradient calculations. The resulting benefits include a unified approach, together with physical insights in design sensitivity analysis, and an efficient approach for simultaneous optimal control and design. Both matrix displacement and matrix force methods are considered, and results are presented for dynamic systems, structures, and elasticity problems.
An OER Architecture Framework: Needs and Design
ERIC Educational Resources Information Center
Khanna, Pankaj; Basak, P. C.
2013-01-01
This paper describes an open educational resources (OER) architecture framework that would bring significant improvements in a well-structured and systematic way to the educational practices of distance education institutions of India. The OER architecture framework is articulated with six dimensions: pedagogical, technological, managerial,…
Optimizing investment fund allocation using vehicle routing problem framework
NASA Astrophysics Data System (ADS)
Mamat, Nur Jumaadzan Zaleha; Jaaman, Saiful Hafizah; Ahmad, Rokiah Rozita
2014-07-01
The objective of investment is to maximize total returns or minimize total risks. To determine the optimum order of investment, the vehicle routing problem method is used. This method, widely used in the field of resource distribution, shares almost similar characteristics with the problem of investment fund allocation. In this paper we describe and elucidate the concept of using a vehicle routing problem framework to optimize the allocation of investment funds. To better illustrate these similarities, sectorial data from FTSE Bursa Malaysia is used. Results show that different utility values for risk-averse investors generate the same investment routes.
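A common construction heuristic for vehicle routing problems is nearest-neighbor ordering. The sketch below applies it to hypothetical sector-to-sector "distances"; it is not the paper's data or exact method, only an illustration of routing-style ordering.

```python
def greedy_route(cost, start=0):
    """Nearest-neighbor construction heuristic: repeatedly move to the
    cheapest unvisited stop (here, an investment sector)."""
    n = len(cost)
    route, visited = [start], {start}
    while len(route) < n:
        here = route[-1]
        nxt = min((j for j in range(n) if j not in visited),
                  key=lambda j: cost[here][j])
        route.append(nxt)
        visited.add(nxt)
    return route

# Hypothetical 4-sector transition-cost matrix (e.g. a risk-based measure).
cost = [[0, 2, 9, 1],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [1, 4, 3, 0]]
order = greedy_route(cost)   # visits 0 -> 3 -> 2 -> 1
```

In the investment analogy, the "route" is the order in which funds are allocated to sectors, with the cost matrix encoding the investor's utility or risk trade-offs.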
Best management practices (BMPs) are perceived as being effective in reducing nutrient loads transported from non-point sources (NPS) to receiving water bodies. The objective of this study was to develop a modeling-optimization framework that can be used by watershed management p...
Scalar and Multivariate Approaches for Optimal Network Design in Antarctica
NASA Astrophysics Data System (ADS)
Hryniw, Natalia
Observations are crucial for weather and climate, not only for daily forecasts and logistical purposes, but also for maintaining representative records and for tuning atmospheric models. Here, scalar theory for optimal network design is extended to a multivariate framework to allow optimal station siting for full-field optimization. Ensemble sensitivity theory is expanded to produce the covariance trace approach, which optimizes the trace of the covariance matrix. Relative entropy is also used for multivariate optimization as an information-theoretic approach to finding optimal locations. Antarctic surface temperature data are used as a testbed for these methods. The two methods produce different results, each tied to the fundamental physical parameters of the Antarctic temperature field.
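A crude stand-in for trace-based station selection is sketched below: stations are ranked by the variance they would capture. The covariance matrix is hypothetical, and the actual covariance trace approach works through ensemble sensitivity on analysis-error covariances rather than this simple ranking.

```python
def pick_stations(cov, k):
    """Pick k station indices maximizing the trace of the retained covariance
    submatrix. Since that trace is just the sum of the selected diagonal
    entries (variances), ranking by variance suffices for this toy criterion."""
    order = sorted(range(len(cov)), key=lambda i: cov[i][i], reverse=True)
    return sorted(order[:k])

# Hypothetical 4-station surface temperature covariance matrix (deg^2).
cov = [[2.0, 0.3, 0.1, 0.0],
       [0.3, 5.0, 0.2, 0.1],
       [0.1, 0.2, 1.0, 0.0],
       [0.0, 0.1, 0.0, 3.0]]
picked = pick_stations(cov, 2)   # the two highest-variance stations
```

The off-diagonal terms, which this toy criterion ignores, are exactly what the multivariate framework in the thesis is designed to exploit.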
A hydroeconomic modeling framework for optimal integrated management of forest and water
NASA Astrophysics Data System (ADS)
Garcia-Prats, Alberto; del Campo, Antonio D.; Pulido-Velazquez, Manuel
2016-10-01
Forests play a determinant role in the hydrologic cycle, with water being the most important ecosystem service they provide in semiarid regions. However, this contribution is usually neither quantified nor explicitly valued. The aim of this study is to develop a novel hydroeconomic modeling framework for assessing and designing the optimal integrated forest and water management for forested catchments. The optimization model explicitly integrates changes in water yield in the stands (increase in groundwater recharge) induced by forest management and the value of the additional water provided to the system. The model determines the optimal schedule of silvicultural interventions in the stands of the catchment in order to maximize the total net benefit in the system. Canopy cover and biomass evolution over time were simulated using growth and yield allometric equations specific for the species in Mediterranean conditions. Silvicultural operation costs according to stand density and canopy cover were modeled using local cost databases. Groundwater recharge was simulated using HYDRUS, calibrated and validated with data from the experimental plots. In order to illustrate the presented modeling framework, a case study was carried out in a planted pine forest (Pinus halepensis Mill.) located in south-western Valencia province (Spain). The optimized scenario increased groundwater recharge. This novel modeling framework can be used in the design of a "payment for environmental services" scheme in which water beneficiaries could contribute to fund and promote efficient forest management operations.
Optimal design criteria - prediction vs. parameter estimation
NASA Astrophysics Data System (ADS)
Waldl, Helmut
2014-05-01
G-optimality is a popular design criterion for optimal prediction; it aims to minimize the kriging variance over the whole design region. A G-optimal design minimizes the maximum variance of all predicted values. If we use kriging methods for prediction, it is natural to use the kriging variance as a measure of uncertainty for the estimates. However, computing the kriging variance, and even more so the empirical kriging variance, is computationally very costly, and finding the maximum kriging variance in high-dimensional regions is so time-demanding that in practice the G-optimal design cannot really be found with currently available computing equipment. We cannot always avoid this problem by using space-filling designs, because small designs that minimize the empirical kriging variance are often non-space-filling. D-optimality is the design criterion related to parameter estimation. A D-optimal design maximizes the determinant of the information matrix of the estimates. D-optimality in terms of trend parameter estimation and D-optimality in terms of covariance parameter estimation yield basically different designs. The Pareto frontier of these two competing determinant criteria corresponds to designs that perform well under both criteria. Under certain conditions, searching for the G-optimal design on this Pareto frontier yields almost as good results as searching for it in the whole design region, while the maximum of the empirical kriging variance has to be computed only a few times. The method is demonstrated by means of a computer simulation experiment based on data provided by the Belgian institute Management Unit of the North Sea Mathematical Models (MUMM) that describe the evolution of inorganic and organic carbon, nutrients, phytoplankton, bacteria and zooplankton in the Southern Bight of the North Sea.
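The G- and D-criteria can be made concrete for the ordinary regression model y = b0 + b1*x (a simpler setting than the kriging models above, used here only to illustrate the two criteria). The sketch evaluates both for candidate two-point designs on [-1, 1].

```python
def info_matrix(design):
    """Information matrix X'X for the simple linear model y = b0 + b1*x."""
    n, sx = len(design), sum(design)
    sxx = sum(x * x for x in design)
    return [[n, sx], [sx, sxx]]

def d_criterion(design):
    """D-criterion: determinant of the information matrix (to be maximized)."""
    m = info_matrix(design)
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def g_criterion(design, grid):
    """G-criterion: maximum (scaled) prediction variance over the grid
    (to be minimized)."""
    m, det = info_matrix(design), d_criterion(design)
    inv = [[m[1][1] / det, -m[0][1] / det],
           [-m[1][0] / det, m[0][0] / det]]
    return max(inv[0][0] + 2 * x * inv[0][1] + x * x * inv[1][1] for x in grid)

grid = [i / 10 - 1 for i in range(21)]                 # design region [-1, 1]
candidates = [[-1.0, 1.0], [-0.5, 0.5], [0.0, 1.0]]    # two-point designs
best = max(candidates, key=d_criterion)                # D-optimal: {-1, +1}
```

For this model the D-optimal design {-1, +1} also minimizes the maximum prediction variance, illustrating the Kiefer-Wolfowitz equivalence of D- and G-optimality that the kriging setting above lacks.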
Optimal experimental design strategies for detecting hormesis.
Dette, Holger; Pepelyshev, Andrey; Wong, Weng Kee
2011-12-01
Hormesis is a widely observed phenomenon in many branches of life sciences, ranging from toxicology studies to agronomy, with obvious public health and risk assessment implications. We address optimal experimental design strategies for determining the presence of hormesis in a controlled environment using the recently proposed Hunt-Bowman model. We propose alternative models that have an implicit hormetic threshold, discuss their advantages over current models, and construct and study properties of optimal designs for (i) estimating model parameters, (ii) estimating the threshold dose, and (iii) testing for the presence of hormesis. We also determine maximin optimal designs that maximize the minimum of the design efficiencies when we have multiple design criteria or there is model uncertainty where we have a few plausible models of interest. We apply these optimal design strategies to a teratology study and show that the proposed designs outperform the implemented design by a wide margin for many situations.
Optimal design of isotope labeling experiments.
Yang, Hong; Mandy, Dominic E; Libourel, Igor G L
2014-01-01
Stable isotope labeling experiments (ILE) constitute a powerful methodology for estimating metabolic fluxes. An optimal label design for such an experiment is necessary to maximize the precision with which fluxes can be determined. But often, precision gained in the determination of one flux comes at the expense of the precision of other fluxes, and an appropriate label design therefore foremost depends on the question the investigator wants to address. One could liken ILE to shadows that metabolism casts on products. Optimal label design is the placement of the lamp: creating clear shadows for some parts of metabolism and obscuring others. An optimal isotope label design is influenced by: (1) the network structure; (2) the true flux values; (3) the available label measurements; and (4) commercially available substrates. The first two aspects are dictated by nature and constrain any optimal design. The second two aspects are suitable design parameters. To create an optimal label design, an explicit optimization criterion needs to be formulated. This usually is a property of the flux covariance matrix, which can be augmented by weighting label substrate cost. An optimal design is found by using such a criterion as an objective function for an optimizer. This chapter uses a simple elementary metabolite units (EMU) representation of the TCA cycle to illustrate the process of experimental design of isotope-labeled substrates.
Yang, Guoxiang; Best, Elly P H
2015-09-15
Best management practices (BMPs) can be used effectively to reduce nutrient loads transported from non-point sources to receiving water bodies. However, methodologies of BMP selection and placement in a cost-effective way are needed to assist watershed management planners and stakeholders. We developed a novel modeling-optimization framework that can be used to find cost-effective solutions of BMP placement to attain nutrient load reduction targets. This was accomplished by integrating a GIS-based BMP siting method, a WQM-TMDL-N modeling approach to estimate total nitrogen (TN) loading, and a multi-objective optimization algorithm. Wetland restoration and buffer strip implementation were the two BMP categories used to explore the performance of this framework, both differing greatly in complexity of spatial analysis for site identification. Minimizing TN load and BMP cost were the two objective functions for the optimization process. The performance of this framework was demonstrated in the Tippecanoe River watershed, Indiana, USA. Optimized scenario-based load reduction indicated that the wetland subset selected by the minimum scenario had the greatest N removal efficiency. Buffer strips were more effective for load removal than wetlands. The optimized solutions provided a range of trade-offs between the two objective functions for both BMPs. This framework can be expanded conveniently to a regional scale because the NHDPlus catchment serves as its spatial computational unit. The present study demonstrated the potential of this framework to find cost-effective solutions to meet a water quality target, such as a 20% TN load reduction, under different conditions.
Integrated multidisciplinary design optimization of rotorcraft
NASA Technical Reports Server (NTRS)
Adelman, Howard M.; Mantay, Wayne R.
1989-01-01
The NASA/Army research plan for developing the logic elements for helicopter rotor design optimization by integrating appropriate disciplines and accounting for important interactions among the disciplines is discussed. The optimization formulation is described in terms of the objective function, design variables, and constraints. The analysis aspects are discussed, and an initial effort at defining the interdisciplinary coupling is summarized. Results are presented on the achievements made in the rotor dynamic optimization for vibration reduction, rotor structural optimization for minimum weight, and integrated aerodynamic load/dynamics optimization for minimum vibration and weight.
Novel integrated design framework for radio frequency quadrupoles
NASA Astrophysics Data System (ADS)
Jolly, Simon; Easton, Matthew; Lawrie, Scott; Letchford, Alan; Pozimski, Jürgen; Savage, Peter
2014-01-01
A novel design framework for Radio Frequency Quadrupoles (RFQs), developed as part of the design of the FETS RFQ, is presented. This framework integrates several previously disparate steps in the design of RFQs, including the beam dynamics design, mechanical design, electromagnetic, thermal and mechanical modelling and beam dynamics simulations. Each stage of the design process is described in detail, including the various software options and reasons for the final software suite selected. Results are given for each of these steps, describing how each stage affects the overall design process, with an emphasis on the resulting design choices for the FETS RFQ.
Building a Framework for Engineering Design Experiences in High School
ERIC Educational Resources Information Center
Denson, Cameron D.; Lammi, Matthew
2014-01-01
In this article, Denson and Lammi put forth a conceptual framework that will help promote the successful infusion of engineering design experiences into high school settings. When considering a conceptual framework of engineering design in high school settings, it is important to consider the complex issue at hand. For the purposes of this…
A Design Framework for Online Teacher Professional Development Communities
ERIC Educational Resources Information Center
Liu, Katrina Yan
2012-01-01
This paper provides a design framework for building online teacher professional development communities for preservice and inservice teachers. The framework is based on a comprehensive literature review on the latest technology and epistemology of online community and teacher professional development, comprising four major design factors and three…
Design optimization studies using COSMIC NASTRAN
NASA Technical Reports Server (NTRS)
Pitrof, Stephen M.; Bharatram, G.; Venkayya, Vipperla B.
1993-01-01
The purpose of this study is to create, test and document a procedure to integrate mathematical optimization algorithms with COSMIC NASTRAN. This procedure is very important to structural design engineers who wish to capitalize on optimization methods to ensure that their design is optimized for its intended application. The OPTNAST computer program was created to link NASTRAN and design optimization codes into one package. This implementation was tested using two truss structure models and optimizing their designs for minimum weight, subject to multiple loading conditions and displacement and stress constraints. However, the process is generalized so that an engineer could design other types of elements by adding to or modifying some parts of the code.
Optimization, an Important Stage of Engineering Design
ERIC Educational Resources Information Center
Kelley, Todd R.
2010-01-01
A number of leaders in technology education have indicated that a major difference between the technological design process and the engineering design process is analysis and optimization. The analysis stage of the engineering design process is when mathematical models and scientific principles are employed to help the designer predict design…
Optimal multiobjective design of digital filters using spiral optimization technique.
Ouadi, Abderrahmane; Bentarzi, Hamid; Recioui, Abdelmadjid
2013-01-01
The multiobjective design of digital filters using spiral optimization technique is considered in this paper. This new optimization tool is a metaheuristic technique inspired by the dynamics of spirals. It is characterized by its robustness, immunity to local optima trapping, relative fast convergence and ease of implementation. The objectives of filter design include matching some desired frequency response while having minimum linear phase; hence, reducing the time response. The results demonstrate that the proposed problem solving approach blended with the use of the spiral optimization technique produced filters which fulfill the desired characteristics and are of practical use.
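As an illustrative sketch of the spiral dynamics idea (a generic 2-D version on a toy objective, not the authors' filter-design setup): candidate points repeatedly rotate around and contract toward the current best point, which balances exploration (rotation) against exploitation (contraction).

```python
import math, random

def spiral_optimize(f, n_points=20, iters=200, r=0.95, theta=math.pi / 4, seed=0):
    # 2-D spiral optimization sketch: all points spiral in toward the
    # current best point under a rotate-by-theta, contract-by-r map.
    rng = random.Random(seed)
    pts = [(rng.uniform(-5, 5), rng.uniform(-5, 5)) for _ in range(n_points)]
    best = min(pts, key=f)
    c, s = math.cos(theta), math.sin(theta)
    for _ in range(iters):
        new_pts = []
        for (x, y) in pts:
            dx, dy = x - best[0], y - best[1]
            # rotate the offset by theta, contract by r, re-center on the best point
            new_pts.append((best[0] + r * (c * dx - s * dy),
                            best[1] + r * (s * dx + c * dy)))
        pts = new_pts
        cand = min(pts, key=f)
        if f(cand) < f(best):
            best = cand
    return best

sphere = lambda p: p[0] ** 2 + p[1] ** 2
x_best = spiral_optimize(sphere)
print(x_best, sphere(x_best))   # converges toward the minimum at (0, 0)
```

The contraction factor r and rotation angle theta here are arbitrary illustrative choices; a filter-design application would replace the sphere function with a frequency-response error measure.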
Optimal Multiobjective Design of Digital Filters Using Taguchi Optimization Technique
NASA Astrophysics Data System (ADS)
Ouadi, Abderrahmane; Bentarzi, Hamid; Recioui, Abdelmadjid
2014-01-01
The multiobjective design of digital filters using the powerful Taguchi optimization technique is considered in this paper. This relatively new optimization tool has been recently introduced to the field of engineering and is based on orthogonal arrays. It is characterized by its robustness, immunity to local optima trapping, relative fast convergence and ease of implementation. The objectives of filter design include matching some desired frequency response while having minimum linear phase; hence, reducing the time response. The results demonstrate that the proposed problem solving approach blended with the use of the Taguchi optimization technique produced filters that fulfill the desired characteristics and are of practical use.
A multi-fidelity framework for physics based rotor blade simulation and optimization
NASA Astrophysics Data System (ADS)
Collins, Kyle Brian
with lower fidelity models. This thesis documents the development of automated low and high fidelity physics based rotor simulation frameworks. The low fidelity framework uses a comprehensive code with simplified aerodynamics. The high fidelity model uses a parallel processor capable CFD/CSD methodology. Both low and high fidelity frameworks include an aeroacoustic simulation for prediction of noise. A synergistic process is developed that uses both the low and high fidelity frameworks together to build approximate models of important high fidelity metrics as functions of certain design variables. To test the process, a 4-bladed hingeless rotor model is used as a baseline. The design variables investigated include tip geometry and spanwise twist distribution. Approximation models are built for metrics related to rotor efficiency and vibration using the results from 60+ high fidelity (CFD/CSD) experiments and 400+ low fidelity experiments. Optimization using the approximation models found the Pareto Frontier anchor points, or the design having maximum rotor efficiency and the design having minimum vibration. Various Pareto generation methods are used to find designs on the frontier between these two anchor designs. When tested in the high fidelity framework, the Pareto anchor designs are shown to be very good designs when compared with other designs from the high fidelity database. This provides evidence that the process proposed has merit. Ultimately, this process can be utilized by industry rotor designers with their existing tools to bring high fidelity analysis into the preliminary design stage of rotors. In conclusion, the methods developed and documented in this thesis have made several novel contributions. First, an automated high fidelity CFD based forward flight simulation framework has been built for use in preliminary design optimization. The framework was built around an integrated, parallel processor capable CFD/CSD/AA process. Second, a novel method of
Machnes, S.; Sander, U.; Glaser, S. J.; Schulte-Herbrueggen, T.; Fouquieres, P. de; Gruslys, A.; Schirmer, S.
2011-08-15
To pave the way for novel applications in quantum simulation, computation, and technology, increasingly large quantum systems have to be steered with high precision. Iteratively turning the time course of pulses, i.e., piecewise-constant control amplitudes, into an optimized shape is a typical task amenable to numerical optimal control. Here, we present a comparative study of optimal-control algorithms for a wide range of finite-dimensional applications. We focus on the most commonly used algorithms: GRAPE methods, which update all controls concurrently, and Krotov-type methods, which do so sequentially. Guidelines for their use are given and open research questions are pointed out. Moreover, we introduce a unifying algorithmic framework, DYNAMO (dynamic optimization platform), designed to provide the quantum-technology community with a convenient MATLAB-based tool set for optimal control. In addition, it gives researchers in optimal-control techniques a framework for benchmarking and comparing newly proposed algorithms with the state of the art. It allows a mix-and-match approach with various types of gradients, update and step-size methods, as well as subspace choices. Open-source code including examples is made available at http://qlib.info.
A computational framework to empower probabilistic protein design
Fromer, Menachem; Yanover, Chen
2008-01-01
Motivation: The task of engineering a protein to perform a target biological function is known as protein design. A commonly used paradigm casts this functional design problem as a structural one, assuming a fixed backbone. In probabilistic protein design, positional amino acid probabilities are used to create a random library of sequences to be simultaneously screened for biological activity. Clearly, certain choices of probability distributions will be more successful in yielding functional sequences. However, since the number of sequences is exponential in protein length, computational optimization of the distribution is difficult. Results: In this paper, we develop a computational framework for probabilistic protein design following the structural paradigm. We formulate the distribution of sequences for a structure using the Boltzmann distribution over their free energies. The corresponding probabilistic graphical model is constructed, and we apply belief propagation (BP) to calculate marginal amino acid probabilities. We test this method on a large structural dataset and demonstrate the superiority of BP over previous methods. Nevertheless, since the results obtained by BP are far from optimal, we thoroughly assess the paradigm using high-quality experimental data. We demonstrate that, for small-scale subproblems, BP attains identical results to those produced by exact inference on the paradigmatic model. However, quantitative analysis shows that the predicted distributions differ significantly from the experimental data. These findings, along with the excellent performance we observed using BP on the smaller problems, suggest potential shortcomings of the paradigm. We conclude with a discussion of how it may be improved in the future. Contact: fromer@cs.huji.ac.il PMID:18586717
Optimal design of structures with buckling constraints.
NASA Technical Reports Server (NTRS)
Kiusalaas, J.
1973-01-01
The paper presents an iterative, finite element method for minimum weight design of structures with respect to buckling constraints. The redesign equation is derived from the optimality criterion, as opposed to a numerical search procedure, and can handle problems that are characterized by the existence of two fundamental buckling modes at the optimal design. Application of the method is illustrated by beam and orthogonal frame design problems.
Multidisciplinary Optimization Methods for Aircraft Preliminary Design
NASA Technical Reports Server (NTRS)
Kroo, Ilan; Altus, Steve; Braun, Robert; Gage, Peter; Sobieski, Ian
1994-01-01
This paper describes a research program aimed at improved methods for multidisciplinary design and optimization of large-scale aeronautical systems. The research involves new approaches to system decomposition, interdisciplinary communication, and methods of exploiting coarse-grained parallelism for analysis and optimization. A new architecture, that involves a tight coupling between optimization and analysis, is intended to improve efficiency while simplifying the structure of multidisciplinary, computation-intensive design problems involving many analysis disciplines and perhaps hundreds of design variables. Work in two areas is described here: system decomposition using compatibility constraints to simplify the analysis structure and take advantage of coarse-grained parallelism; and collaborative optimization, a decomposition of the optimization process to permit parallel design and to simplify interdisciplinary communication requirements.
Interior Design Education within a Human Ecological Framework
ERIC Educational Resources Information Center
Kaup, Migette L.; Anderson, Barbara G.; Honey, Peggy
2007-01-01
An education based in human ecology can greatly benefit interior designers as they work to understand and improve the human condition. Design programs housed in colleges focusing on human ecology can improve the interior design profession by taking advantage of their home base and emphasizing the human ecological framework in the design curricula.…
Optimization design of electromagnetic shielding composites
NASA Astrophysics Data System (ADS)
Qu, Zhaoming; Wang, Qingguo; Qin, Siliang; Hu, Xiaofeng
2013-03-01
A physical model of the effective electromagnetic parameters of composites, together with prediction formulas for the composites' shielding effectiveness and reflectivity, was derived from micromechanics, the variational principle, and electromagnetic wave transmission theory. A multi-objective optimization design of multilayer composites was then carried out using a genetic algorithm. The optimized results indicate that the material parameter proportioning with the greatest absorption can be obtained while still satisfying a minimum shielding effectiveness in a given frequency band. The validity of the optimization design model was verified, and the scheme has theoretical value and practical significance for the design of high-efficiency shielding composites.
Topology optimization design of space rectangular mirror
NASA Astrophysics Data System (ADS)
Qu, Yanjun; Wang, Wei; Liu, Bei; Li, Xupeng
2016-10-01
A conceptual lightweight rectangular mirror is designed based on the theory of topology optimization, and its specific structural dimensions are determined through sensitivity analysis and size optimization in this paper. Under gravity loading along the optical axis, and compared with mirrors designed by the traditional finite-element-based method, the topology-optimized reflectors supported at six peripheral points are superior in lightweight ratio, structural stiffness, and reflective surface accuracy. This suggests that the lightweighting method presented here is effective and has potential value for the design of rectangular reflectors.
Optimization methods applied to hybrid vehicle design
NASA Technical Reports Server (NTRS)
Donoghue, J. F.; Burghart, J. H.
1983-01-01
The use of optimization methods as an effective design tool in the design of hybrid vehicle propulsion systems is demonstrated. Optimization techniques were used to select values for three design parameters (battery weight, heat engine power rating, and power split between the two on-board energy sources) such that various measures of vehicle performance (acquisition cost, life cycle cost, and petroleum consumption) were optimized. The approach produced designs which were often significant improvements over hybrid designs already reported in the literature. The principal conclusions are as follows. First, it was found that the strategy used to split the required power between the two on-board energy sources can have a significant effect on life cycle cost and petroleum consumption. Second, the optimization program should be constructed so that performance measures and design variables can be easily changed. Third, the vehicle simulation program has a significant effect on the computer run time of the overall optimization program; run time can be significantly reduced by proper design of the types of trips the vehicle takes in a one-year period. Fourth, care must be taken in designing the cost and constraint expressions which are used in the optimization so that they are relatively smooth functions of the design variables. Fifth, proper handling of constraints on battery weight and heat engine rating, variables which must be large enough to meet power demands, is particularly important for the success of an optimization study. Finally, the principal conclusion is that optimization methods provide a practical tool for carrying out the design of a hybrid vehicle propulsion system.
Exponential approximations in optimal design
NASA Technical Reports Server (NTRS)
Belegundu, A. D.; Rajan, S. D.; Rajgopal, J.
1990-01-01
One-point and two-point exponential functions have been developed and proved to be very effective approximations of structural response. The exponential has been compared to the linear, reciprocal, and quadratic fit methods on four selected test problems in structural analysis. The use of such approximations is attractive in structural optimization because it reduces the number of exact analyses, which involve computationally expensive finite element analysis.
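A small numerical illustration of why such fits differ (a hypothetical response, not one of the paper's four test problems): for a stress-like response g(x) = 1/x², the one-point exponential (power-law) approximation matched to the slope at x0 reproduces the response exactly, while the linear and reciprocal fits do not.

```python
def approximations(g, dg, x0, x):
    """One-point approximations of a response g about x0, evaluated at x."""
    g0, g0p = g(x0), dg(x0)
    linear = g0 + g0p * (x - x0)
    # reciprocal: the approximation is linear in the variable 1/x
    reciprocal = g0 + (-x0 ** 2 * g0p) * (1.0 / x - 1.0 / x0)
    # one-point exponential (power-law): g ~ g0 * (x / x0) ** p,
    # with the exponent p matched to the slope at x0
    p = x0 * g0p / g0
    exponential = g0 * (x / x0) ** p
    return linear, reciprocal, exponential

# stress-like response g(x) = 1/x**2 (e.g. bending stress vs. a sizing variable)
g = lambda x: 1.0 / x ** 2
dg = lambda x: -2.0 / x ** 3
lin, rec, expo = approximations(g, dg, 1.0, 2.0)
print(lin, rec, expo, g(2.0))  # -1.0 0.0 0.25 0.25 -- the exponential fit is exact here
```

The exponential fit is exact here because the chosen response is itself a power law; for general responses it is only an approximation, but one that often tracks structural behavior better than a linear fit.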
Design Optimization Programmable Calculators versus Campus Computers.
ERIC Educational Resources Information Center
Savage, Michael
1982-01-01
A hypothetical design optimization problem and technical information on the three design parameters are presented. Although this nested iteration problem can be solved on a computer (flow diagram provided), this article suggests that several hand held calculators can be used to perform the same design iteration. (SK)
Interaction prediction optimization in multidisciplinary design optimization problems.
Meng, Debiao; Zhang, Xiaoling; Huang, Hong-Zhong; Wang, Zhonglai; Xu, Huanwei
2014-01-01
The distributed strategy of Collaborative Optimization (CO) is suitable for large-scale engineering systems. However, it is hard for CO to converge when the coupling among disciplines is high dimensional. Furthermore, the discipline objectives cannot be considered in each discipline optimization problem. In this paper, one large-scale systems control strategy, the interaction prediction method (IPM), is introduced to enhance CO. IPM was originally used for controlling subsystems and coordinating the production process in large-scale systems. We combine the strategy of IPM with CO and propose the Interaction Prediction Optimization (IPO) method to solve MDO problems. As a hierarchical strategy, IPO has a system level and a subsystem level. The interaction design variables (including shared design variables and linking design variables) are operated at the system level and assigned to the subsystem level as design parameters. Each discipline objective is considered and optimized at the subsystem level simultaneously. The values of design variables are transported between the system level and the subsystem level. The compatibility constraints are replaced with enhanced compatibility constraints to reduce the dimension of design variables in the compatibility constraints. Two examples are presented to show the potential application of IPO for MDO.
Turbomachinery Airfoil Design Optimization Using Differential Evolution
NASA Technical Reports Server (NTRS)
Madavan, Nateri K.; Biegel, Bryan (Technical Monitor)
2002-01-01
An aerodynamic design optimization procedure that is based on an evolutionary algorithm known as Differential Evolution is described. Differential Evolution is a simple, fast, and robust evolutionary strategy that has been proven effective in determining the global optimum for several difficult optimization problems, including highly nonlinear systems with discontinuities and multiple local optima. The method is combined with a Navier-Stokes solver that evaluates the various intermediate designs and provides inputs to the optimization procedure. An efficient constraint handling mechanism is also incorporated. Results are presented for the inverse design of a turbine airfoil from a modern jet engine and compared to earlier methods. The capability of the method to search large design spaces and obtain the optimal airfoils in an automatic fashion is demonstrated. Substantial reductions in the overall computing time requirements are achieved by using the algorithm in conjunction with neural networks.
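The abstract does not give algorithmic details, so the following is a generic, self-contained sketch of the classic DE/rand/1/bin scheme on a toy objective (the sphere function stands in for the expensive Navier-Stokes evaluation):

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9,
                           generations=200, seed=1):
    # Classic DE/rand/1/bin: mutate with a scaled difference of two random
    # members, apply binomial crossover, then greedy one-to-one selection.
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    cost = [f(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)        # guarantees one mutated gene
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == j_rand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)    # clip to the search box
                else:
                    v = pop[i][j]
                trial.append(v)
            fc = f(trial)
            if fc <= cost[i]:                  # greedy selection
                pop[i], cost[i] = trial, fc
    best = min(range(pop_size), key=lambda i: cost[i])
    return pop[best], cost[best]

sphere = lambda x: sum(v * v for v in x)
x_best, f_best = differential_evolution(sphere, [(-5.0, 5.0)] * 3)
print(x_best, f_best)   # f_best is driven close to 0
```

The population size, F, and CR values here are conventional defaults, not values taken from the paper.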
Flow simulation and shape optimization for aircraft design
NASA Astrophysics Data System (ADS)
Kroll, Norbert; Gauger, Nicolas R.; Brezillon, Joel; Dwight, Richard; Fazzolari, Antonio; Vollmer, Daniel; Becker, Klaus; Barnewitz, Holger; Schulz, Volker; Hazra, Subhendu
2007-06-01
Within the framework of the German aerospace research program, the CFD project MEGADESIGN was initiated. The main goal of the project is the development of efficient numerical methods for shape design and optimization. In order to meet the requirements of industrial implementations a co-operative effort has been set up which involves the German aircraft industry, the DLR, several universities and some small enterprises specialized in numerical optimization. This paper outlines the planned activities within MEGADESIGN, the status at the beginning of the project and it presents some early results achieved in the project.
Incorporating water quality responses into the framework of best management practices optimization
NASA Astrophysics Data System (ADS)
Chen, Lei; Wei, Guoyuan; Shen, Zhenyao
2016-10-01
Determining cost-effective configurations of best management practices (BMPs) is a notably complex problem, especially for large-scale watersheds. In this paper, a Markov-based simulator that has been developed to quantify water quality responses is described, and a new framework is also proposed for the optimal design of BMPs by integrating the Markov approach, a watershed model, and an evolutionary algorithm. This new framework was then tested in a typical watershed, the Three Gorges Reservoir Region in China. The results obtained from this application indicate that the integration of water quality responses is vital for the optimal design of BMPs, especially for the downstream areas of the targeted river assessment section. The Markov-based algorithm had a computational advantage over the traditional algorithm and offers the prospect of providing more cost-effective solutions in the medium-cost range. The relative impacts of upstream BMPs were also highlighted in protecting water quality at multiple river assessment sections. This new algorithm can easily be extended to any other watershed to aid decision makers in the optimal design of BMPs at the watershed scale.
Multidisciplinary design optimization using response surface analysis
NASA Technical Reports Server (NTRS)
Unal, Resit
1992-01-01
Aerospace conceptual vehicle design is a complex process which involves multidisciplinary studies of configuration and technology options, considering many parameters at many values. NASA Langley's Vehicle Analysis Branch (VAB) has detailed computerized analysis capabilities in most of the key disciplines required by advanced vehicle design. Given a configuration, the capability exists to quickly determine its performance and life cycle cost. The next step in vehicle design is to determine the settings of design parameters that optimize the performance characteristics. The typical approach to design optimization is experience-based, trial-and-error variation of many parameters one at a time, where possible combinations usually number in the thousands. This approach can lead either to a very long and expensive design process or to premature termination of the design process due to budget and/or schedule pressures. Furthermore, a one-variable-at-a-time approach cannot account for the interactions that occur among parts of systems and among disciplines. As a result, the vehicle design may be far from optimal. Advanced multidisciplinary design optimization (MDO) methods are needed to direct the search in an efficient and intelligent manner in order to drastically reduce the number of candidate designs to be evaluated. The payoffs in terms of enhanced performance and reduced cost are significant. A literature review yields two such advanced MDO methods used in aerospace design optimization: Taguchi methods and response surface methods. Taguchi methods provide a systematic and efficient approach to design optimization for performance and cost. However, the response surface method (RSM) leads to a better, more accurate exploration of the parameter space and to estimated optimum conditions with a small expenditure on experimental data. These two methods are described.
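As a minimal illustration of the response-surface idea (a 1-D toy, far simpler than the multidisciplinary setting above): fit a quadratic surrogate through a few simulation runs and read the estimated optimum off the surrogate instead of searching the expensive analysis directly.

```python
def quadratic_rsm(points):
    """Fit y = a + b*x + c*x**2 exactly through three (x, y) design points
    and return the coefficients and the estimated optimum x* = -b / (2c)."""
    (x1, y1), (x2, y2), (x3, y3) = points
    # divided-difference form of the interpolating quadratic
    c = ((y3 - y1) / (x3 - x1) - (y2 - y1) / (x2 - x1)) / (x3 - x2)
    b = (y2 - y1) / (x2 - x1) - c * (x1 + x2)
    a = y1 - b * x1 - c * x1 ** 2
    return a, b, c, -b / (2 * c)

# noiseless stand-in "simulation" of a response with a known optimum at x = 3
response = lambda x: (x - 3.0) ** 2 + 1.0
a, b, c, x_opt = quadratic_rsm([(1.0, response(1.0)),
                                (2.0, response(2.0)),
                                (5.0, response(5.0))])
print(x_opt)  # 3.0 -- the surrogate recovers the optimum from three runs
```

In practice the response is noisy and multidimensional, so the surrogate is fit by least squares over a designed set of runs rather than interpolated exactly, but the read-the-optimum-off-the-surface step is the same.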
Geometric methods for optimal sensor design.
Belabbas, M-A
2016-01-01
The Kalman-Bucy filter is the optimal estimator of the state of a linear dynamical system from sensor measurements. Because its performance is limited by the sensors to which it is paired, it is natural to seek optimal sensors. The resulting optimization problem is however non-convex. Therefore, many ad hoc methods have been used over the years to design sensors in fields ranging from engineering to biology to economics. We show in this paper how to obtain optimal sensors for the Kalman filter. Precisely, we provide a structural equation that characterizes optimal sensors. We furthermore provide a gradient algorithm and prove its convergence to the optimal sensor. This optimal sensor yields the lowest possible estimation error for measurements with a fixed signal-to-noise ratio. The results of the paper are proved by reducing the optimal sensor problem to an optimization problem on a Grassmannian manifold and proving that the function to be minimized is a Morse function with a unique minimum. The results presented here also apply to the dual problem of optimal actuator design.
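The paper's gradient flow on the Grassmannian is beyond a short sketch, but the underlying objective can be illustrated by brute force: for a small 2-state system, iterate the discrete-time Riccati recursion to a steady state for each candidate unit-norm sensor direction and compare the resulting error traces. The system matrices below are invented for illustration, not taken from the paper.

```python
import math

def steady_state_error(A, Q, c, r, iters=500):
    # Iterate the discrete Riccati recursion for a 2-state system with a
    # single linear sensor c (row vector); return trace of steady-state P.
    P = [[1.0, 0.0], [0.0, 1.0]]
    for _ in range(iters):
        # predict: P = A P A^T + Q
        AP = [[sum(A[i][k] * P[k][j] for k in range(2)) for j in range(2)]
              for i in range(2)]
        P = [[sum(AP[i][k] * A[j][k] for k in range(2)) + Q[i][j]
              for j in range(2)] for i in range(2)]
        # measurement update: P = P - P c^T (c P c^T + r)^{-1} c P
        Pc = [sum(P[i][k] * c[k] for k in range(2)) for i in range(2)]
        s = sum(c[i] * Pc[i] for i in range(2)) + r
        P = [[P[i][j] - Pc[i] * Pc[j] / s for j in range(2)] for i in range(2)]
    return P[0][0] + P[1][1]

A = [[1.0, 0.1], [0.0, 1.0]]                 # near-integrator dynamics (made up)
Q = [[0.01, 0.0], [0.0, 0.01]]
angles = [k * math.pi / 18 for k in range(18)]   # candidate sensor directions
errs = {th: steady_state_error(A, Q, [math.cos(th), math.sin(th)], 0.1)
        for th in angles}
best = min(errs, key=errs.get)
print(best, errs[best])   # the sensor direction with the lowest steady-state error
```

Because every candidate here has unit norm and shares the same noise level r, the sweep compares sensors at a fixed signal-to-noise ratio, which is exactly the setting in which the paper characterizes the optimal sensor.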
Geometric methods for optimal sensor design
Belabbas, M.-A.
2016-01-01
The Kalman–Bucy filter is the optimal estimator of the state of a linear dynamical system from sensor measurements. Because its performance is limited by the sensors to which it is paired, it is natural to seek optimal sensors. The resulting optimization problem is however non-convex. Therefore, many ad hoc methods have been used over the years to design sensors in fields ranging from engineering to biology to economics. We show in this paper how to obtain optimal sensors for the Kalman filter. Precisely, we provide a structural equation that characterizes optimal sensors. We furthermore provide a gradient algorithm and prove its convergence to the optimal sensor. This optimal sensor yields the lowest possible estimation error for measurements with a fixed signal-to-noise ratio. The results of the paper are proved by reducing the optimal sensor problem to an optimization problem on a Grassmannian manifold and proving that the function to be minimized is a Morse function with a unique minimum. The results presented here also apply to the dual problem of optimal actuator design. PMID:26997885
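A minimal numerical illustration of the sensor-comparison problem: for each unit-norm sensor row (a fixed signal-to-noise ratio), the steady-state Kalman-Bucy error covariance follows from the filter Riccati equation, and sensors can be ranked by its trace. The system matrices below are invented for illustration, and the enumeration of candidate directions stands in for the paper's gradient algorithm on the Grassmannian.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Linear system dx = A x dt + dw, measurement y = C x + v,
# with process noise covariance Q and sensor noise covariance R.
A = np.array([[0.0, 1.0], [-2.0, -0.5]])
Q = np.eye(2)
R = np.array([[0.1]])

def estimation_error(c):
    """Trace of the steady-state Kalman-Bucy error covariance for sensor row c."""
    C = (c / np.linalg.norm(c)).reshape(1, 2)   # unit norm: fixed SNR
    # Filter Riccati equation: A P + P A' - P C' R^-1 C P + Q = 0,
    # obtained from solve_continuous_are via the usual duality (A -> A', B -> C').
    P = solve_continuous_are(A.T, C.T, Q, R)
    return np.trace(P)

# Compare a few candidate sensor directions; a smaller trace is a better sensor.
for c in ([1.0, 0.0], [0.0, 1.0], [1.0, 1.0]):
    print(c, estimation_error(np.array(c)))
```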
Virtual Reality Hypermedia Design Frameworks for Science Instruction.
ERIC Educational Resources Information Center
Maule, R. William; Oh, Byron; Check, Rosa
This paper reports on a study that conceptualizes a research framework to aid software design and development for virtual reality (VR) computer applications for instruction in the sciences. The framework provides methodologies for the processing, collection, examination, classification, and presentation of multimedia information within hyperlinked…
Multidisciplinary design optimization using genetic algorithms
NASA Technical Reports Server (NTRS)
Unal, Resit
1994-01-01
Multidisciplinary design optimization (MDO) is an important step in the conceptual design and evaluation of launch vehicles, since it can have a significant impact on performance and life cycle cost. The objective is to search the system design space to determine the values of design variables that optimize the performance characteristic subject to system constraints. Gradient-based optimization routines have been used extensively for aerospace design optimization. However, one limitation of gradient-based optimizers is their need for gradient information; design problems that include discrete variables therefore cannot be studied. Such problems are common in launch vehicle design. For example, the number of engines and the material choices must be integer values or assume only a few discrete values. In this study, genetic algorithms are investigated as an approach to MDO problems involving discrete variables and discontinuous domains. Optimization by genetic algorithms (GA) uses a search procedure that is fundamentally different from gradient-based methods. Genetic algorithms seek to find good solutions in an efficient and timely manner rather than the single best solution. GA are designed to mimic evolutionary selection. A population of candidate designs is evaluated at each iteration, and each individual's probability of reproduction (existence in the next generation) depends on its fitness value (related to the value of the objective function). Progress toward the optimum is achieved by the crossover and mutation operations. GA are attractive since they use only objective function values in the search process, so gradient calculations are avoided; hence, GA are able to deal with discrete variables. Studies report success in the use of GA for aircraft design optimization, trajectory analysis, space structure design, and control system design. In these studies reliable convergence was achieved, but the number of function evaluations was large compared
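The GA mechanics described above (fitness-based survival, crossover, and mutation over discrete variables) can be sketched as follows. The launch-vehicle-flavored objective, the thrust constraint, and all coefficients are invented for illustration; a real MDO study would call disciplinary analysis codes inside the fitness function.

```python
import random

random.seed(1)

# Toy discrete design space: number of engines (integer 1..8) and material.
MATERIALS = {"aluminum": 1.0, "titanium": 1.6, "composite": 1.3}

def fitness(design):
    engines, material = design
    cost = engines * 10.0 * MATERIALS[material]
    thrust = engines * 120.0
    penalty = 1000.0 if thrust < 500.0 else 0.0   # constraint: thrust >= 500
    return -(cost + penalty)                      # GA maximizes fitness

def random_design():
    return (random.randint(1, 8), random.choice(list(MATERIALS)))

def crossover(a, b):
    return (a[0], b[1])          # swap the material "gene" between parents

def mutate(d):
    if random.random() < 0.3:
        d = (random.randint(1, 8), d[1])
    if random.random() < 0.3:
        d = (d[0], random.choice(list(MATERIALS)))
    return d

pop = [random_design() for _ in range(20)]
for _ in range(40):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]           # truncation selection with implicit elitism
    pop = parents + [mutate(crossover(random.choice(parents), random.choice(parents)))
                     for _ in range(10)]

best = max(pop, key=fitness)
# The constrained optimum of this toy problem is 5 engines with aluminum.
print(best, fitness(best))
```

No gradients are computed anywhere: only fitness values drive the search, which is what lets the GA handle the integer and categorical variables directly.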
A Design Framework for Syllabus Generator
ERIC Educational Resources Information Center
Abdous, M'hammed; He, Wu
2008-01-01
A well-designed syllabus provides students with a roadmap for an engaging and successful learning experience, whereas a poorly designed syllabus impedes communication between faculty and students, increases student anxiety and potential complaints, and reduces overall teaching effectiveness. In an effort to facilitate, streamline, and improve…
An expert system for optimal gear design
Lin, K.C.
1988-01-01
By properly developing the mathematical model, numerical optimization can be used to seek the best solution for a given set of geometric constraints. The process of determining the non-geometric design variables is automated using symbolic computation. This gear-design system is built according to the AGMA standards and a survey of gear-design experts. The recommendations of gear designers and the information provided by the AGMA standards are integrated into knowledge bases and databases. By providing fast information retrieval and design guidelines, this expert system greatly streamlines the spur gear design process. The concept of separating the design space into geometric and non-geometric variables can also be applied to the design process for general mechanical elements. The expert-system technique is used to simulate a human designer in determining the non-geometric parameters, and numerical optimization is used to identify the best geometric solution. Combining the expert-system technique with numerical optimization essentially eliminates the deficiencies of both methods and thus provides a better way of modeling the engineering design process.
Theoretical Foundation of Copernicus: A Unified System for Trajectory Design and Optimization
NASA Technical Reports Server (NTRS)
Ocampo, Cesar; Senent, Juan S.; Williams, Jacob
2010-01-01
The fundamental methods are described for the general spacecraft trajectory design and optimization software system called Copernicus. The methods rely on a unified framework that is used to model, design, and optimize spacecraft trajectories that may operate in complex gravitational force fields, use multiple propulsion systems, and involve multiple spacecraft. The trajectory model, with its associated equations of motion and maneuver models, is discussed.
Integrating Hybrid Life Cycle Assessment with Multiobjective Optimization: A Modeling Framework.
Yue, Dajun; Pandya, Shyama; You, Fengqi
2016-02-02
By combining life cycle assessment (LCA) with multiobjective optimization (MOO), the life cycle optimization (LCO) framework holds the promise not only to evaluate the environmental impacts for a given product but also to compare different alternatives and identify both ecologically and economically better decisions. Despite the recent methodological developments in LCA, most LCO applications are developed upon process-based LCA, which results in system boundary truncation and underestimation of the true impact. In this study, we propose a comprehensive LCO framework that seamlessly integrates MOO with integrated hybrid LCA. It quantifies both direct and indirect environmental impacts and incorporates them into the decision making process in addition to the more traditional economic criteria. The proposed LCO framework is demonstrated through an application on sustainable design of a potential bioethanol supply chain in the UK. Results indicate that the proposed hybrid LCO framework identifies a considerable amount of indirect greenhouse gas emissions (up to 58.4%) that are essentially ignored in process-based LCO. Among the biomass feedstock options considered, using woody biomass for bioethanol production would be the most preferable choice from a climate perspective, while the mixed use of wheat and wheat straw as feedstocks would be the most cost-effective one.
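One standard way to compute the kind of cost-versus-emissions trade-off an LCO framework produces is the epsilon-constraint method: minimize one objective while capping the other, then sweep the cap to trace the Pareto frontier. The sketch below uses invented linear cost and GHG coefficients for a woody-biomass-versus-wheat feedstock mix; it is not the paper's UK supply-chain model.

```python
import numpy as np

# Illustrative two-objective feedstock choice: fraction x of woody biomass
# (low emissions, higher cost) vs. wheat/straw (cheaper, higher emissions).
cost = lambda x: 80.0 * x + 60.0 * (1 - x)     # $ per unit ethanol (invented)
ghg = lambda x: 10.0 * x + 40.0 * (1 - x)      # kg CO2e per unit (invented)

# Epsilon-constraint method: minimize cost subject to ghg <= eps,
# sweeping eps to trace out the Pareto frontier.
xs = np.linspace(0.0, 1.0, 101)
pareto = []
for eps in np.linspace(10.0, 40.0, 7):
    feasible = xs[ghg(xs) <= eps]
    if feasible.size:
        x_best = feasible[np.argmin(cost(feasible))]
        pareto.append((eps, float(x_best), float(cost(x_best)), float(ghg(x_best))))

for eps, x, c, g in pareto:
    print(f"GHG cap {eps:5.1f}: mix x = {x:.2f}, cost = {c:.1f}, GHG = {g:.1f}")
```

Each row is one Pareto-optimal compromise: tightening the emissions cap forces more woody biomass into the mix and raises cost, which is the shape of trade-off the LCO framework reports.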
Toward a More Flexible Web-Based Framework for Multidisciplinary Design
NASA Technical Reports Server (NTRS)
Rogers, J. L.; Salas, A. O.
1999-01-01
In today's competitive environment, both industry and government agencies are under pressure to reduce the time and cost of multidisciplinary design projects. New tools have been introduced to assist in this process by facilitating the integration of and communication among diverse disciplinary codes. One such tool, a framework for multidisciplinary design, is defined as a hardware-software architecture that enables integration, execution, and communication among diverse disciplinary processes. An examination of current frameworks reveals weaknesses in various areas, such as sequencing, monitoring, controlling, and displaying the design process. The objective of this research is to explore how Web technology can improve these areas of weakness and lead toward a more flexible framework. This article describes a Web-based system that optimizes and controls the execution sequence of design processes in addition to monitoring the project status and displaying the design results.
Optimal Experimental Design for Model Discrimination
Myung, Jay I.; Pitt, Mark A.
2009-01-01
Models of a psychological process can be difficult to discriminate experimentally because it is not easy to determine the values of the critical design variables (e.g., presentation schedule, stimulus structure) that will be most informative in differentiating them. Recent developments in sampling-based search methods in statistics make it possible to determine these values, and thereby identify an optimal experimental design. After describing the method, it is demonstrated in two content areas in cognitive psychology in which models are highly competitive: retention (i.e., forgetting) and categorization. The optimal design is compared with the quality of designs used in the literature. The findings demonstrate that design optimization has the potential to increase the informativeness of the experimental method. PMID:19618983
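A crude sketch of design optimization for model discrimination in the retention (forgetting) setting: pick the set of test delays that maximizes the predicted disagreement between two candidate models. The two retention models, their parameters, and the deterministic discrepancy utility are all invented for illustration; the sampling-based methods the paper describes average over parameter priors and simulated data instead.

```python
import numpy as np
from itertools import combinations

# Two competing retention (forgetting) models with illustrative parameters.
power = lambda t, a=0.9, b=0.4: a * (np.asarray(t, dtype=float) + 1) ** (-b)
expo = lambda t, a=0.9, b=0.1: a * np.exp(-b * np.asarray(t, dtype=float))

def utility(design):
    """Summed squared disagreement between the models at the design's delays."""
    d = np.asarray(design, dtype=float)
    return float(np.sum((power(d) - expo(d)) ** 2))

# Exhaustively search 3-point designs over a grid of candidate test delays.
delays = np.arange(0, 41, 5)
best = max(combinations(delays, 3), key=utility)
print(best, utility(best))
```

The search favors delays where the power and exponential curves diverge most, i.e., the retention intervals that would be most diagnostic in an actual experiment.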
Optimal design of reverse osmosis module networks
Maskan, F.; Wiley, D.E.; Johnston, L.P.M.; Clements, D.J.
2000-05-01
The structure of individual reverse osmosis modules, the configuration of the module network, and the operating conditions were optimized for seawater and brackish water desalination. The system model included simple mathematical equations to predict the performance of the reverse osmosis modules. The optimization problem was formulated as a constrained multivariable nonlinear optimization. The objective function was the annual profit for the system, consisting of the profit obtained from the permeate, the capital cost of the process units, and the operating costs associated with energy consumption and maintenance. Several dual-stage reverse osmosis systems were optimized and compared. It was found that the optimal network designs are the ones that produce the most permeate. It may be possible to achieve economic improvements by refining current membrane module designs and their operating pressures.
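The annual-profit formulation can be sketched with an invented single-module surrogate: decision variables are operating pressure and membrane area, and profit is permeate revenue minus amortized capital and pumping cost. The flux law, prices, and bounds below are illustrative stand-ins, not the authors' system model.

```python
import numpy as np
from scipy.optimize import minimize

def permeate(x):
    P, A = x                                  # pressure (bar), membrane area (m^2)
    return A * 0.05 * max(P - 30.0, 0.0)      # flux ~ net driving pressure (invented)

def annual_profit(x):
    P, A = x
    revenue = 2.0 * permeate(x)               # $ per m^3 of permeate (invented)
    capital = 2.0 * A                         # amortized module cost (invented)
    energy = 0.008 * P * permeate(x)          # pumping cost grows with pressure
    return revenue - capital - energy

# Constrained nonlinear optimization: maximize profit within operating bounds.
res = minimize(lambda x: -annual_profit(x), x0=[50.0, 100.0],
               bounds=[(35.0, 80.0), (10.0, 500.0)])
print(res.x, -res.fun)
```

In this toy model the optimizer pushes both pressure and area to their upper bounds, i.e., the most-permeate design wins, mirroring the abstract's finding that optimal networks are those producing the most permeate.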
Optimal Design of Tests with Dichotomous and Polytomous Items.
ERIC Educational Resources Information Center
Berger, Martijn P. F.
1998-01-01
Reviews some results on optimal design of tests with items of dichotomous and polytomous response formats and offers rules and guidelines for optimal test assembly. Discusses the problem of optimal test design for two optimality criteria. (Author/SLD)
A design framework for exploratory geovisualization in epidemiology
Robinson, Anthony C.
2009-01-01
This paper presents a design framework for geographic visualization based on iterative evaluations of a toolkit designed to support cancer epidemiology. The Exploratory Spatio-Temporal Analysis Toolkit (ESTAT), is intended to support visual exploration through multivariate health data. Its purpose is to provide epidemiologists with the ability to generate new hypotheses or further refine those they may already have. Through an iterative user-centered design process, ESTAT has been evaluated by epidemiologists at the National Cancer Institute (NCI). Results of these evaluations are discussed, and a design framework based on evaluation evidence is presented. The framework provides specific recommendations and considerations for the design and development of a geovisualization toolkit for epidemiology. Its basic structure provides a model for future design and evaluation efforts in information visualization. PMID:20390052
NASA Astrophysics Data System (ADS)
Khalid, Adeel Syed
Rotorcraft evolution has lagged behind that of fixed-wing aircraft. One reason for this gap is the absence of a formal methodology for accomplishing a complete conceptual and preliminary design. Traditional rotorcraft methodologies are not only time consuming and expensive but also yield sub-optimal designs. Rotorcraft design is an excellent example of a multidisciplinary complex environment where several interdependent disciplines are involved. A formal framework is developed and implemented in this research for preliminary rotorcraft design using the IPPD methodology. The design methodology consists of the product and process development cycles. In the product development loop, all the technical aspects of design are considered, including vehicle engineering, dynamic analysis, stability and control, aerodynamic performance, propulsion, transmission design, weight and balance, noise analysis, and economic analysis. The design loop starts with a detailed analysis of requirements. A baseline is selected and upgrade targets are identified depending on the mission requirements. An Overall Evaluation Criterion (OEC) is developed that is used to measure the goodness of the design or to compare the design with competitors. The requirements analysis and baseline upgrade targets lead to the initial sizing and performance estimation of the new design. The digital information is then passed to disciplinary experts, where the detailed disciplinary analyses are performed. Information is transferred from one discipline to another as the design loop is iterated. To coordinate all the disciplines in the product development cycle, Multidisciplinary Design Optimization (MDO) techniques, e.g., All At Once (AAO) and Collaborative Optimization (CO), are suggested. The methodology is implemented on a Light Turbine Training Helicopter (LTTH) design. Detailed disciplinary analyses are integrated through a common platform for efficient and centralized transfer of design
Torsional ultrasonic transducer computational design optimization.
Melchor, J; Rus, G
2014-09-01
A torsional piezoelectric ultrasonic sensor design is proposed in this paper and computationally tested and optimized to measure the shear stiffness properties of soft tissue. These are correlated with a number of pathologies, such as tumors and hepatic lesions, because whereas compressibility is predominantly governed by the fluid phase of the tissue, shear stiffness depends on the stroma micro-architecture, which is directly affected by those pathologies. However, diagnostic tools to quantify them are currently not well developed. The first contribution is a new typology of design adapted to quasifluids. A second contribution is the procedure for design optimization, for which an analytical estimate of the Robust Probability Of Detection (RPOD) is presented for use as the optimality criterion. The RPOD is formulated probabilistically to maximize the probability of detecting the smallest possible pathology while minimizing the effect of noise. The resulting optimal transducer has a resonance frequency of 28 kHz.
A Framework for the Design of Service Systems
NASA Astrophysics Data System (ADS)
Tan, Yao-Hua; Hofman, Wout; Gordijn, Jaap; Hulstijn, Joris
We propose a framework for the design and implementation of service systems, especially to design controls for long-term sustainable value co-creation. The framework is based on the software support tool e3-control. To illustrate the framework we use a large-scale case study, the Beer Living Lab, for simplification of customs procedures in international trade. The BeerLL shows how value co-creation can be achieved by reduction of administrative burden in international beer export due to electronic customs. Participants in the BeerLL are Heineken, IBM and Dutch Tax & Customs.
Vehicle systems design optimization study
NASA Technical Reports Server (NTRS)
Gilmour, J. L.
1980-01-01
The optimum vehicle configuration and component locations are determined for an electric drive vehicle based on the basic structure of a current production subcompact vehicle. Optimizing an electric vehicle layout requires a weight distribution in the range of 53/47 to 62/38 in order to assure dynamic handling characteristics comparable to current internal combustion engine vehicles. The necessary modification of the base vehicle can be accomplished without major changes to the structure or running gear. As long as batteries are as heavy and require as much space as they currently do, they must be divided into two packages, one at the front under the hood and a second at the rear under the cargo area, in order to achieve the desired weight distribution. The weight distribution criterion requires the placement of batteries at the front of the vehicle even when the central tunnel is used for some of the batteries. The optimum layout has a front motor and front wheel drive. This configuration provides the optimum vehicle dynamic handling characteristics and the maximum passenger and cargo space for a given vehicle size.
Aircraft design optimization with multidisciplinary performance criteria
NASA Technical Reports Server (NTRS)
Morris, Stephen; Kroo, Ilan
1989-01-01
The method described here for aircraft design optimization with dynamic response considerations provides an inexpensive means of integrating dynamics into aircraft preliminary design. By defining a dynamic performance index that can be added to a conventional objective function, a designer can investigate the trade-off between performance and handling (as measured by the vehicle's unforced response). The procedure is formulated to permit the use of control system gains as design variables, but does not require full-state feedback. The examples discussed here show how such an approach can lead to significant improvements in the design as compared with the more common sequential design of system and control law.
A design optimization methodology for Li+ batteries
NASA Astrophysics Data System (ADS)
Golmon, Stephanie; Maute, Kurt; Dunn, Martin L.
2014-05-01
Design optimization of functionally graded battery electrodes, in which the electrode porosities and particle radii are numerically optimized, is shown to improve the usable energy capacity of Li batteries predicted by computational simulations. A multi-scale battery model that accounts for nonlinear transient transport processes, electrochemical reactions, and mechanical deformations is used to predict the usable energy storage capacity of the battery over a range of discharge rates. A multi-objective formulation of the design problem is introduced to maximize the usable capacity over a range of discharge rates while limiting the mechanical stresses. The optimization problem is solved via gradient-based optimization. A LiMn2O4 cathode is simulated with a PEO-LiCF3SO3 electrolyte and both a Li foil (half cell) and a LiC6 anode. Studies were performed on both half and full cell configurations, resulting in distinctly different optimal electrode designs. The numerical results show that the highest-rate discharge drives the simulations and that the optimal designs are dominated by Li+ transport rates. The results also suggest that spatially varying electrode porosities and active particle sizes provide an efficient approach to improving the power-to-energy density of Li+ batteries. For the half cell configuration, the optimal design improves the discharge capacity by 29%, while for the full cell the discharge capacity was improved 61% relative to an initial design with a uniform electrode structure. Most of the improvement in capacity was due to the spatially varying porosity, with up to 5% of the gains attributed to the particle radii design variables.
Moore, Gregory L.; Maranas, Costas D.
2002-01-01
We present a systematic computational framework, eCodonOpt, for designing parental DNA sequences for directed evolution experiments through codon usage optimization. Given a set of homologous parental proteins to be recombined at the DNA level, the optimal DNA sequences encoding these proteins are sought for a given diversity objective. We find that the free energy of annealing between the recombining DNA sequences is a much better descriptor of the extent of crossover formation than sequence identity. Three different diversity targets are investigated for the DNA shuffling protocol to showcase the utility of the eCodonOpt framework: (i) maximizing the average number of crossovers per recombined sequence; (ii) minimizing bias in family DNA shuffling so that each parental sequence pair contributes a similar number of crossovers to the library; and (iii) maximizing the relative frequency of crossovers in specific structural regions. Each of these design challenges is formulated as a constrained optimization problem that utilizes 0–1 binary variables as on/off switches to model the selection of different codon choices for each residue position. Computational results suggest that many-fold improvements in crossover frequency, location, and specificity are possible, providing valuable insights for the engineering of directed evolution protocols. PMID:12034828
Multiobjective optimization techniques for structural design
NASA Technical Reports Server (NTRS)
Rao, S. S.
1984-01-01
The multiobjective programming techniques are important in the design of complex structural systems whose quality depends generally on a number of different and often conflicting objective functions which cannot be combined into a single design objective. The applicability of multiobjective optimization techniques is studied with reference to simple design problems. Specifically, the parameter optimization of a cantilever beam with a tip mass and a three-degree-of-freedom vibration isolation system, and the trajectory optimization of a cantilever beam, are considered. The solutions of these multicriteria design problems are attempted using the global criterion, utility function, game theory, goal programming, goal attainment, bounded objective function, and lexicographic methods. It has been observed that the game theory approach required the maximum computational effort, but it yielded better optimum solutions, with a proper balance of the various objective functions, in all the cases.
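The global criterion method mentioned above can be illustrated on a small two-objective problem: minimize the summed relative deviation of each objective from its individual optimum. The beam-flavored objectives, coefficients, and bounds below are invented for illustration, not the paper's actual parameter optimization problems.

```python
from scipy.optimize import minimize_scalar

# Toy two-objective beam sizing: for diameter d, mass grows as d^2 while
# tip deflection shrinks as 1/d^4 (illustrative proportionality only).
f1 = lambda d: d ** 2          # mass-like objective
f2 = lambda d: 1.0 / d ** 4    # deflection-like objective

# Individual optima over d in [0.5, 2.0]:
f1_star = f1(0.5)              # mass is smallest at the thinnest beam
f2_star = f2(2.0)              # deflection is smallest at the thickest beam

# Global criterion: summed squared relative deviation from each ideal value.
crit = lambda d: (((f1(d) - f1_star) / f1_star) ** 2
                  + ((f2(d) - f2_star) / f2_star) ** 2)

res = minimize_scalar(crit, bounds=(0.5, 2.0), method="bounded")
print(res.x)  # a compromise diameter strictly between the two individual optima
```

Neither objective is at its individual optimum at the solution; the method trades them off through a single scalar criterion, which is the common structure behind the utility function and goal programming variants as well.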
Achieving Equivalence: A Transnational Curriculum Design Framework
ERIC Educational Resources Information Center
Clarke, Angela; Johal, Terry; Sharp, Kristen; Quinn, Shayna
2016-01-01
Transnational education is now essential to university international development strategies. As a result, tertiary educators are expected to engage with the complexities of diverse cultural contexts, different delivery modes, and mixed student cohorts to design quality learning experiences for all. To support this transition we developed a…
Towards a Framework for Professional Curriculum Design
ERIC Educational Resources Information Center
Winch, Christopher
2015-01-01
Recent reviews of vocational qualifications in England have noted problems with their restricted nature. However, the underlying issue of how to conceptualise professional agency in curriculum design has not been properly addressed, either by the Richard or the Whitehead reviews. Drawing on comparative work in England and Europe it is argued that…
Optimizing Health Care Coalitions: Conceptual Frameworks and a Research Agenda.
Hupert, Nathaniel; Biala, Karen; Holland, Tara; Baehr, Avi; Hasan, Aisha; Harvey, Melissa
2015-12-01
The US health care system has maintained an objective of preparedness for natural or manmade catastrophic events as part of its larger charge to deliver health services for the American population. In 2002, support for hospital-based preparedness activities was bolstered by the creation of the National Bioterrorism Hospital Preparedness Program, now called the Hospital Preparedness Program, in the US Department of Health and Human Services. Since 2012, this program has promoted linking health care facilities into health care coalitions that build key preparedness and emergency response capabilities. Recognizing that well-functioning health care coalitions can have a positive impact on the health outcomes of the populations they serve, this article informs efforts to optimize health care coalition activity. We first review the landscape of health care coalitions in the United States. Then, using principles from supply chain management and high-reliability organization theory, we present 2 frameworks extending beyond the Office of the Assistant Secretary for Preparedness and Response's current guidance in a way that may help health care coalition leaders gain conceptual insight into how different enterprises achieve similar ends relevant to emergency response. We conclude with a proposed research agenda to advance understanding of how coalitions can contribute to the day-to-day functioning of health care systems and disaster preparedness.
Response Surface Model Building and Multidisciplinary Optimization Using D-Optimal Designs
NASA Technical Reports Server (NTRS)
Unal, Resit; Lepsch, Roger A.; McMillin, Mark L.
1998-01-01
This paper discusses response surface methods for approximation model building and multidisciplinary design optimization. The response surface methods discussed are central composite designs, Bayesian methods, and D-optimal designs. An over-determined D-optimal design is applied to a configuration design and optimization study of a wing-body launch vehicle. Results suggest that over-determined D-optimal designs may provide an efficient approach for approximation model building and for multidisciplinary design optimization.
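A minimal illustration of the D-optimality criterion: among candidate design points, choose the run set that maximizes det(X'X) for the chosen model basis. The two-factor candidate grid and quadratic model below are illustrative, and the exhaustive search stands in for the exchange algorithms used in practice on the launch-vehicle study's larger spaces.

```python
import numpy as np
from itertools import combinations

# Candidate design points for a 2-factor quadratic model (coded -1..1 levels).
levels = (-1.0, 0.0, 1.0)
candidates = [(a, b) for a in levels for b in levels]

def model_row(x):
    a, b = x
    return [1.0, a, b, a * a, b * b, a * b]   # full quadratic basis, p = 6 terms

def d_criterion(points):
    """D-optimality score: determinant of the information matrix X'X."""
    X = np.array([model_row(p) for p in points])
    return float(np.linalg.det(X.T @ X))

# Over-determined design: pick n = 8 runs for the 6-term model,
# maximizing det(X'X) over all candidate subsets (tiny here, so exhaustive).
n = 8
best = max(combinations(candidates, n), key=d_criterion)
print(sorted(best), d_criterion(best))
```

Maximizing det(X'X) minimizes the volume of the confidence ellipsoid of the fitted coefficients, which is why D-optimal run sets give efficient approximation models from few analyses.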
Optimization-based controller design for rotorcraft
NASA Technical Reports Server (NTRS)
Tsing, N.-K.; Fan, M. K. H.; Barlow, J.; Tits, A. L.; Tischler, M. B.
1993-01-01
An optimization-based methodology for linear control system design is outlined by considering the design of a controller for a UH-60 rotorcraft in hover. A wide range of design specifications is taken into account: internal stability, decoupling between longitudinal and lateral motions, handling qualities, and rejection of windgusts. These specifications are investigated while taking into account physical limitations in the swashplate displacements and rates of displacement. The methodology crucially relies on user-machine interaction for tradeoff exploration.
Global Design Optimization for Fluid Machinery Applications
NASA Technical Reports Server (NTRS)
Shyy, Wei; Papila, Nilay; Tucker, Kevin; Vaidyanathan, Raj; Griffin, Lisa
2000-01-01
Recent experiences in utilizing global optimization methodology, based on polynomial and neural network techniques, for fluid machinery design are summarized. Global optimization methods can utilize information collected from various sources and by different tools. These methods offer multi-criterion optimization, handle multiple design points and trade-offs via insight into the entire design space, can easily perform tasks in parallel, and are often effective in filtering the noise intrinsic to numerical and experimental data. Another advantage is that these methods do not need to calculate the sensitivity of each design variable locally. However, a successful application of the global optimization method needs to address issues related to data requirements, which grow with the number of design variables, and methods for predicting model performance. Examples selected from rocket propulsion components, including a supersonic turbine, an injector element, and a turbulent flow diffuser, are used to illustrate the usefulness of the global optimization method.
Design of a Model Execution Framework: Repetitive Object-Oriented Simulation Environment (ROSE)
NASA Technical Reports Server (NTRS)
Gray, Justin S.; Briggs, Jeffery L.
2008-01-01
The ROSE framework was designed to facilitate complex system analyses. It completely divorces the model execution process from the model itself. By doing so, ROSE frees the modeler to develop a library of standard modeling processes, such as design of experiments, optimizers, parameter studies, and sensitivity studies, which can then be applied to any of their available models. The ROSE framework accomplishes this by means of a well-defined API and object structure. Both the API and the object structure are presented here with enough detail to implement ROSE in any object-oriented language or modeling tool.
Regression analysis as a design optimization tool
NASA Technical Reports Server (NTRS)
Perley, R.
1984-01-01
The optimization concepts are described in relation to an overall design process, as opposed to a detailed part-design process where the requirements are firmly stated, the optimization criteria are well established, and a design is known to be feasible. The overall design process starts with the stated requirements. Some of the design criteria are derived directly from the requirements, but others are affected by the design concept. It is these design criteria that define the performance index, or objective function, that is to be minimized within some constraints. In general, there will be multiple objectives, some mutually exclusive, with no clear statement of their relative importance. The optimization loop that is given adjusts the design variables and analyzes the resulting design, in an iterative fashion, until the objective function is minimized within the constraints. This provides a solution, but it is only the beginning. In effect, the problem definition evolves as information is derived from the results. It becomes a learning process as we determine what the physics of the system can deliver in relation to the desirable system characteristics. As with any learning process, an interactive capability is a real attribute for investigating the many alternatives that will be suggested as learning progresses.
Lens design: optimization with Global Explorer
NASA Astrophysics Data System (ADS)
Isshiki, Masaki
2013-02-01
The damped least squares (DLS) optimization method was essentially complete by the late 1960s and has since dominated local optimization. Various efforts were later made to achieve global optimization. These appeared after 1990, and the Global Explorer (GE), invented by the author, was one of them: it finds plural solutions, each a local minimum of the merit function. The robustness of the designed lens is an important factor as well as its performance; both requirements are balanced in the process of optimization with GE2 (the second version of GE). An idea is also proposed to modify GE2 for aspherical lens systems. A design example is shown.
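The DLS iteration underlying both local optimization and GE's local phases can be sketched as a damped Gauss-Newton loop: each step solves (J'J + mu*I) dx = -J'r, with the damping mu adapted by step acceptance. The residual system below is invented for illustration; a real lens optimizer would use aberration residuals over many more variables.

```python
import numpy as np

# Toy "merit function": sum of squared residuals r(x) of two targets.
def residuals(x):
    return np.array([x[0] ** 2 + x[1] - 1.0, x[0] - x[1] ** 2])

def jacobian(x):
    return np.array([[2 * x[0], 1.0], [1.0, -2 * x[1]]])

x = np.array([2.0, 2.0])
mu = 1.0                                       # damping factor
for _ in range(50):
    r, J = residuals(x), jacobian(x)
    # Damped least squares (Levenberg) step: (J'J + mu I) dx = -J'r.
    dx = np.linalg.solve(J.T @ J + mu * np.eye(2), -J.T @ r)
    if np.sum(residuals(x + dx) ** 2) < np.sum(r ** 2):
        x, mu = x + dx, mu * 0.5               # accept step, relax damping
    else:
        mu *= 2.0                              # reject step, increase damping
print(x, np.sum(residuals(x) ** 2))
```

Large mu makes the step behave like small gradient descent (robust, slow); small mu approaches the pure Gauss-Newton step (fast near a minimum), which is the balance DLS automates.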
NASA Technical Reports Server (NTRS)
Mayer, Richard J.; Blinn, Thomas M.; Mayer, Paula S. D.; Ackley, Keith A.; Crump, Wes; Sanders, Les
1991-01-01
The design of the Framework Processor (FP) component of the Framework Programmable Software Development Platform (FFP) is described. The FFP is a project aimed at combining effective tool and data integration mechanisms with a model of the software development process in an intelligent integrated software development environment. Guided by the model, this Framework Processor will take advantage of an integrated operating environment to provide automated support for the management and control of the software development process so that costly mistakes during the development phase can be eliminated.
Collective Framework and Performance Optimizations to Open MPI for Cray XT Platforms
Ladd, Joshua S; Gorentla Venkata, Manjunath; Shamis, Pavel; Graham, Richard L
2011-01-01
The performance and scalability of collective operations play a key role in the performance and scalability of many scientific applications. Within the Open MPI code base we have developed a general-purpose hierarchical collective operations framework called Cheetah, and applied it at large scale on the Oak Ridge Leadership Computing Facility's (OLCF) Jaguar platform, obtaining better performance and scalability than the native MPI implementation. This paper discusses Cheetah's design and implementation, and optimizations to the framework for Cray XT5 platforms. Our results show that Cheetah's Broadcast and Barrier perform better than the native MPI implementation. For medium data, Cheetah's Broadcast outperforms the native MPI implementation by 93% at a problem size of 49,152 processes. For small and large data, it outperforms the native MPI implementation by 10% and 9%, respectively, at a problem size of 24,576 processes. Cheetah's Barrier performs 10% better than the native MPI implementation at a problem size of 12,288 processes.
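Cheetah's actual implementation is C inside Open MPI; purely as an illustration of the hierarchical idea (function and parameter names are ours), a two-level broadcast that sends one message per node over the network and then fans out locally can be simulated as:

```python
def hierarchical_bcast(ranks, ranks_per_node, root, value):
    """Simulate a two-level broadcast: root -> node leaders -> node-local ranks.
    Hierarchical collectives reduce cross-node traffic by sending one message
    per node over the network, then fanning out over the faster intra-node
    channel (e.g. shared memory)."""
    received = {root: value}
    leaders = [r for r in ranks if r % ranks_per_node == 0]
    root_leader = (root // ranks_per_node) * ranks_per_node
    received[root_leader] = value            # root hands off to its node leader
    for leader in leaders:                   # inter-node stage (network)
        received[leader] = value
    for r in ranks:                          # intra-node stage (local fan-out)
        received[r] = received[(r // ranks_per_node) * ranks_per_node]
    return received

out = hierarchical_bcast(list(range(8)), ranks_per_node=4, root=5, value=42)
```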
Learning Experience as Transaction: A Framework for Instructional Design
ERIC Educational Resources Information Center
Parrish, Patrick E.; Wilson, Brent G.; Dunlap, Joanna C.
2011-01-01
This article presents a framework for understanding learning experience as an object for instructional design--as an object for design as well as research and understanding. Compared to traditional behavioral objectives or discrete cognitive skills, the object of experience is more holistic, requiring simultaneous attention to cognition, behavior,…
A concept ideation framework for medical device design.
Hagedorn, Thomas J; Grosse, Ian R; Krishnamurty, Sundar
2015-06-01
Medical device design is a challenging process, often requiring collaboration between medical and engineering domain experts. This collaboration can be best institutionalized through systematic knowledge transfer between the two domains coupled with effective knowledge management throughout the design innovation process. Toward this goal, we present the development of a semantic framework for medical device design that unifies a large medical ontology with detailed engineering functional models along with the repository of design innovation information contained in the US Patent Database. As part of our development, existing medical, engineering, and patent document ontologies were modified and interlinked to create a comprehensive medical device innovation and design tool with appropriate properties and semantic relations to facilitate knowledge capture, enrich existing knowledge, and enable effective knowledge reuse for different scenarios. The result is a Concept Ideation Framework for Medical Device Design (CIFMeDD). Key features of the resulting framework include function-based searching and automated inter-domain reasoning to uniquely enable identification of functionally similar procedures, tools, and inventions from multiple domains based on simple semantic searches. The significance and usefulness of the resulting framework for aiding in conceptual design and innovation in the medical realm are explored via two case studies examining medical device design problems.
A Comprehensive Learning Event Design Using a Communication Framework
ERIC Educational Resources Information Center
Bower, Robert L.
1975-01-01
A learning event design for accountability uses a communications framework. The example given is a slide presentation on the invasion of Cuba during the Spanish-American War. Design components include introduction, objectives, media, involvement plans, motivation, bibliography, recapitulation, involvement sheets, evaluation, stimulus-response…
Design of optimized piezoelectric HDD-sliders
NASA Astrophysics Data System (ADS)
Nakasone, Paulo H.; Yoo, Jeonghoon; Silva, Emilio C. N.
2010-04-01
As storage data density in hard-disk drives (HDDs) increases at constant or shrinking form factors, precision positioning of HDD heads becomes a more relevant issue in ensuring that enormous amounts of data are properly written and read. Since the traditional single-stage voice coil motor (VCM) cannot satisfy the positioning requirements of high-density tracks-per-inch (TPI) HDDs, dual-stage servo systems have been proposed to overcome this problem, using VCMs to coarsely move the HDD head while piezoelectric actuators provide fine and fast positioning. Thus, the aim of this work is to apply the topology optimization method (TOM) to design novel piezoelectric HDD heads, by finding the optimal placement of base plate and piezoelectric material for high-precision positioning. The topology optimization method is a structural optimization technique that combines the finite element method (FEM) with optimization algorithms. The laminated finite element employs the MITC (mixed interpolation of tensorial components) formulation to provide accurate and reliable results. The topology optimization uses a rational approximation of material properties to vary the material properties between 'void' and 'filled' portions. The design problem consists of generating optimal structures that provide maximal displacements, appropriate structural stiffness, and avoidance of resonance phenomena. These requirements are achieved by applying formulations to maximize displacements, minimize structural compliance, and maximize resonance frequencies. This paper presents the implementation of the algorithms and shows results confirming the feasibility of this approach.
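The "rational approximation of material properties" mentioned above is, in the topology optimization literature, usually the RAMP interpolation scheme; the following minimal sketch assumes that reading (parameter values are illustrative, not taken from the paper):

```python
def ramp_modulus(density, e_solid, q=8.0, e_min=1e-9):
    """RAMP interpolation: E(rho) = E_min + rho/(1 + q*(1 - rho)) * (E_solid - E_min).
    Varies stiffness smoothly between 'void' (rho = 0) and 'filled' (rho = 1);
    the rational form penalizes intermediate densities so that optimized
    layouts tend toward clear 0/1 material distributions."""
    return e_min + density / (1.0 + q * (1.0 - density)) * (e_solid - e_min)
```

With q = 8, a half-dense element gets far less than half the solid stiffness, which is exactly the penalization that drives designs toward discrete void/filled layouts.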
Optimization and surgical design for applications in pediatric cardiology
NASA Astrophysics Data System (ADS)
Marsden, Alison; Bernstein, Adam; Taylor, Charles; Feinstein, Jeffrey
2007-11-01
The coupling of shape optimization to cardiovascular blood flow simulations has potential to improve the design of current surgeries and to eventually allow for optimization of surgical designs for individual patients. This is particularly true in pediatric cardiology, where geometries vary dramatically between patients, and unusual geometries can lead to unfavorable hemodynamic conditions. Interfacing shape optimization to three-dimensional, time-dependent fluid mechanics problems is particularly challenging because of the large computational cost and the difficulty in computing objective function gradients. In this work a derivative-free optimization algorithm is coupled to a three-dimensional Navier-Stokes solver that has been tailored for cardiovascular applications. The optimization code employs mesh adaptive direct search in conjunction with a Kriging surrogate. This framework is successfully demonstrated on several geometries representative of cardiovascular surgical applications. We will discuss issues of cost function choice for surgical applications, including energy loss and wall shear stress distribution. In particular, we will discuss the creation of new designs for the Fontan procedure, a surgery done in pediatric cardiology to treat single ventricle heart defects.
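Mesh adaptive direct search involves careful mesh and poll-set constructions; the sketch below (our own toy objective and names) shows only the derivative-free, mesh-refining behavior that makes this class of methods usable with an expensive flow solver whose gradients are unavailable:

```python
def pattern_search(f, x0, step=1.0, tol=1e-6):
    """Minimal derivative-free direct search, a far-simplified relative of
    mesh adaptive direct search: poll +/- step along each coordinate and
    move on improvement; when no poll improves, refine the mesh by halving
    the step. No objective-function gradients are required."""
    x = list(x0)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                trial = x[:]
                trial[i] += d
                if f(trial) < f(x):
                    x, improved = trial, True
        if not improved:
            step /= 2.0          # mesh refinement
    return x

# Toy stand-in for an expensive simulation-based cost function.
x_min = pattern_search(lambda v: (v[0] - 1.0)**2 + (v[1] + 2.0)**2, [0.0, 0.0])
```

In a surrogate-assisted framework, a Kriging model would rank the poll points so the expensive solver is called only on the most promising candidates.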
Design optimization for cost and quality: The robust design approach
NASA Technical Reports Server (NTRS)
Unal, Resit
1990-01-01
Designing reliable, low cost, and operable space systems has become the key to future space operations. Designing high quality space systems at low cost is an economic and technological challenge to the designer. A systematic and efficient way to meet this challenge is a new method of design optimization for performance, quality, and cost, called Robust Design. Robust Design is an approach for design optimization. It consists of: making system performance insensitive to material and subsystem variation, thus allowing the use of less costly materials and components; making designs less sensitive to the variations in the operating environment, thus improving reliability and reducing operating costs; and using a new structured development process so that engineering time is used most productively. The objective in Robust Design is to select the best combination of controllable design parameters so that the system is most robust to uncontrollable noise factors. The robust design methodology uses a mathematical tool called an orthogonal array, from design of experiments theory, to study a large number of decision variables with a significantly small number of experiments. Robust design also uses a statistical measure of performance, called a signal-to-noise ratio, from electrical control theory, to evaluate the level of performance and the effect of noise factors. The purpose is to investigate the Robust Design methodology for improving quality and cost, demonstrate its application by the use of an example, and suggest its use as an integral part of space system design process.
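As a concrete illustration of the signal-to-noise measure mentioned above, here is Taguchi's "larger-is-better" form applied to two hypothetical designs (the observation values are invented for illustration):

```python
import math

def sn_larger_is_better(ys):
    """Taguchi 'larger-is-better' signal-to-noise ratio in dB:
    SN = -10 * log10( (1/n) * sum(1 / y_i^2) ).
    Higher SN means performance is simultaneously high and insensitive
    to uncontrollable noise factors."""
    n = len(ys)
    return -10.0 * math.log10(sum(1.0 / y**2 for y in ys) / n)

# Two candidate designs observed under the same noise conditions:
robust  = [10.0, 10.1, 9.9]    # high mean, low spread
fragile = [14.0, 2.0, 14.0]    # higher peak, huge spread
```

The robust design scores the higher SN even though the fragile one has the larger best-case value, which is the sense in which Robust Design selects parameters that are "most robust to uncontrollable noise factors."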
The optimal design of standard gearsets
NASA Technical Reports Server (NTRS)
Savage, M.; Coy, J. J.; Townsend, D. P.
1983-01-01
A design procedure for sizing standard involute spur gearsets is presented. The procedure is applied to find the optimal design for two examples - an external gear mesh with a ratio of 5:1 and an internal gear mesh with a ratio of 5:1. In the procedure, the gear mesh is designed to minimize the center distance for a given gear ratio, pressure angle, pinion torque, and allowable tooth strengths. From the methodology presented, a design space may be formulated for either external gear contact or for internal contact. The design space includes kinematics considerations of involute interference, tip fouling, and contact ratio. Also included are design constraints based on bending fatigue in the pinion fillet and Hertzian contact pressure in the full load region and at the gear tip where scoring is possible. This design space is two dimensional, giving the gear mesh center distance as a function of diametral pitch and the number of pinion teeth. The constraint equations were identified for kinematic interference, fillet bending fatigue, pitting fatigue, and scoring pressure, which define the optimal design space for a given gear design. The locus of equal size optimum designs was identified as the straight line through the origin which has the least slope in the design region.
Multidisciplinary design optimization using multiobjective formulation techniques
NASA Technical Reports Server (NTRS)
Chattopadhyay, Aditi; Pagaldipti, Narayanan S.
1995-01-01
This report addresses the development of a multidisciplinary optimization procedure using an efficient semi-analytical sensitivity analysis technique and multilevel decomposition for the design of aerospace vehicles. A semi-analytical sensitivity analysis procedure is developed for calculating computational grid sensitivities and aerodynamic design sensitivities. Accuracy and efficiency of the sensitivity analysis procedure are established through comparison of the results with those obtained using a finite difference technique. The developed sensitivity analysis technique is then used within a multidisciplinary optimization procedure for designing aerospace vehicles. The optimization problem, with the integration of aerodynamics and structures, is decomposed into two levels. Optimization is performed for improved aerodynamic performance at the first level and improved structural performance at the second level. Aerodynamic analysis is performed by solving the three-dimensional parabolized Navier-Stokes equations. A nonlinear programming technique and an approximate analysis procedure are used for optimization. The procedure developed is applied to design the wing of a high-speed aircraft. Results obtained show significant improvements in the aircraft's aerodynamic and structural performance when compared to a reference or baseline configuration. The use of the semi-analytical sensitivity technique provides significant computational savings.
Application of Optimal Designs to Item Calibration
Lu, Hung-Yi
2014-01-01
In computerized adaptive testing (CAT), examinees are presented with various sets of items chosen from a precalibrated item pool. Consequently, the attrition speed of the items is extremely fast, and replenishing the item pool is essential. Therefore, item calibration has become a crucial concern in maintaining item banks. In this study, a two-parameter logistic model is used. We applied optimal designs and adaptive sequential analysis to solve this item calibration problem. The results indicated that the proposed optimal designs are cost effective and time efficient. PMID:25188318
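For reference, the two-parameter logistic model used in the study can be written down directly, together with the item information function that optimal calibration designs seek to maximize (a sketch only; the paper's sequential design criteria are not reproduced here):

```python
import math

def p_correct(theta, a, b):
    """2PL IRT model: probability of a correct response for examinee
    ability theta, item discrimination a, and item difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item, I = a^2 * p * (1 - p).
    Optimal calibration designs route examinees to items where this is
    largest, i.e. where ability is near the item difficulty."""
    p = p_correct(theta, a, b)
    return a * a * p * (1.0 - p)
```

Information peaks at theta = b, which is why adaptive assignment of examinees to uncalibrated items can be both cost effective and time efficient.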
Using Approximations to Accelerate Engineering Design Optimization
NASA Technical Reports Server (NTRS)
Torczon, Virginia; Trosset, Michael W.
1998-01-01
Optimization problems that arise in engineering design are often characterized by several features that hinder the use of standard nonlinear optimization techniques. Foremost among these features is that the functions used to define the engineering optimization problem often are computationally intensive. Within a standard nonlinear optimization algorithm, the computational expense of evaluating the functions that define the problem would necessarily be incurred for each iteration of the optimization algorithm. Faced with such prohibitive computational costs, an attractive alternative is to make use of surrogates within an optimization context since surrogates can be chosen or constructed so that they are typically much less expensive to compute. For the purposes of this paper, we will focus on the use of algebraic approximations as surrogates for the objective. In this paper we introduce the use of so-called merit functions that explicitly recognize the desirability of improving the current approximation to the objective during the course of the optimization. We define and experiment with the use of merit functions chosen to simultaneously improve both the solution to the optimization problem (the objective) and the quality of the approximation. Our goal is to further improve the effectiveness of our general approach without sacrificing any of its rigor.
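A minimal sketch of the merit-function idea described above (the numeric surrogate predictions, error estimates, and the weight rho are invented for illustration; the paper's exact merit functions are not reproduced):

```python
def merit(surrogate_value, uncertainty, rho):
    """Merit function: favor candidates whose predicted objective is low
    AND whose approximation is poorly resolved, so each expensive true
    evaluation improves both the solution and the surrogate.
    rho >= 0 trades exploitation (rho = 0) against exploration."""
    return surrogate_value - rho * uncertainty

candidates = [0.0, 1.0, 2.0, 3.0]
pred  = {0.0: 4.0, 1.0: 1.0, 2.0: 0.5, 3.0: 2.0}   # surrogate predictions
sigma = {0.0: 0.1, 1.0: 0.1, 2.0: 0.1, 3.0: 2.0}   # approximation error estimates
greedy  = min(candidates, key=lambda x: merit(pred[x], sigma[x], rho=0.0))
explore = min(candidates, key=lambda x: merit(pred[x], sigma[x], rho=1.0))
```

With rho = 0 the search exploits the surrogate's current minimum; a positive rho instead selects the poorly-modeled candidate, so the next expensive evaluation also improves the quality of the approximation.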
Optimized Uncertainty Quantification Algorithm Within a Dynamic Event Tree Framework
J. W. Nielsen; Akira Tokuhiro; Robert Hiromoto
2014-06-01
Methods for developing Phenomenological Identification and Ranking Tables (PIRT) for nuclear power plants have been a useful tool in providing insight into modelling aspects that are important to safety. These methods combine expert knowledge of reactor plant transients and thermal-hydraulic codes to identify the phenomena of highest importance. Quantified PIRT (QPIRT) provides a rigorous method for quantifying the phenomena that can have the greatest impact. The transients that are evaluated, and the timing of those events, are typically developed in collaboration with Probabilistic Risk Assessment (PRA). Though quite effective in evaluating risk, traditional PRA methods lack the capability to evaluate complex dynamic systems where end states may vary as a function of the transition time from physical state to physical state. Dynamic PRA (DPRA) methods provide a more rigorous analysis of such complex dynamic systems. A limitation of DPRA is its potential for state (combinatorial) explosion, which grows as a function of the number of components as well as the sampling of state-to-state transition times for the entire system. This paper presents a method for performing QPIRT within a dynamic event tree framework such that the timing of events resulting in the highest probabilities of failure is captured, with the QPIRT performed simultaneously with the discrete dynamic event tree evaluation. The simulation yields a formal QPIRT for each end state. Because the use of dynamic event trees leads to state explosion as the number of possible component states increases, this paper utilizes a branch-and-bound algorithm to optimize the solution of the dynamic event trees, and summarizes the methods used to implement the branch-and-bound algorithm in solving the discrete dynamic event trees.
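The paper's branch-and-bound operates on dynamic event trees; purely as a generic illustration of the pruning principle, the sketch below applies it to a toy 0/1 selection problem (all names and numbers are ours):

```python
def branch_and_bound(weights, values, capacity):
    """Generic branch-and-bound over binary include/exclude decisions
    (a toy 0/1 knapsack stands in for branching over event-tree states).
    A branch is pruned when its optimistic bound cannot beat the best
    complete solution found so far, which is what tames the
    combinatorial explosion of the full tree."""
    best = [0.0]
    n = len(weights)

    def bound(i):
        # Optimistic bound: assume every remaining item can be taken.
        return sum(values[i:])

    def recurse(i, cap, val):
        if val > best[0]:
            best[0] = val
        if i == n or val + bound(i) <= best[0]:
            return                          # prune: bound cannot improve best
        if weights[i] <= cap:               # branch 1: include item i
            recurse(i + 1, cap - weights[i], val + values[i])
        recurse(i + 1, cap, val)            # branch 2: exclude item i

    recurse(0, capacity, 0.0)
    return best[0]
```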
Instrument design and optimization using genetic algorithms
Hoelzel, Robert; Bentley, Phillip M.; Fouquet, Peter
2006-10-15
This article describes the design of highly complex physical instruments by using a canonical genetic algorithm (GA). The procedure can be applied to all instrument designs where performance goals can be quantified. It is particularly suited to the optimization of instrument design where local optima in the performance figure of merit are prevalent. Here, a GA is used to evolve the design of the neutron spin-echo spectrometer WASP which is presently being constructed at the Institut Laue-Langevin, Grenoble, France. A comparison is made between this artificial intelligence approach and the traditional manual design methods. We demonstrate that the search of parameter space is more efficient when applying the genetic algorithm, and the GA produces a significantly better instrument design. Furthermore, it is found that the GA increases flexibility, by facilitating the reoptimization of the design after changes in boundary conditions during the design phase. The GA also allows the exploration of 'nonstandard' magnet coil geometries. We conclude that this technique constitutes a powerful complementary tool for the design and optimization of complex scientific apparatus, without replacing the careful thought processes employed in traditional design methods.
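A canonical GA of the kind described can be sketched on a stand-in "OneMax" figure of merit (all parameters here are illustrative; the WASP design encoding and merit function are far richer):

```python
import random

def canonical_ga(fitness, n_bits, pop_size=30, gens=60, p_mut=0.02, seed=1):
    """Canonical GA: binary genomes, tournament selection, one-point
    crossover, bitwise mutation. In an instrument design the bit string
    would encode quantities such as coil geometry parameters; here it is
    a plain bit vector."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(gens):
        def pick():
            a, b = rng.sample(pop, 2)       # binary tournament
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, n_bits)  # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [g ^ (rng.random() < p_mut) for g in child]  # mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# "OneMax" (count of 1-bits) stands in for an instrument figure of merit.
best = canonical_ga(sum, n_bits=20)
```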
Optimization methods for alternative energy system design
NASA Astrophysics Data System (ADS)
Reinhardt, Michael Henry
An electric vehicle heating system and a solar thermal coffee dryer are presented as case studies in alternative energy system design optimization. Design optimization tools are compared using these case studies, including linear programming, integer programming, and fuzzy integer programming. Although most decision variables in the designs of alternative energy systems are generally discrete (e.g., numbers of photovoltaic modules, thermal panels, layers of glazing in windows), the literature shows that the optimization methods used historically for design utilize continuous decision variables. Integer programming, used to find the optimal investment in conservation measures as a function of life cycle cost of an electric vehicle heating system, is compared to linear programming, demonstrating the importance of accounting for the discrete nature of design variables. The electric vehicle study shows that conservation methods similar to those used in building design, which reduce the overall UA of a 22 ft electric shuttle bus from 488 to 202 Btu/hr-F, can eliminate the need for fossil fuel heating systems when operating in the northeast United States. Fuzzy integer programming is presented as a means of accounting for imprecise design constraints, such as being environmentally friendly, in the optimization process. The solar thermal coffee dryer study focuses on a deep-bed design using unglazed thermal collectors (UTC). Experimental data from parchment coffee drying are gathered, including drying constants and equilibrium moisture. In this case, fuzzy linear programming is presented as a means of optimizing experimental procedures to produce the most information under imprecise constraints. Graphical optimization is used to show that for every 1 m2 of deep-bed dryer, of 0.4 m depth, a UTC array consisting of five 1.1 m2 panels and a photovoltaic array consisting of one 0.25 m2 panel produces the most dry coffee per dollar invested in the system. In general this study
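The point about discrete decision variables can be made concrete with a tiny integer program solved by enumeration (all coefficients below are invented for illustration, not taken from the study):

```python
from itertools import product

def best_integer_design(options, cost, constraint):
    """Exhaustive integer program for a small design space: enumerate the
    discrete choices (e.g. numbers of PV modules or glazing layers), keep
    the feasible combinations, and return the minimum-cost design. A
    rounded LP-relaxation solution can miss this optimum, which is the
    case for respecting the discrete nature of design variables."""
    feasible = [x for x in product(*options) if constraint(x)]
    return min(feasible, key=cost)

# Hypothetical example: choose counts of two conservation measures.
cost = lambda x: 100 * x[0] + 250 * x[1]             # purchase cost
feasible_heat = lambda x: 3 * x[0] + 8 * x[1] >= 10  # heat-retention requirement
design = best_integer_design([range(4), range(4)], cost, feasible_heat)
```

Note that the continuous relaxation of this problem prefers a fractional count of the cheap measure; rounding it either violates the constraint or overshoots the cost, while the integer search finds the true discrete optimum.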
Branch target buffer design and optimization
NASA Technical Reports Server (NTRS)
Perleberg, Chris H.; Smith, Alan J.
1993-01-01
Consideration is given to two major issues in the design of branch target buffers (BTBs), with the goal of achieving maximum performance for a given number of bits allocated to the BTB design. The first issue is BTB management; the second is what information to keep in the BTB. A number of solutions to these problems are reviewed, and various optimizations in the design of BTBs are discussed. Design target miss ratios for BTBs are developed, making it possible to estimate the performance of BTBs for real workloads.
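A minimal model of a direct-mapped BTB illustrates one of the bit-allocation tradeoffs discussed, tag bits versus number of entries (this sketch is ours, not the paper's simulator):

```python
class BranchTargetBuffer:
    """Minimal direct-mapped BTB: predicts the target of a taken branch,
    indexed by program counter. The tag check prevents aliasing between
    branches that map to the same entry; spending bits on tags competes
    with spending them on additional entries."""
    def __init__(self, n_entries):
        self.n = n_entries
        self.entries = [None] * n_entries   # each entry: (tag, target) or None

    def predict(self, pc):
        entry = self.entries[pc % self.n]
        if entry and entry[0] == pc // self.n:
            return entry[1]                 # hit: predicted branch target
        return None                         # miss: fetch falls through

    def update(self, pc, target):
        self.entries[pc % self.n] = (pc // self.n, target)

btb = BranchTargetBuffer(16)
btb.update(0x40, 0x80)                      # record a taken branch
```

Without the tag, the lookup for address 0x50 (which maps to the same entry as 0x40) would return a bogus target; with it, the BTB correctly reports a miss.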
Evolutionary optimization methods for accelerator design
NASA Astrophysics Data System (ADS)
Poklonskiy, Alexey A.
Many problems from the fields of accelerator physics and beam theory can be formulated as optimization problems and, as such, solved using optimization methods. Despite the growing efficiency of optimization methods, the adoption of modern optimization techniques in these fields is rather limited. Evolutionary Algorithms (EAs) form a relatively new and actively developed family of optimization methods. They possess many attractive features such as ease of implementation, modest requirements on the objective function, good tolerance to noise, robustness, and the ability to perform a global search efficiently. In this work we study the application of EAs to problems from accelerator physics and beam theory. We review the most commonly used methods of unconstrained optimization and describe GATool, the evolutionary algorithm and software package used in this work, in detail. Then we use a set of test problems to assess its performance in terms of computational resources, quality of the obtained result, and the tradeoff between them. We justify the choice of GATool as a heuristic method to generate cutoff values for the COSY-GO rigorous global optimization package for the COSY Infinity scientific computing package. We design the model of their mutual interaction and demonstrate that the quality of the result obtained by GATool increases as the information about the search domain is refined, which supports the usefulness of this model. We discuss GATool's performance on problems suffering from static and dynamic noise and study useful strategies of GATool parameter tuning for these and other difficult problems. We review the challenges of constrained optimization with EAs and the methods commonly used to overcome them. We describe REPA, a new constrained optimization method based on repairing, in detail, including the properties of its two repairing techniques: REFIND and REPROPT. We assess REPROPT's performance on the standard constrained
Integrated structural-aerodynamic design optimization
NASA Technical Reports Server (NTRS)
Haftka, R. T.; Kao, P. J.; Grossman, B.; Polen, D.; Sobieszczanski-Sobieski, J.
1988-01-01
This paper focuses on the processes of simultaneous aerodynamic and structural wing design as a prototype for design integration, with emphasis on the major difficulty associated with multidisciplinary design optimization processes, their enormous computational costs. Methods are presented for reducing this computational burden through the development of efficient methods for cross-sensitivity calculations and the implementation of approximate optimization procedures. Utilizing a modular sensitivity analysis approach, it is shown that the sensitivities can be computed without the expensive calculation of the derivatives of the aerodynamic influence coefficient matrix, and the derivatives of the structural flexibility matrix. The same process is used to efficiently evaluate the sensitivities of the wing divergence constraint, which should be particularly useful, not only in problems of complete integrated aircraft design, but also in aeroelastic tailoring applications.
Multidisciplinary Concurrent Design Optimization via the Internet
NASA Technical Reports Server (NTRS)
Woodard, Stanley E.; Kelkar, Atul G.; Koganti, Gopichand
2001-01-01
A methodology is presented which uses commercial design and analysis software and the Internet to perform concurrent multidisciplinary optimization. The methodology provides a means to develop multidisciplinary designs without requiring that all software be accessible from the same local network. The procedures are amenable to design and development teams whose members, expertise and respective software are not geographically located together. This methodology facilitates multidisciplinary teams working concurrently on a design problem of common interest. Partition of design software to different machines allows each constituent software to be used on the machine that provides the most economy and efficiency. The methodology is demonstrated on the concurrent design of a spacecraft structure and attitude control system. Results are compared to those derived from performing the design with an autonomous FORTRAN program.
Shape optimization techniques for musical instrument design
NASA Astrophysics Data System (ADS)
Henrique, Luis; Antunes, Jose; Carvalho, Joao S.
2002-11-01
The design of musical instruments is still mostly based on empirical knowledge and costly experimentation. One interesting improvement is the shape optimization of resonating components so that, given a number of constraints (allowed parameter ranges, shape smoothness, etc.), vibrations occur at specified modal frequencies. Each admissible geometrical configuration generates an error between the computed eigenfrequencies and the target set. Typically, error surfaces present many local minima, corresponding to suboptimal designs. This difficulty can be overcome using global optimization techniques, such as simulated annealing. However, these methods are greedy with respect to the number of function evaluations required, so the computational effort can be unacceptable when complex problems, such as bell optimization, are tackled. Those issues are addressed in this paper, and a method for improving optimization procedures is proposed. Instead of using the local geometric parameters as search variables, the system geometry is modeled in terms of truncated series of orthogonal space functions, and optimization is performed on their amplitude coefficients. Fourier series and orthogonal polynomials are typical such functions. This technique considerably reduces the number of search variables and has the potential for significant computational savings in complex problems. It is illustrated by optimizing the shapes of both current and uncommon marimba bars.
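The parameterization idea can be sketched directly; a cosine series is one instance of the orthogonal function families mentioned (the profile and coefficient names here are illustrative):

```python
import math

def shape_profile(coeffs, x):
    """Bar thickness profile modeled as a truncated cosine series
    h(x) = a0 + sum_k a_k * cos(k*pi*x), for x in [0, 1]. Optimizing a few
    coefficients a_k replaces optimizing many nodal thicknesses, and the
    truncation itself enforces shape smoothness."""
    return coeffs[0] + sum(a * math.cos(k * math.pi * x)
                           for k, a in enumerate(coeffs[1:], start=1))

bar = [1.0, 0.5]    # mean thickness plus a single modulation term
```

A simulated-annealing loop would then perturb the handful of amplitudes in `bar` rather than hundreds of local geometric parameters, which is where the claimed reduction in function evaluations per dimension comes from.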
Design and Performance Frameworks for Constructing Problem-Solving Simulations
Stevens, Ron; Palacio-Cayetano, Joycelin
2003-01-01
Rapid advancements in hardware, software, and connectivity are helping to shorten the times needed to develop computer simulations for science education. These advancements, however, have not been accompanied by corresponding theories of how best to design and use these technologies for teaching, learning, and testing. Such design frameworks ideally would be guided less by the strengths/limitations of the presentation media and more by cognitive analyses detailing the goals of the tasks, the needs and abilities of students, and the resulting decision outcomes needed by different audiences. This article describes a problem-solving environment and associated theoretical framework for investigating how students select and use strategies as they solve complex science problems. A framework is first described for designing on-line problem spaces that highlights issues of content, scale, cognitive complexity, and constraints. While this framework was originally designed for medical education, it has proven robust and has been successfully applied to learning environments from elementary school through medical school. Next, a similar framework is detailed for collecting student performance and progress data that can provide evidence of students' strategic thinking and that could potentially be used to accelerate student progress. Finally, experimental validation data are presented that link strategy selection and use with other metrics of scientific reasoning and student achievement. PMID:14506505
Photovoltaic design optimization for terrestrial applications
NASA Technical Reports Server (NTRS)
Ross, R. G., Jr.
1978-01-01
As part of the Jet Propulsion Laboratory's Low-Cost Solar Array Project, a comprehensive program of module cost-optimization has been carried out. The objective of these studies has been to define means of reducing the cost and improving the utility and reliability of photovoltaic modules for the broad spectrum of terrestrial applications. This paper describes one of the methods being used for module optimization, including the derivation of specific equations which allow the optimization of various module design features. The method is based on minimizing the life-cycle cost of energy for the complete system. Comparison of the life-cycle energy cost with the marginal cost of energy each year allows the logical plant lifetime to be determined. The equations derived allow the explicit inclusion of design parameters such as tracking, site variability, and module degradation with time. An example problem involving the selection of an optimum module glass substrate is presented.
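A hedged sketch of the life-cycle energy-cost criterion described above (the discounting convention and names are ours; the paper's equations also include tracking, site variability, and module degradation terms omitted here):

```python
def levelized_energy_cost(capital, annual_om, annual_kwh, years, discount):
    """Life-cycle cost of energy: discounted lifetime cost divided by
    discounted lifetime energy. Comparing this figure against each
    year's marginal cost of energy is what determines the logical
    plant lifetime discussed above."""
    d = 1.0 + discount
    cost = capital + sum(annual_om / d**t for t in range(1, years + 1))
    energy = sum(annual_kwh / d**t for t in range(1, years + 1))
    return cost / energy
```

Design choices such as the module glass substrate enter through `capital` and through their effect on delivered `annual_kwh`, so alternatives can be ranked by this single cost-per-energy figure.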
Optimal radar waveform design for moving target
NASA Astrophysics Data System (ADS)
Zhu, Binqi; Gao, Yesheng; Wang, Kaizhi; Liu, Xingzhao
2016-07-01
An optimal radar waveform-design method is proposed for detecting moving targets in the presence of clutter and noise. The clutter is split into moving and static parts. Radar moving-target/clutter models are introduced and combined with the Neyman-Pearson criterion to design optimal waveforms. Results show that the optimal waveform for a moving target differs from that for a static target. The combination of simple-frequency signals could produce maximum detectability under different noise power spectral density situations. Simulations show that our algorithm greatly improves the signal-to-clutter-plus-noise ratio of the radar system. Therefore, this algorithm may be preferable for moving-target detection when prior information on clutter and noise is available.
MDO can help resolve the designer's dilemma. [multidisciplinary design optimization
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw; Tulinius, Jan R.
1991-01-01
Multidisciplinary design optimization (MDO) is presented as a rapidly growing body of methods, algorithms, and techniques that will provide a quantum jump in the effectiveness and efficiency of the quantitative side of design, and will turn that side into an environment in which the qualitative side can thrive. MDO borrows from CAD/CAM for graphic visualization of geometrical and numerical data, from database technology, and from advances in computer software and hardware. Expected benefits from this methodology are a rational, mathematically consistent approach to hypersonic aircraft design, designs pushed closer to the optimum, and a design process either shortened or leaving time available for different concepts to be explored.
Shape design sensitivity analysis and optimal design of structural systems
NASA Technical Reports Server (NTRS)
Choi, Kyung K.
1987-01-01
The material derivative concept of continuum mechanics and an adjoint variable method of design sensitivity analysis are used to relate variations in structural shape to measures of structural performance. A domain method of shape design sensitivity analysis is used to best utilize the basic character of the finite element method, which gives accurate information not on the boundary but in the domain. Implementation of shape design sensitivity analysis using finite element computer codes is discussed. Recent numerical results are used to demonstrate the accuracy obtainable with the method. Results of the design sensitivity analysis are used to carry out design optimization of a built-up structure.
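The adjoint variable idea can be shown on a toy discrete linear system; this stands in for, and is much simpler than, the continuum shape-sensitivity formulation in the report (all matrices and names below are invented for illustration):

```python
import numpy as np

def adjoint_sensitivity(K, dK_dp, f, g_grad):
    """Adjoint sensitivity for a linear system K u = f and performance
    measure g(u): solve the adjoint system K^T lam = dg/du once, then
    dg/dp = -lam^T (dK/dp) u. One adjoint solve covers all design
    variables, instead of one direct solve per variable."""
    u = np.linalg.solve(K, f)
    lam = np.linalg.solve(K.T, g_grad(u))
    return -lam @ dK_dp @ u

K = np.array([[3.0, 1.0], [1.0, 2.0]])      # toy "stiffness" matrix
dK = np.array([[1.0, 0.0], [0.0, 0.0]])     # its derivative w.r.t. one parameter
f = np.array([1.0, 1.0])
g = lambda u: u.sum()                       # performance measure g(u)
sens = adjoint_sensitivity(K, dK, f, lambda u: np.ones(2))

# Finite-difference check of the same derivative:
h = 1e-6
fd = (g(np.linalg.solve(K + h * dK, f)) - g(np.linalg.solve(K - h * dK, f))) / (2 * h)
```

The adjoint and finite-difference values agree, but the adjoint route needs no re-solution of the system per design variable, which is the efficiency argument behind adjoint-based shape design sensitivity analysis.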
Design Optimization of Structural Health Monitoring Systems
Flynn, Eric B.
2014-03-06
Sensor networks drive decisions. Approach: design networks to minimize the expected total cost (in a statistical sense, i.e., Bayes risk) associated with making wrong decisions and with installing, maintaining, and running the sensor network itself. Search for optimal solutions using a Monte-Carlo-sampling-adapted genetic algorithm. Applications include structural health monitoring and surveillance.
Embedding Educational Design Pattern Frameworks into Learning Management Systems
NASA Astrophysics Data System (ADS)
Derntl, Michael; Calvo, Rafael A.
Educational design patterns describe reusable solutions to the design of learning tasks and environments. While there are many projects producing patterns, there are few approaches dealing with supporting the instructor/user in instantiating and running those patterns on learning management systems (LMS). This paper aims to make a leap forward in this direction by presenting two different methods of embedding design pattern frameworks into LMS: (1) Supplying custom LMS components as part of the design patterns, and (2) Configuring existing LMS components based on design patterns. Descriptions of implementations and implications of these methods are provided.
Design Oriented Structural Modeling for Airplane Conceptual Design Optimization
NASA Technical Reports Server (NTRS)
Livne, Eli
1999-01-01
The main goal of the research conducted with the support of this grant was to develop design-oriented structural optimization methods for the conceptual design of airplanes. Traditionally in conceptual design, airframe weight is estimated from statistical equations developed over years of fitting airplane weight data in databases of similar existing airplanes. Use of such regression equations for the design of new airplanes can be justified only if the new airplanes use structural technology similar to that of the airplanes in those weight databases. If any new structural technology is to be pursued, or any new unconventional configurations designed, the statistical weight equations cannot be used. In such cases, structural weight estimation must be based on rigorous "physics based" structural analysis and optimization of the airframes under consideration. Work under this grant explored airframe design-oriented structural optimization techniques along two lines of research: methods based on "fast" design-oriented finite element technology, and methods based on equivalent plate / equivalent shell models of airframes, in which the vehicle is modelled as an assembly of plate and shell components, each simulating a lifting surface or nacelle/fuselage piece. Since responses to changes in geometry are essential in the conceptual design of airplanes, as is the capability to optimize the shape itself, research supported by this grant sought to develop efficient techniques for parametrization of airplane shape and sensitivity analysis with respect to shape design variables. Towards the end of the grant period, a prototype automated structural analysis code designed to work with the NASA Aircraft Synthesis conceptual design code ACSYNT was delivered to NASA Ames.
High Speed Civil Transport Design Using Collaborative Optimization and Approximate Models
NASA Technical Reports Server (NTRS)
Manning, Valerie Michelle
1999-01-01
The design of supersonic aircraft requires complex analysis in multiple disciplines, posing a challenge for optimization methods. In this thesis, collaborative optimization, a design architecture developed to solve large-scale multidisciplinary design problems, is applied to the design of supersonic transport concepts. Collaborative optimization takes advantage of natural disciplinary segmentation to facilitate parallel execution of design tasks. Discipline-specific design optimization proceeds while a coordinating mechanism ensures progress toward an optimum and compatibility between the disciplinary designs. Two concepts for supersonic aircraft are investigated: a conventional delta-wing design and a natural laminar flow concept that achieves improved performance by exploiting properties of supersonic flow to delay boundary layer transition. The work involves the development of aerodynamic and structural analyses and their integration within a collaborative optimization framework. It represents the most extensive application of the method to date.
NASA Technical Reports Server (NTRS)
Depenbrock, Brett T.; Balint, Tibor S.; Sheehy, Jeffrey A.
2014-01-01
Research and development organizations that push the innovation edge of technology frequently encounter challenges when attempting to identify an investment strategy and to accurately forecast the cost and schedule performance of selected projects. Fast-moving and complex environments require managers to quickly analyze and diagnose the value of returns on investment versus allocated resources. Our Project Assessment Framework through Design (PAFTD) tool facilitates decision making for NASA senior leadership to enable more strategic and consistent technology development investment analysis, beginning at implementation and continuing through the project life cycle. The framework takes an integrated approach by leveraging design principles of usability, feasibility, and viability and aligns them with methods employed by NASA's Independent Program Assessment Office for project performance assessment. The need exists to periodically revisit the justification and prioritization of technology development investments as changes occur over project life cycles. The framework informs management rapidly and comprehensively about diagnosed internal and external root causes of project performance.
A clothing modeling framework for uniform and armor system design
NASA Astrophysics Data System (ADS)
Man, Xiaolin; Swan, Colby C.; Rahmatalla, Salam
2006-05-01
In the analysis and design of military uniforms and body armor systems it is helpful to quantify the effects of the clothing/armor system on a wearer's physical performance capabilities. Toward this end, a clothing modeling framework is proposed for quantifying the mechanical interactions between a given uniform or body armor system design and a specific wearer performing defined physical tasks. The modeling framework consists of three interacting modules: (1) a macroscale fabric mechanics/dynamics model; (2) a collision detection and contact correction module; and (3) a human motion module. In the proposed framework, the macroscopic fabric model is based on a rigorous large-deformation continuum-degenerated shell theory representation. The collision and contact module enforces non-penetration constraints between the fabric and the human body and computes the associated contact forces between the two. The human body is represented in the current framework as an assemblage of overlapping ellipsoids that undergo rigid body motions consistent with human motions while performing actions such as walking, running, or jumping. The transient rigid body motions of each ellipsoidal body segment in time are determined using motion capture technology. The integrated modeling framework is then exercised to quantify the resistance that the clothing exerts on the wearer during the specific activities under consideration. Current results from the framework are presented, and its intended applications are discussed along with some of the key challenges remaining in clothing system modeling.
Computational design and optimization of energy materials
NASA Astrophysics Data System (ADS)
Chan, Maria
The use of density functional theory (DFT) to understand and improve energy materials for diverse applications - including energy storage, thermal management, catalysis, and photovoltaics - is widespread. The further step of using high-throughput DFT calculations to design materials has led to an acceleration in materials discovery and development. Due to various limitations in DFT, including accuracy and computational cost, however, it is important to leverage effective models and, in some cases, experimental information to aid the design process. In this talk, I will discuss efforts in design and optimization of energy materials using a combination of effective models, DFT, machine learning, and experimental information.
A hybrid-algorithm-based parallel computing framework for optimal reservoir operation
NASA Astrophysics Data System (ADS)
Li, X.; Wei, J.; Li, T.; Wang, G.
2012-12-01
To date, various optimization models have been developed to offer optimal operating policies for reservoirs. Each optimization model has its own merits and limitations, and no general algorithm exists even today. At times, several optimization models have to be combined to obtain desired results. In this paper, we present a parallel computing framework that combines various optimization models in a different way from traditional serial computing. The framework consists of three functional processor types: a master processor, slave processors, and a transfer processor. The master processor holds the full computation scheme and allocates optimization models to the slave processors; the slave processors run their allocated optimization models; and the transfer processor is in charge of solution communication among all slave processors. On this basis, the proposed framework can run various optimization models in parallel. Because of the solution communication, the framework can also integrate the merits of the optimization models involved during iteration, so the performance of each optimization model is improved. Moreover, the framework can effectively improve solution quality and increase solution speed by making full use of the computing power of parallel computers.
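The solution-communication idea can be mimicked in a few lines of sequential Python (a toy sketch, not the authors' parallel implementation): several search strategies propose candidates each round, and the best solution found so far is "transferred" back to all of them.

```python
import random

def hybrid_search(objective, strategies, rounds=200, seed=0):
    """Toy version of the solution-exchange framework: each 'slave' strategy
    proposes a candidate from the current shared best; the 'transfer' step then
    broadcasts the best solution found so far to every strategy."""
    rng = random.Random(seed)
    best = rng.uniform(-10.0, 10.0)
    for _ in range(rounds):
        candidates = [propose(best, rng) for propose in strategies]
        best = min(candidates + [best], key=objective)   # never loses ground
    return best

# Two toy strategies: a local perturbation and a global random restart.
local_step = lambda x, rng: x + rng.gauss(0.0, 0.5)
random_restart = lambda x, rng: rng.uniform(-10.0, 10.0)

# Minimize (x - 3)^2: the hybrid combines global exploration with local refinement.
x_star = hybrid_search(lambda x: (x - 3.0) ** 2, [local_step, random_restart])
```

In the paper's framework each strategy would be a full reservoir-operation optimization model running on its own processor; here they are one-line stand-ins.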
Aircraft family design using enhanced collaborative optimization
NASA Astrophysics Data System (ADS)
Roth, Brian Douglas
Significant progress has been made toward the development of multidisciplinary design optimization (MDO) methods that are well-suited to practical large-scale design problems. However, opportunities exist for further progress. This thesis describes the development of enhanced collaborative optimization (ECO), a new decomposition-based MDO method. To support the development effort, the thesis offers a detailed comparison of two existing MDO methods: collaborative optimization (CO) and analytical target cascading (ATC). This aids in clarifying their function and capabilities, and it provides inspiration for the development of ECO. The ECO method offers several significant contributions. First, it enhances communication between disciplinary design teams while retaining the low-order coupling between them. Second, it provides disciplinary design teams with more authority over the design process. Third, it resolves several troubling computational inefficiencies that are associated with CO. As a result, ECO provides significant computational savings (relative to CO) for the test cases and practical design problems described in this thesis. New aircraft development projects seldom focus on a single set of mission requirements. Rather, a family of aircraft is designed, with each family member tailored to a different set of requirements. This thesis illustrates the application of decomposition-based MDO methods to aircraft family design. This represents a new application area, since MDO methods have traditionally been applied to multidisciplinary problems. ECO offers aircraft family design the same benefits that it affords to multidisciplinary design problems. Namely, it simplifies analysis integration, it provides a means to manage problem complexity, and it enables concurrent design of all family members. In support of aircraft family design, this thesis introduces a new wing structural model with sufficient fidelity to capture the tradeoffs associated with component
Multidisciplinary Design Optimization of A Highly Flexible Aeroservoelastic Wing
NASA Astrophysics Data System (ADS)
Haghighat, Sohrab
A multidisciplinary design optimization framework is developed that integrates control system design with aerostructural design for a highly deformable wing. The objective of this framework is to surpass existing aircraft endurance limits through the use of an active load alleviation system designed concurrently with the rest of the aircraft. The novelty of this work is twofold. First, a unified dynamics framework is developed to represent the full six-degree-of-freedom rigid-body dynamics along with the structural dynamics. It allows an integrated control design to account for both manoeuvrability (flying quality) and aeroelasticity criteria simultaneously. Second, by synthesizing the aircraft control system along with the structural sizing and aerodynamic shape design, the final design has the potential to exploit synergies among the three disciplines and yield higher-performing aircraft. A co-rotational structural framework featuring Euler--Bernoulli beam elements is developed to capture the wing's nonlinear deformations under the effect of aerodynamic and inertial loadings. In this work, a three-dimensional aerodynamic panel code, capable of calculating both steady and unsteady loadings, is used. Two different control methods, a model predictive controller (MPC) and a 2-DOF mixed-norm robust controller, are considered for controlling a highly flexible aircraft. Both control techniques offer unique advantages that make them promising for this purpose. The control system works towards executing time-dependent manoeuvres along with performing gust/manoeuvre load alleviation. The developed framework is demonstrated in two design cases: one in which the control system simply works towards achieving or maintaining a target altitude, and another in which the control system also performs load alleviation. The use of the active load alleviation system results in a significant improvement in the aircraft performance.
A Proposed Conceptual Framework for Curriculum Design in Physical Fitness.
ERIC Educational Resources Information Center
Miller, Peter V.; Beauchamp, Larry S.
A physical fitness curriculum, designed to provide cumulative benefits in a sequential pattern, is based upon a framework of a conceptual structure. The curriculum's ultimate goal is the achievement of greater physiological efficiency through a holistic approach that would strengthen circulatory-respiratory, mechanical, and neuro-muscular…
Sustainable Supply Chain Design by the P-Graph Framework
The present work proposes a computer-aided methodology for designing sustainable supply chains in terms of sustainability metrics by resorting to the P-graph framework. The methodology is an outcome of the collaboration between the Office of Research and Development (ORD) of the ...
TARDIS: An Automation Framework for JPL Mission Design and Navigation
NASA Technical Reports Server (NTRS)
Roundhill, Ian M.; Kelly, Richard M.
2014-01-01
Mission Design and Navigation at the Jet Propulsion Laboratory has implemented an automation framework tool to assist in orbit determination and maneuver design analysis. This paper describes the lessons learned from previous automation tools and how they have been implemented in this tool. In addition this tool has revealed challenges in software implementation, testing, and user education. This paper describes some of these challenges and invites others to share their experiences.
Guan, Xiangmin; Zhang, Xuejun; Zhu, Yanbo; Sun, Dengfeng; Lei, Jiaxing
2015-01-01
Considering reducing airspace congestion and flight delay simultaneously, this paper formulates the airway network flow assignment (ANFA) problem as a multiobjective optimization model and presents a new multiobjective optimization framework to solve it. First, an effective multi-island parallel evolution algorithm with multiple evolution populations is employed to improve the optimization capability. Second, the nondominated sorting genetic algorithm II is applied for each population. In addition, a cooperative coevolution algorithm is adapted to divide the ANFA problem into several low-dimensional biobjective optimization problems that are easier to deal with. Finally, in order to maintain the diversity of solutions and to avoid prematurity, a dynamic adjustment operator based on solution congestion degree is specifically designed for the ANFA problem. Simulation results using real traffic data from the Chinese air route network and daily flight plans demonstrate that the proposed approach can improve solution quality effectively, showing superiority to existing approaches such as the multiobjective genetic algorithm, the well-known multiobjective evolutionary algorithm based on decomposition, and a cooperative coevolution multiobjective algorithm, as well as other parallel evolution algorithms with different migration topologies. PMID:26180840
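At the heart of the NSGA-II step used above is the extraction of nondominated solutions. A minimal (quadratic-time, purely illustrative) Pareto filter for minimization problems looks like this:

```python
def nondominated(points):
    """Return the Pareto-optimal points (minimization in every objective)."""
    def dominates(a, b):
        # a dominates b: no worse in all objectives, strictly better in one.
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Two objectives (e.g. congestion vs. delay): only the trade-off points survive.
front = nondominated([(1, 5), (2, 2), (5, 1), (3, 3), (4, 4)])
# front is [(1, 5), (2, 2), (5, 1)]
```

NSGA-II itself sorts the whole population into successive fronts of this kind and adds crowding-distance selection; this sketch shows only the dominance test.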
Robust Design Optimization via Failure Domain Bounding
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2007-01-01
This paper extends and applies the strategies recently developed by the authors for handling constraints under uncertainty to robust design optimization. For the scope of this paper, robust optimization is a methodology aimed at problems for which some parameters are uncertain and are only known to belong to some uncertainty set. This set can be described by either a deterministic or a probabilistic model. In the methodology developed herein, optimization-based strategies are used to bound the constraint violation region using hyper-spheres and hyper-rectangles. By comparing the resulting bounding sets with any given uncertainty model, it can be determined whether the constraints are satisfied for all members of the uncertainty model (i.e., constraints are feasible) or not (i.e., constraints are infeasible). If constraints are infeasible and a probabilistic uncertainty model is available, upper bounds to the probability of constraint violation can be efficiently calculated. The tools developed enable approximating not only the set of designs that make the constraints feasible but also, when required, the set of designs for which the probability of constraint violation is below a prescribed admissible value. When constraint feasibility is possible, several design criteria can be used to shape the uncertainty model of performance metrics of interest. Worst-case, least-second-moment, and reliability-based design criteria are considered herein. Since the problem formulation is generic and the tools derived only require standard optimization algorithms for their implementation, these strategies are easily applicable to a broad range of engineering problems.
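The hypersphere-bounding idea above can be caricatured with a sampling-based sketch. The paper's method is deterministic optimization; the bisection here assumes a single feasible-to-infeasible crossing along each ray, and every name below is illustrative:

```python
import math
import random

def safe_sphere_radius(g, center, n_dirs=500, r_max=10.0, tol=1e-3, seed=0):
    """Approximate the radius of the largest sphere around `center` avoiding the
    constraint-violation region {x : g(x) > 0}, by bisecting along random rays."""
    rng = random.Random(seed)
    dim, best = len(center), r_max
    for _ in range(n_dirs):
        v = [rng.gauss(0.0, 1.0) for _ in range(dim)]
        norm = math.sqrt(sum(c * c for c in v))
        v = [c / norm for c in v]                 # unit direction
        point = lambda r: [center[i] + r * v[i] for i in range(dim)]
        if g(point(r_max)) <= 0:
            continue            # no violation found along this ray within r_max
        lo, hi = 0.0, r_max     # invariant: g(point(lo)) <= 0 < g(point(hi))
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            lo, hi = (lo, mid) if g(point(mid)) > 0 else (mid, hi)
        best = min(best, hi)
    return best

# Violation whenever ||x|| > 2, so the exact safe radius around the origin is 2.
r = safe_sphere_radius(lambda x: sum(c * c for c in x) - 4.0, [0.0, 0.0])
```

Comparing such a bounding sphere against a given uncertainty set is what lets the paper certify feasibility for all members of the set.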
A Multiobjective Optimization Framework for Stochastic Control of Complex Systems
Malikopoulos, Andreas; Maroulas, Vasileios; Xiong, Professor Jie
2015-01-01
This paper addresses the problem of minimizing the long-run expected average cost of a complex system consisting of subsystems that interact with each other and the environment. We treat the stochastic control problem as a multiobjective optimization problem over the one-stage expected costs of the subsystems, and we show that the control policy yielding the Pareto optimal solution is an optimal control policy that minimizes the average cost criterion for the entire system. For practical situations with constraints consistent with those we study here, our results imply that the Pareto control policy may be of value in deriving an optimal control policy online in complex systems.
Design Methods and Optimization for Morphing Aircraft
NASA Technical Reports Server (NTRS)
Crossley, William A.
2005-01-01
This report provides a summary of accomplishments made during this research effort. The major accomplishments are in three areas. The first is the use of a multiobjective optimization strategy to help identify potential morphing features; it uses an existing aircraft sizing code to predict the weight, size, and performance of several fixed-geometry aircraft that are Pareto-optimal based upon two competing aircraft performance objectives. The second area has been titled morphing as an independent variable and formulates the sizing of a morphing aircraft as an optimization problem in which the amounts of geometric morphing for various aircraft parameters are included as design variables. This second effort consumed most of the overall effort on the project. The third area involved a more detailed sizing study of a commercial transport aircraft that would incorporate a morphing wing to possibly enable transatlantic point-to-point passenger service.
Optimal design of a space power system
NASA Technical Reports Server (NTRS)
Chun, Young W.; Braun, James F.
1990-01-01
The aerospace industry, like many other industries, regularly applies optimization techniques to develop designs which reduce cost, maximize performance, and minimize weight. The desire to minimize weight is of particular importance in space-related products since the costs of launch are directly related to payload weight, and launch vehicle capabilities often limit the allowable weight of a component or system. With these concerns in mind, this paper presents the optimization of a space-based power generation system for minimum mass. The goal of this work is to demonstrate the use of optimization techniques on a realistic and practical engineering system. The power system described uses thermoelectric devices to convert heat into electricity. The heat source for the system is a nuclear reactor. Waste heat is rejected from the system to space by a radiator.
Generalized mathematical models in design optimization
NASA Technical Reports Server (NTRS)
Papalambros, Panos Y.; Rao, J. R. Jagannatha
1989-01-01
The theory of optimality conditions of extremal problems can be extended to problems continuously deformed by an input vector. The connection between the sensitivity, well-posedness, stability and approximation of optimization problems is steadily emerging. The authors believe that the important realization here is that the underlying basis of all such work is still the study of point-to-set maps and of small perturbations, yet what has been identified previously as being just related to solution procedures is now being extended to study modeling itself in its own right. Many important studies related to the theoretical issues of parametric programming and large deformation in nonlinear programming have been reported in the last few years, and the challenge now seems to be in devising effective computational tools for solving these generalized design optimization models.
Multiobjective optimization in integrated photonics design.
Gagnon, Denis; Dumont, Joey; Dubé, Louis J
2013-07-01
We propose the use of the parallel tabu search algorithm (PTS) to solve combinatorial inverse design problems in integrated photonics. To assess the potential of this algorithm, we consider the problem of beam shaping using a two-dimensional arrangement of dielectric scatterers. The performance of PTS is compared to that of one of the most widely used optimization algorithms in photonics design, the genetic algorithm (GA). We find that PTS can produce comparable or better solutions than the GA, while requiring less computation time and fewer adjustable parameters. Taking the coherent beam shaping problem as a case study, we demonstrate how PTS can tackle multiobjective optimization problems and show that it represents a robust and efficient alternative to the GA.
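A serial skeleton of tabu search conveys the idea behind PTS (this is a generic sketch on a toy bit-string problem, not the photonics formulation): always move to the best admissible neighbor, and keep a short-term memory that forbids revisiting recent solutions.

```python
def tabu_search(objective, neighbors, start, iters=50, tabu_len=10):
    """Minimal tabu search: move to the best non-tabu neighbor each step (even
    if worse), with a bounded memory of recent solutions to escape local minima."""
    current = best = start
    tabu = [start]
    for _ in range(iters):
        options = [n for n in neighbors(current) if n not in tabu]
        if not options:
            break
        current = min(options, key=objective)       # best admissible move
        tabu = (tabu + [current])[-tabu_len:]       # bounded tabu list
        if objective(current) < objective(best):
            best = current
    return best

# Toy combinatorial problem: minimize the number of zeros in a bit string;
# the neighborhood flips one bit at a time.
flip_one = lambda bits: [bits[:i] + (1 - bits[i],) + bits[i + 1:]
                         for i in range(len(bits))]
count_zeros = lambda bits: bits.count(0)
best = tabu_search(count_zeros, flip_one, start=(0,) * 8)
```

The parallel variant the authors use runs several such searches concurrently; the scatterer-arrangement encoding and the beam-shaping objective would replace the bit string and zero count here.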
Initial data sampling in design optimization
NASA Astrophysics Data System (ADS)
Southall, Hugh L.; O'Donnell, Terry H.
2011-06-01
Evolutionary computation (EC) techniques in design optimization such as genetic algorithms (GA) or efficient global optimization (EGO) require an initial set of data samples (design points) to start the algorithm. They are obtained by evaluating the cost function at selected sites in the input space. A two-dimensional input space can be sampled using a Latin square, a statistical sampling technique which samples a square grid such that there is a single sample in any given row and column. The Latin hypercube is a generalization to any number of dimensions. However, a standard random Latin hypercube can result in initial data sets which may be highly correlated and may not have good space-filling properties. There are techniques which address these issues. We describe and use one technique in this paper.
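The Latin hypercube construction described above fits in a few lines. This is a plain random Latin hypercube, without the correlation-reducing or space-filling refinements the text alludes to; the names are illustrative:

```python
import random

def latin_hypercube(n_samples, n_dims, seed=0):
    """Random Latin hypercube on [0, 1)^n_dims: in every dimension, each of the
    n_samples equal strata receives exactly one sample."""
    rng = random.Random(seed)
    # For each dimension, shuffle the assignment of strata to sample indices.
    strata = [list(range(n_samples)) for _ in range(n_dims)]
    for s in strata:
        rng.shuffle(s)
    # Place sample i uniformly at random inside its stratum in each dimension.
    return [[(strata[d][i] + rng.random()) / n_samples for d in range(n_dims)]
            for i in range(n_samples)]

pts = latin_hypercube(5, 2)  # five initial design points in a 2-D input space
```

Each row and column of the 5x5 grid contains exactly one point, which is the Latin square property the text describes; a GA or EGO run would evaluate the cost function at these points to seed the search.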
Optimal robust motion controller design using multiobjective genetic algorithm.
Sarjaš, Andrej; Svečko, Rajko; Chowdhury, Amor
2014-01-01
This paper describes the use of a multiobjective genetic algorithm for robust motion controller design. Motion controller structure is based on a disturbance observer in an RIC framework. The RIC approach is presented in the form with internal and external feedback loops, in which an internal disturbance rejection controller and an external performance controller must be synthesised. This paper involves novel objectives for robustness and performance assessments for such an approach. Objective functions for the robustness property of RIC are based on simple even polynomials with nonnegativity conditions. Regional pole placement method is presented with the aims of controllers' structures simplification and their additional arbitrary selection. Regional pole placement involves arbitrary selection of central polynomials for both loops, with additional admissible region of the optimized pole location. Polynomial deviation between selected and optimized polynomials is measured with derived performance objective functions. A multiobjective function is composed of different unrelated criteria such as robust stability, controllers' stability, and time-performance indexes of closed loops. The design of controllers and multiobjective optimization procedure involve a set of the objectives, which are optimized simultaneously with a genetic algorithm-differential evolution.
Optimized design for an electrothermal microactuator
NASA Astrophysics Data System (ADS)
Cǎlimǎnescu, Ioan; Stan, Liviu-Constantin; Popa, Viorica
2015-02-01
In micromechanical structures, electrothermal actuators are known to be capable of providing larger force and reasonable tip deflection compared to electrostatic ones. Many studies have been devoted to the analysis of flexure actuators. One of the most popular electrothermal actuators is the 'U-shaped' actuator. The device is composed of two suspended beams with variable cross sections joined at the free end, which constrains the tip to move in an arcing motion when current is passed through the actuator. The goal of this research is to determine via FEA the best-fitted geometry of the microactuator (the optimization input parameters) in order to drive some of the output parameters, such as thermal strain or total deformation, to their maximum values. The software used to generate the CAD geometry was SolidWorks 2010, and all FEA analysis was conducted with ANSYS 13. The optimized model has smaller values of the geometric input parameters, i.e., a more compact geometry; the maximum temperature reached a smaller value for the optimized model; and the calculated heat flux is 13% higher for the optimized model, as are the Joule heat (26%), total deformation (1.2%), and thermal strain (8%). Simply by optimizing the design, the microactuator became more compact and more efficient.
Design Optimization of Marine Reduction Gears.
1983-09-01
... unconstrained problems. 1. Direct Methods: Direct methods are popular constrained optimization algorithms. One well-known direct method is the method of ... various popular tooth forms, and Appendix A contains a descriptive figure of gear tooth design variables. However, the following equations are a good ...
Optimal Design of Compact Spur Gear Reductions
1992-09-01
Lundberg and Palmgren (1952) developed a theory for the life and capacity of ball and roller bearings. This life model is ... bearings (Lundberg and Palmgren, 1952). Lundberg and Palmgren determined that the scatter in the life of a bearing can be modeled with a two-parameter ... optimal design of compact spur gear reductions includes the selection of bearing and shaft proportions in ...
Computational Methods for Design, Control and Optimization
2007-10-01
34scenario" that applies to channel flows ( Poiseuille flows , Couette flow ) and pipe flows . Over the past 75 years many complex "transition theories" have... Simulation of Turbulent Flows , Springer Verlag, 2005. Additional Publications Supported by this Grant 1. J. Borggaard and T. Iliescu, Approximate Deconvolution...rigorous analysis of design algorithms that combine numerical simulation codes, approximate sensitivity calculations and optimization codes. The fundamental
Database Design and Management in Engineering Optimization.
1988-02-01
Sreekanta Murthy, T., Shyy, Y.-K., and Arora, J. S., MIDAS: Management of Information for ... for educational and research purposes. It has considerably ... an education in the particular field of expertise. ... The types of information to be retained and presented depend on the user of the system. ... Though the design of MIDAS is directly influenced by the current structural optimization applications, it possesses
Simulation design for microalgal protein optimization
Imamoglu, Esra
2015-01-01
A method for designing the operating parameters (surface light intensity, operating temperature, and agitation rate) was proposed for microalgal protein production. Furthermore, a quadratic model was established and validated (R2 > 0.90) against experimental data. It was recorded that temperature and agitation rate were slightly interdependent. The microalgal protein performance could be estimated using the simulated experimental setup and procedure developed in this study. The results also point to a holistic approach that opens a new avenue for simulation design in microalgal protein optimization. PMID:26418695
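A quadratic response-surface fit of the kind validated above (R2 > 0.90) can be sketched in pure Python for a single factor; the data and all helper names here are made up for illustration, and the paper's model spans three factors with interaction terms:

```python
def fit_quadratic(xs, ys):
    """Fit y = a*x^2 + b*x + c by solving the 3x3 normal equations."""
    X = [[x * x, x, 1.0] for x in xs]
    XtX = [[sum(X[k][i] * X[k][j] for k in range(len(xs))) for j in range(3)]
           for i in range(3)]
    Xty = [sum(X[k][i] * ys[k] for k in range(len(xs))) for i in range(3)]
    # Gauss-Jordan elimination on the augmented system [XtX | Xty].
    M = [row[:] + [b] for row, b in zip(XtX, Xty)]
    for col in range(3):
        pivot = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(3):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[col])]
    return [M[i][3] / M[i][i] for i in range(3)]   # [a, b, c]

def r_squared(xs, ys, beta):
    """Coefficient of determination of the fitted quadratic."""
    a, b, c = beta
    preds = [a * x * x + b * x + c for x in xs]
    mean = sum(ys) / len(ys)
    ss_res = sum((y - p) ** 2 for y, p in zip(ys, preds))
    ss_tot = sum((y - mean) ** 2 for y in ys)
    return 1.0 - ss_res / ss_tot

xs = [0, 1, 2, 3, 4, 5]
ys = [1.1, 2.0, 5.2, 9.9, 17.1, 26.0]   # noisy samples of roughly y = x^2 + 1
beta = fit_quadratic(xs, ys)
```

Validation against held-out experimental runs, as in the abstract, would simply evaluate `r_squared` on data not used in the fit.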
Geometric constraints for shape and topology optimization in architectural design
NASA Astrophysics Data System (ADS)
Dapogny, Charles; Faure, Alexis; Michailidis, Georgios; Allaire, Grégoire; Couvelas, Agnes; Estevez, Rafael
2017-02-01
This work proposes a shape and topology optimization framework oriented towards conceptual architectural design. A particular emphasis is put on the possibility for the user to intervene in the optimization process by supplying information about his personal taste. More precisely, we formulate three novel constraints on the geometry of shapes; while the first two are mainly related to aesthetics, the third one may also be used to handle several fabrication issues that are of special interest in the design of civil structures. The common mathematical ingredient of all three models is the signed distance function to a domain and its sensitivity analysis with respect to perturbations of this domain; in the present work, this material is extended to the case where the ambient space is equipped with an anisotropic metric tensor. Numerical examples are discussed in two and three space dimensions.
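The common ingredient named above, the signed distance function, is easy to illustrate for a simple shape. Below is the standard closed-form signed distance to an axis-aligned rectangle (negative inside, zero on the boundary, positive outside); the function name and the example shape are ours, not the paper's:

```python
import math

def sdf_rectangle(px, py, hx, hy):
    """Signed distance from (px, py) to an axis-aligned rectangle of half-widths
    (hx, hy) centered at the origin: negative inside, positive outside."""
    dx, dy = abs(px) - hx, abs(py) - hy
    outside = math.hypot(max(dx, 0.0), max(dy, 0.0))  # distance when outside
    inside = min(max(dx, dy), 0.0)                    # negative depth when inside
    return outside + inside

# Unit-half-width square: the center is 1 inside, (2, 0) is 1 outside,
# and (2, 2) is sqrt(2) away from the nearest corner.
center = sdf_rectangle(0.0, 0.0, 1.0, 1.0)
edge = sdf_rectangle(2.0, 0.0, 1.0, 1.0)
corner = sdf_rectangle(2.0, 2.0, 1.0, 1.0)
```

The paper's sensitivity analysis studies how such a distance field changes under perturbations of the domain boundary (here that would mean varying `hx`, `hy`), generalized to an anisotropic ambient metric.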
A framework for the design of ambulance sirens.
Catchpole, K; McKeown, D
2007-08-01
Ambulance sirens are essential for assisting the safe and rapid arrival of an ambulance at the scene of an emergency. In this study, the parameters upon which sirens may be designed were examined and a framework for emergency vehicle siren design was proposed. Validity for the framework was supported through acoustic measurements and the evaluation of ambulance transit times over 240 emergency runs using two different siren systems. Modifying existing siren sounds to add high frequency content would improve vehicle penetration, detectability and sound localization cues, and mounting the siren behind the radiator grill, rather than on the light bar or under the wheel arch, would provide less unwanted noise while maintaining or improving the effective distance in front of the vehicle. Ultimately, these considerations will benefit any new attempt to design auditory warnings for the emergency services.
Optimal design of a tidal turbine
NASA Astrophysics Data System (ADS)
Kueny, J. L.; Lalande, T.; Herou, J. J.; Terme, L.
2012-11-01
An optimal design procedure has been applied to improve the design of an open-center tidal turbine. Specific software developed in C++ generates geometries adapted to the constraints imposed on this machine. Automatic scripts based on the AUTOGRID, IGG, FINE/TURBO and CFView software of the NUMECA CFD suite are used to evaluate all candidate geometries. This package is coupled with the optimization software EASY, which is based on an evolutionary strategy complemented by an artificial neural network. A new technique is proposed to guarantee the robustness of the mesh over the whole range of design parameters. An important improvement over the initial geometry has been obtained. To limit the total CPU time required for this optimization process, the geometry of the tidal turbine was treated as axisymmetric, with a uniform upstream velocity. A more complete model (12 M nodes) was then built in order to analyze the effects related to the sea-bed boundary layer, the proximity of the sea surface, the presence of a large triangular basement supporting the turbine, and a possible incidence of the upstream velocity.
A Cost Comparison Framework for Use in Optimizing Ground Water Pump and Treat Systems
This fact sheet has been prepared to provide a framework for conducting cost comparisons to evaluate whether or not to pursue potential opportunities from an optimization evaluation for improving, replacing, or supplementing the P&T system.
Study of a Fine Grained Threaded Framework Design
NASA Astrophysics Data System (ADS)
Jones, C. D.
2012-12-01
Traditionally, HEP experiments exploit the multiple cores in a CPU by having each core process one event. However, future PC designs are expected to use CPUs which double the number of processing cores at the same rate as the cost of memory falls by a factor of two. This effectively means the amount of memory per processing core will remain constant. This is a major challenge for LHC processing frameworks since the LHC is expected to deliver more complex events (e.g. greater pileup) in the coming years while the LHC experiments' frameworks are already memory constrained. Therefore in the not so distant future we may need to be able to efficiently use multiple cores to process one event. In this presentation we will discuss a design for an HEP processing framework which can allow very fine grained parallelization within one event as well as supporting processing multiple events simultaneously while minimizing the memory footprint of the job. The design is built around the libdispatch framework created by Apple Inc. (a port for Linux is available) whose central concept is the use of task queues. This design also accommodates the reality that not all code will be thread safe and therefore allows one to easily mark modules or subparts of modules as being thread unsafe. In addition, the design efficiently handles the requirement that events in one run must all be processed before starting to process events from a different run. After explaining the design we will provide measurements from simulating different processing scenarios where the processing times used for the simulation are drawn from processing times measured from actual CMS event processing.
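The task-queue idea can be illustrated outside libdispatch. The rough Python analogue below (module names are invented) runs a event's independent modules concurrently while serializing thread-unsafe modules behind a lock, which plays the role of a serial dispatch queue:

```python
from concurrent.futures import ThreadPoolExecutor
from threading import Lock

class Module:
    """One reconstruction step; thread-unsafe modules get a serializing lock."""
    def __init__(self, name, thread_safe=True):
        self.name = name
        self.thread_safe = thread_safe
        self._lock = Lock()

    def process(self, event):
        if self.thread_safe:
            return self._run(event)
        with self._lock:                 # analogue of a serial dispatch queue
            return self._run(event)

    def _run(self, event):
        return f"{self.name}:{event}"    # stand-in for real per-event work

def process_event(event, modules, pool):
    # submit all independent modules of one event to the shared worker pool
    futures = [pool.submit(m.process, event) for m in modules]
    return [f.result() for f in futures]

modules = [Module("tracker"), Module("calo"), Module("legacy_io", thread_safe=False)]
with ThreadPoolExecutor(max_workers=4) as pool:
    results = [process_event(e, modules, pool) for e in range(3)]
print(results[0])   # ['tracker:0', 'calo:0', 'legacy_io:0']
```

A real framework would additionally track data dependencies between modules and keep events from different runs ordered; this sketch shows only the queue-per-unsafe-resource pattern.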
A Framework for Preliminary Design of Aircraft Structures Based on Process Information. Part 1
NASA Technical Reports Server (NTRS)
Rais-Rohani, Masoud
1998-01-01
This report discusses the general framework and development of a computational tool for preliminary design of aircraft structures based on process information. The described methodology is suitable for multidisciplinary design optimization (MDO) activities associated with integrated product and process development (IPPD). The framework consists of three parts: (1) product and process definitions; (2) engineering synthesis; and (3) optimization. The product and process definitions are part of the input information provided by the design team. The backbone of the system is its ability to analyze a given structural design for performance as well as manufacturability and cost assessment. The system uses a database on material systems and manufacturing processes. Based on the identified set of design variables and an objective function, the system is capable of performing optimization subject to manufacturability, cost, and performance constraints. The accuracy of the manufacturability measures and cost models discussed here depends largely on the available data on specific methods of manufacture and assembly and associated labor requirements. As such, our focus in this research has been on the methodology itself and not so much on its accurate implementation in an industrial setting. A three-tier approach is presented for an IPPD-MDO based design of aircraft structures. The variable-complexity cost estimation methodology and an approach for integrating manufacturing cost assessment into the design process are also discussed. This report is presented in two parts. In the first part, the design methodology is presented, and the computational design tool is described. In the second part, a prototype model of the preliminary design Tool for Aircraft Structures based on Process Information (TASPI) is described. Part two also contains an example problem that applies the methodology described here to the evaluation of six different design concepts for a wing spar.
Harmonizing and Optimizing Fish Testing Methods: The OECD Framework Project
The Organisation for Economic Cooperation and Development (OECD) serves a key role in the international harmonization of testing of a wide variety of chemicals. An integrated fish testing framework project was initiated in mid-2009 through the OECD with the US as the lead country...
An optimal stratified Simon two-stage design.
Parashar, Deepak; Bowden, Jack; Starr, Colin; Wernisch, Lorenz; Mander, Adrian
2016-07-01
In Phase II oncology trials, therapies are increasingly being evaluated for their effectiveness in specific populations of interest. Such targeted trials require designs that allow for stratification based on the participants' molecular characterisation. A targeted design proposed by Jones and Holmgren (JH) [Jones CL, Holmgren E: 'An adaptive Simon two-stage design for phase 2 studies of targeted therapies', Contemporary Clinical Trials 28 (2007) 654-661] determines whether a drug has activity only in a disease sub-population or in the wider disease population. Their adaptive design uses results from a single interim analysis to decide whether to enrich the study population with a subgroup or not; it is based on two parallel Simon two-stage designs. We study the JH design in detail and extend it by providing a few alternative ways to control the familywise error rate, in the weak sense as well as the strong sense. We also introduce a novel optimal design by minimising the expected sample size. Our extended design contributes to the much-needed framework for conducting Phase II trials in stratified medicine. © 2016 The Authors Pharmaceutical Statistics Published by John Wiley & Sons Ltd.
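The operating characteristics that drive such optimisation (early-termination probability, expected sample size, and rejection probability of a single Simon two-stage design) follow directly from binomial tails. A sketch, using Simon's well-known optimal design for p0 = 0.1 vs p1 = 0.3 (r1/n1 = 1/10, r/n = 5/29) as the example:

```python
from scipy.stats import binom

def simon_two_stage(p, r1, n1, r, n):
    """Operating characteristics of a Simon two-stage design with stage sizes
    n1 and n - n1 and futility cutoffs r1 (stage 1) and r (overall)."""
    pet = binom.cdf(r1, n1, p)                 # stop early if X1 <= r1
    en = n1 + (1 - pet) * (n - n1)             # expected sample size
    # reject H0 (declare activity) if X1 > r1 and X1 + X2 > r
    reject = sum(binom.pmf(x1, n1, p) * (1 - binom.cdf(r - x1, n - n1, p))
                 for x1 in range(r1 + 1, n1 + 1))
    return pet, en, reject

pet0, en0, alpha = simon_two_stage(0.10, 1, 10, 5, 29)   # under the null p0
pet1, en1, power = simon_two_stage(0.30, 1, 10, 5, 29)   # under the alternative p1
print(round(en0, 1), round(alpha, 3), round(power, 3))
```

This should reproduce the familiar EN(p0) of about 15 patients for that design; the stratified JH extension evaluates two such designs in parallel with an interim enrichment decision.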
An Optimization Framework for Dynamic, Distributed Real-Time Systems
NASA Technical Reports Server (NTRS)
Eckert, Klaus; Juedes, David; Welch, Lonnie; Chelberg, David; Bruggerman, Carl; Drews, Frank; Fleeman, David; Parrott, David; Pfarr, Barbara
2003-01-01
This paper presents a model that is useful for developing resource allocation algorithms for distributed real-time systems that operate in dynamic environments. Interesting aspects of the model include dynamic environments and utility and service levels, which provide a means for graceful degradation in resource-constrained situations and support optimization of the allocation of resources. The paper also provides an allocation algorithm that illustrates how to use the model to produce feasible, optimal resource allocations.
Recommendations for the optimum design of pultruded frameworks
NASA Astrophysics Data System (ADS)
Mottram, J. T.
1994-09-01
For the optimum choice of pultruded beam members in frameworks there is a need to have a greater understanding of framework behavior under load. Research on the lateral-torsional buckling of a symmetric I-section has shown how much the resistance may be affected by the loading position and the support boundary conditions. By changing the warping at the connections from free, as assumed in the USA design manual, to fixed, as may be achieved with practical connection designs it is shown that there is a potential doubling in the buckling resistance. In addition, practical connections have some initial stiffness and moment resistance, thus the connections behave in a semirigid manner. This connection behavior makes inappropriate the present procedure for choosing beam sections on the basis of limiting deflection for a simply supported member. It is proposed that research be conducted to establish the potential of semirigid design, as now being used with structural steelwork. Results from such research should provide the first stage in the process for the optimum design of frameworks.
Machine Learning Techniques in Optimal Design
NASA Technical Reports Server (NTRS)
Cerbone, Giuseppe
1992-01-01
Many important applications can be formalized as constrained optimization tasks. For example, we are studying the engineering domain of two-dimensional (2-D) structural design. In this task, the goal is to design a structure of minimum weight that bears a set of loads. A solution to a design problem with a single load (L) and two stationary support points (S1 and S2), consisting of four members (E1, E2, E3, and E4) that connect the load to the support points, is discussed. In principle, optimal solutions to problems of this kind can be found by numerical optimization techniques. However, in practice [Vanderplaats, 1984] these methods are slow and can produce different local solutions whose quality (ratio to the global optimum) varies with the choice of starting point. Hence, their applicability to real-world problems is severely restricted. To overcome these limitations, we propose to augment numerical optimization by first performing a symbolic compilation stage to produce: (a) objective functions that are faster to evaluate and that depend less on the choice of the starting point; and (b) selection rules that associate problem instances with a set of recommended solutions. These goals are accomplished by successive specializations of the problem class and of the associated objective functions. In the end, this process reduces the problem to a collection of independent functions that are fast to evaluate, that can be differentiated symbolically, and that represent smaller regions of the overall search space. However, the specialization process can produce a large number of sub-problems. This is overcome by inductively deriving selection rules which associate problems with small sets of specialized independent sub-problems. Each set of candidate solutions is chosen to minimize a cost function which expresses the tradeoff between the quality of the solution that can be obtained from the sub-problem and the time it takes to produce it. The overall solution
Optimal design of biaxial tensile cruciform specimens
NASA Astrophysics Data System (ADS)
Demmerle, S.; Boehler, J. P.
1993-01-01
For experimental investigations of the mechanical behaviour of rolled sheet metals under biaxial stress states, cruciform flat specimens are mostly used. Different specimen geometries have been proposed in the literature by means of empirical methods. In order to evaluate the suitability of a specimen design, a mathematically well-defined criterion is developed, based on the standard deviations of the stress values in the test section. Applied with the finite element method, the criterion is employed to optimize the shape of biaxial cruciform specimens for isotropic elastic materials. Furthermore, the performance of the resulting optimized specimen design is investigated for off-axes tests on anisotropic materials. To this end, for the first time, an original testing device, consisting of hinged fixtures with knife edges at each arm of the specimen, is applied to the biaxial test. The results indicate the decisive superiority of the optimized specimens for tests on isotropic materials, as well as the paramount importance of the proposed off-axes testing technique for biaxial tests on anisotropic materials.
Topology optimization design of a space mirror
NASA Astrophysics Data System (ADS)
Liu, Jiazhen; Jiang, Bo
2015-11-01
As key components of the optical system of a space optical remote sensor, space mirrors have a surface accuracy that directly affects the imaging quality of the sensor. Large-diameter mirrors will be an important trend in the future development of space optical technology. However, a sharp increase in mirror diameter increases both the self-weight deformation of the mirror and the thermal deformation caused by temperature variations. A reasonable lightweight structure is therefore required to ensure that the optical performance of the system meets requirements. As a new lightweight design approach, topology optimization is an important direction in current space optical remote sensing research. The lightweight design of a rectangular mirror was studied using the variable-density method of topology optimization. The surface figure accuracy of the mirror assembly was obtained under different conditions: the PV value was less than λ/10 and the RMS value less than λ/50 (λ = 632.8 nm). The results show that the mirror assembly achieves sufficiently high static rigidity, dynamic stiffness and thermal stability, with sufficient resistance to external environmental interference. Key words: topology optimization, space mirror, lightweight, space optical remote sensor
Design search and optimization in aerospace engineering.
Keane, A J; Scanlan, J P
2007-10-15
In this paper, we take a design-led perspective on the use of computational tools in the aerospace sector. We briefly review the current state-of-the-art in design search and optimization (DSO) as applied to problems from aerospace engineering, focusing on those problems that make heavy use of computational fluid dynamics (CFD). This ranges over issues of representation, optimization problem formulation and computational modelling. We then follow this with a multi-objective, multi-disciplinary example of DSO applied to civil aircraft wing design, an area where this kind of approach is becoming essential for companies to maintain their competitive edge. Our example considers the structure and weight of a transonic civil transport wing, its aerodynamic performance at cruise speed and its manufacturing costs. The goals are low drag and cost while holding weight and structural performance at acceptable levels. The constraints and performance metrics are modelled by a linked series of analysis codes, the most expensive of which is a CFD analysis of the aerodynamics using an Euler code with coupled boundary layer model. Structural strength and weight are assessed using semi-empirical schemes based on typical airframe company practice. Costing is carried out using a newly developed generative approach based on a hierarchical decomposition of the key structural elements of a typical machined and bolted wing-box assembly. To carry out the DSO process in the face of multiple competing goals, a recently developed multi-objective probability of improvement formulation is invoked along with stochastic process response surface models (Krigs). This approach both mitigates the significant run times involved in CFD computation and also provides an elegant way of balancing competing goals while still allowing the deployment of the whole range of single objective optimizers commonly available to design teams.
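The probability-of-improvement criterion over a Kriging surrogate can be sketched in a few lines. The 1-D objective below is a cheap stand-in for an expensive CFD code, and the kernel length-scale and sample points are arbitrary illustrative choices, not the paper's setup:

```python
import numpy as np
from scipy.stats import norm

def rbf(A, B, ell=0.5):
    """Squared-exponential covariance between 1-D input sets A and B."""
    d2 = (A[:, None, 0] - B[None, :, 0]) ** 2
    return np.exp(-0.5 * d2 / ell ** 2)

def kriging_predict(X_train, y_train, X_cand, noise=1e-8):
    """Posterior mean and std of a simple Kriging (Gaussian process) surrogate."""
    ybar = y_train.mean()
    K = rbf(X_train, X_train) + noise * np.eye(len(X_train))
    Ks = rbf(X_cand, X_train)
    mu = ybar + Ks @ np.linalg.solve(K, y_train - ybar)
    v = np.linalg.solve(K, Ks.T)
    var = np.clip(1.0 - np.sum(Ks * v.T, axis=1), 1e-12, None)
    return mu, np.sqrt(var)

def probability_of_improvement(mu, sigma, y_best, xi=0.01):
    """P[f(x) < y_best - xi] under the surrogate (minimization)."""
    return norm.cdf((y_best - xi - mu) / sigma)

# Cheap 1-D stand-in for an expensive objective (e.g. drag vs. one shape parameter)
f = lambda x: np.sin(3 * x) + 0.5 * x
X_train = np.array([[0.0], [0.7], [1.5], [2.3], [3.0]])
y_train = f(X_train).ravel()

X_cand = np.linspace(0.0, 3.0, 200).reshape(-1, 1)
mu, sigma = kriging_predict(X_train, y_train, X_cand)
pi = probability_of_improvement(mu, sigma, y_train.min())
x_next = float(X_cand[np.argmax(pi)])   # next design to run through the expensive code
print(round(x_next, 2))
```

Only the candidate maximizing the criterion is sent to the expensive analysis, which is how such surrogates mitigate CFD run times; the multi-objective formulation in the paper generalizes this single-objective criterion.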
A Tutorial on Adaptive Design Optimization
Myung, Jay I.; Cavagnaro, Daniel R.; Pitt, Mark A.
2013-01-01
Experimentation is ubiquitous in the field of psychology and fundamental to the advancement of its science, and one of the biggest challenges for researchers is designing experiments that can conclusively discriminate the theoretical hypotheses or models under investigation. The recognition of this challenge has led to the development of sophisticated statistical methods that aid in the design of experiments and that are within the reach of everyday experimental scientists. This tutorial paper introduces the reader to an implementable experimentation methodology, dubbed Adaptive Design Optimization, that can help scientists to conduct “smart” experiments that are maximally informative and highly efficient, which in turn should accelerate scientific discovery in psychology and beyond. PMID:23997275
Optimal interferometer designs for optical coherence tomography.
Rollins, A M; Izatt, J A
1999-11-01
We introduce a family of power-conserving fiber-optic interferometer designs for low-coherence reflectometry that use optical circulators, unbalanced couplers, and (or) balanced heterodyne detection. Simple design equations for optimization of the signal-to-noise ratio of the interferometers are expressed in terms of relevant signal and noise sources and measurable system parameters. We use the equations to evaluate the expected performance of the new configurations compared with that of the standard Michelson interferometer that is commonly used in optical coherence tomography (OCT) systems. The analysis indicates that improved sensitivity is expected for all the new interferometer designs, compared with the sensitivity of the standard OCT interferometer, under high-speed imaging conditions.
A user-interactive, response surface approximation-based framework for multidisciplinary design
NASA Astrophysics Data System (ADS)
Stelmack, Marc Andrew
Multidisciplinary Design Optimization (MDO) focuses on reducing the time and cost required to design complex engineering systems. One goal of MDO is to develop systematic approaches to design which are effective and reliable in achieving desired performance improvements. Also, the analysis of engineering systems is potentially expensive and time-consuming. Therefore, enhancing the efficiency of current design methods, in terms of the number of designs that must be evaluated, is desirable. A design framework, Concurrent Subspace Design (CSD), is proposed to address some issues that are prevalent in practical design settings. Previously considered methods of system approximation and optimization were extended to accommodate both discrete and continuous design variables. Additionally, the transition from one application to another was made to be as straightforward as possible in developing the associated software. Engineering design generally requires the expertise of numerous individuals, whose efforts must be focused on and coordinated in accordance with a consistent set of design goals. In CSD, system approximations in the form of artificial neural networks provide information pertaining to system performance characteristics. This information provides the basis for design decisions. The approximations enable different designers to operate concurrently and assess the impact of their decisions on the system design goals. The proposed framework was implemented to minimize the weight of an aircraft brake assembly. An existing industrial analysis tool was used to provide design information in that application. CSD was implemented in a user-interactive fashion that permitted human judgement to influence the design process and required minimal modifications to the analysis and design software. The implications of problem formulation and the role of human design experts in automated industrial design processes were explored in the context of that application. In the most
Designing optimal greenhouse gas monitoring networks for Australia
NASA Astrophysics Data System (ADS)
Ziehn, T.; Law, R. M.; Rayner, P. J.; Roff, G.
2016-01-01
Atmospheric transport inversion is commonly used to infer greenhouse gas (GHG) flux estimates from concentration measurements. The optimal location of ground-based observing stations that supply these measurements can be determined by network design. Here, we use a Lagrangian particle dispersion model (LPDM) in reverse mode together with a Bayesian inverse modelling framework to derive optimal GHG observing networks for Australia. This extends the network design for carbon dioxide (CO2) performed by Ziehn et al. (2014) to also minimise the uncertainty on the flux estimates for methane (CH4) and nitrous oxide (N2O), both individually and in a combined network using multiple objectives. Optimal networks are generated by adding up to five new stations to the base network, which is defined as two existing stations, Cape Grim and Gunn Point, in southern and northern Australia respectively. The individual networks for CO2, CH4 and N2O and the combined observing network show large similarities because the flux uncertainties for each GHG are dominated by regions of biologically productive land. There is little penalty, in terms of flux uncertainty reduction, for the combined network compared to individually designed networks. The location of the stations in the combined network is sensitive to variations in the assumed data uncertainty across locations. A simple assessment of economic costs has been included in our network design approach, considering both establishment and maintenance costs. Our results suggest that, while site logistics change the optimal network, there is only a small impact on the flux uncertainty reductions achieved with increasing network size.
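The greedy flavour of such network design can be sketched with a toy Bayesian inversion: the posterior flux covariance is (B⁻¹ + HᵀR⁻¹H)⁻¹, and stations are added one at a time to minimise its trace. All matrices below are synthetic placeholders, not actual transport sensitivities:

```python
import numpy as np

def posterior_uncertainty(H, B, r):
    """Trace of posterior flux covariance (B^-1 + H^T R^-1 H)^-1 given
    sensitivity rows H (one per station), prior B, and data variance r."""
    Hm = np.atleast_2d(H)
    A = np.linalg.inv(np.linalg.inv(B) + Hm.T @ Hm / r)
    return np.trace(A)

def greedy_network(H_all, B, r, base, n_add):
    """Add stations one at a time, each time picking the candidate that
    most reduces total posterior flux uncertainty."""
    chosen = list(base)
    for _ in range(n_add):
        rest = [i for i in range(len(H_all)) if i not in chosen]
        best = min(rest, key=lambda i: posterior_uncertainty(H_all[chosen + [i]], B, r))
        chosen.append(best)
    return chosen

rng = np.random.default_rng(1)
n_flux, n_sites = 6, 10
H_all = rng.uniform(0, 1, (n_sites, n_flux))   # toy transport sensitivities
B = np.eye(n_flux)                              # prior flux covariance
network = greedy_network(H_all, B, r=0.1, base=[0, 1], n_add=3)
print(network[:2], len(network))                # base stations kept, 5 stations total
```

A multi-species design like the one in the paper would sum (or weight) the uncertainty traces for CO2, CH4 and N2O inside the selection criterion, and an economic cost term can be added to it in the same way.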
SysSon - A Framework for Systematic Sonification Design
NASA Astrophysics Data System (ADS)
Vogt, Katharina; Goudarzi, Visda; Holger Rutz, Hanns
2015-04-01
SysSon is a research approach for introducing sonification systematically to a scientific community where it is not yet commonly used - e.g., climate science. Both technical and socio-cultural barriers have to be overcome. The approach was developed further with climate scientists, who participated in contextual inquiries, usability tests and a collaborative design workshop. These extensive user tests shaped the final software framework. As a frontend, a graphical user interface allows climate scientists to parametrize standard sonifications with their own data sets. Additionally, an interactive shell allows users competent in sound design to code new sonifications. The framework is a standalone desktop application, available as open source (for details see http://sysson.kug.ac.at/), and works with data in NetCDF format.
A Connectivity Framework for Social Information Systems Design in Healthcare
Kuziemsky, Craig E.; Andreev, Pavel; Benyoucef, Morad; O'Sullivan, Tracey; Jamaly, Syam
2016-01-01
Social information systems (SISs) will play a key role in healthcare systems’ transformation into collaborative patient-centered systems that support care delivery across the entire continuum of care. SISs enable the development of collaborative networks and facilitate relationships to integrate people and processes across time and space. However, we believe that a “connectivity” issue, which refers to the scope and extent of system requirements for a SIS, is a significant challenge of SIS design. This paper’s contribution is the development of the Social Information System Connectivity Framework for supporting SIS design in healthcare. The framework has three parts. First, it defines the structure of a SIS as a set of social triads. Second, it identifies six dimensions that represent the behaviour of a SIS. Third, it proposes the Social Information System Connectivity Factor as our approximation of the extent of connectivity and degree of complexity in a SIS. PMID:28269869
Final Project Report: A Polyhedral Transformation Framework for Compiler Optimization
Sadayappan, Ponnuswamy; Rountev, Atanas
2015-06-15
The project developed the polyhedral compiler transformation module PolyOpt/Fortran in the ROSE compiler framework. PolyOpt/Fortran performs automated transformation of affine loop nests within FORTRAN programs for enhanced data locality and parallel execution. A FORTRAN version of the Polybench library was also developed by the project. A third development was a dynamic analysis approach to gauge vectorization potential within loops of programs; software (DDVec) for automated instrumentation and dynamic analysis of programs was developed.
Integration framework for design information of electromechanical systems
NASA Astrophysics Data System (ADS)
Qureshi, Sohail Mehboob
The objective of this research is to develop a framework that can be used to provide an integrated view of electromechanical system design information. The framework is intended to provide a platform where various standard and pseudo standard information models such as STEP and IBIS can be integrated to provide an integrated view of design information beyond just part numbers, CAD drawings, or some specific geometry. A database application can make use of this framework to provide reuse of design information fragments including geometry, function, behavior, design procedures, performance specification, design rationale, project management, product characteristics, and configuration and version. An "Integration Core Model" is developed to provide the basis for the integration framework, and also facilitate integration of product and process data for the purpose of archiving integrated design history. There are two major subdivisions of the integration core model: product core model providing the high level structure needed to associate process information to the product data, and process core model providing the generic process information that is needed to capture and organize process information. The process core model is developed using a hybrid of structure-oriented and process-oriented approaches to process modeling. Using such a scheme the process core model is able to represent information such as hierarchies of processes, logical and temporal relationships between various design activities, and relationships between activities and the product data at various levels of abstraction. Based upon the integration core model, an integration methodology is developed to provide a systematic way of integrating various information models. Mapping theorems have been developed to methodically point out the problems that may be encountered during the integration of two information models. The integration core model is validated through a case study. Design information
Design Principles for Covalent Organic Frameworks in Energy Storage Applications.
Alahakoon, Sampath B; Thompson, Christina M; Occhialini, Gino; Smaldone, Ronald Alexander
2017-03-16
Covalent organic frameworks (COFs) are an exciting class of microporous materials that have been explored as energy storage materials for more than a decade. This review discusses efforts to develop these materials for applications in gas and electrical power storage. It also discusses some of the design strategies for developing the gas sorption properties of COFs and mechanistic studies of their formation.
Design and architecture of the Mars relay network planning and analysis framework
NASA Technical Reports Server (NTRS)
Cheung, K. M.; Lee, C. H.
2002-01-01
In this paper we describe the design and architecture of the Mars Network planning and analysis framework, which supports the generation and validation of efficient planning and scheduling strategies. The goals are to minimize transmit time, minimize delay time, and/or maximize network throughput. The proposed framework requires (1) a client-server architecture to support interactive, batch, Web, and distributed analysis and planning applications for the relay network analysis scheme; (2) a high-fidelity modeling and simulation environment that expresses spacecraft-to-spacecraft and spacecraft-to-Earth-station link capabilities as time-varying resources, along with spacecraft activities, link priority, Solar System dynamic events, the laws of orbital mechanics, and other limiting factors such as spacecraft power and thermal constraints; and (3) an optimization methodology that casts the resource and constraint models into a standard linear and nonlinear constrained optimization problem that lends itself to commercial off-the-shelf (COTS) planning and scheduling algorithms.
A Framework for Robust Multivariable Optimization of Integrated Circuits in Space Applications
NASA Technical Reports Server (NTRS)
DuMonthier, Jeffrey; Suarez, George
2013-01-01
Application Specific Integrated Circuit (ASIC) design for space applications involves the multiple challenges of maximizing performance, minimizing power and ensuring reliable operation in extreme environments. This is a complex multidimensional optimization problem which must be solved early in the development cycle of a system, because the time required for testing and qualification severely limits opportunities to modify and iterate. Manual design techniques, which generally involve simulation at one or a small number of corners with a very limited set of simultaneously variable parameters in order to keep the problem tractable, are inefficient and not guaranteed to achieve the best possible results within the performance envelope defined by the process and environmental requirements. What is required is a means to automate design parameter variation, allow the designer to specify operational constraints and performance goals, and analyze the results in a way which facilitates identifying the tradeoffs defining the performance envelope over the full set of process and environmental corner cases. The system developed by the Mixed Signal ASIC Group (MSAG) at the Goddard Space Flight Center is implemented as a framework of software modules, templates and function libraries. It integrates CAD tools and a mathematical computing environment, and can be customized for new circuit designs with only a modest amount of effort, as most common tasks are already encapsulated. Customization is required for simulation test benches to determine performance metrics and for cost function computation. Templates provide a starting point for both, while toolbox functions minimize the code required. Once a test bench has been coded to optimize a particular circuit, it is also used to verify the final design. The combination of test bench and cost function can then serve as a template for similar circuits or be re-used to migrate the design to different processes by re-running it with the
Multidisciplinary design optimization for sonic boom mitigation
NASA Astrophysics Data System (ADS)
Ozcer, Isik A.
product design. The simulation tools are used to optimize three geometries for sonic boom mitigation. The first is a simple axisymmetric shape to be used as a generic nose component, the second is a delta wing with lift, and the third is a real aircraft with nose and wing optimization. The objectives are to minimize the pressure impulse or the peak pressure in the sonic boom signal, while keeping the drag penalty within feasible limits. The design parameters for the meridian profile of the nose shape are the lengths and the half-cone angles of the linear segments that make up the profile. The design parameters for the lifting wing are the dihedral angle, angle of attack, and non-linear span-wise twist and camber distribution. The test-bed aircraft is the modified F-5E aircraft built by Northrop Grumman, designated the Shaped Sonic Boom Demonstrator. This aircraft is fitted with an optimized axisymmetric nose, and the wings are optimized to demonstrate sonic boom mitigation for a real aircraft. The final results predict a 42% reduction in bow shock strength, a 17% reduction in peak Δp, a 22% reduction in pressure impulse, a 10% reduction in footprint size, a 24% reduction in inviscid drag, and no loss in lift for the optimized aircraft. Optimization is carried out using response surface methodology, and the design matrices are determined using standard DoE techniques for quadratic response modeling.
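The response-surface step described in this abstract (fit a quadratic model to design-of-experiments samples of the objective, then optimize the cheap surrogate instead of the expensive simulation) can be sketched as follows. This is a generic illustration, not the authors' code: the bowl-shaped objective and the design bounds are invented stand-ins for the CFD-derived boom metrics.

```python
import numpy as np

def quad_terms(X):
    """Full quadratic model basis in two design variables."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1 * x2, x1**2, x2**2])

rng = np.random.default_rng(0)
X = rng.uniform(-2.0, 2.0, size=(20, 2))          # DoE sample (random here)
y = (X[:, 0] - 1.0) ** 2 + (X[:, 1] + 0.5) ** 2   # stand-in "boom metric"

# Least-squares fit of the quadratic response surface coefficients.
beta, *_ = np.linalg.lstsq(quad_terms(X), y, rcond=None)

# Optimize the cheap surrogate on a dense grid over the design box.
g = np.linspace(-2.0, 2.0, 201)
G1, G2 = np.meshgrid(g, g)
cand = np.column_stack([G1.ravel(), G2.ravel()])
x_star = cand[np.argmin(quad_terms(cand) @ beta)]  # near (1.0, -0.5)
```

In practice the sample sites would come from a structured DoE (e.g. a central composite design) rather than random draws, and each `y` value would be one expensive simulation.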
Design and applications of a multimodality image data warehouse framework.
Wong, Stephen T C; Hoo, Kent Soo; Knowlton, Robert C; Laxer, Kenneth D; Cao, Xinhau; Hawkins, Randall A; Dillon, William P; Arenson, Ronald L
2002-01-01
A comprehensive data warehouse framework is needed that encompasses imaging and non-imaging information in support of disease management and research. The authors propose such a framework, describe general design principles and system architecture, and illustrate a multimodality neuroimaging data warehouse system implemented for clinical epilepsy research. The data warehouse system is built on top of a picture archiving and communication system (PACS) environment and applies an iterative object-oriented analysis and design (OOAD) approach and recognized data interface and design standards. The implementation is based on a Java CORBA (Common Object Request Broker Architecture) and Web-based architecture that separates the graphical user interface presentation, data warehouse business services, data staging area, and backend source systems into distinct software layers. To illustrate the practicality of the data warehouse system, the authors describe two distinct biomedical applications: clinical diagnostic workup of multimodality neuroimaging cases, and research data analysis and decision-threshold determination for seizure focus lateralization. The image data warehouse framework can be modified and generalized for new application domains.
ODIN: Optimal design integration system. [reusable launch vehicle design]
NASA Technical Reports Server (NTRS)
Glatt, C. R.; Hague, D. S.
1975-01-01
This report summarizes the Optimal Design Integration (ODIN) system as it exists at Langley Research Center. The ODIN system, its executive program, and its database concepts are discussed. Two examples illustrate the capabilities of the system that have been exploited. Appended to the report are a summary of abstracts for the ODIN library programs and a description of the use of the executive program in linking the library programs.
Database Design for Structural Analysis and Design Optimization.
1984-10-01
C.C. Wu, Applied-Optimal Design Laboratory. ... relational algebraic operations such as PROJECT, JOIN, and SELECT can be used to form new relations. Figure 2.1 shows a typical relational model of data ... The data set contains the definition of mathematical 3-D surfaces of up to third order, to which lines and grids may be projected. The surfaces are defined in
A Human Factors Framework for Payload Display Design
NASA Technical Reports Server (NTRS)
Dunn, Mariea C.; Hutchinson, Sonya L.
1998-01-01
During missions to space, one charge of the astronaut crew is to conduct research experiments. These experiments, referred to as payloads, typically are controlled by computers. Crewmembers interact with payload computers by using visual interfaces or displays. To enhance the safety, productivity, and efficiency of crewmember interaction with payload displays, particular attention must be paid to the usability of these displays. Enhancing display usability requires adoption of a design process that incorporates human factors engineering principles at each stage. This paper presents a proposed framework for incorporating human factors engineering principles into the payload display design process.
NASA Astrophysics Data System (ADS)
Tran, T.
With the onset of the SmallSat era, the RSO catalog is expected to see continuing growth in the near future. This presents a significant challenge to the current sensor tasking of the SSN. The Air Force is in need of a sensor tasking system that is robust, efficient, scalable, and able to respond in real-time to interruptive events that can change the tracking requirements of the RSOs. Furthermore, the system must be capable of using processed data from heterogeneous sensors to improve tasking efficiency. The SSN sensor tasking can be regarded as an economic problem of supply and demand: the amount of tracking data needed by each RSO represents the demand side while the SSN sensor tasking represents the supply side. As the number of RSOs to be tracked grows, demand exceeds supply. The decision-maker is faced with the problem of how to allocate resources in the most efficient manner. Braxton recently developed a framework called Multi-Objective Resource Optimization using Genetic Algorithm (MOROUGA) as one of its modern COTS software products. This optimization framework took advantage of the maturing technology of evolutionary computation in the last 15 years. This framework was applied successfully to address the resource allocation of an AFSCN-like problem. In any resource allocation problem, there are five key elements: (1) the resource pool, (2) the tasks using the resources, (3) a set of constraints on the tasks and the resources, (4) the objective functions to be optimized, and (5) the demand levied on the resources. In this paper we explain in detail how the design features of this optimization framework are directly applicable to address the SSN sensor tasking domain. We also discuss our validation effort as well as present the result of the AFSCN resource allocation domain using a prototype based on this optimization framework.
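The evolutionary-computation machinery referenced above (MOROUGA) is proprietary, but the general pattern it applies, a population of candidate task-to-resource assignments evolved by selection, crossover, and mutation against capacity constraints, can be sketched as follows. The demand and capacity numbers are hypothetical, and the fitness function is a simple penalty formulation, not Braxton's.

```python
import random

def fitness(assign, demand, capacity):
    """Reward satisfied demand; penalize capacity overruns on each resource."""
    load = [0.0] * len(capacity)
    for task, r in enumerate(assign):
        load[r] += demand[task]
    over = sum(max(0.0, load[r] - capacity[r]) for r in range(len(capacity)))
    return sum(demand) - 2.0 * over

def evolve(demand, capacity, pop_size=40, gens=200, seed=1):
    rng = random.Random(seed)
    n, m = len(demand), len(capacity)
    pop = [[rng.randrange(m) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(gens):
        # Elitist selection: keep the better half, breed the rest.
        pop.sort(key=lambda a: fitness(a, demand, capacity), reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)
            child = p1[:cut] + p2[cut:]            # one-point crossover
            if rng.random() < 0.3:                 # mutation: reassign one task
                child[rng.randrange(n)] = rng.randrange(m)
            children.append(child)
        pop = survivors + children
    return max(pop, key=lambda a: fitness(a, demand, capacity))

demand = [3.0, 2.0, 2.0, 1.0]   # tracking time needed per object (toy units)
capacity = [4.0, 4.0]           # available time per sensor
best = evolve(demand, capacity)
```

A real tasking problem would add the other elements the abstract lists, notably task constraints and multiple objective functions, which is where a multi-objective GA earns its keep.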
A method for nonlinear optimization with discrete design variables
NASA Technical Reports Server (NTRS)
Olsen, Gregory R.; Vanderplaats, Garret N.
1987-01-01
A numerical method is presented for the solution of nonlinear discrete optimization problems. The applicability of discrete optimization to engineering design is discussed, and several standard structural optimization problems are solved using discrete design variables. The method uses approximation techniques to create subproblems suitable for linear mixed-integer programming methods. The method employs existing software for continuous optimization and integer programming.
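The approach described above, linearize the nonlinear problem about the current design and hand the resulting discrete subproblem to an integer-programming solver, can be sketched on a two-bar sizing toy problem. The forces, lengths, stress limit, and section catalog are hypothetical, and exhaustive enumeration stands in for a mixed-integer programming solver.

```python
import itertools

F = [50.0, 30.0]                      # member forces (toy values)
L = [1.0, 1.5]                        # member lengths
smax = 25.0                           # allowable stress
catalog = [1.0, 1.5, 2.0, 2.5, 3.0, 4.0]   # available cross-section areas

def weight(A):
    return sum(a * l for a, l in zip(A, L))

def solve_linearized(A0):
    """One subproblem: the stress F/A is linearized about A0, and the
    discrete subproblem is solved exactly (brute force stands in for MIP)."""
    best = None
    for A in itertools.product(catalog, repeat=len(F)):
        feasible = all(
            F[i] / A0[i] - F[i] / A0[i] ** 2 * (A[i] - A0[i]) <= smax
            for i in range(len(F)))
        if feasible and (best is None or weight(A) < weight(best)):
            best = A
    return best

# Sequential approximate optimization: re-linearize at each new design.
A = (4.0, 4.0)                        # start from the heaviest design
for _ in range(10):
    A_next = solve_linearized(A)
    if A_next == A:                   # fixed point: linearization is exact here
        break
    A = A_next
```

On this example the iteration settles on areas (2.0, 1.5), which is also the true discrete optimum (stresses of exactly 25 and 20 against the limit of 25).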
Design, optimization, and control of tensegrity structures
NASA Astrophysics Data System (ADS)
Masic, Milenko
The contributions of this dissertation may be divided into four categories. The first category involves developing a systematic form-finding method for general and symmetric tensegrity structures. As an extension of the available results, different shape constraints are incorporated in the problem. Methods for treatment of these constraints are considered and proposed. A systematic formulation of the form-finding problem for symmetric tensegrity structures is introduced, and it uses the symmetry to reduce both the number of equations and the number of variables in the problem. The equilibrium analysis of modular tensegrities exploits their peculiar symmetry. The tensegrity similarity transformation completes the contributions in the area of enabling tools for tensegrity form-finding. The second group of contributions develops the methods for optimal mass-to-stiffness-ratio design of tensegrity structures. This technique represents the state-of-the-art for the static design of tensegrity structures. It is an extension of the results available for the topology optimization of truss structures. Besides guaranteeing that the final design satisfies the tensegrity paradigm, the problem constrains the structure from different modes of failure, which makes it very general. The open-loop control of the shape of modular tensegrities is the third contribution of the dissertation. This analytical result offers a closed form solution for the control of the reconfiguration of modular structures. Applications range from the deployment and stowing of large-scale space structures to the locomotion-inducing control for biologically inspired structures. The control algorithm is applicable regardless of the size of the structures, and it represents a very general result for a large class of tensegrities. Controlled deployments of large-scale tensegrity plates and tensegrity towers are shown as examples that demonstrate the full potential of this reconfiguration strategy. The last
Automated design of multiphase space missions using hybrid optimal control
NASA Astrophysics Data System (ADS)
Chilan, Christian Miguel
A modern space mission is assembled from multiple phases or events such as impulsive maneuvers, coast arcs, thrust arcs and planetary flybys. Traditionally, a mission planner would resort to intuition and experience to develop a sequence of events for the multiphase mission and to find the space trajectory that minimizes propellant use by solving the associated continuous optimal control problem. This strategy, however, will most likely yield a sub-optimal solution, as the problem is difficult for several reasons. For example, the number of events in the optimal mission structure is not known a priori and the system equations of motion change depending on what event is current. In this work a framework for the automated design of multiphase space missions is presented using hybrid optimal control (HOC). The method developed uses two nested loops: an outer-loop that handles the discrete dynamics and finds the optimal mission structure in terms of the categorical variables, and an inner-loop that performs the optimization of the corresponding continuous-time dynamical system and obtains the required control history. Genetic algorithms (GA) and direct transcription with nonlinear programming (NLP) are introduced as methods of solution for the outer-loop and inner-loop problems, respectively. Automation of the inner-loop, continuous optimal control problem solver, required two new technologies. The first is a method for the automated construction of the NLP problems resulting from the use of a direct solver for systems with different structures, including different numbers of categorical events. The method assembles modules, consisting of parameters and constraints appropriate to each event, sequentially according to the given mission structure. The other new technology is a robust initial guess generator required by the inner-loop NLP problem solver. Two new methods were developed for cases including low-thrust trajectories. The first method, based on GA
Probabilistic Finite Element Analysis & Design Optimization for Structural Designs
NASA Astrophysics Data System (ADS)
Deivanayagam, Arumugam
This study focuses on incorporating the probabilistic nature of material properties (Kevlar® 49) into the existing deterministic finite element analysis (FEA) of a fabric-based engine containment system through Monte Carlo simulations (MCS), and on implementing probabilistic analysis in engineering design through Reliability Based Design Optimization (RBDO). First, the emphasis is on experimental data analysis, focusing on probabilistic distribution models that characterize the randomness associated with the experimental data. The material properties of Kevlar® 49 are modeled using experimental data analysis and implemented along with an existing spiral modeling scheme (SMS) and a user-defined constitutive model (UMAT) for fabric-based engine containment simulations in LS-DYNA. MCS of the model are performed to observe the failure pattern and exit velocities of the models, and the solutions are compared with NASA experimental tests and deterministic results. MCS with probabilistic material data give a better perspective on the results than a single deterministic simulation does. The next part of the research implements the probabilistic material properties in engineering design. The main aim of structural design is to obtain optimal solutions. However, even though a deterministic optimization yields cost-effective structures, the design becomes highly unreliable if the uncertainty that may be associated with the system (material properties, loading, etc.) is not represented in the solution process. A reliable and optimal solution can be obtained by performing reliability optimization along with the deterministic optimization, which is RBDO. In the RBDO problem formulation, reliability constraints are considered in addition to structural performance constraints. This part of the research starts with an introduction to reliability analysis, such as first-order and second-order reliability analysis, followed by the simulation technique that
Inter occasion variability in individual optimal design.
Kristoffersson, Anders N; Friberg, Lena E; Nyberg, Joakim
2015-12-01
Inter occasion variability (IOV) is of importance to consider in the development of a design where individual pharmacokinetic or pharmacodynamic parameters are of interest. IOV may adversely affect the precision of maximum a posteriori (MAP) estimated individual parameters, yet the influence of inclusion of IOV in optimal design for estimation of individual parameters has not been investigated. In this work two methods of including IOV in the maximum a posteriori Fisher information matrix (FIM_MAP) are evaluated: (i) MAPocc, where the IOV is included as a fixed effect deviation per occasion and individual, and (ii) POPocc, where the IOV is included as an occasion random effect. Sparse sampling schedules were designed for two test models and compared to a scenario where IOV is ignored, either by omitting known IOV (Omit) or by mimicking a situation where unknown IOV has inflated the IIV (Inflate). Accounting for IOV in the FIM_MAP markedly affected the designs compared to ignoring IOV and, as evaluated by stochastic simulation and estimation, resulted in superior precision of the individual parameters. In addition, MAPocc and POPocc accurately predicted precision and shrinkage. For the investigated designs, the MAPocc method was on average slightly superior to POPocc and was less computationally intensive.
Incorporating User Preferences Within an Optimal Traffic Flow Management Framework
NASA Technical Reports Server (NTRS)
Rios, Joseph Lucio; Sheth, Kapil S.; Guiterrez-Nolasco, Sebastian Armardo
2010-01-01
The effectiveness of future decision support tools for Traffic Flow Management in the National Airspace System will depend on two major factors: computational burden and collaboration. Previous research has focused on these two aspects separately, without considering their interaction. In this paper, their explicit combination is examined. It is shown that when user preferences are incorporated into an optimal approach to scheduling, runtime is not adversely affected. A benefit-cost ratio is used to measure the influence of user preferences on an optimal solution. This metric shows that user preferences can be accommodated without inordinately degrading the overall system delay. Specifically, incorporating user preferences increases delay in proportion to the increase in user satisfaction.
Optimal patch code design via device characterization
NASA Astrophysics Data System (ADS)
Wu, Wencheng; Dalal, Edul N.
2012-01-01
In many color measurement applications, such as those for color calibration and profiling, a "patch code" has been used successfully for job identification and automation to reduce operator errors. A patch code is similar to a barcode, but is intended primarily for use in measurement devices that cannot read barcodes due to limited spatial resolution, such as spectrophotometers. There is an inherent tradeoff between decoding robustness and the number of code levels available for encoding. Previous methods have attempted to address this tradeoff, but those solutions have been sub-optimal. In this paper, we propose a method to design optimal patch codes via device characterization. The tradeoff is optimized in terms of printing and measurement effort, and of decoding robustness against noise from the printing and measurement devices. Effort is drastically reduced relative to previous methods because print-and-measure is minimized through modeling and the use of existing printer profiles. Decoding robustness is improved by distributing the code levels in CIELAB space rather than in CMYK space.
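The core idea, spacing code levels uniformly in the measured quantity (e.g. CIE L*) rather than in device values, can be sketched as follows. The printer tone curve here is a hypothetical stand-in for the device characterization the paper derives from printer profiles; only the inversion step is illustrated.

```python
import numpy as np

def equalize_levels(response, n_levels, n_grid=2001):
    """Place patch-code levels so the device *outputs* (e.g. CIE L*) are
    equally spaced, giving uniform decoding margins despite nonlinearity."""
    c = np.linspace(0.0, 1.0, n_grid)
    L = response(c)                    # assumed monotone device response
    if L[0] > L[-1]:                   # np.interp needs an increasing abscissa
        L, c = L[::-1], c[::-1]
    targets = np.linspace(L[0], L[-1], n_levels)
    return np.interp(targets, L, c)    # invert the response by interpolation

# Hypothetical printer tone curve: L* falls nonlinearly with ink coverage.
tone = lambda c: 95.0 - 75.0 * np.asarray(c) ** 0.6

levels = equalize_levels(tone, 5)      # device inputs to print as patches
gaps = np.diff(tone(levels))           # resulting L* separations, all equal
```

Levels equalized in device (CMYK) space would instead bunch together in L* wherever the tone curve is flat, which is exactly the robustness loss the paper's CIELAB-space distribution avoids.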
Global optimization methods for engineering design
NASA Technical Reports Server (NTRS)
Arora, Jasbir S.
1990-01-01
The problem is to find a global minimum for Problem P. Necessary and sufficient conditions are available for local optimality; however, a global solution can be assured only under the assumption of convexity of the problem. If the constraint set S is compact and the cost function is continuous on it, the existence of a global minimum is guaranteed. However, since no global optimality conditions are available, a global solution can be found only by an exhaustive search to satisfy the Inequality. The exhaustive search can be organized so that the entire design space need not be searched for the solution, which somewhat reduces the computational burden. It is concluded that the zooming algorithm for global optimization appears to be a good alternative to stochastic methods, though more testing is needed. A general, robust, and efficient local minimizer is required; IDESIGN, which is based on a sequential quadratic programming algorithm, was used in all numerical calculations. Since the feasible set keeps shrinking, a good algorithm to find an initial feasible point is also required; such algorithms need to be developed and evaluated.
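A much-simplified sketch of the zooming idea: once a local minimum with cost f* is found, only solutions that beat the incumbent level are accepted, so the search "zooms" the acceptable cost downward until no start succeeds. The multimodal test function is invented, and a crude pattern search stands in for an SQP local minimizer such as IDESIGN.

```python
import random

def f(x):
    """Two local minima near x = +1 and x = -1; the global one is on the left."""
    return (x * x - 1.0) ** 2 + 0.3 * x

def local_min(f, x0, step=0.1, tol=1e-8):
    """Crude 1-D pattern search: try a step each way, else shrink the step."""
    x = x0
    while step > tol:
        if f(x + step) < f(x):
            x += step
        elif f(x - step) < f(x):
            x -= step
        else:
            step *= 0.5
    return x

def zoom_global_min(f, lo, hi, margin=1e-6, max_fail=30, seed=3):
    """Accept a new local solution only if it beats the zoomed cost level;
    stop after repeated consecutive failures."""
    rng = random.Random(seed)
    best = local_min(f, rng.uniform(lo, hi))
    fails = 0
    while fails < max_fail:
        x = local_min(f, rng.uniform(lo, hi))
        if f(x) < f(best) - margin:   # beats the current cost level: zoom down
            best, fails = x, 0
        else:
            fails += 1
    return best

x_global = zoom_global_min(f, -2.0, 2.0)   # lands in the left (global) basin
```

In the constrained setting the abstract describes, the zoom is imposed as an extra constraint f(x) ≤ γ f* on the next local solve rather than as an acceptance test, which is why the feasible set keeps shrinking.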
An enhanced BSIM modeling framework for self-heating aware circuit design
NASA Astrophysics Data System (ADS)
Schleyer, M.; Leuschner, S.; Baumgartner, P.; Mueller, J.-E.; Klar, H.
2014-11-01
This work proposes a modeling framework to enhance the industry-standard BSIM4 MOSFET models with capabilities for coupled electro-thermal simulations. An automated simulation environment extracts thermal information from model data as provided by the semiconductor foundry. The standard BSIM4 model is enhanced with a Verilog-A based wrapper module, adding thermal nodes which can be connected to a thermal-equivalent RC network. The proposed framework allows a fully automated extraction process based on the netlist of the top-level design and the model library. A numerical analysis tool is used to control the extraction flow and to obtain all required parameters. The framework is used to model self-heating effects on a fully integrated class A/AB power amplifier (PA) designed in a standard 65 nm CMOS process. The PA is driven with +30 dBm output power, leading to an average temperature rise of approximately 40 °C over ambient temperature.
Optimal Designs of Staggered Dean Vortex Micromixers
Chen, Jyh Jian; Chen, Chun Huei; Shie, Shian Ruei
2011-01-01
A novel parallel laminar micromixer with two-dimensional staggered Dean vortex structures is optimized and fabricated in our study. Dean vortices induced by centrifugal forces in curved rectangular channels cause fluids to develop secondary flows. The split-and-recombination (SAR) structures of the flow channels and the impinging effects reduce the diffusion distance of the two fluids. Three different designs of a curved channel micromixer are introduced to evaluate the mixing performance of the designed micromixer. Mixing performance is demonstrated by means of a pH indicator using an optical microscope and by fluorescent particles via a confocal microscope at flow rates corresponding to Reynolds numbers (Re) ranging from 0.5 to 50. The comparison between the experimental data and numerical results shows very reasonable agreement. At a Re of 50, the mixing length at the sixth segment, corresponding to a downstream distance of 21.0 mm, can be achieved in a distance 4 times shorter than when Re equals 1. An optimization of this micromixer is performed with two geometric parameters: the angle between the lines from the center to the two intersections of two consecutive curved channels, θ, and the angle between the two lines through the centers of three consecutive curved channels, ϕ. We find that the maximal mixing index corresponds to the maximal value of the sum of θ and ϕ, which equals 139.82°. PMID:21747691
Ecohydrology frameworks for green infrastructure design and ecosystem service provision
NASA Astrophysics Data System (ADS)
Pavao-Zuckerman, M.; Knerl, A.; Barron-Gafford, G.
2014-12-01
Urbanization is a dominant form of landscape change that affects the structure and function of ecosystems and alters control points in biogeochemical and hydrologic cycles. Green infrastructure (GI) has been proposed as a solution to many urban environmental challenges and may be a way to manage biogeochemical control points. Despite this promise, there has been relatively little empirical work evaluating the efficacy of GI, the relationships between design and function, and the ability of GI to provide ecosystem services in cities. This work has been driven by goals of adapting GI approaches to dryland cities and of harvesting rain and storm water to provide ecosystem services related to storm water management and urban heat island mitigation, as well as other co-benefits. We will present a modification of ecohydrologic theory for guiding the design and function of green infrastructure for dryland systems that highlights how GI functions in the context of the Trigger-Transfer-Reserve-Pulse (TTRP) dynamic framework. Here we also apply this TTRP framework to observations of established street-scape green infrastructure in Tucson, AZ, and an experimental installation of green infrastructure basins on the campus of Biosphere 2 (Oracle, AZ), where we have been measuring plant performance and soil biogeochemical functions. We found variable sensitivity of microbial activity, soil respiration, N-mineralization, photosynthesis and respiration that was mediated both by elements of basin design (soil texture and composition, choice of surface mulches) and by antecedent precipitation inputs and soil moisture conditions. The adapted TTRP framework and field studies suggest that there are strong connections between design and function that have implications for stormwater management and ecosystem service provision in dryland cities.
CFD based draft tube hydraulic design optimization
NASA Astrophysics Data System (ADS)
McNabb, J.; Devals, C.; Kyriacou, S. A.; Murry, N.; Mullins, B. F.
2014-03-01
The draft tube design of a hydraulic turbine, particularly in low to medium head applications, plays an important role in determining the efficiency and power characteristics of the overall machine, since an important proportion of the available energy, being in kinetic form leaving the runner, needs to be recovered by the draft tube into static head. For large units, these efficiency and power characteristics can equate to large sums of money when considering the anticipated selling price of the energy produced over the machine's life-cycle. This same draft tube design is also a key factor in determining the overall civil costs of the powerhouse, primarily in excavation and concreting, which can amount to similar orders of magnitude as the price of the energy produced. Therefore, there is a need to find the optimum compromise between these two conflicting requirements. In this paper, an elaborate approach is described for dealing with this optimization problem. First, the draft tube's detailed geometry is defined as a function of a comprehensive set of design parameters (about 20 of which a subset is allowed to vary during the optimization process) and are then used in a non-uniform rational B-spline based geometric modeller to fully define the wetted surfaces geometry. Since the performance of the draft tube is largely governed by 3D viscous effects, such as boundary layer separation from the walls and swirling flow characteristics, which in turn governs the portion of the available kinetic energy which will be converted into pressure, a full 3D meshing and Navier-Stokes analysis is performed for each design. What makes this even more challenging is the fact that the inlet velocity distribution to the draft tube is governed by the runner at each of the various operating conditions that are of interest for the exploitation of the powerhouse. In order to determine these inlet conditions, a combined steady-state runner and an initial draft tube analysis, using a
A Framework for Designing Scaffolds That Improve Motivation and Cognition
Belland, Brian R.; Kim, ChanMin; Hannafin, Michael J.
2013-01-01
A problematic, yet common, assumption among educational researchers is that when teachers provide authentic, problem-based experiences, students will automatically be engaged. Evidence indicates that this is often not the case. In this article, we discuss (a) problems with ignoring motivation in the design of learning environments, (b) problem-based learning and scaffolding as one way to help, (c) how scaffolding has strayed from what was originally equal parts motivational and cognitive support, and (d) a conceptual framework for the design of scaffolds that can enhance motivation as well as cognitive outcomes. We propose guidelines for the design of computer-based scaffolds to promote motivation and engagement while students are solving authentic problems. Remaining questions and suggestions for future research are then discussed. PMID:24273351
Seed Design Framework for Mapping SOLiD Reads
NASA Astrophysics Data System (ADS)
Noé, Laurent; Gîrdea, Marta; Kucherov, Gregory
The advent of high-throughput sequencing technologies constituted a major advance in genomic studies, offering new prospects in a wide range of applications. We propose a rigorous and flexible algorithmic solution to mapping SOLiD color-space reads to a reference genome. The solution relies, on the one hand, on an advanced method of seed design that uses a faithful probabilistic model of read matches and, on the other hand, on a novel seeding principle especially adapted to read mapping. Our method can handle both lossy and lossless frameworks and is able to distinguish, at the level of seed design, between SNPs and reading errors. We illustrate our approach with several seed designs and demonstrate their efficiency.
NASA Technical Reports Server (NTRS)
Hyland, D. C.; Bernstein, D. S.
1987-01-01
The underlying philosophy and motivation of the optimal projection/maximum entropy (OP/ME) stochastic modeling and reduced control design methodology for high order systems with parameter uncertainties are discussed. The OP/ME design equations for reduced-order dynamic compensation including the effect of parameter uncertainties are reviewed. The application of the methodology to several Large Space Structures (LSS) problems of representative complexity is illustrated.
Three-dimensional hemodynamic design optimization of stents for cerebral aneurysms.
Lee, Chang-Joon; Srinivas, Karkenahalli; Qian, Yi
2014-03-01
Flow-diverting stents occlude aneurysms by diverting blood flow from entering the aneurysm sac. Their effectiveness is determined by the thrombus formation rate, which depends greatly on stent design. The aim of this study was to provide a general framework for efficient stent design using design optimization methods, with stent hemodynamics as the starting point. The Kriging method was used for the design optimization. Three different cases of idealized stents were considered, and 40-60 samples from each case were evaluated using computational fluid dynamics. Using maximum velocity and vorticity reduction as objective functions, the optimized designs were identified from the samples. A number of optimized stent designs were found, revealing that a combination of high pore density and thin struts is desirable. Additionally, distributing struts near the proximal end of the aneurysm neck was found to be effective. The success of the methods and framework devised in this study offers the future possibility of incorporating other disciplines to carry out multidisciplinary design optimization.
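The Kriging step, interpolating a small set of expensive CFD evaluations and then querying the cheap surrogate densely to pick the best design, can be sketched as below. This is a didactic simple-kriging-style interpolator with a fixed Gaussian correlation model (no maximum-likelihood tuning of the correlation parameter); the smooth "vorticity" metric on a grid of scaled design parameters is an invented stand-in for the study's CFD data.

```python
import numpy as np

def kriging_predict(X, y, Xq, theta=10.0, nugget=1e-8):
    """Simple-kriging-style predictor with a Gaussian correlation model."""
    def corr(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-theta * d2)
    mu = y.mean()
    w = np.linalg.solve(corr(X, X) + nugget * np.eye(len(X)), y - mu)
    return mu + corr(Xq, X) @ w

# Hypothetical CFD samples: a smooth metric on a 6x6 grid of design params.
g = np.linspace(0.0, 1.0, 6)
G1, G2 = np.meshgrid(g, g)
X = np.column_stack([G1.ravel(), G2.ravel()])
y = (X[:, 0] - 0.3) ** 2 + (X[:, 1] - 0.7) ** 2   # true optimum at (0.3, 0.7)

# Query the cheap surrogate densely and take the best predicted design.
q = np.linspace(0.0, 1.0, 51)
Q1, Q2 = np.meshgrid(q, q)
Xq = np.column_stack([Q1.ravel(), Q2.ravel()])
pred = kriging_predict(X, y, Xq)
x_best = Xq[np.argmin(pred)]
```

A production Kriging code would also estimate the correlation lengths from the data and report the predictor's variance, which is what makes adaptive sampling of new CFD points possible.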
NASA Astrophysics Data System (ADS)
Poirier, Vincent
Mesh deformation schemes play an important role in numerical aerodynamic optimization. As the aerodynamic shape changes, the computational mesh must adapt to conform to the deformed geometry. In this work, an extension to an existing fast and robust Radial Basis Function (RBF) mesh movement scheme is presented. Using a reduced set of surface points to define the mesh deformation increases the efficiency of the RBF method, albeit at the cost of introducing errors into the parameterization by not recovering the exact displacement of all surface points. A secondary mesh movement is implemented, within an adjoint-based optimization framework, to eliminate these errors. The proposed scheme is tested within a 3D Euler flow by reducing the pressure drag while maintaining the lift of a wing-body configured Boeing-747 and an Onera-M6 wing. In addition, an inverse pressure design is executed on the Onera-M6 wing, and an inverse span loading case is presented for a wing-body configured DLR-F6 aircraft.
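The basic RBF mesh-movement step the abstract builds on (solve for basis weights that interpolate prescribed surface displacements, then evaluate the displacement field at every volume node) can be sketched as follows. The 2D square "surface", the sample volume nodes, and the support radius are invented for illustration; real schemes work in 3D with thousands of surface points and a greedily reduced point set.

```python
import numpy as np

def wendland_c2(r, R):
    """Compactly supported Wendland C2 basis, common in RBF mesh movement."""
    x = np.clip(r / R, 0.0, 1.0)
    return (1.0 - x) ** 4 * (4.0 * x + 1.0)

def rbf_deform(surf_pts, surf_disp, vol_pts, R=2.0):
    """Interpolate surface displacements to volume nodes: solve for RBF
    weights at the surface points, then evaluate the field everywhere."""
    d_ss = np.linalg.norm(surf_pts[:, None] - surf_pts[None, :], axis=-1)
    d_vs = np.linalg.norm(vol_pts[:, None] - surf_pts[None, :], axis=-1)
    W = np.linalg.solve(wendland_c2(d_ss, R), surf_disp)  # one column per axis
    return vol_pts + wendland_c2(d_vs, R) @ W

# Toy 2D case: a "surface" square whose right edge bulges outward in x.
surf = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])
disp = np.array([[0., 0.], [0.2, 0.], [0.2, 0.], [0., 0.]])
vol = np.array([[0.5, 0.5], [0.9, 0.5], [0.1, 0.5]])
moved = rbf_deform(surf, disp, vol)   # nodes near the right edge move more
```

The exactness of the interpolation at the surface points is what is lost when a reduced point set is used, which is the parameterization error the paper's secondary mesh movement removes.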
NASA Astrophysics Data System (ADS)
Alameddine, Ibrahim; Karmakar, Subhankar; Qian, Song S.; Paerl, Hans W.; Reckhow, Kenneth H.
2013-10-01
The total maximum daily load program aims to monitor more than 40,000 standard violations in around 20,000 impaired water bodies across the United States. Given resource limitations, future monitoring efforts have to be hedged against the uncertainties in the monitored system, while taking into account existing knowledge. In that respect, we have developed a hierarchical spatiotemporal Bayesian model that can be used to optimize an existing monitoring network by retaining stations that provide the maximum amount of information, while identifying locations that would benefit from the addition of new stations. The model assumes the water quality parameters are adequately described by a joint matrix normal distribution. The adopted approach allows for a reduction in redundancies, while emphasizing information richness rather than data richness. The developed approach incorporates the concept of entropy to account for the associated uncertainties. Three different entropy-based criteria are adopted: total system entropy, chlorophyll-a standard violation entropy, and dissolved oxygen standard violation entropy. A multiple attribute decision making framework is adopted to integrate the competing design criteria and to generate a single optimal design. The approach is implemented on the water quality monitoring system of the Neuse River Estuary in North Carolina, USA. The model results indicate that the high priority monitoring areas identified by the total system entropy and the dissolved oxygen violation entropy criteria are largely coincident. The monitoring design based on the chlorophyll-a standard violation entropy proved to be less informative, given the low probabilities of violating the water quality standard in the estuary.
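The entropy-based retention idea, keep the subset of stations whose joint measurements carry the most information, can be sketched with a greedy selection that maximizes the joint entropy of a multivariate normal (for which entropy grows with log-determinant of the covariance). The station covariance matrix is hypothetical, and this simple stand-in omits the paper's hierarchical spatiotemporal model and its violation-specific entropy criteria.

```python
import numpy as np

def gaussian_entropy(cov):
    """Differential entropy of a multivariate normal with covariance cov."""
    k = cov.shape[0]
    return 0.5 * (k * np.log(2.0 * np.pi * np.e) + np.linalg.slogdet(cov)[1])

def greedy_network(cov, n_keep):
    """Greedily retain the stations that maximize total system entropy."""
    chosen = []
    while len(chosen) < n_keep:
        rest = [i for i in range(cov.shape[0]) if i not in chosen]
        best = max(rest, key=lambda i: gaussian_entropy(
            cov[np.ix_(chosen + [i], chosen + [i])]))
        chosen.append(best)
    return sorted(chosen)

# Hypothetical station covariance: stations 0-2 are highly correlated
# (redundant), while stations 3 and 4 carry nearly independent information.
C = np.array([
    [1.0, 0.9, 0.9, 0.1, 0.0],
    [0.9, 1.0, 0.9, 0.1, 0.0],
    [0.9, 0.9, 1.0, 0.1, 0.0],
    [0.1, 0.1, 0.1, 1.0, 0.0],
    [0.0, 0.0, 0.0, 0.0, 1.0],
])
kept = greedy_network(C, 3)   # one of the redundant trio, plus 3 and 4
```

This is the "information richness rather than data richness" principle: the greedy step discards two of the three mutually redundant stations no matter how many observations they produce.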
Constrained Aeroacoustic Shape Optimization Using the Surrogate Management Framework
NASA Technical Reports Server (NTRS)
Marsden, Alison L.; Wang, Meng; Dennis, John E., Jr.
2003-01-01
Reduction of noise generated by turbulent flow past the trailing-edge of a lifting surface is a challenge in many aeronautical and naval applications. Numerical predictions of trailing-edge noise necessitate the use of advanced simulation techniques such as large-eddy simulation (LES) in order to capture a wide range of turbulence scales which are the source of broadband noise. Aeroacoustic calculations of the flow over a model airfoil trailing edge using LES and aeroacoustic theory have been presented in Wang and Moin and were shown to agree favorably with experiments. The goal of the present work is to apply shape optimization to the trailing edge flow previously studied, in order to control aerodynamic noise.
Optimizing Monitoring Designs under Alternative Objectives
Gastelum, Jason A.; Porter, Ellen A.; ...
2014-12-31
This paper describes an approach to identify monitoring designs that optimize detection of CO2 leakage from a carbon capture and sequestration (CCS) reservoir and compares the results generated under two alternative objective functions. The first objective function minimizes the expected time to first detection of CO2 leakage; the second, more conservative objective function minimizes the maximum time to leakage detection across the set of realizations. The approach applies a simulated annealing algorithm that searches the solution space by iteratively mutating the incumbent monitoring design. The approach takes uncertainty into account by evaluating the performance of potential monitoring designs across a set of simulated leakage realizations. The approach relies on a flexible two-tiered signature to infer that CO2 leakage has occurred. This research is part of the National Risk Assessment Partnership, a U.S. Department of Energy (DOE) project tasked with conducting risk and uncertainty analysis in the areas of reservoir performance, natural leakage pathways, wellbore integrity, groundwater protection, monitoring, and systems level modeling.
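The mutate-and-anneal loop described above can be sketched generically. In the toy below, the well set, detection-time table, and all names are invented for illustration; the actual approach scores designs across simulated leakage realizations rather than a fixed table.

```python
import math
import random

def simulated_annealing(initial, mutate, cost, t0=1.0, cooling=0.95,
                        steps=2000, seed=0):
    """Generic SA search: iteratively mutate the incumbent design and
    accept worse designs with a temperature-dependent probability."""
    rng = random.Random(seed)
    best = incumbent = initial
    t = t0
    for _ in range(steps):
        candidate = mutate(incumbent, rng)
        delta = cost(candidate) - cost(incumbent)
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            incumbent = candidate
        if cost(incumbent) < cost(best):
            best = incumbent
        t *= cooling
    return best

# Toy stand-in: choose 2 of 6 candidate monitoring wells; each well has a
# hypothetical expected time-to-detection (years).
DETECT = {0: 9.0, 1: 4.0, 2: 7.0, 3: 2.0, 4: 8.0, 5: 5.0}

def mutate(design, rng):
    out = set(design)
    out.discard(rng.choice(sorted(out)))   # drop one well
    out.add(rng.randrange(6))              # add a random well
    return frozenset(out) if len(out) == 2 else frozenset(design)

def cost(design):
    # expected time to first detection = earliest-detecting well in the design
    return min(DETECT[i] for i in design)

best = simulated_annealing(frozenset({0, 2}), mutate, cost)
```

Swapping `cost` for a max-over-realizations criterion reproduces the paper's second, more conservative objective.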
Optimal Ground Source Heat Pump System Design
Ozbek, Metin; Yavuzturk, Cy; Pinder, George
2015-04-01
Despite the fact that GSHPs first gained popularity as early as the 1940s and can achieve 30 to 60 percent energy savings and carbon emission reductions relative to conventional HVAC systems, geothermal energy accounts for less than 1 percent of total U.S. energy consumption. The key barriers preventing this technically mature technology from reaching its full commercial potential have been its high installation cost and limited consumer knowledge of, and trust in, GSHP systems to deliver the technology cost-effectively in the marketplace. Led by ENVIRON, with support from the University of Hartford and the University of Vermont, the team developed and tested a software-based decision-making tool ('OptGSHP') for the least-cost design of ground-source heat pump ('GSHP') systems. OptGSHP combines state-of-the-art optimization algorithms with GSHP-specific HVAC and groundwater flow and heat transport simulation. The particular strength of OptGSHP is in integrating heat transport due to groundwater flow into the design, which most GSHP designs do not take credit for and are therefore overdesigned.
ERIC Educational Resources Information Center
Lee, Sung Heum; Boling, Elizabeth
1999-01-01
Identifies guidelines from the literature relating to screen design and design of interactive instructional materials. Describes two types of guidelines--those aimed at enhancing motivation and those aimed at preventing loss of motivation--for typography, graphics, color, and animation and audio. Proposes a framework for considering motivation in…
A Robust Control Design Framework for Substructure Models
NASA Technical Reports Server (NTRS)
Lim, Kyong B.
1994-01-01
A framework for designing control systems directly from substructure models and uncertainties is proposed. The technique is based on combining a set of substructure robust control problems by an interface stiffness matrix which appears as a constant gain feedback. Variations of uncertainties in the interface stiffness are treated as a parametric uncertainty. It is shown that multivariable robust control can be applied to generate centralized or decentralized controllers that guarantee performance with respect to uncertainties in the interface stiffness, reduced component modes and external disturbances. The technique is particularly suited for large, complex, and weakly coupled flexible structures.
Sampling design optimization for spatial functions
Olea, R.A.
1984-01-01
A new procedure is presented for minimizing the sampling requirements necessary to estimate a mappable spatial function at a specified level of accuracy. The technique is based on universal kriging, an estimation method within the theory of regionalized variables. Neither actual implementation of the sampling nor universal kriging estimations are necessary to make an optimal design. The average standard error and maximum standard error of estimation over the sampling domain are used as global indices of sampling efficiency. The procedure optimally selects those parameters controlling the magnitude of the indices, including the density and spatial pattern of the sample elements and the number of nearest sample elements used in the estimation. As an illustration, the network of observation wells used to monitor the water table in the Equus Beds of Kansas is analyzed and an improved sampling pattern suggested. This example demonstrates the practical utility of the procedure, which can be applied equally well to other spatial sampling problems, as the procedure is not limited by the nature of the spatial function. © 1984 Plenum Publishing Corporation.
Space tourism optimized reusable spaceplane design
NASA Astrophysics Data System (ADS)
Penn, Jay P.; Lindley, Charles A.
1997-01-01
Market surveys suggest that a viable space tourism industry will require flight rates about two orders of magnitude higher than those required for conventional spacelift. Although enabling round-trip cost goals for a viable space tourism business are about $240 per pound ($529/kg), or $72,000 per passenger round-trip, goals should be about $50 per pound ($110/kg) or approximately $15,000 for a typical passenger and baggage. The lower price will probably open space tourism to the general population. Vehicle reliabilities must approach those of commercial aircraft as closely as possible. This paper addresses the development of spaceplanes optimized for the ultra-high flight rate and high reliability demands of the space tourism mission. It addresses the fundamental operability, reliability, and cost drivers needed to satisfy this mission need. Figures of merit similar to those used to evaluate the economic viability of conventional commercial aircraft are developed, including items such as payload/vehicle dry weight, turnaround time, propellant cost per passenger, and insurance and depreciation costs, which show that infrastructure can be developed for a viable space tourism industry. A reference spaceplane design optimized for space tourism is described. Subsystem allocations for reliability, operability, and costs are made and a route to developing such a capability is discussed. The vehicle's ability to also satisfy the traditional spacelift market is shown.
Kwok, T; Smith, K A
2000-09-01
The aim of this paper is to study both the theoretical and experimental properties of chaotic neural network (CNN) models for solving combinatorial optimization problems. Previously we have proposed a unifying framework which encompasses the three main model types, namely, Chen and Aihara's chaotic simulated annealing (CSA) with decaying self-coupling, Wang and Smith's CSA with decaying timestep, and the Hopfield network with chaotic noise. Each of these models can be represented as a special case under the framework for certain conditions. This paper combines the framework with experimental results to provide new insights into the effect of the chaotic neurodynamics of each model. By solving the N-queen problem of various sizes with computer simulations, the CNN models are compared in different parameter spaces, with optimization performance measured in terms of feasibility, efficiency, robustness and scalability. Furthermore, characteristic chaotic neurodynamics crucial to effective optimization are identified, together with a guide to choosing the corresponding model parameters.
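The decaying self-coupling that distinguishes the CSA models above can be illustrated with a single chaotic neuron, loosely in the spirit of Chen and Aihara's model. All parameter values and the exact update form below are illustrative assumptions, not taken from the paper.

```python
import math

def csa_neuron(steps=400, k=0.9, z0=0.08, beta=0.015, i0=0.65,
               eps=0.004, bias=0.2):
    """Single chaotic neuron with decaying self-coupling z: while z is large
    the negative self-feedback keeps the output wandering; as z decays the
    map anneals toward conventional fixed-point (Hopfield-like) dynamics."""
    y, z, xs = 0.0, z0, []
    for _ in range(steps):
        x = 1.0 / (1.0 + math.exp(-y / eps))          # sigmoidal output
        y = k * y - z * (x - i0) + (1.0 - k) * bias   # internal-state update
        z *= 1.0 - beta                               # annealed self-coupling
        xs.append(x)
    return xs, z
```

In a full CNN solver for the N-queen problem, one such neuron per decision variable would be coupled through the constraint energy, with the annealing schedule of z governing the chaos-to-convergence transition.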
Three Program Architecture for Design Optimization
NASA Technical Reports Server (NTRS)
Miura, Hirokazu; Olson, Lawrence E. (Technical Monitor)
1998-01-01
In this presentation, I would like to review a historical perspective on the program architectures used to build design optimization capabilities based on mathematical programming and other numerical search techniques. It is rather straightforward to classify the program architectures into the three categories shown above. However, the relative importance of each of the three approaches has not been static; it has changed dynamically as the capabilities of available computational resources have increased. For example, we once considered that the direct coupling architecture would never be used for practical problems, but the availability of computer systems such as multi-processor machines has changed that assessment. In this presentation, I would like to review the roles of the three architectures from a historical as well as a current and future perspective. There may also be some possibility for the emergence of hybrid architectures. I hope to provide some seeds for an active discussion of where we are heading in this very dynamic environment for high-speed computing and communication.
Optimal screening designs for biomedical technology
Torney, D.C.; Bruno, W.J.; Knill, E.
1997-10-01
This is the final report of a three-year, Laboratory Directed Research and Development (LDRD) project at Los Alamos National Laboratory (LANL). Screening a large number of different types of molecules to isolate a few with desirable properties is essential in biomedical technology. For example, trying to find a particular gene in the human genome can be akin to looking for a needle in a haystack. Fortunately, testing mixtures, or pools, of molecules allows the desirable ones to be identified using a number of experiments proportional only to the logarithm of the total number of types of molecules. We show how to capitalize upon this potential by using optimized pooling schemes, or designs. We propose efficient non-adaptive pooling designs, such as "random sets" designs and modified "row and column" designs. Our results have been applied in the pooling and unique-sequence screening of clone libraries used in the Human Genome Project and in the mapping of human chromosome 16, which required the use of liquid-transferring robots and manifolds for the largest clone libraries. Finally, we developed an efficient technique for finding the posterior probability that each molecule has the desirable property, given the pool assay results. This technique works well in practice, even if there are substantial error rates in the pool assay data. Both our methods and our results are relevant to a broad spectrum of research in modern biology.
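The logarithmic scaling claimed above is easiest to see in the simplest non-adaptive scheme, binary pooling for a single positive item. This is a textbook construction, not the paper's "random sets" or "row and column" designs:

```python
import math

def binary_pools(n):
    """Non-adaptive binary pooling: pool b contains every item whose index
    has bit b set, so ceil(log2 n) pools pinpoint a single positive item."""
    bits = max(1, math.ceil(math.log2(n)))
    return [[i for i in range(n) if (i >> b) & 1] for b in range(bits)]

def decode(assay):
    """Recover the positive item's index from the per-pool assay outcomes."""
    return sum(1 << b for b, positive in enumerate(assay) if positive)

pools = binary_pools(1000)               # 10 pools suffice for 1000 clones
assay = [613 in pool for pool in pools]  # simulate error-free pool assays
```

The paper's designs generalize this idea to multiple positives and to assay errors, which is where the posterior-probability decoding becomes necessary.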
A Robust Kalman Framework with Resampling and Optimal Smoothing
Kautz, Thomas; Eskofier, Bjoern M.
2015-01-01
The Kalman filter (KF) is an extremely powerful and versatile tool for signal processing that has been applied extensively in various fields. We introduce a novel Kalman-based analysis procedure that encompasses robustness towards outliers, Kalman smoothing, and real-time conversion from non-uniformly sampled inputs to a constant output rate. These features have mostly been treated independently, so that not all of their benefits could be exploited at the same time. Here, we present a coherent analysis procedure that combines the aforementioned features and their benefits. To facilitate utilization of the proposed methodology and to ensure optimal performance, we also introduce a procedure to calculate all necessary parameters. Thereby, we substantially expand the versatility of one of the most widely used filtering approaches, taking full advantage of its most prevalent extensions. The applicability and superior performance of the proposed methods are demonstrated using simulated and real data. The possible areas of application for the presented analysis procedure range from movement analysis and medical imaging, through brain-computer interfaces, to robot navigation and meteorological studies. PMID:25734647
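The handling of non-uniformly sampled inputs can be sketched with a scalar predict/update cycle where the process noise grows with the gap between samples. This is a minimal constant-position model with hypothetical noise parameters, far simpler than the robust, smoothed procedure the paper proposes:

```python
def kalman_1d(times, zs, q=0.1, r=1.0):
    """Scalar constant-position Kalman filter over non-uniformly sampled
    measurements zs at timestamps times: the predicted uncertainty grows
    with the elapsed time dt, so irregular gaps are handled naturally."""
    x, p = zs[0], r          # initialize from the first measurement
    est = [x]
    for i in range(1, len(zs)):
        dt = times[i] - times[i - 1]
        p = p + q * dt             # predict: uncertainty grows with dt
        k = p / (p + r)            # Kalman gain
        x = x + k * (zs[i] - x)    # update with the new measurement
        p = (1.0 - k) * p
        est.append(x)
    return est
```

Robustness to outliers, as in the paper, would additionally gate or down-weight innovations `zs[i] - x` that are implausibly large relative to `p + r`.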
Microgravity isolation system design: A modern control synthesis framework
NASA Technical Reports Server (NTRS)
Hampton, R. D.; Knospe, C. R.; Allaire, P. E.; Grodsinsky, C. M.
1994-01-01
Manned orbiters will require active vibration isolation for acceleration-sensitive microgravity science experiments. Since umbilicals are highly desirable or even indispensable for many experiments, and since their presence greatly affects the complexity of the isolation problem, they should be considered in control synthesis. A general framework is presented for applying extended H2 synthesis methods to the three-dimensional microgravity isolation problem. The methodology integrates control and state frequency weighting and input and output disturbance accommodation techniques into the basic H2 synthesis approach. The various system models needed for design and analysis are also presented. The paper concludes with a discussion of a general design philosophy for the microgravity vibration isolation problem.
Eldred, M.S.; Hart, W.E.; Bohnhoff, W.J.; Romero, V.J.; Hutchinson, S.A.; Salinger, A.G.
1996-08-01
The benefits of applying optimization to computational models are well known, but the range of their widespread application to date has been limited. This effort attempts to extend the disciplinary areas to which optimization algorithms may be readily applied through the development and application of advanced optimization strategies capable of handling the computational difficulties associated with complex simulation codes. Towards this goal, a flexible software framework is under continued development for the application of optimization techniques to broad classes of engineering applications, including those with high computational expense and nonsmooth, nonconvex design space features. Object-oriented software design with C++ has been employed to provide a flexible, extensible, and robust multidisciplinary optimization toolkit for computationally intensive simulations. In this paper, demonstrations of advanced optimization strategies using the software are presented in the hybridization and parallel processing research areas. Performance of the advanced strategies is compared with a benchmark nonlinear programming optimization.
Hard and Soft Constraints in Reliability-Based Design Optimization
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Giesy, Daniel P.; Kenny, Sean P.
2006-01-01
This paper proposes a framework for the analysis and design optimization of models subject to parametric uncertainty, where design requirements in the form of inequality constraints are present. Emphasis is given to uncertainty models prescribed by norm-bounded perturbations from a nominal parameter value and by sets of componentwise bounded uncertain variables. These models, which often arise in engineering problems, allow for sharp mathematical manipulation. Constraints can be implemented in the hard sense, i.e., constraints must be satisfied for all parameter realizations in the uncertainty model, or in the soft sense, i.e., constraints can be violated by some realizations of the uncertain parameter. In regard to hard constraints, the methodology makes it possible (i) to determine whether a hard constraint can be satisfied for a given uncertainty model and constraint structure, (ii) to generate conclusive, formally verifiable reliability assessments that allow for unprejudiced comparisons of competing design alternatives, and (iii) to identify the critical combination of uncertain parameters leading to constraint violations. In regard to soft constraints, the methodology allows the designer (i) to use probabilistic uncertainty models, (ii) to calculate upper bounds on the probability of constraint violation, and (iii) to efficiently estimate failure probabilities via a hybrid method. This method integrates the upper bounds, for which closed-form expressions are derived, with conditional sampling. In addition, an l(sub infinity) formulation for the efficient manipulation of hyper-rectangular sets is also proposed.
Chip Design Process Optimization Based on Design Quality Assessment
NASA Astrophysics Data System (ADS)
Häusler, Stefan; Blaschke, Jana; Sebeke, Christian; Rosenstiel, Wolfgang; Hahn, Axel
2010-06-01
Nowadays, the managing of product development projects is increasingly challenging. Especially the IC design of ASICs with both analog and digital components (mixed-signal design) is becoming more and more complex, while the time-to-market window narrows at the same time. Still, high quality standards must be fulfilled. Projects and their status are becoming less transparent due to this complexity. This makes the planning and execution of projects rather difficult. Therefore, there is a need for efficient project control. A main challenge is the objective evaluation of the current development status. Are all requirements successfully verified? Are all intermediate goals achieved? Companies often develop special solutions that are not reusable in other projects. This makes the quality measurement process itself less efficient and produces too much overhead. The method proposed in this paper is a contribution to solve these issues. It is applied at a German design house for analog mixed-signal IC design. This paper presents the results of a case study and introduces an optimized project scheduling on the basis of quality assessment results.
Microgravity isolation system design: A modern control analysis framework
NASA Technical Reports Server (NTRS)
Hampton, R. D.; Knospe, C. R.; Allaire, P. E.; Grodsinsky, C. M.
1994-01-01
Many acceleration-sensitive, microgravity science experiments will require active vibration isolation from the manned orbiters on which they will be mounted. The isolation problem, especially in the case of a tethered payload, is a complex three-dimensional one that is best suited to modern-control design methods. These methods, although more powerful than their classical counterparts, can nonetheless go only so far in meeting the design requirements for practical systems. Once a tentative controller design is available, it must still be evaluated to determine whether or not it is fully acceptable, and to compare it with other possible design candidates. Realistically, such evaluation will be an inherent part of a necessary iterative design process. In this paper, an approach is presented for applying complex mu-analysis methods to a closed-loop vibration isolation system (experiment plus controller). An analysis framework is presented for evaluating nominal stability, nominal performance, robust stability, and robust performance of active microgravity isolation systems, with emphasis on the effective use of mu-analysis methods.
An Integrated Framework Advancing Membrane Protein Modeling and Design
Weitzner, Brian D.; Duran, Amanda M.; Tilley, Drew C.; Elazar, Assaf; Gray, Jeffrey J.
2015-01-01
Membrane proteins are critical functional molecules in the human body, constituting more than 30% of open reading frames in the human genome. Unfortunately, a myriad of difficulties in overexpression and reconstitution into membrane mimetics severely limit our ability to determine their structures. Computational tools are therefore instrumental to membrane protein structure prediction, consequently increasing our understanding of membrane protein function and their role in disease. Here, we describe a general framework facilitating membrane protein modeling and design that combines the scientific principles for membrane protein modeling with the flexible software architecture of Rosetta3. This new framework, called RosettaMP, provides a general membrane representation that interfaces with scoring, conformational sampling, and mutation routines that can be easily combined to create new protocols. To demonstrate the capabilities of this implementation, we developed four proof-of-concept applications for (1) prediction of free energy changes upon mutation; (2) high-resolution structural refinement; (3) protein-protein docking; and (4) assembly of symmetric protein complexes, all in the membrane environment. Preliminary data show that these algorithms can produce meaningful scores and structures. The data also suggest needed improvements to both sampling routines and score functions. Importantly, the applications collectively demonstrate the potential of combining the flexible nature of RosettaMP with the power of Rosetta algorithms to facilitate membrane protein modeling and design. PMID:26325167
Application of numerical optimization to rotor aerodynamic design
NASA Technical Reports Server (NTRS)
Pleasants, W. A., III; Wiggins, T. J.
1984-01-01
Based on initial results obtained from the performance optimization code, a number of observations can be made regarding the utility of optimization codes in supporting the design of rotors for improved performance. (1) The primary objective of improving the productivity and responsiveness of current design methods can be met. (2) The use of optimization allows the designer to consider a wider range of design variables in a greatly compressed time period. (3) Optimization requires the user to carefully define the problem to avoid unproductive use of computer resources. (4) Optimization will increase the burden on the analyst to validate designs and to improve the accuracy of analysis methods. (5) Direct calculation of finite-difference derivatives by the optimizer was not prohibitive for this application, but was expensive. Approximate analysis in some form would be considered to improve program response time. (6) Program development is not complete and will continue to evolve to integrate new analysis methods, design problems, and alternate optimizer options.
Optimal design of artificial reefs for sturgeon
NASA Astrophysics Data System (ADS)
Yarbrough, Cody; Cotel, Aline; Kleinheksel, Abby
2015-11-01
The Detroit River, part of a busy corridor between Lakes Huron and Erie, was extensively modified to create deep shipping channels, resulting in a loss of spawning habitat for lake sturgeon and other native fish (Caswell et al. 2004, Bennion and Manny 2011). Under the U.S.- Canada Great Lakes Water Quality Agreement, there are remediation plans to construct fish spawning reefs to help with historic habitat losses and degraded fish populations, specifically sturgeon. To determine optimal reef design, experimental work has been undertaken. Different sizes and shapes of reefs are tested for a given set of physical conditions, such as flow depth and flow velocity, matching the relevant dimensionless parameters dominating the flow physics. The physical conditions are matched with the natural conditions encountered in the Detroit River. Using Particle Image Velocimetry, Acoustic Doppler Velocimetry and dye studies, flow structures, vorticity and velocity gradients at selected locations have been identified and quantified to allow comparison with field observations and numerical model results. Preliminary results are helping identify the design features to be implemented in the next phase of reef construction. Sponsored by NOAA.
NASA Astrophysics Data System (ADS)
Maschio, Célio; José Schiozer, Denis
2015-01-01
In this article, a new optimization framework to reduce uncertainties in petroleum reservoir attributes using artificial intelligence techniques (neural network and genetic algorithm) is proposed. Instead of using the deterministic values of the reservoir properties, as in a conventional process, the parameters of the probability density function of each uncertain attribute are set as design variables in an optimization process using a genetic algorithm. The objective function (OF) is based on the misfit of a set of models, sampled from the probability density function, and a symmetry factor (which represents the distribution of curves around the history) is used as weight in the OF. Artificial neural networks are trained to represent the production curves of each well and the proxy models generated are used to evaluate the OF in the optimization process. The proposed method was applied to a reservoir with 16 uncertain attributes and promising results were obtained.
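The genetic-algorithm layer described above can be sketched with a minimal real-coded GA in which the design variables stand in for the pdf parameters (e.g. mean, spread) of an uncertain attribute. The objective, bounds, and operators below are illustrative; the paper's actual objective function evaluates history misfit through neural-network proxy models of well production curves.

```python
import random

def genetic_minimize(obj, bounds, pop_size=30, gens=60, seed=1):
    """Minimal real-coded GA: elitist selection, averaging crossover, and
    clamped Gaussian mutation on one randomly chosen coordinate."""
    rng = random.Random(seed)
    def rand_ind():
        return [rng.uniform(lo, hi) for lo, hi in bounds]
    pop = [rand_ind() for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=obj)
        elite = pop[: pop_size // 2]              # keep the best half
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]   # crossover
            j = rng.randrange(len(child))
            lo, hi = bounds[j]
            child[j] = min(hi, max(lo, child[j] + rng.gauss(0, 0.1 * (hi - lo))))
            children.append(child)
        pop = elite + children
    return min(pop, key=obj)

def misfit(v):
    # Hypothetical stand-in: history best matched by mean = 2.0, spread = 0.5.
    return (v[0] - 2.0) ** 2 + (v[1] - 0.5) ** 2

best = genetic_minimize(misfit, [(0.0, 5.0), (0.1, 2.0)])
```

In the paper's setting, `misfit` would be replaced by the proxy-based OF with the symmetry-factor weighting.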
Communication Optimizations for a Wireless Distributed Prognostic Framework
NASA Technical Reports Server (NTRS)
Saha, Sankalita; Saha, Bhaskar; Goebel, Kai
2009-01-01
Distributed architecture for prognostics is an essential step in prognostic research in order to enable feasible real-time system health management. Communication overhead is an important design problem for such systems. In this paper we focus on communication issues faced in the distributed implementation of an important class of algorithms for prognostics: particle filters. In spite of being computation and memory intensive, particle filters lend themselves well to distributed implementation, except for one significant step: resampling. We propose a new resampling scheme, called parameterized resampling, that attempts to reduce communication between collaborating nodes in a distributed wireless sensor network. Analysis and comparison with relevant resampling schemes is also presented. A battery health management system is used as the target application. In the context of minimizing communication overhead, the proposed scheme performs significantly better than existing schemes, reducing both the communication message length and the total number of communication messages exchanged while not compromising prediction accuracy or precision. Future work will explore the effects of the new resampling scheme on the overall computational performance of the whole system, as well as full implementation of the new scheme on the Sun SPOT devices. Exploring different network architectures for efficient communication is an important future research direction as well.
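The paper's parameterized resampling is not reproduced here, but the standard systematic resampling it would be compared against shows why resampling is communication-sensitive: a single uniform offset determines the whole draw, so only that scalar (plus the weights) must be shared to reproduce it on another node. A hedged sketch:

```python
import random

def systematic_resample(weights, seed=0):
    """Systematic resampling for particle filters: one uniform offset
    positions n evenly spaced pointers over the cumulative (normalized)
    weights, and each pointer selects the particle index it lands on."""
    rng = random.Random(seed)
    n = len(weights)
    total = sum(weights)
    positions = [(rng.random() + i) / n for i in range(n)]
    indices, cum, j = [], weights[0] / total, 0
    for p in positions:
        while p > cum:           # advance to the particle containing p
            j += 1
            cum += weights[j] / total
        indices.append(j)
    return indices
```

A particle with normalized weight w is guaranteed at least floor(n*w) copies, which keeps the resampled set close to the weight distribution.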
NASA Astrophysics Data System (ADS)
Wang, Bo; Tian, Kuo; Zhao, Haixin; Hao, Peng; Zhu, Tianyu; Zhang, Ke; Ma, Yunlong
2016-09-01
In order to improve the post-buckling optimization efficiency of hierarchical stiffened shells, a multilevel optimization framework accelerated by an adaptive equivalent strategy is presented in this paper. First, the Numerical-based Smeared Stiffener Method (NSSM) for hierarchical stiffened shells is derived by means of the numerical implementation of asymptotic homogenization (NIAH) method. Based on the NSSM, a reasonable adaptive equivalent strategy for hierarchical stiffened shells is developed from the concept of hierarchy reduction. Its core idea is to decide self-adaptively which hierarchy of the structure should be made equivalent, according to the critical buckling mode rapidly predicted by the NSSM. Compared with the detailed model, the high prediction accuracy and efficiency of the proposed model are highlighted. On the basis of this adaptive equivalent model, a multilevel optimization framework is then established by decomposing the complex entire optimization process into major-stiffener-level and minor-stiffener-level sub-optimizations, during which Fixed Point Iteration (FPI) is employed to accelerate convergence. Finally, illustrative examples of the multilevel framework are carried out to demonstrate its efficiency and effectiveness in searching for the global optimum, in contrast with the single-level optimization method. Remarkably, the high efficiency and flexibility of the adaptive equivalent strategy are demonstrated by comparison with the single equivalent strategy.
Multidisciplinary Analysis and Optimal Design: As Easy as it Sounds?
NASA Technical Reports Server (NTRS)
Moore, Greg; Chainyk, Mike; Schiermeier, John
2004-01-01
The viewgraph presentation examines optimal design for precision, large aperture structures. Discussion focuses on aspects of design optimization, code architecture and current capabilities, and planned activities and collaborative area suggestions. The discussion of design optimization examines design sensitivity analysis; practical considerations; and new analytical environments including finite element-based capability for high-fidelity multidisciplinary analysis, design sensitivity, and optimization. The discussion of code architecture and current capabilities includes basic thermal and structural elements, nonlinear heat transfer solutions and process, and optical modes generation.
Design Time Optimization for Hardware Watermarking Protection of HDL Designs
Castillo, E.; Morales, D. P.; García, A.; Parrilla, L.; Todorovich, E.; Meyer-Baese, U.
2015-01-01
HDL-level design offers important advantages for the application of watermarking to IP cores, but its complexity also requires tools automating these watermarking algorithms. A new tool for signature distribution through combinational logic is proposed in this work. IPP@HDL, a previously proposed high-level watermarking technique, has been employed for evaluating the tool. IPP@HDL relies on spreading the bits of a digital signature at the HDL design level using combinational logic included within the original system. The development of this new tool for the signature distribution has not only extended and eased the applicability of this IPP technique, but it has also improved the signature hosting process itself. Three algorithms were studied in order to develop this automated tool. The selection of a cost function determines the best hosting solutions in terms of area and performance penalties on the IP core to protect. A 1D-DWT core and MD5 and SHA1 digital signatures were used in order to illustrate the benefits of the new tool and its optimization related to the extraction logic resources. Among the proposed algorithms, the alternative based on simulated annealing reduces the additional resources while maintaining an acceptable computation time and also saving designer effort and time. PMID:25861681
Design of Optimal Cyclers Using Solar Sails
2002-12-01
necessary (but not sufficient) conditions for optimality in these cases. Moreover, the optimal control solution is the one where H is minimized... problems, H = −1 for all time. The first and second order necessary (but not sufficient) conditions for optimality using the Hamiltonian are written as... optimization and the initial conditions, the path of the sail could be propagated by means of a numeric ordinary differential equation solver on the non
Interactive computer program for optimal designs of longitudinal cohort studies.
Tekle, Fetene B; Tan, Frans E S; Berger, Martijn P F
2009-05-01
Many large-scale longitudinal cohort studies have been carried out or are ongoing in different fields of science. Such studies need careful planning to obtain the desired quality of results with the available resources. In the past, a number of studies have been performed on optimal designs for longitudinal research. However, no computer program was yet available to help researchers plan their longitudinal cohort design in an optimal way. A new interactive computer program for the optimization of designs of longitudinal cohort studies is therefore presented. The computer program helps users to identify the optimal cohort design with an optimal number of repeated measurements per subject and an optimal allocation of time points within a given study period. Further, users can compute the loss in relative efficiency of any other alternative design compared to the optimal one. The computer program is described and illustrated using a practical example.
Integrated Layout and Support Structure Optimization for Offshore Wind Farm Design
NASA Astrophysics Data System (ADS)
Ashuri, T.; Ponnurangam, C.; Zhang, J.; Rotea, M.
2016-09-01
This paper develops a multidisciplinary design optimization framework for integrated design optimization of offshore wind farm layout and support structure. A computational model is developed to characterize the physics of the wind farm wake, aerodynamic and hydrodynamic loads, response of the support structure to these loads, soil-structure interaction, as well as different cost elements. Levelized cost of energy is introduced as the objective function. The design constraints are the farm external boundary, and support structure buckling, first modal frequency, fatigue damage, and ultimate stresses. To evaluate the effectiveness of the proposed approach, four optimization scenarios are considered: a feasible baseline design, optimization of layout only, optimization of support structure only, and integrated design of the layout and support structure. Compared to the baseline design, the optimization results show that the isolated support structure design reduces the levelized cost of energy by 0.6%, the isolated layout design reduces the levelized cost of energy by 2.0%, and the integrated layout and support structure design reduces the levelized cost of energy by 2.6%.
Multidisciplinary design optimization - An emerging new engineering discipline
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw
1993-01-01
A definition of the multidisciplinary design optimization (MDO) is introduced, and the functionality and relationship of the MDO conceptual components are examined. The latter include design-oriented analysis, approximation concepts, mathematical system modeling, design space search, an optimization procedure, and a human interface.
Meyer, Burghard Christian; Lescot, Jean-Marie; Laplana, Ramon
2009-02-01
Two spatial optimization approaches, developed from the opposing perspectives of ecological economics and landscape planning and aimed at the definition of new distributions of farming systems and of land use elements, are compared and integrated into a general framework. The first approach, applied to a small river catchment in southwestern France, uses SWAT (Soil and Water Assessment Tool) and a weighted goal programming model in combination with a geographical information system (GIS) for the determination of optimal farming system patterns, based on selected objective functions to minimize deviations from the goals of reducing nitrogen and maintaining income. The second approach, demonstrated in a suburban landscape near Leipzig, Germany, defines a GIS-based predictive habitat model for the search of unfragmented regions suitable for hare populations (Lepus europaeus), followed by compromise optimization with the aim of planning a new habitat structure distribution for the hare. The multifunctional problem is solved by the integration of the three landscape functions ("production of cereals," "resistance to soil erosion by water," and "landscape water retention"). Through the comparison, we propose a framework for the definition of optimal land use patterns based on optimization techniques. The framework includes the main aspects to solve land use distribution problems with the aim of finding the optimal or best land use decisions. It integrates indicators, goals of spatial developments and stakeholders, including weighting, and model tools for the prediction of objective functions and risk assessments. Methodological limits of the uncertainty of data and model outcomes are stressed. The framework clarifies the use of optimization techniques in spatial planning.
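The weighted goal programming step can be illustrated with a toy allocation problem. All coefficients, goals, and weights below are invented for illustration (the study's actual objective functions come from SWAT and GIS data): a catchment is split between two farming systems so as to minimize weighted deviations from a nitrogen goal and an income goal.

```python
# Minimal weighted goal programming sketch (hypothetical numbers):
# allocate a catchment between two farming systems to minimize weighted
# deviations from a nitrogen-reduction goal and an income goal.

NITROGEN = (80.0, 30.0)    # kg N leached per unit area, per system
INCOME = (900.0, 500.0)    # income per unit area, per system
GOAL_N, GOAL_INCOME = 50.0, 700.0
W_N, W_INCOME = 1.0, 0.05  # goal weights

def weighted_deviation(x):
    """x = fraction of area under system 1; the rest under system 2."""
    n = x * NITROGEN[0] + (1 - x) * NITROGEN[1]
    inc = x * INCOME[0] + (1 - x) * INCOME[1]
    over_n = max(0.0, n - GOAL_N)            # penalize exceeding N goal
    under_inc = max(0.0, GOAL_INCOME - inc)  # penalize missing income
    return W_N * over_n + W_INCOME * under_inc

# coarse search over allocations (a real model would use an LP solver)
best_x = min((i / 1000 for i in range(1001)), key=weighted_deviation)
```

The two goals conflict (more of system 1 raises income but leaches more nitrogen), so the optimum is the compromise allocation where the weighted shortfalls balance.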
A robust optimization methodology for preliminary aircraft design
NASA Astrophysics Data System (ADS)
Prigent, S.; Maréchal, P.; Rondepierre, A.; Druot, T.; Belleville, M.
2016-05-01
This article focuses on a robust optimization of an aircraft preliminary design under operational constraints. According to engineers' know-how, the aircraft preliminary design problem can be modelled as an uncertain optimization problem whose objective (the cost or the fuel consumption) is almost affine, and whose constraints are convex. It is shown that this uncertain optimization problem can be approximated in a conservative manner by an uncertain linear optimization program, which enables the use of the techniques of robust linear programming of Ben-Tal, El Ghaoui, and Nemirovski [Robust Optimization, Princeton University Press, 2009]. This methodology is then applied to two real cases of aircraft design and numerical results are presented.
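The conservative approximation at the heart of robust linear programming can be sketched in one constraint. The numbers below are illustrative: a constraint a·x ≤ b with interval-uncertain coefficients is replaced by its worst case over the uncertainty set, which for nonnegative x tightens each coefficient upward.

```python
# Sketch of the robust-LP idea (illustrative numbers): a constraint
# a.x <= b with coefficients a_i in [nominal_i - delta_i, nominal_i + delta_i]
# is replaced by its worst case, assuming x_i >= 0.

def robust_feasible(x, nominal, delta, b):
    """Worst-case check of sum(a_i * x_i) <= b over interval uncertainty."""
    worst = sum((a + d) * xi for a, d, xi in zip(nominal, delta, x))
    return worst <= b

nominal = [2.0, 1.0]
delta = [0.5, 0.2]   # uncertainty half-widths
b = 10.0

# a point feasible for the nominal data but not robustly feasible
x = [4.0, 1.0]
nominal_ok = sum(a * xi for a, xi in zip(nominal, x)) <= b  # 9.0 <= 10
robust_ok = robust_feasible(x, nominal, delta, b)           # 11.2 > 10
```

The robust counterpart of an uncertain LP is again an LP over the tightened constraints, which is what makes the conservative approximation in the article computationally tractable.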
Mugdh, Mrinal; Pilla, Satya
2012-01-01
Health care providers in the United States are constantly faced with the enormous challenge of optimizing their revenue cycle to improve their overall financial performance. These challenges keep evolving in both scope and complexity owing to a host of internal and external factors. Furthermore, given the lack of control that health care providers have over external factors, any attempt to successfully optimize the revenue cycle hinges on several critical improvements aimed at realigning the internal factors. This study provides an integrated change management model that aims to reengineer and realign the people-process-technology framework by using the principles of lean and Six Sigma for revenue cycle optimization.
INSTITUTIONALIZING SAFEGUARDS-BY-DESIGN: HIGH-LEVEL FRAMEWORK
Trond Bjornard PhD; Joseph Alexander; Robert Bean; Brian Castle; Scott DeMuth, Ph.D.; Phillip Durst; Michael Ehinger; Prof. Michael Golay, Ph.D.; Kevin Hase, Ph.D.; David J. Hebditch, DPhil; John Hockert, Ph.D.; Bruce Meppen; James Morgan; Jerry Phillips, Ph.D., PE
2009-02-01
participation in facility design options analysis in the conceptual design phase to enhance intrinsic features, among others. The SBD process is unlikely to be broadly applied in the absence of formal requirements to do so, or compelling evidence of its value. Neither exists today. A formal instrument to require the application of SBD is needed and would vary according to both the national and regulatory environment. Several possible approaches to implementation of the requirements within the DOE framework are explored in this report. Finally, there are numerous barriers to the implementation of SBD, including the lack of a strong safeguards culture, intellectual property concerns, the sensitive nature of safeguards information, and the potentially divergent or conflicting interests of participants in the process. In terms of SBD implementation in the United States, there are no commercial nuclear facilities that are under IAEA safeguards. Efforts to institutionalize SBD must address these issues. Specific work in FY09 could focus on the following: finalizing the proposed SBD process for use by DOE and performing a pilot application on a DOE project in the planning phase; developing regulatory options for mandating SBD; further development of safeguards-related design guidance, principles and requirements; development of a specific SBD process tailored to the NRC environment; and development of an engagement strategy for the IAEA and other international partners.
RTM And VARTM Design, Optimization, And Control With SLIC
2003-07-02
UD-CCM, 2 July 2003. RTM and VARTM Design, Optimization, and Control with SLIC; Kuang-Ting Hsiao, UD-CCM. Simulation-based Liquid Injection Control (SLIC): an artificial-intelligence-optimized design philosophy for RTM/VARTM sensors (ONR Workshop).
Active cooling design for scramjet engines using optimization methods
NASA Technical Reports Server (NTRS)
Scotti, Stephen J.; Martin, Carl J.; Lucas, Stephen H.
1988-01-01
A methodology for using optimization in designing metallic cooling jackets for scramjet engines is presented. The optimal design minimizes the required coolant flow rate subject to temperature, mechanical-stress, and thermal-fatigue-life constraints on the cooling-jacket panels, and Mach-number and pressure constraints on the coolant exiting the panel. The analytical basis for the methodology is presented, and results for the optimal design of panels are shown to demonstrate its utility.
Optimal design and uncertainty quantification in blood flow simulations for congenital heart disease
NASA Astrophysics Data System (ADS)
Marsden, Alison
2009-11-01
Recent work has demonstrated substantial progress in capabilities for patient-specific cardiovascular flow simulations. Recent advances include increasingly complex geometries, physiological flow conditions, and fluid structure interaction. However, inputs to these simulations, including medical image data, catheter-derived pressures, and material properties, can have significant uncertainties associated with them. For simulations to predict clinically useful and reliable output information, it is necessary to quantify the effects of input uncertainties on outputs of interest. In addition, blood flow simulation tools can now be efficiently coupled to shape optimization algorithms for surgery design applications, and these tools should incorporate uncertainty information. We present a unified framework to systematically and efficiently account for uncertainties in simulations using adaptive stochastic collocation. In addition, we present a framework for derivative-free optimization of cardiovascular geometries, and layer these tools to perform optimization under uncertainty. These methods are demonstrated using simulations and surgery optimization to improve hemodynamics in pediatric cardiology applications.
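The stochastic collocation idea can be shown in miniature. The model below is a toy polynomial, not a hemodynamic solver: an uncertain input is propagated by evaluating the model at deterministic quadrature nodes and weighting, instead of random sampling.

```python
# Minimal stochastic-collocation flavour (toy model): propagate an
# interval-uncertain input by evaluating the model at Gauss-Legendre
# collocation points and combining with quadrature weights.

import math

def model(p):
    # hypothetical simulation output as a function of uncertain input p
    return p ** 2 + 1.0

# 2-point Gauss-Legendre nodes/weights on [-1, 1]
NODES = (-1.0 / math.sqrt(3.0), 1.0 / math.sqrt(3.0))
WEIGHTS = (1.0, 1.0)

def collocation_mean(lo, hi):
    """Mean of model(p) for p uniform on [lo, hi]."""
    total = 0.0
    for z, w in zip(NODES, WEIGHTS):
        p = 0.5 * (hi - lo) * z + 0.5 * (hi + lo)  # map node to [lo, hi]
        total += w * model(p)
    return total / sum(WEIGHTS)

mean = collocation_mean(0.0, 2.0)  # exact for this quadratic model
```

For smooth models a handful of collocation points matches the accuracy that Monte Carlo would need thousands of expensive simulations to reach, which is why collocation is attractive when each model evaluation is a full flow simulation.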
A sequential linear optimization approach for controller design
NASA Technical Reports Server (NTRS)
Horta, L. G.; Juang, J.-N.; Junkins, J. L.
1985-01-01
A linear optimization approach with a simple real arithmetic algorithm is presented for reliable controller design and vibration suppression of flexible structures. Using first order sensitivity of the system eigenvalues with respect to the design parameters in conjunction with a continuation procedure, the method converts a nonlinear optimization problem into a maximization problem with linear inequality constraints. The method of linear programming is then applied to solve the converted linear optimization problem. The general efficiency of the linear programming approach allows the method to handle structural optimization problems with a large number of inequality constraints on the design vector. The method is demonstrated using a truss beam finite element model for the optimal sizing and placement of active/passive structural members for damping augmentation. Results using both the sequential linear optimization approach and nonlinear optimization are presented and compared. The insensitivity to initial conditions of the linear optimization approach is also demonstrated.
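The sequential linearization idea can be caricatured in one dimension. The problem below is a made-up quadratic, not the eigenvalue-sensitivity model of the abstract: at each iterate the objective is linearized and minimized over a move-limit box intersected with the linear constraint, and the iterates walk to the constrained optimum.

```python
# 1-D caricature of sequential linear programming: linearize, minimize
# the linear model over a move limit and the constraint x <= upper,
# repeat until the iterate stops moving.

def slp_minimize(grad, x0, upper, step=0.5, iters=50):
    x = x0
    for _ in range(iters):
        g = grad(x)
        # the linear subproblem's optimum sits at a move-limit corner,
        # in the direction opposite the gradient, clipped to feasibility
        x_new = x - step if g > 0 else x + step
        x_new = min(x_new, upper)
        if abs(x_new - x) < 1e-12:
            break
        x = x_new
    return x

# minimize (x - 3)^2 subject to x <= 2; the optimum is at the bound x = 2
x_opt = slp_minimize(lambda x: 2 * (x - 3), x0=0.0, upper=2.0)
```

Each subproblem is a trivial LP here; in the article the same pattern is applied with many eigenvalue-sensitivity constraints, which is where a real LP solver earns its keep.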
Framework for Implementing Engineering Senior Design Capstone Courses and Design Clinics
ERIC Educational Resources Information Center
Franchetti, Matthew; Hefzy, Mohamed Samir; Pourazady, Mehdi; Smallman, Christine
2012-01-01
Senior design capstone projects for engineering students are essential components of an undergraduate program that enhances communication, teamwork, and problem solving skills. Capstone projects with industry are well established in management, but not as heavily utilized in engineering. This paper outlines a general framework that can be used by…
Architectural Design and the Learning Environment: A Framework for School Design Research
ERIC Educational Resources Information Center
Gislason, Neil
2010-01-01
This article develops a theoretical framework for studying how instructional space, teaching and learning are related in practice. It is argued that a school's physical design can contribute to the quality of the learning environment, but several non-architectural factors also determine how well a given facility serves as a setting for teaching…
An Integrated Framework for Parameter-based Optimization of Scientific Workflows
Kumar, Vijay S.; Sadayappan, P.; Mehta, Gaurang; Vahi, Karan; Deelman, Ewa; Ratnakar, Varun; Kim, Jihie; Gil, Yolanda; Hall, Mary; Kurc, Tahsin; Saltz, Joel
2011-01-01
Data analysis processes in scientific applications can be expressed as coarse-grain workflows of complex data processing operations with data flow dependencies between them. Performance optimization of these workflows can be viewed as a search for a set of optimal values in a multi-dimensional parameter space. While some performance parameters such as grouping of workflow components and their mapping to machines do not affect the accuracy of the output, others may dictate trading the output quality of individual components (and of the whole workflow) for performance. This paper describes an integrated framework which is capable of supporting performance optimizations along multiple dimensions of the parameter space. Using two real-world applications in the spatial data analysis domain, we present an experimental evaluation of the proposed framework. PMID:22068617
The Optimal Decay Estimates on the Framework of Besov Spaces for Generally Dissipative Systems
NASA Astrophysics Data System (ADS)
Xu, Jiang; Kawashima, Shuichi
2015-10-01
We give a new decay framework for the general dissipative hyperbolic system and the hyperbolic-parabolic composite system, which allows us to pay less attention to the traditional spectral analysis in comparison with previous efforts. New ingredients lie in the high-frequency and low-frequency decomposition of a pseudo-differential operator and an interpolation inequality related to homogeneous Besov spaces of negative order. Furthermore, we develop the Littlewood-Paley pointwise energy estimates and new time-weighted energy functionals to establish optimal decay estimates on the framework of spatially critical Besov spaces for the degenerately dissipative hyperbolic system of balance laws. Based on the embedding and the improved Gagliardo-Nirenberg inequality, the optimal decay rates are further shown. Finally, as a direct application, the optimal decay rates for three dimensional damped compressible Euler equations are also obtained.
NASA Astrophysics Data System (ADS)
Irfan, Muhammad; Bilal Khurshid, Muhammad; Bai, Qiang; Labi, Samuel; Morin, Thomas L.
2012-05-01
This article presents a framework and an illustrative example for identifying the optimal pavement maintenance and rehabilitation (M&R) strategy using a mixed-integer nonlinear programming model. The objective function is to maximize the cost-effectiveness expressed as the ratio of the effectiveness to the cost. The constraints for the optimization problem are related to performance, budget, and choice. Two different formulations of effectiveness are derived using treatment-specific performance models for each constituent treatment of the strategy; and cost is expressed in terms of the agency and user costs over the life cycle. The proposed methodology is demonstrated using a case study. Probability distributions are established for the optimization input variables and Monte Carlo simulations are carried out to yield optimal solutions. Using the results of these simulations, M&R strategy contours are developed as a novel tool that can help pavement managers quickly identify the optimal M&R strategy for a given pavement section.
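The Monte Carlo step of the methodology can be sketched with hypothetical distributions: sample the uncertain effectiveness and life-cycle cost of candidate M&R strategies and compare expected cost-effectiveness ratios. The strategy names, ranges, and sample size below are invented for illustration.

```python
# Sketch of the Monte Carlo step (hypothetical distributions): sample
# uncertain effectiveness and life-cycle cost for two candidate M&R
# strategies and compare expected cost-effectiveness ratios.

import random

def cost_effectiveness(eff_range, cost_range, n=20000, seed=42):
    """Expected effectiveness/cost ratio under uniform uncertainty."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        eff = rng.uniform(*eff_range)
        cost = rng.uniform(*cost_range)
        total += eff / cost
    return total / n

# hypothetical strategies: (effectiveness range, cost range)
thin_overlay = cost_effectiveness((40, 60), (8, 12))
reconstruction = cost_effectiveness((70, 90), (30, 40))
best = max(("thin_overlay", thin_overlay),
           ("reconstruction", reconstruction), key=lambda kv: kv[1])
```

Repeating this over a grid of treatment timings is what produces the article's M&R strategy contours: each grid cell holds the simulated expected cost-effectiveness of one candidate strategy.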
NASA Astrophysics Data System (ADS)
Gavrishchaka, Valeriy V.; Kovbasinskaya, Maria; Monina, Maria
2008-11-01
Novelty detection is a very desirable additional feature of any practical classification or forecasting system. Novelty and rare-pattern detection is the main objective in such applications as fault/abnormality discovery in complex technical and biological systems, fraud detection, and risk management in the financial and insurance industries. Although many interdisciplinary approaches for rare-event modeling and novelty detection have been proposed, significant data incompleteness due to the nature of the problem makes it difficult to find a universal solution. An even more challenging and much less formalized problem is novelty detection in complex strategies and models, where practical performance criteria are usually multi-objective and the best state-of-the-art solution is often not known due to the complexity of the task and/or the proprietary nature of the application area. For example, it is much more difficult to detect a series of small insider trades or other illegal transactions mixed with valid operations and distributed over a long time period according to a well-designed strategy than a single, large fraudulent transaction. Recently proposed boosting-based optimization was shown to be an effective generic tool for the discovery of stable multi-component strategies/models from existing parsimonious base strategies/models in financial and other applications. Here we outline how the same framework can be used for novelty and fraud detection in complex strategies and models.
Minimax D-Optimal Designs for Item Response Theory Models.
ERIC Educational Resources Information Center
Berger, Martjin P. F.; King, C. Y. Joy; Wong, Weng Kee
2000-01-01
Proposed minimax designs for item response theory (IRT) models to overcome the problem of local optimality. Compared minimax designs to sequentially constructed designs for the two parameter logistic model. Results show that minimax designs can be nearly as efficient as sequentially constructed designs. (Author/SLD)
Optimal Management and Design of Energy Systems under Atmospheric Uncertainty
NASA Astrophysics Data System (ADS)
Anitescu, M.; Constantinescu, E. M.; Zavala, V.
2010-12-01
We present a framework for the optimal management and design of energy systems, such as the power grid or building systems, under uncertainty in atmospheric conditions. The framework is defined in terms of a mathematical paradigm called stochastic programming: minimization of the expected value of the decision-maker's objective function subject to physical and operational constraints, such as a low blackout probability, that are enforced on each scenario. We report results on testing the framework on the optimal management of power grid systems under high wind penetration scenarios, a problem whose time horizon is on the order of days. We discuss the computational effort of scenario generation, which involves running WRF at the high spatio-temporal resolution dictated by the operational constraints, as well as solving the optimal dispatch problem. We demonstrate that accounting for uncertainty in atmospheric conditions results in blackout prevention, whereas decisions using only the mean forecast do not. We discuss issues in using the framework for planning problems, whose time horizon is of several decades, and what requirements this problem would entail from climate simulation systems.
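The stochastic programming paradigm can be sketched with a tiny scenario model. All numbers below are made up: a day-ahead reserve level is chosen to minimize expected cost over wind scenarios, with a heavy shortfall penalty standing in for the blackout constraint.

```python
# Minimal stochastic-programming sketch (made-up numbers): choose a
# reserve level minimizing expected cost over wind scenarios.

SCENARIOS = [            # (probability, wind power available)
    (0.5, 80.0),
    (0.3, 50.0),
    (0.2, 20.0),
]
DEMAND = 100.0
RESERVE_COST = 1.0        # per unit of committed reserve
SHORTFALL_PENALTY = 20.0  # per unit of unserved demand

def expected_cost(reserve):
    cost = RESERVE_COST * reserve
    for prob, wind in SCENARIOS:
        shortfall = max(0.0, DEMAND - wind - reserve)
        cost += prob * SHORTFALL_PENALTY * shortfall
    return cost

# enumerate candidate reserve levels (an LP/MIP solver in practice)
best_reserve = min(range(0, 101), key=expected_cost)
```

Note the contrast with a mean-forecast decision: the mean wind here is 59, so planning against the mean commits only 41 units of reserve and leaves a large shortfall in the low-wind scenario, whereas the stochastic program hedges against the worst scenario.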
Optimal design for nonlinear estimation of the hemodynamic response function.
Maus, Bärbel; van Breukelen, Gerard J P; Goebel, Rainer; Berger, Martijn P F
2012-06-01
Subject-specific hemodynamic response functions (HRFs) have been recommended to capture variation in the form of the hemodynamic response between subjects (Aguirre et al., [1998]: Neuroimage 8:360-369). The purpose of this article is to find optimal designs for estimation of subject-specific parameters for the double gamma HRF. As the double gamma function is a nonlinear function of its parameters, optimal design theory for nonlinear models is employed in this article. The double gamma function is linearized by a Taylor approximation and the maximin criterion is used to handle dependency of the D-optimal design on the expansion point of the Taylor approximation. A realistic range of double gamma HRF parameters is used for the expansion point of the Taylor approximation. Furthermore, a genetic algorithm (GA) (Kao et al., [2009]: Neuroimage 44:849-856) is applied to find locally optimal designs for the different expansion points and the maximin design chosen from the locally optimal designs is compared to maximin designs obtained by m-sequences, blocked designs, designs with constant interstimulus interval (ISI) and random event-related designs. The maximin design obtained by the GA is most efficient.  Random event-related designs chosen from several generated designs and m-sequences have a high efficiency, while blocked designs and designs with a constant ISI have a low efficiency compared to the maximin GA design.
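The maximin logic can be illustrated with a much simpler nonlinear model than the double gamma HRF: a one-parameter exponential, where the locally D-optimal sampling time depends on the unknown parameter, so a maximin design maximizes the worst-case relative efficiency over a parameter range. Everything below (model, parameter grid, candidate times) is a toy stand-in, not the article's design problem.

```python
# Toy maximin-design sketch (one-parameter exponential model, not the
# double gamma HRF): maximize the worst-case relative efficiency of a
# single sampling time over a range of plausible parameter values.

import math

def info(t, theta):
    """Fisher information for y = exp(-theta * t) at one time point."""
    return (t * math.exp(-theta * t)) ** 2

def rel_efficiency(t, theta):
    t_local = 1.0 / theta                 # locally D-optimal time
    return info(t, theta) / info(t_local, theta)

THETAS = (0.5, 1.0, 2.0)                  # plausible parameter values
CANDIDATES = [i / 100 for i in range(1, 401)]  # times 0.01 .. 4.00

maximin_t = max(CANDIDATES,
                key=lambda t: min(rel_efficiency(t, th) for th in THETAS))
```

The maximin time is a compromise between the locally optimal times (2.0, 1.0, and 0.5 here), and its worst-case efficiency beats that of naively designing for the middle parameter value; the article applies the same criterion over Taylor expansion points, with a GA searching the far larger space of fMRI designs.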
NASA Astrophysics Data System (ADS)
Chen, Zhenzhong; Han, Junwei; Ngan, King Ngi
2005-10-01
MPEG-4 treats a scene as a composition of several objects or so-called video object planes (VOPs) that are separately encoded and decoded. Such a flexible video coding framework makes it possible to code different video objects at different distortion scales. It is necessary to analyze the priority of the video objects according to their semantic importance, intrinsic properties, and psycho-visual characteristics, such that the bit budget can be distributed properly among video objects to improve the perceptual quality of the compressed video. This paper aims to provide an automatic video object priority definition method based on an object-level visual attention model and further proposes an optimization framework for video object bit allocation. One significant contribution of this work is that human visual system characteristics are incorporated into the video coding optimization process. Another advantage is that the priority of the video object can be obtained automatically instead of fixing weighting factors before encoding or relying on user interactivity. To evaluate the performance of the proposed approach, we compare it with traditional verification model bit allocation and the optimal multiple video object bit allocation algorithms. Compared with traditional bit allocation algorithms, the objective quality of the object with higher priority is significantly improved under this framework. These results demonstrate the usefulness of this unsupervised subjective quality lifting framework.
Multidisciplinary design optimization: An emerging new engineering discipline
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw
1993-01-01
This paper defines the Multidisciplinary Design Optimization (MDO) as a new field of research endeavor and as an aid in the design of engineering systems. It examines the MDO conceptual components in relation to each other and defines their functions.
Malikopoulos, Andreas
2015-01-01
The increasing urgency to extract additional efficiency from hybrid propulsion systems has led to the development of advanced power management control algorithms. In this paper we address the problem of online optimization of the supervisory power management control in parallel hybrid electric vehicles (HEVs). We model HEV operation as a controlled Markov chain and we show that the control policy yielding the Pareto optimal solution minimizes online the long-run expected average cost per unit time criterion. The effectiveness of the proposed solution is validated through simulation and compared to the solution derived with dynamic programming using the average cost criterion. Both solutions achieved the same cumulative fuel consumption demonstrating that the online Pareto control policy is an optimal control policy.
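The long-run average cost criterion can be sketched on a hypothetical two-state HEV model (the states, transition probabilities, and per-state fuel costs below are invented): each control policy induces a Markov chain, and its long-run expected average cost is the stationary distribution weighted by the per-state cost.

```python
# Sketch of the average-cost criterion (hypothetical 2-state model):
# each policy induces a Markov chain; its long-run expected average
# cost is the stationary distribution weighted by per-state cost.

def stationary(p01, p10):
    """Stationary distribution of a 2-state chain with transition
    probabilities p01 (state 0 -> 1) and p10 (state 1 -> 0)."""
    pi0 = p10 / (p01 + p10)
    return pi0, 1.0 - pi0

def average_cost(policy):
    p01, p10, cost = policy     # transitions and per-state fuel cost
    pi0, pi1 = stationary(p01, p10)
    return pi0 * cost[0] + pi1 * cost[1]

# hypothetical policies: (p01, p10, (cost in state 0, cost in state 1))
engine_heavy = (0.2, 0.8, (5.0, 1.0))   # dwells in the expensive state
battery_heavy = (0.8, 0.2, (5.0, 1.0))  # dwells in the cheap state

best = min((engine_heavy, battery_heavy), key=average_cost)
```

Online optimization in the paper's sense amounts to steering the chain toward the policy whose stationary behaviour minimizes this criterion, without first solving the full dynamic program offline.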
INNOVATIVE METHODS FOR THE OPTIMIZATION OF GRAVITY STORM SEWER DESIGN
The purpose of this paper is to describe a new method for optimizing the design of urban storm sewer systems. Previous efforts to optimize gravity sewers have met with limited success because classical optimization methods require that the problem be well behaved, e.g. describ...
A design optimization process for Space Station Freedom
NASA Technical Reports Server (NTRS)
Chamberlain, Robert G.; Fox, George; Duquette, William H.
1990-01-01
The Space Station Freedom Program is used to develop and implement a process for design optimization. Because the relative worth of arbitrary design concepts cannot be assessed directly, comparisons must be based on designs that provide the same performance from the point of view of station users; such designs can be compared in terms of life cycle cost. Since the technology required to produce a space station is widely dispersed, a decentralized optimization process is essential. A formulation of the optimization process is provided and the mathematical models designed to facilitate its implementation are described.
New Approaches to HSCT Multidisciplinary Design and Optimization
NASA Technical Reports Server (NTRS)
Schrage, Daniel P.; Craig, James I.; Fulton, Robert E.; Mistree, Farrokh
1999-01-01
New approaches to MDO have been developed and demonstrated during this project on a particularly challenging aeronautics problem: HSCT aeroelastic wing design. Tackling this problem required the integration of resources and collaboration from three Georgia Tech laboratories: ASDL, SDL, and PPRL, along with close coordination and participation from industry. Its success can also be attributed to the close interaction and involvement of fellows from the NASA Multidisciplinary Analysis and Optimization (MAO) program, which was going on in parallel and provided additional resources to work the very complex, multidisciplinary problem, along with the methods being developed. The development of the Integrated Design Engineering Simulator (IDES) and its initial demonstration is a necessary first step in transitioning the methods and tools developed to larger industrial-sized problems of interest. It also provides a framework for the implementation and demonstration of the methodology. Attachment: Appendix A - List of publications. Appendix B - Year 1 report. Appendix C - Year 2 report. Appendix D - Year 3 report. Appendix E - accompanying CDROM.
A Matrix-Free Algorithm for Multidisciplinary Design Optimization
NASA Astrophysics Data System (ADS)
Lambe, Andrew Borean
Multidisciplinary design optimization (MDO) is an approach to engineering design that exploits the coupling between components or knowledge disciplines in a complex system to improve the final product. In aircraft design, MDO methods can be used to simultaneously design the outer shape of the aircraft and the internal structure, taking into account the complex interaction between the aerodynamic forces and the structural flexibility. Efficient strategies are needed to solve such design optimization problems and guarantee convergence to an optimal design. This work begins with a comprehensive review of MDO problem formulations and solution algorithms. First, a fundamental MDO problem formulation is defined from which other formulations may be obtained through simple transformations. Using these fundamental problem formulations, decomposition methods from the literature are reviewed and classified. All MDO methods are presented in a unified mathematical notation to facilitate greater understanding. In addition, a novel set of diagrams, called extended design structure matrices, is used to simultaneously visualize both data communication and process flow between the many software components of each method. For aerostructural design optimization, modern decomposition-based MDO methods cannot efficiently handle the tight coupling between the aerodynamic and structural states. This fact motivates the exploration of methods that can reduce the computational cost. A particular structure in the direct and adjoint methods for gradient computation motivates the idea of a matrix-free optimization method. A simple matrix-free optimizer is developed based on the augmented Lagrangian algorithm. This new matrix-free optimizer is tested on two structural optimization problems and one aerostructural optimization problem. The results indicate that the matrix-free optimizer is able to efficiently solve structural and multidisciplinary design problems with thousands of variables and
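The augmented Lagrangian idea behind the matrix-free optimizer can be sketched on a toy equality-constrained quadratic, using only gradient evaluations (no Hessian or Jacobian matrices) and a first-order multiplier update between inner solves. This is a minimal illustration under assumed step sizes and penalty parameter, not the thesis's optimizer.

```python
def f(x, y):
    return (x - 2.0)**2 + (y - 1.0)**2

def c(x, y):                      # equality constraint c(x, y) = 0
    return x + y - 2.0

def grad_aug_lag(x, y, lam, rho):
    # gradient of L = f + lam*c + (rho/2)*c^2; gradients only, matrix-free
    m = lam + rho * c(x, y)       # multiplier term shared by both partials
    return (2.0 * (x - 2.0) + m, 2.0 * (y - 1.0) + m)

def solve(rho=10.0, outer=20, inner=500, step=0.01):
    x = y = lam = 0.0
    for _ in range(outer):
        for _ in range(inner):    # inner solve: plain gradient descent
            gx, gy = grad_aug_lag(x, y, lam, rho)
            x -= step * gx
            y -= step * gy
        lam += rho * c(x, y)      # first-order multiplier update
    return x, y

x, y = solve()
print(round(x, 3), round(y, 3))   # approaches the optimum (1.5, 0.5)
```

The exact solution projects (2, 1) onto the line x + y = 2, giving (1.5, 0.5); the multiplier converges to its optimal value 1.0 without any matrix factorization.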
Antenna Design Using the Efficient Global Optimization (EGO) Algorithm
2011-05-20
small antennas in a parasitic super directive array configuration. (b) A comparison of the driven super directive gain achievable with these...we discuss antenna design optimization using EGO. The first antenna design is a parasitic super directive array where we compare EGO with a classic...In Section 4 (RESULTS AND DISCUSSION) we present design optimizations for parasitic, super directive arrays; wideband antenna design; and the
Optimizing the Design of Computer Classrooms: The Physical Environment.
ERIC Educational Resources Information Center
Huffman, Heather B.; Jernstedt, G. Christian; Reed, Virginia A.; Reber, Emily S.; Burns, Mathew B.; Oostenink, Richard J.; Williams, Margot T.
2003-01-01
Suggests two guiding principles as a framework to interpret the research findings of environmental psychology that focus on effective classroom design: effective design promotes attention in the classroom and allows for periodic shifts of learner activities. Examines these principles as they apply to the design of a computer classroom, reviewing…
Reusable rocket engine intelligent control system framework design, phase 2
NASA Technical Reports Server (NTRS)
Nemeth, ED; Anderson, Ron; Ols, Joe; Olsasky, Mark
1991-01-01
Elements of an advanced functional framework for reusable rocket engine propulsion system control are presented for the Space Shuttle Main Engine (SSME) demonstration case. Functional elements of the baseline functional framework are defined in detail. The SSME failure modes are evaluated and specific failure modes identified for inclusion in the advanced functional framework diagnostic system. Active control of the SSME start transient is investigated, leading to the identification of a promising approach to mitigating start transient excursions. Key elements of the functional framework are simulated and demonstration cases are provided. Finally, the advanced functional framework for control of reusable rocket engines is presented.
NASA Technical Reports Server (NTRS)
Braun, R. D.; Kroo, I. M.
1995-01-01
Collaborative optimization is a design architecture applicable in any multidisciplinary analysis environment but specifically intended for large-scale distributed analysis applications. In this approach, a complex problem is hierarchically decomposed along disciplinary boundaries into a number of subproblems which are brought into multidisciplinary agreement by a system-level coordination process. When applied to problems in a multidisciplinary design environment, this scheme has several advantages over traditional solution strategies: it reduces the amount of information transferred between disciplines, removes large iteration loops, allows the use of different subspace optimizers among the various analysis groups, provides an analysis framework that is easily parallelized and can operate on heterogeneous equipment, and yields a structural framework that is well suited to conventional disciplinary organizations. In this article, the collaborative architecture is developed and its mathematical foundation is presented. An example application is also presented which highlights the potential of this method for use in large-scale design applications.
Formulation of a parametric systems design framework for disaster response planning
NASA Astrophysics Data System (ADS)
Mma, Stephanie Weiya
The occurrence of devastating natural disasters in the past several years has prompted communities, responding organizations, and governments to seek ways to improve disaster preparedness capabilities locally, regionally, nationally, and internationally. A holistic approach to design used in the aerospace and industrial engineering fields enables efficient allocation of resources through applied parametric changes within a particular design to improve performance metrics to selected standards. In this research, this methodology is applied to disaster preparedness, using a community's time to restoration after a disaster as the response metric. A review of the responses from Hurricane Katrina and the 2010 Haiti earthquake, among other prominent disasters, provides observations leading to some current capability benchmarking. A need for holistic assessment and planning exists for communities, but the current response planning infrastructure lacks a standardized framework and standardized assessment metrics. Within the humanitarian logistics community, several different metrics exist, enabling quantification and measurement of a particular area's vulnerability. These metrics, combined with design and planning methodologies from related fields, such as engineering product design, military response planning, and business process redesign, provide insight and a framework from which to begin developing a methodology to enable holistic disaster response planning. The developed methodology was applied to the communities of Shelby County, TN and pre-Hurricane-Katrina Orleans Parish, LA. Available literature and reliable media sources provide information about the different values of system parameters within the decomposition of the community aspects and also about relationships among the parameters. The community was modeled as a system dynamics model and was tested in the implementation of two-, five-, and ten-year improvement plans for Preparedness, Response, and Development
A Generalized Decision Framework Using Multi-objective Optimization for Water Resources Planning
NASA Astrophysics Data System (ADS)
Basdekas, L.; Stewart, N.; Triana, E.
2013-12-01
Colorado Springs Utilities (CSU) is currently engaged in an Integrated Water Resource Plan (IWRP) to address the complex planning scenarios, across multiple time scales, currently faced by CSU. The modeling framework developed for the IWRP uses a flexible data-centered Decision Support System (DSS) with a MODSIM-based modeling system to represent the operation of the current CSU raw water system coupled with a state-of-the-art multi-objective optimization algorithm. Three basic components are required for the framework, which can be implemented for planning horizons ranging from seasonal to interdecadal. First, a water resources system model is required that is capable of reasonable system simulation to resolve performance metrics at the appropriate temporal and spatial scales of interest. The system model should be an existing simulation model, or one developed during the planning process with stakeholders, so that 'buy-in' has already been achieved. Second, a hydrologic scenario tool(s) capable of generating a range of plausible inflows for the planning period of interest is required. This may include paleo informed or climate change informed sequences. Third, a multi-objective optimization model that can be wrapped around the system simulation model is required. The new generation of multi-objective optimization models does not require parameterization, which greatly reduces problem complexity. Bridging the gap between research and practice will be evident as we use a case study from CSU's planning process to demonstrate this framework with specific competing water management objectives. Careful formulation of objective functions, choice of decision variables, and system constraints will be discussed. Rather than treating results as theoretically Pareto optimal in a planning process, we use the powerful multi-objective optimization models as tools to more efficiently and effectively move out of the inferior decision space. The use of this framework will help CSU
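The multi-objective machinery such a framework relies on can be sketched with a minimal Pareto (non-dominated) filter over candidate plans, assuming two objectives to minimize; the candidate (cost, shortage-risk) values below are hypothetical, not CSU data.

```python
def dominates(a, b):
    """True if objective vector a dominates b (minimization: a <= b in
    every objective and strictly better in at least one)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# hypothetical (cost, shortage-risk) outcomes for candidate portfolios
candidates = [(4, 9), (5, 5), (7, 3), (6, 6), (9, 2), (10, 8)]
front = pareto_front(candidates)
print(front)   # the inferior plans (6, 6) and (10, 8) are filtered out
```

Plans like (6, 6) are dominated by (5, 5), cheaper and lower risk at once, which is exactly the "inferior decision space" the optimization is used to escape.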
Optimal flexible sample size design with robust power.
Zhang, Lanju; Cui, Lu; Yang, Bo
2016-08-30
It is well recognized that sample size determination is challenging because of the uncertainty on the treatment effect size. Several remedies are available in the literature. Group sequential designs start with a sample size based on a conservative (smaller) effect size and allow early stopping at interim looks. Sample size re-estimation designs start with a sample size based on an optimistic (larger) effect size and allow sample size increase if the observed effect size is smaller than planned. Different opinions favoring one type over the other exist. We propose an optimal approach using an appropriate optimality criterion to select the best design among all the candidate designs. Our results show that (1) for the same type of designs, for example, group sequential designs, there is room for significant improvement through our optimization approach; (2) optimal promising zone designs appear to have no advantages over optimal group sequential designs; and (3) optimal designs with sample size re-estimation deliver the best adaptive performance. We conclude that to deal with the challenge of sample size determination due to effect size uncertainty, an optimal approach can help to select the best design that provides most robust power across the effect size range of interest. Copyright © 2016 John Wiley & Sons, Ltd.
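The trade-off being optimized can be seen in a simple normal-approximation power calculation for fixed designs: sizing for a conservative effect size keeps power robust across the range, at a higher sample-size cost. The sample sizes and effect sizes below are illustrative assumptions, not values from the paper.

```python
import math

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def power(n_per_group, delta, alpha=0.05):
    """Normal-approximation power of a two-sample z-test with standardized
    effect size delta and n subjects per group."""
    z_crit = 1.959964        # Phi^{-1}(1 - alpha/2) for alpha = 0.05
    return phi(delta * math.sqrt(n_per_group / 2.0) - z_crit)

# conservative design (sized for delta = 0.3) vs optimistic (delta = 0.5)
n_conservative = 175         # roughly 80% power at delta = 0.3
n_optimistic = 64            # roughly 80% power at delta = 0.5
for delta in (0.3, 0.4, 0.5):
    print(delta, round(power(n_conservative, delta), 2),
          round(power(n_optimistic, delta), 2))
```

The optimistic design loses substantial power when the true effect is small, which is the uncertainty that group sequential and re-estimation designs hedge against.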
Progress in multidisciplinary design optimization at NASA Langley
NASA Technical Reports Server (NTRS)
Padula, Sharon L.
1993-01-01
Multidisciplinary Design Optimization refers to some combination of disciplinary analyses, sensitivity analysis, and optimization techniques used to design complex engineering systems. The ultimate objective of this research at NASA Langley Research Center is to help the US industry reduce the costs associated with development, manufacturing, and maintenance of aerospace vehicles while improving system performance. This report reviews progress towards this objective and highlights topics for future research. Aerospace design problems selected from the author's research illustrate strengths and weaknesses in existing multidisciplinary optimization techniques. The techniques discussed include multiobjective optimization, global sensitivity equations and sequential linear programming.
Integration of Physical Design and Sequential Optimization
2006-03-06
synchronous digital circuits. A sufficient condition for the solution to the optimal clock scheduling problem in the face of process variations is given... Optimization for the Lagrangian dual:
1: k ← 0
2: x, y ← argmin_{x,y} L(x, y, k)
3: while KKT conditions are not satisfied do
4:     k ← max(0, k + γ · g(x, y))
5:     x, y ← argmin_{x,y} L(x, ...
...associated with the clock distribution tree in a digital synchronous circuit. A model for this problem and a sufficient condition for its optimal solution is
Optimal Design of Calibration Signals in Space Borne Gravitational Wave Detectors
NASA Technical Reports Server (NTRS)
Nofrarias, Miquel; Karnesis, Nikolaos; Gibert, Ferran; Armano, Michele; Audley, Heather; Danzmann, Karsten; Diepholz, Ingo; Dolesi, Rita; Ferraioli, Luigi; Thorpe, James I.
2014-01-01
Future space borne gravitational wave detectors will require a precise definition of calibration signals to ensure the achievement of their design sensitivity. The careful design of the test signals plays a key role in the correct understanding and characterization of these instruments. In that sense, methods achieving optimal experiment designs must be considered as complementary to the parameter estimation methods being used to determine the parameters describing the system. The relevance of experiment design is particularly significant for the LISA Pathfinder mission, which will spend most of its operation time performing experiments to characterize key technologies for future space borne gravitational wave observatories. Here we propose a framework to derive the optimal signals, in terms of minimum parameter uncertainty, to be injected into these instruments during their calibration phase. We compare our results with an alternative numerical algorithm which achieves an optimal input signal by iteratively improving an initial guess. We show agreement of both approaches when applied to the LISA Pathfinder case.
Furlanetto, Sandra; Orlandini, Serena; Pasquini, Benedetta; Caprini, Claudia; Mura, Paola; Pinzauti, Sergio
2015-10-01
A fast capillary zone electrophoresis method for the simultaneous analysis of glibenclamide and its impurities (I(A) and I(B)) in pharmaceutical dosage forms was fully developed within a quality by design framework. Critical quality attributes were represented by I(A) peak efficiency, critical resolution between glibenclamide and I(B), and analysis time. Experimental design was efficiently used for rapid and systematic method optimization. A 3^5//16 symmetric screening matrix was chosen for investigation of the five selected critical process parameters throughout the knowledge space, and the results obtained were the basis for the planning of the subsequent response surface study. A Box-Behnken design for three factors allowed the contour plots to be drawn and the design space to be identified by introduction of the concept of probability. The design space corresponded to the multidimensional region where all the critical quality attributes reached the desired values with a degree of probability π ≥ 90%. Under the selected working conditions, the full separation of the analytes was obtained in less than 2 min. A full factorial design simultaneously allowed the design space to be validated and method robustness to be tested. A control strategy was finally implemented by means of a system suitability test. The method was fully validated and was applied to real samples of glibenclamide tablets.
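The response-surface step above uses a Box-Behnken design; generating its coded runs is mechanical, as this minimal sketch shows (the number of center points is an assumption, not taken from the paper).

```python
from itertools import combinations

def box_behnken(n_factors, n_center=3):
    """Coded Box-Behnken design: all +/-1 combinations for each factor
    pair with the remaining factors at 0, plus replicated center points."""
    runs = []
    for i, j in combinations(range(n_factors), 2):
        for a in (-1, 1):
            for b in (-1, 1):
                row = [0] * n_factors
                row[i], row[j] = a, b
                runs.append(row)
    runs += [[0] * n_factors] * n_center
    return runs

design = box_behnken(3)
print(len(design))   # 3 pairs x 4 sign combinations + 3 center points = 15 runs
```

Each non-center run perturbs exactly two factors, which is what lets the design estimate quadratic response surfaces without corner points.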
Optimal fractional order PID design via Tabu Search based algorithm.
Ateş, Abdullah; Yeroglu, Celaleddin
2016-01-01
This paper presents an optimization method based on the Tabu Search Algorithm (TSA) to design a Fractional-Order Proportional-Integral-Derivative (FOPID) controller. All FOPID parameters are computed from random initial conditions using the proposed optimization method. Illustrative examples demonstrate the performance of the proposed FOPID controller design method.
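A minimal Tabu Search sketch of the kind of parameter tuning described above: a short-memory tabu list forbids revisiting recent points while the search always moves to the best admissible neighbor. The quadratic cost is a toy stand-in for a closed-loop performance index over controller gains, not the authors' objective or FOPID parameterization.

```python
import random

def tabu_search(objective, x0, step=0.1, iters=200, tabu_len=20, seed=1):
    """Minimal Tabu Search: sample a random neighborhood of the current
    point, forbid recently visited (rounded) points, track the best."""
    rng = random.Random(seed)
    x = list(x0)
    best, best_f = list(x), objective(x)
    tabu = []
    for _ in range(iters):
        neighbors = []
        for _ in range(15):
            cand = [xi + rng.uniform(-step, step) for xi in x]
            key = tuple(round(c, 2) for c in cand)
            if key not in tabu:
                neighbors.append((objective(cand), cand, key))
        if not neighbors:
            continue
        f, x, key = min(neighbors)   # accept best non-tabu move, even if worse
        tabu.append(key)
        tabu = tabu[-tabu_len:]      # fixed-length tabu memory
        if f < best_f:
            best, best_f = list(x), f
    return best, best_f

# toy stand-in for a closed-loop cost over three controller gains
cost = lambda p: (p[0] - 1.2)**2 + (p[1] - 0.4)**2 + (p[2] - 0.8)**2
gains, f_best = tabu_search(cost, [0.0, 0.0, 0.0])
print([round(g, 2) for g in gains], round(f_best, 4))
```

Accepting the best non-tabu neighbor even when it worsens the cost is what lets the method climb out of local minima, unlike plain hill-climbing.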
Topology and boundary shape optimization as an integrated design tool
NASA Technical Reports Server (NTRS)
Bendsoe, Martin Philip; Rodrigues, Helder Carrico
1990-01-01
The optimal topology of a two dimensional linear elastic body can be computed by regarding the body as a domain of the plane with a high density of material. Such an optimal topology can then be used as the basis for a shape optimization method that computes the optimal form of the boundary curves of the body. This results in an efficient and reliable design tool, which can be implemented via a common FEM mesh generator and CAD-type input/output facilities.
Design and Optimization of Composite Gyroscope Momentum Wheel Rings
NASA Technical Reports Server (NTRS)
Bednarcyk, Brett A.; Arnold, Steven M.
2007-01-01
Stress analysis and preliminary design/optimization procedures are presented for gyroscope momentum wheel rings composed of metallic, metal matrix composite, and polymer matrix composite materials. The design of these components involves simultaneously minimizing both true part volume and mass, while maximizing angular momentum. The stress analysis results are combined with an anisotropic failure criterion to formulate a new sizing procedure that provides considerable insight into the design of gyroscope momentum wheel ring components. Results compare the performance of two optimized metallic designs, an optimized SiC/Ti composite design, and an optimized graphite/epoxy composite design. The graphite/epoxy design appears to be far superior to the competitors considered unless a much greater premium is placed on volume efficiency compared to mass efficiency.
Design optimization of an opposed piston brake caliper
NASA Astrophysics Data System (ADS)
Sergent, Nicolas; Tirovic, Marko; Voveris, Jeronimas
2014-11-01
Successful brake caliper designs must be light and stiff, preventing excessive deformation and extended brake pedal travel. These conflicting requirements are difficult to optimize owing to complex caliper geometry, loading and interaction of individual brake components (pads, disc and caliper). The article studies a fixed, four-pot (piston) caliper, and describes in detail the computer-based topology optimization methodology applied to obtain two optimized designs. At first sight, relatively different designs (named 'Z' and 'W') were obtained by minor changes to the designable volume and boundary conditions. However, on closer inspection, the same main bridge design features could be recognized. Both designs offered considerable reduction of caliper mass, by 19% and 28%, respectively. Further finite element analyses conducted on one of the optimized designs (Z caliper) showed which individual bridge features and their combinations are the most important in maintaining caliper stiffness.
A modular approach to large-scale design optimization of aerospace systems
NASA Astrophysics Data System (ADS)
Hwang, John T.
Gradient-based optimization and the adjoint method form a synergistic combination that enables the efficient solution of large-scale optimization problems. Though the gradient-based approach struggles with non-smooth or multi-modal problems, the capability to efficiently optimize up to tens of thousands of design variables provides a valuable design tool for exploring complex tradeoffs and finding unintuitive designs. However, the widespread adoption of gradient-based optimization is limited by the implementation challenges for computing derivatives efficiently and accurately, particularly in multidisciplinary and shape design problems. This thesis addresses these difficulties in two ways. First, to deal with the heterogeneity and integration challenges of multidisciplinary problems, this thesis presents a computational modeling framework that solves multidisciplinary systems and computes their derivatives in a semi-automated fashion. This framework is built upon a new mathematical formulation developed in this thesis that expresses any computational model as a system of algebraic equations and unifies all methods for computing derivatives using a single equation. The framework is applied to two engineering problems: the optimization of a nanosatellite with 7 disciplines and over 25,000 design variables; and simultaneous allocation and mission optimization for commercial aircraft involving 330 design variables, 12 of which are integer variables handled using the branch-and-bound method. In both cases, the framework makes large-scale optimization possible by reducing the implementation effort and code complexity. The second half of this thesis presents a differentiable parametrization of aircraft geometries and structures for high-fidelity shape optimization. Existing geometry parametrizations are not differentiable, or they are limited in the types of shape changes they allow. This is addressed by a novel parametrization that smoothly interpolates aircraft
Kreitler, Jason; Stoms, David M; Davis, Frank W
2014-01-01
Quantitative methods of spatial conservation prioritization have traditionally been applied to issues in conservation biology and reserve design, though their use in other types of natural resource management is growing. The utility maximization problem is one form of a covering problem where multiple criteria can represent the expected social benefits of conservation action. This approach allows flexibility with a problem formulation that is more general than typical reserve design problems, though the solution methods are very similar. However, few studies have addressed optimization in utility maximization problems for conservation planning, and the effect of solution procedure is largely unquantified. Therefore, this study mapped five criteria describing elements of multifunctional agriculture to determine a hypothetical conservation resource allocation plan for agricultural land conservation in the Central Valley of CA, USA. We compared solution procedures within the utility maximization framework to determine the difference between an open source integer programming approach and a greedy heuristic, and find gains from optimization of up to 12%. We also model land availability for conservation action as a stochastic process and determine the decline in total utility compared to the globally optimal set using both solution algorithms. Our results are comparable to other studies illustrating the benefits of optimization for different conservation planning problems, and highlight the importance of maximizing the effectiveness of limited funding for conservation and natural resource management.
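The greedy heuristic the study compares against integer programming can be sketched as utility-per-cost selection under a budget. The parcel names, utilities, and costs below are hypothetical illustrations, not data from the Central Valley study.

```python
def greedy_select(parcels, budget):
    """Greedy heuristic for the budgeted utility-maximization problem:
    repeatedly pick the parcel with the best utility-per-cost ratio that
    still fits in the remaining budget."""
    chosen, total = [], 0.0
    remaining = dict(parcels)                 # name -> (utility, cost)
    while True:
        affordable = {k: (u, c) for k, (u, c) in remaining.items()
                      if c <= budget}
        if not affordable:
            return chosen, total
        name = max(affordable, key=lambda k: affordable[k][0] / affordable[k][1])
        u, c = remaining.pop(name)
        chosen.append(name)
        total += u
        budget -= c

# hypothetical parcels: name -> (conservation utility, acquisition cost)
parcels = {"A": (10.0, 4.0), "B": (7.0, 3.0), "C": (6.0, 3.0), "D": (4.0, 1.0)}
sel, util = greedy_select(parcels, budget=7.0)
print(sel, util)
```

On this toy instance the greedy picks D then A for total utility 14, while the exact budget-feasible optimum is {A, B} with utility 17: the kind of gap between heuristic and integer-programming solutions that the study quantifies.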
An uncertain multidisciplinary design optimization method using interval convex models
NASA Astrophysics Data System (ADS)
Li, Fangyi; Luo, Zhen; Sun, Guangyong; Zhang, Nong
2013-06-01
This article proposes an uncertain multi-objective multidisciplinary design optimization methodology, which employs the interval model to represent the uncertainties of uncertain-but-bounded parameters. The interval number programming method is applied to transform each uncertain objective function into two deterministic objective functions, and a satisfaction degree of intervals is used to convert both the uncertain inequality and equality constraints to deterministic inequality constraints. In doing so, an unconstrained deterministic optimization problem will be constructed in association with the penalty function method. The design will be finally formulated as a nested three-loop optimization, a class of highly challenging problems in the area of engineering design optimization. An advanced hierarchical optimization scheme is developed to solve the proposed optimization problem based on the multidisciplinary feasible strategy, which is a well-studied method able to reduce the dimensions of multidisciplinary design optimization problems by using the design variables as independent optimization variables. In the hierarchical optimization system, the non-dominated sorting genetic algorithm II, sequential quadratic programming method and Gauss-Seidel iterative approach are applied to the outer, middle and inner loops of the optimization problem, respectively. Typical numerical examples are used to demonstrate the effectiveness of the proposed methodology.
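The interval number programming step, converting each uncertain objective into two deterministic ones, can be sketched by bounding the objective over the parameter interval and reporting its midpoint and radius. Here the bounds come from dense sampling of a toy objective; the article's formulation is analytical, so this is an illustration only.

```python
def interval_objective(f, x, p_lo, p_hi, n=101):
    """Bound an uncertain objective f(x, p) over p in [p_lo, p_hi] by
    dense sampling, and return the two deterministic objectives used in
    interval number programming: midpoint and radius."""
    vals = [f(x, p_lo + (p_hi - p_lo) * i / (n - 1)) for i in range(n)]
    lo, hi = min(vals), max(vals)
    return 0.5 * (lo + hi), 0.5 * (hi - lo)   # midpoint, radius

# toy uncertain objective: quadratic with an uncertain-but-bounded coefficient
f = lambda x, p: p * x**2 - 2.0 * x
mid, rad = interval_objective(f, x=1.5, p_lo=0.9, p_hi=1.1)
print(round(mid, 3), round(rad, 3))
```

Minimizing the midpoint targets nominal performance while minimizing the radius targets robustness; trading the two off is what makes the transformed problem multi-objective.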
A study of commuter airplane design optimization
NASA Technical Reports Server (NTRS)
Roskam, J.; Wyatt, R. D.; Griswold, D. A.; Hammer, J. L.
1977-01-01
Problems of commuter airplane configuration design were studied to effect a minimization of direct operating costs. Factors considered were the minimization of fuselage drag, methods of wing design, and the estimated drag of an airplane submerged in a propeller slipstream; all design criteria were studied under a set of fixed performance, mission, and stability constraints. Configuration design data were assembled for application by a computerized design methodology program similar to the NASA-Ames General Aviation Synthesis Program.
Framework GRASP: a routine library for optimized processing of aerosol remote sensing observations
NASA Astrophysics Data System (ADS)
Fuertes, David; Torres, Benjamin; Dubovik, Oleg; Litvinov, Pavel; Lapyonok, Tatyana; Ducos, Fabrice; Aspetsberger, Michael; Federspiel, Christian
We present the development of a framework for the Generalized Retrieval of Aerosol and Surface Properties (GRASP) algorithm developed by Dubovik et al. (2011). The framework is a source-code project that strengthens the value of the GRASP inversion algorithm by transforming it into a library that can then be used by a group of customized application modules. The functions of the independent modules include managing the configuration of the code execution, as well as preparing the input and output. The framework provides a number of advantages in utilization of the code. First, it loads data into the core of the scientific code directly from memory, without passing through intermediary files on disk. Second, the framework allows consecutive use of the inversion code without re-initializing the core routine when new input is received. These features are essential for optimizing the performance of data production when processing large observation sets, such as satellite images, with GRASP. Furthermore, the framework is a very convenient tool for further development, because this open-source platform is easily extended with new features. For example, it can accommodate loading raw data into the inversion code directly from a specific instrument not included in the default settings of the software. Finally, it will be demonstrated that, from the user's point of view, the framework provides a flexible, powerful, and informative configuration system.
Design optimization of a magnetorheological brake in powered knee orthosis
NASA Astrophysics Data System (ADS)
Ma, Hao; Liao, Wei-Hsin
2015-04-01
Magneto-rheological (MR) fluids have been utilized in devices like orthoses and prostheses to generate controllable braking torque. In this paper, a flat-shaped rotary MR brake is designed for a powered knee orthosis to provide adjustable resistance. A multiple-disk structure with an interior coil is adopted in the MR brake configuration. In order to increase the maximal magnetic flux, a novel internal structure design with a smooth transition surface is proposed. Based on this design, a parameterized model of the MR brake is built for geometrical optimization. Multiple factors are considered in the optimization objective: braking torque, weight, and, particularly, average power consumption. The optimization is then performed with Finite Element Analysis (FEA), and the optimal design is obtained from the Pareto-optimal set considering the trade-offs in design objectives.
Optimal input design for aircraft instrumentation systematic error estimation
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
1991-01-01
A new technique for designing optimal flight test inputs for accurate estimation of instrumentation systematic errors was developed and demonstrated. A simulation model of the F-18 High Angle of Attack Research Vehicle (HARV) aircraft was used to evaluate the effectiveness of the optimal input compared to an input recorded during flight test. Instrumentation systematic error parameter estimates and their standard errors were compared. It was found that, for a fixed input duration, the optimal input design improved the error parameter estimates and their accuracies. Pilot acceptability of the optimal input design was demonstrated using a six-degree-of-freedom fixed-base piloted simulation of the F-18 HARV. The technique described in this work provides a practical, optimal procedure for designing inputs for data compatibility experiments.
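The principle behind optimal input design for systematic-error estimation can be sketched with a D-optimality comparison on a toy measurement model, y_i = (1 + k)·u_i + b + noise, where scale factor k and bias b are to be estimated jointly. This is an illustration of the idea, not the F-18 HARV formulation.

```python
def d_optimality(inputs):
    """Determinant of the 2x2 Fisher information matrix for jointly
    estimating a scale-factor and bias error from measurements
    y_i = (1 + k) * u_i + b + noise (unit noise variance assumed)."""
    n = len(inputs)
    s1 = sum(inputs)
    s2 = sum(u * u for u in inputs)
    return s2 * n - s1 * s1          # det([[s2, s1], [s1, n]])

# a constant input cannot separate bias from scale factor...
constant = [1.0] * 8
# ...while an input spanning distinct levels identifies both parameters
varied = [-1.0, -1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0]
print(d_optimality(constant), d_optimality(varied))
```

The constant input yields a singular information matrix (determinant zero), so no amount of data can separate the two errors; an optimal input maximizes this determinant subject to flight constraints.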
Optimal shielding design for minimum materials cost or mass
Woolley, Robert D.
2015-12-02
The mathematical underpinnings of cost optimal radiation shielding designs based on an extension of optimal control theory are presented, a heuristic algorithm to iteratively solve the resulting optimal design equations is suggested, and computational results for a simple test case are discussed. A typical radiation shielding design problem can have infinitely many solutions, all satisfying the problem's specified set of radiation attenuation requirements. Each such design has its own total materials cost. For a design to be optimal, no admissible change in its deployment of shielding materials can result in a lower cost. This applies in particular to very small changes, which can be restated using the calculus of variations as the Euler-Lagrange equations. Furthermore, the associated Hamiltonian function and application of Pontryagin's theorem lead to conditions for a shield to be optimal.
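The full treatment in the abstract rests on the calculus of variations and Pontryagin's conditions; a much simpler illustration of cost-optimal shielding is the single-ray, single-material slab below, where the optimum has a closed form. The material data are hypothetical.

```python
import math

def cheapest_slab(materials, transmission_limit):
    """Single-ray toy problem: meet exp(-mu * x) <= transmission_limit
    at minimum cost.  With one material per slab and cost linear in
    thickness, the best material maximizes attenuation per unit cost
    (mu / cost); the required thickness then follows analytically."""
    optical_depth = -math.log(transmission_limit)   # required mu * x
    mu, cost = max(materials, key=lambda m: m[0] / m[1])
    thickness = optical_depth / mu
    return (mu, cost), thickness, thickness * cost

# Hypothetical materials: (attenuation coefficient [1/cm], cost [$/cm])
dense = (0.8, 5.0)     # lead-like: attenuates strongly, expensive
cheap = (0.2, 0.5)     # concrete-like: weaker, cheap
mat, x_cm, total_cost = cheapest_slab([dense, cheap], transmission_limit=1e-3)
```

For this attenuation target the cheap material wins on attenuation per dollar, which mirrors the paper's point that many admissible designs satisfy the same requirement at very different costs.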
Gearbox design for uncertain load requirements using active robust optimization
NASA Astrophysics Data System (ADS)
Salomon, Shaul; Avigad, Gideon; Purshouse, Robin C.; Fleming, Peter J.
2016-04-01
Design and optimization of gear transmissions have been intensively studied, but surprisingly the robustness of the resulting optimal design to uncertain loads has never been considered. Active Robust (AR) optimization is a methodology to design products that attain robustness to uncertain or changing environmental conditions through adaptation. In this study the AR methodology is utilized to optimize the number of transmissions, as well as their gearing ratios, for an uncertain load demand. The problem is formulated as a bi-objective optimization problem where the objectives are to satisfy the load demand in the most energy efficient manner and to minimize production cost. The results show that this approach can find a set of robust designs, revealing a trade-off between energy efficiency and production cost. This can serve as a useful decision-making tool for the gearbox design process, as well as for other applications.
A Hierarchical Biology Concept Framework: A Tool for Course Design
Khodor, Julia; Halme, Dina Gould; Walker, Graham C.
2004-01-01
A typical undergraduate biology curriculum covers a very large number of concepts and details. We describe the development of a Biology Concept Framework (BCF) as a possible way to organize this material to enhance teaching and learning. Our BCF is hierarchical, places details in context, nests related concepts, and articulates concepts that are inherently obvious to experts but often difficult for novices to grasp. Our BCF is also cross-referenced, highlighting interconnections between concepts. We have found our BCF to be a versatile tool for design, evaluation, and revision of course goals and materials. There has been a call for creating Biology Concept Inventories, multiple-choice exams that test important biology concepts, analogous to those in physics, astronomy, and chemistry. We argue that the community of researchers and educators must first reach consensus about not only what concepts are important to test, but also how the concepts should be organized and how that organization might influence teaching and learning. We think that our BCF can serve as a catalyst for community-wide discussion on organizing the vast number of concepts in biology, as a model for others to formulate their own BCFs, and as a contribution toward the creation of a comprehensive BCF.
Optimizing spacecraft design - optimization engine development : progress and plans
NASA Technical Reports Server (NTRS)
Cornford, Steven L.; Feather, Martin S.; Dunphy, Julia R; Salcedo, Jose; Menzies, Tim
2003-01-01
At JPL and NASA, a process has been developed to perform life cycle risk management. This process requires users to identify: goals and objectives to be achieved (and their relative priorities), the various risks to achieving those goals and objectives, and options for risk mitigation (prevention, detection ahead of time, and alleviation). Risks are broadly defined to include the risk of failing to design a system with adequate performance, compatibility and robustness in addition to more traditional implementation and operational risks. The options for mitigating these different kinds of risks can include architectural and design choices, technology plans and technology back-up options, test-bed and simulation options, engineering models and hardware/software development techniques and other more traditional risk reduction techniques.
A unified framework for chaotic neural-network approaches to combinatorial optimization.
Kwok, T; Smith, K A
1999-01-01
As an attempt to provide an organized way to study the chaotic structures and their effects in solving combinatorial optimization with chaotic neural networks (CNN's), a unifying framework is proposed to serve as a basis where the existing CNN models can be placed and compared. The key of this proposed framework is the introduction of an extra energy term into the computational energy of the Hopfield model, which takes on different forms for different CNN models, and modifies the original Hopfield energy landscape in various manners. Three CNN models, namely the Chen and Aihara model with self-feedback chaotic simulated annealing (CSA), the Wang and Smith model with timestep CSA, and the chaotic noise model, are chosen as examples to show how they can be classified and compared within the proposed framework.
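The shared idea across the models surveyed, an extra energy term that injects chaotic dynamics and is then annealed away, can be illustrated with a single neuron. The parameters below are illustrative, not those of the Chen and Aihara or Wang and Smith models.

```python
import math

def chaotic_neuron(steps=300, z0=0.08, beta=0.995, eps=0.05, a=0.5):
    """Single neuron with a decaying self-feedback term z: the extra
    term perturbs the Hopfield-like dynamics early on and is annealed
    away, letting the output settle (illustrative parameters only)."""
    y, z, outputs = 0.1, z0, []
    for _ in range(steps):
        x = 1.0 / (1.0 + math.exp(-y / eps))   # sigmoid activation
        y = 0.9 * y + a - z * (x - 0.5)        # leaky state + self-feedback term
        z *= beta                              # chaotic simulated annealing: decay z
        outputs.append(x)
    return outputs

trace = chaotic_neuron()
```

Early iterations wander under the self-feedback perturbation; as z decays toward zero the dynamics reduce to the plain Hopfield-style update and the output converges, which is the annealing mechanism the framework uses to classify the different CNN models.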
An Instructional Design Framework for Fostering Student Engagement in Online Learning Environments
ERIC Educational Resources Information Center
Czerkawski, Betul C.; Lyman, Eugene W.
2016-01-01
Many approaches, models and frameworks exist when designing quality online learning environments. These approaches assist and guide instructional designers through the process of analysis, design, development, implementation and evaluation of instructional processes. Some of these frameworks are concerned with student participation, some with…
A Kernel Machine Framework for Feature Optimization in Multi-frequency Sonar Imagery
2006-09-01
best performance with the fewest number of basis Table III OBJECT MEASUREMENTS USED TO CONSTRUCT FEATURES Textural Measurements GLCM Contrast...each object in each image, 17 measurements are made and used to construct the features . Of these 17 measurements, 5 are textural based and are...A Kernel Machine Framework for Feature Optimization in Multi-frequency Sonar Imagery J.R. Stack and R. Arrieta Naval Surface Warfare Center
Spacecraft design optimization using Taguchi analysis
NASA Technical Reports Server (NTRS)
Unal, Resit
1991-01-01
The quality engineering methods of Dr. Genichi Taguchi, employing design of experiments, are important statistical tools for designing high quality systems at reduced cost. The Taguchi method was utilized to study several simultaneous parameter level variations of a lunar aerobrake structure to arrive at the lightest weight configuration. Finite element analysis was used to analyze the unique experimental aerobrake configurations selected by Taguchi method. Important design parameters affecting weight and global buckling were identified and the lowest weight design configuration was selected.
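The core of the Taguchi method here, running an orthogonal array of configurations and picking the best level of each parameter from main effects, can be sketched on a toy two-level array. The array, parameter meanings, and weight values below are hypothetical; the study used FEA results for the aerobrake configurations.

```python
# Hypothetical two-level L4 orthogonal array for three parameters (0/1 = level)
L4 = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

def main_effects(array, responses):
    """Taguchi-style main-effects analysis: average response at each
    level of each factor; the preferred level minimizes the mean
    response (here, structural weight)."""
    n_factors = len(array[0])
    effects = []
    for f in range(n_factors):
        by_level = {}
        for run, y in zip(array, responses):
            by_level.setdefault(run[f], []).append(y)
        effects.append({lvl: sum(ys) / len(ys) for lvl, ys in by_level.items()})
    return effects

# Hypothetical analysis-computed weights [kg] for the four runs
weights = [120.0, 110.0, 105.0, 118.0]
effects = main_effects(L4, weights)
best_levels = [min(e, key=e.get) for e in effects]  # lightest level per factor
```

Only four runs are needed to estimate all three main effects, which is the economy that makes designed experiments attractive for expensive finite element evaluations.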
Optimal experiment design for identification of large space structures
NASA Technical Reports Server (NTRS)
Bayard, D. S.; Hadaegh, F. Y.; Meldrum, D. R.
1988-01-01
The optimal experiment design for on-orbit identification of modal frequency and damping parameters in large flexible space structures is discussed. The main result is a separation principle for D-optimal design which states that under certain conditions the sensor placement problem is decoupled from the input design problem. This decoupling effect significantly simplifies the overall optimal experiment design determination for large MIMO structural systems with many unknown modal parameters. The error from using the uncoupled design is estimated in terms of the inherent damping of the structure. A numerical example is given, demonstrating the usefulness of the simplified criteria in determining optimal designs for on-orbit Space Station identification experiments.
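D-optimal design, as used above, selects experimental conditions maximizing the determinant of the Fisher information. A brute-force sketch for a linear model is below; the candidate sensitivity rows are random placeholders, not a structural model.

```python
import numpy as np
from itertools import combinations

def d_optimal_rows(X_cand, k):
    """Brute-force D-optimal selection: choose k candidate rows
    (e.g. sensor placements) maximizing det(X^T X), the determinant
    of the Fisher information matrix for a linear model."""
    best_idx, best_det = None, -1.0
    for idx in combinations(range(len(X_cand)), k):
        X = X_cand[list(idx)]
        d = float(np.linalg.det(X.T @ X))
        if d > best_det:
            best_idx, best_det = idx, d
    return best_idx, best_det

rng = np.random.default_rng(1)
C = rng.standard_normal((6, 2))      # hypothetical sensitivity rows
sel, det_val = d_optimal_rows(C, 3)
```

The combinatorial search is exponential in the number of candidates, which is why a separation principle that decouples sensor placement from input design, as in the abstract, matters for large MIMO structures.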
Optimal design of multi-conditions for axial flow pump
NASA Astrophysics Data System (ADS)
Shi, L. J.; Tang, F. P.; Liu, C.; Xie, R. S.; Zhang, W. P.
2016-11-01
Passage components of an axial-flow pump device exhibit adverse flow states when the pump runs off its design condition. Combining model tests of an axial-flow pump with numerical simulation and numerical optimization techniques, this paper varies the geometric design parameters of the impeller to perform a multi-condition optimal design, in order to improve efficiency at non-design conditions, broaden the high-efficiency region, and reduce operating cost. The results show that the optimized efficiency curve is significantly wider than the initial, unoptimized one. Efficiency at the low-flow working point increased by about 2.6%, at the design point by about 0.5%, and at the high-flow working point by the most, about 7.4%. The change in head is small, so all working points can meet the operational requirements. This greatly reduces operating costs and shortens the optimal design period. The study adopts CFD simulation as the main analysis tool, combined with experimental study, instead of experience-based manual optimization, demonstrating the reliability and efficiency of multi-condition optimal design of axial-flow pump devices.
Optimizing Experimental Designs: Finding Hidden Treasure.
Technology Transfer Automated Retrieval System (TEKTRAN)
Classical experimental design theory, the predominant treatment in most textbooks, promotes the use of blocking designs for control of spatial variability in field studies and other situations in which there is significant heterogeneity among experimental units. Many blocking design...
Synthetic Gene Design Using Codon Optimization On-Line (COOL).
Yu, Kai; Ang, Kok Siong; Lee, Dong-Yup
2017-01-01
Codon optimization has been widely used for designing native or synthetic genes to enhance their expression in heterologous host organisms. We recently developed Codon Optimization On-Line (COOL) which is a web-based tool to provide multi-objective codon optimization functionality for synthetic gene design. COOL provides a simple and flexible interface for customizing codon optimization based on several design parameters such as individual codon usage, codon pairing, and codon adaptation index. User-defined sequences can also be compared against the COOL optimized ones to show the extent by which the user's sequences can be evaluated and further improved. The utility of COOL is demonstrated via a case study where the codon optimized sequence of an invertase enzyme is generated for the enhanced expression in E. coli.
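One of the design parameters named above, the codon adaptation index (CAI), has a simple definition: the geometric mean of each codon's relative adaptiveness in the host. The w table below is a tiny hypothetical excerpt, not E. coli's actual table.

```python
import math

# Hypothetical relative-adaptiveness (w) values for a few codons in the host
W = {"CTG": 1.0, "CTC": 0.4, "GCG": 1.0, "GCC": 0.7}

def cai(sequence):
    """Codon Adaptation Index: the geometric mean of the relative
    adaptiveness w of every codon in the coding sequence."""
    codons = [sequence[i:i + 3] for i in range(0, len(sequence), 3)]
    return math.exp(sum(math.log(W[c]) for c in codons) / len(codons))

score = cai("CTGGCGCTC")  # codons CTG, GCG, CTC
```

Swapping the rare CTC for the preferred CTG would raise the score to 1.0, which is the kind of substitution a multi-objective optimizer trades off against codon pairing and other criteria.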
Wang, Wei; Slepčev, Dejan; Basu, Saurav; Ozolek, John A.
2012-01-01
Transportation-based metrics for comparing images have long been applied to analyze images, especially where one can interpret the pixel intensities (or derived quantities) as a distribution of ‘mass’ that can be transported without strict geometric constraints. Here we describe a new transportation-based framework for analyzing sets of images. More specifically, we describe a new transportation-related distance between pairs of images, which we denote as linear optimal transportation (LOT). The LOT can be used directly on pixel intensities, and is based on a linearized version of the Kantorovich-Wasserstein metric (an optimal transportation distance, as is the earth mover’s distance). The new framework is especially well suited for computing all pairwise distances for a large database of images efficiently, and thus it can be used for pattern recognition in sets of images. In addition, the new LOT framework also allows for an isometric linear embedding, greatly facilitating the ability to visualize discriminant information in different classes of images. We demonstrate the application of the framework to several tasks such as discriminating nuclear chromatin patterns in cancer cells, decoding differences in facial expressions, classifying galaxy morphologies, and distinguishing subcellular protein distributions.
Optimal design in pediatric pharmacokinetic and pharmacodynamic clinical studies.
Roberts, Jessica K; Stockmann, Chris; Balch, Alfred; Yu, Tian; Ward, Robert M; Spigarelli, Michael G; Sherwin, Catherine M T
2015-03-01
It is not trivial to conduct clinical trials with pediatric participants. Ethical, logistical, and financial considerations add to the complexity of pediatric studies. Optimal design theory allows investigators the opportunity to apply mathematical optimization algorithms to define how to structure their data collection to answer focused research questions. These techniques can be used to determine an optimal sample size, optimal sample times, and the number of samples required for pharmacokinetic and pharmacodynamic studies. The aim of this review is to demonstrate how to determine optimal sample size, optimal sample times, and the number of samples required from each patient by presenting specific examples using optimal design tools. Additionally, this review aims to discuss the relative usefulness of sparse vs rich data. This review is intended to educate the clinician, as well as the basic research scientist, who plan to conduct a pharmacokinetic/pharmacodynamic clinical trial in pediatric patients.
An Lq-Lp optimization framework for image reconstruction of electrical resistance tomography
NASA Astrophysics Data System (ADS)
Zhao, Jia; Xu, Yanbin; Dong, Feng
2014-12-01
Image reconstruction in electrical resistance tomography (ERT) is an ill-posed, nonlinear problem that is easily affected by measurement noise. Regularization with an L2 or L1 constraint term is often used to solve the ERT inverse problem. Reconstruction with L2 regularization imposes smoothness to obtain stability, which blurs the interfaces between regions of different conductivity. Regularization with the L1 norm counters these over-smoothing effects and is beneficial for obtaining sharp transitions in the conductivity distribution. To investigate the reason for these effects, an Lq-Lp optimization framework (1 ⩽ q ⩽ 2, 1 ⩽ p ⩽ 2) for ERT image reconstruction is presented in this paper. The Lq-Lp optimization framework is solved based on a smooth approximation with a Gauss-Newton iteration algorithm. The framework is tested on image reconstruction of ERT with different models, and the effects of the Lp regularization term on the quality of the reconstructed images are discussed with both simulation and experiment. Comparing the reconstructed results for different p in the regularization term shows that small components of the solution incur a large penalty when p is small and a lesser penalty when p is larger; a larger p also makes the reconstructed images smoother and more easily affected by noise.
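The effect of the Lp penalty's exponent can be demonstrated on a toy linearized problem. The sketch below uses iteratively reweighted least squares, a common smooth approximation of the Lp term solved by one linear system per iteration (the paper's Gauss-Newton treatment is in the same spirit); the sensitivity matrix and sparse "conductivity perturbation" are synthetic.

```python
import numpy as np

def lp_regularized(J, b, lam=0.1, p=1.0, iters=50, eps=1e-6):
    """Minimize ||J x - b||_2^2 + lam * ||x||_p^p by iteratively
    reweighted least squares: the Lp term is approximated at each
    step by a weighted quadratic, giving a linear solve per iteration."""
    x = np.zeros(J.shape[1])
    for _ in range(iters):
        w = p * (np.abs(x) + eps) ** (p - 2)    # quadratic weights for the Lp term
        x = np.linalg.solve(J.T @ J + lam * np.diag(w), J.T @ b)
    return x

# Toy linearized problem: a sparse conductivity perturbation
rng = np.random.default_rng(0)
J = rng.standard_normal((20, 10))
x_true = np.zeros(10)
x_true[3] = 1.0
b = J @ x_true
x_p1 = lp_regularized(J, b, p=1.0)   # sharper, sparser reconstruction
x_p2 = lp_regularized(J, b, p=2.0)   # smoother reconstruction
```

With p = 1 the reweighting penalizes small components heavily and drives them toward zero, while p = 2 reduces to a ridge solve that spreads the misfit smoothly, matching the qualitative contrast the abstract describes.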
Optimal design of a composite structure
NASA Technical Reports Server (NTRS)
Graesser, D. L.; Zabinsky, Z. B.; Tuttle, M. E.; Kim, G. I.
1993-01-01
This paper presents a design methodology for a laminated composite stiffened panel, subjected to multiple in-plane loads and bending moments. Design variables include the skin and stiffener ply orientation angles and stiffener geometry variables. Optimum designs are sought which minimize structural weight and satisfy mechanical performance requirements. Two types of mechanical performance requirements are placed on the panel, maximum strain and minimum strength. Minimum weight designs are presented which document that the choice of mechanical performance requirements cause changes in the optimum design. The effects of lay-up constraints which limit the ply angles to user specified values, such as symmetric or quasi-isotropic laminates, are also investigated.
Trajectory optimization software for planetary mission design
NASA Technical Reports Server (NTRS)
D'Amario, Louis A.
1989-01-01
The development history and characteristics of the interactive trajectory-optimization programs MOSES (D'Amario et al., 1981) and PLATO (D'Amario et al., 1982) are briefly reviewed, with an emphasis on their application to the Galileo mission. The requirements imposed by a mission involving flybys of several planetary satellites or planets are discussed; the formulation of the parameter-optimization problem is outlined; and particular attention is given to the use of multiconic methods to model the gravitational attraction of Jupiter in MOSES. Diagrams and tables of numerical data are included.
Vasquez Osorio, Eliana M.; Hoogeman, Mischa S.; Bondar, Luiza; Levendag, Peter C.; Heijmen, Ben J. M.
2009-07-15
Technical improvements in planning and dose delivery and in verification of patient positioning have substantially widened the therapeutic window for radiation treatment of cancer. However, changes in patient anatomy during the treatment limit the exploitation of these new techniques. To further improve radiation treatments, anatomical changes need to be modeled and accounted for. Nonrigid registration can be used for this purpose. This article describes the design, the implementation, and the validation of a new framework for nonrigid registration for radiotherapy applications. The core of this framework is an improved version of the thin plate spline robust point matching (TPS-RPM) algorithm. The TPS-RPM algorithm estimates a global correspondence and a transformation between the points that represent organs of interest belonging to two image sets. However, the algorithm does not allow for the inclusion of prior knowledge on the correspondence of a subset of points, and therefore, it can lead to inconsistent anatomical solutions. In this article TPS-RPM was improved by employing a novel correspondence filter that supports simultaneous registration of multiple structures. The improved method allows for coherent organ registration and for the inclusion of user-defined landmarks, lines, and surfaces inside and outside of structures of interest. A procedure to generate control points from segmented organs is described. The framework parameters r and λ, which control the number of points and the nonrigidness of the transformation, respectively, were optimized for three sites with different degrees of deformation (head and neck, prostate, and cervix) using two cases per site. For the head and neck cases, the salivary glands were manually contoured on CT scans, for the prostate cases the prostate and the vesicles, and for the cervix cases the cervix uterus, the bladder, and the rectum. The transformation error obtained using the best set of parameters was below 1
NASA Astrophysics Data System (ADS)
Leahy, Michael
Morphing an aircraft wingtip can provide substantial performance improvement. Most civil transport aircraft are optimized for range, while other flight conditions such as take-off and climb are treated as constraints. These constraints can reduce the performance of the aircraft at cruise. By altering the shape of the wingtip, the load distribution can be forced to adapt to the required flight condition to improve performance. Using a Variable Geometry Truss Mechanism (VGTM) concept to morph the wingtip of an aircraft within a Multidisciplinary Design Optimization (MDO) framework, the current work attempts to find optimal wing and wingtip shapes that minimize fuel consumption over multiple morphing stages during cruise. The optimization is conducted with a Particle Swarm Optimization (PSO) algorithm, using tools of different fidelity to analyze the aerodynamic and structural disciplines.
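The PSO algorithm mentioned above is simple to sketch. The version below is a minimal, generic implementation run on a toy objective standing in for a fuel-burn evaluation; the study itself couples PSO to aerodynamic and structural analyses of varying fidelity, and all parameter values here are illustrative.

```python
import random

def pso(f, dim, n=20, iters=100, w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
    """Minimal particle swarm optimizer for a scalar objective f:
    each particle tracks its personal best, and the swarm shares a
    global best that attracts every velocity update."""
    random.seed(0)                                  # reproducible demo
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    pbest = [x[:] for x in X]                       # personal best positions
    pcost = [f(x) for x in X]
    g = pbest[pcost.index(min(pcost))][:]           # global best position
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * random.random() * (pbest[i][d] - X[i][d])
                           + c2 * random.random() * (g[d] - X[i][d]))
                X[i][d] += V[i][d]
            cost = f(X[i])
            if cost < pcost[i]:
                pbest[i], pcost[i] = X[i][:], cost
                if cost < f(g):
                    g = X[i][:]
    return g, f(g)

# Toy stand-in for a fuel-burn objective: a sphere with its optimum at the origin
best_x, best_f = pso(lambda x: sum(v * v for v in x), dim=3)
```

Because PSO only needs objective values, not gradients, it pairs naturally with black-box aerodynamic and structural solvers, at the price of many function evaluations.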
Multidisciplinary design optimization of mechatronic vehicles with active suspensions
NASA Astrophysics Data System (ADS)
He, Yuping; McPhee, John
2005-05-01
A multidisciplinary optimization method is applied to the design of mechatronic vehicles with active suspensions. The method is implemented in a GA-A'GEM-MATLAB simulation environment: the linear mechanical vehicle model is built in a multibody dynamics software package (A'GEM), the controllers and estimators are constructed using the linear quadratic Gaussian (LQG) method and a Kalman filter algorithm in MATLAB, and the combined mechanical and control model is then optimized simultaneously using a genetic algorithm (GA). The design variables include both passive parameters and control parameters. In the numerical optimizations, both random and deterministic road inputs are considered, as well as both perfect measurement of the full state and estimation from limited state variables. Optimization results show that active suspension systems based on the multidisciplinary optimization method have better overall performance than those derived using conventional design methods with the LQG algorithm.
Execution of Multidisciplinary Design Optimization Approaches on Common Test Problems
NASA Technical Reports Server (NTRS)
Balling, R. J.; Wilkinson, C. A.
1997-01-01
A class of synthetic problems for testing multidisciplinary design optimization (MDO) approaches is presented. These test problems are easy to reproduce because all functions are given as closed-form mathematical expressions. They are constructed in such a way that the optimal value of all variables and the objective is unity. The test problems involve three disciplines and allow the user to specify the number of design variables, state variables, coupling functions, design constraints, controlling design constraints, and the strength of coupling. Several MDO approaches were executed on two sample synthetic test problems. These approaches included single-level optimization approaches, collaborative optimization approaches, and concurrent subspace optimization approaches. Execution results are presented, and the robustness and efficiency of these approaches are evaluated for these sample problems.
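The construction idea, closed-form functions whose optimal variables and objective all equal unity, can be shown with a toy instance. This is a sketch of the principle only, not the paper's actual three-discipline problem family.

```python
def objective(x):
    """Toy synthetic objective in the spirit described: every design
    variable equals one at the optimum, and the optimal objective
    value is also one (sketch only, not the paper's exact family)."""
    return 1.0 + sum((xi - 1.0) ** 2 for xi in x)

def coupling(x):
    """A toy 'state variable' function, likewise equal to one at the optimum."""
    return sum(x) / len(x)

x_star = [1.0, 1.0, 1.0]
f_star = objective(x_star)     # 1.0 at the optimum
y_star = coupling(x_star)      # 1.0 at the optimum
```

Because the optimum is known exactly, any MDO approach run on such a problem can be scored directly by how far its answer lands from unity, which is what makes the class useful for benchmarking.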
Rethinking modeling framework design: object modeling system 3.0
Technology Transfer Automated Retrieval System (TEKTRAN)
The Object Modeling System (OMS) is a framework for environmental model development, data provisioning, testing, validation, and deployment. It provides a bridge for transferring technology from the research organization to the program delivery agency. The framework provides a consistent and efficie...
Optimizing Your K-5 Engineering Design Challenge
ERIC Educational Resources Information Center
Coppola, Matthew Perkins; Merz, Alice H.
2017-01-01
Today, elementary school teachers continue to revisit old lessons and seek out new ones, especially in engineering. Optimization is the process by which an existing product or procedure is revised and refined. Drawn from the authors' experiences working directly with students in grades K-5 and their teachers and preservice teachers, the…
Origami Optimization: Role of Symmetry in Accelerating Design
NASA Astrophysics Data System (ADS)
Buskohl, Philip; Fuchi, Kazuko; Bazzan, Giorgio; Durstock, Michael; Reich, Gregory; Joo, James; Vaia, Richard
Origami structures morph between 2D and 3D conformations along predetermined fold lines that efficiently program the form, function and mobility of the structure. Design optimization tools have recently been developed to predict optimal fold patterns with mechanics-based metrics, such as maximal energy storage, auxetic response and actuation. Origami actuator design problems possess inherent symmetries associated with the grid, mechanical boundary conditions and the objective function, which are often exploited to reduce the design space and computational cost of optimization. However, enforcing symmetry eliminates the prediction of potentially better performing asymmetric designs, which are more likely to exist given the discrete nature of fold line optimization. To better understand this effect, actuator design problems with different combinations of rotation and reflection symmetries were optimized while varying the number of folds allowed in the final design. In each case, the optimal origami patterns transitioned between symmetric and asymmetric solutions depending on the number of folds available for the design, with fewer symmetries retained as more fold lines were allowed. This study investigates the interplay of symmetry and discrete vs continuous optimization in origami actuators and provides insight into how the symmetries of the reference grid regulate the performance landscape. This work was supported by the Air Force Office of Scientific Research.
Lessons Learned During Solutions of Multidisciplinary Design Optimization Problems
NASA Technical Reports Server (NTRS)
Patnaik, Suna N.; Coroneos, Rula M.; Hopkins, Dale A.; Lavelle, Thomas M.
2000-01-01
Optimization research at NASA Glenn Research Center has addressed the design of structures, aircraft and airbreathing propulsion engines. During solution of the multidisciplinary problems several issues were encountered. This paper lists four issues and discusses the strategies adapted for their resolution: (1) The optimization process can lead to an inefficient local solution. This deficiency was encountered during design of an engine component. The limitation was overcome through an augmentation of animation into optimization. (2) Optimum solutions obtained were infeasible for aircraft and air-breathing propulsion engine problems. Alleviation of this deficiency required a cascading of multiple algorithms. (3) Profile optimization of a beam produced an irregular shape. Engineering intuition restored the regular shape for the beam. (4) The solution obtained for a cylindrical shell by a subproblem strategy converged to a design that can be difficult to manufacture. Resolution of this issue remains a challenge. The issues and resolutions are illustrated through six problems: (1) design of an engine component, (2) synthesis of a subsonic aircraft, (3) operation optimization of a supersonic engine, (4) design of a wave-rotor-topping device, (5) profile optimization of a cantilever beam, and (6) design of a cylindrical shell. The combined effort of designers and researchers can bring the optimization method from academia to industry.
Design optimization of a portable, micro-hydrokinetic turbine
NASA Astrophysics Data System (ADS)
Schleicher, W. Chris
Marine and hydrokinetic (MHK) technology is a growing field that encompasses many different types of turbomachinery operating on the kinetic energy of water. Micro-hydrokinetics are a subset of MHK technology comprising units designed to produce less than 100 kW of power. A propeller-type hydrokinetic turbine is investigated as a solution for a portable micro-hydrokinetic turbine, with the needs of the United States Marine Corps in mind as well as future commercial applications. This dissertation investigates using a response surface optimization methodology to create optimal turbine blade designs under many operating conditions. The field of hydrokinetics is introduced. The finite volume method is used to solve the Reynolds-Averaged Navier-Stokes equations with the k-ω Shear Stress Transport model for different propeller-type hydrokinetic turbines. The adaptive response surface optimization methodology is introduced as it relates to hydrokinetic turbines and is benchmarked with complex algebraic functions. The optimization method is further studied to characterize how the size of the experimental design affects its ability to find optimum conditions. It was found that a large deviation between experimental design points was preferable. Different propeller hydrokinetic turbines were designed and compared with other forms of turbomachinery. It was found that the rapid simulations usually under-predict performance compared to the refined simulations, while for some other designs they drastically over-predict performance. The optimization method was also used to optimize a modular pump-turbine, verifying that the approach works for other hydro turbine designs.
Robustness Analysis and Optimally Robust Control Design via Sum-of-Squares
NASA Technical Reports Server (NTRS)
Dorobantu, Andrei; Crespo, Luis G.; Seiler, Peter J.
2012-01-01
A control analysis and design framework is proposed for systems subject to parametric uncertainty. The underlying strategies are based on sum-of-squares (SOS) polynomial analysis and nonlinear optimization to design an optimally robust controller. The approach determines a maximum uncertainty range for which the closed-loop system satisfies a set of stability and performance requirements. These requirements, defined as inequality constraints on several metrics, are restricted to polynomial functions of the uncertainty. To quantify robustness, SOS analysis is used to prove that the closed-loop system complies with the requirements for a given uncertainty range. The maximum uncertainty range, calculated by assessing a sequence of increasingly larger ranges, serves as a robustness metric for the closed-loop system. To optimize the control design, nonlinear optimization is used to enlarge the maximum uncertainty range by tuning the controller gains. Hence, the resulting controller is optimally robust to parametric uncertainty. This approach balances the robustness margins corresponding to each requirement in order to maximize the aggregate system robustness. The proposed framework is applied to a simple linear short-period aircraft model with uncertain aerodynamic coefficients.
Optimal Design of Aortic Leaflet Prosthesis
NASA Technical Reports Server (NTRS)
Ghista, Dhanjoo N.; Reul, Helmut; Ray, Gautam; Chandran, K. B.
1978-01-01
The design criteria for an optimum prosthetic-aortic leaflet valve are a smooth washout in the valve cusps, minimal leaflet stress, minimal transmembrane pressure for the valve to open, an adequate lifetime (for a given blood-compatible leaflet material's fatigue data). A rigorous design analysis is presented to obtain the prosthetic tri-leaflet aortic valve leaflet's optimum design parameters. Four alternative optimum leaflet geometries are obtained to satisfy the criteria of a smooth washout and minimal leaflet stress. The leaflet thicknesses of these four optimum designs are determined by satisfying the two remaining design criteria for minimal transmembrane opening pressure and adequate fatigue lifetime, which are formulated in terms of the elastic and fatigue properties of the selected leaflet material - Avcothane-51 (of the Avco-Everett Co. of Massachusetts). Prosthetic valves are fabricated on the basis of the optimum analysis and the resulting detailed engineering drawings of the designs are also presented in the paper.
Rajavel, Rajkumar; Thangarathinam, Mala
2015-01-01
Optimization of negotiation conflict in the cloud service negotiation framework is identified as one of the major challenging issues. This negotiation conflict occurs during the bilateral negotiation process between the participants due to misperception, aggressive behavior, and uncertain preferences and goals about their opponents. Existing research focuses on the pre-request context of negotiation conflict optimization by grouping similar negotiation pairs using distance, binary, context-dependent, and fuzzy similarity approaches. To some extent, these approaches can maximize the success rate and minimize the communication overhead among the participants. To further optimize the success rate and communication overhead, the proposed research introduces a novel probabilistic decision-making model for optimizing negotiation conflict in the long-term negotiation context. This decision model formulates the problem of managing the different types of negotiation conflict that occur during the negotiation process as a multistage Markov decision problem. At each stage of the negotiation process, the proposed decision model generates a heuristic decision based on the past negotiation state information without causing any break-off among the participants. In addition, this heuristic decision, using the stochastic decision tree scenario, can maximize the revenue among the participants in the cloud service negotiation framework. PMID:26543899
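The multistage Markov decision formulation can be illustrated with finite-horizon backward induction. The states, actions, transition probabilities, and revenues below are invented for the sketch; the paper's actual negotiation model is not reproduced.

```python
# Illustrative finite-horizon MDP for multistage negotiation: at each stage
# the model picks the action maximizing expected revenue, with "breakoff"
# as the state to avoid. All numbers are hypothetical.

STATES = ["far", "near", "agree", "breakoff"]
ACTIONS = ["concede", "hold"]

# P[s][a] = list of (next_state, probability); R[s][a] = immediate revenue.
P = {
    "far":  {"concede": [("near", 0.7), ("far", 0.3)],
             "hold":    [("far", 0.6), ("breakoff", 0.4)]},
    "near": {"concede": [("agree", 0.8), ("near", 0.2)],
             "hold":    [("agree", 0.4), ("breakoff", 0.6)]},
}
R = {"far": {"concede": -1.0, "hold": 0.0},
     "near": {"concede": 4.0, "hold": 6.0}}
TERMINAL = {"agree": 10.0, "breakoff": 0.0}

def backward_induction(horizon):
    """Stage-by-stage heuristic decision from expected future revenue."""
    V = {s: TERMINAL.get(s, 0.0) for s in STATES}
    policy = {}
    for stage in reversed(range(horizon)):
        newV, newPi = dict(V), {}
        for s in ("far", "near"):
            def value(a):
                return R[s][a] + sum(p * V[t] for t, p in P[s][a])
            best = max(ACTIONS, key=value)
            newPi[s], newV[s] = best, value(best)
        V = newV
        policy[stage] = newPi
    return V, policy

V, policy = backward_induction(horizon=5)
print(policy[0], round(V["far"], 3))
```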
Kafetzoglou, Stella; Aristomenopoulos, Giorgos; Papavassiliou, Symeon
2015-08-11
Among the key aspects of the Internet of Things (IoT) is the integration of heterogeneous sensors in a distributed system that performs actions on the physical world based on environmental information gathered by sensors and application-related constraints and requirements. Numerous applications of Wireless Sensor Networks (WSNs) have appeared in various fields, from environmental monitoring, to tactical fields, and healthcare at home, promising to change our quality of life and facilitating the vision of sensor network enabled smart cities. Given the enormous requirements that emerge in such a setting-both in terms of data and energy-data aggregation appears as a key element in reducing the amount of traffic in wireless sensor networks and achieving energy conservation. Probabilistic frameworks have been introduced as operational efficient and performance effective solutions for data aggregation in distributed sensor networks. In this work, we introduce an overall optimization approach that improves and complements such frameworks towards identifying the optimal probability for a node to aggregate packets as well as the optimal aggregation period that a node should wait for performing aggregation, so as to minimize the overall energy consumption, while satisfying certain imposed delay constraints. Primal dual decomposition is employed to solve the corresponding optimization problem while simulation results demonstrate the operational efficiency of the proposed approach under different traffic and topology scenarios.
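A minimal sketch of the primal-dual decomposition step, under strongly simplified stand-in models (not the paper's formulation): node energy falls as 1/p in the aggregation probability p, while delay grows linearly with p, so the problem is to minimize energy subject to a delay budget.

```python
import math

# Dual (sub)gradient sketch of the aggregation trade-off: minimize energy
# E(p) = 1/p over the aggregation probability p, subject to the delay
# constraint T*p <= DMAX. Cost and delay models are illustrative stand-ins.

T, DMAX = 2.0, 1.0          # delay per unit aggregation period, delay budget

def primal_min(lam):
    """Minimize the Lagrangian 1/p + lam*(T*p - DMAX) over p in (0, 1]."""
    p = 1.0 / math.sqrt(lam * T) if lam > 0 else 1.0
    return min(max(p, 1e-6), 1.0)

lam, step = 0.0, 0.05
for _ in range(2000):                      # dual ascent on lam >= 0
    p = primal_min(lam)
    lam = max(0.0, lam + step * (T * p - DMAX))

print(round(p, 3), round(lam, 3))          # approaches p* = DMAX/T = 0.5
```

For this convex toy problem the iteration recovers the analytic optimum p* = DMAX/T with multiplier lam* = 2.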
Total energy control system autopilot design with constrained parameter optimization
NASA Technical Reports Server (NTRS)
Ly, Uy-Loi; Voth, Christopher
1990-01-01
A description is given of the application of a multivariable control design method (SANDY) based on constrained parameter optimization to the design of a multiloop aircraft flight control system. Specifically, the design method is applied to the direct synthesis of a multiloop AFCS inner-loop feedback control system based on total energy control system (TECS) principles. The design procedure offers a structured approach for the determination of a set of stabilizing controller design gains that meet design specifications in closed-loop stability, command tracking performance, disturbance rejection, and limits on control activities. The approach can be extended to a broader class of multiloop flight control systems. Direct tradeoffs between many real design goals are rendered systematic by proper formulation of the design objectives and constraints. Satisfactory designs are usually obtained in few iterations. Performance characteristics of the optimized TECS design have been improved, particularly in the areas of closed-loop damping and control activity in the presence of turbulence.
A proposal of optimal sampling design using a modularity strategy
NASA Astrophysics Data System (ADS)
Simone, A.; Giustolisi, O.; Laucelli, D. B.
2016-08-01
Real water distribution networks (WDNs) contain thousands of nodes, and the optimal placement of pressure and flow observations is a relevant issue for different management tasks. The planning of pressure observations in terms of spatial distribution and number is named sampling design, and it has traditionally been addressed with model calibration in mind. Nowadays, the design of system monitoring is a relevant issue for water utilities, e.g., in order to manage background leakages, to detect anomalies and bursts, to guarantee service quality, etc. In recent years, the optimal location of flow observations, related to the design of optimal district metering areas (DMAs) and to leakage management purposes, has been addressed considering optimal network segmentation and the modularity index using a multiobjective strategy. Optimal network segmentation is the basis for identifying network modules by means of optimal conceptual cuts, which are the candidate locations of the closed gates or flow meters creating the DMAs. Starting from the WDN-oriented modularity index as a metric for WDN segmentation, this paper proposes a new way to perform the sampling design, i.e., the optimal location of pressure meters, using a newly developed sampling-oriented modularity index. The strategy optimizes the pressure monitoring system mainly based on network topology and on weights assigned to pipes according to the specific technical tasks. A multiobjective optimization minimizes the cost of pressure meters while maximizing the sampling-oriented modularity index. The methodology is presented and discussed using the Apulian and Exnet networks.
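For reference, the classical Newman modularity that the WDN-oriented and sampling-oriented indices generalize can be computed as below for a pipe network abstracted as an undirected graph; the pipe weights and task-specific terms of the paper are omitted from this sketch.

```python
# Classical (unweighted) Newman modularity of a node partition:
# Q = sum_c [ m_c/m - (d_c / 2m)^2 ], where m is the edge count, m_c the
# intra-community edges, and d_c the total degree of community c.

def modularity(edges, community):
    m = len(edges)
    internal, degree = {}, {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
        if community[u] == community[v]:
            c = community[u]
            internal[c] = internal.get(c, 0) + 1
    q = 0.0
    for c in set(community.values()):
        d_c = sum(d for n, d in degree.items() if community[n] == c)
        q += internal.get(c, 0) / m - (d_c / (2.0 * m)) ** 2
    return q

# Two 3-node cliques joined by one bridge pipe: a natural 2-module split.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
good = {0: "A", 1: "A", 2: "A", 3: "B", 4: "B", 5: "B"}
bad = {0: "A", 1: "B", 2: "A", 3: "B", 4: "A", 5: "B"}
print(round(modularity(edges, good), 3))   # high Q for the natural split
print(round(modularity(edges, bad), 3))    # negative Q for a poor split
```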
Optimal adaptive sequential designs for crossover bioequivalence studies.
Xu, Jialin; Audet, Charles; DiLiberti, Charles E; Hauck, Walter W; Montague, Timothy H; Parr, Alan F; Potvin, Diane; Schuirmann, Donald J
2016-01-01
In prior works, this group demonstrated the feasibility of valid adaptive sequential designs for crossover bioequivalence studies. In this paper, we extend the prior work to optimize adaptive sequential designs over a range of geometric mean test/reference ratios (GMRs) of 70-143% within each of two ranges of intra-subject coefficient of variation (10-30% and 30-55%). These designs also introduce a futility decision for stopping the study after the first stage if there is sufficiently low likelihood of meeting bioequivalence criteria if the second stage were completed, as well as an upper limit on total study size. The optimized designs exhibited substantially improved performance characteristics over our previous adaptive sequential designs. Even though the optimized designs avoided undue inflation of type I error and maintained power at ≥ 80%, their average sample sizes were similar to or less than those of conventional single stage designs.
Optimal design of geodesically stiffened composite cylindrical shells
NASA Technical Reports Server (NTRS)
Gendron, G.; Gurdal, Z.
1992-01-01
An optimization system based on the general-purpose finite element code CSM Testbed and the optimization program ADS is described. The system can be used to obtain minimum-mass designs of composite shell structures with complex stiffening arrangements. Ply thicknesses, ply orientations, and stiffener heights can be used as design variables. Buckling, displacement, and material failure constraints can be imposed on the design. The system is used to conduct a preliminary design study of geodesically stiffened shells. For comparison purposes, optimal designs of unstiffened shells and of ring- and longitudinal-stringer-stiffened shells are also studied. Trends in the design of geodesically stiffened shells are identified. Features that enhance the capabilities and efficiency of the design system are described.
Silber, Hanna E; Nyberg, Joakim; Hooker, Andrew C; Karlsson, Mats O
2009-06-01
Intravenous glucose tolerance test (IVGTT) provocations are informative, but complex and laborious, for studying the glucose-insulin system. The objective of this study was to evaluate, through optimal design methodology, the possibilities of more informative and/or less laborious study design of the insulin modified IVGTT in type 2 diabetic patients. A previously developed model for glucose and insulin regulation was implemented in the optimal design software PopED 2.0. The following aspects of the study design of the insulin modified IVGTT were evaluated; (1) glucose dose, (2) insulin infusion, (3) combination of (1) and (2), (4) sampling times, (5) exclusion of labeled glucose. Constraints were incorporated to avoid prolonged hyper- and/or hypoglycemia and a reduced design was used to decrease run times. Design efficiency was calculated as a measure of the improvement with an optimal design compared to the basic design. The results showed that the design of the insulin modified IVGTT could be substantially improved by the use of an optimized design compared to the standard design and that it was possible to use a reduced number of samples. Optimization of sample times gave the largest improvement followed by insulin dose. The results further showed that it was possible to reduce the total sample time with only a minor loss in efficiency. Simulations confirmed the predictions from PopED. The predicted uncertainty of parameter estimates (CV) was low in all tested cases, despite the reduction in the number of samples/subject. The best design had a predicted average CV of parameter estimates of 19.5%. We conclude that improvement can be made to the design of the insulin modified IVGTT and that the most important design factor was the placement of sample times followed by the use of an optimal insulin dose. This paper illustrates how complex provocation experiments can be improved by sequential modeling and optimal design.
Design optimization of system level adaptive optical performance
NASA Astrophysics Data System (ADS)
Michels, Gregory J.; Genberg, Victor L.; Doyle, Keith B.; Bisson, Gary R.
2005-09-01
By linking predictive methods from multiple engineering disciplines, engineers are able to compute more meaningful predictions of a product's performance. By coupling mechanical and optical predictive techniques mechanical design can be performed to optimize optical performance. This paper demonstrates how mechanical design optimization using system level optical performance can be used in the development of the design of a high precision adaptive optical telescope. While mechanical design parameters are treated as the design variables, the objective function is taken to be the adaptively corrected optical imaging performance of an orbiting two-mirror telescope.
Field Quality Optimization in a Common Coil Magnet Design
Gupta, R.; Ramberger, S.
1999-09-01
This paper presents the results of initial field quality optimization of body and end harmonics in a 'common coil magnet design'. It is shown that a good field quality, as required in accelerator magnets, can be obtained by distributing conductor blocks in such a way that they simulate an elliptical coil geometry. This strategy assures that the amount of conductor used in this block design is similar to that used in a conventional cosine-theta design. An optimized yoke that keeps all harmonics small over the entire range of operation using a single power supply is also presented. The field harmonics are primarily optimized with the computer program ROXIE.
NASA Astrophysics Data System (ADS)
Ibrahim, Ireen Munira; Liong, Choong-Yeun; Bakar, Sakhinah Abu; Ahmad, Norazura; Najmuddin, Ahmad Farid
2015-12-01
The Emergency Department (ED) is a very complex system with limited resources to support increases in demand. ED services are considered to be of good quality if they can meet patients' expectations. Long waiting times and lengths of stay are the main problems faced by the management. The management of an ED should give greater emphasis to its resource capacity in order to increase the quality of services, in line with patient satisfaction. This paper is a review of work in progress of a study being conducted in a government hospital in Selangor, Malaysia. It proposes a simulation optimization model framework for studying ED operations and problems, as well as for finding an optimal solution to the problems. The integration of simulation and optimization is expected to assist management in the decision-making process regarding resource capacity planning, in order to improve current and future ED operations.
Optimal design of geodesically stiffened composite cylindrical shells
NASA Technical Reports Server (NTRS)
Gendron, G.; Guerdal, Z.
1992-01-01
An optimization system based on the finite element code Computational Structural Mechanics (CSM) Testbed and the optimization program Automated Design Synthesis (ADS) is described. The optimization system can be used to obtain minimum-weight designs of composite stiffened structures. Ply thicknesses, ply orientations, and stiffener heights can be used as design variables. Buckling, displacement, and material failure constraints can be imposed on the design. The system is used to conduct a design study of geodesically stiffened shells. For comparison purposes, optimal designs of unstiffened shells and of shells stiffened by rings and stringers are also obtained. Trends in the design of geodesically stiffened shells are identified. An approach to include local stress concentrations during the design optimization process is then presented. The method is based on a global/local analysis technique. It employs spline interpolation functions to determine displacements and rotations from a global model, which are used as 'boundary conditions' for the local model. The organization of the strategy in the context of an optimization process is described. The method is validated with an example.
Optimal design of one-dimensional photonic crystal filters using minimax optimization approach.
Hassan, Abdel-Karim S O; Mohamed, Ahmed S A; Maghrabi, Mahmoud M T; Rafat, Nadia H
2015-02-20
In this paper, we introduce a simulation-driven optimization approach for achieving the optimal design of electromagnetic wave (EMW) filters consisting of one-dimensional (1D) multilayer photonic crystal (PC) structures. The PC layers' thicknesses and/or material types are considered as designable parameters. The optimal design problem is formulated as a minimax optimization problem that is entirely solved by making use of readily available software tools. The proposed approach allows for the consideration of problems of higher dimension than usually treated before. In addition, it can proceed starting from bad initial design points. The validity, flexibility, and efficiency of the proposed approach are demonstrated by applying it to obtain the optimal design of two practical examples. The first is a (SiC/Ag/SiO(2))(N) wide bandpass optical filter operating in the visible range. The second is an (Ag/SiO(2))(N) EMW low-pass spectral filter, working in the infrared range, which is used for enhancing the efficiency of thermophotovoltaic systems. The approach shows a good ability to converge to the optimal solution for different design specifications, regardless of the starting design point. This ensures that the approach is robust and general enough to be applied to obtaining optimal designs for the full range of promising 1D photonic crystal applications.
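The forward simulation underlying such designs is the standard transfer-matrix method for a 1D multilayer. The sketch below computes the normal-incidence reflectance of a quarter-wave (HL)^N stack; in the paper the layer thicknesses and materials would be the designable parameters fed to the minimax optimizer, whereas here they are fixed illustrative values.

```python
import math

# Transfer-matrix reflectance of a 1D multilayer at normal incidence.
# Each layer contributes M = [[cos d, i sin d / n], [i n sin d, cos d]]
# with phase thickness d = 2*pi*n*t/lambda. Indices below are illustrative.

def reflectance(layers, n_in, n_sub, lam):
    """layers: list of (refractive index, thickness); returns |r|^2."""
    m11, m12, m21, m22 = 1.0, 0.0, 0.0, 1.0
    for n, t in layers:
        d = 2.0 * math.pi * n * t / lam
        c, s = math.cos(d), math.sin(d)
        a11, a12, a21, a22 = c, 1j * s / n, 1j * n * s, c
        m11, m12, m21, m22 = (m11 * a11 + m12 * a21, m11 * a12 + m12 * a22,
                              m21 * a11 + m22 * a21, m21 * a12 + m22 * a22)
    b = m11 + m12 * n_sub
    cc = m21 + m22 * n_sub
    r = (n_in * b - cc) / (n_in * b + cc)
    return abs(r) ** 2

lam0 = 1.0                                  # design wavelength (arbitrary units)
nH, nL, n_sub = 2.3, 1.38, 1.52
stack = [(nH, lam0 / (4 * nH)), (nL, lam0 / (4 * nL))] * 6  # (HL)^6 stack
print(round(reflectance(stack, 1.0, n_sub, lam0), 4))       # stop band: ~1
print(round(reflectance(stack, 1.0, n_sub, 2.0 * lam0), 4)) # pass band: low
```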
Spreadsheet Design: An Optimal Checklist for Accountants
ERIC Educational Resources Information Center
Barnes, Jeffrey N.; Tufte, David; Christensen, David
2009-01-01
Just as good grammar, punctuation, style, and content organization are important to well-written documents, basic fundamentals of spreadsheet design are essential to clear communication. In fact, the very principles of good writing should be integrated into spreadsheet workpaper design and organization. The unique contributions of this paper are…
Computational Model Optimization for Enzyme Design Applications
2007-11-02
naturally occurring E. coli chorismate mutase (EcCM) enzyme through computational design. Although the stated milestone of creating a novel... chorismate mutase (CM) was not achieved, the enhancement of the underlying computational model through the development of the two-body PB method will facilitate the future design of novel protein catalysts.
Optimal design of spatial distribution networks
NASA Astrophysics Data System (ADS)
Gastner, Michael T.; Newman, M. E. J.
2006-07-01
We consider the problem of constructing facilities such as hospitals, airports, or malls in a country with a nonuniform population density, such that the average distance from a person's home to the nearest facility is minimized. We review some previous approximate treatments of this problem, which indicate that the optimal distribution of facilities should have a density that increases with population density, but does so more slowly than linearly, as the two-thirds power. We confirm this result numerically for the particular case of the United States with recent population data, using two independent methods: one a straightforward regression analysis, the other based on density-dependent map projections. We also consider strategies for linking the facilities to form a spatial network, such as a network of flights between airports, so that the combined cost of maintenance of and travel on the network is minimized. We show specific examples of such optimal networks for the case of the United States.
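The two-thirds power can be checked with a back-of-envelope allocation. Split the country into equal-area cells with populations rho_i; with n_i facilities in cell i, the mean travel distance there scales like n_i**-0.5 in two dimensions, so total person-distance is sum of rho_i * n_i**-0.5. Minimizing under a fixed facility budget (Lagrange) gives n_i proportional to rho_i**(2/3); the brute-force search below confirms the closed form for a hypothetical two-cell example.

```python
# Toy check of the two-thirds law: allocate N facilities between two
# equal-area cells to minimize total person-distance sum rho_i * n_i**-0.5.

def cost(rho, n):
    return sum(r * x ** -0.5 for r, x in zip(rho, n))

rho = [1.0, 8.0]                 # one cell eight times denser than the other
N = 9.0

# Closed-form optimum: n_i = N * rho_i^(2/3) / sum_j rho_j^(2/3)
w = [r ** (2.0 / 3.0) for r in rho]
n_opt = [N * wi / sum(w) for wi in w]

# Brute-force check over the continuous split n1 + n2 = N
best = min((cost(rho, [x / 1000.0, N - x / 1000.0]), x / 1000.0)
           for x in range(1, int(N * 1000)))
print([round(v, 3) for v in n_opt], round(best[1], 3))
```

With rho ratios of 1:8, the facility counts come out 1:4, i.e. 8**(2/3) = 4: facility density grows with population density but sublinearly, as the abstract states.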
New approaches to the design optimization of hydrofoils
NASA Astrophysics Data System (ADS)
Beyhaghi, Pooriya; Meneghello, Gianluca; Bewley, Thomas
2015-11-01
Two simulation-based approaches are developed to optimize the design of hydrofoils for foiling catamarans, with the objective of maximizing efficiency (lift/drag). In the first, a simple hydrofoil model based on the vortex-lattice method is coupled with a hybrid global and local optimization algorithm that combines our Delaunay-based optimization algorithm with a Generalized Pattern Search. This optimization procedure is compared with the classical Newton-based optimization method. The accuracy of the vortex-lattice simulation of the optimized design is compared with a more accurate and computationally expensive LES-based simulation. In the second approach, the (expensive) LES model of the flow is used directly during the optimization. A modified Delaunay-based optimization algorithm is used to maximize an efficiency objective that is measured as a finite-time-averaged approximation of the infinite-time-averaged value of an ergodic and stationary process. Since the optimization algorithm takes into account the uncertainty of this finite-time-averaged approximation of the infinite-time-averaged statistic of interest, the total computational time of the optimization algorithm is significantly reduced. Results from the two different approaches are compared.
Optimization applications in aircraft engine design and test
NASA Technical Reports Server (NTRS)
Pratt, T. K.
1984-01-01
Starting with the NASA-sponsored STAEBL program, optimization methods based primarily upon the versatile program COPES/CONMIN were introduced over the past few years to a broad spectrum of engineering problems in structural optimization, engine design, engine test, and, more recently, manufacturing processes. By automating design and testing processes, many repetitive and costly trade-off studies have been replaced by optimization procedures. Rather than taking engineers and designers out of the loop, optimization has, in fact, put them more in control by providing sophisticated search techniques. The ultimate decision whether to accept or reject an optimal feasible design still rests with the analyst. Feedback obtained from this decision process has been invaluable, since it can be incorporated into the optimization procedure to make it more intelligent. On several occasions, optimization procedures have produced novel designs, such as the nonsymmetric placement of rotor case stiffener rings, not anticipated by engineering designers. In another case, a particularly difficult resonance constraint could not be satisfied using hand iterations for a compressor blade; when the STAEBL program was applied to the problem, a feasible solution was obtained in just two iterations.
A study of commuter airplane design optimization
NASA Technical Reports Server (NTRS)
Keppel, B. V.; Eysink, H.; Hammer, J.; Hawley, K.; Meredith, P.; Roskam, J.
1978-01-01
The usability of the general aviation synthesis program (GASP) was enhanced by the development of separate computer subroutines which can be added as a package to this assembly of computerized design methods or used as a separate subroutine program to compute the dynamic longitudinal, lateral-directional stability characteristics for a given airplane. Currently available analysis methods were evaluated to ascertain those most appropriate for the design functions which the GASP computerized design program performs. Methods for providing proper constraint and/or analysis functions for GASP were developed as well as the appropriate subroutines.
Sequential ensemble-based optimal design for parameter estimation
NASA Astrophysics Data System (ADS)
Man, Jun; Zhang, Jiangjiang; Li, Weixuan; Zeng, Lingzao; Wu, Laosheng
2016-10-01
The ensemble Kalman filter (EnKF) has been widely used in parameter estimation for hydrological models. The focus of most previous studies was to develop more efficient analysis (estimation) algorithms. On the other hand, it is intuitively understandable that a well-designed sampling (data-collection) strategy should provide more informative measurements and subsequently improve the parameter estimation. In this work, a Sequential Ensemble-based Optimal Design (SEOD) method, coupled with EnKF, information theory and sequential optimal design, is proposed to improve the performance of parameter estimation. Based on the first-order and second-order statistics, different information metrics including the Shannon entropy difference (SD), degrees of freedom for signal (DFS) and relative entropy (RE) are used to design the optimal sampling strategy, respectively. The effectiveness of the proposed method is illustrated by synthetic one-dimensional and two-dimensional unsaturated flow case studies. It is shown that the designed sampling strategies can provide more accurate parameter estimation and state prediction compared with conventional sampling strategies. Optimal sampling designs based on various information metrics perform similarly in our cases. The effect of ensemble size on the optimal design is also investigated. Overall, larger ensemble size improves the parameter estimation and convergence of optimal sampling strategy. Although the proposed method is applied to unsaturated flow problems in this study, it can be equally applied in any other hydrological problems.
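A one-parameter EnKF analysis step, sketched below, shows the quantity the SEOD design criteria (entropy difference, DFS, relative entropy) are built from: how much a measurement shrinks the ensemble spread. The linear forward model and all numbers are illustrative stand-ins, not the paper's unsaturated-flow problem.

```python
import random

# Scalar ensemble Kalman filter update with perturbed observations.
# The Kalman gain is estimated from ensemble covariances, then each member
# is nudged toward its own noisy copy of the observation.

random.seed(1)

def enkf_update(ensemble, obs, obs_err, h):
    """EnKF analysis step for a scalar parameter with observation operator h."""
    n = len(ensemble)
    hx = [h(x) for x in ensemble]
    xm, hm = sum(ensemble) / n, sum(hx) / n
    cov_xh = sum((x - xm) * (y - hm) for x, y in zip(ensemble, hx)) / (n - 1)
    cov_hh = sum((y - hm) ** 2 for y in hx) / (n - 1)
    gain = cov_xh / (cov_hh + obs_err ** 2)
    return [x + gain * (obs + random.gauss(0, obs_err) - y)
            for x, y in zip(ensemble, hx)]

h = lambda k: 2.0 * k                      # hypothetical linear forward model
truth, obs_err = 1.5, 0.1
prior = [random.gauss(0.0, 1.0) for _ in range(200)]
post = enkf_update(prior, h(truth), obs_err, h)

mean = sum(post) / len(post)
var = sum((x - mean) ** 2 for x in post) / (len(post) - 1)
print(round(mean, 2), round(var, 4))       # mean near truth, variance shrunk
```

A sampling design would then compare candidate observations by how much each is expected to reduce this posterior spread before any data are collected.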
Clean wing airframe noise modeling for multidisciplinary design and optimization
NASA Astrophysics Data System (ADS)
Hosder, Serhat
transport wing at realistic approach conditions. The twist distribution of the baseline wing was changed to obtain a modified wing which was used to investigate the effect of the twist on the trailing edge noise. An example study with NACA 0012 and NACA 0009 airfoils demonstrated a reduction in the trailing edge noise by decreasing the thickness ratio and the lift coefficient, while increasing the chord length to keep the same lift at a constant speed. Both two- and three-dimensional studies demonstrated that the trailing edge noise remains almost constant at low lift coefficients and gets larger at higher lift values. The increase in the noise metric can be dramatic when there is separation on the wing. Three-dimensional effects observed in the wing cases indicate the importance of calculating the noise metric with a characteristic velocity and length scale that vary along the span. The twist change does not have a significant effect on the noise at low lift coefficients, however it may give significant noise reduction at higher lift values. The results obtained in this study show the importance of the lift coefficient, CL, on the airframe noise of a clean wing and favors having a larger wing area to reduce the CL for minimizing the noise. The results also point to the fact that the noise reduction studies should be performed in a multidisciplinary design and optimization framework, since many of the parameters that change the trailing edge noise also affect the other aircraft design requirements. It's hoped that the noise metric developed here can aid in such multidisciplinary design and optimization studies.
An algorithm for optimal structural design with frequency constraints
NASA Technical Reports Server (NTRS)
Kiusalaas, J.; Shaw, R. C. J.
1978-01-01
The paper presents a finite element method for minimum weight design of structures with lower-bound constraints on the natural frequencies, and upper and lower bounds on the design variables. The design algorithm is essentially an iterative solution of the Kuhn-Tucker optimality criterion. The three most important features of the algorithm are: (1) only a small number of design iterations are needed to reach an optimal or near-optimal design, (2) structural elements with a wide variety of size-stiffness relationships may be used, the only significant restriction being the exclusion of curved beam and shell elements, and (3) the algorithm works for multiple as well as single frequency constraints. The design procedure is illustrated with three simple problems.
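The flavor of such Kuhn-Tucker resizing iterations can be shown on the textbook optimality-criteria problem: minimize W = sum of w_i*A_i subject to one reciprocal constraint sum of c_i/A_i <= d, a standard stand-in for a single frequency or displacement limit (not this paper's finite element formulation). The resize rule A_i <- A_i*sqrt(lam*c_i/(w_i*A_i^2)), with the multiplier scaled to keep the constraint active, reaches the analytic optimum in very few iterations, matching the abstract's first claim.

```python
import math

# Optimality-criteria resizing for min-weight design with one reciprocal
# constraint. Element data are illustrative. Analytic optimum:
# A_i = sqrt(c_i/w_i) * sum_j sqrt(c_j*w_j) / d.

def oc_design(w, c, d, iters=5):
    A = [1.0] * len(w)
    for _ in range(iters):
        # KT condition: lam * c_i / (w_i * A_i^2) = 1 for every active element,
        # so the resized areas take the shape sqrt(c_i / w_i) ...
        shape = [math.sqrt(ci / wi) for ci, wi in zip(c, w)]
        # ... scaled by sqrt(lam), chosen so the constraint is exactly active.
        lam_sqrt = sum(math.sqrt(ci * wi) for ci, wi in zip(c, w)) / d
        A = [lam_sqrt * s for s in shape]
    return A

w = [2.0, 1.0, 4.0]       # weight density * length per element
c = [8.0, 2.0, 1.0]       # flexibility coefficients
d = 5.0                   # upper bound on sum c_i / A_i

A = oc_design(w, c, d)
W = sum(wi * ai for wi, ai in zip(w, A))
g = sum(ci / ai for ci, ai in zip(c, A))
print([round(a, 3) for a in A], round(W, 3), round(g, 3))
```

For a single constraint the update converges in one step; with multiple frequency constraints the multipliers must be iterated, which is where the paper's algorithm does its work.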
An interactive system for aircraft design and optimization
NASA Technical Reports Server (NTRS)
Kroo, Ilan M.
1992-01-01
A system for aircraft design utilizing a unique analysis architecture, graphical interface, and suite of numerical optimization methods is described in this paper. The non-procedural architecture provides extensibility and efficiency not possible with conventional programming techniques. The interface for analysis and optimization, developed for use with this method, is described and its application to example problems is discussed.
Optimizing Measurement Designs with Budget Constraints: The Variable Cost Case.
ERIC Educational Resources Information Center
Marcoulides, George A.
1997-01-01
Presents a procedure for determining the optimal number of conditions to use in multifaceted measurement designs when resource constraints are imposed. The procedure is illustrated for the case in which the costs per condition vary within the same facet. (Author)
Optimal design against collapse after buckling. [of beams
NASA Technical Reports Server (NTRS)
Masur, E. F.
1976-01-01
After buckling, statically indeterminate trusses, beams, and other strictly symmetric structures may collapse under loads which reach limiting magnitudes. Optimal design is discussed for prescribed values of these collapse loads.
NASA Astrophysics Data System (ADS)
Payne, Joshua; Knoll, Dana; McPherson, Allen; Taitano, William; Chacon, Luis; Chen, Guangye; Pakin, Scott
2013-10-01
As computer architectures become increasingly heterogeneous, the need for algorithms and applications that can utilize these new architectures grows more pressing. CoCoPIC is a fully implicit, charge- and energy-conserving particle-in-cell framework developed as part of the Computational Co-Design for Multi-Scale Applications in the Natural Sciences (CoCoMANS) project at Los Alamos National Laboratory. CoCoMANS is a multi-disciplinary computational co-design effort with the goal of developing new algorithms for emerging architectures using multi-scale applications. This poster will present the co-design process evolved within CoCoMANS and details regarding the design and development of a multi-architecture framework for a plasma application. This framework utilizes multiple abstraction layers in order to maximize code reuse between architectures, while providing low-level abstractions to incorporate architecture-specific optimizations such as vectorization or hardware fused multiply-add. CoCoPIC's target problems include 1D3V slow shocks and 2D3V magnetic island coalescence. Results of the multi-core development and optimization process will be presented.
Precision of Sensitivity in the Design Optimization of Indeterminate Structures
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Pai, Shantaram S.; Hopkins, Dale A.
2006-01-01
Design sensitivity is central to most optimization methods. The analytical sensitivity expression for an indeterminate structural design optimization problem can be factored into a simple determinate term and a complicated indeterminate component. Sensitivity can be approximated by retaining only the determinate term and setting the indeterminate factor to zero. The optimum solution is reached with the approximate sensitivity. The central processing unit (CPU) time to solution is substantially reduced. The benefit that accrues from using the approximate sensitivity is quantified by solving a set of problems in a controlled environment. Each problem is solved twice: first using the closed-form sensitivity expression, then using the approximation. The problem solutions use the CometBoards testbed as the optimization tool with the integrated force method as the analyzer. The modification that may be required to use the stiffness method as the analysis tool in optimization is discussed. The design optimization problem of an indeterminate structure contains many dependent constraints because of the implicit relationship between stresses, as well as the relationship between the stresses and displacements. The design optimization process can become problematic because the implicit relationship reduces the rank of the sensitivity matrix. The proposed approximation restores the full rank and enhances the robustness of the design optimization method.
ERIC Educational Resources Information Center
Bozkurt, Ipek; Helm, James
2013-01-01
This paper develops a systems engineering-based framework to assist in the design of an online engineering course. Specifically, the purpose of the framework is to provide a structured methodology for the design, development and delivery of a fully online course, either brand new or modified from an existing face-to-face course. The main strength…
Minimum weight design of structures via optimality criteria
NASA Technical Reports Server (NTRS)
Kiusalaas, J.
1972-01-01
The state of the art of automated structural design through the use of optimality criteria, with emphasis on aerospace applications, is reviewed. Constraints on stresses, displacements, and buckling strengths under static loading, as well as lower-bound limits on natural frequencies and flutter speeds, are presented. It is presumed that the reader is experienced in finite element methods of analysis but is not familiar with optimal design techniques.
Process Model Construction and Optimization Using Statistical Experimental Design,
1988-04-01
Memo No. 88-442, March 1988. "Process Model Construction and Optimization Using Statistical Experimental Design," Emanuel Sachs and George Prueger. Abstract: A methodology is presented for the construction of process models by the combination of physically based mechanistic…
Quadratic performance index generation for optimal regulator design.
NASA Technical Reports Server (NTRS)
Bullock, T. E.; Elder, J. M.
1971-01-01
Application of optimal control theory to practical problems has been limited by the difficulty of prescribing a performance index which accurately reflects design requirements. The task of deriving equivalent performance indices is considered in the present paper for a plant that is a completely controllable, scalar linear system with state feedback. A quadratic index is developed which leads to an optimal design performance satisfying some of the classical performance criteria.
Cai, Jiandong; Wang, Michael Yu; Zhang, Li
2015-12-01
In multifrequency atomic force microscopy (AFM), the probe's characteristic of assigning resonance frequencies to integer harmonics results in a remarkable improvement of detection sensitivity at specific harmonic components. The selection criterion for harmonic order is based on the sensitivity of its amplitude to material properties, e.g., elasticity. Previous studies on designing harmonic probes were unable to provide a large design capability while maintaining structural integrity. Herein, we propose a harmonic probe with a stepped cross section, which has variable widths in the top and bottom steps, while the middle step is kept constant. Higher-order resonance frequencies are tailored to be integer multiples of the fundamental resonance frequency. The probe design is implemented within a structural optimization framework. The optimally designed probe is micromachined using focused ion beam milling and then measured with an AFM. The measurement results agree well with our resonance frequency assignment requirement.
Multi-objective optimal design of lithium-ion battery packs based on evolutionary algorithms
NASA Astrophysics Data System (ADS)
Severino, Bernardo; Gana, Felipe; Palma-Behnke, Rodrigo; Estévez, Pablo A.; Calderón-Muñoz, Williams R.; Orchard, Marcos E.; Reyes, Jorge; Cortés, Marcelo
2014-12-01
Lithium-battery energy storage systems (LiBESS) are increasingly being used in electric mobility and stationary applications. Despite the increasing use and improvement of the technology, there are still challenges associated with cost reduction, increasing lifetime and capacity, and higher safety. A correct battery thermal management system (BTMS) design is critical to achieving these goals. In this paper, a general framework for obtaining optimal BTMS designs is proposed. Due to the trade-off between the BTMS's design goals and the complex modeling of the thermal response inside the battery pack, this paper proposes to solve the problem using a novel Multi-Objective Particle Swarm Optimization (MOPSO) approach. A theoretical case of a module with 6 cells and a real case of a pack used in a Solar Race Car are presented. The results show the capabilities of the proposed methodology, in which improved designs for battery packs are obtained.
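The dominance-based archive update at the core of a MOPSO can be sketched in a few lines. The two objective functions below are toy stand-ins (not the paper's battery thermal model), and the swarm parameters are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def objectives(x):
    # Toy stand-ins for two competing BTMS goals (e.g., peak temperature vs. cost).
    return np.array([np.sum(x ** 2), np.sum((x - 2.0) ** 2)])

def dominates(a, b):
    # a Pareto-dominates b: no worse in every objective, better in at least one.
    return np.all(a <= b) and np.any(a < b)

def update_archive(archive, cand):
    # Keep the archive mutually non-dominated after inserting a candidate.
    if any(dominates(f, cand[1]) for _, f in archive):
        return archive
    archive = [(x, f) for x, f in archive if not dominates(cand[1], f)]
    archive.append(cand)
    return archive

dim, n_particles, n_iter = 3, 20, 60
pos = rng.uniform(-4, 4, (n_particles, dim))
vel = np.zeros((n_particles, dim))
pbest = [(p.copy(), objectives(p)) for p in pos]
archive = []
for x, f in pbest:
    archive = update_archive(archive, (x, f))

for _ in range(n_iter):
    for i in range(n_particles):
        leader = archive[rng.integers(len(archive))][0]  # random Pareto guide
        r1, r2 = rng.random(dim), rng.random(dim)
        vel[i] = (0.5 * vel[i] + 1.5 * r1 * (pbest[i][0] - pos[i])
                  + 1.5 * r2 * (leader - pos[i]))
        pos[i] = np.clip(pos[i] + vel[i], -4, 4)
        f = objectives(pos[i])
        if dominates(f, pbest[i][1]):
            pbest[i] = (pos[i].copy(), f)
        archive = update_archive(archive, (pos[i].copy(), f))

pareto_f = np.array([f for _, f in archive])  # approximate Pareto front
```

The archive plays the role of the set of non-dominated battery-pack designs from which decision makers would pick a final trade-off.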
LQR-Based Optimal Distributed Cooperative Design for Linear Discrete-Time Multiagent Systems.
Zhang, Huaguang; Feng, Tao; Liang, Hongjing; Luo, Yanhong
2017-03-01
In this paper, a novel linear quadratic regulator (LQR)-based optimal distributed cooperative design method is developed for synchronization control of general linear discrete-time multiagent systems on a fixed, directed graph. Sufficient conditions are derived for synchronization, which restrict the graph eigenvalues into a bounded circular region in the complex plane. The synchronizing speed issue is also considered, and it turns out that the synchronizing region reduces as the synchronizing speed becomes faster. To obtain more desirable synchronizing capacity, the weighting matrices are selected by sufficiently utilizing the guaranteed gain margin of the optimal regulators. Based on the developed LQR-based cooperative design framework, an approximate dynamic programming technique is successfully introduced to overcome the (partially or completely) model-free cooperative design for linear multiagent systems. Finally, two numerical examples are given to illustrate the effectiveness of the proposed design methods.
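The single-agent building block of such a design, the discrete-time LQR gain, can be computed by iterating the Riccati recursion; the double-integrator dynamics, weights, and the coupling-gain check below are illustrative assumptions, not the paper's exact setup:

```python
import numpy as np

# Illustrative single-agent dynamics x_{k+1} = A x_k + B u_k (double integrator).
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)            # state weight
R = np.array([[1.0]])    # control weight

# Value iteration on the discrete algebraic Riccati equation.
P = np.eye(2)
for _ in range(500):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

rho = max(abs(np.linalg.eigvals(A - B @ K)))  # single-agent closed-loop radius

def stable(c, lam):
    # Synchronizing-region check: with coupling gain c and graph eigenvalue lam,
    # the disagreement dynamics are x_{k+1} = (A - c*lam*B K) x_k.
    return max(abs(np.linalg.eigvals(A - c * lam * B @ K))) < 1.0
```

The set of eigenvalues `lam` for which `stable(c, lam)` holds is a sketch of the bounded circular synchronizing region the paper derives conditions for.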
a Novel Framework for Incorporating Sustainability Into Biomass Feedstock Design
NASA Astrophysics Data System (ADS)
Gopalakrishnan, G.; Negri, C.
2012-12-01
There is a strong societal need to evaluate and understand the sustainability of biofuels, especially given the significant increases in production mandated by many countries, including the United States. Biomass feedstock production is an important contributor to the environmental, social, and economic impacts of biofuels. We present a systems approach in which the agricultural, urban, energy, and environmental sectors are considered components of a single system, and environmental liabilities are used as recoverable resources for biomass feedstock production. A geospatial analysis evaluating marginal land and degraded water resources to improve feedstock productivity with concomitant environmental restoration was conducted for the major corn-producing states in the US. The extent and availability of these resources were assessed, and geospatial techniques were used to identify promising opportunities to implement this approach. Utilizing different sources of marginal land (roadway buffers, contaminated land) could result in a 7-fold increase in land availability for feedstock production and provide ecosystem services such as water quality improvement and carbon sequestration. Spatial overlap between degraded water and marginal land resources was found to be as high as 98% and could sustain feedstock production on marginal lands through the supply of water and nutrients. Multi-objective optimization was used to quantify the tradeoffs between net revenue, improvements in water quality, and carbon sequestration at the farm scale using this design. Results indicated an initial opportunity in which land that is marginally productive for row crops and of marginal value for conservation purposes could be used to grow bioenergy crops such that water quality and carbon sequestration benefits are obtained.
Evolutionary Technique for Designing Optimized Arrays
NASA Astrophysics Data System (ADS)
Villazón, J.; Ibañez, A.
2011-06-01
Many ultrasonic inspection applications in industry could benefit from phased array distributions specifically designed for them. Some common design requirements are: to adapt the shape of the array to that of the part to be inspected, to use large apertures for increased lateral resolution, to find a layout of elements that avoids artifacts produced by lateral and/or grating lobes, and to keep the total number of independent elements (and the number of control channels) as low as possible to reduce the complexity and cost of the inspection system. Recent advances in transducer technology have made it possible to design and build arrays with non-regular layouts of elements. In this paper we propose to use evolutionary algorithms to find layouts of ultrasonic arrays (whether 1D or 2D) that approach a set of specified beampattern characteristics using a low number of elements.
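A minimal version of this idea, choosing which elements of a half-wavelength grid to activate so that the peak sidelobe of the array factor is minimized, can be sketched with a simple evolutionary loop. The grid size, population settings, and swap mutation below are assumptions for illustration, not the authors' algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)
n_slots, n_active = 24, 12        # half-wavelength candidate grid, active elements
u = np.linspace(-1, 1, 801)       # u = sin(theta), broadside beam at u = 0

def peak_sidelobe_db(mask):
    # Array factor of the active elements, normalized to the mainlobe at u = 0.
    pos = np.nonzero(mask)[0]
    af = np.abs(np.exp(1j * np.pi * np.outer(u, pos)).sum(axis=1))
    af /= af.max()
    side = af[np.abs(u) > 2.0 / n_slots]   # crude mainlobe exclusion zone
    return 20 * np.log10(side.max())

def random_mask():
    mask = np.zeros(n_slots, dtype=bool)
    mask[rng.choice(n_slots, n_active, replace=False)] = True
    return mask

pop = [random_mask() for _ in range(30)]
for _ in range(100):
    pop.sort(key=peak_sidelobe_db)
    parents = pop[:10]                      # truncation selection
    pop = parents[:]
    while len(pop) < 30:
        child = parents[rng.integers(10)].copy()
        on, off = np.nonzero(child)[0], np.nonzero(~child)[0]
        child[on[rng.integers(len(on))]] = False   # swap mutation keeps the
        child[off[rng.integers(len(off))]] = True  # element count constant
        pop.append(child)

best = min(pop, key=peak_sidelobe_db)
best_psl = peak_sidelobe_db(best)       # peak sidelobe level in dB
```

The swap mutation keeps the number of independent elements (and hence channel count) fixed while the layout itself evolves, mirroring the cost constraint described in the abstract.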
Detecting glaucomatous change in visual fields: Analysis with an optimization framework.
Yousefi, Siamak; Goldbaum, Michael H; Varnousfaderani, Ehsan S; Belghith, Akram; Jung, Tzyy-Ping; Medeiros, Felipe A; Zangwill, Linda M; Weinreb, Robert N; Liebmann, Jeffrey M; Girkin, Christopher A; Bowd, Christopher
2015-12-01
Detecting glaucomatous progression is an important aspect of glaucoma management. The assessment of longitudinal series of visual fields, measured using Standard Automated Perimetry (SAP), is considered the reference standard for this effort. We seek efficient techniques for determining progression from longitudinal visual fields by formulating the problem as an optimization framework, learned from a population of glaucoma data. The longitudinal data from each patient's eye were used in a convex optimization framework to find a vector that is representative of the progression direction of the sample population, as a whole. Post-hoc analysis of longitudinal visual fields across the derived vector led to optimal progression (change) detection. The proposed method was compared to recently described progression detection methods and to linear regression of instrument-defined global indices, and showed slightly higher sensitivities at the highest specificities than other methods (a clinically desirable result). The proposed approach is simpler, faster, and more efficient for detecting glaucomatous changes, compared to our previously proposed machine learning-based methods, although it provides somewhat less information. This approach has potential application in glaucoma clinics for patient monitoring and in research centers for classification of study participants.
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Bushnell, Dennis M. (Technical Monitor)
2002-01-01
This paper presents a study on the optimization of systems with structured uncertainties, whose inputs and outputs can be exhaustively described in the probabilistic sense. By propagating the uncertainty from the input to the output in the space of probability density functions and moments, optimization problems that pursue performance-, robustness-, and reliability-based designs are studied. By specifying the desired outputs first in terms of desired probability density functions and then in terms of meaningful probabilistic indices, we establish a computationally viable framework for solving practical optimization problems. Applications to static optimization and stability control are used to illustrate the relevance of incorporating uncertainty in the early stages of design. Several examples that admit a full probabilistic description of the output in terms of the design variables and the uncertain inputs are used to elucidate the main features of the generic problem and its solution. Extensions to problems that do not admit closed-form solutions are also evaluated. Concrete evidence of the importance of using a consistent probabilistic formulation of the optimization problem and a meaningful probabilistic description of its solution is provided in the examples. In the stability control problem, the analysis shows that standard deterministic approaches lead to designs with a high probability of running into instability. The implementation of such designs can indeed have catastrophic consequences.
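The propagation step can be illustrated by pushing input samples through a response model and ranking candidate designs by a probabilistic index. The quadratic response model, the mean-plus-three-sigma index, and the failure threshold below are illustrative assumptions, not the paper's examples:

```python
import numpy as np

rng = np.random.default_rng(2)

def response(d, x):
    # Illustrative performance model: output depends on the design d and the
    # uncertain input x (a stand-in for a full plant model).
    return (x - d) ** 2 + 0.1 * d

x_samples = rng.normal(1.0, 0.2, 20000)    # input described by its PDF
designs = np.linspace(0.0, 2.0, 201)

stats = []
for d in designs:
    y = response(d, x_samples)             # push the whole input PDF through
    stats.append((y.mean(), y.std(), (y > 0.5).mean()))
stats = np.array(stats)

robust_idx = stats[:, 0] + 3.0 * stats[:, 1]   # mean + 3 sigma design index
i_best = np.argmin(robust_idx)
d_robust = designs[i_best]                     # robustness-based design choice
p_fail = stats[i_best, 2]                      # reliability of that design
```

Ranking designs by the mean alone, by the mean-plus-sigma index, or by the exceedance probability reproduces in miniature the performance/robustness/reliability distinction the abstract draws.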
Integrated design optimization research and development in an industrial environment
NASA Technical Reports Server (NTRS)
Kumar, V.; German, Marjorie D.; Lee, S.-J.
1989-01-01
An overview is given of a design optimization project that has been in progress at the GE Research and Development Center for the past few years. The objective of this project is to develop a methodology and a software system for design automation and optimization of structural/mechanical components and systems. The effort focuses on research and development issues as well as on optimization applications that can be related to real-life industrial design problems. The overall technical approach is based on the integration of numerical optimization techniques, finite element methods, CAE and software engineering, and artificial intelligence/expert systems (AI/ES) concepts. The role of each of these engineering technologies in the development of a unified design methodology is illustrated. A software system, DESIGN-OPT, has been developed for both size and shape optimization of structural components subjected to static as well as dynamic loadings. By integrating this software with an automatic mesh generator, a geometric modeler, and an attribute specification computer code, a software module, SHAPE-OPT, has been developed for shape optimization. Details of these software packages, together with their applications to some 2- and 3-dimensional design problems, are described.
Application of optimization techniques to vehicle design: A review
NASA Technical Reports Server (NTRS)
Prasad, B.; Magee, C. L.
1984-01-01
The work that has been done in the last decade or so in the application of optimization techniques to vehicle design is discussed. Much of the work reviewed deals with the design of body or suspension (chassis) components for reduced weight. Also reviewed are studies dealing with system optimization problems for improved functional performance, such as ride or handling. In reviewing the work on the use of optimization techniques, one notes the transition from the rare mention of the methods in the 70's to an increased effort in the early 80's. Efficient and convenient optimization and analysis tools still need to be developed so that they can be regularly applied in the early design stage of the vehicle development cycle to be most effective. Based on the reported applications, an attempt is made to assess the potential for automotive application of optimization techniques. The major issue involved remains the creation of quantifiable means of analysis to be used in vehicle design. The conventional process of vehicle design still contains much experience-based input because it has not yet proven possible to quantify all important constraints. This restraint on the part of the analysis will continue to be a major limiting factor in application of optimization to vehicle design.
Unified Simulation and Analysis Framework for Deep Space Navigation Design
NASA Technical Reports Server (NTRS)
Anzalone, Evan; Chuang, Jason; Olsen, Carrie
2013-01-01
As the technology that enables advanced deep space autonomous navigation continues to develop and the requirements for such capability continue to grow, there is a clear need for a modular, expandable simulation framework. The tool's purpose is to address multiple measurement and information sources in order to capture system capability. This is needed to analyze the capability of competing navigation systems as well as to develop system requirements and determine their effect on the sizing of the integrated vehicle. The framework is built upon Model-Based Systems Engineering techniques to capture the architecture of the navigation system and the possible state measurements and observations that feed into the simulation implementation structure. These models also allow a common environment for the capture of an increasingly complex operational architecture involving multiple spacecraft, ground stations, and communication networks. To address these architectural developments, a framework of agent-based modules is implemented to capture the independent operations of individual spacecraft as well as the network interactions among spacecraft. This paper describes the development of this framework and the modeling processes used to capture a deep space navigation system. Additionally, a sample implementation describing a concept of network-based navigation utilizing digitally transmitted data packets is described in detail. This developed package shows the capability of the modeling framework, including its modularity, analysis capabilities, and its unification back to the overall system requirements and definition.
NASA Astrophysics Data System (ADS)
Trask, Nathaniel; Maxey, Martin; Hu, Xiaozhe
2016-12-01
A generalization of the optimization framework typically used in moving least squares is presented that provides high-order approximation while maintaining compact stencils and a consistent treatment of boundaries. The approach, which we refer to as compact moving least squares, resembles the capabilities of compact finite differences but requires no structure in the underlying set of nodes. An efficient collocation scheme is used to demonstrate the capability of the method to solve elliptic boundary value problems stably in strong form, without the need for an expensive weak form. The flexibility of the approach is demonstrated by using the same framework both to solve a variety of elliptic problems and to generate implicit approximations to derivatives. Finally, an efficient preconditioner is presented for the steady Stokes equations, and the approach's efficiency and high order of accuracy are demonstrated for domains with curvilinear boundaries.
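The core moving least squares operation, a locally weighted polynomial fit whose derivative serves as a derivative approximation on unstructured nodes, can be sketched as follows; the Gaussian weight, support scale h, and cubic basis are illustrative choices, not the compact variant developed in the paper:

```python
import numpy as np

rng = np.random.default_rng(3)
nodes = np.sort(rng.uniform(0, 1, 40))   # unstructured node set
f = np.sin(2 * np.pi * nodes)            # sampled field values

def mls_derivative(x0, nodes, f, h=0.12, degree=3):
    # Weighted polynomial fit around x0; the derivative of the local fit at x0
    # is the linear coefficient of the shifted basis 1, (x-x0), (x-x0)^2, ...
    sw = np.exp(-((nodes - x0) / h) ** 2) ** 0.5   # sqrt of Gaussian weight
    V = np.vander(nodes - x0, degree + 1, increasing=True)
    coef = np.linalg.lstsq(V * sw[:, None], f * sw, rcond=None)[0]
    return coef[1]

x0 = 0.5
approx = mls_derivative(x0, nodes, f)
exact = 2 * np.pi * np.cos(2 * np.pi * x0)
err = abs(approx - exact)
```

Because the fit reproduces polynomials exactly, the derivative of a constant field comes out (numerically) zero, the consistency property that higher-order variants build on.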
Optimally designed quantum transport across disordered networks.
Walschaers, Mattia; Diaz, Jorge Fernandez-de-Cossio; Mulet, Roberto; Buchleitner, Andreas
2013-11-01
We establish a general mechanism for highly efficient quantum transport through finite, disordered 3D networks. It relies on the interplay of disorder with centrosymmetry and a dominant doublet spectral structure and can be controlled by the proper tuning of only coarse-grained quantities. Photosynthetic light harvesting complexes are discussed as potential biological incarnations of this design principle.
Optimal Experimental Design for Model Discrimination
ERIC Educational Resources Information Center
Myung, Jay I.; Pitt, Mark A.
2009-01-01
Models of a psychological process can be difficult to discriminate experimentally because it is not easy to determine the values of the critical design variables (e.g., presentation schedule, stimulus structure) that will be most informative in differentiating them. Recent developments in sampling-based search methods in statistics make it…
ERIC Educational Resources Information Center
Gynther, Karsten
2016-01-01
The research project has developed a design framework for an adaptive MOOC that complements the MOOC format with blended learning. The design framework consists of a design model and a series of learning design principles which can be used to design in-service courses for teacher professional development. The framework has been evaluated by…
An Optimal Design Problem for Submerged Bodies,
1984-01-01
problem, according to relation (1.7), by (1.10) u(p) = ∫_Γ γ(p,q) g(q) dΓ_q − ∫_Γ w(q) ∂γ(p,q)/∂n_q dΓ_q, p ∈ D_f, and, again using the jump relations, one sees … denoted by U_ad: (a) Δφ(p) = 0, p ∈ D_f; (b) φ_y + k₀φ = 0 on y = 0 (2.3); (c) ∂φ/∂n = 0 on y = −h; (d) ∂φ/∂n = g on Γ(f); (e) φ_x − ik₀φ = o(…) … Each admissible surface f ∈ U_ad gives rise, according to Theorem 1.1, to a potential φ = φ(p; f), p ∈ D_f. The class of optimization problems that we discuss below then have the
Optimal design of a touch trigger probe
NASA Astrophysics Data System (ADS)
Li, Rui-Jun; Xiang, Meng; Fan, Kuang-Chao; Zhou, Hao; Feng, Jian
2015-02-01
A tungsten stylus with a ruby ball tip was screwed into a floating plate, which was supported by four leaf springs. The displacement of the tip caused by the contact force in 3D could be transferred into a tilt or vertical displacement of a plane mirror mounted on the floating plate. A quadrant photodetector (QPD)-based two-dimensional angle sensor was used to detect the tilt or vertical displacement of the plane mirror. The structural parameters of the probe were optimized for equal sensitivity and equal stiffness in a displacement range of ±5 μm and a restricted horizontal size of less than 40 mm. Simulation results indicated that the stiffness was less than 0.6 mN/μm and equal in 3D. Experimental results indicated that the probe can achieve a resolution of 1 nm.
Ceramic processing: Experimental design and optimization
NASA Technical Reports Server (NTRS)
Weiser, Martin W.; Lauben, David N.; Madrid, Philip
1992-01-01
The objectives of this paper are to: (1) gain insight into the processing of ceramics and how green processing can affect the properties of ceramics; (2) investigate the technique of slip casting; (3) learn how heat treatment and temperature contribute to density, strength, and effects of under and over firing to ceramic properties; (4) experience some of the problems inherent in testing brittle materials and learn about the statistical nature of the strength of ceramics; (5) investigate orthogonal arrays as tools to examine the effect of many experimental parameters using a minimum number of experiments; (6) recognize appropriate uses for clay based ceramics; and (7) measure several different properties important to ceramic use and optimize them for a given application.
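Objective (5), using orthogonal arrays to estimate many factor effects from few runs, can be illustrated with a two-level factorial design. The three factors and the synthetic strength model below are assumptions for demonstration, not the paper's measured data:

```python
import itertools
import numpy as np

# Full 2^3 factorial (an orthogonal array) for three hypothetical slip-casting
# factors in coded -1/+1 units: solids loading, firing temperature, hold time.
design = np.array(list(itertools.product([-1, 1], repeat=3)))

def strength(x):
    # Synthetic response standing in for measured strength (arbitrary units).
    loading, temp, hold = x
    return 200 + 8 * loading + 15 * temp - 3 * hold

y = np.array([strength(row) for row in design])

# Main effect of each factor = mean(response at +1) - mean(response at -1).
# Because the columns are orthogonal, each effect is estimated independently
# from the same 8 runs.
effects = {name: y[design[:, i] == 1].mean() - y[design[:, i] == -1].mean()
           for i, name in enumerate(["loading", "temp", "hold"])}
```

With the linear response above, each estimated main effect is exactly twice the corresponding model coefficient, which is what makes orthogonal designs attractive when each firing run is expensive.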
A predictive machine learning approach for microstructure optimization and materials design.
Liu, Ruoqian; Kumar, Abhishek; Chen, Zhengzhang; Agrawal, Ankit; Sundararaghavan, Veera; Choudhary, Alok
2015-06-23
This paper addresses an important materials engineering question: How can one identify the complete space (or as much of it as possible) of microstructures that are theoretically predicted to yield the desired combination of properties demanded by a selected application? We present a problem involving design of magnetoelastic Fe-Ga alloy microstructure for enhanced elastic, plastic and magnetostrictive properties. While theoretical models for computing properties given the microstructure are known for this alloy, inversion of these relationships to obtain microstructures that lead to desired properties is challenging, primarily due to the high dimensionality of microstructure space, multi-objective design requirement and non-uniqueness of solutions. These challenges render traditional search-based optimization methods incompetent in terms of both searching efficiency and result optimality. In this paper, a route to address these challenges using a machine learning methodology is proposed. A systematic framework consisting of random data generation, feature selection and classification algorithms is developed. Experiments with five design problems that involve identification of microstructures that satisfy both linear and nonlinear property constraints show that our framework outperforms traditional optimization methods with the average running time reduced by as much as 80% and with optimality that would not be achieved otherwise.
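The random-generation / feature-selection / classification pipeline can be sketched with a toy property model in place of the Fe-Ga microstructure physics. The descriptor dimension, property window, mean-separation feature ranking, and nearest-centroid classifier below are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

def properties(m):
    # Toy property model standing in for the microstructure physics:
    # two "properties" computed from a 10-dimensional descriptor m.
    return np.array([m[:3].sum(), (m[3:6] ** 2).sum()])

def feasible(m):
    p = properties(m)
    return bool(p[0] > 1.5 and p[1] < 0.5)   # desired property window

# 1) Random data generation over the descriptor space.
M = rng.uniform(0, 1, (4000, 10))
labels = np.array([feasible(m) for m in M])

# 2) Feature selection: rank descriptors by class-mean separation.
sep = np.abs(M[labels].mean(axis=0) - M[~labels].mean(axis=0))
top = np.argsort(sep)[::-1][:4]              # keep the 4 most discriminative

# 3) Cheap classifier (nearest class centroid) in the reduced space.
c_pos = M[labels][:, top].mean(axis=0)
c_neg = M[~labels][:, top].mean(axis=0)

def predict(m):
    z = m[top]
    return np.linalg.norm(z - c_pos) < np.linalg.norm(z - c_neg)

# Screen fresh candidates with the classifier before the expensive evaluation.
cand = rng.uniform(0, 1, (2000, 10))
hits = [m for m in cand if predict(m) and feasible(m)]
accuracy = np.mean([predict(m) == feasible(m) for m in cand])
```

The classifier inverts the property-to-microstructure relationship statistically: instead of searching, it maps out the whole feasible region and is then used as a fast filter, which is the source of the running-time savings the paper reports.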
Intelligent Space Tube Optimization for speeding ground water remedial design.
Kalwij, Ineke M; Peralta, Richard C
2008-01-01
An innovative Intelligent Space Tube Optimization (ISTO) two-stage approach facilitates solving complex nonlinear flow and contaminant transport management problems. It reduces computational effort of designing optimal ground water remediation systems and strategies for an assumed set of wells. ISTO's stage 1 defines an adaptive mobile space tube that lengthens toward the optimal solution. The space tube has overlapping multidimensional subspaces. Stage 1 generates several strategies within the space tube, trains neural surrogate simulators (NSS) using the limited space tube data, and optimizes using an advanced genetic algorithm (AGA) with NSS. Stage 1 speeds evaluating assumed well locations and combinations. For a large complex plume of solvents and explosives, ISTO stage 1 reaches within 10% of the optimal solution 25% faster than an efficient AGA coupled with comprehensive tabu search (AGCT) does by itself. ISTO input parameters include space tube radius and number of strategies used to train NSS per cycle. Larger radii can speed convergence to optimality for optimizations that achieve it but might increase the number of optimizations reaching it. ISTO stage 2 automatically refines the NSS-AGA stage 1 optimal strategy using heuristic optimization (we used AGCT), without using NSS surrogates. Stage 2 explores the entire solution space. ISTO is applicable for many heuristic optimization settings in which the numerical simulator is computationally intensive, and one would like to reduce that burden.
Towards Robust Designs Via Multiple-Objective Optimization Methods
NASA Technical Reports Server (NTRS)
Man Mohan, Rai
2006-01-01
Fabricating and operating complex systems involves dealing with uncertainty in the relevant variables. In the case of aircraft, flow conditions are subject to change during operation. Efficiency and engine noise may differ from the expected values because of manufacturing tolerances and normal wear and tear. Engine components may have a shorter life than expected because of manufacturing tolerances. In spite of the important effect of operating and manufacturing uncertainty on the performance and expected life of the component or system, traditional aerodynamic shape optimization has focused on obtaining the best design given a set of deterministic flow conditions. Clearly it is important both to maintain near-optimal performance levels at off-design operating conditions and to ensure that performance does not degrade appreciably when the component shape differs from the optimal shape due to manufacturing tolerances and normal wear and tear. These requirements naturally lead to the idea of robust optimal design, wherein the concept of robustness to various perturbations is built into the design optimization procedure. The basic ideas involved in robust optimal design are included in this lecture. The imposition of the additional requirement of robustness results in a multiple-objective optimization problem requiring appropriate solution procedures. Typically the costs associated with multiple-objective optimization are substantial, so efficient multiple-objective optimization procedures are crucial to the rapid deployment of the principles of robust design in industry. Hence the companion set of lecture notes (Single- and Multiple-Objective Optimization with Differential Evolution and Neural Networks) deals with methodology for solving multiple-objective optimization problems efficiently, reliably, and with little user intervention. Applications of the methodologies presented in the companion lecture to robust design are included here.
Optimal brushless DC motor design using genetic algorithms
NASA Astrophysics Data System (ADS)
Rahideh, A.; Korakianitis, T.; Ruiz, P.; Keeble, T.; Rothman, M. T.
2010-11-01
This paper presents a method for the optimal design of a slotless permanent magnet brushless DC (BLDC) motor with surface mounted magnets using a genetic algorithm. Characteristics of the motor are expressed as functions of motor geometries. The objective function is a combination of losses, volume and cost to be minimized simultaneously. Electrical and mechanical requirements (i.e. voltage, torque and speed) and other limitations (e.g. upper and lower limits of the motor geometries) are cast into constraints of the optimization problem. One sample case is used to illustrate the design and optimization technique.
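A penalty-based genetic algorithm of this general shape can be sketched as follows. The design variables, the loss/volume/cost surrogates, and the torque constraint below are invented stand-ins for the paper's motor model, chosen only to show the structure of the optimization:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical design vector: magnet thickness (mm), stack length (mm),
# conductor area (mm^2), with box constraints as lower/upper bounds.
LB = np.array([1.0, 20.0, 0.3])
UB = np.array([6.0, 80.0, 1.2])

def torque(x):
    # Toy surrogate: torque grows with magnet volume (not a real motor model).
    return 0.004 * x[0] * x[1]

def objective(x):
    losses = 30.0 / x[2] + 0.2 * x[1]      # copper + iron loss stand-ins
    volume = 0.05 * x[0] * x[1]
    cost = 2.0 * x[0] + 0.5 * x[1] + 40.0 * x[2]
    return losses + volume + cost          # combined objective to minimize

def penalized(x, t_req=0.6):
    # Torque requirement handled as a quadratic exterior penalty.
    return objective(x) + 1e3 * max(0.0, t_req - torque(x)) ** 2

pop = rng.uniform(LB, UB, (40, 3))
for _ in range(150):
    fit = np.array([penalized(x) for x in pop])
    elite = pop[np.argsort(fit)[:10]]
    children = []
    while len(children) < 30:
        a, b = elite[rng.integers(10)], elite[rng.integers(10)]
        w = rng.random(3)
        child = w * a + (1 - w) * b                   # blend crossover
        child += rng.normal(0, 0.05, 3) * (UB - LB)   # Gaussian mutation
        children.append(np.clip(child, LB, UB))
    pop = np.vstack([elite, children])

best = min(pop, key=penalized)
```

The electrical and mechanical requirements enter only through `penalized`, so swapping in a faithful loss/torque model would leave the GA loop unchanged, which is the appeal of the approach described in the abstract.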
Optimal aeroelastic design of an oblique wing structure
NASA Technical Reports Server (NTRS)
Gwin, L. B.
1974-01-01
A procedure is presented for determining the optimal cover panel thickness of a wing structure to meet specified strength and static aeroelastic divergence requirements for minimum weight. Efficient reanalysis techniques using discrete structural and aerodynamic methods are used in conjunction with redesign algorithms driven by optimality criteria. The optimality conditions for the divergence constraint are established, and expressions are obtained for derivatives of the dynamic pressure at divergence with respect to design variables. The procedure is applied to an oblique wing aircraft where strength and stiffness are critical design considerations for sizing the cover thickness of the wing structure.
Application of clustering global optimization to thin film design problems.
Lemarchand, Fabien
2014-03-10
Refinement techniques usually calculate an optimized local solution, which is strongly dependent on the initial formula used for the thin film design. In the present study, a clustering global optimization method is used which can iteratively change this initial formula, thereby progressing further than in the case of local optimization techniques. A wide panel of local solutions is found using this procedure, resulting in a large range of optical thicknesses. The efficiency of this technique is illustrated by two thin film design problems, in particular an infrared antireflection coating, and a solar-selective absorber coating.
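The clustering global-optimization idea, running local refinements from many starting formulas and merging the resulting local solutions into distinct basins, can be sketched as follows. The one-variable merit function is an invented stand-in for a real thin-film merit function.

```python
import math
import random

def merit(t):
    # Hypothetical multimodal merit function of one optical thickness.
    return math.sin(5 * t) + 0.1 * (t - 3) ** 2

def local_search(t, step=0.01, iters=2000):
    # Crude local refinement: step downhill until no neighbour improves.
    for _ in range(iters):
        for cand in (t - step, t + step):
            if merit(cand) < merit(t):
                t = cand
    return round(t, 2)

random.seed(2)
starts = [random.uniform(0, 6) for _ in range(30)]
minima = sorted(set(local_search(t) for t in starts))
# "Clustering": merge local solutions closer than a tolerance into one basin.
clusters = []
for m in minima:
    if not clusters or m - clusters[-1] > 0.2:
        clusters.append(m)
```

Each cluster corresponds to a distinct local solution (a distinct set of optical thicknesses in the real problem); the global method then concentrates further starts away from basins already found.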
System Design Support by Optimization Method Using Stochastic Process
NASA Astrophysics Data System (ADS)
Yoshida, Hiroaki; Yamaguchi, Katsuhito; Ishikawa, Yoshio
We propose a new optimization method based on a stochastic process. Its characteristic is that it obtains an approximation of the optimum solution as an expected value. In the numerical calculation, a kind of Monte Carlo method is used to obtain the solution. Because design variables are generated with probability proportional to the evaluation-function value, the method also yields the probability distribution of each design variable. This distribution shows the influence of the design variables on the evaluation-function value and is very useful information for system design. In this paper, we show that the proposed method is useful not only for optimization but also for system design. A flight-trajectory optimization problem for a hang-glider is presented as a numerical example.
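The sampling idea described above, generating design variables with probability proportional to the evaluation-function value and then reading off both an expected value and a distribution, can be sketched with simple rejection sampling. The evaluation function is a hypothetical stand-in scaled to [0, 1].

```python
import math
import random

def score(x):
    # Hypothetical evaluation function to maximize on [0, 1], peak at 0.7.
    return math.exp(-50 * (x - 0.7) ** 2)

random.seed(3)
accepted = []
while len(accepted) < 5000:
    x = random.random()
    if random.random() < score(x):      # accept in proportion to the score
        accepted.append(x)

# Expected value approximates the optimum; the spread of the distribution
# shows how strongly this design variable influences the evaluation.
expected = sum(accepted) / len(accepted)
spread = (sum((x - expected) ** 2 for x in accepted) / len(accepted)) ** 0.5
```

A sharply peaked distribution (small spread) flags a design variable the evaluation function is sensitive to, which is the design-support information the abstract emphasizes.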
Optimal design of a pilot OTEC power plant in Taiwan
Tseng, C.H.; Kao, K.Y.; Yang, J.C.
1991-12-01
In this paper, an optimal design concept is utilized to find the best designs for a complex and large-scale ocean thermal energy conversion (OTEC) plant. The OTEC power plant under study is divided into three major subsystems: a power subsystem, a seawater pipe subsystem, and a containment subsystem. The design optimization model for the entire OTEC plant is integrated from these subsystems, taking into account their various design criteria and constraints. The mathematical formulations of this optimization model for the entire plant are described. The design variables, objective function, and constraints for a pilot plant, under the constraints of the technologies currently feasible in Taiwan, have been carefully examined and selected.
Formulation for Simultaneous Aerodynamic Analysis and Design Optimization
NASA Technical Reports Server (NTRS)
Hou, G. W.; Taylor, A. C., III; Mani, S. V.; Newman, P. A.
1993-01-01
An efficient approach for simultaneous aerodynamic analysis and design optimization is presented. This approach does not require the performance of many flow analyses at each design optimization step, which can be an expensive procedure. Thus, this approach brings us one step closer to meeting the challenge of incorporating computational fluid dynamic codes into gradient-based optimization techniques for aerodynamic design. An adjoint-variable method is introduced to nullify the effect of the increased number of design variables in the problem formulation. The method has been successfully tested on one-dimensional nozzle flow problems, including a sample problem with a normal shock. Implementations of the above algorithm are also presented that incorporate Newton iterations to secure a high-quality flow solution at the end of the design process. Implementations with iterative flow solvers are possible and will be required for large, multidimensional flow problems.
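A toy illustration of the adjoint-variable idea on a one-equation "flow" problem (not the paper's nozzle formulation). The state equation R(u, x) = u^2 - x = 0 and the objective are invented; the point is that one adjoint solve gives the same design derivative as differencing the full analysis, independent of the number of design variables.

```python
import math

def solve_state(x):
    # "Flow analysis": solve the residual R(u, x) = u**2 - x = 0 for u.
    return math.sqrt(x)

def adjoint_gradient(x):
    # Objective J(u, x) = (u - 2)**2 + 0.1*x evaluated at the solved state.
    u = solve_state(x)
    dJdu = 2 * (u - 2)
    dRdu = 2 * u
    lam = -dJdu / dRdu        # adjoint equation: dRdu * lam = -dJdu
    dJdx_partial = 0.1
    dRdx = -1.0
    return dJdx_partial + lam * dRdx

def fd_gradient(x, h=1e-6):
    # Reference: central difference of the full analyze-then-evaluate chain.
    def J(x):
        u = solve_state(x)
        return (u - 2) ** 2 + 0.1 * x
    return (J(x + h) - J(x - h)) / (2 * h)
```

For many design variables the adjoint cost stays at one extra linear solve, which is why the paper uses it to "nullify the effect of the increased number of design variables".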
NASA Astrophysics Data System (ADS)
Peña-Haro, Salvador; Pulido-Velazquez, Manuel; Sahuquillo, Andrés
2009-06-01
A hydro-economic modelling framework is developed for determining optimal management of groundwater nitrate pollution from agriculture. A holistic optimization model determines the spatial and temporal fertilizer application rate that maximizes the net benefits in agriculture constrained by the quality requirements in groundwater at various control sites. Since emissions (nitrogen loading rates) are what can be controlled, but the concentrations are the policy targets, we need to relate both. Agronomic simulations are used to obtain the nitrate leached, while numerical groundwater flow and solute transport simulation models were used to develop unit source solutions that were assembled into a pollutant concentration response matrix. The integration of the response matrix in the constraints of the management model allows simulating by superposition the evolution of groundwater nitrate concentration over time at different points of interest throughout the aquifer resulting from multiple pollutant sources distributed over time and space. In this way, the modelling framework relates the fertilizer loads with the nitrate concentration at the control sites. The benefits in agriculture were determined through crop prices and crop production functions. This research aims to contribute to the ongoing policy process in the European Union (the Water Framework Directive), providing a tool for analyzing the opportunity cost of measures for reducing nitrogen loadings and assessing their effectiveness for maintaining groundwater nitrate concentration within the target levels. The management model was applied to a hypothetical groundwater system. Optimal solutions of fertilizer use to problems with different initial conditions, planning horizons, and recovery times were determined. The illustrative example shows the importance of the location of the pollution sources in relation to the control sites, and how both the selected planning horizon and the target recovery time can
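The response-matrix superposition at the heart of the management model can be sketched directly: the concentration at each control site is a linear combination of the source loadings through unit-source responses. All numbers below are invented for illustration.

```python
# Response matrix R[j][i]: nitrate concentration (mg/L) at control site j
# per unit fertilizer load at source area i (illustrative values).
R = [[0.030, 0.010, 0.002],
     [0.005, 0.025, 0.015]]
limit = 5.0  # hypothetical groundwater quality target, mg/L

def concentrations(loads):
    # Superposition: each site's concentration is the weighted sum of loads.
    return [sum(r * s for r, s in zip(row, loads)) for row in R]

loads = [120.0, 80.0, 150.0]       # fertilizer loads at three source areas
c = concentrations(loads)
feasible = all(cj <= limit for cj in c)
```

Because the constraint is linear in the loads, the management model can embed it directly in an optimization over fertilizer rates; the same matrix, extended over time steps, captures the delayed evolution of concentration the abstract describes.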
Basu, Sanjay; Kiernan, Michaela
2015-01-01
Introduction: While increasingly popular among mid- to large-size employers, using financial incentives to induce health behavior change among employees has been controversial, in part due to poor quality and generalizability of studies to date. Thus, fundamental questions have been left unanswered: to generate positive economic returns on investment, what level of incentive should be offered for any given type of incentive program, and among which employees? Methods: We constructed a novel modeling framework that systematically identifies how to optimize marginal return on investment from programs incentivizing behavior change by integrating commonly collected data on health behaviors and associated costs. We integrated "demand curves" capturing individual differences in response to any given incentive with employee demographic and risk factor data. We also estimated the degree of self-selection that could be tolerated, i.e., the maximum percentage of already-healthy employees who could enroll in a wellness program while still maintaining positive absolute return on investment. In a demonstration analysis, the modeling framework was applied to data from 3,000 worksite physical activity programs across the nation. Results: For physical activity programs, the incentive level that would optimize marginal return on investment ($367/employee/year) was higher than the average incentive level currently offered ($143/employee/year). Yet a high degree of self-selection could undermine the economic benefits of the program; if more than 17% of participants came from the top 10% of the physical activity distribution, the cost of the program would be expected to always exceed its benefits. Discussion: Our generalizable framework integrates individual differences in behavior and risk to systematically estimate the incentive level that optimizes marginal return on investment. PMID:25977362
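The optimization the framework performs can be sketched with a hypothetical demand curve: participation rises with the incentive with diminishing returns, and net benefit trades participation against incentive cost. The functional form and all dollar figures below are invented, not the paper's fitted curves.

```python
def participation(incentive):
    # Hypothetical demand curve: participation fraction rises with the
    # incentive, with diminishing returns (saturating form).
    return incentive / (incentive + 200.0)

def net_benefit(incentive, benefit_per_participant=500.0, n_employees=1000):
    # Net return: participating employees each yield a health-cost benefit,
    # but each also collects the incentive.
    p = participation(incentive)
    return n_employees * p * (benefit_per_participant - incentive)

# Scan incentive levels ($/employee/year) for the one maximizing net return.
best = max(range(0, 501), key=net_benefit)
```

Under this invented curve the optimum sits well above zero but well below the per-participant benefit, the same qualitative shape as the paper's finding that optimal incentives exceed those typically offered; the framework additionally adjusts for self-selection by already-healthy employees.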
New approaches to optimization in aerospace conceptual design
NASA Technical Reports Server (NTRS)
Gage, Peter J.
1995-01-01
Aerospace design can be viewed as an optimization process, but conceptual studies are rarely performed using formal search algorithms. Three issues that restrict the success of automatic search are identified in this work. New approaches are introduced to address the integration of analyses and optimizers, to avoid the need for accurate gradient information and a smooth search space (required for calculus-based optimization), and to remove the restrictions imposed by fixed complexity problem formulations. (1) Optimization should be performed in a flexible environment. A quasi-procedural architecture is used to conveniently link analysis modules and automatically coordinate their execution. It efficiently controls large-scale design tasks. (2) Genetic algorithms provide a search method for discontinuous or noisy domains. The utility of genetic optimization is demonstrated here, but parameter encodings and constraint-handling schemes must be carefully chosen to avoid premature convergence to suboptimal designs. The relationship between genetic and calculus-based methods is explored. (3) A variable-complexity genetic algorithm is created to permit flexible parameterization, so that the level of description can change during optimization. This new optimizer automatically discovers novel designs in structural and aerodynamic tasks.
Optimal design of multi-arm multi-stage trials.
Wason, James M S; Jaki, Thomas
2012-12-30
In drug development, there is often uncertainty about the most promising among a set of different treatments. Multi-arm multi-stage (MAMS) trials provide large gains in efficiency over separate randomised trials of each treatment. They allow a shared control group, dropping of ineffective treatments before the end of the trial and stopping the trial early if sufficient evidence of a treatment being superior to control is found. In this paper, we discuss optimal design of MAMS trials. An optimal design has the required type I error rate and power but minimises the expected sample size at some set of treatment effects. Finding an optimal design requires searching over stopping boundaries and sample size, potentially a large number of parameters. We propose a method that combines quick evaluation of specific designs and an efficient stochastic search to find the optimal design parameters. We compare various potential designs motivated by the design of a phase II MAMS trial. We also consider allocating more patients to the control group, as has been carried out in real MAMS studies. We show that the optimal allocation to the control group, although greater than a 1:1 ratio, is smaller than previously advocated and that the gain in efficiency is generally small.
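The kind of expected-sample-size evaluation an optimal MAMS design search must perform repeatedly can be sketched by simulating a much-reduced two-stage, one-treatment-plus-control design with early stopping. The stopping boundaries and group sizes below are invented, not the paper's optimized values.

```python
import math
import random

random.seed(4)

def simulate_trial(effect, n_per_stage=50, futility=0.0, efficacy=2.5):
    # Two-stage design: stop at stage 1 for futility (z < futility) or
    # efficacy (z > efficacy); otherwise continue to stage 2.
    z1 = effect * math.sqrt(n_per_stage / 2) + random.gauss(0, 1)
    if z1 < futility or z1 > efficacy:
        return n_per_stage * 2              # both arms, one stage
    return n_per_stage * 4                  # both arms, two stages

def expected_sample_size(effect, reps=4000):
    return sum(simulate_trial(effect) for _ in range(reps)) / reps

e_null = expected_sample_size(0.0)   # no treatment effect
e_alt = expected_sample_size(0.8)    # strong effect: stops early for efficacy
```

An optimal design search then wraps an optimizer (the paper uses an efficient stochastic search) around this evaluation, tuning boundaries, stage sizes and allocation ratio to minimize the expected sample size subject to type I error and power.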
Simulation-optimization framework for multi-site multi-season hybrid stochastic streamflow modeling
NASA Astrophysics Data System (ADS)
Srivastav, Roshan; Srinivasan, K.; Sudheer, K. P.
2016-11-01
A simulation-optimization (S-O) framework is developed for the hybrid stochastic modeling of multi-site multi-season streamflows. The multi-objective optimization model formulated is the driver and the multi-site, multi-season hybrid matched block bootstrap model (MHMABB) is the simulation engine within this framework. The multi-site multi-season simulation model is the extension of the existing single-site multi-season simulation model. A robust and efficient evolutionary search based technique, namely, the non-dominated sorting based genetic algorithm (NSGA-II), is employed as the solution technique for the multi-objective optimization within the S-O framework. The objective functions employed are related to the preservation of the multi-site critical deficit run sum and the constraints introduced are concerned with the hybrid model parameter space, and the preservation of certain statistics (such as inter-annual dependence and/or skewness of aggregated annual flows). The efficacy of the proposed S-O framework is brought out through a case example from the Colorado River basin. The proposed multi-site multi-season model AMHMABB (whose parameters are obtained from the proposed S-O framework) preserves the temporal as well as the spatial statistics of the historical flows. Also, the other multi-site deficit run characteristics namely, the number of runs, the maximum run length, the mean run sum and the mean run length are well preserved by the AMHMABB model. Overall, the proposed AMHMABB model is able to show better streamflow modeling performance when compared with the simulation based SMHMABB model, plausibly due to the significant role played by: (i) the objective functions related to the preservation of multi-site critical deficit run sum; (ii) the huge hybrid model parameter space available for the evolutionary search and (iii) the constraint on the preservation of the inter-annual dependence. Split-sample validation results indicate that the AMHMABB model is
Overview: Applications of numerical optimization methods to helicopter design problems
NASA Technical Reports Server (NTRS)
Miura, H.
1984-01-01
There are a number of helicopter design problems that are well suited to applications of numerical design optimization techniques. Adequate implementation of this technology will provide high pay-offs. There are a number of numerical optimization programs available, and there are many excellent response/performance analysis programs developed or being developed. But integration of these programs in a form that is usable in the design phase should be recognized as important. It is also necessary to attract the attention of engineers engaged in the development of analysis capabilities and to make them aware that analysis capabilities are much more powerful if integrated into design oriented codes. Frequently, the shortcomings of analysis capabilities are revealed by coupling them with an optimization code. Most of the published work has addressed problems in preliminary system design, rotor system/blade design or airframe design. Very few published results were found in acoustics, aerodynamics and control system design. Currently major efforts are focused on vibration reduction, and aerodynamics/acoustics applications appear to be growing fast. The development of a computer program system to integrate the multiple disciplines required in helicopter design with numerical optimization techniques is needed. Activities in Britain, Germany and Poland are identified, but no published results from France, Italy, the USSR or Japan were found.
SPEED design optimization via Fresnel propagation analysis
NASA Astrophysics Data System (ADS)
Beaulieu, Mathilde; Abe, Lyu; Martinez, Patrice; Gouvret, Carole; Dejonghe, Julien; Preis, Oliver; Vakili, Farrokh
2016-08-01
Future extremely large telescopes will open a niche for exoplanet direct imaging at the expense of using a primary segmented mirror which is known to hamper high-contrast imaging capabilities. The focal plane diffraction pattern is dominated by bright structures and the way to reduce them is not straightforward since one has to deal with strong amplitude discontinuities in this kind of unfriendly pupil (segment gaps and secondary support). The SPEED experiment developed at Lagrange laboratory is designed to address this specific topic along with high contrast at very small separation. The baseline design of SPEED will combine a coronagraph and two deformable mirrors to create dark zones at the focal plane. A first step in this project was to identify under which circumstances deep contrast at small separation is achievable. In particular, the DMs' location is among the critical aspects to consider and is the topic covered by this paper.
2013-06-01
Award Number: W81XWH-11-2-0133. Title: Framework for Smart Electronic Health Record-Linked Predictive Models to Optimize Care for Complex Digestive Diseases. The report describes an intelligent workspace that displays annotation forms and de-identified reports in the same view, with automatic report queuing.
Kornelakis, Aris
2010-12-15
Particle Swarm Optimization (PSO) is a highly efficient evolutionary optimization algorithm. In this paper a multiobjective optimization algorithm based on PSO applied to the optimal design of photovoltaic grid-connected systems (PVGCSs) is presented. The proposed methodology intends to suggest the optimal number of system devices and the optimal PV module installation details, such that the economic and environmental benefits achieved during the system's operational lifetime period are both maximized. The objective function describing the economic benefit of the proposed optimization process is the lifetime system's total net profit, which is calculated according to the method of the Net Present Value (NPV). The second objective function, which corresponds to the environmental benefit, equals the pollutant gas emissions avoided due to the use of the PVGCS. The optimization's decision variables are the number of PV modules, their optimal tilt angle, the optimal placement of the PV modules within the available installation area and the optimal distribution of the PV modules among the DC/AC converters.
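A minimal particle swarm optimizer of the kind the paper builds on, applied to a stand-in objective (the actual PVGCS objectives, NPV and avoided emissions, and the sizing decision variables are not modeled here). Inertia and acceleration coefficients are typical textbook values.

```python
import random

random.seed(5)

def fitness(x):
    # Stand-in objective to minimize (e.g., negative net present value).
    return sum(xi ** 2 for xi in x)

dim, n_particles = 2, 20
pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
vel = [[0.0] * dim for _ in range(n_particles)]
pbest = [p[:] for p in pos]                  # personal best positions
gbest = min(pbest, key=fitness)[:]           # swarm (global) best

for _ in range(100):
    for i in range(n_particles):
        for d in range(dim):
            r1, r2 = random.random(), random.random()
            # Velocity update: inertia + cognitive pull + social pull.
            vel[i][d] = (0.7 * vel[i][d]
                         + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                         + 1.5 * r2 * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if fitness(pos[i]) < fitness(pbest[i]):
            pbest[i] = pos[i][:]
            if fitness(pbest[i]) < fitness(gbest):
                gbest = pbest[i][:]
```

A multiobjective variant, as in the paper, replaces the single `gbest` with an archive of non-dominated solutions from which leaders are drawn.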
Finite element based electric motor design optimization
NASA Technical Reports Server (NTRS)
Campbell, C. Warren
1993-01-01
The purpose of this effort was to develop a finite element code for the analysis and design of permanent magnet electric motors. These motors would drive electromechanical actuators in advanced rocket engines. The actuators would control fuel valves and thrust vector control systems. Refurbishing the hydraulic systems of the Space Shuttle after each flight is costly and time consuming. Electromechanical actuators could replace hydraulics, improve system reliability, and reduce down time.
Optimal structural design via optimality criteria as a nonsmooth mechanics problem
NASA Astrophysics Data System (ADS)
Tzaferopoulos, M. Ap.; Stravroulakis, G. E.
1995-06-01
In the theory of plastic structural design via optimality criteria (due to W. Prager), the optimal design problem is transformed to a nonlinear elastic structural analysis problem with appropriate stress-strain laws, which generally include complete vertical branches. In this context, the concept of structural universe (in the sense of G. Rozvany) permits the treatment of complicated optimal layout problems. Recent progress in the field of nonsmooth mechanics makes the solution of structural analysis problems with this kind of 'complete' law possible. Elements from the two fields are combined in this paper for the solution of optimal design and layout problems for structures. The optimal layout of plane trusses with various specific cost functions is studied here as a representative problem. The use of convex, continuous and piecewise linear specific cost functions for the structural members leads to problems of linear variational inequalities or equivalently piecewise linear, convex but nonsmooth optimization problems, which are solved by means of an iterative algorithm based on sequential linear programming techniques. Numerical examples illustrate the theory and its applicability to practical engineering structures. Following a parametric investigation of an optimal bridge design, certain aspects of the optimal truss layout problem are discussed, which can be extended to other types of structural systems as well.
Geometry Modeling and Grid Generation for Design and Optimization
NASA Technical Reports Server (NTRS)
Samareh, Jamshid A.
1998-01-01
Geometry modeling and grid generation (GMGG) have played and will continue to play an important role in computational aerosciences. During the past two decades, tremendous progress has occurred in GMGG; however, GMGG is still the biggest bottleneck to routine applications for complicated Computational Fluid Dynamics (CFD) and Computational Structures Mechanics (CSM) models for analysis, design, and optimization. We are still far from incorporating GMGG tools in a design and optimization environment for complicated configurations. It is still a challenging task to parameterize an existing model in today's Computer-Aided Design (CAD) systems, and the models created are not always good enough for automatic grid generation tools. Designers may believe their models are complete and accurate, but unseen imperfections (e.g., gaps, unwanted wiggles, free edges, slivers, and transition cracks) often cause problems in gridding for CSM and CFD. Despite many advances in grid generation, the process is still the most labor-intensive and time-consuming part of the computational aerosciences for analysis, design, and optimization. In an ideal design environment, a design engineer would use a parametric model to evaluate alternative designs effortlessly and optimize an existing design for a new set of design objectives and constraints. For this ideal environment to be realized, the GMGG tools must have the following characteristics: (1) be automated, (2) provide consistent geometry across all disciplines, (3) be parametric, and (4) provide sensitivity derivatives. This paper will review the status of GMGG for analysis, design, and optimization processes, and it will focus on some emerging ideas that will advance the GMGG toward the ideal design environment.
On Optimal Input Design and Model Selection for Communication Channels
Li, Yanyan; Djouadi, Seddik M; Olama, Mohammed M
2013-01-01
In this paper, the optimal model (structure) selection and input design which minimize the worst case identification error for communication systems are provided. The problem is formulated using metric complexity theory in a Hilbert space setting. It is pointed out that model selection and input design can be handled independently. The Kolmogorov n-width is used to characterize the representation error introduced by model selection, while the Gel'fand and time n-widths are used to represent the inherent error introduced by input design. After the model is selected, an optimal input which minimizes the worst case identification error is shown to exist. In particular, it is proven that the optimal model for reducing the representation error is a Finite Impulse Response (FIR) model, and the optimal input is an impulse at the start of the observation interval. FIR models are widely popular in communication systems, such as in Orthogonal Frequency Division Multiplexing (OFDM) systems.
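The paper's conclusion, an FIR model identified with an impulse at the start of the observation interval, has a simple noise-free illustration: with an impulse input, the channel output directly reads out the FIR taps. The channel taps below are invented.

```python
def convolve(h, u):
    # FIR channel model: y[k] = sum_j h[j] * u[k - j].
    y = [0.0] * len(u)
    for k in range(len(u)):
        for j in range(len(h)):
            if 0 <= k - j < len(u):
                y[k] += h[j] * u[k - j]
    return y

h_true = [0.9, 0.4, -0.2, 0.1]          # hypothetical channel taps
u = [1.0] + [0.0] * 7                   # impulse at the start of the interval
y = convolve(h_true, u)
h_est = y[:len(h_true)]                 # impulse response read off directly
```

In the noise-free case the estimate is exact; the paper's contribution is showing that this input remains worst-case optimal in the metric-complexity sense.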
Tabu search method with random moves for globally optimal design
NASA Astrophysics Data System (ADS)
Hu, Nanfang
1992-09-01
Optimum engineering design problems are usually formulated as non-convex optimization problems in continuous variables. Because of the absence of convexity structure, they can have multiple minima, and global optimization becomes difficult. Traditional methods of optimization, such as penalty methods, can often be trapped at a local optimum. The tabu search method with random moves is introduced to solve these problems approximately. Its reliability and efficiency are examined with the help of standard test functions. Analysis of the implementations shows that this method is easy to use and requires no derivative information. It outperforms the random search method and a composite genetic algorithm. In particular, it is applied to minimum-weight design examples of a three-bar truss, coil springs, a Z-section and a channel section. For the channel section, the optimal design using the tabu search method with random moves saved 26.14 percent over the weight obtained with the SUMT method.
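A minimal tabu search with random moves on a one-variable multimodal test function (the function and all parameters are invented, not those of the dissertation). Each step moves to the best admissible random neighbour, even if it is worse than the current point, while the tabu list blocks recently visited points to force escape from local basins.

```python
import math
import random

random.seed(6)

def f(x):
    # Multimodal test function with many local minima.
    return x * x / 20.0 + 3.0 * math.sin(x) ** 2

def tabu_search(x0, iters=300, tabu_len=10):
    x, best = x0, x0
    tabu = []
    for _ in range(iters):
        # Random moves: candidate neighbours at random step lengths.
        candidates = [x + random.uniform(-1.0, 1.0) for _ in range(8)]
        # Tabu restriction: discard candidates near recently visited points.
        candidates = [c for c in candidates
                      if all(abs(c - t) > 0.05 for t in tabu)]
        if not candidates:
            continue
        x = min(candidates, key=f)       # best admissible neighbour
        tabu.append(x)
        if len(tabu) > tabu_len:
            tabu.pop(0)                  # short-term memory: forget oldest
        if f(x) < f(best):
            best = x
    return best

best = tabu_search(8.0)
```

No derivative information is used anywhere, which is the property the abstract highlights; constraints in the structural examples would be handled by rejecting or penalizing infeasible candidates.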
Global nonlinear optimization of spacecraft protective structures design
NASA Technical Reports Server (NTRS)
Mog, R. A.; Lovett, J. N., Jr.; Avans, S. L.
1990-01-01
The global optimization of protective structural designs for spacecraft subject to hypervelocity meteoroid and space debris impacts is presented. This nonlinear problem is first formulated for weight minimization of the space station core module configuration using the Nysmith impact predictor. Next, the equivalence and uniqueness of local and global optima is shown using properties of convexity. This analysis results in a new feasibility condition for this problem. The solution existence is then shown, followed by a comparison of optimization techniques. Finally, a sensitivity analysis is presented to determine the effects of variations in the systemic parameters on optimal design. The results show that global optimization of this problem is unique and may be achieved by a number of methods, provided the feasibility condition is satisfied. Furthermore, module structural design thicknesses and weight increase with increasing projectile velocity and diameter and decrease with increasing separation between bumper and wall for the Nysmith predictor.
Improved method for transonic airfoil design-by-optimization
NASA Technical Reports Server (NTRS)
Kennelly, R. A., Jr.
1983-01-01
An improved method for the use of optimization techniques in transonic airfoil design is demonstrated. FLO6QNM incorporates a modified quasi-Newton optimization package and is shown to be more reliable and efficient than the method developed previously at NASA-Ames, which used the COPES/CONMIN optimization program. The design codes are compared on a series of test cases with known solutions, and the effects of problem scaling, proximity of the initial point to the solution, and objective-function precision are studied. In contrast to the older method, well-converged solutions are shown to be attainable in the context of engineering design using computational fluid dynamics tools, a new result. The improvements are due to better performance by the optimization routine and to the use of problem-adaptive finite-difference step sizes for gradient evaluation.
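The "problem-adaptive finite-difference step sizes" point can be illustrated directly: when the objective is only accurate to some finite precision (as with an iteratively converged flow solution), a naive tiny step is swamped by noise, while a step sized to the objective precision (here the cube root, the standard balance for central differences) recovers the gradient. The function and precision level below are invented.

```python
import math

EPS_F = 1e-8  # assumed relative precision of the objective evaluation

def f(x):
    # Objective with simulated finite precision, like a converged flow solve.
    return round(math.exp(x), 8)

def central_diff(x, h):
    return (f(x + h) - f(x - h)) / (2 * h)

x = 1.0
h_naive = 1e-12                    # far too small once noise is present
h_adapt = EPS_F ** (1.0 / 3.0)     # balances truncation vs. noise error
true_grad = math.exp(x)
err_naive = abs(central_diff(x, h_naive) - true_grad)
err_adapt = abs(central_diff(x, h_adapt) - true_grad)
```

With the naive step the two rounded function values coincide and the gradient estimate collapses, while the adapted step keeps the error near the theoretical floor; scaling the step per variable in this way is the kind of improvement the abstract credits.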
Fuel Injector Design Optimization for an Annular Scramjet Geometry
NASA Technical Reports Server (NTRS)
Steffen, Christopher J., Jr.
2003-01-01
A four-parameter, three-level, central composite experiment design has been used to optimize the configuration of an annular scramjet injector geometry using computational fluid dynamics. The computational fluid dynamic solutions played the role of computer experiments, and response surface methodology was used to capture the simulation results for mixing efficiency and total pressure recovery within the scramjet flowpath. An optimization procedure, based upon the response surface results of mixing efficiency, was used to compare the optimal design configuration against the target efficiency value of 92.5%. The results of three different optimization procedures are presented and all point to the need to look outside the current design space for different injector geometries that can meet or exceed the stated mixing efficiency target.
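The central-composite / response-surface workflow can be sketched end to end for two factors: evaluate a stand-in "computer experiment" at the CCD points, then fit the six-coefficient quadratic surface by least squares. The response function below is an invented quadratic, so the fit recovers its coefficients exactly; a real application (like the four-parameter scramjet study) fits CFD results with residual error.

```python
import math

def true_response(x1, x2):
    # Stand-in "CFD experiment": mixing efficiency with a known optimum.
    return 92.0 - 3.0 * (x1 - 0.3) ** 2 - 2.0 * (x2 + 0.2) ** 2 + x1 * x2

def ccd_points(alpha=math.sqrt(2)):
    pts = [(sx, sy) for sx in (-1, 1) for sy in (-1, 1)]        # factorial
    pts += [(-alpha, 0), (alpha, 0), (0, -alpha), (0, alpha)]   # axial
    pts += [(0, 0)]                                             # center
    return pts

def fit_quadratic(pts, ys):
    # Least squares for y = b0 + b1*x1 + b2*x2 + b3*x1^2 + b4*x2^2 + b5*x1*x2
    X = [[1, x1, x2, x1 * x1, x2 * x2, x1 * x2] for x1, x2 in pts]
    n = 6
    # Normal equations A beta = b.
    A = [[sum(X[k][i] * X[k][j] for k in range(len(X))) for j in range(n)]
         for i in range(n)]
    b = [sum(X[k][i] * ys[k] for k in range(len(X))) for i in range(n)]
    # Gaussian elimination with partial pivoting.
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, n):
            m = A[r][i] / A[i][i]
            for c in range(i, n):
                A[r][c] -= m * A[i][c]
            b[r] -= m * b[i]
    beta = [0.0] * n
    for i in range(n - 1, -1, -1):
        beta[i] = (b[i] - sum(A[i][c] * beta[c]
                              for c in range(i + 1, n))) / A[i][i]
    return beta

pts = ccd_points()
ys = [true_response(x1, x2) for x1, x2 in pts]
beta = fit_quadratic(pts, ys)
```

Once the surface is fitted, the optimization step in the paper searches this cheap polynomial (rather than rerunning CFD) for the injector settings that maximize predicted mixing efficiency.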
Designing and optimizing a healthcare kiosk for the community.
Lyu, Yongqiang; Vincent, Christopher James; Chen, Yu; Shi, Yuanchun; Tang, Yida; Wang, Wenyao; Liu, Wei; Zhang, Shuangshuang; Fang, Ke; Ding, Ji
2015-03-01
Investigating new ways to deliver care, such as the use of self-service kiosks to collect and monitor signs of wellness, supports healthcare efficiency and inclusivity. Self-service kiosks offer this potential, but there is a need for solutions to meet acceptable standards, e.g. provision of accurate measurements. This study investigates the design and optimization of a prototype healthcare kiosk to collect vital signs measures. The design problem was decomposed, formalized, focused and used to generate multiple solutions. Systematic implementation and evaluation allowed for the optimization of measurement accuracy, first for individuals and then for a population. The optimized solution was tested independently to check the suitability of the methods, and quality of the solution. The process resulted in a reduction of measurement noise and an optimal fit, in terms of the positioning of measurement devices. This guaranteed the accuracy of the solution and provides a general methodology for similar design problems.
Role of Design Standards in Wind Plant Optimization (Presentation)
Veers, P.; Churchfield, M.; Lee, S.; Moon, J.; Larsen, G.
2013-10-01
When a turbine is optimized, it is done within the design constraints established by the objective criteria in the international design standards used to certify a design. Since these criteria are multifaceted, it is a challenging task to conduct the optimization, but it can be done. The optimization is facilitated by the fact that a standard turbine model is subjected to standard inflow conditions that are well characterized in the standard. Examples of applying these conditions to rotor optimization are examined. In other cases, an innovation may provide substantial improvement in one area, but be challenged to impact all of the myriad design load cases. When a turbine is placed in a wind plant, the challenge is magnified. Typical design practice optimizes the turbine for stand-alone operation, and then runs a check on the actual site conditions, including wakes from all nearby turbines. Thus, each turbine in a plant has unique inflow conditions. The possibility of creating objective and consistent inflow conditions for turbines within a plant, for use in optimization of the turbine and the plant, is examined with examples taken from LES simulations.
Multiobjective hyper heuristic scheme for system design and optimization
NASA Astrophysics Data System (ADS)
Rafique, Amer Farhan
2012-11-01
As system design becomes more multifaceted, integrated, and complex, the traditional single-objective approach to optimal design is becoming less efficient and effective. Single-objective optimization methods yield a unique optimal solution, whereas multiobjective methods yield a Pareto front. The foremost intent is to predict a reasonably distributed Pareto-optimal solution set, independent of the problem instance, through a multiobjective scheme. A further objective of the intended approach is to improve the worth of the outputs of the complex engineering system design process at the conceptual design phase. The process is automated in order to give the system designer the leverage of studying and analyzing a large number of possible solutions in a short time. This article presents a Multiobjective Hyper Heuristic Optimization Scheme based on low-level meta-heuristics developed for application in engineering system design. Herein, we present a stochastic function to manage the low-level meta-heuristics and improve the certainty of reaching a global optimum solution. A Genetic Algorithm, Simulated Annealing, and Swarm Intelligence are used as the low-level meta-heuristics in this study. Performance of the proposed scheme is investigated through a comprehensive empirical analysis yielding acceptable results. One of the primary motives for performing multiobjective optimization is that current engineering systems require simultaneous optimization of multiple, conflicting objectives. Random decision making makes the implementation of this scheme attractive and easy. Injecting feasible solutions significantly alters the search direction and also adds population diversity, helping to accomplish the pre-defined goals set in the proposed scheme.
Sutradhar, Alok; Park, Jaejong; Carrau, Diana; Nguyen, Tam H; Miller, Michael J; Paulino, Glaucio H
2016-07-01
Large craniofacial defects require efficient bone replacements that not only provide good aesthetics but also possess stable structural function. The proposed work uses a novel multiresolution topology optimization method to achieve this task. Using a compliance-minimization objective, patient-specific bone replacement shapes can be designed for different clinical cases to ensure the revival of efficient load-transfer mechanisms in the mid-face. In this work, four clinical cases are introduced, and their respective patient-specific designs are obtained using the proposed method. The optimized designs are then virtually inserted into the defect to visually inspect their viability. Further, once a design is verified by the reconstructive surgeon, prototypes are fabricated using a 3D printer for validation. The robustness of the designs is mechanically tested by subjecting them to a physiological loading condition that mimics masticatory activity. The full-field strain results from 3D image correlation and the finite element analysis indicate that the solution can survive a maximum masticatory load of 120 lb. The designs also have the potential to restore the buttress system and provide structural integrity. Using this topology optimization framework to design bone replacement shapes would give surgeons new alternatives for otherwise complicated mid-face reconstructions.
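The compliance-minimization objective mentioned above is commonly posed in the standard density-based (SIMP-style) form below; this is the generic formulation, not necessarily the authors' exact multiresolution variant. Here x_e are element densities, u the nodal displacements, f the applied loads, K the stiffness matrix, v_e the element volumes, and V the allowed material volume:

```latex
\min_{\mathbf{x}} \; c(\mathbf{x}) = \mathbf{f}^{T}\mathbf{u}(\mathbf{x})
\quad \text{s.t.} \quad
\mathbf{K}(\mathbf{x})\,\mathbf{u}(\mathbf{x}) = \mathbf{f},
\qquad \sum_{e} v_{e}\, x_{e} \le V,
\qquad 0 < x_{\min} \le x_{e} \le 1 .
```

Minimizing compliance under a volume budget is what drives the optimizer toward stiff, load-path-restoring shapes such as the mid-face buttress system.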
Computer Language For Optimization Of Design
NASA Technical Reports Server (NTRS)
Scotti, Stephen J.; Lucas, Stephen H.
1991-01-01
SOL is computer language geared to solution of design problems. Includes mathematical modeling and logical capabilities of computer language like FORTRAN; also includes additional power of nonlinear mathematical programming methods at language level. SOL compiler takes SOL-language statements and generates equivalent FORTRAN code and system calls. Provides syntactic and semantic checking for recovery from errors and provides detailed reports containing cross-references to show where each variable is used. Implemented on VAX/VMS computer systems. Requires VAX FORTRAN compiler to produce executable program.
An optimal trajectory design for debris deorbiting
NASA Astrophysics Data System (ADS)
Ouyang, Gaoxiang; Dong, Xin; Li, Xin; Zhang, Yang
2016-01-01
The problem of deorbiting debris is studied in this paper. As a feasible measure, a disposable satellite would be launched, attach to a piece of debris, and deorbit it using a technology named the electrodynamic tether (EDT). In order to deorbit as many debris objects as possible, a suboptimal but feasible and efficient trajectory set has been designed to allow a deorbiter satellite to tour multiple LEO debris objects in a single mission. Finally, a simulation presented in this paper showed that a 600 kg satellite is capable of deorbiting 6 debris objects in about 230 days.
Optimal design of distributed wastewater treatment networks
Galan, B.; Grossmann, I.E.
1998-10-01
This paper deals with the optimal design of a distributed wastewater network in which multicomponent streams are processed by units that reduce the concentrations of several contaminants. The proposed model gives rise to a nonconvex nonlinear problem which often exhibits local minima and causes convergence difficulties. A search procedure is proposed in this paper that is based on the successive solution of a relaxed linear model and the original nonconvex nonlinear problem. Several examples are presented to illustrate that the proposed method often yields global or near-global optimum solutions. The model is also extended for selecting among different treatment technologies and for handling membrane separation modules.
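The successive-relaxation procedure hinges on replacing the nonconvex bilinear terms (flow rate times contaminant concentration) with linear envelopes so that an LP can be solved first. A minimal sketch of the standard McCormick envelopes for a bilinear term w = x*y over box bounds follows; this is the textbook linear relaxation for such models, and the paper's exact cuts may differ:

```python
def mccormick_under(x, y, xL, xU, yL, yU):
    """Tightest convex (linear) underestimator of w = x*y at (x, y)."""
    return max(xL * y + x * yL - xL * yL,
               xU * y + x * yU - xU * yU)

def mccormick_over(x, y, xL, xU, yL, yU):
    """Tightest concave (linear) overestimator of w = x*y at (x, y)."""
    return min(xU * y + x * yL - xU * yL,
               xL * y + x * yU - xL * yU)

# the true bilinear value always lies between the two envelopes
x, y = 0.4, 0.7
lo = mccormick_under(x, y, 0.0, 1.0, 0.0, 1.0)
hi = mccormick_over(x, y, 0.0, 1.0, 0.0, 1.0)
print(lo <= x * y <= hi)  # True
```

In the relaxed LP each product w replaces x*y subject to these four inequalities; the LP solution then seeds the nonconvex NLP, which is the alternation the abstract describes.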
Global Design Optimization for Aerodynamics and Rocket Propulsion Components
NASA Technical Reports Server (NTRS)
Shyy, Wei; Papila, Nilay; Vaidyanathan, Rajkumar; Tucker, Kevin; Turner, James E. (Technical Monitor)
2000-01-01
Modern computational and experimental tools for aerodynamics and propulsion applications have matured to a stage where they can provide substantial insight into engineering processes involving fluid flows, and can be fruitfully utilized to help improve the design of practical devices. In particular, rapid and continuous development in aerospace engineering demands that new design concepts be regularly proposed to meet goals for increased performance, robustness and safety while concurrently decreasing cost. To date, the majority of the effort in design optimization of fluid dynamics has relied on gradient-based search algorithms. Global optimization methods can utilize the information collected from various sources and by different tools. These methods offer multi-criterion optimization, handle the existence of multiple design points and trade-offs via insight into the entire design space, can easily perform tasks in parallel, and are often effective in filtering the noise intrinsic to numerical and experimental data. However, a successful application of the global optimization method needs to address issues related to data requirements with an increase in the number of design variables, and methods for predicting the model performance. In this article, we review recent progress made in establishing suitable global optimization techniques employing neural network and polynomial-based response surface methodologies. Issues addressed include techniques for construction of the response surface, design of experiment techniques for supplying information in an economical manner, optimization procedures and multi-level techniques, and assessment of relative performance between polynomials and neural networks. Examples drawn from wing aerodynamics, turbulent diffuser flows, gas-gas injectors, and supersonic turbines are employed to help demonstrate the issues involved in an engineering design context. Both the usefulness of the existing knowledge to aid current design…
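The polynomial response-surface idea reviewed here can be illustrated with a toy one-variable case: fit a quadratic through sampled responses, then take the stationary point of the fit as the candidate optimum. The sample points and response values below are hypothetical, and a real study would use many more samples than coefficients and check the fit quality:

```python
def solve3(A, b):
    """Gaussian elimination with partial pivoting for a small linear system."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

# hypothetical responses sampled at three design points, e.g. y = (x - 1)^2 + 2
xs = [0.0, 1.0, 2.0]
ys = [3.0, 2.0, 3.0]
A = [[1.0, x, x * x] for x in xs]      # Vandermonde rows for a + b*x + c*x^2
a, b, c = solve3(A, ys)
x_star = -b / (2.0 * c)                # stationary point of the fitted quadratic
print(x_star)  # 1.0
```

With more samples than coefficients, the same normal-equations machinery yields a least-squares surrogate, which is the role the response surface plays in the global optimization loop described above.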
Performance Trend of Different Algorithms for Structural Design Optimization
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Coroneos, Rula M.; Guptill, James D.; Hopkins, Dale A.
1996-01-01
Nonlinear programming algorithms play an important role in structural design optimization. Fortunately, several algorithms with computer codes are available. At NASA Lewis Research Center, a project was initiated to assess the performance of different optimizers through the development of a computer code CometBoards. This paper summarizes the conclusions of that research. CometBoards was employed to solve sets of small, medium and large structural problems, using different optimizers on a Cray-YMP8E/8128 computer. The reliability and efficiency of the optimizers were determined from the performance of these problems. For small problems, the performance of most of the optimizers could be considered adequate. For large problems, however, three optimizers (two sequential quadratic programming routines, DNCONG of IMSL and SQP of IDESIGN, along with the sequential unconstrained minimizations technique SUMT) outperformed others. At optimum, most optimizers captured an identical number of active displacement and frequency constraints but the number of active stress constraints differed among the optimizers. This discrepancy can be attributed to singularity conditions in the optimization and the alleviation of this discrepancy can improve the efficiency of optimizers.
Comparative Evaluation of Different Optimization Algorithms for Structural Design Applications
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Coroneos, Rula M.; Guptill, James D.; Hopkins, Dale A.
1996-01-01
Non-linear programming algorithms play an important role in structural design optimization. Fortunately, several algorithms with computer codes are available. At NASA Lewis Research Center, a project was initiated to assess the performance of eight different optimizers through the development of a computer code CometBoards. This paper summarizes the conclusions of that research. CometBoards was employed to solve sets of small, medium and large structural problems, using the eight different optimizers on a Cray-YMP8E/8128 computer. The reliability and efficiency of the optimizers were determined from the performance of these problems. For small problems, the performance of most of the optimizers could be considered adequate. For large problems, however, three optimizers (two sequential quadratic programming routines, DNCONG of IMSL and SQP of IDESIGN, along with the Sequential Unconstrained Minimizations Technique SUMT) outperformed others. At optimum, most optimizers captured an identical number of active displacement and frequency constraints but the number of active stress constraints differed among the optimizers. This discrepancy can be attributed to singularity conditions in the optimization and the alleviation of this discrepancy can improve the efficiency of optimizers.
Parallel optimization algorithms and their implementation in VLSI design
NASA Technical Reports Server (NTRS)
Lee, G.; Feeley, J. J.
1991-01-01
Two new parallel optimization algorithms based on the simplex method are described. They may be executed by a SIMD parallel processor architecture and be implemented in VLSI design. Several VLSI design implementations are introduced. An application example is reported to demonstrate that the algorithms are effective.
Optimal Test Design with Rule-Based Item Generation
ERIC Educational Resources Information Center
Geerlings, Hanneke; van der Linden, Wim J.; Glas, Cees A. W.
2013-01-01
Optimal test-design methods are applied to rule-based item generation. Three different cases of automated test design are presented: (a) test assembly from a pool of pregenerated, calibrated items; (b) test generation on the fly from a pool of calibrated item families; and (c) test generation on the fly directly from calibrated features defining…
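Optimal test-design criteria of this kind are typically built on item information functions. As a generic sketch (assuming a Rasch model, which the article itself need not use), one can select from a pool the item whose Fisher information is largest at the examinee's current ability estimate:

```python
import math

def rasch_info(theta, b):
    """Fisher information of a Rasch item with difficulty b at ability theta."""
    p = 1.0 / (1.0 + math.exp(-(theta - b)))
    return p * (1.0 - p)

# pick the most informative item from a hypothetical pool for theta = 0.5
pool = [-1.0, 0.0, 0.4, 1.5]
best = max(pool, key=lambda b: rasch_info(0.5, b))
print(best)  # 0.4 -- information peaks where difficulty matches ability
```

On-the-fly generation from calibrated item families, as in cases (b) and (c) above, replaces the fixed pool with a family whose parameters are drawn from a calibrated distribution, but the information-maximizing selection step is analogous.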
Teaching Optimal Design of Experiments Using a Spreadsheet
ERIC Educational Resources Information Center
Goos, Peter; Leemans, Herlinde
2004-01-01
In this paper, we present an interactive teaching approach to introduce the concept of optimal design of experiments to students. Our approach is based on the use of spreadsheets. One advantage of this approach is that neither complex mathematical theory nor any design construction algorithm needs to be discussed at the introductory stage.…
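The kind of calculation such a spreadsheet exercise involves can be mimicked in a few lines: for a straight-line model y = b0 + b1*x, the D-optimality criterion det(X'X) has a 2x2 closed form, and comparing candidate designs shows why replicated endpoint runs beat equispaced ones. The specific designs below are illustrative, not taken from the paper:

```python
def d_criterion(xs):
    """det(X'X) for the straight-line model y = b0 + b1*x (2x2 closed form)."""
    n = len(xs)
    sx = sum(xs)
    sxx = sum(x * x for x in xs)
    return n * sxx - sx * sx  # det [[n, sx], [sx, sxx]]

endpoints = [-1.0, -1.0, 1.0, 1.0]            # two runs at each end of [-1, 1]
equispaced = [-1.0, -1.0 / 3.0, 1.0 / 3.0, 1.0]
print(d_criterion(endpoints) > d_criterion(equispaced))  # True
```

Students can recompute det(X'X) cell by cell as design points are moved, which is exactly the kind of interactive exploration a spreadsheet supports.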
An Optimal Design Approach to Criterion-Referenced Computerized Testing
ERIC Educational Resources Information Center
Wiberg, Marie
2003-01-01
A criterion-referenced computerized test is expressed as a statistical hypothesis problem. This allows it to be studied using the theory of optimal design. The power function of the statistical test is used as a criterion function when designing the test. A formal proof is provided showing that all items should have the same item…