Science.gov

Sample records for optimal models model

  1. Modeling using optimization routines

    NASA Technical Reports Server (NTRS)

    Thomas, Theodore

    1995-01-01

    Modeling using mathematical optimization is a design tool used in magnetic suspension system development. MATLAB software is used to calculate the minimum cost subject to the other desired constraints. The parameters to be measured are programmed into mathematical equations, and MATLAB calculates an answer for each set of inputs, with the inputs covering the boundary limits of the design. A Magnetic Suspension System using Electromagnets Mounted in a Planar Array is a design system that makes use of optimization modeling.
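
    A minimal sketch of this kind of minimum-cost design optimization, written in Python/SciPy rather than the record's MATLAB; the cost model, force model, and all numbers below are illustrative placeholders, not the actual suspension design equations:

```python
# Hypothetical sketch of a constrained design optimization, using SciPy in
# place of MATLAB. Cost and force models are illustrative placeholders.
import numpy as np
from scipy.optimize import minimize

def coil_cost(x):
    """Illustrative cost: material cost grows with coil radius and turns."""
    radius, turns = x
    return radius**2 * turns * 0.05  # placeholder cost coefficient

def lift_force(x):
    """Illustrative force model; a real design would use field equations."""
    radius, turns = x
    return 0.8 * turns * radius  # placeholder

# Require the suspension force to meet a minimum, within design bounds.
constraints = [{"type": "ineq", "fun": lambda x: lift_force(x) - 50.0}]
bounds = [(0.01, 0.5), (10, 2000)]  # radius [m], number of turns

result = minimize(coil_cost, x0=[0.1, 500], bounds=bounds,
                  constraints=constraints, method="SLSQP")
print(result.x, result.fun)
```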

  2. HOMER® Micropower Optimization Model

    SciTech Connect

    Lilienthal, P.

    2005-01-01

    NREL has developed the HOMER micropower optimization model. The model can analyze all of the available small power technologies individually and in hybrid configurations to identify least-cost solutions to energy requirements. This capability is valuable to a diverse set of energy professionals and applications. NREL has actively supported its growing user base and developed training programs around the model. These activities are helping to grow the global market for solar technologies.

  3. Optimization in Cardiovascular Modeling

    NASA Astrophysics Data System (ADS)

    Marsden, Alison L.

    2014-01-01

    Fluid mechanics plays a key role in the development, progression, and treatment of cardiovascular disease. Advances in imaging methods and patient-specific modeling now reveal increasingly detailed information about blood flow patterns in health and disease. Building on these tools, there is now an opportunity to couple blood flow simulation with optimization algorithms to improve the design of surgeries and devices, incorporating more information about the flow physics in the design process to augment current medical knowledge. In doing so, a major challenge is the need for efficient optimization tools that are appropriate for unsteady fluid mechanics problems, particularly for the optimization of complex patient-specific models in the presence of uncertainty. This article reviews the state of the art in optimization tools for virtual surgery, device design, and model parameter identification in cardiovascular flow and mechanobiology applications. In particular, it reviews trade-offs between traditional gradient-based methods and derivative-free approaches, as well as the need to incorporate uncertainties. Key future challenges are outlined, which extend to the incorporation of biological response and the customization of surgeries and devices for individual patients.

  4. Boiler modeling optimizes sootblowing

    SciTech Connect

    Piboontum, S.J.; Swift, S.M.; Conrad, R.S.

    2005-10-01

    Controlling the cleanliness and limiting the fouling and slagging of heat-transfer surfaces are absolutely necessary to optimize boiler performance. The traditional way to clean heat-transfer surfaces is by sootblowing using air, steam, or water at regular intervals. But with the advent of fuel-switching strategies, such as switching to PRB coal to reduce a plant's emissions, the control of heating-surface cleanliness has become more problematic for many owners of steam generators. Boiler modeling can help solve that problem. The article describes Babcock & Wilcox's Powerclean modeling system, which consists of heating-surface models that produce real-time cleanliness indexes. The Heat Transfer Manager (HTM) program is the core of the system, which can be used on any make or model of boiler. A case study describes how the system was successfully used at the 1,350 MW Unit 2 of American Electric Power's Rockport Power Plant in Indiana. The unit fires a blend of eastern bituminous and Powder River Basin coal.

  5. Legal Policy Optimizing Models

    ERIC Educational Resources Information Center

    Nagel, Stuart; Neef, Marian

    1977-01-01

    The use of mathematical models originally developed by economists and operations researchers is described for legal process research. Situations involving plea bargaining, arraignment, and civil liberties illustrate the applicability of decision theory, inventory modeling, and linear programming in operations research. (LBH)

  6. Pyomo: Python Optimization Modeling Objects.

    SciTech Connect

    Siirola, John; Laird, Carl Damon; Hart, William Eugene; Watson, Jean-Paul

    2010-11-01

    The Python Optimization Modeling Objects (Pyomo) package [1] is an open source tool for modeling optimization applications within Python. Pyomo provides an object-oriented approach to optimization modeling, and it can be used to define symbolic problems, create concrete problem instances, and solve these instances with standard solvers. While Pyomo provides a capability that is commonly associated with algebraic modeling languages such as AMPL, AIMMS, and GAMS, Pyomo's modeling objects are embedded within a full-featured high-level programming language with a rich set of supporting libraries. Pyomo leverages the capabilities of the Coopr software library [2], which integrates Python packages (including Pyomo) for defining optimizers, modeling optimization applications, and managing computational experiments. A central design principle within Pyomo is extensibility. Pyomo is built upon a flexible component architecture [3] that allows users and developers to readily extend the core Pyomo functionality. Through these interface points, extensions and applications can have direct access to an optimization model's expression objects. This facilitates the rapid development and implementation of new modeling constructs as well as high-level solution strategies (e.g. using decomposition- and reformulation-based techniques). In this presentation, we will give an overview of the Pyomo modeling environment and model syntax, and present several extensions to the core Pyomo environment, including support for Generalized Disjunctive Programming (Coopr GDP), Stochastic Programming (PySP), a generic Progressive Hedging solver [4], and a tailored implementation of Benders decomposition.
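
    A minimal Pyomo model in this spirit: symbolic variables, an objective, and a constraint assembled in Python and handed to a standard solver (GLPK here, assuming it is installed; the tiny LP itself is ours, for illustration only):

```python
# A minimal Pyomo ConcreteModel: define variables, an objective, and a
# constraint symbolically, then solve with an external LP solver.
from pyomo.environ import (ConcreteModel, Var, Objective, Constraint,
                           NonNegativeReals, minimize, SolverFactory)

model = ConcreteModel()
model.x = Var(domain=NonNegativeReals)
model.y = Var(domain=NonNegativeReals)

# Objective: minimize the total cost of two activities.
model.cost = Objective(expr=2 * model.x + 3 * model.y, sense=minimize)

# A demand constraint coupling the two variables.
model.demand = Constraint(expr=3 * model.x + 4 * model.y >= 1)

SolverFactory("glpk").solve(model)
print(model.x.value, model.y.value)
```

    The same ConcreteModel pattern extends to the indexed sets, parameters, and constraint rules used in realistic models.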

  7. Risk modelling in portfolio optimization

    NASA Astrophysics Data System (ADS)

    Lam, W. H.; Jaaman, Saiful Hafizah Hj.; Isa, Zaidi

    2013-09-01

    Risk management is very important in portfolio optimization. The mean-variance model has been used in portfolio optimization to minimize investment risk: its objective is to minimize portfolio risk while achieving a target rate of return, with variance serving as the risk measure. The purpose of this study is to compare the composition and performance of the optimal portfolio of the mean-variance model with those of an equally weighted portfolio, in which equal proportions are invested in each asset. The results show that the compositions of the mean-variance optimal portfolio and the equally weighted portfolio are different. Moreover, the mean-variance optimal portfolio gives better performance, achieving a higher performance ratio than the equally weighted portfolio.
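
    A sketch of the comparison the study describes, assuming synthetic return data in place of the study's asset data; minimum-variance weights under a target-return constraint are set against equal weights:

```python
# Minimum-variance portfolio with a target-return constraint versus an
# equally weighted portfolio, on synthetic stand-in data.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
returns = rng.normal(0.001, 0.02, size=(500, 4))   # 500 days, 4 assets
mu, cov = returns.mean(axis=0), np.cov(returns.T)
target = mu.mean()                                  # illustrative target return

def variance(w):
    return w @ cov @ w

cons = [{"type": "eq", "fun": lambda w: w.sum() - 1.0},
        {"type": "ineq", "fun": lambda w: w @ mu - target}]
res = minimize(variance, x0=np.full(4, 0.25), bounds=[(0, 1)] * 4,
               constraints=cons, method="SLSQP")

w_mv, w_eq = res.x, np.full(4, 0.25)
for name, w in [("mean-variance", w_mv), ("equal-weight", w_eq)]:
    print(name, "return:", w @ mu, "risk:", np.sqrt(variance(w)))
```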

  8. Optimal designs for copula models

    PubMed Central

    Perrone, E.; Müller, W.G.

    2016-01-01

    Copula modelling has in the past decade become a standard tool in many areas of applied statistics. However, a largely neglected aspect concerns the design of related experiments. Particularly the issue of whether the estimation of copula parameters can be enhanced by optimizing experimental conditions and how robust all the parameter estimates for the model are with respect to the type of copula employed. In this paper an equivalence theorem for (bivariate) copula models is provided that allows formulation of efficient design algorithms and quick checks of whether designs are optimal or at least efficient. Some examples illustrate that in practical situations considerable gains in design efficiency can be achieved. A natural comparison between different copula models with respect to design efficiency is provided as well. PMID:27453616

  9. Adaptive approximation models in optimization

    SciTech Connect

    Voronin, A.N.

    1995-05-01

    The paper proposes a method for optimization of functions of several variables that substantially reduces the number of objective function evaluations compared to traditional methods. The method is based on the property of iterative refinement of approximation models of the optimand function in approximation domains that contract to the extremum point. It does not require subjective specification of the starting point, step length, or other parameters of the search procedure. The method is designed for efficient optimization of unimodal functions of several (not more than 10-15) variables and can be applied to find the global extremum of polymodal functions and also for optimization of scalarized forms of vector objective functions.
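
    A toy illustration of the general idea (not the paper's algorithm): fit a local quadratic approximation to samples drawn from a box, step toward its minimizer, and contract the box toward the extremum:

```python
# Iterative refinement of a local quadratic approximation model inside a
# contracting search domain. Test function and settings are illustrative.
import numpy as np

def f(x):  # unimodal test function standing in for the objective
    return (x[0] - 1.2)**2 + 2.0 * (x[1] + 0.7)**2

rng = np.random.default_rng(1)
center, width = np.zeros(2), 4.0
for _ in range(25):
    pts = center + width * (rng.random((30, 2)) - 0.5)
    vals = np.array([f(p) for p in pts])
    # Least-squares quadratic surrogate with terms [1, x, y, x^2, y^2, xy].
    A = np.column_stack([np.ones(30), pts, pts**2, pts[:, 0] * pts[:, 1]])
    c, *_ = np.linalg.lstsq(A, vals, rcond=None)
    # Stationary point of the surrogate: solve grad = 0 (a 2x2 linear system).
    H = np.array([[2 * c[3], c[5]], [c[5], 2 * c[4]]])
    try:
        cand = np.linalg.solve(H, -c[1:3])
    except np.linalg.LinAlgError:
        cand = pts[vals.argmin()]
    cand = np.clip(cand, center - width / 2, center + width / 2)
    # Keep whichever point is best, then contract the approximation domain.
    center = cand if f(cand) < vals.min() else pts[vals.argmin()]
    width *= 0.7
print(center)  # converges toward the true minimizer (1.2, -0.7)
```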

  10. How Optimal Is the Optimization Model?

    ERIC Educational Resources Information Center

    Heine, Bernd

    2013-01-01

    Pieter Muysken's article on modeling and interpreting language contact phenomena constitutes an important contribution. The approach chosen is a top-down one, building on the author's extensive knowledge of all matters relating to language contact. The paper aims at integrating a wide range of factors and levels of social, cognitive, and…

  11. Optimal Decision Making in Neural Inhibition Models

    ERIC Educational Resources Information Center

    van Ravenzwaaij, Don; van der Maas, Han L. J.; Wagenmakers, Eric-Jan

    2012-01-01

    In their influential "Psychological Review" article, Bogacz, Brown, Moehlis, Holmes, and Cohen (2006) discussed optimal decision making as accomplished by the drift diffusion model (DDM). The authors showed that neural inhibition models, such as the leaky competing accumulator model (LCA) and the feedforward inhibition model (FFI), can mimic the…

  12. Optimal Designs for the Rasch Model

    ERIC Educational Resources Information Center

    Grasshoff, Ulrike; Holling, Heinz; Schwabe, Rainer

    2012-01-01

    In this paper, optimal designs will be derived for estimating the ability parameters of the Rasch model when difficulty parameters are known. It is well established that a design is locally D-optimal if the ability and difficulty coincide. But locally optimal designs require that the ability parameters to be estimated are known. To attenuate this…

  13. Portfolio optimization with mean-variance model

    NASA Astrophysics Data System (ADS)

    Hoe, Lam Weng; Siew, Lam Weng

    2016-06-01

    Investors wish to achieve the target rate of return at the minimum level of risk in their investment. Portfolio optimization is an investment strategy that can be used to minimize the portfolio risk while achieving the target rate of return. The mean-variance model has been proposed for portfolio optimization; it aims to minimize the portfolio risk, measured as the portfolio variance. The objective of this study is to construct the optimal portfolio using the mean-variance model. The data of this study consist of weekly returns of 20 component stocks of the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI). The results of this study show that the optimal weights assigned to the stocks differ, and that investors can obtain the target return at the minimum level of risk with the constructed optimal mean-variance portfolio.
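
    For reference, the mean-variance model described in this and the earlier portfolio records is conventionally written as the Markowitz quadratic program (standard textbook notation, ours; the abstract does not state the formulation explicitly):

```latex
\begin{aligned}
\min_{w}\quad    & w^{\top}\Sigma w        && \text{(portfolio variance)} \\
\text{s.t.}\quad & \mu^{\top}w \ge r^{*}   && \text{(target rate of return)} \\
                 & \mathbf{1}^{\top}w = 1, \quad w \ge 0 && \text{(fully invested, no short selling)}
\end{aligned}
```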

  14. Modelling and Optimizing Mathematics Learning in Children

    ERIC Educational Resources Information Center

    Käser, Tanja; Busetto, Alberto Giovanni; Solenthaler, Barbara; Baschera, Gian-Marco; Kohn, Juliane; Kucian, Karin; von Aster, Michael; Gross, Markus

    2013-01-01

    This study introduces a student model and control algorithm, optimizing mathematics learning in children. The adaptive system is integrated into a computer-based training system for enhancing numerical cognition aimed at children with developmental dyscalculia or difficulties in learning mathematics. The student model consists of a dynamic…

  15. Enhanced index tracking modelling in portfolio optimization

    NASA Astrophysics Data System (ADS)

    Lam, W. S.; Hj. Jaaman, Saiful Hafizah; Ismail, Hamizun bin

    2013-09-01

    Enhanced index tracking is a popular form of passive fund management in the stock market. It is a dual-objective optimization problem, a trade-off between maximizing the mean return and minimizing the risk. Enhanced index tracking aims to generate excess return over the return achieved by the index, without purchasing all of the stocks that make up the index, by establishing an optimal portfolio. The objective of this study is to determine the optimal portfolio composition and performance by using a weighted model in enhanced index tracking. The weighted model focuses on the trade-off between the excess return and the risk. The results of this study show that the optimal portfolio of the weighted model is able to outperform the Malaysian market index, the Kuala Lumpur Composite Index, because of its higher mean return and lower risk, without purchasing all the stocks in the market index.
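
    One common way to write such a weighted dual-objective tracking model (our notation, not necessarily the authors' exact formulation) is a scalarization with a trade-off parameter λ:

```latex
\min_{w}\; \lambda\, w^{\top}\Sigma w \;-\; (1-\lambda)\,\bigl(\mu^{\top}w - r_{I}\bigr),
\qquad \mathbf{1}^{\top}w = 1,\quad w \ge 0,\quad \lambda \in [0,1],
```

    where $r_I$ is the mean return of the tracked index; sweeping λ traces out the trade-off between excess return and risk.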

  16. Making models match measurements: Model optimization for morphogen patterning networks

    PubMed Central

    Hengenius, JB; Gribskov, MR; Rundell, AE; Umulis, DM

    2015-01-01

    Mathematical modeling of developmental signaling networks has played an increasingly important role in the identification of regulatory mechanisms by providing a sandbox for hypothesis testing and experiment design. Whether these models consist of an equation with a few parameters or dozens of equations with hundreds of parameters, a prerequisite to model-based discovery is to bring simulated behavior into agreement with observed data via parameter estimation. These parameters provide insight into the system (e.g., enzymatic rate constants describe enzyme properties). Depending on the nature of the model fit desired - from qualitative (relative spatial positions of phosphorylation) to quantitative (exact agreement of spatial position and concentration of gene products) - different measures of data-model mismatch are used to estimate different parameter values, which contain different levels of usable information and/or uncertainty. To facilitate the adoption of modeling as a tool for discovery alongside other tools such as genetics, immunostaining, and biochemistry, careful consideration needs to be given to how well a model fits the available data, what the optimized parameter values mean in a biological context, and how the uncertainty in model parameters and predictions plays into experiment design. The core discussion herein pertains to the quantification of model-to-data agreement, which constitutes the first measure of a model's performance and future utility to the problem at hand. Integration of this experimental data and the appropriate choice of objective measures of data-model agreement will continue to drive modeling forward as a tool that contributes to experimental discovery. The Drosophila melanogaster gap gene system, in which model parameters are optimized against in situ immunofluorescence intensities, demonstrates the importance of error quantification, which is applicable to a wide array of developmental modeling studies. PMID:25016297

  17. Incorporating routing into reservoir planning optimization models

    NASA Astrophysics Data System (ADS)

    Zmijewski, Nicholas; Wörman, Anders; Bottacin-Busolin, Andrea

    2015-04-01

    To achieve the best overall operation result in a reservoir network, optimization models are used. For larger reservoir networks the computational cost increases, making simplification of the hydrodynamic description necessary, and inaccuracy in the resulting flow prediction translates into sub-optimality in production planning. Flow behavior in a management optimization model is often described using a constant time-lag model. In this study, a simplified hydraulic model describing the stream flow in a reservoir network was used for short-term production planning of a case-study reservoir network (Dalälven River), and the importance of incorporating hydrodynamic wave diffusion for optimized hydropower production planning in a regulated water system was examined by comparing a kinematic-wave model to the constant time-lag model. A receding-horizon optimization procedure was applied, emulating the data-assimilation procedure present in modern operations. Power production was shown to deviate from the planned production when a single constant time-lag was assumed, to a degree that depends on the stream description. The simplification of using a constant time-lag can be considered acceptable for streams characterized by a high Peclet number. Examining the effect of the length of the decision time-step demonstrated the importance of high-frequency data assimilation for streams characterized by low Peclet numbers. Further, it was shown that the variability in flow becomes more ordered as a result of management, and that the Peclet number contributes to that effect.
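
    For context, the two stream descriptions being compared differ in whether they keep the diffusion term of the one-dimensional advection-diffusion routing equation (standard notation, ours):

```latex
\frac{\partial Q}{\partial t} + v\,\frac{\partial Q}{\partial x}
  = D\,\frac{\partial^{2} Q}{\partial x^{2}},
\qquad \mathrm{Pe} = \frac{vL}{D}.
```

    At high Peclet number Pe, advection dominates, the flow wave arrives essentially intact after a delay of about L/v, and a constant time-lag is an acceptable surrogate; at low Pe, diffusion spreads the wave, a fixed lag misrepresents the inflow, and, as the study finds, frequent data assimilation becomes important.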

  18. An overview of the optimization modelling applications

    NASA Astrophysics Data System (ADS)

    Singh, Ajay

    2012-10-01

    Summary: The optimal use of available resources is of paramount importance against the backdrop of the increasing food, fiber, and other demands of the burgeoning global population and the shrinking resource base. The optimal use of these resources can be determined by employing an optimization technique. This paper provides a comprehensive review of the use of various programming techniques for the solution of different optimization problems. The reviewed studies are grouped into nine sections based on the theme-based real-world problems they solve. The sections cover the use of optimization modelling for: conjunctive use planning, groundwater management, seawater intrusion management, irrigation management, achieving optimal cropping patterns, management of reservoir systems operation, management of resources in arid and semi-arid regions, solid waste management, and miscellaneous uses, which comprise managing problems of hydropower generation and the sugar industry. Conclusions are drawn about where gaps exist and where more research needs to be focused.

  19. Improving Heliospheric Field Models with Optimized Coronal Models

    NASA Astrophysics Data System (ADS)

    Jones, S. I.; Davila, J. M.; Uritsky, V. M.

    2015-12-01

    The Solar Orbiter and Solar Probe Plus missions will travel closer to the sun than any previous mission, collecting unprecedented in situ data. These data can provide insight into coronal structure, energy transport, and evolution in the inner heliosphere. However, in order to take full advantage of these data, researchers need quality models of the inner heliosphere to connect the in situ observations to their coronal and photospheric sources. Developing quality models for this region of space has proved difficult, in part because the only part of the field that is accessible for routine measurement is the photosphere. The photospheric field measurements, though somewhat problematic, are used as boundary conditions for coronal models, which often neglect or over-simplify chromospheric conditions, and these coronal models are then used as boundary conditions to drive heliospheric models. The result is a great deal of uncertainty about the accuracy and reliability of the heliospheric models. Here we present a technique we are developing for improving global coronal magnetic field models by optimizing the models to conform to the field morphology observed in coronal images. This agreement between the coronal model and the basic morphology of the corona is essential for creating accurate heliospheric models. We will present results of early tests of two implementations of this idea, and its first application to real-world data.

  20. Optimal control, optimization and asymptotic analysis of Purcell's microswimmer model

    NASA Astrophysics Data System (ADS)

    Wiezel, Oren; Or, Yizhar

    2016-11-01

    Purcell's swimmer (1977) is a classic model of a three-link microswimmer that moves by performing periodic shape changes. Becker et al. (2003) showed that the swimmer's direction of net motion is reversed upon increasing the stroke amplitude of the joint angles. Tam and Hosoi (2007) used numerical optimization in order to find optimal gaits for maximizing either net displacement or Lighthill's energetic efficiency. In our work, we analytically derive leading-order expressions, as well as next-order corrections, for both the net displacement and the energetic efficiency of Purcell's microswimmer. Using these expressions enables us to explicitly show the reversal in direction of motion and to obtain an estimate of the optimal stroke amplitude. We also find the swimmer geometry that maximizes either displacement or energetic efficiency. Additionally, the gait optimization problem is revisited and analytically formulated as an optimal control system with only two state variables, which can be solved using Pontryagin's maximum principle. It can be shown that the optimal solution must follow a "singular arc". A numerical solution of the boundary value problem is obtained, which exactly reproduces Tam and Hosoi's optimal gait.

  1. Modeling optimal mineral nutrition for hazelnut micropropagation

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Micropropagation of hazelnut (Corylus avellana L.) is typically difficult due to the wide variation in response among cultivars. This study was designed to overcome that difficulty by modeling the optimal mineral nutrients for micropropagation of C. avellana selections using a response surface desig...

  2. Optimal Experimental Design for Model Discrimination

    PubMed Central

    Myung, Jay I.; Pitt, Mark A.

    2009-01-01

    Models of a psychological process can be difficult to discriminate experimentally because it is not easy to determine the values of the critical design variables (e.g., presentation schedule, stimulus structure) that will be most informative in differentiating them. Recent developments in sampling-based search methods in statistics make it possible to determine these values, and thereby identify an optimal experimental design. After describing the method, it is demonstrated in two content areas in cognitive psychology in which models are highly competitive: retention (i.e., forgetting) and categorization. The optimal design is compared with the quality of designs used in the literature. The findings demonstrate that design optimization has the potential to increase the informativeness of the experimental method. PMID:19618983

  3. Generalized mathematical models in design optimization

    NASA Technical Reports Server (NTRS)

    Papalambros, Panos Y.; Rao, J. R. Jagannatha

    1989-01-01

    The theory of optimality conditions of extremal problems can be extended to problems continuously deformed by an input vector. The connection between the sensitivity, well-posedness, stability and approximation of optimization problems is steadily emerging. The authors believe that the important realization here is that the underlying basis of all such work is still the study of point-to-set maps and of small perturbations, yet what has been identified previously as being just related to solution procedures is now being extended to study modeling itself in its own right. Many important studies related to the theoretical issues of parametric programming and large deformation in nonlinear programming have been reported in the last few years, and the challenge now seems to be in devising effective computational tools for solving these generalized design optimization models.

  4. Global Optimization Ensemble Model for Classification Methods

    PubMed Central

    Anwar, Hina; Qamar, Usman; Muzaffar Qureshi, Abdul Wahab

    2014-01-01

    Supervised learning is the process of data mining for deducing rules from training datasets. A broad array of supervised learning algorithms exists, every one of them with its own advantages and drawbacks. There are some basic issues that affect the accuracy of a classifier while solving a supervised learning problem, like the bias-variance tradeoff, the dimensionality of the input space, and noise in the input data. All these problems affect the accuracy of a classifier and are the reason that there is no globally optimal method for classification. There is no generalized improvement method that can increase the accuracy of any classifier while addressing all the problems stated above. This paper proposes a global optimization ensemble model for classification methods (GMC) that can improve the overall accuracy for supervised learning problems. The experimental results on various public datasets showed that the proposed model improved the accuracy of the classification models from 1% to 30%, depending upon the algorithm complexity. PMID:24883382

  5. Graphical models for optimal power flow

    SciTech Connect

    Dvijotham, Krishnamurthy; Chertkov, Michael; Van Hentenryck, Pascal; Vuffray, Marc; Misra, Sidhant

    2016-09-13

    Optimal power flow (OPF) is the central optimization problem in electric power grids. Although solved routinely in the course of power grid operations, it is known to be strongly NP-hard in general, and weakly NP-hard over tree networks. In this paper, we formulate the optimal power flow problem over tree networks as an inference problem over a tree-structured graphical model where the nodal variables are low-dimensional vectors. We adapt the standard dynamic programming algorithm for inference over a tree-structured graphical model to the OPF problem. Combining this with an interval discretization of the nodal variables, we develop an approximation algorithm for the OPF problem. Further, we use techniques from constraint programming (CP) to perform interval computations and adaptive bound propagation to obtain practically efficient algorithms. Compared to previous algorithms that solve OPF with optimality guarantees using convex relaxations, our approach is able to work for arbitrary tree-structured distribution networks and handle mixed-integer optimization problems. Further, it can be implemented in a distributed message-passing fashion that is scalable and is suitable for “smart grid” applications like control of distributed energy resources. In conclusion, numerical evaluations on several benchmark networks show that practical OPF problems can be solved effectively using this approach.
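
    A toy illustration of the core construction, with placeholder quadratic costs standing in for the physical power-flow model: discretize each nodal variable on a grid and run min-sum dynamic programming from the leaves of the tree to the root:

```python
# Min-sum dynamic programming over a tree-structured graphical model with
# discretized nodal variables. Costs are arbitrary placeholders, not a
# physical power-flow model.
import numpy as np

children = {0: [1, 2], 1: [3], 2: [], 3: []}       # small distribution tree
grid = np.linspace(0.9, 1.1, 21)                   # discretized nodal variable

def node_cost(i, v):
    return (v - 1.0)**2                            # placeholder local cost

def edge_cost(vp, vc):
    return 10.0 * (vp - vc)**2                     # placeholder line coupling

def solve(i):
    """Return the cost-to-go of the subtree rooted at i, per value of i."""
    table = np.array([node_cost(i, v) for v in grid])
    for c in children[i]:
        child = solve(c)
        # For each parent value, the best child value plus coupling cost.
        table += np.array([min(edge_cost(vp, vc) + child[k]
                               for k, vc in enumerate(grid))
                           for vp in grid])
    return table

root = solve(0)
print("optimal objective:", root.min(), "root value:", grid[root.argmin()])
```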

  6. Graphical models for optimal power flow

    DOE PAGES

    Dvijotham, Krishnamurthy; Chertkov, Michael; Van Hentenryck, Pascal; ...

    2016-09-13

    Optimal power flow (OPF) is the central optimization problem in electric power grids. Although solved routinely in the course of power grid operations, it is known to be strongly NP-hard in general, and weakly NP-hard over tree networks. In this paper, we formulate the optimal power flow problem over tree networks as an inference problem over a tree-structured graphical model where the nodal variables are low-dimensional vectors. We adapt the standard dynamic programming algorithm for inference over a tree-structured graphical model to the OPF problem. Combining this with an interval discretization of the nodal variables, we develop an approximation algorithm for the OPF problem. Further, we use techniques from constraint programming (CP) to perform interval computations and adaptive bound propagation to obtain practically efficient algorithms. Compared to previous algorithms that solve OPF with optimality guarantees using convex relaxations, our approach is able to work for arbitrary tree-structured distribution networks and handle mixed-integer optimization problems. Further, it can be implemented in a distributed message-passing fashion that is scalable and is suitable for “smart grid” applications like control of distributed energy resources. In conclusion, numerical evaluations on several benchmark networks show that practical OPF problems can be solved effectively using this approach.

  7. Modeling and Global Optimization of DNA separation

    PubMed Central

    Fahrenkopf, Max A.; Ydstie, B. Erik; Mukherjee, Tamal; Schneider, James W.

    2014-01-01

    We develop a non-convex non-linear programming problem that determines the minimum run time to resolve different lengths of DNA using a gel-free, micelle end-labeled, free-solution electrophoresis separation method. Our optimization framework allows for efficient determination of the utility of different DNA separation platforms and enables the identification of the optimal operating conditions for these DNA separation devices. The non-linear programming problem requires a model for signal spacing and signal width, which is known for many DNA separation methods. As a case study, we show how our approach is used to determine the optimal run conditions for micelle end-labeled free-solution electrophoresis and examine the trade-offs between a single-capillary system and a parallel-capillary system. Parallel capillaries are shown to be beneficial only for DNA lengths above 230 bases using a polydisperse micelle end-label; otherwise, single capillaries produce faster separations. PMID:24764606

  8. The trapped fluid transducer: modeling and optimization.

    PubMed

    Cheng, Lei; Grosh, Karl

    2008-06-01

    Exact and approximate formulas for calculating the sensitivity and bandwidth of an electroacoustic transducer with an enclosed or trapped fluid volume are developed. The transducer is composed of a fluid-filled rectangular duct with a tapered-width plate on one wall emulating the biological basilar membrane in the cochlea. A three-dimensional coupled fluid-structure model is developed to calculate the transducer sensitivity by using a boundary integral method. The model is used as the basis of an optimization methodology seeking to enhance the transducer performance. Simplified formulas are derived from the model to estimate the transducer sensitivity and the fundamental resonant frequency with good accuracy and much less computational cost. By using the simplified formulas, one can easily design the geometry of the transducer to achieve optimal performance. As an example design, the transducer achieves a sensitivity of around -200 dB (re 1 V/μPa) in the 10 kHz frequency range with piezoelectric sensing. In analogy to the cochlea, a tapered-width plate design is considered and shown to have a more uniform frequency response than a similar plate with no taper.

  9. Modeling, Analysis, and Optimization Issues for Large Space Structures

    NASA Technical Reports Server (NTRS)

    Pinson, L. D. (Compiler); Amos, A. K. (Compiler); Venkayya, V. B. (Compiler)

    1983-01-01

    Topics concerning the modeling, analysis, and optimization of large space structures are discussed including structure-control interaction, structural and structural dynamics modeling, thermal analysis, testing, and design.

  10. Utilizing computer models for optimizing classroom acoustics

    NASA Astrophysics Data System (ADS)

    Hinckley, Jennifer M.; Rosenberg, Carl J.

    2002-05-01

    The acoustical conditions in a classroom play an integral role in establishing an ideal learning environment. Speech intelligibility is dependent on many factors, including speech loudness, room finishes, and background noise levels. The goal of this investigation was to use computer modeling techniques to study the effect of acoustical conditions on speech intelligibility in a classroom. This study focused on a simulated classroom which was generated using the CATT-acoustic computer modeling program. The computer was utilized as an analytical tool in an effort to optimize speech intelligibility in a typical classroom environment. The factors that were focused on were reverberation time, location of absorptive materials, and background noise levels. Speech intelligibility was measured with the Rapid Speech Transmission Index (RASTI) method.

  11. Optimal evolution models for quantum tomography

    NASA Astrophysics Data System (ADS)

    Czerwiński, Artur

    2016-02-01

    The research presented in this article concerns the stroboscopic approach to quantum tomography, an area of science where quantum physics and linear algebra overlap. In this article we introduce the algebraic structure of the parameter-dependent quantum channels for 2-level and 3-level systems such that the generator of evolution corresponding with the Kraus operators has no degenerate eigenvalues. In such cases the index of cyclicity of the generator is equal to 1, which physically means that there exists one observable whose measurement, performed a sufficient number of times at distinct instants, provides enough data to reconstruct the initial density matrix and, consequently, the trajectory of the state. The necessary conditions for the parameters and relations between them are introduced. The results presented in this paper seem to have considerable potential applications in experiments due to the fact that one can perform quantum tomography by conducting only one kind of measurement. Therefore, the analyzed evolution models can be considered optimal in the context of quantum tomography. Finally, we introduce some remarks concerning optimal evolution models in the case of an n-dimensional Hilbert space.

  12. Optimal Combining Data for Improving Ocean Modeling

    DTIC Science & Technology

    2011-09-30

    of regional circulation models for accurate estimating the upper ocean velocity field, subsurface thermohaline structure, and mixing characteristics...high resolution circulation model - Incorporating subgrid Lagrangian models identified via drifter data into circulation models for improving...velocity field obtained from a realistic circulation model. 2. Constructing and testing fusion algorithms for combining glider observations with

  13. Application of simulation models for the optimization of business processes

    NASA Astrophysics Data System (ADS)

    Jašek, Roman; Sedláček, Michal; Chramcov, Bronislav; Dvořák, Jiří

    2016-06-01

    The paper deals with applications of modeling and simulation tools to the optimization of business processes, especially to the optimization of signal flow in a security company. Simul8, software for process modeling based on discrete event simulation that enables the creation of a visual model of production and distribution processes, was selected as the modeling tool.

  14. Differentiating a Finite Element Biodegradation Simulation Model for Optimal Control

    NASA Astrophysics Data System (ADS)

    Minsker, Barbara S.; Shoemaker, Christine A.

    1996-01-01

    An optimal control model for improving the design of in situ bioremediation of groundwater has been developed. The model uses a finite element biodegradation simulation model called Bio2D to find optimal pumping strategies. Analytical derivatives of the bioremediation finite element model are derived; these derivatives must be computed for the optimal control algorithm. The derivatives are complex and nonlinear; the bulk of the computational effort in solving the optimal control problem is required to calculate the derivatives. An overview of the optimal control and simulation model formulations is also given.

  15. Optimization Models for Scheduling of Jobs.

    PubMed

    Indika, S H Sathish; Shier, Douglas R

    2006-01-01

    This work is motivated by a particular scheduling problem that is faced by logistics centers that perform aircraft maintenance and modification. Here we concentrate on a single facility (hangar) which is equipped with several work stations (bays). Specifically, a number of jobs have already been scheduled for processing at the facility; the starting times, durations, and work station assignments for these jobs are assumed to be known. We are interested in how best to schedule a number of new jobs that the facility will be processing in the near future. We first develop a mixed integer quadratic programming model (MIQP) for this problem. Since the exact solution of this MIQP formulation is time consuming, we develop a heuristic procedure, based on existing bin packing techniques. This heuristic is further enhanced by application of certain local optimality conditions.
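
    A sketch of a bin-packing-flavored heuristic of the kind described, with illustrative data structures (the authors' actual procedure and local-optimality refinements are not reproduced here): each new job, taken in decreasing-duration order, is placed at the earliest time some bay is free for the whole duration:

```python
# First-fit-decreasing style scheduling of new jobs into bays that already
# have fixed bookings. All data are illustrative.

stations = {                       # existing bookings: (start, end) per bay
    "bay1": [(0, 4), (6, 9)],
    "bay2": [(2, 5)],
}

def earliest_slot(busy, duration, horizon=100):
    """First gap of at least `duration` between sorted busy intervals."""
    t = 0
    for start, end in sorted(busy):
        if start - t >= duration:
            return t
        t = max(t, end)
    return t if horizon - t >= duration else None

def schedule(jobs):
    plan = {}
    # Decreasing-duration order, as in first-fit-decreasing bin packing.
    for job, duration in sorted(jobs.items(), key=lambda kv: -kv[1]):
        start, station = min((earliest_slot(busy, duration), name)
                             for name, busy in stations.items()
                             if earliest_slot(busy, duration) is not None)
        stations[station].append((start, start + duration))
        plan[job] = (station, start)
    return plan

print(schedule({"jobA": 3, "jobB": 2, "jobC": 5}))
```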

  16. Optimization Models for Scheduling of Jobs

    PubMed Central

    Indika, S. H. Sathish; Shier, Douglas R.

    2006-01-01

    This work is motivated by a particular scheduling problem that is faced by logistics centers that perform aircraft maintenance and modification. Here we concentrate on a single facility (hangar) which is equipped with several work stations (bays). Specifically, a number of jobs have already been scheduled for processing at the facility; the starting times, durations, and work station assignments for these jobs are assumed to be known. We are interested in how best to schedule a number of new jobs that the facility will be processing in the near future. We first develop a mixed integer quadratic programming model (MIQP) for this problem. Since the exact solution of this MIQP formulation is time consuming, we develop a heuristic procedure, based on existing bin packing techniques. This heuristic is further enhanced by application of certain local optimality conditions. PMID:27274921

  17. Printer model inversion by constrained optimization

    NASA Astrophysics Data System (ADS)

    Cholewo, Tomasz J.

    1999-12-01

    This paper describes a novel method, based on constrained optimization, for finding colorant amounts for which a printer will produce a requested color appearance. An error function defines the gamut mapping method and the black replacement method. The constraints limit the feasible solution region to the device gamut and prevent exceeding the maximum total area coverage. Colorant values corresponding to in-gamut colors are found with precision limited only by the accuracy of the device model. Out-of-gamut colors are mapped to colors within the boundary of the device gamut. This general approach, used in conjunction with different types of color difference equations, can perform a wide range of out-of-gamut mappings, such as chroma clipping, or can find colors on the gamut boundary having specified properties. We present an application of this method to the creation of PostScript color rendering dictionaries and ICC profiles.
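
    A hedged sketch of the inversion idea: given some forward device model (a placeholder linear one below, not a real printer characterization), search for the colorant amounts that minimize a color difference subject to ink limits and a total-area-coverage cap:

```python
# Inverting a forward printer model by constrained optimization: find CMYK
# amounts whose predicted color is closest to a target. Forward model and
# numbers are placeholders for illustration.
import numpy as np
from scipy.optimize import minimize

M = np.array([[-0.9, -0.1, -0.1, -0.3],    # toy linear "printer model":
              [-0.1, -0.8, -0.2, -0.3],    # predicted Lab offset per colorant
              [-0.1, -0.2, -0.9, -0.3]])
paper_lab = np.array([95.0, 0.0, 0.0])

def forward(cmyk):
    return paper_lab + 60 * M @ cmyk        # placeholder device model

target = np.array([50.0, 10.0, -20.0])

def error(cmyk):                             # squared Lab distance (~Delta E^2)
    return np.sum((forward(cmyk) - target)**2)

cons = [{"type": "ineq", "fun": lambda c: 3.0 - c.sum()}]  # total area coverage
res = minimize(error, x0=np.full(4, 0.5), bounds=[(0, 1)] * 4,
               constraints=cons, method="SLSQP")
print(res.x, np.sqrt(res.fun))               # colorants and residual Delta E
```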

  18. Determining Reduced Order Models for Optimal Stochastic Reduced Order Models

    SciTech Connect

    Bonney, Matthew S.; Brake, Matthew R.W.

    2015-08-01

    The use of parameterized reduced order models (PROMs) within the stochastic reduced order model (SROM) framework is a logical progression for both methods. In this report, five different parameterized reduced order models are selected and critiqued against one another and against the truth model, for the example of the Brake-Reuss beam. The models are: a Taylor series using finite differences, a proper orthogonal decomposition of the output, a Craig-Bampton representation of the model, a method that uses hyper-dual numbers to determine the sensitivities, and a meta-model method that uses the hyper-dual results and constructs a polynomial curve to better represent the output data. The methods are compared against a parameter sweep and a distribution propagation, where the first four statistical moments are used as a comparison. Each method produces very accurate results, with the Craig-Bampton reduction having the least accurate results. The models are also compared based on the time required to evaluate each model, where the meta-model requires the least amount of time for computation, by a significant amount. Each of the five models provided accurate results in a reasonable time frame. The determination of which model to use is dependent on the availability of the high-fidelity model and how many evaluations can be performed. The output distribution is examined by using a large Monte-Carlo simulation along with a reduced simulation using Latin hypercube sampling and the stochastic reduced order model sampling technique. Both techniques produced accurate results. The stochastic reduced order modeling technique produced less error when compared to exhaustive sampling for the majority of methods.
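
    A minimal sketch of one of the five parameterizations mentioned, a proper orthogonal decomposition of the output: collect output snapshots, take an SVD, and keep the modes that carry most of the variance (synthetic data; the Brake-Reuss beam model itself is not reproduced):

```python
# Proper orthogonal decomposition (POD) of output snapshots via the SVD,
# keeping the modes that capture 99% of the snapshot energy.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)
# Synthetic snapshot matrix: 50 outputs observed over 200 parameter samples.
snapshots = (np.outer(np.sin(8 * np.linspace(0, 1, 50)), np.cos(3 * t))
             + 0.1 * np.outer(np.cos(20 * np.linspace(0, 1, 50)), t)
             + 0.01 * rng.standard_normal((50, 200)))

U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.99)) + 1   # modes for 99% of the energy
basis = U[:, :r]                              # reduced-order basis
reduced = basis.T @ snapshots                 # low-dimensional coordinates
print(f"kept {r} of {len(s)} modes;",
      "reconstruction error:",
      np.linalg.norm(snapshots - basis @ reduced) / np.linalg.norm(snapshots))
```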

  19. Modeling and Optimizing RF Multipole Ion Traps

    NASA Astrophysics Data System (ADS)

    Fanghaenel, Sven; Asvany, Oskar; Schlemmer, Stephan

    2016-06-01

    Radio frequency (rf) ion traps are very well suited for spectroscopy experiments thanks to the long-term storage of the species of interest in a well-defined volume. The electrical potential of the ion trap is determined by the geometry of its electrodes and the applied voltages. In order to understand the behavior of trapped ions in realistic multipole traps it is necessary to characterize these trapping potentials. Commercial programs like SIMION or COMSOL, employing the finite difference and/or finite element method, are often used to model the electrical fields of the trap in order to design traps for various purposes, e.g. introducing light from a laser into the trap volume. For controlled trapping of ions, e.g. for low-temperature trapping, the time-dependent electrical fields need to be known to high accuracy, especially at the minimum of the effective (mechanical) potential. The commercial programs are not optimized for these applications and suffer from a number of limitations. Therefore, in our approach the boundary element method (BEM) has been employed in home-built programs to generate numerical solutions for real trap geometries, e.g. from CAD drawings. In addition, the resulting fields are described by appropriate multipole expansions. As a consequence, the quality of a trap can be characterized by a small set of multipole parameters, which are used to optimize the trap design. In this presentation a few example calculations will be discussed. In particular, the accuracy of the method and the benefits of describing the trapping potentials via multipole expansions will be illustrated. As one important application, heating effects of cold ions arising from non-ideal multipole fields can now be understood as a consequence of imperfect field configurations.

  20. Geomagnetic modeling by optimal recursive filtering

    NASA Technical Reports Server (NTRS)

    Gibbs, B. P.; Estes, R. H.

    1981-01-01

    The results of a preliminary study to determine the feasibility of using Kalman filter techniques for geomagnetic field modeling are given. Specifically, five separate field models were computed using observatory annual means, satellite, survey, and airborne data for the years 1950 to 1976. Each of the individual field models used approximately five years of data. These five models were combined using a recursive information filter (a Kalman filter written in terms of information matrices rather than covariance matrices). The resulting estimate of the geomagnetic field and its secular variation was propagated four years past the data to the time of the MAGSAT data. The accuracy with which this field model matched the MAGSAT data was evaluated by comparisons with predictions from other pre-MAGSAT field models. The field estimate obtained by recursive estimation was found to be superior to all other models.
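
    The fusion step of an information filter is compact enough to show directly; a sketch with synthetic numbers (the actual field models had far more coefficients): each epoch model contributes an information matrix and vector, and the combined estimate is recovered from their sums:

```python
# Information-filter fusion: sum the information matrices (inverse
# covariances) and information vectors of several independent estimates,
# then solve for the combined estimate. Numbers are illustrative.
import numpy as np

rng = np.random.default_rng(2)
true_coeffs = np.array([1.0, -0.5, 0.25])      # stand-in for field coefficients

models = []
for _ in range(5):                              # five epoch models, as in text
    cov = np.diag(rng.uniform(0.05, 0.2, 3))
    est = rng.multivariate_normal(true_coeffs, cov)
    info_matrix = np.linalg.inv(cov)            # Lambda_i
    info_vector = info_matrix @ est             # Lambda_i @ x_i
    models.append((info_matrix, info_vector))

Lam = sum(m for m, _ in models)
vec = sum(v for _, v in models)
fused = np.linalg.solve(Lam, vec)               # combined estimate
print(fused)
```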

  1. Response Surface Model Building and Multidisciplinary Optimization Using D-Optimal Designs

    NASA Technical Reports Server (NTRS)

    Unal, Resit; Lepsch, Roger A.; McMillin, Mark L.

    1998-01-01

    This paper discusses response surface methods for approximation model building and multidisciplinary design optimization. The response surface methods discussed are central composite designs, Bayesian methods, and D-optimal designs. An over-determined D-optimal design is applied to a configuration design and optimization study of a wing-body launch vehicle. Results suggest that over-determined D-optimal designs may provide an efficient approach for approximation model building and for multidisciplinary design optimization.
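
    An illustrative greedy construction of a D-optimal-style design (a simplification for demonstration; production D-optimal algorithms typically use exchange methods such as Fedorov's): from a pool of candidate runs for a quadratic response-surface model, repeatedly add the candidate that most increases det(XᵀX):

```python
# Greedy selection of design points that maximize det(X^T X) for a
# quadratic response-surface model. Candidate pool is synthetic.
import numpy as np

rng = np.random.default_rng(3)
pool = rng.uniform(-1, 1, size=(200, 2))                # candidate settings
# Quadratic response-surface terms: 1, x1, x2, x1^2, x2^2, x1*x2.
F = np.column_stack([np.ones(200), pool, pool**2, pool[:, 0] * pool[:, 1]])

chosen = []
for _ in range(10):                                      # 10-run design
    best, best_det = None, -np.inf
    for i in range(len(F)):
        if i in chosen:
            continue
        X = F[chosen + [i]]
        # Rank candidates by slogdet of X^T X, with a tiny ridge for stability.
        sign, logdet = np.linalg.slogdet(X.T @ X + 1e-9 * np.eye(6))
        if logdet > best_det:
            best, best_det = i, logdet
    chosen.append(best)
print(pool[chosen])                                      # selected design points
```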

  2. Integrative systems modeling and multi-objective optimization

    EPA Science Inventory

    This presentation presents a number of algorithms, tools, and methods for utilizing multi-objective optimization within integrated systems modeling frameworks. We first present innovative methods using a genetic algorithm to optimally calibrate the VELMA and SWAT ecohydrological ...

  3. The Sandpile Model: Optimal Stress and Hormesis

    PubMed Central

    Stark, Martha

    2011-01-01

    The sandpile model (developed by chaos theorists) is an elegant visual metaphor for the cumulative impact of environmental stressors on complex adaptive systems – an impact that is paradoxical by virtue of the fact that the grains of sand being steadily added to the gradually evolving sandpile are the occasion for both its disruption and its repair. As a result, complex adaptive systems are continuously refashioning themselves at ever-higher levels of complexity and integration – not just in spite of “stressful” input from the outside but by way of it. Stressful input is therefore inherently neither bad (“poison”) nor good (“medication”). Rather, it will be how well the system (be it sandpile or living system) is able to process, integrate, and adapt to the stressful input that will make of it either a growth-disrupting (sandpile-destabilizing) event or a growth-promoting (sandpile-restabilizing) opportunity. Too much stress – “traumatic stress” – will be too overwhelming for the system to manage, triggering instead devastating breakdown. Too little stress will provide too little impetus for transformation and growth, serving instead simply to reinforce the system’s status quo. But just the right amount of stress – “optimal stress” – will provoke recovery by activating the system’s innate capacity to heal itself. PMID:22423229
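
    The metaphor above refers to the Bak-Tang-Wiesenfeld sandpile from the self-organized-criticality literature; a tiny simulation makes the disruption-and-repair dynamic concrete (grid size, threshold, and grain count below are arbitrary choices of ours):

```python
# Bak-Tang-Wiesenfeld sandpile: grains are added one at a time; sites that
# exceed a threshold topple, redistributing grains to neighbors, sometimes
# in long avalanches.
import numpy as np

rng = np.random.default_rng(4)
grid = np.zeros((20, 20), dtype=int)

def add_grain(grid):
    i, j = rng.integers(0, 20, size=2)
    grid[i, j] += 1
    avalanche = 0
    while (grid >= 4).any():                 # threshold of 4 grains per site
        for i, j in zip(*np.where(grid >= 4)):
            grid[i, j] -= 4
            avalanche += 1
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < 20 and 0 <= nj < 20:
                    grid[ni, nj] += 1        # grains off the edge are lost
    return avalanche

sizes = [add_grain(grid) for _ in range(20000)]
print("largest avalanche:", max(sizes))      # heavy-tailed avalanche sizes
```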

  4. A stochastic optimization model under modeling uncertainty and parameter certainty for groundwater remediation design--part I. Model development.

    PubMed

    He, L; Huang, G H; Lu, H W

    2010-04-15

    Solving groundwater remediation optimization problems based on proxy simulators can usually yield optimal solutions differing from the "true" ones of the problem. This study presents a new stochastic optimization model under modeling uncertainty and parameter certainty (SOMUM) and the associated solution method for simultaneously addressing modeling uncertainty associated with simulator residuals and optimizing groundwater remediation processes. This is a new attempt different from the previous modeling efforts. The previous ones focused on addressing uncertainty in physical parameters (i.e. soil porosity) while this one aims to deal with uncertainty in mathematical simulator (arising from model residuals). Compared to the existing modeling approaches (i.e. only parameter uncertainty is considered), the model has the advantages of providing mean-variance analysis for contaminant concentrations, mitigating the effects of modeling uncertainties on optimal remediation strategies, offering confidence level of optimal remediation strategies to system designers, and reducing computational cost in optimization processes.

  5. Improved Propulsion Modeling for Low-Thrust Trajectory Optimization

    NASA Technical Reports Server (NTRS)

    Knittel, Jeremy M.; Englander, Jacob A.; Ozimek, Martin T.; Atchison, Justin A.; Gould, Julian J.

    2017-01-01

    Low-thrust trajectory design is tightly coupled with spacecraft systems design. In particular, the propulsion and power characteristics of a low-thrust spacecraft are major drivers in the design of the optimal trajectory. Accurate modeling of the power and propulsion behavior is essential for meaningful low-thrust trajectory optimization. In this work, we discuss new techniques to improve the accuracy of propulsion modeling in low-thrust trajectory optimization while maintaining the smooth derivatives that are necessary for a gradient-based optimizer. The resulting model is significantly more realistic than the industry standard and performs well inside an optimizer. A variety of deep-space trajectory examples are presented.

  6. Optimal estimator model for human spatial orientation

    NASA Technical Reports Server (NTRS)

    Borah, J.; Young, L. R.; Curry, R. E.

    1979-01-01

    A model is being developed to predict pilot dynamic spatial orientation in response to multisensory stimuli. Motion stimuli are first processed by dynamic models of the visual, vestibular, tactile, and proprioceptive sensors. Central nervous system function is then modeled as a steady-state Kalman filter which blends information from the various sensors to form an estimate of spatial orientation. Where necessary, this linear central estimator has been augmented with nonlinear elements to reflect more accurately some highly nonlinear human response characteristics. Computer implementation of the model has shown agreement with several important qualitative characteristics of human spatial orientation, and it is felt that with further modification and additional experimental data the model can be improved and extended. Possible means are described for extending the model to better represent the active pilot with varying skill and work load levels.
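
    A minimal steady-state Kalman filter of the kind used as the central estimator here, with illustrative single-sensor dynamics rather than the model's actual visual/vestibular sensor models: iterate the Riccati recursion until the gain converges, then run the fixed-gain filter:

```python
# Steady-state Kalman filter: converge the Riccati recursion to a fixed
# gain, then blend noisy measurements with model predictions.
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])     # simple kinematic state model
H = np.array([[1.0, 0.0]])                  # sensor observes position only
Q = np.diag([1e-4, 1e-3])                   # process noise
R = np.array([[0.05]])                      # measurement noise

P = np.eye(2)
for _ in range(500):                        # Riccati iteration to steady state
    P = A @ P @ A.T + Q
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    P = (np.eye(2) - K @ H) @ P
print("steady-state gain:", K.ravel())

# Apply the fixed-gain filter to a synthetic measurement stream.
rng = np.random.default_rng(5)
x_est = np.zeros(2)
for k in range(100):
    z = np.sin(0.1 * k) + rng.normal(0, 0.2)         # noisy measurement
    x_pred = A @ x_est
    x_est = x_pred + K @ (np.array([z]) - H @ x_pred)
print("final estimate:", x_est)
```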

  7. Visual prosthesis wireless energy transfer system optimal modeling

    PubMed Central

    2014-01-01

    Background: Wireless energy transfer systems are an effective way to solve the visual prosthesis energy supply problem, and theoretical modeling of the system is the prerequisite for optimal energy transfer system design. Methods: On the basis of the ideal model of the wireless energy transfer system, the system model is optimized for the visual prosthesis application. In the optimized model, planar spiral coils are taken as the coupling devices between the energy transmitter and receiver, the effect of the parasitic capacitance of the transfer coil is considered, and the concept of biological capacitance is proposed to account for the influence of biological tissue on the energy transfer efficiency, making the optimized model more accurate for the actual application. Results: The simulation data of the optimized model are compared with those of the previous ideal model; the results show that under high-frequency conditions, the parasitic capacitance of the inductance and the biological capacitance considered in the optimized model can have a great impact on the wireless energy transfer system. Further comparison with experimental data verifies the validity and accuracy of the optimized model. Conclusions: The optimized model proposed in this paper has high theoretical guiding significance for further research on wireless energy transfer systems, and provides a more precise model reference for solving the power supply problem in visual prosthesis clinical applications. PMID:24428906

  8. A MILP-Model for the Optimization of Transports

    NASA Astrophysics Data System (ADS)

    Björk, Kaj-Mikael

    2010-09-01

    This paper presents work on developing a mathematical model for the optimization of transports. The decisions to be made are routing decisions, truck assignment, and the determination of the pickup order for a set of loads and available trucks; the model presented takes these aspects into account simultaneously. The MILP model is implemented in the Microsoft Excel environment, utilizing the LP-solve freeware as the optimization engine and Visual Basic for Applications as the modeling interface.

  9. Optimal Scaling of Interaction Effects in Generalized Linear Models

    ERIC Educational Resources Information Center

    van Rosmalen, Joost; Koning, Alex J.; Groenen, Patrick J. F.

    2009-01-01

    Multiplicative interaction models, such as Goodman's (1981) RC(M) association models, can be a useful tool for analyzing the content of interaction effects. However, most models for interaction effects are suitable only for data sets with two or three predictor variables. Here, we discuss an optimal scaling model for analyzing the content of…

  10. Optimal Combining Data for Improving Ocean Modeling

    DTIC Science & Technology

    2012-09-30

    regional circulation models for accurate estimating the upper ocean velocity field, subsurface thermohaline structure, and mixing characteristics (2...data fusion in the framework of twin experiments with a high resolution circulation model and on real data - Combining radar data with tracer... thermohaline patterns and, second, separating space and time variability in glider observations for fast changing thermohaline structures (etc mesoscale fronts

  11. On Optimal Input Design and Model Selection for Communication Channels

    SciTech Connect

    Li, Yanyan; Djouadi, Seddik M; Olama, Mohammed M

    2013-01-01

    In this paper, the optimal model (structure) selection and input design which minimize the worst-case identification error for communication systems are provided. The problem is formulated using metric complexity theory in a Hilbert space setting. It is pointed out that model selection and input design can be handled independently. The Kolmogorov n-width is used to characterize the representation error introduced by model selection, while the Gel'fand and time n-widths are used to represent the inherent error introduced by input design. After the model is selected, an optimal input which minimizes the worst-case identification error is shown to exist. In particular, it is proven that the optimal model for reducing the representation error is a Finite Impulse Response (FIR) model, and the optimal input is an impulse at the start of the observation interval. FIR models are widely popular in communication systems, for example in Orthogonal Frequency Division Multiplexing (OFDM) systems.

  12. Stochastic Robust Mathematical Programming Model for Power System Optimization

    SciTech Connect

    Liu, Cong; Changhyeok, Lee; Haoyong, Chen; Mehrotra, Sanjay

    2016-01-01

    This paper presents a stochastic robust framework for two-stage power system optimization problems with uncertainty. The model optimizes the probabilistic expectation of different worst-case scenarios with different uncertainty sets. A case study of unit commitment shows the effectiveness of the proposed model and algorithms.

  13. Model Specification Searches Using Ant Colony Optimization Algorithms

    ERIC Educational Resources Information Center

    Marcoulides, George A.; Drezner, Zvi

    2003-01-01

    Ant colony optimization is a recently proposed heuristic procedure inspired by the behavior of real ants. This article applies the procedure to model specification searches in structural equation modeling and reports the results. The results demonstrate the capabilities of ant colony optimization algorithms for conducting automated searches.

  14. Model and method for optimizing heterogeneous systems

    NASA Astrophysics Data System (ADS)

    Antamoshkin, O. A.; Antamoshkina, O. A.; Zelenkov, P. V.; Kovalev, I. V.

    2016-11-01

    A methodology for boosting distributed computing performance by reducing the number of delays is proposed. The concept of an n-dimensional requirements triangle is introduced, and a dynamic mathematical model of resource use in distributed computing systems is described.

  15. Stability and optimization in structured population models on graphs.

    PubMed

    Colombo, Rinaldo M; Garavello, Mauro

    2015-04-01

    We prove existence and uniqueness of solutions, continuous dependence on the initial datum, and stability with respect to the boundary condition in a class of initial-boundary value problems for systems of balance laws. The particular choice of boundary condition allows the framework to encompass models with very different structures. In particular, we consider a juvenile-adult model, the problem of the optimal mating ratio, and a model for the optimal management of biological resources. The stability result obtained allows us to tackle various optimal management/control problems, providing sufficient conditions for the existence of optimal choices/controls.

  16. Multipurpose optimization models for high level waste vitrification

    SciTech Connect

    Hoza, M.

    1994-08-01

    Optimal Waste Loading (OWL) models have been developed as multipurpose tools for high-level waste studies for the Tank Waste Remediation Program at Hanford. Using nonlinear programming techniques, these models maximize the waste loading of the vitrified waste and optimize the glass-former composition such that the glass produced has the appropriate properties within the melter, and the resultant vitrified waste form meets the requirements for disposal. The OWL model can be used for a single waste stream or for blended streams. The models can determine optimal continuous blends or optimal discrete blends of a number of different wastes. The OWL models have been used to identify the most restrictive constraints, to evaluate prospective waste pretreatment methods, to formulate and evaluate blending strategies, and to determine the impacts of variability in the wastes. The OWL models will be used to aid in the design of frits and to maximize the waste loading of the glass for High-Level Waste (HLW) vitrification.
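
    A minimal sketch of the kind of nonlinear program the OWL models solve, with invented stand-in property constraints (the study's actual glass-property models are not reproduced here): maximize the waste fraction subject to bounds that stand in for melter and durability limits. All names, coefficients, and limits are illustrative assumptions.

      import numpy as np
      from scipy.optimize import minimize

      # Decision vector: mass fractions [waste, SiO2, B2O3]; maximize the waste fraction
      def neg_waste(x):
          return -x[0]

      # Stand-in property models: a linear "viscosity" index and a "durability" index
      constraints = [
          {"type": "eq", "fun": lambda x: x.sum() - 1.0},                    # fractions sum to 1
          {"type": "ineq", "fun": lambda x: 0.6 - (0.5*x[0] + 0.2*x[1])},    # viscosity index <= 0.6
          {"type": "ineq", "fun": lambda x: (0.3*x[1] + 0.5*x[2]) - 0.25},   # durability index >= 0.25
      ]
      x0 = np.array([0.3, 0.5, 0.2])
      res = minimize(neg_waste, x0, bounds=[(0, 1)] * 3, constraints=constraints)
      print("optimal fractions:", np.round(res.x, 3), "waste loading:", round(res.x[0], 3))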

  17. Process Model Construction and Optimization Using Statistical Experimental Design,

    DTIC Science & Technology

    1988-04-01

    Memo No. 88-442, March 1988. "Process Model Construction and Optimization Using Statistical Experimental Design," by Emanuel Sachs, Assistant Professor, and George Prueger. Abstract: A methodology is presented for the construction of process models by the combination of physically based mechanistic ...

  18. Modeling to optimize terminal stem cell differentiation.

    PubMed

    Gallicano, G Ian

    2013-01-01

    Embryonic stem cells (ESCs), iPSCs, and adult stem cells (ASCs) are all among the most promising potential treatments for heart failure, spinal cord injury, neurodegenerative diseases, and diabetes. However, considerable uncertainty in the production of ESC-derived terminally differentiated cell types has limited the efficiency of their development. To address this uncertainty, we and other investigators have begun to employ a comprehensive statistical model of ESC differentiation for determining the role of intracellular pathways (e.g., STAT3) in ESC differentiation and determination of germ layer fate. The approach discussed here applies a Bayesian statistical model to cell/developmental biology, combining traditional flow cytometry methodology and specific morphological observations with advanced statistical and probabilistic modeling and experimental design. The final result of this study is a unique tool and model that enhances the understanding of how and when specific cell fates are determined during differentiation. This model provides a guideline for increasing the production efficiency of therapeutically viable ESC/iPSC/ASC-derived neurons or any other cell type and will eventually lead to advances in stem cell therapy.

  19. Modeling to Optimize Terminal Stem Cell Differentiation

    PubMed Central

    Gallicano, G. Ian

    2013-01-01

    Embryonic stem cells (ESCs), iPSCs, and adult stem cells (ASCs) are all among the most promising potential treatments for heart failure, spinal cord injury, neurodegenerative diseases, and diabetes. However, considerable uncertainty in the production of ESC-derived terminally differentiated cell types has limited the efficiency of their development. To address this uncertainty, we and other investigators have begun to employ a comprehensive statistical model of ESC differentiation for determining the role of intracellular pathways (e.g., STAT3) in ESC differentiation and determination of germ layer fate. The approach discussed here applies a Bayesian statistical model to cell/developmental biology, combining traditional flow cytometry methodology and specific morphological observations with advanced statistical and probabilistic modeling and experimental design. The final result of this study is a unique tool and model that enhances the understanding of how and when specific cell fates are determined during differentiation. This model provides a guideline for increasing the production efficiency of therapeutically viable ESC/iPSC/ASC-derived neurons or any other cell type and will eventually lead to advances in stem cell therapy. PMID:24278782

  20. COBRA-SFS modifications and cask model optimization

    SciTech Connect

    Rector, D.R.; Michener, T.E.

    1989-01-01

    Spent-fuel storage systems are complex systems and developing a computational model for one can be a difficult task. The COBRA-SFS computer code provides many capabilities for modeling the details of these systems, but these capabilities can also allow users to specify a more complex model than necessary. This report provides important guidance to users that dramatically reduces the size of the model while maintaining the accuracy of the calculation. A series of model optimization studies was performed, based on the TN-24P spent-fuel storage cask, to determine the optimal model geometry. Expanded modeling capabilities of the code are also described. These include adding fluid shear stress terms and a detailed plenum model. The mathematical models for each code modification are described, along with the associated verification results. 22 refs., 107 figs., 7 tabs.

  1. Optimizing glassy p-spin models.

    PubMed

    Thomas, Creighton K; Katzgraber, Helmut G

    2011-04-01

    Computing the ground state of Ising spin-glass models with p-spin interactions is, in general, an NP-hard problem. In this work we show that unlike in the case of the standard Ising spin glass with two-spin interactions, computing ground states with p=3 is an NP-hard problem even in two space dimensions. Furthermore, we present generic exact and heuristic algorithms for finding ground states of p-spin models with high confidence for systems of up to several thousand spins.
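
    A hedged illustration of heuristic ground-state search for a small 3-spin model (simulated annealing on random ±J triplets; the paper's exact and heuristic algorithms are more sophisticated). The system size, couplings, and cooling schedule are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(2)
      n, n_triplets = 24, 60
      # Random 3-spin couplings J_t over random triplets (i, j, k)
      triplets = np.array([rng.choice(n, 3, replace=False) for _ in range(n_triplets)])
      J = rng.choice([-1.0, 1.0], n_triplets)

      def energy(s):
          # H = -sum_t J_t * s_i s_j s_k
          return -np.sum(J * s[triplets[:, 0]] * s[triplets[:, 1]] * s[triplets[:, 2]])

      s = rng.choice([-1, 1], n)
      E = energy(s)
      best_E = E
      for T in np.geomspace(2.0, 0.01, 20000):   # geometric cooling schedule
          i = rng.integers(n)
          s[i] *= -1                             # propose a single-spin flip
          E_new = energy(s)
          if E_new <= E or rng.random() < np.exp((E - E_new) / T):
              E = E_new                          # accept (Metropolis rule)
              best_E = min(best_E, E)
          else:
              s[i] *= -1                         # reject the flip
      print("best energy found:", best_E)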

  2. Optimization Model for Reducing Emissions of Greenhouse ...

    EPA Pesticide Factsheets

    The EPA Vehicle Greenhouse Gas (VGHG) model is used to apply various technologies to a defined set of vehicles in order to meet a specified GHG emission target, and to then calculate the costs and benefits of doing so, facilitating the analysis of the costs and benefits of controlling GHG emissions from cars and trucks.

  3. Optimal Combining Data for Improving Ocean Modeling

    DTIC Science & Technology

    2013-09-30

    improving diagnosis and prediction of meso- and submesoscale processes in coastal frontal zones. Our theoretical findings in studying finite-size ... Submesoscale physical-biogeochemical coupling across the Ligurian Current (northwestern Mediterranean) using a bio-optical glider, Limnol. Oceanogr. ... Garraffo, and L. Piterbarg, 2012, Parameterization of Submesoscale Transport in the Gulf Stream Region Using Lagrangian Subgridscale Models, Ocean

  4. A Model for Optimal Constrained Adaptive Testing.

    ERIC Educational Resources Information Center

    van der Linden, Wim J.; Reese, Lynda M.

    1998-01-01

    Proposes a model for constrained computerized adaptive testing in which the information in the test at the trait level (theta) estimate is maximized subject to a number of possible constraints on the content of the test. Test assembly relies on a linear-programming approach. Illustrates the approach through simulation with items from the Law…
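
    A minimal sketch of linear-programming test assembly in the spirit of this approach, solved as a continuous relaxation with scipy (the actual model uses integer decision variables and far richer constraints). Item counts, information values, and the content category below are invented.

      import numpy as np
      from scipy.optimize import linprog

      rng = np.random.default_rng(3)
      n_items = 40
      info = rng.uniform(0.1, 1.0, n_items)          # item information at the target theta
      content = rng.integers(0, 2, n_items)          # 1 if the item belongs to content area A

      # Maximize total information: minimize -info @ x
      # subject to: exactly 10 items selected, at least 4 from content area A
      A_eq = [np.ones(n_items)]
      b_eq = [10]
      A_ub = [-content]                              # -sum(content * x) <= -4
      b_ub = [-4]
      res = linprog(-info, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                    bounds=[(0, 1)] * n_items)
      chosen = np.where(res.x > 0.5)[0]
      print("selected items (relaxed solution):", chosen)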

  5. Computational Model Optimization for Enzyme Design Applications

    DTIC Science & Technology

    2007-11-02

    naturally occurring E. coli chorismate mutase (EcCM) enzyme through computational design. Although the stated milestone of creating a novel... chorismate mutase (CM) was not achieved, the enhancement of the underlying computational model through the development of the two-body PB method will facilitate the future design of novel protein catalysts.

  6. Geomagnetic field modeling by optimal recursive filtering

    NASA Technical Reports Server (NTRS)

    1980-01-01

    Data sets selected for mini-batches and the software modifications required for processing these sets are described. Initial analysis was performed on minibatch field model recovery. Studies are being performed to examine the convergence of the solutions and the maximum expansion order the data will support in the constant and secular terms.

  7. Optimal Experimental Design for Model Discrimination

    ERIC Educational Resources Information Center

    Myung, Jay I.; Pitt, Mark A.

    2009-01-01

    Models of a psychological process can be difficult to discriminate experimentally because it is not easy to determine the values of the critical design variables (e.g., presentation schedule, stimulus structure) that will be most informative in differentiating them. Recent developments in sampling-based search methods in statistics make it…

  8. Design Optimization of Coronary Stent Based on Finite Element Models

    PubMed Central

    Qiu, Tianshuang; Zhu, Bao; Wu, Jinying

    2013-01-01

    This paper presents an effective optimization method using the Kriging surrogate model combined with modified rectangular grid sampling to reduce the stent dogboning effect in the expansion process. An infilling sampling criterion named expected improvement (EI) is used to balance local and global searches in the optimization iteration. Four commonly used finite element models of stent dilation were used to investigate the stent dogboning rate. Thrombosis models of three typical shapes were built to test the effectiveness of the optimization results. Numerical results show that the two finite element models dilated by pressure applied inside the balloon are suitable; the model that includes the artery and plaque gives an optimal stent with better expansion behavior, while the model without the artery and plaque is more efficient and requires less computation. PMID:24222743
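
    For reference, a hedged sketch of the expected improvement (EI) infill criterion named above, for minimization under a Gaussian surrogate prediction: EI(x) = (f_best - mu) * Phi(z) + sigma * phi(z) with z = (f_best - mu) / sigma. The candidate-point numbers below are illustrative.

      import numpy as np
      from scipy.stats import norm

      def expected_improvement(mu, sigma, f_best):
          # EI for minimization: expected improvement over f_best offered by
          # the surrogate prediction N(mu, sigma^2)
          sigma = np.maximum(sigma, 1e-12)
          z = (f_best - mu) / sigma
          return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

      # Example: three candidate points predicted by a Kriging model
      mu = np.array([1.2, 0.9, 1.0])
      sigma = np.array([0.05, 0.30, 0.60])
      print(expected_improvement(mu, sigma, f_best=1.0))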

  9. Simple model for predicting microchannel heat sink performance and optimization

    NASA Astrophysics Data System (ADS)

    Tsai, Tsung-Hsun; Chein, Reiyu

    2012-05-01

    A simple model was established to predict microchannel heat sink performance based on energy balance. Both hydrodynamically and thermally developed effects were included. Comparisons with the experimental data show that this model provides satisfactory thermal resistance prediction. The model is further extended to carry out geometric optimization of the microchannel heat sink. The results from the simple model are in good agreement with those obtained from three-dimensional simulations.

  10. First-Order Frameworks for Managing Models in Engineering Optimization

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia M.; Lewis, Robert Michael

    2000-01-01

    Approximation/model management optimization (AMMO) is a rigorous methodology for attaining solutions of high-fidelity optimization problems with minimal expense in high-fidelity function and derivative evaluation. First-order AMMO frameworks allow for a wide variety of models and underlying optimization algorithms. Recent demonstrations with aerodynamic optimization achieved three-fold savings in terms of high-fidelity function and derivative evaluation in the case of variable-resolution models and five-fold savings in the case of variable-fidelity physics models. The savings are problem dependent but certain trends are beginning to emerge. We give an overview of the first-order frameworks, current computational results, and an idea of the scope of the first-order framework applicability.

  11. Modeling, Instrumentation, Automation, and Optimization of Water Resource Recovery Facilities.

    PubMed

    Sweeney, Michael W; Kabouris, John C

    2016-10-01

    A review of the literature published in 2015 on topics relating to water resource recovery facilities (WRRF) in the areas of modeling, automation, measurement and sensors and optimization of wastewater treatment (or water resource reclamation) is presented.

  12. Research on web performance optimization principles and models

    NASA Astrophysics Data System (ADS)

    Wang, Xin

    2013-03-01

    The high-speed development of the Internet has made the problem of Web performance optimization increasingly prominent, and Web performance optimization therefore becomes inevitable. The first principle of Web performance optimization is understanding: every gain has a cost, and returns diminish; optimization should start from the highest level, where the biggest gains are obtained. Technical models for improving Web performance are: sharing costs, high-speed caching, profiles, parallel processing, and simplified processing. Based on this study, crucial Web performance optimization recommendations are given, which improve the performance of Web usage; accelerating the efficient use of the Internet has important significance.

  13. Reducing long-term remedial costs by transport modeling optimization.

    PubMed

    Becker, David; Minsker, Barbara; Greenwald, Robert; Zhang, Yan; Harre, Karla; Yager, Kathleen; Zheng, Chunmiao; Peralta, Richard

    2006-01-01

    The Department of Defense (DoD) Environmental Security Technology Certification Program and the Environmental Protection Agency sponsored a project to evaluate the benefits and utility of contaminant transport simulation-optimization algorithms against traditional (trial-and-error) modeling approaches. Three pump-and-treat facilities operated by the DoD were selected for inclusion in the project. Three optimization formulations were developed for each facility and solved independently by three modeling teams (two using simulation-optimization algorithms and one applying trial-and-error methods). The results clearly indicate that simulation-optimization methods are able to search a wider range of well locations and flow rates and identify better solutions than current trial-and-error approaches. The solutions found were 5% to 50% better than those obtained using trial-and-error (measured using optimal objective function values), with an average improvement of approximately 20%. This translated into potential savings ranging from $600,000 to $10,000,000 for the three sites. In nearly all cases, the cost savings easily outweighed the costs of the optimization. To reduce computational requirements, in some cases the simulation-optimization groups applied multiple mathematical algorithms, solved a series of modified subproblems, and/or fit "meta-models" such as neural networks or regression models to replace time-consuming simulation models in the optimization algorithm. The optimal solutions did not account for the uncertainties inherent in the modeling process. This project illustrates that transport simulation-optimization techniques are practical for real problems. However, applying the techniques in an efficient manner requires expertise and should involve iterative modification of the formulations based on interim results.

  14. Surrogate-Based Optimization of Biogeochemical Transport Models

    NASA Astrophysics Data System (ADS)

    Prieß, Malte; Slawig, Thomas

    2010-09-01

    First approaches towards a surrogate-based optimization method for a one-dimensional marine biogeochemical model of NPZD type are presented. The model, developed by Oschlies and Garcon [1], simulates the distribution of nitrogen, phytoplankton, zooplankton and detritus in a water column and is driven by ocean circulation data. A key issue is to minimize the misfit between the model output and given observational data. Our aim is to reduce the overall optimization cost, avoiding expensive function and derivative evaluations by using a surrogate model replacing the high-fidelity model in focus. This becomes particularly important for more complex three-dimensional models. We analyse a coarsening in the discretization of the model equations as one way to create such a surrogate. Here the numerical stability crucially depends upon the discrete stepsize in time and space and the biochemical terms. We show that for given model parameters the level of grid coarsening can be chosen accordingly, yielding a stable and satisfactory surrogate. As one example of a surrogate-based optimization method we present results of the Aggressive Space Mapping technique (developed by John W. Bandler [2, 3]) applied to the optimization of this one-dimensional biogeochemical transport model.

  15. An Optimality-Based Fully-Distributed Watershed Ecohydrological Model

    NASA Astrophysics Data System (ADS)

    Chen, L., Jr.

    2015-12-01

    Watershed ecohydrological models are essential tools to assess the impact of climate change and human activities on hydrological and ecological processes for watershed management. Existing models can be classified as empirically based, quasi-mechanistic, and mechanistic models. The empirically based and quasi-mechanistic models usually adopt empirical or quasi-empirical equations, which may be incapable of capturing non-stationary dynamics of target processes. Mechanistic models that are designed to represent process feedbacks may capture vegetation dynamics, but often have more demanding spatial and temporal parameterization requirements to represent vegetation physiological variables. In recent years, optimality-based ecohydrological models have been proposed, which have the advantage of reducing the need for model calibration by assuming critical aspects of system behavior. However, this work to date has been limited to the plot scale, considering only one-dimensional exchange of soil moisture, carbon and nutrients in vegetation parameterization, without lateral hydrological transport. Conceptual isolation of individual ecosystem patches from upslope and downslope flow paths compromises the ability to represent and test the relationships between hydrology and vegetation in mountainous and hilly terrain. This work presents an optimality-based watershed ecohydrological model, which incorporates the influence of lateral hydrological processes on the hydrological flow-path patterns that emerge from the optimality assumption. The model has been tested in the Walnut Gulch watershed and shows good agreement with observed temporal and spatial patterns of evapotranspiration (ET) and gross primary productivity (GPP). The spatial variability of ET and GPP produced by the model matches the spatial distribution of TWI, SCA, and slope well over the area. Compared with the one-dimensional vegetation optimality model (VOM), we find that the distributed VOM (DisVOM) produces more reasonable spatial

  16. Jet Pump Design Optimization by Multi-Surrogate Modeling

    NASA Astrophysics Data System (ADS)

    Mohan, S.; Samad, A.

    2014-09-01

    A basic approach to reducing design and optimization time via surrogate modeling is to select the right type of surrogate model for a particular problem, one with better accuracy and prediction capability. A multi-surrogate approach can protect a designer from selecting a wrong surrogate that has high uncertainty in the optimal zone of the design space. Numerical analysis and optimization of a jet pump via multi-surrogate modeling are reported in this work. Design variables including area ratio, mixing tube length-to-diameter ratio and setback ratio were introduced to increase the hydraulic efficiency of the jet pump. Reynolds-averaged Navier-Stokes equations were solved and responses were computed. Among the different surrogate models, the Shepard-function-based surrogate shows better accuracy in data fitting, while the radial basis neural network produced the highest efficiency enhancement. The efficiency enhancement was due to the reduction of losses in the flow passage.

  17. Jet Pump Design Optimization by Multi-Surrogate Modeling

    NASA Astrophysics Data System (ADS)

    Mohan, S.; Samad, A.

    2015-01-01

    A basic approach to reducing design and optimization time via surrogate modeling is to select the right type of surrogate model for a particular problem, one with better accuracy and prediction capability. A multi-surrogate approach can protect a designer from selecting a wrong surrogate that has high uncertainty in the optimal zone of the design space. Numerical analysis and optimization of a jet pump via multi-surrogate modeling are reported in this work. Design variables including area ratio, mixing tube length-to-diameter ratio and setback ratio were introduced to increase the hydraulic efficiency of the jet pump. Reynolds-averaged Navier-Stokes equations were solved and responses were computed. Among the different surrogate models, the Shepard-function-based surrogate shows better accuracy in data fitting, while the radial basis neural network produced the highest efficiency enhancement. The efficiency enhancement was due to the reduction of losses in the flow passage.
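
    A hedged sketch of the multi-surrogate idea: fit two cheap surrogates (inverse-distance Shepard interpolation and a Gaussian RBF) to the same samples and compare their cross-validation errors before trusting either in the optimal zone. The test function, sample counts, and kernel width are illustrative assumptions, not the paper's CFD setup.

      import numpy as np

      rng = np.random.default_rng(4)
      f = lambda x: np.sin(3 * x) + 0.5 * x          # stand-in for an expensive CFD response
      X = rng.uniform(0, 2, 12)                      # sampled design points
      y = f(X)

      def shepard(x, X, y, p=2):
          # Inverse-distance-weighted (Shepard) interpolation
          w = 1.0 / (np.abs(x - X) ** p + 1e-12)
          return np.sum(w * y) / np.sum(w)

      def rbf_fit(X, y, eps=2.0):
          # Gaussian radial basis function interpolation coefficients
          G = np.exp(-(eps * (X[:, None] - X[None, :])) ** 2)
          return np.linalg.solve(G, y)

      def rbf_eval(x, X, coef, eps=2.0):
          return np.sum(coef * np.exp(-(eps * (x - X)) ** 2))

      # Leave-one-out cross-validation error for each surrogate
      err_sh, err_rbf = 0.0, 0.0
      for i in range(len(X)):
          Xi, yi = np.delete(X, i), np.delete(y, i)
          err_sh += (shepard(X[i], Xi, yi) - y[i]) ** 2
          err_rbf += (rbf_eval(X[i], Xi, rbf_fit(Xi, yi)) - y[i]) ** 2
      print("LOO error  Shepard:", err_sh, " RBF:", err_rbf)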

  18. Portfolio optimization for index tracking modelling in Malaysia stock market

    NASA Astrophysics Data System (ADS)

    Siew, Lam Weng; Jaaman, Saiful Hafizah; Ismail, Hamizun

    2016-06-01

    Index tracking is an investment strategy in portfolio management which aims to construct an optimal portfolio that generates a mean return similar to that of the stock market index without purchasing all of the stocks that make up the index. The objective of this paper is to construct an optimal portfolio using an optimization model which adopts a regression approach to tracking the benchmark stock market index return. In this study, the data consist of weekly prices of the stocks in the Malaysia market index, the FTSE Bursa Malaysia Kuala Lumpur Composite Index, from January 2010 until December 2013. The results of this study show that the optimal portfolio is able to track the FBMKLCI Index with a minimum tracking error of 1.0027% and a 0.0290% excess mean return over the mean return of the FBMKLCI Index. The significance of this study is the construction of an optimal portfolio, using an optimization model which adopts a regression approach, that tracks the stock market index without purchasing all index components.
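
    A hedged sketch of regression-based index tracking on a small synthetic example (the study's actual data are FBMKLCI weekly prices, not reproduced here): choose nonnegative weights summing to one that minimize the squared deviation between portfolio and index returns. All data below are simulated.

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(5)
      T, n = 200, 6                                   # weeks, candidate stocks
      R = rng.normal(0.001, 0.02, (T, n))             # simulated weekly stock returns
      r_idx = R @ np.array([0.3, 0.3, 0.2, 0.1, 0.1, 0.0]) + rng.normal(0, 0.002, T)

      def tracking_error(w):
          return np.mean((R @ w - r_idx) ** 2)

      cons = [{"type": "eq", "fun": lambda w: w.sum() - 1.0}]   # fully invested
      res = minimize(tracking_error, np.ones(n) / n,
                     bounds=[(0, 1)] * n, constraints=cons)
      print("weights:", np.round(res.x, 3),
            "weekly tracking error:", np.sqrt(res.fun))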

  19. Review: Optimization methods for groundwater modeling and management

    NASA Astrophysics Data System (ADS)

    Yeh, William W.-G.

    2015-09-01

    Optimization methods have been used in groundwater modeling as well as for the planning and management of groundwater systems. This paper reviews and evaluates the various optimization methods that have been used for solving the inverse problem of parameter identification (estimation), experimental design, and groundwater planning and management. Various model selection criteria are discussed, as well as criteria used for model discrimination. The inverse problem of parameter identification concerns the optimal determination of model parameters using water-level observations. In general, the optimal experimental design seeks to find sampling strategies for the purpose of estimating the unknown model parameters. A typical objective of optimal conjunctive-use planning of surface water and groundwater is to minimize the operational costs of meeting water demand. The optimization methods include mathematical programming techniques such as linear programming, quadratic programming, dynamic programming, stochastic programming, nonlinear programming, and the global search algorithms such as genetic algorithms, simulated annealing, and tabu search. Emphasis is placed on groundwater flow problems as opposed to contaminant transport problems. A typical two-dimensional groundwater flow problem is used to explain the basic formulations and algorithms that have been used to solve the formulated optimization problems.

  20. Minimax D-Optimal Designs for Item Response Theory Models.

    ERIC Educational Resources Information Center

    Berger, Martijn P. F.; King, C. Y. Joy; Wong, Weng Kee

    2000-01-01

    Proposed minimax designs for item response theory (IRT) models to overcome the problem of local optimality. Compared minimax designs to sequentially constructed designs for the two parameter logistic model. Results show that minimax designs can be nearly as efficient as sequentially constructed designs. (Author/SLD)

  1. Model updating based on an affine scaling interior optimization algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Y. X.; Jia, C. X.; Li, Jian; Spencer, B. F.

    2013-11-01

    Finite element model updating is usually considered as an optimization process. Affine scaling interior algorithms are powerful optimization algorithms that have been developed over the past few years. A new finite element model updating method based on an affine scaling interior algorithm and a minimization of modal residuals is proposed in this article, and a general finite element model updating program is developed based on the proposed method. The performance of the proposed method is studied through numerical simulation and experimental investigation using the developed program. The results of the numerical simulation verified the validity of the method. Subsequently, the natural frequencies obtained experimentally from a three-dimensional truss model were used to update a finite element model using the developed program. After updating, the natural frequencies of the truss and finite element model matched well.

  2. Optimal Control In Predation Of Models And Mimics

    NASA Astrophysics Data System (ADS)

    Tsoularis, A.

    2007-09-01

    This paper examines optimal predation by a predator preying upon two types of prey, models and mimics. Models are unpalatable prey and mimics are palatable prey resembling the models so as to derive some protection from predation. This biological phenomenon is known in ecology as Batesian mimicry. An optimal control problem in continuous time is formulated with the sole objective of maximizing the net energetic benefit to the predator from predation in the presence of evolving prey populations. The constrained optimal control is bang-bang, with the scalar control taken as the probability of attacking prey. Conditions for the existence of singular controls are obtained.

  3. Optimal vaccination and treatment of an epidemic network model

    NASA Astrophysics Data System (ADS)

    Chen, Lijuan; Sun, Jitao

    2014-08-01

    In this Letter, we first propose an epidemic network model incorporating two controls, vaccination and treatment. For constant controls, global stability of the disease-free equilibrium and the endemic equilibrium of the model is investigated by using a Lyapunov function. For non-constant controls, we use an optimal control strategy to discuss how to minimize the total number of the infected and the cost associated with vaccination and treatment. Table 1 and Figs. 1-5 are presented to show the global stability and the efficiency of this optimal control.

  4. Abstract models for the synthesis of optimization algorithms.

    NASA Technical Reports Server (NTRS)

    Meyer, G. G. L.; Polak, E.

    1971-01-01

    Systematic approach to the problem of synthesis of optimization algorithms. Abstract models for algorithms are developed which guide the inventive process toward 'conceptual' algorithms that may consist of operations which are inadmissible in a practical method. Once the abstract models are established, a set of methods for converting 'conceptual' algorithms falling into the class defined by the abstract models into 'implementable' iterative procedures is presented.

  5. An aircraft noise pollution model for trajectory optimization

    NASA Technical Reports Server (NTRS)

    Barkana, A.; Cook, G.

    1976-01-01

    A mathematical model describing the generation of aircraft noise is developed with the ultimate purpose of reducing noise (noise-optimizing landing trajectories) in terminal areas. While the model is for a specific aircraft (Boeing 737), the methodology would be applicable to a wide variety of aircraft. The model is used to obtain a footprint on the ground inside of which the noise level is at or above 70 dB.

  6. Optimal schooling formations using a potential flow model

    NASA Astrophysics Data System (ADS)

    Tchieu, Andrew; Gazzola, Mattia; de Brauer, Alexia; Koumoutsakos, Petros

    2012-11-01

    A self-propelled, two-dimensional, potential flow model for agent-based swimmers is used to examine how fluid coupling affects schooling formation. The potential flow model accounts for fluid-mediated interactions between swimmers. The model is extended to include individual agent actions by means of modifying the circulation of each swimmer. A reinforcement algorithm is applied to allow the swimmers to learn how to school in specified lattice formations. Lastly, schooling lattice configurations are optimized by combining reinforcement learning and evolutionary optimization to minimize total control effort and energy expenditure.

  7. AN OPTIMAL MAINTENANCE MANAGEMENT MODEL FOR AIRPORT CONCRETE PAVEMENT

    NASA Astrophysics Data System (ADS)

    Shimomura, Taizo; Fujimori, Yuji; Kaito, Kiyoyuki; Obama, Kengo; Kobayashi, Kiyoshi

    In this paper, an optimal management model is formulated for the performance-based rehabilitation/maintenance contract for airport concrete pavement, whereby two types of life cycle cost risks, i.e., ground consolidation risk and concrete depreciation risk, are explicitly considered. A non-homogeneous Markov chain model is formulated to represent the deterioration processes of concrete pavement, which are conditional upon the ground consolidation processes. An optimal non-homogeneous Markov decision model with multiple types of risk is presented to design the optimal rehabilitation/maintenance plans, along with a methodology to revise those plans based upon monitoring data by Bayesian updating rules. The validity of the methodology presented in this paper is examined through case studies carried out for the H airport.

  8. General model for boring tool optimization

    NASA Astrophysics Data System (ADS)

    Moraru, G. M.; rbes, M. V. Ze; Popescu, L. G.

    2016-08-01

    Optimizing a tool (and therefore a boring tool) consists of improving its performance by maximizing the objective functions chosen by the designer and/or the user. Numerous features and performance requirements demanded by tool users contribute to defining and implementing the proposed objective functions. Incorporating new features makes the cutting tool competitive in the market and able to meet user requirements.

  9. Large-scale spherical fixed bed reactors: Modeling and optimization

    SciTech Connect

    Hartig, F.; Keil, F.J. )

    1993-03-01

    Iterative dynamic programming (IDP) according to Luus was used for the optimization of the methanol production in a cascade of spherical reactors. The system of three spherical reactors was compared to an externally cooled tubular reactor and a quench reactor. The reactors were modeled by the pseudohomogeneous and heterogeneous approach. The effectiveness factors of the heterogeneous model were calculated by the dusty gas model. The IDP method was compared with sequential quadratic programming (SQP) and the Box complex method. The optimized distributions of catalyst volume with the pseudohomogeneous and heterogeneous model lead to different results. The IDP method finds the global optimum with high probability. A combination of IDP and SQP provides a reliable optimization procedure that needs minimum computing time.

  10. Impulsive optimal control model for the trajectory of horizontal wells

    NASA Astrophysics Data System (ADS)

    Li, An; Feng, Enmin; Wang, Lei

    2009-01-01

    This paper presents an impulsive optimal control model for solving the optimal design problem of the trajectory of horizontal wells. We take fully into account the effect of unknown disturbances in drilling. The optimal control problem can be converted into a nonlinear parametric optimization by integrating the state equation. We show that the locally optimal solution depends in a continuous way on the parameters (disturbances) and utilize this property to propose a revised Hooke-Jeeves algorithm. The uniform design technique is incorporated into the revised Hooke-Jeeves algorithm to handle the multimodal objective function. The numerical simulation is in accordance with the theoretical results. The numerical results illustrate the validity of the model and the efficiency of the algorithm.
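
    For reference, a hedged sketch of the classic Hooke-Jeeves pattern search that the paper revises (exploratory moves around a base point followed by a pattern move; the paper's revision and uniform-design extension are not reproduced). The objective function below is illustrative.

      import numpy as np

      def hooke_jeeves(f, x0, step=0.5, shrink=0.5, tol=1e-6, max_iter=1000):
          """Classic Hooke-Jeeves pattern search (minimization)."""
          def explore(base, s):
              # Try +/- s along each coordinate, keeping any improvement
              x = base.copy()
              for i in range(len(x)):
                  for d in (s, -s):
                      trial = x.copy(); trial[i] += d
                      if f(trial) < f(x):
                          x = trial
                          break
              return x
          x = np.asarray(x0, float)
          for _ in range(max_iter):
              if step < tol:
                  break
              x_new = explore(x, step)
              if f(x_new) < f(x):
                  # Pattern move: jump along the successful direction, then re-explore
                  x_pat = explore(x_new + (x_new - x), step)
                  x = x_pat if f(x_pat) < f(x_new) else x_new
              else:
                  step *= shrink          # no improvement: shrink the step
          return x

      rosenbrock = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
      print(hooke_jeeves(rosenbrock, [-1.0, 1.0]))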

  11. Optimization of murine model for Besnoitia caprae.

    PubMed

    Oryan, A; Sadoughifar, R; Namavari, M

    2016-09-01

    It has been shown that mice, particularly BALB/c mice, are susceptible to infection by some of the apicomplexan parasites. To compare the susceptibility of inbred BALB/c, outbred BALB/c and C57BL/6 mice to Besnoitia caprae inoculation and to determine the LD50, 30 male inbred BALB/c, 30 outbred BALB/c and 30 C57BL/6 mice were assigned to 18 groups of 5 mice. Each group was inoculated intraperitoneally with 12.5 × 10^3, 25 × 10^3, 5 × 10^4, 1 × 10^5, or 2 × 10^5 tachyzoites, or a control inoculum of DMEM, respectively. The inbred BALB/c was found to be the most susceptible of the examined mouse strains; the LD50 per inbred BALB/c mouse was calculated as 12.5 × 10^3.6 tachyzoites, while the LD50 for the outbred BALB/c and C57BL/6 was 25 × 10^3.4 and 5 × 10^4 tachyzoites per mouse, respectively. To investigate the impact of different routes of inoculation in the most susceptible mouse strain, another seventy-five male inbred BALB/c mice were inoculated with 2 × 10^5 tachyzoites of B. caprae via various routes: subcutaneous, intramuscular, intraperitoneal, infraorbital and oral. All the mice in the oral and infraorbital groups survived for 60 days, whereas the IM group showed quicker death and more severe pathologic lesions, followed by the SC and IP groups. Therefore, the BALB/c mouse is a proper laboratory model and IM inoculation is an ideal method for inducing besnoitiosis and a candidate for treatment, prevention and vaccine efficacy testing for besnoitiosis.

  12. Multidisciplinary optimization in aircraft design using analytic technology models

    NASA Technical Reports Server (NTRS)

    Malone, Brett; Mason, W. H.

    1991-01-01

    An approach to multidisciplinary optimization is presented which combines the Global Sensitivity Equation method, parametric optimization, and analytic technology models. The result is a powerful yet simple procedure for identifying key design issues. It can be used both to investigate technology integration issues very early in the design cycle, and to establish the information flow framework between disciplines for use in multidisciplinary optimization projects using much more computationally intense representations of each technology. To illustrate the approach, an examination of the optimization of a short takeoff heavy transport aircraft is presented for numerous combinations of performance and technology constraints.

  13. Modeling urban air pollution with optimized hierarchical fuzzy inference system.

    PubMed

    Tashayo, Behnam; Alimohammadi, Abbas

    2016-10-01

    Environmental exposure assessments (EEA) and epidemiological studies require urban air pollution models with appropriate spatial and temporal resolutions. Uncertain available data and inflexible models can limit air pollution modeling techniques, particularly in developing countries. This paper develops a hierarchical fuzzy inference system (HFIS) to model air pollution under different land use, transportation, and meteorological conditions. To improve performance, the system treats the issue as a large-scale and high-dimensional problem and develops the proposed model using a three-step approach. In the first step, a geospatial information system (GIS) and probabilistic methods are used to preprocess the data. In the second step, a hierarchical structure is generated based on the problem. In the third step, the accuracy and complexity of the model are simultaneously optimized with a multiple objective particle swarm optimization (MOPSO) algorithm. We examine the capabilities of the proposed model for predicting daily and annual mean PM2.5 and NO2 and compare the accuracy of the results with representative models from the existing literature. The benefits provided by the model features, including probabilistic preprocessing, multi-objective optimization, and hierarchical structure, are precisely evaluated by comparing five different consecutive models in terms of accuracy and complexity criteria. Fivefold cross-validation is used to assess the performance of the generated models. The respective average RMSEs and coefficients of determination (R^2) for the test datasets using the proposed model are as follows: daily PM2.5 = (8.13, 0.78), annual mean PM2.5 = (4.96, 0.80), daily NO2 = (5.63, 0.79), and annual mean NO2 = (2.89, 0.83). The obtained results demonstrate that the developed hierarchical fuzzy inference system can be utilized for modeling air pollution in EEA and epidemiological studies.

  14. Block-oriented modeling of superstructure optimization problems

    SciTech Connect

    Friedman, Z; Ingalls, J; Siirola, JD; Watson, JP

    2013-10-15

    We present a novel software framework for modeling large-scale engineered systems as mathematical optimization problems. A key motivating feature of such systems is their hierarchical, highly structured topology. Existing mathematical optimization modeling environments do not facilitate the natural expression and manipulation of hierarchically structured systems. Rather, the modeler is forced to "flatten" the system description, hiding structure that may be exploited by solvers, and obfuscating the system that the modeling environment is attempting to represent. To correct this deficiency, we propose a Python-based "block-oriented" modeling approach for representing the discrete components within the system. Our approach is an extension of the Pyomo library for specifying mathematical optimization problems. Through the use of a modeling components library, the block-oriented approach facilitates a clean separation of system superstructure from the details of individual components. This approach also naturally lends itself to expressing design and operational decisions as disjunctive expressions over the component blocks. By expressing a mathematical optimization problem in a block-oriented manner, inherent structure (e.g., multiple scenarios) is preserved for potential exploitation by solvers. In particular, we show that block-structured mathematical optimization problems can be straightforwardly manipulated by decomposition-based multi-scenario algorithmic strategies, specifically in the context of the PySP stochastic programming library. We illustrate our block-oriented modeling approach using a case study drawn from the electricity grid operations domain: unit commitment with transmission switching and N-1 reliability constraints. Finally, we demonstrate that the overhead associated with block-oriented modeling only minimally increases model instantiation times, and need not adversely impact solver behavior.
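
    A hedged sketch of the block-oriented style in Pyomo, the library this framework extends (the unit-commitment case study is far richer; the two-generator dispatch problem, its data, and the GLPK solver choice below are illustrative assumptions):

      from pyomo.environ import (ConcreteModel, Block, Var, Constraint, Objective,
                                 NonNegativeReals, minimize, value, SolverFactory)

      GEN_DATA = {1: {"cost": 10.0, "cap": 50.0},   # hypothetical generator components
                  2: {"cost": 25.0, "cap": 80.0}}
      DEMAND = 100.0

      m = ConcreteModel()

      def generator_block(b, g):
          # Each component is a self-contained block: its own variables and limits
          d = GEN_DATA[g]
          b.p = Var(domain=NonNegativeReals, bounds=(0, d["cap"]))
          b.cost = d["cost"]

      m.gen = Block(list(GEN_DATA), rule=generator_block)

      # Superstructure-level coupling constraint and objective over the blocks
      m.balance = Constraint(expr=sum(m.gen[g].p for g in GEN_DATA) == DEMAND)
      m.total_cost = Objective(expr=sum(m.gen[g].cost * m.gen[g].p for g in GEN_DATA),
                               sense=minimize)

      SolverFactory("glpk").solve(m)            # assumes a GLPK installation
      print({g: value(m.gen[g].p) for g in GEN_DATA})

    The point of the block structure is that each generator's variables and data live inside its own block, while the superstructure only couples the blocks, which mirrors how the framework preserves structure for decomposition-based solvers.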

  15. Multiobjective muffler shape optimization with hybrid acoustics modeling.

    PubMed

    Airaksinen, Tuomas; Heikkola, Erkki

    2011-09-01

    This paper considers the combined use of a hybrid numerical method for the modeling of acoustic mufflers and a genetic algorithm for multiobjective optimization. The hybrid numerical method provides accurate modeling of sound propagation in uniform waveguides with non-uniform obstructions. It is based on coupling a wave-based modal solution in the uniform sections of the waveguide to a finite element solution in the non-uniform component. The finite element method provides flexible modeling of complicated geometries, varying material parameters, and boundary conditions, while the wave-based solution leads to accurate treatment of non-reflecting boundaries and straightforward computation of the transmission loss (TL) of the muffler. The goal of optimization is to maximize TL in multiple frequency ranges simultaneously by adjusting chosen shape parameters of the muffler. This task is formulated as a multiobjective optimization problem with the objectives depending on the solution of the simulation model. The NSGA-II genetic algorithm is used for solving the multiobjective optimization problem. Genetic algorithms can be easily combined with different simulation methods, and they are not sensitive to the smoothness properties of the objective functions. Numerical experiments demonstrate the accuracy and feasibility of the model-based optimization method in muffler design.

  16. Optimal control of information epidemics modeled as Maki Thompson rumors

    NASA Astrophysics Data System (ADS)

    Kandhway, Kundan; Kuri, Joy

    2014-12-01

    We model the spread of information in a homogeneously mixed population using the Maki Thompson rumor model. We formulate an optimal control problem, from the perspective of a single campaigner, to maximize the spread of information when the campaign budget is fixed. Control signals, such as advertising in the mass media, attempt to convert ignorants and stiflers into spreaders. We show the existence of a solution to the optimal control problem when the campaigning incurs non-linear costs under the isoperimetric budget constraint. The solution employs Pontryagin's Minimum Principle and a modified version of the forward-backward sweep technique for numerical computation to accommodate the isoperimetric budget constraint. The techniques developed in this paper are general and can be applied to similar optimal control problems in other areas. We have allowed the spreading rate of the information epidemic to vary over the campaign duration to model practical situations in which the interest level of the population in the subject of the campaign changes with time. The shape of the optimal control signal is studied for different model parameters and spreading rate profiles. We have also studied the variation of the optimal campaigning costs with respect to various model parameters. Results indicate that, for some model parameters, significant improvements can be achieved by the optimal strategy compared to the static control strategy. The static strategy respects the same budget constraint as the optimal strategy and has a constant value throughout the campaign horizon. This work finds application in election and social awareness campaigns, product advertising, movie promotion and crowdfunding campaigns.
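
    A hedged sketch of the forward-backward sweep idea on a deliberately simple scalar problem (not the Maki Thompson system or the isoperimetric constraint): minimize the integral of x^2 + c*u^2 subject to dx/dt = -x + u. The state is integrated forward, the costate backward, and the control updated from the Hamiltonian minimum condition u = -lambda/(2c). All constants are illustrative assumptions.

      import numpy as np

      T, N, c = 5.0, 500, 0.5
      dt = T / N
      x0 = 1.0
      u = np.zeros(N + 1)                       # initial control guess

      for sweep in range(100):
          # Forward sweep: integrate the state dx/dt = -x + u
          x = np.empty(N + 1); x[0] = x0
          for k in range(N):
              x[k + 1] = x[k] + dt * (-x[k] + u[k])
          # Backward sweep: costate dlam/dt = -(2x - lam), with lam(T) = 0
          lam = np.empty(N + 1); lam[-1] = 0.0
          for k in range(N, 0, -1):
              lam[k - 1] = lam[k] + dt * (2 * x[k] - lam[k])
          # Control update from the minimum condition of the Hamiltonian
          u_new = -lam / (2 * c)
          if np.max(np.abs(u_new - u)) < 1e-8:
              break
          u = 0.5 * u + 0.5 * u_new             # relaxation for stable convergence

      print("objective:", dt * np.sum(x**2 + c * u**2))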

  17. Optimization models for flight test scheduling

    NASA Astrophysics Data System (ADS)

    Holian, Derreck

    As threats around the world increase with nations developing new generations of warfare technology, the United States is keen on maintaining its position at the top of the defense technology curve. This in turn indicates that the U.S. military/government must research, develop, procure, and sustain new systems in the defense sector to safeguard this position. Currently, the Lockheed Martin F-35 Joint Strike Fighter (JSF) Lightning II is being developed, tested, and deployed to the U.S. military at Low Rate Initial Production (LRIP). The simultaneous act of testing and deployment is due to the contracted procurement process intended to provide a rapid Initial Operating Capability (IOC) release of the 5th Generation fighter. For this reason, many factors go into determining what is to be tested, in what order, and at which time, due to the military requirements. A certain system or envelope of the aircraft must be assessed prior to releasing that capability into service. The objective of this praxis is to aid in determining what testing can be achieved on an aircraft at a point in time. Furthermore, it defines the optimum allocation of test points to aircraft and determines a prioritization of restrictions to be mitigated so that the test program can be best supported. The system described in this praxis has been deployed across the F-35 test program and testing sites. It has discovered hundreds of available test points for an aircraft to fly when it was thought none existed, thus preventing an aircraft from being grounded. Additionally, it has saved hundreds of labor hours and greatly reduced the occurrence of test point reflight. Due to the proprietary nature of the JSF program, details regarding the actual test points, test plans, and all other program-specific information have not been presented. Generic, representative data are used for example and proof-of-concept purposes. Apart from the data correlation algorithms, the optimization associated

  18. A dynamic optimization model for solid waste recycling.

    PubMed

    Anghinolfi, Davide; Paolucci, Massimo; Robba, Michela; Taramasso, Angela Celeste

    2013-02-01

    Recycling is an important part of waste management (which involves environmental, technological, economic, legislative, social, and other issues). Unlike many works in the literature, this paper is focused on recycling management and on the dynamic optimization of materials collection. The developed dynamic decision model is characterized by state variables, corresponding to the quantity of waste in each bin per day, and control variables determining the quantity of material collected in the area each day and the routes for collecting vehicles. The objective function minimizes the sum of costs minus benefits. The developed decision model is integrated in a GIS-based Decision Support System (DSS). A case study related to the Cogoleto municipality is presented to show the effectiveness of the proposed model. From the optimal results, it has been found that the net benefits of the optimized collection are about 2.5 times greater than those of the estimated current policy.

  19. Analysis of Sting Balance Calibration Data Using Optimized Regression Models

    NASA Technical Reports Server (NTRS)

    Ulbrich, N.; Bader, Jon B.

    2010-01-01

    Calibration data of a wind tunnel sting balance were processed using a candidate math model search algorithm that recommends an optimized regression model for the data analysis. During the calibration, the normal force and the moment at the balance moment center were selected as independent calibration variables. The sting balance itself had two moment gages. Therefore, after analyzing the connection between calibration loads and gage outputs, it was decided to choose the difference and the sum of the gage outputs as the two responses that best describe the behavior of the balance. The math model search algorithm was applied to these two responses. An optimized regression model was obtained for each response. Classical strain gage balance load transformations and the equations of the deflection of a cantilever beam under load are used to show that the search algorithm's two optimized regression models are supported by a theoretical analysis of the relationship between the applied calibration loads and the measured gage outputs. The analysis of the sting balance calibration data set is a rare example of a situation where terms of a regression model of a balance can be derived directly from first principles of physics. In addition, it is interesting to note that the search algorithm recommended the correct regression model term combinations using only a set of statistical quality metrics that were applied to the experimental data during the algorithm's term selection process.

  20. Integer programming model for optimizing bus timetable using genetic algorithm

    NASA Astrophysics Data System (ADS)

    Wihartiko, F. D.; Buono, A.; Silalahi, B. P.

    2017-01-01

    A bus timetable gives passengers information that ensures the availability of bus services. The optimal timetable condition occurs when bus trip frequency can adapt to passenger demand. In peak time, the number of bus trips will be larger than in off-peak time. If bus trips are more frequent than the optimal condition, the bus operator incurs a high operating cost. Conversely, if the number of trips is less than the optimal condition, passengers receive poor-quality service. In this paper, the bus timetabling problem is solved by an integer programming model with a modified genetic algorithm. Modifications are placed in the chromosome design, the initial population recovery technique, chromosome reconstruction, and chromosome extermination in specific generations. The result of this model gives the optimal solution with accuracy 99.1%.
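
    A hedged sketch of the underlying idea, not the paper's modified operators: a chromosome encodes trip counts per period, and fitness penalizes both operating cost and unmet demand. The demand figures, costs, and GA settings below are invented.

      import numpy as np

      rng = np.random.default_rng(6)
      periods = 6
      demand = np.array([40, 120, 60, 50, 130, 30])   # passengers per period (invented)
      cap, trip_cost, unmet_penalty = 30, 1.0, 5.0

      def fitness(chrom):
          unmet = np.maximum(demand - cap * chrom, 0)
          return trip_cost * chrom.sum() + unmet_penalty * unmet.sum()

      pop = rng.integers(0, 8, (40, periods))          # trips per period, 0..7
      for gen in range(200):
          scores = np.array([fitness(c) for c in pop])
          pop = pop[np.argsort(scores)]                # elitist sort, best first
          children = []
          while len(children) < len(pop) // 2:
              p1, p2 = pop[rng.integers(10)], pop[rng.integers(10)]  # pick among elites
              cut = rng.integers(1, periods)
              child = np.concatenate([p1[:cut], p2[cut:]])           # one-point crossover
              if rng.random() < 0.3:                                 # mutation
                  child[rng.integers(periods)] = rng.integers(0, 8)
              children.append(child)
          pop[len(pop) // 2:] = children               # replace the worst half
      best = pop[np.argmin([fitness(c) for c in pop])]
      print("best timetable (trips/period):", best, "cost:", fitness(best))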

  1. Optimization of Operations Resources via Discrete Event Simulation Modeling

    NASA Technical Reports Server (NTRS)

    Joshi, B.; Morris, D.; White, N.; Unal, R.

    1996-01-01

    The resource levels required for operation and support of reusable launch vehicles are typically defined through discrete event simulation modeling. Minimizing these resources constitutes an optimization problem involving discrete variables and simulation. Conventional approaches to solve such optimization problems involving integer valued decision variables are the pattern search and statistical methods. However, in a simulation environment that is characterized by search spaces of unknown topology and stochastic measures, these optimization approaches often prove inadequate. In this paper, we have explored the applicability of genetic algorithms to the simulation domain. Genetic algorithms provide a robust search strategy that does not require continuity and differentiability of the problem domain. The genetic algorithm successfully minimized the operation and support activities for a space vehicle, through a discrete event simulation model. The practical issues associated with simulation optimization, such as stochastic variables and constraints, were also taken into consideration.

  2. An uncertain multidisciplinary design optimization method using interval convex models

    NASA Astrophysics Data System (ADS)

    Li, Fangyi; Luo, Zhen; Sun, Guangyong; Zhang, Nong

    2013-06-01

    This article proposes an uncertain multi-objective multidisciplinary design optimization methodology, which employs the interval model to represent the uncertainties of uncertain-but-bounded parameters. The interval number programming method is applied to transform each uncertain objective function into two deterministic objective functions, and a satisfaction degree of intervals is used to convert both the uncertain inequality and equality constraints to deterministic inequality constraints. In doing so, an unconstrained deterministic optimization problem will be constructed in association with the penalty function method. The design will be finally formulated as a nested three-loop optimization, a class of highly challenging problems in the area of engineering design optimization. An advanced hierarchical optimization scheme is developed to solve the proposed optimization problem based on the multidisciplinary feasible strategy, which is a well-studied method able to reduce the dimensions of multidisciplinary design optimization problems by using the design variables as independent optimization variables. In the hierarchical optimization system, the non-dominated sorting genetic algorithm II, sequential quadratic programming method and Gauss-Seidel iterative approach are applied to the outer, middle and inner loops of the optimization problem, respectively. Typical numerical examples are used to demonstrate the effectiveness of the proposed methodology.

  3. Spectral optimization and uncertainty quantification in combustion modeling

    NASA Astrophysics Data System (ADS)

    Sheen, David Allan

    Reliable simulations of reacting flow systems require a well-characterized, detailed chemical model as a foundation. Accuracy of such a model can be assured, in principle, by a multi-parameter optimization against a set of experimental data. However, the inherent uncertainties in the rate evaluations and experimental data leave a model still characterized by some finite kinetic rate parameter space. Without a careful analysis of how this uncertainty space propagates into the model's predictions, those predictions can at best be trusted only qualitatively. In this work, the Method of Uncertainty Minimization using Polynomial Chaos Expansions is proposed to quantify these uncertainties. In this method, the uncertainty in the rate parameters of the as-compiled model is quantified. Then, the model is subjected to a rigorous multi-parameter optimization, as well as a consistency-screening process. Lastly, the uncertainty of the optimized model is calculated using an inverse spectral optimization technique, and then propagated into a range of simulation conditions. An as-compiled, detailed H2/CO/C1-C4 kinetic model is combined with a set of ethylene combustion data to serve as an example. The idea that the hydrocarbon oxidation model should be understood and developed in a hierarchical fashion has been a major driving force in kinetics research for decades. How this hierarchical strategy works at a quantitative level, however, has never been addressed. In this work, we use ethylene and propane combustion as examples and explore the question of hierarchical model development quantitatively. The Method of Uncertainty Minimization using Polynomial Chaos Expansions is utilized to quantify the amount of information that a particular combustion experiment, and thereby each data set, contributes to the model. This knowledge is applied to explore the relationships among the combustion chemistry of hydrogen/carbon monoxide, ethylene, and larger alkanes. Frequently, new data will

  4. Optimal policies for a finite-horizon batching inventory model

    NASA Astrophysics Data System (ADS)

    Al-Khamis, Talal M.; Benkherouf, Lakdere; Omar, Mohamed

    2014-10-01

    This paper is concerned with finding an optimal inventory policy for the integrated replenishment-production batching model of Omar and Smith (2002). Here, a company produces a single finished product which requires a single raw material and the objective is to minimise the total inventory costs over a finite planning horizon. Earlier work in the literature considered models with linear demand rate function of the finished product. This work proposes a general methodology for finding an optimal inventory policy for general demand rate functions. The proposed methodology is adapted from the recent work of Benkherouf and Gilding (2009).

  5. Data visualization optimization via computational modeling of perception.

    PubMed

    Pineo, Daniel; Ware, Colin

    2012-02-01

    We present a method for automatically evaluating and optimizing visualizations using a computational model of human vision. The method relies on a neural network simulation of early perceptual processing in the retina and primary visual cortex. The neural activity resulting from viewing flow visualizations is simulated and evaluated to produce a metric of visualization effectiveness. Visualization optimization is achieved by applying this effectiveness metric as the utility function in a hill-climbing algorithm. We apply this method to the evaluation and optimization of 2D flow visualizations, using two visualization parameterizations: streaklet-based and pixel-based. An emergent property of the streaklet-based optimization is head-to-tail streaklet alignment. It had been previously hypothesized that the effectiveness of head-to-tail alignment results from the perceptual processing of the visual system, but this theory had not been computationally modeled. A second optimization using a pixel-based parameterization resulted in a LIC-like result. The implications in terms of the selection of primitives are discussed. We argue that computational models can be used for optimizing complex visualizations. In addition, we argue that they can provide a means of computationally evaluating perceptual theories of visualization, and a method for quality control of display methods.

  6. Shell model of optimal passive-scalar mixing

    NASA Astrophysics Data System (ADS)

    Miles, Christopher; Doering, Charles

    2015-11-01

    Optimal mixing is significant to process engineering within industries such as food, chemical, pharmaceutical, and petrochemical. An important question in this field is "How should one stir to create a homogeneous mixture while being energetically efficient?" To answer this question, we consider an initially unmixed scalar field representing some concentration within a fluid on a periodic domain. This passive-scalar field is advected by the velocity field, our control variable, constrained by a physical quantity such as energy or enstrophy. We consider two objectives: local-in-time (LIT) optimization (what will maximize the mixing rate now?) and global-in-time (GIT) optimization (what will maximize mixing at the end time?). Throughout this work we use the H^-1 mix-norm to measure mixing. To gain a better understanding, we provide a simplified mixing model by using a shell model of passive-scalar advection. LIT optimization in this shell model gives perfect mixing in finite time for the energy-constrained case and exponential decay to the perfectly mixed state for the enstrophy-constrained case. Although we only enforce that the time-averaged energy (or enstrophy) equals a chosen value in GIT optimization, interestingly, the optimal control keeps this value constant over time.
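
    For concreteness, a hedged sketch of the H^-1 mix-norm used above, computed spectrally for a periodic one-dimensional scalar field (the paper's setting is a shell model; the field below is illustrative): the squared norm is the sum over wavenumbers of |theta_hat_k|^2 / |k|^2, so it weights large scales most and decays as the field mixes.

      import numpy as np

      n = 256
      L = 2 * np.pi
      x = np.linspace(0, L, n, endpoint=False)
      theta = np.sign(np.sin(x))                 # an "unmixed" two-band scalar field

      theta_hat = np.fft.fft(theta) / n          # Fourier coefficients
      k = np.fft.fftfreq(n, d=L / n) * 2 * np.pi # integer wavenumbers for period 2*pi
      k2 = k ** 2
      k2[0] = np.inf                             # exclude the mean (k = 0) mode

      mix_norm = np.sqrt(np.sum(np.abs(theta_hat) ** 2 / k2))
      print("H^-1 mix-norm:", mix_norm)          # decays toward 0 as the field mixes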

  7. Optimization methods and silicon solar cell numerical models

    NASA Technical Reports Server (NTRS)

    Girardini, K.; Jacobsen, S. E.

    1986-01-01

    An optimization algorithm for use with numerical silicon solar cell models was developed. By coupling an optimization algorithm with a solar cell model, it is possible to simultaneously vary design variables such as impurity concentrations, front junction depth, back junction depth, and cell thickness to maximize the predicted cell efficiency. An optimization algorithm was developed and interfaced with the Solar Cell Analysis Program in 1 Dimension (SCAP1D). SCAP1D uses finite difference methods to solve the differential equations which, along with several relations from the physics of semiconductors, describe mathematically the performance of a solar cell. A major obstacle is that the numerical methods used in SCAP1D require a significant amount of computer time, and during an optimization the model is called iteratively until the design variables converge to the values associated with the maximum efficiency. This problem was alleviated by designing an optimization code specifically for use with numerically intensive simulations, to reduce the number of times the efficiency has to be calculated to achieve convergence to the optimal solution.
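
    A minimal sketch of the coupling described above, with a cheap analytic stand-in for the expensive SCAP1D efficiency calculation (the function `neg_efficiency` and its toy surrogate are hypothetical; the point is that the optimizer's evaluation count is the budget to minimize):

```python
import numpy as np
from scipy.optimize import minimize

calls = 0
def neg_efficiency(x):
    """Hypothetical stand-in for one SCAP1D run. Each call to the real
    simulation is expensive, so the call counter is the budget."""
    global calls
    calls += 1
    doping, depth, thickness = x
    # smooth toy surrogate with a single interior maximum
    eff = 20.0 - (doping - 3.0)**2 - (depth - 0.5)**2 - (thickness - 250.0)**2 / 1e3
    return -eff  # minimize the negative to maximize efficiency

x0 = np.array([2.0, 0.3, 200.0])  # impurity conc., junction depth, thickness
res = minimize(neg_efficiency, x0, method="Nelder-Mead",
               options={"xatol": 1e-2, "fatol": 1e-3})
print(res.x, -res.fun, calls)     # design variables, efficiency, model calls
```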

  8. Optimization Method for Solution Model of Laser Tracker Multilateration Measurement

    NASA Astrophysics Data System (ADS)

    Chen, Hongfang; Tan, Zhi; Shi, Zhaoyao; Song, Huixu; Yan, Hao

    2016-08-01

    Multilateration measurement using laser trackers suffers from a cumbersome solution method for high-precision measurements, and errors are induced by the self-calibration routines of the laser tracker software. This paper describes an optimization solution model for laser tracker multilateration measurement, which effectively suppresses the negative effect of this self-calibration, and further analyzes the accuracy of the singular value decomposition for the described solution model. Experimental verification of the solution model, based on a laser tracker and a coordinate measuring machine (CMM), was performed. The experimental results show that the described optimization model for laser tracker multilateration measurement provides good accuracy control and has potentially broad application in the field of laser tracker spatial localization.

  9. Applied topology optimization of vibro-acoustic hearing instrument models

    NASA Astrophysics Data System (ADS)

    Søndergaard, Morten Birkmose; Pedersen, Claus B. W.

    2014-02-01

    Designing hearing instruments remains an acoustic challenge: users request small designs for comfortable wear and cosmetic appeal while at the same time requiring sufficient amplification from the device. First, to ensure proper amplification, a critical design challenge in the hearing instrument is to minimize the feedback between the outputs (generated sound and vibrations) from the receiver looping back into the microphones. Second, the feedback signal is conventionally minimized using time-consuming trial-and-error design procedures for physical prototypes and for virtual models using finite element analysis. In the present work it is demonstrated that structural topology optimization of vibro-acoustic finite element models can be used both to sufficiently minimize the feedback signal and to reduce the time-consuming trial-and-error design approach. The structural topology optimization of a vibro-acoustic finite element model is shown for an industrial full-scale model hearing instrument.

  10. Turbulence Model Discovery with Data-Driven Learning and Optimization

    NASA Astrophysics Data System (ADS)

    King, Ryan; Hamlington, Peter

    2016-11-01

    Data-driven techniques have emerged as a useful tool for model development in applications where first-principles approaches are intractable. In this talk, data-driven multi-task learning techniques are used to discover flow-specific optimal turbulence closure models. We use the recently introduced autonomic closure technique to pose an online supervised learning problem created by test filtering turbulent flows in the self-similar inertial range. The autonomic closure is modified to solve the learning problem for all stress components simultaneously with multi-task learning techniques. The closure is further augmented with a feature extraction step that learns a set of orthogonal modes that are optimal at predicting the turbulent stresses. We demonstrate that these modes can be severely truncated to enable drastic reductions in computational costs without compromising the model accuracy. Furthermore, we discuss the potential universality of the extracted features and implications for reduced order modeling of other turbulent flows.

  11. Modeling and optimization of a semiregenerative catalytic naphtha reformer

    SciTech Connect

    Taskar, U.; Riggs, J.B.

    1997-03-01

    Modeling and optimization of a semiregenerative catalytic naphtha reformer has been carried out considering most of its key constituent units. A detailed kinetic scheme involving 35 pseudocomponents connected by a network of 36 reactions in the C5-C10 range was modeled using Hougen-Watson Langmuir-Hinshelwood-type reaction-rate expressions. Deactivation of the catalyst was modeled by including the corresponding equations for coking kinetics. The overall kinetic model was parameterized by bench-marking against industrial plant data using a feed-characterization procedure developed to infer the composition of the chemical species in the feed and reformate from their measured ASTM distillation data. For the initial optimization studies, a constant reactor inlet temperature configuration that would lead to optimum operation over the entire catalyst life cycle was identified. The analysis was extended to study the time-optimal control profiles of decision variables over the run length. In addition, the constant octane case was also studied. The improvement in the objective function achieved in each case was determined. Finally, the sensitivity of the optimal results to uncertainty in reactor-model parameters was evaluated.

  12. Optimal experiment design for model selection in biochemical networks

    PubMed Central

    2014-01-01

    Background Mathematical modeling is often used to formalize hypotheses on how a biochemical network operates by discriminating between competing models. Bayesian model selection offers a way to determine the amount of evidence that data provides to support one model over the other while favoring simple models. In practice, the amount of experimental data is often insufficient to make a clear distinction between competing models. Often one would like to perform a new experiment which would discriminate between competing hypotheses. Results We developed a novel method to perform Optimal Experiment Design to predict which experiments would most effectively allow model selection. A Bayesian approach is applied to infer model parameter distributions. These distributions are sampled and used to simulate from multivariate predictive densities. The method is based on a k-Nearest Neighbor estimate of the Jensen-Shannon divergence between the multivariate predictive densities of competing models. Conclusions We show that the method successfully uses predictive differences to enable model selection by applying it to several test cases. Because the design criterion is based on predictive distributions, which can be computed for a wide range of model quantities, the approach is very flexible. The method reveals specific combinations of experiments which improve discriminability even in cases where data is scarce. The proposed approach can be used in conjunction with existing Bayesian methodologies where (approximate) posteriors have been determined, making use of relations that exist within the inferred posteriors. PMID:24555498
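
    The core of the design criterion is a sample-based divergence estimate. The sketch below implements one common nearest-neighbour KL estimator (Wang, Kulkarni & Verdú, 2009) and combines two of them into a Jensen-Shannon estimate using a pooled mixture sample; this particular construction is an assumption for illustration, and the paper's exact estimator may differ.

```python
import numpy as np
from scipy.spatial import cKDTree

def knn_kl(x, y):
    """1-NN estimate of KL(p||q) from samples x ~ p and y ~ q, shape (n, d).
    (Wang, Kulkarni & Verdu, 2009; can be negative for finite samples.)"""
    n, d = x.shape
    m = len(y)
    rho = cKDTree(x).query(x, k=2)[0][:, 1]  # NN distance within x, excluding self
    nu = cKDTree(y).query(x, k=1)[0]         # NN distance from x into y
    return d * np.mean(np.log(nu / rho)) + np.log(m / (n - 1))

def knn_jsd(x, y):
    """JSD(p, q) = 0.5 KL(p||m) + 0.5 KL(q||m); the mixture m is represented
    by pooling held-out halves so no point is compared with itself."""
    xa, xb = x[: len(x) // 2], x[len(x) // 2 :]
    ya, yb = y[: len(y) // 2], y[len(y) // 2 :]
    mix = np.vstack([xb, yb])
    return 0.5 * knn_kl(xa, mix) + 0.5 * knn_kl(ya, mix)

rng = np.random.default_rng(0)
p = rng.normal(0.0, 1.0, size=(500, 2))   # predictive samples, model 1
q = rng.normal(1.0, 1.0, size=(500, 2))   # predictive samples, model 2
print(knn_jsd(p, q))
```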

  13. Time dependent optimal switching controls in online selling models

    SciTech Connect

    Bradonjic, Milan; Cohen, Albert

    2010-01-01

    We present a method to incorporate dishonesty in online selling via a stochastic optimal control problem. In our framework, the seller wishes to maximize her average wealth level W at a fixed time T of her choosing. The corresponding Hamilton-Jacobi-Bellman (HJB) equation is analyzed for a basic case. For more general models, the admissible control set is restricted to a jump process that switches between extreme values. We propose a new approach, where the optimal control problem is reduced to a multivariable optimization problem.

  14. Pumping Optimization Model for Pump and Treat Systems - 15091

    SciTech Connect

    Baker, S.; Ivarson, Kristine A.; Karanovic, M.; Miller, Charles W.; Tonkin, M.

    2015-01-15

    Pump and Treat systems are being utilized to remediate contaminated groundwater in the Hanford 100 Areas adjacent to the Columbia River in Eastern Washington. Design of the systems was supported by a three-dimensional (3D) fate and transport model. This model provided sophisticated simulation capabilities but requires many hours to calculate results for each simulation considered. Many simulations are required to optimize system performance, so a two-dimensional (2D) model was created to reduce run time. The 2D model was developed as an equivalent-property version of the 3D model that derives boundary conditions and aquifer properties from the 3D model. It produces predictions that are very close to the 3D model predictions, allowing it to be used for comparative remedy analyses. Any potential system modifications identified by using the 2D version are verified for use by running the 3D model to confirm performance. The 2D model was incorporated into a comprehensive analysis system (the Pumping Optimization Model, POM) to simplify analysis of multiple simulations. It allows rapid turnaround by utilizing a graphical user interface that (1) allows operators to create hypothetical scenarios for system operation, (2) feeds the input to the 2D fate and transport model, and (3) displays the scenario results to evaluate performance improvement. All of the above is accomplished within the user interface. Complex analyses can be completed within a few hours and multiple simulations can be compared side by side. The POM utilizes standard office computing equipment and established groundwater modeling software.

  15. Velocity model optimization for surface microseismic monitoring via amplitude stacking

    NASA Astrophysics Data System (ADS)

    Jiang, Haiyu; Wang, Zhongren; Zeng, Xiaoxian; Lü, Hao; Zhou, Xiaohua; Chen, Zubin

    2016-12-01

    A usable velocity model in microseismic projects plays a crucial role in achieving statistically reliable microseismic event locations. Existing methods for velocity model optimization rely mainly on picking arrival times at individual receivers. However, for microseismic monitoring with surface stations, seismograms of perforation shots have such low signal-to-noise ratios (S/N) that they do not yield sufficiently reliable picks. In this study, we develop a framework for constructing an a priori 1-D flat-layered velocity model using a non-linear optimization technique based on amplitude stacking. The energy focusing of the perforation shot is improved using very fast simulated annealing (VFSA), and the accuracies of shot relocations are used to evaluate whether the resultant velocity model can be used for microseismic event location. Our method also includes a conventional migration-based location technique that utilizes successive grid subdivisions to improve computational efficiency and source location accuracy. Because unreasonable a priori velocity model information and interference due to additive noise are the major contributors to inaccuracies in perforation shot locations, we use velocity model optimization as a compensation scheme. Using synthetic tests, we show that accurate locations of perforation shots can be recovered to within 2 m, even with pre-stack S/N ratios as low as 0.1 at individual receivers. By applying the technique to a coal-bed gas reservoir in Western China, we demonstrate that perforation shot location can be recovered to within the tolerance of the well tip location.
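
    For readers unfamiliar with VFSA, the sketch below shows the characteristic Ingber-style cooling schedule and bounded, Cauchy-like move generation; the stacked-amplitude objective over candidate layer velocities is left abstract (replaced by a toy quadratic), and all schedule constants are illustrative assumptions.

```python
import numpy as np

def vfsa(objective, lo, hi, n_iter=2000, t0=1.0, c=1.0, seed=0):
    """Very fast simulated annealing sketch: Ingber-style cooling and
    bounded Cauchy-like moves; `objective` is maximized."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    ndim = lo.size
    x = rng.uniform(lo, hi)
    fx = objective(x)
    best, fbest = x.copy(), fx
    for k in range(1, n_iter + 1):
        t = t0 * np.exp(-c * k ** (1.0 / ndim))        # VFSA cooling schedule
        u = rng.uniform(size=ndim)
        step = np.sign(u - 0.5) * t * ((1 + 1 / t) ** np.abs(2 * u - 1) - 1)
        cand = np.clip(x + step * (hi - lo), lo, hi)
        fc = objective(cand)
        # Metropolis acceptance for a maximization problem
        if fc > fx or rng.uniform() < np.exp((fc - fx) / t):
            x, fx = cand, fc
            if fx > fbest:
                best, fbest = x.copy(), fx
    return best, fbest

# Toy demo: recover three layer velocities (m/s); a real objective would be
# the stacked-amplitude energy of the perforation-shot records.
true_v = np.array([1800.0, 2500.0, 3200.0])
best_v, f = vfsa(lambda v: -np.sum((v - true_v) ** 2),
                 lo=[1500.0] * 3, hi=[4500.0] * 3)
print(best_v)
```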

  16. Aeroelastic Optimization Study Based on X-56A Model

    NASA Technical Reports Server (NTRS)

    Li, Wesley; Pak, Chan-Gi

    2014-01-01

    A design process which incorporates the object-oriented multidisciplinary design, analysis, and optimization (MDAO) tool and the aeroelastic effects of high fidelity finite element models to characterize the design space was successfully developed and established. Two multidisciplinary design optimization studies using an object-oriented MDAO tool developed at NASA Armstrong Flight Research Center were presented. The first study demonstrates the use of aeroelastic tailoring concepts to minimize the structural weight while meeting the design requirements including strength, buckling, and flutter. A hybrid and discretization optimization approach was implemented to improve accuracy and computational efficiency of a global optimization algorithm. The second study presents a flutter mass balancing optimization study. The results provide guidance to modify the fabricated flexible wing design and move the design flutter speeds back into the flight envelope so that the original objective of X-56A flight test can be accomplished.

  17. Optimization methods and silicon solar cell numerical models

    NASA Technical Reports Server (NTRS)

    Girardini, K.

    1986-01-01

    The goal of this project is the development of an optimization algorithm for use with a solar cell model. It is possible to simultaneously vary design variables such as impurity concentrations, front junction depth, back junction depth, and cell thickness to maximize the predicted cell efficiency. An optimization algorithm has been developed and interfaced with the Solar Cell Analysis Program in 1 Dimension (SCAPID). SCAPID uses finite difference methods to solve the differential equations which, along with several relations from the physics of semiconductors, describe mathematically the operation of a solar cell. A major obstacle is that the numerical methods used in SCAPID require a significant amount of computer time, and during an optimization the model is called iteratively until the design variables converge to the values associated with the maximum efficiency. This problem has been alleviated by designing an optimization code specifically for use with numerically intensive simulations, to reduce the number of times the efficiency has to be calculated to achieve convergence to the optimal solution. Adapting SCAPID so that it could be called iteratively by the optimization code provided another means of reducing the CPU time required to complete an optimization. Instead of calculating the entire I-V curve, as is usually done in SCAPID, only the efficiency is calculated (maximum power voltage and current), and the solution from previous calculations is used to initiate the next solution.

  18. Optimizing the Teaching-Learning Process Through a Linear Programming Model--Stage Increment Model.

    ERIC Educational Resources Information Center

    Belgard, Maria R.; Min, Leo Yoon-Gee

    An operations research method to optimize the teaching-learning process is introduced in this paper. In particular, a linear programming model is proposed which, unlike dynamic or control theory models, allows the computer to react to the responses of a learner in seconds or less. To satisfy the assumptions of linearity, the seemingly complicated…

  19. GRAVITATIONAL LENS MODELING WITH GENETIC ALGORITHMS AND PARTICLE SWARM OPTIMIZERS

    SciTech Connect

    Rogers, Adam; Fiege, Jason D.

    2011-02-01

    Strong gravitational lensing of an extended object is described by a mapping from source to image coordinates that is nonlinear and cannot generally be inverted analytically. Determining the structure of the source intensity distribution also requires a description of the blurring effect due to a point-spread function. This initial study uses an iterative gravitational lens modeling scheme based on the semilinear method to determine the linear parameters (source intensity profile) of a strongly lensed system. Our 'matrix-free' approach avoids construction of the lens and blurring operators while retaining the least-squares formulation of the problem. The parameters of an analytical lens model are found through nonlinear optimization by an advanced genetic algorithm (GA) and particle swarm optimizer (PSO). These global optimization routines are designed to explore the parameter space thoroughly, mapping model degeneracies in detail. We develop a novel method that determines the L-curve for each solution automatically, which represents the trade-off between the image χ² and regularization effects, and allows an estimate of the optimally regularized solution for each lens parameter set. In the final step of the optimization procedure, the lens model with the lowest χ² is used while the global optimizer solves for the source intensity distribution directly. This allows us to accurately determine the number of degrees of freedom in the problem to facilitate comparison between lens models and enforce positivity on the source profile. In practice, we find that the GA conducts a more thorough search of the parameter space than the PSO.

  20. Geometry Modeling and Grid Generation for Design and Optimization

    NASA Technical Reports Server (NTRS)

    Samareh, Jamshid A.

    1998-01-01

    Geometry modeling and grid generation (GMGG) have played and will continue to play an important role in computational aerosciences. During the past two decades, tremendous progress has occurred in GMGG; however, GMGG is still the biggest bottleneck to routine applications for complicated Computational Fluid Dynamics (CFD) and Computational Structures Mechanics (CSM) models for analysis, design, and optimization. We are still far from incorporating GMGG tools in a design and optimization environment for complicated configurations. It is still a challenging task to parameterize an existing model in today's Computer-Aided Design (CAD) systems, and the models created are not always good enough for automatic grid generation tools. Designers may believe their models are complete and accurate, but unseen imperfections (e.g., gaps, unwanted wiggles, free edges, slivers, and transition cracks) often cause problems in gridding for CSM and CFD. Despite many advances in grid generation, the process is still the most labor-intensive and time-consuming part of the computational aerosciences for analysis, design, and optimization. In an ideal design environment, a design engineer would use a parametric model to evaluate alternative designs effortlessly and optimize an existing design for a new set of design objectives and constraints. For this ideal environment to be realized, the GMGG tools must have the following characteristics: (1) be automated, (2) provide consistent geometry across all disciplines, (3) be parametric, and (4) provide sensitivity derivatives. This paper will review the status of GMGG for analysis, design, and optimization processes, and it will focus on some emerging ideas that will advance the GMGG toward the ideal design environment.

  1. Development and Implementation of Practical Optimal LES Models

    DTIC Science & Technology

    2007-03-31

    moment equation simulations using QNA were quite accurate for small time intervals and displayed unphysical behavior only after long simulation times ... flux model for the optimization of the variance of the model. After long simulation times, the results from the dynamic models exhibit inaccurate large ... (155)

    $$2 r f_8 + f_7' + 5 r f_{10} + r^2 f_{10}' = 0 \quad (156)$$

    $$r^2 f_{12}' + 7 r f_{12} + f_{11}' = 0 \quad (157)$$

    After the imposition of the continuity constraint, R_I and R_Q have ...

  2. Verifying and Validating Proposed Models for FSW Process Optimization

    NASA Technical Reports Server (NTRS)

    Schneider, Judith

    2008-01-01

    This slide presentation reviews Friction Stir Welding (FSW) and the attempts to model the process in order to optimize and improve the process. The studies are ongoing to validate and refine the model of metal flow in the FSW process. There are slides showing the conventional FSW process, a couple of weld tool designs and how the design interacts with the metal flow path. The two basic components of the weld tool are shown, along with geometries of the shoulder design. Modeling of the FSW process is reviewed. Other topics include (1) Microstructure features, (2) Flow Streamlines, (3) Steady-state Nature, and (4) Grain Refinement Mechanisms

  3. Cost Optimization Model for Business Applications in Virtualized Grid Environments

    NASA Astrophysics Data System (ADS)

    Strebel, Jörg

    The advent of Grid computing gives enterprises an ever-increasing choice of computing options, yet research has so far hardly addressed the problem of mixing the different computing options in a cost-minimal fashion. The following paper presents a comprehensive cost model and a mixed integer optimization model which can be used to minimize the IT expenditures of an enterprise and to help in deciding when to outsource certain business software applications. A sample scenario is analyzed and promising cost savings are demonstrated. Possible applications of the model to future research questions are outlined.
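
    The decision the paper formalizes as a mixed integer program can be illustrated, at toy scale, by brute-force enumeration over assignments of applications to computing options; all names and cost figures below are hypothetical, and a real instance would use a MILP solver rather than enumeration.

```python
from itertools import product

# Hypothetical data: monthly cost of running each application on each
# computing option, plus a fixed fee per option actually used.
apps = ["CRM", "ERP", "BI"]
options = ["in_house", "grid", "cloud"]
run_cost = {  # run_cost[app][option]
    "CRM": {"in_house": 120, "grid": 90,  "cloud": 100},
    "ERP": {"in_house": 200, "grid": 260, "cloud": 180},
    "BI":  {"in_house": 80,  "grid": 60,  "cloud": 70},
}
fixed_fee = {"in_house": 50, "grid": 40, "cloud": 30}

best_cost, best_assign = float("inf"), None
# Enumerate every assignment of apps to options (3**3 = 27 combinations here)
for assign in product(options, repeat=len(apps)):
    cost = sum(run_cost[a][o] for a, o in zip(apps, assign))
    cost += sum(fixed_fee[o] for o in set(assign))
    if cost < best_cost:
        best_cost, best_assign = cost, dict(zip(apps, assign))

print(best_cost, best_assign)
```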

  4. Metabolic modeling of Saccharomyces cerevisiae using the optimal control of homeostasis: a cybernetic model definition.

    PubMed

    Giuseppin, M L; van Riel, N A

    2000-01-01

    A model is presented to describe the observed behavior of microorganisms that aim at metabolic homeostasis while growing and adapting to their environment in an optimal way. The cellular metabolism is seen as a network with a multiple-controller system with both feedback and feedforward control, i.e., a model based on dynamic optimal metabolic control. The dynamic network consists of aggregated pathways, each having a control setpoint for the metabolic states at a given growth rate. This set of strategies of the cell forms a true cybernetic model with a minimal number of assumptions. The cellular strategies and constraints were derived from metabolic flux analysis using an identified, biochemically relevant stoichiometry matrix derived from experimental data on the cellular composition of continuous cultures of Saccharomyces cerevisiae. Based on these data a cybernetic model was developed to study its dynamic behavior. The growth rate of the cell is determined by the structural compounds and fluxes of compounds related to central metabolism. In contrast to many other cybernetic models, the minimal model does not contain any assumed internal kinetic parameters or interactions. This necessitates the use of stepwise integration with an optimization of the fluxes at every time interval, as sketched below. Some examples of the behavior of this model are given with respect to steady states and pulse responses. This model is very suitable for describing, semiquantitatively, the dynamics of global cellular metabolism and may form a useful framework for including structured and more detailed kinetic models.
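
    The stepwise integrate-then-optimize loop can be sketched as follows, using an invented toy stoichiometry and a linear program as the per-interval flux optimization; the paper's model optimizes homeostatic setpoints rather than growth alone, so this is a simplified, FBA-like stand-in.

```python
import numpy as np
from scipy.optimize import linprog

# Toy stoichiometry (rows: internal metabolites, columns: fluxes); entirely
# hypothetical, standing in for the identified S. cerevisiae matrix.
S = np.array([[1, -1, -1,  0],
              [0,  1,  0, -1]], dtype=float)
c = np.array([0.0, 0.0, 0.0, -1.0])   # minimize -v4, i.e. maximize growth flux
bounds = [(0.0, 1.0)] * 4             # flux capacity limits (illustrative)

x, dt, t_end = 0.1, 0.1, 1.0          # biomass, step size, horizon
for _ in range(int(t_end / dt)):
    # Optimize the fluxes over this interval under quasi-steady state S v = 0;
    # in the real model the constraints/setpoints would depend on the state.
    res = linprog(c, A_eq=S, b_eq=np.zeros(S.shape[0]), bounds=bounds)
    mu = -res.fun                     # optimal specific growth rate
    x += dt * mu * x                  # explicit Euler update of biomass
print(x)
```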

  5. Discover for Yourself: An Optimal Control Model in Insect Colonies

    ERIC Educational Resources Information Center

    Winkel, Brian

    2013-01-01

    We describe the enlightening path of self-discovery afforded to the teacher of undergraduate mathematics. This is demonstrated as we find and develop background material on an application of optimal control theory to model the evolutionary strategy of an insect colony to produce the maximum number of queen or reproducer insects in the colony at…

  6. Metabolic engineering with multi-objective optimization of kinetic models.

    PubMed

    Villaverde, Alejandro F; Bongard, Sophia; Mauch, Klaus; Balsa-Canto, Eva; Banga, Julio R

    2016-03-20

    Kinetic models have a great potential for metabolic engineering applications. They can be used to test which genetic and regulatory modifications can increase the production of metabolites of interest, while simultaneously monitoring other key functions of the host organism. This work presents a methodology for increasing productivity in biotechnological processes by exploiting dynamic models. It uses multi-objective dynamic optimization to identify the combination of targets (enzymatic modifications) and the degree of up- or down-regulation that must be performed in order to optimize a set of pre-defined performance metrics subject to process constraints. The capabilities of the approach are demonstrated on a realistic and computationally challenging application: a large-scale metabolic model of Chinese Hamster Ovary (CHO) cells, which are used for antibody production in a fed-batch process. The proposed methodology manages to provide sustained and robust growth in CHO cells, increasing productivity while simultaneously increasing biomass production and product titer, and keeping the concentrations of lactate and ammonia at low values. The approach presented here can be used for optimizing metabolic models by finding the best combination of targets and their optimal level of up/down-regulation. Furthermore, it can accommodate additional trade-offs and constraints with great flexibility.

  7. Optimal Control of a Dengue Epidemic Model with Vaccination

    NASA Astrophysics Data System (ADS)

    Rodrigues, Helena Sofia; Teresa, M.; Monteiro, T.; Torres, Delfim F. M.

    2011-09-01

    We present a SIR+ASI epidemic model to describe the interaction between human and dengue fever mosquito populations. A control strategy in the form of vaccination, to decrease the number of infected individuals, is used. An optimal control approach is applied in order to find the best way to fight the disease.

  8. Water-resources optimization model for Santa Barbara, California

    USGS Publications Warehouse

    Nishikawa, T.

    1998-01-01

    A simulation-optimization model has been developed for the optimal management of the city of Santa Barbara's water resources during a drought. The model, which links groundwater simulation with linear programming, has a planning horizon of 5 years. The objective is to minimize the cost of water supply subject to water demand constraints, hydraulic head constraints to control seawater intrusion, and water capacity constraints. The decision variables are monthly water deliveries from surface water and groundwater. The state variables are hydraulic heads. The drought of 1947-51 is the city's worst drought on record, and simulated surface-water supplies for this period were used as a basis for testing optimal management of current water resources under drought conditions. The simulation-optimization model was applied using three reservoir operation rules. In addition, the model's sensitivity to demand, carry over [the storage of water in one year for use in later year(s)], head constraints, and capacity constraints was tested.
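
    A toy two-month version of such a linear program is sketched below with SciPy's linprog; all costs, demands, and capacities are invented, and the real model derives its head constraints from the groundwater simulation rather than the simple total-pumping cap used here as a stand-in.

```python
import numpy as np
from scipy.optimize import linprog

# Decision variables: [sw_1, gw_1, sw_2, gw_2] = monthly surface-water and
# groundwater deliveries (acre-ft); costs in $ per acre-ft (hypothetical).
cost = np.array([50.0, 90.0, 50.0, 90.0])

# Meet each month's demand exactly
A_eq = np.array([[1, 1, 0, 0],
                 [0, 0, 1, 1]], dtype=float)
b_eq = np.array([1200.0, 1500.0])

# Total groundwater cap standing in for the head (seawater-intrusion) limit
A_ub = np.array([[0, 1, 0, 1]], dtype=float)
b_ub = np.array([1000.0])

# Surface-water capacity per month; groundwater bounded only by the cap above
bounds = [(0, 900), (0, None), (0, 900), (0, None)]

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.x, res.fun)   # optimal deliveries and minimum total cost
```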

  9. Analytical models integrated with satellite images for optimized pest management

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The global field protection (GFP) was developed to protect and optimize pest management resources integrating satellite images for precise field demarcation with physical models of controlled release devices of pesticides to protect large fields. The GFP was implemented using a graphical user interf...

  10. To the optimization problem in minority game model

    NASA Astrophysics Data System (ADS)

    Yanishevsky, Vasyl

    2009-12-01

    The article presents research results for the optimization problem in the minority game model in a Gaussian approximation using one-step replica symmetry breaking (1RSB). A comparison is made with the replica-symmetric (RS) approximation and with results from the literature obtained by other methods.

  11. Effective and efficient algorithm for multiobjective optimization of hydrologic models

    NASA Astrophysics Data System (ADS)

    Vrugt, Jasper A.; Gupta, Hoshin V.; Bastidas, Luis A.; Bouten, Willem; Sorooshian, Soroosh

    2003-08-01

    Practical experience with the calibration of hydrologic models suggests that any single-objective function, no matter how carefully chosen, is often inadequate to properly measure all of the characteristics of the observed data deemed to be important. One strategy to circumvent this problem is to define several optimization criteria (objective functions) that measure different (complementary) aspects of the system behavior and to use multicriteria optimization to identify the set of nondominated, efficient, or Pareto optimal solutions. In this paper, we present an efficient and effective Markov Chain Monte Carlo sampler, entitled the Multiobjective Shuffled Complex Evolution Metropolis (MOSCEM) algorithm, which is capable of solving the multiobjective optimization problem for hydrologic models. MOSCEM is an improvement over the Shuffled Complex Evolution Metropolis (SCEM-UA) global optimization algorithm, using the concept of Pareto dominance (rather than direct single-objective function evaluation) to evolve the initial population of points toward a set of solutions stemming from a stable distribution (Pareto set). The efficacy of the MOSCEM-UA algorithm is compared with the original MOCOM-UA algorithm for three hydrologic modeling case studies of increasing complexity.
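
    Pareto dominance, the comparison MOSCEM uses in place of a single objective value, reduces to a simple filter over objective vectors; a minimal version (assuming every objective is minimized) is shown below.

```python
import numpy as np

def pareto_front(objs):
    """Return a boolean mask of nondominated points for a minimization
    problem; objs has shape (n_points, n_objectives)."""
    objs = np.asarray(objs, float)
    mask = np.ones(len(objs), dtype=bool)
    for i in range(len(objs)):
        if not mask[i]:
            continue
        # j dominates i if j is <= in all objectives and < in at least one
        dominated = np.all(objs <= objs[i], axis=1) & np.any(objs < objs[i], axis=1)
        if dominated.any():
            mask[i] = False
    return mask

pts = np.array([[1.0, 4.0], [2.0, 2.0], [3.0, 3.0], [4.0, 1.0]])
print(pareto_front(pts))   # [ True  True False  True]
```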

  12. Risk-based Multiobjective Optimization Model for Bridge Maintenance Planning

    SciTech Connect

    Yang, I-T.; Hsu, Y.-S.

    2010-05-21

    Determining the optimal maintenance plan is essential for successful bridge management. The optimization objectives are defined in the forms of minimizing life-cycle cost and maximizing performance indicators. Previous bridge maintenance models assumed that the process of bridge deterioration and the estimate of maintenance cost are deterministic, i.e., known with certainty. This assumption, however, is invalid, especially for estimates over a long time horizon of bridge life. In this study, we consider the risks associated with bridge deterioration and maintenance cost in determining the optimal maintenance plan. The decision variables include the strategic choice of essential maintenance (such as silane treatment and cathodic protection) and the intervals between periodic maintenance. An epsilon-constrained Particle Swarm Optimization algorithm is used to approximate the tradeoff between life-cycle cost and performance indicators. During the stochastic search for optimal solutions, Monte Carlo simulation is used to evaluate the impact of risks on the objective values at an accepted level of reliability. The proposed model can help decision makers select a compromise maintenance plan from a group of alternative choices, each of which leads to a different level of performance and life-cycle cost. A numerical example is used to illustrate the proposed model.

  13. Parallel Optimization of 3D Cardiac Electrophysiological Model Using GPU.

    PubMed

    Xia, Yong; Wang, Kuanquan; Zhang, Henggui

    2015-01-01

    Large-scale 3D virtual heart model simulations are highly demanding in computational resources. This imposes a major challenge for traditional CPU-based computing resources, which either cannot meet the full computational demand or are not easily available due to expensive costs. GPUs, as a parallel computing environment, therefore provide an alternative for solving the large-scale computational problems of whole-heart modeling. In this study, using a 3D sheep atrial model as a test bed, we developed a GPU-based simulation algorithm to simulate the conduction of electrical excitation waves in the 3D atria. In the GPU algorithm, the multicellular tissue model was split into two components: the single-cell model (a system of ordinary differential equations) and the diffusion term of the monodomain model (a partial differential equation). Such a decoupling enabled realization of the GPU parallel algorithm. Furthermore, several optimization strategies were proposed based on the features of the virtual heart model, which enabled a 200-fold speedup as compared to a CPU implementation. In conclusion, an optimized GPU algorithm has been developed that provides an economic and powerful platform for 3D whole-heart simulations.
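
    The ODE/PDE split that makes the GPU mapping natural can be shown in miniature; the sketch below uses a FitzHugh-Nagumo toy cell model on a 1-D fibre instead of the paper's 3D sheep atrial model, with the pointwise ODE step being the part that maps to one GPU thread per cell.

```python
import numpy as np

# Operator-splitting sketch of a monodomain-style update on a 1-D fibre.
nx, dx, dt, D = 200, 0.1, 0.01, 0.1
v = np.zeros(nx)
w = np.zeros(nx)
v[:10] = 1.0                              # stimulus at one end

def cell_ode_step(v, w, dt, a=0.1, eps=0.01, beta=0.5):
    """Toy FitzHugh-Nagumo kinetics; one independent update per grid point."""
    dv = v * (v - a) * (1.0 - v) - w
    dw = eps * (v - beta * w)
    return v + dt * dv, w + dt * dw

for step in range(2000):
    # 1) pointwise ODE step (embarrassingly parallel -> one GPU thread per cell)
    v, w = cell_ode_step(v, w, dt)
    # 2) diffusion (PDE) step: explicit finite-difference Laplacian
    lap = np.zeros_like(v)
    lap[1:-1] = (v[2:] - 2 * v[1:-1] + v[:-2]) / dx**2
    v += dt * D * lap
```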

  15. Modeling of biological intelligence for SCM system optimization.

    PubMed

    Chen, Shengyong; Zheng, Yujun; Cattani, Carlo; Wang, Wanliang

    2012-01-01

    This article summarizes some methods from biological intelligence for the modeling and optimization of supply chain management (SCM) systems, including genetic algorithms, evolutionary programming, differential evolution, swarm intelligence, artificial immune systems, and other biological intelligence related methods. An SCM system is adaptive, dynamic, open, and self-organizing, and is maintained by flows of information, materials, goods, funds, and energy. Traditional methods for modeling and optimizing complex SCM systems require huge amounts of computing resources, and biological intelligence-based solutions can often provide valuable alternatives for efficiently solving problems. The paper summarizes the recent related methods for the design and optimization of SCM systems, covering the most widely used genetic algorithms and other evolutionary algorithms.

  17. Optimization of Ultrafilter Feed Conditions Using Classical Filtration Models

    SciTech Connect

    Geeting, John GH; Hallen, Richard T.; Peterson, Reid A.

    2005-11-15

    Two classical models were evaluated to assess their applicability to test data obtained from filtration of a high-level waste sludge sample from the Hanford tank farms. One model was then selected for use in evaluating the optimal feed conditions for maximizing filter throughput for the proposed Waste Treatment Plant at the Hanford site. This analysis indicates that an optimal feed composition does exist, but that this optimal composition differs depending upon the product (permeate or retentate) that is to be maximized. A basic premise of the design for the WTP had been that evaporation of the feed to 5 M Na (or higher if possible) was required to achieve optimum throughput. However, these results indicate that optimum throughput from a filtration perspective is achieved at lower sodium molarities (either 3.22 M for maximum LAW throughput or 4.33 M for maximum HLW throughput).

  18. Multi-objective global optimization for hydrologic models

    NASA Astrophysics Data System (ADS)

    Yapo, Patrice Ogou; Gupta, Hoshin Vijai; Sorooshian, Soroosh

    1998-01-01

    The development of automated (computer-based) calibration methods has focused mainly on the selection of a single-objective measure of the distance between the model-simulated output and the data and the selection of an automatic optimization algorithm to search for the parameter values which minimize that distance. However, practical experience with model calibration suggests that no single-objective function is adequate to measure the ways in which the model fails to match the important characteristics of the observed data. Given that some of the latest hydrologic models simulate several of the watershed output fluxes (e.g. water, energy, chemical constituents, etc.), there is a need for effective and efficient multi-objective calibration procedures capable of exploiting all of the useful information about the physical system contained in the measurement data time series. The MOCOM-UA algorithm, an effective and efficient methodology for solving the multiple-objective global optimization problem, is presented in this paper. The method is an extension of the successful SCE-UA single-objective global optimization algorithm. The features and capabilities of MOCOM-UA are illustrated by means of a simple hydrologic model calibration study.

  19. Optimized volume models of earthquake-triggered landslides

    PubMed Central

    Xu, Chong; Xu, Xiwei; Shen, Lingling; Yao, Qi; Tan, Xibin; Kang, Wenjun; Ma, Siyuan; Wu, Xiyan; Cai, Juntao; Gao, Mingxing; Li, Kang

    2016-01-01

    In this study, we proposed three optimized models for calculating the total volume of landslides triggered by the 2008 Wenchuan, China Mw 7.9 earthquake. First, we calculated the volume of each deposit of 1,415 landslides triggered by the quake based on pre- and post-quake DEMs at 20 m resolution. The samples were used to fit the conventional landslide “volume-area” power-law relationship and the 3 optimized models we proposed, respectively. Two data fitting methods, i.e. log-transformed-based linear and original data-based nonlinear least squares, were applied to the 4 models. Results show that original data-based nonlinear least squares, combined with an optimized model considering length, width, height, lithology, slope, peak ground acceleration, and slope aspect, shows the best performance. This model was subsequently applied to the database of landslides triggered by the quake, except for the two largest ones with known volumes. It indicates that the total volume of the 196,007 landslides is about 1.2 × 10¹⁰ m³ in deposit materials and 1 × 10¹⁰ m³ in source areas, respectively. The total volume estimated from the existing relationship between earthquake magnitude and total landslide volume for an individual earthquake is much less than that obtained in this study, which reminds us of the necessity to update the power-law relationship. PMID:27404212

  20. Optimization of Regression Models of Experimental Data Using Confirmation Points

    NASA Technical Reports Server (NTRS)

    Ulbrich, N.

    2010-01-01

    A new search metric is discussed that may be used to better assess the predictive capability of different math term combinations during the optimization of a regression model of experimental data. The new search metric can be determined for each tested math term combination if the given experimental data set is split into two subsets. The first subset consists of data points that are only used to determine the coefficients of the regression model. The second subset consists of confirmation points that are exclusively used to test the regression model. The new search metric value is assigned after comparing two values that describe the quality of the fit of each subset. The first value is the standard deviation of the PRESS residuals of the data points. The second value is the standard deviation of the response residuals of the confirmation points. The greater of the two values is used as the new search metric value. This choice guarantees that both standard deviations are always less than or equal to the value that is used during the optimization. Experimental data from the calibration of a wind tunnel strain-gage balance is used to illustrate the application of the new search metric. The new search metric ultimately generates an optimized regression model that was already tested at regression model independent confirmation points before it is ever used to predict an unknown response from a set of regressors.
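
    A compact version of the metric, under the assumption of an ordinary least-squares regression model (so PRESS residuals have the closed form e_i/(1 - h_ii) via the hat matrix), might look like the sketch below; the data split and term combination are placeholders, not the balance-calibration data.

```python
import numpy as np

def search_metric(X_fit, y_fit, X_conf, y_conf):
    """Search metric for one math-term combination: the larger of the
    PRESS-residual std (fit points) and the response-residual std
    (confirmation points), for an ordinary least-squares fit."""
    beta, *_ = np.linalg.lstsq(X_fit, y_fit, rcond=None)
    # PRESS residuals via the hat matrix: e_i / (1 - h_ii)
    H = X_fit @ np.linalg.pinv(X_fit.T @ X_fit) @ X_fit.T
    press = (y_fit - X_fit @ beta) / (1.0 - np.diag(H))
    conf_resid = y_conf - X_conf @ beta
    return max(np.std(press, ddof=1), np.std(conf_resid, ddof=1))

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(30), rng.uniform(-1, 1, 30)])
y = 2 + 3 * X[:, 1] + rng.normal(0, 0.1, 30)
# first 20 points fit the model; last 10 are confirmation points
print(search_metric(X[:20], y[:20], X[20:], y[20:]))
```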

  1. Optimal control in a model of malaria with differential susceptibility

    NASA Astrophysics Data System (ADS)

    Hincapié, Doracelly; Ospina, Juan

    2014-06-01

    A malaria model with differential susceptibility is analyzed using the optimal control technique. In the model the human population is classified as susceptible, infected, and recovered. Susceptibility is assumed to depend on genetic, physiological, or social characteristics that vary between individuals. The model is described by a system of differential equations that relate the human and vector populations, so that the infection is transmitted to humans by vectors and to vectors by humans. The model is analyzed using the optimal control method, where the controls are the use of insecticide-treated nets and educational campaigns, and the optimality criterion is to minimize the number of infected humans while keeping the cost as low as possible. The first goal is to determine the effects of differential susceptibility on the proposed control mechanism, and the second goal is to determine the algebraic form of the basic reproductive number of the model. All computations are performed using computer algebra, specifically Maple. It is claimed that the analytical results obtained are important for the design and implementation of control measures for malaria. Future investigations are suggested, such as applying the method to other vector-borne diseases such as dengue or yellow fever, and using free computer algebra software such as Maxima.

  3. A neural network model of reliably optimized spike transmission.

    PubMed

    Samura, Toshikazu; Ikegaya, Yuji; Sato, Yasuomi D

    2015-06-01

    We studied the detailed structure of a neuronal network model in which the spontaneous spike activity is correctly optimized to match the experimental data, and discuss the reliability of the optimized spike transmission. Two stochastic properties of the spontaneous activity were calculated: the spike-count rate and the synchrony size. The synchrony size, expected to be an important factor for optimization of spike transmission in the network, represents the percentage of observed coactive neurons within a time bin, whose probability approximately follows a power law. We systematically investigated how these stochastic properties could be matched to those calculated from the experimental data in terms of the log-normally distributed synaptic weights between excitatory and inhibitory neurons and the synaptic background activity induced by the input current noise in the network model. To ensure reliably optimized spike transmission, the synchrony size as well as the spike-count rate were optimized simultaneously. This required changeably balanced log-normal distributions of synaptic weights between excitatory and inhibitory neurons and appropriately amplified synaptic background activity. Our results suggested that the inhibitory neurons with a hub-like structure, driven by intensive feedback from excitatory neurons, were a key factor in the simultaneous optimization of the spike-count rate and synchrony size, regardless of the different spiking types of excitatory and inhibitory neurons.

  4. Aeroelastic Optimization Study Based on the X-56A Model

    NASA Technical Reports Server (NTRS)

    Li, Wesley W.; Pak, Chan-Gi

    2014-01-01

    One way to increase the aircraft fuel efficiency is to reduce structural weight while maintaining adequate structural airworthiness, both statically and aeroelastically. A design process which incorporates the object-oriented multidisciplinary design, analysis, and optimization (MDAO) tool and the aeroelastic effects of high fidelity finite element models to characterize the design space was successfully developed and established. This paper presents two multidisciplinary design optimization studies using an object-oriented MDAO tool developed at NASA Armstrong Flight Research Center. The first study demonstrates the use of aeroelastic tailoring concepts to minimize the structural weight while meeting the design requirements including strength, buckling, and flutter. Such an approach exploits the anisotropic capabilities of the fiber composite materials chosen for this analytical exercise with ply stacking sequence. A hybrid and discretization optimization approach improves accuracy and computational efficiency of a global optimization algorithm. The second study presents a flutter mass balancing optimization study for the fabricated flexible wing of the X-56A model since a desired flutter speed band is required for the active flutter suppression demonstration during flight testing. The results of the second study provide guidance to modify the wing design and move the design flutter speeds back into the flight envelope so that the original objective of X-56A flight test can be accomplished successfully. The second case also demonstrates that the object-oriented MDAO tool can handle multiple analytical configurations in a single optimization run.

  5. Simulation/optimization modeling for robust pumping strategy design.

    PubMed

    Kalwij, Ineke M; Peralta, Richard C

    2006-01-01

    A new simulation/optimization modeling approach is presented for addressing uncertain knowledge of aquifer parameters. The Robustness Enhancing Optimizer (REO) couples a genetic algorithm and tabu search as optimizers and incorporates aquifer parameter sensitivity analysis to guide multiple-realization optimization. The REO maximizes strategy robustness for a pumping strategy that is optimal for a primary objective function (OF), such as cost. The more robust a strategy, the more likely it is to achieve management goals in the field, even if the physical system differs from the model. The REO is applied to trinitrotoluene and Royal Demolition Explosive plumes at Umatilla Chemical Depot in Oregon to develop robust least-cost strategies. The REO efficiently develops robust pumping strategies while maintaining the optimal value of the primary OF, differing from the common situation in which the primary OF value degrades as strategy reliability increases. The REO is especially valuable where data to develop realistic probability density functions (PDFs) or statistically derived realizations are unavailable. Because they require much less field data, REO-developed strategies might not achieve as high a mathematical reliability as strategies developed using many realizations based upon real aquifer parameter PDFs. REO-developed strategies might or might not yield a better OF value in the field.

  6. A simple model of optimal population coding for sensory systems.

    PubMed

    Doi, Eizaburo; Lewicki, Michael S

    2014-08-01

    A fundamental task of a sensory system is to infer information about the environment. It has long been suggested that an important goal of the first stage of this process is to encode the raw sensory signal efficiently by reducing its redundancy in the neural representation. Some redundancy, however, would be expected because it can provide robustness to noise inherent in the system. Encoding the raw sensory signal itself is also problematic, because it contains distortion and noise. The optimal solution would be constrained further by limited biological resources. Here, we analyze a simple theoretical model that incorporates these key aspects of sensory coding, and apply it to conditions in the retina. The model specifies the optimal way to incorporate redundancy in a population of noisy neurons, while also optimally compensating for sensory distortion and noise. Importantly, it allows an arbitrary input-to-output cell ratio between sensory units (photoreceptors) and encoding units (retinal ganglion cells), providing predictions of retinal codes at different eccentricities. Compared to earlier models based on redundancy reduction, the proposed model conveys more information about the original signal. Interestingly, redundancy reduction can be near-optimal when the number of encoding units is limited, such as in the peripheral retina. We show that there exist multiple, equally optimal solutions whose receptive field structure and organization vary significantly. Among these, the one which maximizes the spatial locality of the computation, but not the sparsity of either synaptic weights or neural responses, is consistent with known basic properties of retinal receptive fields. The model further predicts that receptive field structure changes less with light adaptation at higher input-to-output cell ratios, such as in the periphery.

  7. Health benefit modelling and optimization of vehicular pollution control strategies

    NASA Astrophysics Data System (ADS)

    Sonawane, Nayan V.; Patil, Rashmi S.; Sethi, Virendra

    2012-12-01

    This study asserts that the evaluation of pollution reduction strategies should be approached on the basis of health benefits. The framework presented could be used for decision making on the basis of cost effectiveness when the strategies are applied concurrently. Several vehicular pollution control strategies have been proposed in the literature for effective management of urban air pollution. The effectiveness of these strategies has mostly been studied one at a time, on the basis of the change in pollution concentration. The adequacy and practicality of such an approach is studied in the present work, and the respective benefits of these strategies are assessed when they are implemented simultaneously. An integrated model has been developed which can be used as a tool for optimal prioritization of various pollution management strategies. The model estimates health benefits associated with specific control strategies. ISC-AERMOD View has been used to provide the cause-effect relation between control options and change in ambient air quality. BenMAP, developed by the U.S. EPA, has been applied for estimation of health and economic benefits associated with various management strategies. Valuation of health benefits has been done for the impact indicators of premature mortality, hospital admissions, and respiratory syndrome. An optimization model has been developed to maximize overall social benefits by determining optimized percentage implementations for multiple strategies. The model has been applied to the suburban region of Mumbai city for the vehicular sector. Several control scenarios have been considered, such as revised emission standards and electric, CNG, LPG, and hybrid vehicles. Reductions in concentration and the resultant health benefits for the pollutants CO, NOx, and particulate matter are estimated for the different control scenarios. Finally, the optimization model has been applied to determine the optimized percentage implementation of specific control strategies.

  8. Multiview coding mode decision with hybrid optimal stopping model.

    PubMed

    Zhao, Tiesong; Kwong, Sam; Wang, Hanli; Wang, Zhou; Pan, Zhaoqing; Kuo, C-C Jay

    2013-04-01

    In a generic decision process, optimal stopping theory aims to achieve a good tradeoff between decision performance and time consumed, with the advantages of theoretical decision-making and predictable decision performance. In this paper, optimal stopping theory is employed to develop an effective hybrid model for the mode decision problem, which aims to theoretically achieve a good tradeoff between the two interrelated measurements in mode decision: computational complexity reduction and rate-distortion degradation. The proposed hybrid model is implemented and examined with a multiview encoder. To support the model and further improve coding performance, the multiview coding mode characteristics, including predicted mode probability and estimated coding time, are jointly investigated with inter-view correlations. Exhaustive experimental results over a wide range of video resolutions reveal the efficiency and robustness of our method, with high decision accuracy, negligible computational overhead, and almost intact rate-distortion performance compared to the original encoder.

  9. A model for HIV/AIDS pandemic with optimal control

    NASA Astrophysics Data System (ADS)

    Sule, Amiru; Abdullah, Farah Aini

    2015-05-01

    Human immunodeficiency virus and acquired immune deficiency syndrome (HIV/AIDS) is pandemic. It has affected nearly 60 million people since the detection of the disease in 1981. In this paper a basic deterministic HIV/AIDS model with a mass-action incidence function is developed. Stability analysis is carried out, and the disease-free equilibrium of the basic model is found to be locally asymptotically stable whenever the threshold parameter (R0) is less than one, and unstable otherwise. The model is extended by introducing two optimal control strategies, namely CD4 counts and treatment for the infective, using optimal control theory. Numerical simulation was carried out in order to illustrate the analytic results.

  10. Optimal Observation Network Design for Model Discrimination using Information Theory and Bayesian Model Averaging

    NASA Astrophysics Data System (ADS)

    Pham, H. V.; Tsai, F. T. C.

    2014-12-01

    Groundwater systems are complex and subject to multiple interpretations and conceptualizations due to a lack of sufficient information. As a result, multiple conceptual models are often developed, and their mean predictions are preferably used to avoid the biased predictions that come from using a single conceptual model. Yet considering too many conceptual models may lead to high prediction uncertainty and may defeat the purpose of model development. In order to reduce the number of models, an optimal observation network design is proposed based on maximizing the Kullback-Leibler (KL) information to discriminate between competing models. The KL discrimination function derived by Box and Hill [1967] for one additional observation datum at a time is expanded to account for multiple independent spatiotemporal observations. The Bayesian model averaging (BMA) method is used to incorporate existing data and to quantify future observation uncertainty arising from conceptual and parametric uncertainties in the discrimination function. To consider future observation uncertainty, Monte Carlo realizations of BMA-predicted future observations are used to calculate the mean and variance of the posterior model probabilities of the competing models. The goal of the optimal observation network design is to find the number and location of observation wells and sampling rounds such that the highest posterior model probability of a model is larger than a desired probability criterion (e.g., 95%). The optimal observation network design is applied to a groundwater study in the Baton Rouge area, Louisiana, to collect new groundwater heads from USGS wells. The considered sources of uncertainty that create multiple groundwater models are the geological architecture, the boundary condition, and the fault permeability architecture. All possible design solutions are enumerated using high performance computing systems. Results show that total model variance (the sum of within-model variance and between-model
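
    For context, the posterior model probabilities that drive the discrimination criterion follow from Bayes' rule over the competing conceptual models, and the BMA predictive variance splits into exactly the two terms the abstract refers to (these are standard BMA identities, not specific to this study):

    $$P(M_k \mid D) = \frac{P(D \mid M_k)\,P(M_k)}{\sum_j P(D \mid M_j)\,P(M_j)}, \qquad \operatorname{Var}[\Delta \mid D] = \sum_k P(M_k \mid D)\,\sigma_k^2 + \sum_k P(M_k \mid D)\,(\mu_k - \bar{\mu})^2,$$

    where μ_k and σ_k² are the prediction mean and variance under model M_k, μ̄ is the BMA mean, and the two sums are the within-model and between-model variances, respectively.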

  11. Linear versus quadratic portfolio optimization model with transaction cost

    NASA Astrophysics Data System (ADS)

    Razak, Norhidayah Bt Ab; Kamil, Karmila Hanim; Elias, Siti Masitah

    2014-06-01

    Optimization models have become one of the decision-making tools in investment. Hence, it is always a big challenge for investors to select the best model that could fulfill their goal in investment with respect to risk and return. In this paper we aim to discuss and compare the portfolio allocation and performance generated by quadratic and linear portfolio optimization models, namely the Markowitz and Maximin models, respectively. The application of these models has been proven to be significant and popular among others. However, transaction cost has been debated as one of the important aspects that should be considered in portfolio reallocation, as portfolio return could be significantly reduced when transaction cost is taken into consideration. Therefore, recognizing the importance of considering transaction cost when calculating portfolio return, we formulate this paper by using data from Shariah compliant securities listed in Bursa Malaysia. It is expected that results from this paper will effectively justify the advantage of one model over another and shed some light in the quest to find the best decision-making tool in investment for individual investors.
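
    A toy sketch of the comparison on synthetic returns (all numbers illustrative): the quadratic Markowitz objective minimizes variance subject to a target net return, while the linear Maximin objective maximizes the worst-case period return, both net of a proportional transaction cost relative to current holdings.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(0)
        R = rng.normal(0.01, 0.04, size=(60, 4))   # 60 periods, 4 assets
        mu, Sigma = R.mean(axis=0), np.cov(R.T)
        w0 = np.full(4, 0.25)                      # current holdings
        tc = 0.002                                 # proportional cost rate
        cons = [{'type': 'eq', 'fun': lambda w: w.sum() - 1.0}]
        bnds = [(0.0, 1.0)] * 4

        def markowitz(w, target=0.008):
            net = w @ mu - tc * np.abs(w - w0).sum()
            # variance plus a soft penalty if the net target return is missed
            return w @ Sigma @ w + 100.0 * max(0.0, target - net) ** 2

        def maximin(w):
            cost = tc * np.abs(w - w0).sum()
            return -(np.min(R @ w) - cost)         # maximize worst-case return

        for f in (markowitz, maximin):
            res = minimize(f, w0, bounds=bnds, constraints=cons)
            print(f.__name__, np.round(res.x, 3))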

  12. Parameter optimization in differential geometry based solvation models

    PubMed Central

    Wang, Bao; Wei, G. W.

    2015-01-01

    Differential geometry (DG) based solvation models are a new class of variational implicit solvent approaches that are able to avoid unphysical solvent-solute boundary definitions and associated geometric singularities, and dynamically couple polar and non-polar interactions in a self-consistent framework. Our earlier study indicates that the DG based non-polar solvation model outperforms other methods in non-polar solvation energy predictions. However, the DG based full solvation model has not shown its superiority in solvation analysis, due to its difficulty in parametrization, which must ensure the stability of the solution of strongly coupled nonlinear Laplace-Beltrami and Poisson-Boltzmann equations. In this work, we introduce new parameter learning algorithms based on perturbation and convex optimization theories to stabilize the numerical solution and thus achieve an optimal parametrization of the DG based solvation models. An interesting feature of the present DG based solvation model is that it provides accurate solvation free energy predictions for both polar and non-polar molecules in a unified formulation. Extensive numerical experiments demonstrate that the present DG based solvation model delivers some of the most accurate predictions of the solvation free energies for a large number of molecules. PMID:26450304

  13. Optimal model-free prediction from multivariate time series

    NASA Astrophysics Data System (ADS)

    Runge, Jakob; Donner, Reik V.; Kurths, Jürgen

    2015-05-01

    Forecasting a time series from multivariate predictors constitutes a challenging problem, especially using model-free approaches. Most techniques, such as nearest-neighbor prediction, quickly suffer from the curse of dimensionality and overfitting for more than a few predictors, which has limited their application mostly to the univariate case. Therefore, selection strategies are needed that harness the available information as efficiently as possible. Since often the right combination of predictors matters, ideally all subsets of possible predictors should be tested for their predictive power, but the exponentially growing number of combinations makes such an approach computationally prohibitive. Here a prediction scheme that overcomes this strong limitation is introduced, utilizing a causal preselection step which drastically reduces the number of possible predictors to the most predictive set of causal drivers, making a globally optimal search scheme tractable. The information-theoretic optimality is derived and practical selection criteria are discussed. As demonstrated for multivariate nonlinear stochastic delay processes, the optimal scheme can even be less computationally expensive than commonly used suboptimal schemes like forward selection. The method suggests a general framework to apply the optimal model-free approach to select variables and subsequently fit a model to further improve a prediction or learn statistical dependencies. The performance of this framework is illustrated on a climatological index of El Niño Southern Oscillation.
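
    The two-stage structure can be illustrated with a deliberately simplified stand-in: predictors are preselected by a cheap relevance score (absolute correlation here, in place of the paper's causal, information-theoretic criterion), and a nearest-neighbor forecast is then run in the reduced space; data and scores are synthetic.

        import numpy as np

        def preselect(X, y, k=3):
            # score each candidate predictor by absolute correlation with y
            scores = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])]
            return np.argsort(scores)[-k:]

        def knn_forecast(Xtr, ytr, x_new, n_neighbors=5):
            d = np.linalg.norm(Xtr - x_new, axis=1)
            return ytr[np.argsort(d)[:n_neighbors]].mean()

        rng = np.random.default_rng(2)
        X = rng.standard_normal((500, 20))           # 20 candidate lagged predictors
        y = 0.8 * X[:, 3] - 0.5 * X[:, 7] + 0.1 * rng.standard_normal(500)
        sel = preselect(X, y)
        print("selected predictors:", sel)
        print("forecast:", knn_forecast(X[:400][:, sel], y[:400], X[400, sel]),
              "truth:", y[400])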

  14. Optimal model-free prediction from multivariate time series.

    PubMed

    Runge, Jakob; Donner, Reik V; Kurths, Jürgen

    2015-05-01

    Forecasting a time series from multivariate predictors constitutes a challenging problem, especially using model-free approaches. Most techniques, such as nearest-neighbor prediction, quickly suffer from the curse of dimensionality and overfitting for more than a few predictors, which has limited their application mostly to the univariate case. Therefore, selection strategies are needed that harness the available information as efficiently as possible. Since often the right combination of predictors matters, ideally all subsets of possible predictors should be tested for their predictive power, but the exponentially growing number of combinations makes such an approach computationally prohibitive. Here a prediction scheme that overcomes this strong limitation is introduced, utilizing a causal preselection step which drastically reduces the number of possible predictors to the most predictive set of causal drivers, making a globally optimal search scheme tractable. The information-theoretic optimality is derived and practical selection criteria are discussed. As demonstrated for multivariate nonlinear stochastic delay processes, the optimal scheme can even be less computationally expensive than commonly used suboptimal schemes like forward selection. The method suggests a general framework to apply the optimal model-free approach to select variables and subsequently fit a model to further improve a prediction or learn statistical dependencies. The performance of this framework is illustrated on a climatological index of El Niño Southern Oscillation.

  15. A Formal Approach to Empirical Dynamic Model Optimization and Validation

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G; Morelli, Eugene A.; Kenny, Sean P.; Giesy, Daniel P.

    2014-01-01

    A framework was developed for the optimization and validation of empirical dynamic models subject to an arbitrary set of validation criteria. The validation requirements imposed upon the model, which may involve several sets of input-output data and arbitrary specifications in time and frequency domains, are used to determine if model predictions are within admissible error limits. The parameters of the empirical model are estimated by finding the parameter realization for which the smallest of the margins of requirement compliance is as large as possible. The uncertainty in the value of this estimate is characterized by studying the set of model parameters yielding predictions that comply with all the requirements. Strategies are presented for bounding this set, studying its dependence on admissible prediction error set by the analyst, and evaluating the sensitivity of the model predictions to parameter variations. This information is instrumental in characterizing uncertainty models used for evaluating the dynamic model at operating conditions differing from those used for its identification and validation. A practical example based on the short period dynamics of the F-16 is used for illustration.

  16. Collision-free nonuniform dynamics within continuous optimal velocity models

    NASA Astrophysics Data System (ADS)

    Tordeux, Antoine; Seyfried, Armin

    2014-10-01

    Optimal velocity (OV) car-following models reproduce, with few parameters, stable stop-and-go waves propagating as in empirical data. Unfortunately, classical OV models locally oscillate, with vehicles colliding and moving backward. In order to solve this problem, the models have to be completed with additional parameters. This leads to an increase of the complexity. In this paper, a new OV model with no additional parameters is defined. For any value of the inputs, the model is intrinsically asymmetric and collision-free. This is achieved by using a first-order ordinary model with two predecessors in interaction, instead of the usual inertial delayed first-order or second-order models coupled with the predecessor. The model has stable uniform solutions as well as various stable stop-and-go patterns with bimodal distribution of the speed. As observable in real data, the modal speed values in congested states are not restricted to the free flow speed and zero. They depend on the form of the OV function. Properties of linear, concave, convex, and sigmoid speed functions are explored with no limitation due to collisions.
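
    A hedged numerical sketch of a first-order ring-road model in this spirit: each vehicle's speed is an (assumed, not the paper's) combination of an optimal-velocity function evaluated on the spacings to its two predecessors, capped so that a vehicle can never traverse its whole gap in one time step, which keeps the discrete dynamics collision-free.

        import numpy as np

        def ov(s, v0=1.0, s0=0.5):
            return v0 * np.maximum(0.0, np.tanh(s - s0))   # concave OV function

        def step(x, dt=0.05, L=50.0):
            s1 = (np.roll(x, -1) - x) % L      # spacing to first predecessor
            s2 = (np.roll(x, -2) - x) % L      # spacing to second predecessor
            v = 0.5 * (ov(s1) + ov(s2 / 2.0))  # two-predecessor interaction (assumed form)
            v = np.minimum(v, 0.9 * s1 / dt)   # hard cap: collision-free update
            return (x + dt * v) % L

        rng = np.random.default_rng(3)
        x = np.sort(rng.uniform(0.0, 50.0, 25))
        for _ in range(4000):
            x = step(x)
        gaps = (np.roll(x, -1) - x) % 50.0
        print("min gap:", gaps.min().round(3), "mean gap:", gaps.mean().round(3))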

  17. Boundary condition optimal control problem in lava flow modelling

    NASA Astrophysics Data System (ADS)

    Ismail-Zadeh, Alik; Korotkii, Alexander; Tsepelev, Igor; Kovtunov, Dmitry; Melnik, Oleg

    2016-04-01

    We study a problem of steady-state fluid flow with known thermal conditions (e.g., measured temperature and heat flux at the surface of a lava flow) at one segment of the model boundary and unknown conditions at another segment. This problem belongs to a class of boundary condition optimal control problems and can be solved by data assimilation from one boundary to another using direct and adjoint models. We derive the adjoint model analytically and test the cost function and its gradient, which minimize the misfit between the known thermal condition and its model counterpart. Using optimization algorithms, we iterate between the direct and adjoint problems and determine the missing boundary condition as well as thermal and dynamic characteristics of the fluid flow. The efficiency of two optimization algorithms - the Polak-Ribière conjugate gradient and the limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithms - has been tested with the aim of achieving rapid convergence to the solution of this inverse ill-posed problem. Numerical results show that temperature and velocity can be determined with high accuracy in the case of smooth input data. A noise imposed on the input data results in a less accurate solution, but one still acceptable below some noise level.
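
    The structure of such a boundary-control problem can be sketched on a toy 1D steady conduction analogue (the paper works with the full flow equations and an analytically derived adjoint): an unknown boundary temperature is recovered with L-BFGS-B by minimizing the misfit at the instrumented boundary.

        import numpy as np
        from scipy.optimize import minimize

        n = 50
        x = np.linspace(0.0, 1.0, n)

        def solve_direct(T_unknown, T_known=300.0):
            # exact steady 1D conduction profile between the two boundaries
            return T_unknown + (T_known - T_unknown) * x

        obs = solve_direct(900.0)[-10:]            # synthetic "measured" data

        def cost(p):
            return 0.5 * np.sum((solve_direct(p[0])[-10:] - obs) ** 2)

        res = minimize(cost, x0=[500.0], method='L-BFGS-B')
        print("recovered boundary temperature:", res.x[0].round(1))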

  18. A new adaptive hybrid electromagnetic damper: modelling, optimization, and experiment

    NASA Astrophysics Data System (ADS)

    Asadi, Ehsan; Ribeiro, Roberto; Behrad Khamesee, Mir; Khajepour, Amir

    2015-07-01

    This paper presents the development of a new electromagnetic hybrid damper which provides regenerative adaptive damping force for various applications. Recently, the introduction of electromagnetic technologies to damping systems has provided researchers with new opportunities for the realization of adaptive semi-active damping systems with the added benefit of energy recovery. In this research, a hybrid electromagnetic damper is proposed. The hybrid damper is configured to operate with viscous and electromagnetic subsystems. The viscous medium provides a bias and fail-safe damping force while the electromagnetic component adds adaptability and the capacity for regeneration to the hybrid design. The electromagnetic component is modeled and analyzed using analytical (lumped equivalent magnetic circuit) and electromagnetic finite element method (FEM) (COMSOL® software package) approaches. By implementing both modeling approaches, an optimization for the geometric aspects of the electromagnetic subsystem is obtained. Based on the proposed electromagnetic hybrid damping concept and the preliminary optimization solution, a prototype is designed and fabricated. A good agreement is observed between the experimental and FEM results for the magnetic field distribution and electromagnetic damping forces. These results validate the accuracy of the modeling approach and the preliminary optimization solution. An analytical model is also presented for the viscous damping force, and is compared with experimental results. The results show that the damper is able to produce damping coefficients of 1300 and 0-238 N s m^-1 through the viscous and electromagnetic components, respectively.

  19. Optimal inference with suboptimal models: Addiction and active Bayesian inference

    PubMed Central

    Schwartenbeck, Philipp; FitzGerald, Thomas H.B.; Mathys, Christoph; Dolan, Ray; Wurst, Friedrich; Kronbichler, Martin; Friston, Karl

    2015-01-01

    When casting behaviour as active (Bayesian) inference, optimal inference is defined with respect to an agent’s beliefs – based on its generative model of the world. This contrasts with normative accounts of choice behaviour, in which optimal actions are considered in relation to the true structure of the environment – as opposed to the agent’s beliefs about worldly states (or the task). This distinction shifts an understanding of suboptimal or pathological behaviour away from aberrant inference as such, to understanding the prior beliefs of a subject that cause them to behave less ‘optimally’ than our prior beliefs suggest they should behave. Put simply, suboptimal or pathological behaviour does not speak against understanding behaviour in terms of (Bayes optimal) inference, but rather calls for a more refined understanding of the subject’s generative model upon which their (optimal) Bayesian inference is based. Here, we discuss this fundamental distinction and its implications for understanding optimality, bounded rationality and pathological (choice) behaviour. We illustrate our argument using addictive choice behaviour in a recently described ‘limited offer’ task. Our simulations of pathological choices and addictive behaviour also generate some clear hypotheses, which we hope to pursue in ongoing empirical work. PMID:25561321

  20. Multiobjective sensitivity analysis and optimization of distributed hydrologic model MOBIDIC

    NASA Astrophysics Data System (ADS)

    Yang, J.; Castelli, F.; Chen, Y.

    2014-10-01

    Calibration of distributed hydrologic models usually involves dealing with a large number of distributed parameters and with optimization problems that have multiple, often conflicting, objectives arising in a natural fashion. This study presents a multiobjective sensitivity and optimization approach to handle these problems for the MOBIDIC (MOdello di Bilancio Idrologico DIstribuito e Continuo) distributed hydrologic model, which combines two sensitivity analysis techniques (the Morris method and the state-dependent parameter (SDP) method) with the multiobjective optimization (MOO) approach ɛ-NSGAII (Non-dominated Sorting Genetic Algorithm-II). This approach was implemented to calibrate MOBIDIC in its application to the Davidson watershed, North Carolina, with three objective functions, i.e., the standardized root mean square error (SRMSE) of logarithmic transformed discharge, the water balance index, and the mean absolute error of the logarithmic transformed flow duration curve, and its results were compared with those of a single objective optimization (SOO) with the traditional Nelder-Mead simplex algorithm used in MOBIDIC, taking the objective function as the Euclidean norm of these three objectives. Results show that (1) the two sensitivity analysis techniques are effective and efficient for determining the sensitive processes and insensitive parameters: surface runoff and evaporation are very sensitive processes for all three objective functions, while groundwater recession and soil hydraulic conductivity are not sensitive and were excluded from the optimization. (2) Both MOO and SOO lead to acceptable simulations; e.g., for MOO, the average Nash-Sutcliffe value is 0.75 in the calibration period and 0.70 in the validation period. (3) Evaporation and surface runoff show similar importance for watershed water balance, while the contribution of baseflow can be ignored. (4) Compared to SOO, which was dependent on the initial starting location, MOO provides more

  1. Advanced Nuclear Fuel Cycle Transitions: Optimization, Modeling Choices, and Disruptions

    NASA Astrophysics Data System (ADS)

    Carlsen, Robert W.

    Many nuclear fuel cycle simulators have evolved over time to help understand the nuclear industry/ecosystem at a macroscopic level. Cyclus is one of the first fuel cycle simulators to accommodate larger-scale analysis with its liberal open-source licensing and first-class Linux support. Cyclus also has features that uniquely enable investigating the effects of modeling choices on fuel cycle simulators and scenarios. This work is divided into three experiments focusing on optimization, effects of modeling choices, and fuel cycle uncertainty. Effective optimization techniques are developed for automatically determining desirable facility deployment schedules with Cyclus. A novel method for mapping optimization variables to deployment schedules is developed. This allows relationships between reactor types and scenario constraints to be represented implicitly in the variable definitions, enabling the usage of optimizers lacking constraint support. It also prevents wasting computational resources evaluating infeasible deployment schedules. Deployed power capacity over time and deployment of non-reactor facilities are also included as optimization variables. There are many fuel cycle simulators built with different combinations of modeling choices. Comparing results between them is often difficult. Cyclus' flexibility allows comparing effects of many such modeling choices. Reactor refueling cycle synchronization and inter-facility competition among other effects are compared in four cases, each using combinations of fleet or individually modeled reactors with 1-month or 3-month time steps. There are noticeable differences in results for the different cases. The largest differences occur during periods of constrained reactor fuel availability. This and similar work can help improve the quality of fuel cycle analysis generally. There is significant uncertainty associated with deploying new nuclear technologies, such as time-frames for technology availability and the cost of building advanced reactors.

  2. Stochastic optimal velocity model and its long-lived metastability.

    PubMed

    Kanai, Masahiro; Nishinari, Katsuhiro; Tokihiro, Tetsuji

    2005-09-01

    In this paper, we propose a stochastic cellular automaton model of traffic flow extending two exactly solvable stochastic models, i.e., the asymmetric simple exclusion process and the zero range process. Moreover, it is regarded as a stochastic extension of the optimal velocity model. In the fundamental diagram (flux-density diagram), our model exhibits several regions of density where more than one stable state coexists at the same density in spite of the stochastic nature of its dynamical rule. Moreover, we observe that two long-lived metastable states appear for a transitional period, and that the dynamical phase transition from a metastable state to another metastable/stable state occurs sharply and spontaneously.

  3. A mathematical model on the optimal timing of offspring desertion.

    PubMed

    Seno, Hiromi; Endo, Hiromi

    2007-06-07

    We consider offspring desertion as the optimal strategy for the deserting parent, analyzing a mathematical model for its expected reproductive success. It is shown that the optimality of offspring desertion significantly depends on the offspring's birth timing in the mating season, and on the other ecological parameters characterizing the innate nature of the animals considered. In particular, desertion is less likely to occur for offspring born in the later period of the mating season. It is also implied that offspring desertion after a period of partially biparental care would be observable only under a specific condition.

  4. CPOPT : optimization for fitting CANDECOMP/PARAFAC models.

    SciTech Connect

    Dunlavy, Daniel M.; Kolda, Tamara Gibson; Acar, Evrim

    2008-10-01

    Tensor decompositions (e.g., higher-order analogues of matrix decompositions) are powerful tools for data analysis. In particular, the CANDECOMP/PARAFAC (CP) model has proved useful in many applications such as chemometrics, signal processing, and web analysis. The problem of computing the CP decomposition is typically solved using an alternating least squares (ALS) approach. We discuss the use of optimization-based algorithms for CP, including how to efficiently compute the derivatives necessary for the optimization methods. Numerical studies highlight the positive features of our CPOPT algorithms, as compared with ALS and Gauss-Newton approaches.
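
    For reference, the ALS baseline that optimization-based CP methods are compared against can be written compactly with einsum-based normal equations; this is a generic 3-way sketch on a synthetic exact low-rank tensor, not the CPOPT algorithm itself.

        import numpy as np

        def cp_als(T, rank, n_iter=200, seed=0):
            """Alternating least squares for a rank-R CP model of a 3-way tensor."""
            rng = np.random.default_rng(seed)
            A, B, C = (rng.standard_normal((d, rank)) for d in T.shape)
            for _ in range(n_iter):
                # each update solves the linear least-squares problem for one
                # factor, holding the other two fixed (Hadamard-product Gram)
                A = np.einsum('ijk,jr,kr->ir', T, B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
                B = np.einsum('ijk,ir,kr->jr', T, A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
                C = np.einsum('ijk,ir,jr->kr', T, A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
            return A, B, C

        rng = np.random.default_rng(1)
        At, Bt, Ct = (rng.standard_normal((d, 3)) for d in (10, 8, 6))
        T = np.einsum('ir,jr,kr->ijk', At, Bt, Ct)          # exact rank-3 tensor
        A, B, C = cp_als(T, rank=3)
        err = np.linalg.norm(T - np.einsum('ir,jr,kr->ijk', A, B, C)) / np.linalg.norm(T)
        print("relative reconstruction error:", round(float(err), 6))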

  5. Dynamic stochastic optimization models for air traffic flow management

    NASA Astrophysics Data System (ADS)

    Mukherjee, Avijit

    This dissertation presents dynamic stochastic optimization models for Air Traffic Flow Management (ATFM) that enable decisions to adapt to new information on evolving capacities of National Airspace System (NAS) resources. Uncertainty is represented by a set of capacity scenarios, each depicting a particular time-varying capacity profile of NAS resources. We use the concept of a scenario tree in which multiple scenarios are possible initially. Scenarios are eliminated as possibilities in a succession of branching points, until the specific scenario that will be realized on a particular day is known. Thus the scenario tree branching provides updated information on evolving scenarios, and allows ATFM decisions to be re-addressed and revised. First, we propose a dynamic stochastic model for a single airport ground holding problem (SAGHP) that can be used for planning Ground Delay Programs (GDPs) when there is uncertainty about future airport arrival capacities. Ground delays of non-departed flights can be revised based on updated information from scenario tree branching. The problem is formulated so that a wide range of objective functions, including non-linear delay cost functions and functions that reflect equity concerns, can be optimized. Furthermore, the model improves on existing practice by ensuring efficient use of available capacity without necessarily exempting long-haul flights. Following this, we present a methodology and optimization models that can be used for decentralized decision making by individual airlines in the GDP planning process, using the solutions from the stochastic dynamic SAGHP. Airlines are allowed to perform cancellations, and re-allocate slots to remaining flights by substitutions. We also present an optimization model that can be used by the FAA, after the airlines perform cancellations and substitutions, to re-utilize vacant arrival slots that are created due to cancellations. Finally, we present three stochastic integer programming

  6. Web malware spread modelling and optimal control strategies.

    PubMed

    Liu, Wanping; Zhong, Shouming

    2017-02-10

    The popularity of the Web fuels the growth of web threats. Formulating mathematical models for accurate prediction of malicious propagation over networks is of great importance. The aim of this paper is to understand the propagation mechanisms of web malware and the impact of human intervention on the spread of malicious hyperlinks. Considering the characteristics of web malware, a new differential epidemic model which extends the traditional SIR model by adding another delitescent compartment is proposed to address the spreading behavior of malicious links over networks. The spreading threshold of the model system is calculated, and the dynamics of the model is theoretically analyzed. Moreover, the optimal control theory is employed to study malware immunization strategies, aiming to keep the total economic loss of security investment and infection loss as low as possible. The existence and uniqueness of the results concerning the optimality system are confirmed. Finally, numerical simulations show that the spread of malware links can be controlled effectively with a proper control strategy of specific parameter choice.
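
    A minimal sketch of an SIR model extended with a latent ("delitescent") compartment, with illustrative structure and rates standing in for the paper's system; the optimal control terms are omitted.

        import numpy as np
        from scipy.integrate import odeint

        def slir(y, t, beta, alpha, gamma):
            S, L, I, R = y
            dS = -beta * S * I                   # susceptible hosts receive links
            dL = beta * S * I - alpha * L        # latent: carrying, not yet spreading
            dI = alpha * L - gamma * I           # actively spreading malicious links
            dR = gamma * I                       # cleaned / immunized
            return [dS, dL, dI, dR]

        beta, alpha, gamma = 0.5, 0.2, 0.1
        t = np.linspace(0.0, 100.0, 1001)
        sol = odeint(slir, [0.99, 0.0, 0.01, 0.0], t, args=(beta, alpha, gamma))
        print("peak infected fraction:", sol[:, 2].max().round(3))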

  7. Web malware spread modelling and optimal control strategies

    PubMed Central

    Liu, Wanping; Zhong, Shouming

    2017-01-01

    The popularity of the Web fuels the growth of web threats. Formulating mathematical models for accurate prediction of malicious propagation over networks is of great importance. The aim of this paper is to understand the propagation mechanisms of web malware and the impact of human intervention on the spread of malicious hyperlinks. Considering the characteristics of web malware, a new differential epidemic model which extends the traditional SIR model by adding another delitescent compartment is proposed to address the spreading behavior of malicious links over networks. The spreading threshold of the model system is calculated, and the dynamics of the model is theoretically analyzed. Moreover, the optimal control theory is employed to study malware immunization strategies, aiming to keep the total economic loss of security investment and infection loss as low as possible. The existence and uniqueness of the results concerning the optimality system are confirmed. Finally, numerical simulations show that the spread of malware links can be controlled effectively with a proper control strategy of specific parameter choice. PMID:28186203

  8. Optimization of wind farm performance using low-order models

    NASA Astrophysics Data System (ADS)

    Dabiri, John; Brownstein, Ian

    2015-11-01

    A low-order model that captures the dominant flow behaviors in a vertical-axis wind turbine (VAWT) array is used to maximize the power output of wind farms utilizing VAWTs. The leaky Rankine body model (LRB) was shown by Araya et al. (JRSE 2014) to predict the ranking of individual turbine performances in an array to within measurement uncertainty as compared to field data collected from full-scale VAWTs. Further, this model is able to predict array performance with significantly less computational expense than higher-fidelity numerical simulations of the flow, making it ideal for use in optimization of wind farm performance. This presentation will explore the ability of the LRB model to rank the relative power output of different wind turbine array configurations as well as the ranking of individual array performance over a variety of wind directions, using various complex configurations tested in the field and simpler configurations tested in a wind tunnel. Results will be presented in which the model is used to determine array fitness in an evolutionary algorithm seeking to find optimal array configurations given a number of turbines, area of available land, and site wind direction profile. Comparison with field measurements will be presented.

  9. Web malware spread modelling and optimal control strategies

    NASA Astrophysics Data System (ADS)

    Liu, Wanping; Zhong, Shouming

    2017-02-01

    The popularity of the Web fuels the growth of web threats. Formulating mathematical models for accurate prediction of malicious propagation over networks is of great importance. The aim of this paper is to understand the propagation mechanisms of web malware and the impact of human intervention on the spread of malicious hyperlinks. Considering the characteristics of web malware, a new differential epidemic model which extends the traditional SIR model by adding another delitescent compartment is proposed to address the spreading behavior of malicious links over networks. The spreading threshold of the model system is calculated, and the dynamics of the model is theoretically analyzed. Moreover, the optimal control theory is employed to study malware immunization strategies, aiming to keep the total economic loss of security investment and infection loss as low as possible. The existence and uniqueness of the results concerning the optimality system are confirmed. Finally, numerical simulations show that the spread of malware links can be controlled effectively with a proper control strategy of specific parameter choice.

  10. Roll levelling semi-analytical model for process optimization

    NASA Astrophysics Data System (ADS)

    Silvestre, E.; Garcia, D.; Galdos, L.; Saenz de Argandoña, E.; Mendiguren, J.

    2016-08-01

    Roll levelling is a primary manufacturing process used to remove residual stresses and imperfections from metal strips in order to make them suitable for subsequent forming operations. In recent years the importance of this process has become evident with the advent of Ultra High Strength Steels with strengths above 900 MPa. The optimal setting of the machine as well as a robust machine design has become critical for the correct processing of these materials. Finite Element Method (FEM) analysis is the widely used technique for both aspects. However, in this case, the FEM simulation times are above the admissible ones for both machine development and process optimization. In the present work, a semi-analytical model based on a discrete bending theory is presented. This model is able to calculate the critical levelling parameters, i.e., force, plastification rate, and residual stresses, in a few seconds. First the semi-analytical model is presented. Next, some experimental industrial cases are analyzed by both the semi-analytical model and the conventional FEM model. Finally, results and computation times of both methods are compared.

  11. High Resolution Beam Modeling and Optimization with IMPACT

    NASA Astrophysics Data System (ADS)

    Qiang, Ji

    2017-01-01

    The LCLS-II, a new BES x-ray FEL facility at SLAC, is being designed using the IMPACT simulation code which includes a full model for the electron beam transport with 3-D space charge effects as well as IntraBeam Scattering and Coherent Synchrotron Radiation. A 22 parameter optimization is being used to find injector and linac configurations that achieve the design specifications. The detailed physics models in IMPACT are being benchmarked against experiments at LCLS. This work was done in collaboration with SLAC LCLS-II design team and supported by the DOE under contract No. DE-AC02-05CH11231.

  12. Analysis of Sting Balance Calibration Data Using Optimized Regression Models

    NASA Technical Reports Server (NTRS)

    Ulbrich, Norbert; Bader, Jon B.

    2009-01-01

    Calibration data of a wind tunnel sting balance was processed using a search algorithm that identifies an optimized regression model for the data analysis. The selected sting balance had two moment gages that were mounted forward and aft of the balance moment center. The difference and the sum of the two gage outputs were fitted in the least squares sense using the normal force and the pitching moment at the balance moment center as independent variables. The regression model search algorithm predicted that the difference of the gage outputs should be modeled using the intercept and the normal force. The sum of the two gage outputs, on the other hand, should be modeled using the intercept, the pitching moment, and the square of the pitching moment. Equations of the deflection of a cantilever beam are used to show that the search algorithm's two recommended math models can also be obtained after performing a rigorous theoretical analysis of the deflection of the sting balance under load. The analysis of the sting balance calibration data set is a rare example of a situation when regression models of balance calibration data can directly be derived from first principles of physics and engineering. In addition, it is interesting to see that the search algorithm recommended the same regression models for the data analysis using only a set of statistical quality metrics.
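
    On synthetic calibration data, the two recommended regression models can be fitted directly with least squares; the coefficients below are invented for illustration.

        import numpy as np

        # gage-output difference ~ intercept + normal force N
        # gage-output sum        ~ intercept + pitching moment M + M^2
        rng = np.random.default_rng(4)
        N = rng.uniform(-100.0, 100.0, 200)      # normal force
        M = rng.uniform(-50.0, 50.0, 200)        # pitching moment
        diff = 0.02 * N + 0.1 + rng.normal(0.0, 1e-3, 200)
        summ = 0.05 * M + 2e-4 * M**2 + 0.3 + rng.normal(0.0, 1e-3, 200)

        X_diff = np.column_stack([np.ones_like(N), N])
        X_sum = np.column_stack([np.ones_like(M), M, M**2])
        c_diff, *_ = np.linalg.lstsq(X_diff, diff, rcond=None)
        c_sum, *_ = np.linalg.lstsq(X_sum, summ, rcond=None)
        print("difference model coefficients:", np.round(c_diff, 4))
        print("sum model coefficients:", np.round(c_sum, 5))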

  13. Optimizing the lithography model calibration algorithms for NTD process

    NASA Astrophysics Data System (ADS)

    Hu, C. M.; Lo, Fred; Yang, Elvis; Yang, T. H.; Chen, K. C.

    2016-03-01

    As patterns shrink to the resolution limits of up-to-date ArF immersion lithography technology, the negative tone development (NTD) process has been an increasingly adopted technique to get superior imaging quality through employing bright-field (BF) masks to print the critical dark-field (DF) metal and contact layers. However, from the fundamental materials and process interaction perspectives, several key differences inherently exist between the NTD process and the traditional positive tone development (PTD) system, especially the horizontal/vertical resist shrinkage and developer depletion effects; hence the traditional resist parameters developed for the typical PTD process no longer fit well in NTD process modeling. In order to cope with the inherent differences between PTD and NTD processes and accordingly improve NTD modeling accuracy, several NTD models with different combinations of complementary terms were built to account for the NTD-specific resist shrinkage, developer depletion and diffusion, and wafer CD jump induced by sub-resolution assist feature (SRAF) effects. Each new complementary NTD term has its definite aim to deal with the NTD-specific phenomena. In this study, the modeling accuracy is compared among different models for the specific patterning characteristics on various feature types. Multiple complementary NTD terms were finally proposed to address all the NTD-specific behaviors simultaneously and further optimize the NTD modeling accuracy. The new algorithm of multiple complementary NTD terms tested on our critical dark-field layers demonstrates consistent model accuracy improvement for both calibration and verification.

  14. Optimal symmetric flight with an intermediate vehicle model

    NASA Technical Reports Server (NTRS)

    Menon, P. K. A.; Kelley, H. J.; Cliff, E. M.

    1983-01-01

    Optimal flight in the vertical plane with a vehicle model intermediate in complexity between the point-mass and energy models is studied. Flight-path angle takes on the role of a control variable. Range-open problems feature subarcs of vertical flight and singular subarcs. The class of altitude-speed-range-time optimization problems with fuel expenditure unspecified is investigated and some interesting phenomena uncovered. The maximum-lift-to-drag glide appears as part of the family, final-time-open, with appropriate initial and terminal transient exceeding level-flight drag, some members exhibiting oscillations. Oscillatory paths generally fail the Jacobi test for durations exceeding a period and furnish a minimum only for short-duration problems.

  15. Optimized GPU simulation of continuous-spin glass models

    NASA Astrophysics Data System (ADS)

    Yavors'kii, T.; Weigel, M.

    2012-08-01

    We develop a highly optimized code for simulating the Edwards-Anderson Heisenberg model on graphics processing units (GPUs). Using a number of computational tricks such as tiling, data compression and appropriate memory layouts, the simulation code combining over-relaxation, heat bath and parallel tempering moves achieves a peak performance of 0.29 ns per spin update on realistic system sizes, corresponding to a more than 150-fold speed-up over a serial CPU reference implementation. The optimized implementation is used to study the spin-glass transition in a random external magnetic field to probe the existence of a de Almeida-Thouless line in the model, for which we give benchmark results.

  16. Combining multiobjective optimization and Bayesian model averaging to calibrate forecast ensembles of soil hydraulic models

    NASA Astrophysics Data System (ADS)

    WöHling, Thomas; Vrugt, Jasper A.

    2008-12-01

    Most studies in vadose zone hydrology use a single conceptual model for predictive inference and analysis. Focusing on the outcome of a single model is prone to statistical bias and underestimation of uncertainty. In this study, we combine multiobjective optimization and Bayesian model averaging (BMA) to generate forecast ensembles of soil hydraulic models. To illustrate our method, we use observed tensiometric pressure head data at three different depths in a layered vadose zone of volcanic origin in New Zealand. A set of seven different soil hydraulic models is calibrated using a multiobjective formulation with three different objective functions that each measure the mismatch between observed and predicted soil water pressure head at one specific depth. The Pareto solution space corresponding to these three objectives is estimated with AMALGAM and used to generate four different model ensembles. These ensembles are postprocessed with BMA and used for predictive analysis and uncertainty estimation. Our most important conclusions for the vadose zone under consideration are (1) the mean BMA forecast exhibits similar predictive capabilities as the best individual performing soil hydraulic model, (2) the size of the BMA uncertainty ranges increase with increasing depth and dryness in the soil profile, (3) the best performing ensemble corresponds to the compromise (or balanced) solution of the three-objective Pareto surface, and (4) the combined multiobjective optimization and BMA framework proposed in this paper is very useful to generate forecast ensembles of soil hydraulic models.
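
    The BMA postprocessing step reduces, for given posterior weights, to a mixture calculation; a minimal sketch with invented numbers shows the forecast mean and the total variance as the sum of within-model and between-model parts.

        import numpy as np

        w = np.array([0.5, 0.3, 0.2])                    # posterior model weights
        f = np.array([-45.0, -52.0, -48.0])              # member pressure-head forecasts
        s2 = np.array([4.0, 6.0, 5.0])                   # within-model variances

        mean = w @ f
        var = w @ s2 + w @ (f - mean) ** 2               # within- + between-model
        print(f"BMA mean = {mean:.1f}, BMA variance = {var:.2f}")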

  17. U.S. Army Delayed Entry Program Optimization Model

    DTIC Science & Technology

    2004-08-01

    changing policy. Chapter 5 addresses the issue of optimizing the EDEP to include: objectives and metrics for a model, alternative solution methods, and...personnel surplus to flow into the training bases. Accessions and Recruiting Command extensively use the DEP for smoothing the seasonal recruiting...changes or other unpredictable to meet school requirements events (ex. Sept. 11) 4 Equity problem related to differences 5. Relief from direct

  18. Modeling Microinverters and DC Power Optimizers in PVWatts

    SciTech Connect

    MacAlpine, S.; Deline, C.

    2015-02-01

    Module-level distributed power electronics including microinverters and DC power optimizers are increasingly popular in residential and commercial PV systems. Consumers are realizing their potential to increase design flexibility, monitor system performance, and improve energy capture. It is becoming increasingly important to accurately model PV systems employing these devices. This document summarizes existing published documents to provide uniform, impartial recommendations for how the performance of distributed power electronics can be reflected in NREL's PVWatts calculator (http://pvwatts.nrel.gov/).

  19. Mathematical model of the metal mould surface temperature optimization

    SciTech Connect

    Mlynek, Jaroslav; Knobloch, Roman; Srb, Radek

    2015-11-30

    The article is focused on the problem of generating a uniform temperature field on the inner surface of shell metal moulds. Such moulds are used e.g. in the automotive industry for artificial leather production. To produce artificial leather with a uniform surface structure and colour shade, the temperature on the inner surface of the mould has to be as homogeneous as possible. The heating of the mould is realized by infrared heaters located above the outer mould surface. The conceived mathematical model allows us to optimize the locations of the infrared heaters over the mould, so that an approximately uniform heat radiation intensity is generated. A version of the differential evolution algorithm, programmed in the Matlab development environment, was created by the authors for the optimization process. For temperature calculations the software system ANSYS was used. A practical example of the optimization of heater locations and the calculation of the temperature of the mould is included at the end of the article.

  20. Mathematical model of the metal mould surface temperature optimization

    NASA Astrophysics Data System (ADS)

    Mlynek, Jaroslav; Knobloch, Roman; Srb, Radek

    2015-11-01

    The article is focused on the problem of generating a uniform temperature field on the inner surface of shell metal moulds. Such moulds are used e.g. in the automotive industry for artificial leather production. To produce artificial leather with a uniform surface structure and colour shade, the temperature on the inner surface of the mould has to be as homogeneous as possible. The heating of the mould is realized by infrared heaters located above the outer mould surface. The conceived mathematical model allows us to optimize the locations of the infrared heaters over the mould, so that an approximately uniform heat radiation intensity is generated. A version of the differential evolution algorithm, programmed in the Matlab development environment, was created by the authors for the optimization process. For temperature calculations the software system ANSYS was used. A practical example of the optimization of heater locations and the calculation of the temperature of the mould is included at the end of the article.
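
    A toy analogue of the heater-placement optimization (the paper couples its own Matlab differential evolution implementation with ANSYS temperature computations): idealized inverse-square point heaters are placed above a 1D surface so that the radiation intensity is as uniform as possible.

        import numpy as np
        from scipy.optimize import differential_evolution

        surface = np.linspace(0.0, 1.0, 200)
        h = 0.15                                   # heater height above the surface

        def nonuniformity(xs):
            # total intensity from idealized inverse-square point sources
            I = sum(1.0 / ((surface - x) ** 2 + h ** 2) for x in xs)
            return I.std() / I.mean()              # relative spread of intensity

        res = differential_evolution(nonuniformity, bounds=[(0.0, 1.0)] * 4, seed=5)
        print("heater positions:", np.round(np.sort(res.x), 3),
              "residual nonuniformity:", round(res.fun, 4))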

  1. Optimization model of vaccination strategy for dengue transmission

    NASA Astrophysics Data System (ADS)

    Widayani, H.; Kallista, M.; Nuraini, N.; Sari, M. Y.

    2014-02-01

    Dengue fever is an emerging tropical and subtropical disease caused by dengue virus infection. Vaccination should be done as a prevention of epidemics in a population. The host-vector model is modified to consider a vaccination factor to prevent the occurrence of epidemic dengue in a population. An optimal vaccination strategy using a non-linear objective function is proposed. Genetic algorithm programming techniques are combined with the fourth-order Runge-Kutta method to construct the optimal vaccination. In this paper, the appropriate vaccination strategy, using the optimal minimum cost function which can reduce the size of the epidemic, is analyzed. The numerical simulation for some specific cases of the vaccination strategy is shown.

  2. Optimization in generalized linear models: A case study

    NASA Astrophysics Data System (ADS)

    Silva, Eliana Costa e.; Correia, Aldina; Lopes, Isabel Cristina

    2016-06-01

    The maximum likelihood method is usually chosen to estimate the regression parameters of Generalized Linear Models (GLM) and also for hypothesis testing and goodness of fit tests. The classical method for estimating GLM parameters is Fisher scoring. In this work we propose to compute the estimates of the parameters with two alternative methods: a derivative-based optimization method, namely the BFGS method, which is one of the most popular quasi-Newton algorithms, and the PSwarm derivative-free optimization method that combines features of a pattern search optimization method with a global Particle Swarm scheme. As a case study we use a dataset of biological parameters (phytoplankton) and chemical and environmental parameters of the water column of a Portuguese reservoir. The results show that, for this dataset, the BFGS and PSwarm methods provided a better fit than the Fisher scoring method, and can be good alternatives for finding the estimates for the parameters of a GLM.
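
    A compact sketch of the derivative-based alternative on synthetic data: a logistic-regression GLM fitted by maximizing the log-likelihood with BFGS via scipy (the case study instead uses reservoir phytoplankton and water-column data).

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(6)
        X = np.column_stack([np.ones(300), rng.standard_normal((300, 2))])
        beta_true = np.array([-0.5, 1.2, -0.8])
        y = rng.random(300) < 1.0 / (1.0 + np.exp(-X @ beta_true))

        def negloglik(beta):
            # negative Bernoulli log-likelihood with a logit link
            eta = X @ beta
            return -np.sum(y * eta - np.log1p(np.exp(eta)))

        res = minimize(negloglik, x0=np.zeros(3), method='BFGS')
        print("BFGS estimates:", np.round(res.x, 3))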

  3. Optimizing multi-pinhole SPECT geometries using an analytical model

    NASA Astrophysics Data System (ADS)

    Rentmeester, M. C. M.; van der Have, F.; Beekman, F. J.

    2007-05-01

    State-of-the-art multi-pinhole SPECT devices allow for sub-mm resolution imaging of radio-molecule distributions in small laboratory animals. The optimization of multi-pinhole and detector geometries using simulations based on ray-tracing or Monte Carlo algorithms is time-consuming, particularly because many system parameters need to be varied. As an efficient alternative we develop a continuous analytical model of a pinhole SPECT system with a stationary detector set-up, which we apply to focused imaging of a mouse. The model assumes that the multi-pinhole collimator and the detector both have the shape of a spherical layer, and uses analytical expressions for effective pinhole diameters, sensitivity and spatial resolution. For fixed fields-of-view, a pinhole-diameter adapting feedback loop allows for the comparison of the system resolution of different systems at equal system sensitivity, and vice versa. The model predicts that (i) for optimal resolution or sensitivity the collimator layer with pinholes should be placed as closely as possible around the animal given a fixed detector layer, (ii) with high-resolution detectors a resolution improvement up to 31% can be achieved compared to optimized systems, (iii) high-resolution detectors can be placed close to the collimator without significant resolution losses, (iv) interestingly, systems with a physical pinhole diameter of 0 mm can have an excellent resolution when high-resolution detectors are used.

  4. Influence of model errors in optimal sensor placement

    NASA Astrophysics Data System (ADS)

    Vincenzi, Loris; Simonini, Laura

    2017-02-01

    The paper investigates the role of model errors and parametric uncertainties in optimal or near optimal sensor placements for structural health monitoring (SHM) and modal testing. The near optimal set of measurement locations is obtained by the Information Entropy theory; the results of the placement process considerably depend on the so-called covariance matrix of prediction error as well as on the definition of the correlation function. A constant and an exponential correlation function depending on the distance between sensors are firstly assumed; then a proposal depending on both distance and modal vectors is presented. With reference to a simple case study, the effect of model uncertainties on results is described, and the reliability and the robustness of the proposed correlation function in the case of model errors are tested with reference to 2D and 3D benchmark case studies. A measure of the quality of the obtained sensor configuration is considered through the use of independent assessment criteria. In conclusion, the results obtained by applying the proposed procedure on a real 5-span steel footbridge are described. The proposed method also allows better estimation of higher modes when the number of sensors is greater than the number of modes of interest. In addition, the results show a smaller variation in the sensor position when uncertainties occur.
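
    A common greedy surrogate for such near-optimal placement (a sketch, not the paper's information-entropy procedure) adds, one at a time, the candidate location that most increases the determinant of the Fisher information of the retained mode shapes.

        import numpy as np

        def greedy_placement(Phi, n_sensors):
            chosen = []
            for _ in range(n_sensors):
                best_j, best_det = None, -np.inf
                for j in range(Phi.shape[0]):
                    if j in chosen:
                        continue
                    rows = Phi[chosen + [j], :]
                    # ridge keeps the log-determinant finite before the
                    # information matrix becomes full rank
                    d = np.linalg.slogdet(rows.T @ rows +
                                          1e-9 * np.eye(Phi.shape[1]))[1]
                    if d > best_det:
                        best_j, best_det = j, d
                chosen.append(best_j)
            return chosen

        rng = np.random.default_rng(8)
        Phi = rng.standard_normal((40, 4))     # 40 candidate locations, 4 mode shapes
        print("sensor locations:", greedy_placement(Phi, 6))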

  5. Model-based optimization of tapered free-electron lasers

    NASA Astrophysics Data System (ADS)

    Mak, Alan; Curbis, Francesca; Werin, Sverker

    2015-04-01

    The energy extraction efficiency is a figure of merit for a free-electron laser (FEL). It can be enhanced by the technique of undulator tapering, which enables the sustained growth of radiation power beyond the initial saturation point. In the development of a single-pass x-ray FEL, it is important to exploit the full potential of this technique and optimize the taper profile aw(z). Our approach to the optimization is based on the theoretical model by Kroll, Morton, and Rosenbluth, whereby the taper profile aw(z) is not a predetermined function (such as linear or exponential) but is determined by the physics of a resonant particle. For further enhancement of the energy extraction efficiency, we propose a modification to the model, which involves manipulations of the resonant particle's phase. Using the numerical simulation code GENESIS, we apply our model-based optimization methods to a case of the future FEL at the MAX IV Laboratory (Lund, Sweden), as well as a case of the LCLS-II facility (Stanford, USA).

  6. Optimization and Performance Modeling of Stencil Computations on Modern Microprocessors

    SciTech Connect

    Datta, Kaushik; Kamil, Shoaib; Williams, Samuel; Oliker, Leonid; Shalf, John; Yelick, Katherine

    2007-06-01

    Stencil-based kernels constitute the core of many important scientific applications on block-structured grids. Unfortunately, these codes achieve a low fraction of peak performance, due primarily to the disparity between processor and main memory speeds. In this paper, we explore the impact of trends in memory subsystems on a variety of stencil optimization techniques and develop performance models to analytically guide our optimizations. Our work targets cache reuse methodologies across single and multiple stencil sweeps, examining cache-aware algorithms as well as cache-oblivious techniques on the Intel Itanium2, AMD Opteron, and IBM Power5. Additionally, we consider stencil computations on the heterogeneous multicore design of the Cell processor, a machine with an explicitly managed memory hierarchy. Overall our work represents one of the most extensive analyses of stencil optimizations and performance modeling to date. Results demonstrate that recent trends in memory system organization have reduced the efficacy of traditional cache-blocking optimizations. We also show that a cache-aware implementation is significantly faster than a cache-oblivious approach, while the explicitly managed memory on Cell enables the highest overall efficiency: Cell attains 88% of algorithmic peak while the best competing cache-based processor achieves only 54% of algorithmic peak performance.

  7. Bayesian image reconstruction - The pixon and optimal image modeling

    NASA Technical Reports Server (NTRS)

    Pina, R. K.; Puetter, R. C.

    1993-01-01

    In this paper we describe the optimal image model, maximum residual likelihood method (OptMRL) for image reconstruction. OptMRL is a Bayesian image reconstruction technique for removing point-spread function blurring. OptMRL uses both a goodness-of-fit criterion (GOF) and an 'image prior', i.e., a function which quantifies the a priori probability of the image. Unlike standard maximum entropy methods, which typically reconstruct the image on the data pixel grid, OptMRL varies the image model in order to find the optimal functional basis with which to represent the image. We show how an optimal basis for image representation can be selected and in doing so, develop the concept of the 'pixon' which is a generalized image cell from which this basis is constructed. By allowing both the image and the image representation to be variable, the OptMRL method greatly increases the volume of solution space over which the image is optimized. Hence the likelihood of the final reconstructed image is greatly increased. For the goodness-of-fit criterion, OptMRL uses the maximum residual likelihood probability distribution introduced previously by Pina and Puetter (1992). This GOF probability distribution, which is based on the spatial autocorrelation of the residuals, has the advantage that it ensures spatially uncorrelated image reconstruction residuals.

  8. Bayesian image reconstruction - The pixon and optimal image modeling

    NASA Astrophysics Data System (ADS)

    Pina, R. K.; Puetter, R. C.

    1993-06-01

    In this paper we describe the optimal image model, maximum residual likelihood method (OptMRL) for image reconstruction. OptMRL is a Bayesian image reconstruction technique for removing point-spread function blurring. OptMRL uses both a goodness-of-fit criterion (GOF) and an 'image prior', i.e., a function which quantifies the a priori probability of the image. Unlike standard maximum entropy methods, which typically reconstruct the image on the data pixel grid, OptMRL varies the image model in order to find the optimal functional basis with which to represent the image. We show how an optimal basis for image representation can be selected and in doing so, develop the concept of the 'pixon' which is a generalized image cell from which this basis is constructed. By allowing both the image and the image representation to be variable, the OptMRL method greatly increases the volume of solution space over which the image is optimized. Hence the likelihood of the final reconstructed image is greatly increased. For the goodness-of-fit criterion, OptMRL uses the maximum residual likelihood probability distribution introduced previously by Pina and Puetter (1992). This GOF probability distribution, which is based on the spatial autocorrelation of the residuals, has the advantage that it ensures spatially uncorrelated image reconstruction residuals.

  9. An optimization model for the US Air-Traffic System

    NASA Technical Reports Server (NTRS)

    Mulvey, J. M.

    1986-01-01

    A systematic approach for monitoring U.S. air traffic was developed in the context of system-wide planning and control. Towards this end, a network optimization model with nonlinear objectives was chosen as the central element in the planning/control system. The network representation was selected because: (1) it provides a comprehensive structure for depicting essential aspects of the air traffic system, (2) it can be solved efficiently for large scale problems, and (3) the design can be easily communicated to non-technical users through computer graphics. Briefly, the network planning models consider the flow of traffic through a graph as the basic structure. Nodes depict locations and time periods for either individual planes or for aggregated groups of airplanes. Arcs define variables as actual airplanes flying through space or as delays across time periods. As such, a special case of the network can be used to model the so-called flow control problem. Due to the large number of interacting variables and the difficulty in subdividing the problem into relatively independent subproblems, an integrated model was designed which will depict the entire high level (above 29000 feet) jet route system for the 48 contiguous states in the U.S. As a first step in demonstrating the concept's feasibility a nonlinear risk/cost model was developed for the Indianapolis Airspace. The nonlinear network program --NLPNETG-- was employed in solving the resulting test cases. This optimization program uses the Truncated-Newton method (quadratic approximation) for determining the search direction at each iteration in the nonlinear algorithm. It was shown that aircraft could be re-routed in an optimal fashion whenever traffic congestion increased beyond an acceptable level, as measured by the nonlinear risk function.

  10. Optimization of atmospheric transport models on HPC platforms

    NASA Astrophysics Data System (ADS)

    de la Cruz, Raúl; Folch, Arnau; Farré, Pau; Cabezas, Javier; Navarro, Nacho; Cela, José María

    2016-12-01

    The performance and scalability of atmospheric transport models on high performance computing environments is often far from optimal for multiple reasons including, for example, sequential input and output, synchronous communications, work unbalance, memory access latency or lack of task overlapping. We investigate how different software optimizations and porting to non general-purpose hardware architectures improve code scalability and execution times considering, as an example, the FALL3D volcanic ash transport model. To this purpose, we implement the FALL3D model equations in the WARIS framework, a software designed from scratch to solve in a parallel and efficient way different geoscience problems on a wide variety of architectures. In addition, we consider further improvements in WARIS such as hybrid MPI-OMP parallelization, spatial blocking, auto-tuning and thread affinity. Considering all these aspects together, the FALL3D execution times for a realistic test case running on general-purpose cluster architectures (Intel Sandy Bridge) decrease by a factor between 7 and 40 depending on the grid resolution. Finally, we port the application to Intel Xeon Phi (MIC) and NVIDIA GPUs (CUDA) accelerator-based architectures and compare performance, cost and power consumption on all the architectures. Implications on time-constrained operational model configurations are discussed.

  11. Dynamic imaging model and parameter optimization for a star tracker.

    PubMed

    Yan, Jinyun; Jiang, Jie; Zhang, Guangjun

    2016-03-21

    Under dynamic conditions, star spots move across the image plane of a star tracker and form a smeared star image. This smearing effect increases errors in star position estimation and degrades attitude accuracy. First, an analytical energy distribution model of a smeared star spot is established based on a line segment spread function because the dynamic imaging process of a star tracker is equivalent to the static imaging process of linear light sources. The proposed model, which has a clear physical meaning, explicitly reflects the key parameters of the imaging process, including incident flux, exposure time, velocity of a star spot in an image plane, and Gaussian radius. Furthermore, an analytical expression of the centroiding error of the smeared star spot is derived using the proposed model. An accurate and comprehensive evaluation of centroiding accuracy is obtained based on the expression. Moreover, analytical solutions of the optimal parameters are derived to achieve the best performance in centroid estimation. Finally, we perform numerical simulations and a night sky experiment to validate the correctness of the dynamic imaging model, the centroiding error expression, and the optimal parameters.

  12. Optimal Treatment Strategy for a Tumor Model under Immune Suppression

    PubMed Central

    Kim, Kwang Su; Cho, Giphil; Jung, Il Hyo

    2014-01-01

    We propose a mathematical model describing tumor-immune interactions under immune suppression. Evidence now indicates that immune suppression related to cancer contributes to its progression. A mathematical model of tumor-immune interactions would provide a new methodology for more sophisticated treatment options for cancer. To this end, we have developed a system of 11 ordinary differential equations describing the movement, interaction, and activation of NK cells, CD8+ T cells, CD4+ T cells, regulatory T cells, and dendritic cells in the presence of tumor cells and cytokines. In addition, we apply two control therapies, immunotherapy and chemotherapy, to the model in order to control the growth of the tumor. Using optimal control theory and numerical simulations, we obtain appropriate treatment strategies according to the ratio of the costs of the two therapies, which suggest an optimal timing of each administration for the two types of models, without and with immunosuppressive effects. These results indicate that immune suppression can influence treatment strategies for cancer. PMID:25140193

  13. Parallelism and optimization of numerical ocean forecasting model

    NASA Astrophysics Data System (ADS)

    Xu, Jianliang; Pang, Renbo; Teng, Junhua; Liang, Hongtao; Yang, Dandan

    2016-10-01

    According to the characteristics of the Chinese marginal seas, the Marginal Sea Model of China (MSMC) has been developed independently in China. Because the model requires long simulation times, parallelizing MSMC became necessary to improve its performance as a routine forecasting model. However, some methods used in MSMC, such as the Successive Over-Relaxation (SOR) algorithm, are not directly suitable for parallelism. In this paper, methods are developed to parallelize the SOR algorithm in the following steps. First, based on a 3D computing grid system, an automatic data partition method is implemented to dynamically divide the computing grid according to the available computing resources. Next, based on the characteristics of the numerical forecasting model, a parallel method is designed to solve the parallelization problem of the SOR algorithm. Lastly, a communication optimization method is provided to hide the cost of communication. In this method, the non-blocking communication of the Message Passing Interface (MPI) is used to implement the parallelism of MSMC with its complex physical equations, and communication is overlapped with computation to improve the performance of the parallel MSMC. The experiments show that the parallel MSMC runs 97.2 times faster than the serial MSMC, and the root mean square error between the parallel MSMC and the serial MSMC is less than 0.01 for a 30-day simulation (172800 time steps), which meets the requirements of timeliness and accuracy for numerical ocean forecasting products.
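
    The abstract does not spell out how the SOR dependency chain is broken; a classic device for that, shown here purely for illustration, is red-black ordering, in which all points of one parity are updated simultaneously using only points of the other parity. The sketch below solves a 2D Poisson problem this way; whether MSMC uses this particular scheme is an assumption.

      import numpy as np

      n, omega = 64, 1.8                              # grid size, relaxation factor
      u = np.zeros((n, n))
      f = np.ones((n, n))                             # right-hand side (assumed)
      I, J = np.meshgrid(np.arange(n), np.arange(n), indexing='ij')

      for sweep in range(200):
          for color in (0, 1):                        # red points, then black points
              m = ((I + J) % 2 == color)
              m[0, :] = m[-1, :] = m[:, 0] = m[:, -1] = False   # keep boundary fixed
              # Gauss-Seidel value from the (already updated) opposite color
              gs = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                           + np.roll(u, 1, 1) + np.roll(u, -1, 1) - f)
              u[m] = (1 - omega) * u[m] + omega * gs[m]   # SOR update, one color at a time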

  14. Optimizing Muscle Parameters in Musculoskeletal Modeling Using Monte Carlo Simulations

    NASA Technical Reports Server (NTRS)

    Hanson, Andrea; Reed, Erik; Cavanagh, Peter

    2011-01-01

    Astronauts assigned to long-duration missions experience bone and muscle atrophy in the lower limbs. Musculoskeletal simulation software has become a useful tool for modeling joint and muscle forces during human activity in reduced gravity, as access to direct experimentation is limited. Knowledge of muscle and joint loads can better inform the design of exercise protocols and exercise countermeasure equipment. In this study, the LifeModeler(TM) (San Clemente, CA) biomechanics simulation software was used to model a squat exercise. The initial model using default parameters yielded physiologically reasonable hip-joint forces. However, no activation was predicted in some large muscles such as the rectus femoris, which have been shown to be active in 1-g performance of the activity. Parametric testing was conducted using Monte Carlo methods and combinatorial reduction to find a muscle parameter set that more closely matched physiologically observed activation patterns during the squat exercise. Peak hip-joint force using the default parameters was 2.96 times body weight (BW) and increased to 3.21 BW in an optimized, feature-selected test case. The rectus femoris was predicted to peak at 60.1% activation following muscle recruitment optimization, compared to 19.2% activation with default parameters. These results indicate the critical role that muscle parameters play in joint force estimation and the need for exploration of the solution space to achieve physiologically realistic muscle activation.
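
    Since LifeModeler is closed commercial software, the Monte Carlo loop can only be sketched around a stand-in objective. Below, a hypothetical simulate() returns a muscle activation for a sampled parameter set, and random sampling keeps the set that best matches a target activation; the parameter range, the target value, and simulate() itself are invented for illustration.

      import numpy as np

      rng = np.random.default_rng(0)
      target = {'rectus_femoris': 0.60}               # desired peak activation (assumed)

      def simulate(params):
          # Placeholder for a musculoskeletal simulation; a made-up smooth
          # function so the example runs end to end.
          act = 0.2 + 0.5 * np.tanh(params['f_max'] / 2000 - 1)
          return {'rectus_femoris': act}

      best, best_err = None, np.inf
      for trial in range(1000):                        # Monte Carlo sampling
          params = {'f_max': rng.uniform(500, 5000)}   # max isometric force, N (assumed range)
          act = simulate(params)
          err = sum((act[m] - target[m])**2 for m in target)
          if err < best_err:
              best, best_err = params, err
      print(best, best_err)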

  15. Parameter Optimization for the Gaussian Model of Folded Proteins

    NASA Astrophysics Data System (ADS)

    Erman, Burak; Erkip, Albert

    2000-03-01

    Recently, we proposed an analytical model of protein folding (B. Erman, K. A. Dill, J. Chem. Phys, 112, 000, 2000) and showed that this model successfully approximates the known minimum energy configurations of two dimensional HP chains. All attractions (covalent and non-covalent) as well as repulsions were treated as if the monomer units interacted with each other through linear spring forces. Since the governing potential of the linear springs is derived from a Gaussian potential, the model is called the ''Gaussian Model''. The conformations predicted by the model for the hexamer and various 9mer sequences all lie on the square lattice, although the model does not contain information about the lattice structure. Results of predictions for chains with 20 or more monomers also agreed well with the corresponding known minimum energy lattice structures; however, these predicted conformations did not lie exactly on the square lattice. In the present work, we treat the specific problem of optimizing the potentials (the strengths of the spring constants) so that the predictions are in better agreement with the known minimum energy structures.

  16. Efficient SRAM yield optimization with mixture surrogate modeling

    NASA Astrophysics Data System (ADS)

    Zhongjian, Jiang; Zuochang, Ye; Yan, Wang

    2016-12-01

    Largely repeated cells such as SRAM cells usually require extremely low failure rates to ensure a moderate chip yield. Though fast Monte Carlo methods such as importance sampling and its variants can be used for yield estimation, they are still very expensive when optimization must be performed on top of such estimates, because yield calculation typically requires a large number of SPICE simulations, which account for the largest share of the computation time. In this paper, a new method is proposed to address this issue. The key idea is to establish an efficient mixture surrogate model over both the design variables and the process variables: SPICE simulation is used to obtain a set of sample points, on which the mixture surrogate model is trained with the lasso algorithm. Experimental results show that the proposed model calculates yield accurately and brings significant speed-ups to the calculation of the failure rate. Based on the model, we developed a further accelerated algorithm to enhance the speed of the yield calculation. The method is suitable for high-dimensional process variables and multi-performance applications.
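
    A toy version of the surrogate idea, under assumed synthetic data: a sparse (lasso) polynomial model of a circuit performance metric is fitted on a few "SPICE" samples, then a large Monte Carlo population is pushed through the cheap surrogate to estimate the failure rate. The performance function and the failure threshold are invented, and the single lasso model is a simplification of the paper's mixture surrogate.

      import numpy as np
      from sklearn.linear_model import Lasso
      from sklearn.preprocessing import PolynomialFeatures

      rng = np.random.default_rng(1)
      X = rng.normal(size=(500, 6))                   # process/design variables
      # stand-in for an expensive SPICE-measured performance metric
      y = X[:, 0] - 0.5 * X[:, 1]**2 + 0.1 * rng.normal(size=500)

      poly = PolynomialFeatures(degree=2, include_bias=False)
      model = Lasso(alpha=0.01).fit(poly.fit_transform(X), y)   # sparse surrogate

      Xmc = rng.normal(size=(200000, 6))              # large Monte Carlo sample
      fail = model.predict(poly.transform(Xmc)) < -3.0    # failure threshold (assumed)
      print("estimated failure rate:", fail.mean())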

  17. Optimal Filtering in Mass Transport Modeling From Satellite Gravimetry Data

    NASA Astrophysics Data System (ADS)

    Ditmar, P.; Hashemi Farahani, H.; Klees, R.

    2011-12-01

    Monitoring natural mass transport in the Earth's system, which has marked a new era in Earth observation, is largely based on the data collected by the GRACE satellite mission. Unfortunately, this mission is not free from certain limitations, two of which are especially critical. Firstly, its sensitivity is strongly anisotropic: it senses the north-south component of the mass re-distribution gradient much better than the east-west component. Secondly, it suffers from a trade-off between temporal and spatial resolution: a high (e.g., daily) temporal resolution is only possible if the spatial resolution is sacrificed. To make things even worse, the GRACE satellites occasionally enter a phase when their orbit is characterized by a short repeat period, which makes it impossible to reach a high spatial resolution at all. A way to mitigate the limitations of GRACE measurements is to design optimal data processing procedures, so that all available information is fully exploited when modeling mass transport. This implies, in particular, that an unconstrained model directly derived from satellite gravimetry data needs to be optimally filtered. In principle, this can be realized with a Wiener filter, which is built on the basis of the covariance matrices of noise and signal. In practice, however, the compilation of both matrices (and, therefore, of the filter itself) is not a trivial task. To build the covariance matrix of noise in a mass transport model, it is necessary to start from a realistic model of noise in the level-1B data. Furthermore, routine satellite gravimetry data processing includes, in particular, the subtraction of nuisance signals (for instance, associated with the atmosphere and ocean), for which appropriate background models are used. Such models are not error-free, which has to be taken into account when the noise covariance matrix is constructed. In addition, both signal and noise covariance matrices depend on the type of mass transport processes under
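
    The Wiener filtering step mentioned above reduces, for given signal and noise covariance matrices S and N, to applying W = S(S + N)^-1 to the unconstrained solution. The numerical sketch below uses toy diagonal covariances standing in for spherical-harmonic coefficient statistics; all values are assumptions.

      import numpy as np

      rng = np.random.default_rng(2)
      n = 50
      S = np.diag(1.0 / (1 + np.arange(n))**2)   # signal cov: power falls with degree (assumed)
      N = 0.01 * np.eye(n)                       # noise covariance, assumed white

      x_true = rng.multivariate_normal(np.zeros(n), S)
      x_raw = x_true + rng.multivariate_normal(np.zeros(n), N)   # unconstrained solution

      W = S @ np.linalg.inv(S + N)               # Wiener filter
      x_filt = W @ x_raw
      # filtered error should be smaller than the raw error
      print(np.linalg.norm(x_filt - x_true), np.linalg.norm(x_raw - x_true))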

  18. Model reduction for chemical kinetics: An optimization approach

    SciTech Connect

    Petzold, L.; Zhu, W.

    1999-04-01

    The kinetics of a detailed chemically reacting system can potentially be very complex. Although the chemist may be interested in only a few species, the reaction model almost always involves a much larger number of species. Some of those species are radicals, which are very reactive and can be important intermediaries in the reaction scheme. A large number of elementary reactions can occur among the species; some of these reactions are fast and some are slow. The aim of simplified kinetics modeling is to derive the simplest reaction system which retains the essential features of the full system. An optimization-based method for reducing the number of species and reactions in a chemical kinetics model is described. Numerical results for several reaction mechanisms illustrate the potential of this approach.

  19. Optimal control of CPR procedure using hemodynamic circulation model

    DOEpatents

    Lenhart, Suzanne M.; Protopopescu, Vladimir A.; Jung, Eunok

    2007-12-25

    A method for determining a chest pressure profile for cardiopulmonary resuscitation (CPR) includes the steps of representing a hemodynamic circulation model based on a plurality of difference equations for a patient, applying an optimal control (OC) algorithm to the circulation model, and determining a chest pressure profile. The chest pressure profile defines a timing pattern of externally applied pressure to a chest of the patient to maximize blood flow through the patient. A CPR device includes a chest compressor, a controller communicably connected to the chest compressor, and a computer communicably connected to the controller. The computer determines the chest pressure profile by applying an OC algorithm to a hemodynamic circulation model based on the plurality of difference equations.

  20. Optimization Model for Web Based Multimodal Interactive Simulations.

    PubMed

    Halic, Tansel; Ahn, Woojin; De, Suvranu

    2015-07-15

    This paper presents a technique for optimizing the performance of web based multimodal interactive simulations. For such applications, where visual quality and simulation performance directly influence user experience, overloading of hardware resources may result in an unsatisfactory reduction in simulation quality and user satisfaction. However, optimizing simulation performance for each individual hardware platform is not practical. Hence, we present a mixed integer programming model to optimize the performance of graphical rendering and simulation while satisfying application specific constraints. Our approach includes three distinct phases: identification, optimization and update. In the identification phase, the computing and rendering capabilities of the client device are evaluated using an exploratory proxy code. This data is utilized in conjunction with user specified design requirements in the optimization phase to ensure the best possible computational resource allocation. The optimum solution is used for rendering (e.g. texture size, canvas resolution) and simulation parameters (e.g. simulation domain) in the update phase. Test results are presented on multiple hardware platforms with diverse computing and graphics capabilities to demonstrate the effectiveness of our approach.

  1. Optimization Model for Web Based Multimodal Interactive Simulations

    PubMed Central

    Halic, Tansel; Ahn, Woojin; De, Suvranu

    2015-01-01

    This paper presents a technique for optimizing the performance of web based multimodal interactive simulations. For such applications, where visual quality and simulation performance directly influence user experience, overloading of hardware resources may result in an unsatisfactory reduction in simulation quality and user satisfaction. However, optimizing simulation performance for each individual hardware platform is not practical. Hence, we present a mixed integer programming model to optimize the performance of graphical rendering and simulation while satisfying application specific constraints. Our approach includes three distinct phases: identification, optimization and update. In the identification phase, the computing and rendering capabilities of the client device are evaluated using an exploratory proxy code. This data is utilized in conjunction with user specified design requirements in the optimization phase to ensure the best possible computational resource allocation. The optimum solution is used for rendering (e.g. texture size, canvas resolution) and simulation parameters (e.g. simulation domain) in the update phase. Test results are presented on multiple hardware platforms with diverse computing and graphics capabilities to demonstrate the effectiveness of our approach. PMID:26085713

  2. H2-optimal control with generalized state-space models for use in control-structure optimization

    NASA Technical Reports Server (NTRS)

    Wette, Matt

    1991-01-01

    Several advances are provided for solving combined control-structure optimization problems. The author has extended solutions from H2 optimal control theory to the use of generalized state space models. The generalized state space models preserve the sparsity inherent in finite element models and hence show some promise for handling very large problems. Also, expressions for the gradient of the optimal control cost are derived which use the generalized state space models.

  3. Multi-Objective Optimization of the Tank Model

    NASA Astrophysics Data System (ADS)

    Tanakamaru, H.

    2002-12-01

    The Tank Model is a conceptual rainfall-runoff model developed by Sugawara, which has 16 parameters including 4 initial storage depths. In this study, parameter optimization of the Tank Model using multiple objectives is investigated. The root mean square error and the root mean square of the relative error of the simulated daily runoff hydrograph, which show an obvious trade-off relationship, are adopted as objective functions, and these objectives are minimized under the constraint of a permitted water balance error. The classical weighting method is applied to obtain discrete Pareto optimal solutions of the multi-objective problem: the problem is converted into a single-objective problem by the weighting method, and the SCE-UA single-objective global optimization algorithm (Duan et al., 1992) is applied to solve it. Such a classical method is not suited to approximating the continuous Pareto space, because many single-objective optimization runs (i.e., a huge number of function evaluations) are required to obtain many discrete Pareto solutions. To overcome these difficulties, effective and efficient approaches such as the MOCOM-UA method (Yapo et al., 1998) have been developed. Here, a new simple approach based on a random search algorithm is developed to approximate the entire Pareto space. In this approach, a large number of new parameter sets is generated randomly within the parameter ranges spanned by the original discrete Pareto solutions, and function evaluations of the generated parameter sets are conducted. After removing solutions that do not satisfy the constraints, non-dominated solutions (Pareto ranking 1) are selected from the generated solutions and the original discrete solutions. A calibration study using hydrological data of the Eigenji Dam Basin, Japan shows that the combination of the weighting method and the random search algorithm is effective and efficient in approximating the entire Pareto space of the multi-objective problem.
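
    The random-search stage described above is compact enough to sketch with a stand-in model: candidate parameter sets are drawn inside the box spanned by an initial Pareto set, and the non-dominated (rank-1) subset is re-extracted. The two-objective toy function below replaces the actual Tank Model and its error measures.

      import numpy as np

      rng = np.random.default_rng(3)

      def objectives(p):                    # stand-in for RMSE / relative RMSE
          return np.array([np.sum((p - 0.2)**2), np.sum((p - 0.8)**2)])

      def nondominated(F):
          keep = []
          for i, fi in enumerate(F):
              if not any(np.all(fj <= fi) and np.any(fj < fi) for fj in F):
                  keep.append(i)            # no other point dominates fi
          return keep

      P0 = rng.random((20, 4))                            # "original" Pareto solutions
      lo, hi = P0.min(axis=0), P0.max(axis=0)             # box they span
      P = np.vstack([P0, lo + (hi - lo) * rng.random((500, 4))])  # random new sets
      F = np.array([objectives(p) for p in P])
      print(len(nondominated(F)), "Pareto-rank-1 solutions")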

  4. Optimized diagnostic model combination for improving diagnostic accuracy

    NASA Astrophysics Data System (ADS)

    Kunche, S.; Chen, C.; Pecht, M. G.

    Identifying the most suitable classifier for diagnostics is a challenging task. In addition to using domain expertise, a trial and error method has been widely used to identify the most suitable classifier. Classifier fusion can be used to overcome this challenge, and it is widely known to perform better than a single classifier. Classifier fusion helps in overcoming the error due to the inductive bias of the various classifiers. The combination rule also plays a vital role in classifier fusion, but which combination rules provide the best performance during classifier fusion has not been well studied. Good combination rules achieve good generalizability while taking advantage of the diversity of the classifiers. In this work, we develop an approach for ensemble learning consisting of an optimized combination rule. Generalizability has been acknowledged to be a challenge when training a diverse set of classifiers, but it can be achieved through an optimal balance between bias and variance errors using the combination rule proposed in this paper. Generalizability implies the ability of a classifier to learn the underlying model from the training data and to predict unseen observations. In this paper, cross validation has been employed during the performance evaluation of each classifier to get an unbiased performance estimate. An objective function is constructed and optimized based on the performance evaluation to achieve the optimal bias-variance balance; this function can be solved as a constrained nonlinear optimization problem. Sequential Quadratic Programming based optimization, with better convergence properties, has been employed for the optimization. We have demonstrated the applicability of the algorithm by using support vector machines and neural networks as classifiers, but the methodology can be broadly applied to combining other classifier algorithms as well. The method has been applied to the fault diagnosis of analog circuits. The performance of the proposed
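
    A rough sketch of an optimized combination rule in the spirit described above: cross-validated probability outputs of a support vector machine and a neural network are fused with weights found by SciPy's SLSQP solver (an SQP method). The synthetic data set and the squared-error objective are illustrative choices, not the paper's exact formulation.

      import numpy as np
      from scipy.optimize import minimize
      from sklearn.datasets import make_classification
      from sklearn.model_selection import cross_val_predict
      from sklearn.svm import SVC
      from sklearn.neural_network import MLPClassifier

      X, y = make_classification(n_samples=400, random_state=0)
      clfs = [SVC(probability=True, random_state=0),
              MLPClassifier(max_iter=1000, random_state=0)]
      # unbiased per-classifier probability estimates via cross validation
      P = np.column_stack([cross_val_predict(c, X, y, cv=5,
                                             method='predict_proba')[:, 1]
                           for c in clfs])

      def loss(w):                                  # squared error of the fused score
          return np.mean((P @ w - y)**2)

      cons = ({'type': 'eq', 'fun': lambda w: w.sum() - 1},)
      res = minimize(loss, np.full(2, 0.5), method='SLSQP',
                     bounds=[(0, 1)] * 2, constraints=cons)
      print("fusion weights:", res.x)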

  5. Spatial optimization of watershed management practices for nitrogen load reduction using a modeling-optimization framework

    EPA Science Inventory

    Best management practices (BMPs) are perceived as being effective in reducing nutrient loads transported from non-point sources (NPS) to receiving water bodies. The objective of this study was to develop a modeling-optimization framework that can be used by watershed management p...

  6. WE-D-BRE-04: Modeling Optimal Concurrent Chemotherapy Schedules

    SciTech Connect

    Jeong, J; Deasy, J O

    2014-06-15

    Purpose: Concurrent chemo-radiation therapy (CCRT) has become a more common cancer treatment option, with a better tumor control rate for several tumor sites, including head and neck and lung cancer. In this work, possible optimal chemotherapy schedules were investigated by implementing chemotherapy cell-kill into a tumor response model of RT. Methods: The chemotherapy effect has been added to a published model (Jeong et al., PMB (2013) 58:4897), in which the tumor response to RT can be simulated with the effects of hypoxia and proliferation. Based on a two-compartment pharmacokinetic model, the temporal concentration of the chemotherapy agent was estimated. Log cell-kill was assumed, and the cell-kill constant was estimated from the observed increase in local control due to concurrent chemotherapy. For a simplified two-cycle CCRT regime, several different starting times and intervals were simulated with a conventional RT regimen (2 Gy/fx, 5 fx/wk). The effectiveness of CCRT was evaluated in terms of the reduction in radiation dose required for 50% control, to find the optimal chemotherapy schedule. Results: Assuming the typical slope of the dose response curve (γ50=2), the observed 10% increase in local control rate was evaluated to be equivalent to an extra RT dose of about 4 Gy, from which the cell-kill rate of chemotherapy was derived to be about 0.35. The best response was obtained when chemotherapy was started about 3 weeks after RT began. As the interval between the two cycles decreases, the efficacy of chemotherapy increases, with a broader range of optimal starting times. Conclusion: The effect of chemotherapy has been implemented into the resource-conservation tumor response model to investigate CCRT. The results suggest that concurrent chemotherapy might be more effective when delayed for about 3 weeks, due to lower tumor burden and a larger fraction of proliferating cells after reoxygenation.

  7. Modeling and optimization of a hybrid solar combined cycle (HYCS)

    NASA Astrophysics Data System (ADS)

    Eter, Ahmad Adel

    2011-12-01

    The main objective of this thesis is to investigate the feasibility of integrating concentrated solar power (CSP) technology with conventional combined cycle technology for electric generation in Saudi Arabia. The generated electricity can be used locally to meet the annually increasing demand. Specifically, it can be utilized to meet the demand during the hours 10 am-3 pm and prevent blackout hours in some industrial sectors. The proposed design gives flexibility in operation, since it works as a conventional combined cycle during night time and switches to hybrid solar combined cycle operation during day time. The first objective of the thesis is to develop a thermo-economical mathematical model that can simulate the performance of a hybrid solar-fossil fuel combined cycle. The second objective is to develop a computer simulation code that can solve the thermo-economical mathematical model using available software such as E.E.S. The developed simulation code is used to analyze the thermo-economic performance of different configurations for integrating CSP with the conventional fossil fuel combined cycle, in order to identify the optimal integration configuration. This optimal configuration has been investigated further to achieve the optimal design of the solar field that gives the optimal solar share. Thermo-economical performance metrics available in the literature have been used in the present work to assess the thermo-economic performance of the investigated configurations. The economic and environmental impacts of integrating CSP with the conventional fossil fuel combined cycle are estimated and discussed. Finally, the optimal integration configuration is found to be solarizing the steam side of the conventional combined cycle with a solar multiple of 0.38, which requires 29 hectares of solar field and yields an LEC for the HYCS of 63.17 $/MWh under Dhahran weather conditions.

  8. Model-Based Individualized Treatment of Chemotherapeutics: Bayesian Population Modeling and Dose Optimization.

    PubMed

    Jayachandran, Devaraj; Laínez-Aguirre, José; Rundell, Ann; Vik, Terry; Hannemann, Robert; Reklaitis, Gintaras; Ramkrishna, Doraiswami

    2015-01-01

    6-Mercaptopurine (6-MP) is one of the key drugs in the treatment of many pediatric cancers, autoimmune diseases and inflammatory bowel disease. 6-MP is a prodrug, converted to an active metabolite, 6-thioguanine nucleotide (6-TGN), through an enzymatic reaction involving thiopurine methyltransferase (TPMT). Pharmacogenomic variation observed in the TPMT enzyme produces significant variation in drug response among the patient population. Despite 6-MP's widespread use and the observed variation in treatment response, efforts at quantitative optimization of dose regimens for individual patients are limited. In addition, research efforts devoted to pharmacogenomics to predict clinical responses are proving far from ideal. In this work, we present a Bayesian population modeling approach to develop a pharmacological model for 6-MP metabolism in humans. In the face of a scarcity of data in clinical settings, a model reduction approach based on global sensitivity analysis is used to minimize the parameter space. For accurate estimation of the sensitive parameters, robust optimal experimental design based on the D-optimality criterion was exploited. With the patient-specific model, a model predictive control algorithm is used to optimize the dose scheduling, with the objective of maintaining the 6-TGN concentration within its therapeutic window. More importantly, for the first time, we show how the incorporation of information from different levels of the biological chain of response (i.e., gene expression-enzyme phenotype-drug phenotype) plays a critical role in determining the uncertainty in predicting the therapeutic target. The model and the control approach can be utilized in the clinical setting to individualize 6-MP dosing based on the patient's ability to metabolize the drug, instead of the traditional standard-dose-for-all approach.

  9. Model-Based Individualized Treatment of Chemotherapeutics: Bayesian Population Modeling and Dose Optimization

    PubMed Central

    Jayachandran, Devaraj; Laínez-Aguirre, José; Rundell, Ann; Vik, Terry; Hannemann, Robert; Reklaitis, Gintaras; Ramkrishna, Doraiswami

    2015-01-01

    6-Mercaptopurine (6-MP) is one of the key drugs in the treatment of many pediatric cancers, autoimmune diseases and inflammatory bowel disease. 6-MP is a prodrug, converted to an active metabolite, 6-thioguanine nucleotide (6-TGN), through an enzymatic reaction involving thiopurine methyltransferase (TPMT). Pharmacogenomic variation observed in the TPMT enzyme produces significant variation in drug response among the patient population. Despite 6-MP's widespread use and the observed variation in treatment response, efforts at quantitative optimization of dose regimens for individual patients are limited. In addition, research efforts devoted to pharmacogenomics to predict clinical responses are proving far from ideal. In this work, we present a Bayesian population modeling approach to develop a pharmacological model for 6-MP metabolism in humans. In the face of a scarcity of data in clinical settings, a model reduction approach based on global sensitivity analysis is used to minimize the parameter space. For accurate estimation of the sensitive parameters, robust optimal experimental design based on the D-optimality criterion was exploited. With the patient-specific model, a model predictive control algorithm is used to optimize the dose scheduling, with the objective of maintaining the 6-TGN concentration within its therapeutic window. More importantly, for the first time, we show how the incorporation of information from different levels of the biological chain of response (i.e., gene expression-enzyme phenotype-drug phenotype) plays a critical role in determining the uncertainty in predicting the therapeutic target. The model and the control approach can be utilized in the clinical setting to individualize 6-MP dosing based on the patient's ability to metabolize the drug, instead of the traditional standard-dose-for-all approach. PMID:26226448

  10. Image quality optimization using an x-ray spectra model-based optimization method

    NASA Astrophysics Data System (ADS)

    Gordon, Clarence L., III

    2000-04-01

    Several x-ray parameters must be optimized to deliver exceptional fluoroscopic and radiographic x-ray Image Quality (IQ) for the large variety of clinical procedures and patient sizes performed on a cardiac/vascular x-ray system. The optimal choice varies as a function of the objective of the medical exam, the patient size, local regulatory requirements, and the operational range of the system. As a result, many distinct combinations are required to successfully operate the x-ray system and meet the clinical imaging requirements. Presented here is a new, configurable and automatic method to perform x-ray technique and IQ optimization using an x-ray spectral model based simulation of the x-ray generation and detection system. This method incorporates many aspects/requirements of the clinical environment and a complete description of the specific x-ray system. First, the algorithm requires specific inputs: clinically relevant performance objectives, system hardware configuration, and system operational range. Second, the optimization is performed for a Primary Optimization Strategy versus patient thickness, e.g. maximum contrast. Finally, in the case where there are multiple operating points which meet the Primary Optimization Strategy, a Secondary Optimization Strategy, e.g. to minimize patient dose, is utilized to determine the final set of optimal x-ray techniques.

  11. Modeling, hybridization, and optimal charging of electrical energy storage systems

    NASA Astrophysics Data System (ADS)

    Parvini, Yasha

    The rising rate of global energy demand alongside dwindling fossil fuel resources has motivated research into alternative and sustainable solutions. Within this area of research, electrical energy storage systems are pivotal in applications including electrified vehicles, renewable power generation, and electronic devices. The approach of this dissertation is to elucidate the bottlenecks of integrating supercapacitors and batteries in energy systems and to propose solutions by means of modeling, control, and experimental techniques. In the first step, the supercapacitor cell is modeled in order to gain a fundamental understanding of its electrical and thermal dynamics. The dependence of the electrical parameters on state of charge (SOC), current direction and magnitude (20-200 A), and temperatures ranging from -40°C to 60°C was embedded in this computationally efficient model. The coupled electro-thermal model was parameterized using specifically designed temporal experiments and then validated by the application of real-world duty cycles. Driving range is one of the major challenges of electric vehicles compared to combustion vehicles. In order to shed light on the benefits of hybridizing a lead-acid-driven electric vehicle via supercapacitors, a model was parameterized for the lead-acid battery and combined with the model already developed for the supercapacitor to build the hybrid battery-supercapacitor model. A hardware-in-the-loop (HIL) setup consisting of a custom-built DC/DC converter, a micro-controller (muC) implementing the power management strategy, a 12V lead-acid battery, and a 16.2V supercapacitor module was built to perform the validation experiments. The goal of charging electrical energy storage systems in an efficient and quick manner motivated an optimal control problem with the objective of maximizing the charging efficiency for supercapacitors, lead-acid, and lithium-ion batteries. Pontryagin's minimum principle was used to solve these problems

  12. Ultradiscrete optimal velocity model: a cellular-automaton model for traffic flow and linear instability of high-flux traffic.

    PubMed

    Kanai, Masahiro; Isojima, Shin; Nishinari, Katsuhiro; Tokihiro, Tetsuji

    2009-05-01

    In this paper, we propose the ultradiscrete optimal velocity model, a cellular-automaton model for traffic flow, obtained by applying the ultradiscrete method to the optimal velocity model. The optimal velocity model, defined by a differential equation, is one of the most important traffic models; in particular, it successfully reproduces the instability of high-flux traffic. It is often pointed out that there is a close relation between the optimal velocity model and the modified Korteweg-de Vries (mKdV) equation, a soliton equation. Meanwhile, the ultradiscrete method enables one to reduce soliton equations to cellular automata which inherit the solitonic nature, such as an infinite number of conservation laws and soliton solutions. We find that the theory of soliton equations is available for generic differential equations, and the simulation results reveal that the resulting model reproduces both absolutely unstable and convectively unstable flows, as does the optimal velocity model.
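
    For flavor, the sketch below runs the elementary rule-184 automaton, a minimal traffic cellular automaton (often viewed as an ultradiscrete limit of the Burgers equation), in which each car advances one cell per step iff the cell ahead is empty. It is not the authors' ultradiscrete optimal velocity rule, only its simplest relative.

      import numpy as np

      rng = np.random.default_rng(4)
      road = (rng.random(60) < 0.4).astype(int)     # 1 = car, 0 = empty cell

      for t in range(20):
          ahead = np.roll(road, -1)                 # periodic road
          moved = road & (1 - ahead)                # cars free to move
          road = (road - moved) | np.roll(moved, 1) # vacate old cell, occupy next
          print(''.join('.#'[c] for c in road))     # space-time diagram, one row per step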

  13. Multisource modeling of flattening filter free (FFF) beam and the optimization of model parameters

    PubMed Central

    Cho, Woong; Kielar, Kayla N.; Mok, Ed; Xing, Lei; Park, Jeong-Hoon; Jung, Won-Gyun; Suh, Tae-Suk

    2011-01-01

    Purpose: With the introduction of flattening filter free (FFF) linear accelerators to radiation oncology, new analytical source models for FFF beams applicable to current treatment planning systems are needed. In this work, a multisource model for the FFF beam was designed and the involved model parameters were optimized. Methods: The model is based on a previous three-source model proposed by Yang [“A three-source model for the calculation of head scatter factors,” Med. Phys. 29, 2024–2033 (2002)]. An off-axis ratio (OAR) of photon fluence was introduced into the primary source term to generate cone-shaped profiles. The parameters of the source model were determined from measured head scatter factors using a line search optimization technique. The OAR of the photon fluence was determined from a measured dose profile of a 40×40 cm2 field with the same optimization technique, but a new method of acquiring gradient terms for the OARs was developed to enhance the speed of the optimization process. The improved model was validated with measured dose profiles from 3×3 to 40×40 cm2 field sizes at 6 and 10 MV from a TrueBeam™ STx linear accelerator. Furthermore, planar dose distributions for clinically used radiation fields were also calculated and compared to measurements from a 2D array detector using the gamma index method. Results: All dose values for the calculated profiles agreed with the measured dose profiles within 0.5% for the 6 and 10 MV beams, except for some low dose regions for the larger field sizes. A slight overestimation, by 1%–4%, was seen in the lower penumbra region near the field edge for the large field sizes. The planar dose calculations showed comparable passing rates (>98%) when the criterion of the gamma index method was selected to be 3%/3 mm. Conclusions: The developed source model showed good agreement between measured and calculated dose distributions. The model is easily applicable to any other linear accelerator using FFF beams as the

  14. Optimal aeroassisted coplanar orbital transfer using an energy model

    NASA Technical Reports Server (NTRS)

    Halyo, Nesim; Taylor, Deborah B.

    1989-01-01

    The atmospheric portion of the trajectories for aeroassisted coplanar orbit transfer was investigated. The equations of motion for the problem are expressed using a reduced-order model, with total vehicle energy (kinetic plus potential) as the independent variable rather than time. The order reduction is achieved analytically without an approximation of the vehicle dynamics. In this model, the problem of coplanar orbit transfer is seen as one in which a given amount of energy must be transferred from the vehicle to the atmosphere during the trajectory without overheating the vehicle. An optimal control problem is posed in which a linear combination of the integrated square of the heating rate and the vehicle drag is the cost function to be minimized. The necessary conditions for optimality are obtained; these result in a 4th-order two-point boundary-value problem. A parametric study of the optimal guidance trajectory is made in which the proportion of the heating-rate term relative to the drag term is varied. Simulations of the guidance trajectories are presented.

  15. Optimizing nanomedicine pharmacokinetics using physiologically based pharmacokinetics modelling.

    PubMed

    Moss, Darren Michael; Siccardi, Marco

    2014-09-01

    The delivery of therapeutic agents is characterized by numerous challenges including poor absorption, low penetration in target tissues and non-specific dissemination in organs, leading to toxicity or poor drug exposure. Several nanomedicine strategies have emerged as advanced approaches to enhance drug delivery and improve the treatment of several diseases. Numerous processes mediate the pharmacokinetics of nanoformulations, with the absorption, distribution, metabolism and elimination (ADME) being poorly understood and often differing substantially from traditional formulations. Understanding how nanoformulation composition and physicochemical properties influence drug distribution in the human body is of central importance when developing future treatment strategies. A helpful pharmacological tool for simulating the distribution of nanoformulations is physiologically based pharmacokinetic (PBPK) modelling, which integrates system data describing a population of interest with drug/nanoparticle in vitro data through a mathematical description of ADME. The application of PBPK models to nanomedicine is in its infancy and characterized by several challenges. The integration of property-distribution relationships in PBPK models may benefit nanomedicine research, giving opportunities for innovative development of nanotechnologies. PBPK modelling has the potential to improve our understanding of the mechanisms underpinning nanoformulation disposition and to allow more rapid and accurate determination of their kinetics. This review provides an overview of the current knowledge of nanomedicine distribution and the use of PBPK modelling in the characterization of nanoformulations with optimal pharmacokinetics.

  16. Endovascular magnetically guided robots: navigation modeling and optimization.

    PubMed

    Arcese, Laurent; Fruchard, Matthieu; Ferreira, Antoine

    2012-04-01

    This paper deals with the benefits of using a nonlinear model-based approach for controlling magnetically guided therapeutic microrobots in the cardiovascular system. Such robots, used for minimally invasive interventions, consist of a polymer-bound aggregate of nanosized ferromagnetic particles functionalized by drug-conjugated micelles. The proposed modeling addresses wall effects (blood velocity in minor and major vessel bifurcations, pulsatile blood flow and vessel walls, and the effect of the robot-to-vessel diameter ratio), wall interactions (contact, van der Waals, electrostatic, and steric forces), the non-Newtonian behavior of blood, and different driving designs as well. Although nonlinear and thorough, the resulting model can both be exploited to improve the targeting ability and be controlled in closed loop using nonlinear control theory tools. In particular, we infer from the model an optimization of both the designs and the reference trajectory to minimize the control efforts. Efficiency and robustness to noise and model parameter uncertainties are then illustrated through simulation results for a bead-pulled robot of radius 250 μm in a small artery.

  17. Optimal allocation of computational resources in hydrogeological models under uncertainty

    NASA Astrophysics Data System (ADS)

    Moslehi, Mahsa; Rajagopal, Ram; de Barros, Felipe P. J.

    2015-09-01

    Flow and transport models in heterogeneous geological formations are usually large-scale with excessive computational complexity and uncertain characteristics. Uncertainty quantification for predicting subsurface flow and transport often entails utilizing a numerical Monte Carlo framework, which repeatedly simulates the model according to a random field parameter representing hydrogeological characteristics of the aquifer. The physical resolution (e.g. spatial grid resolution) for the simulation is customarily chosen based on recommendations in the literature, independent of the number of Monte Carlo realizations. This practice may lead to either excessive computational burden or inaccurate solutions. We develop an optimization-based methodology that considers the trade-off between the following conflicting objectives: time associated with computational costs, statistical convergence of the model prediction and physical errors corresponding to numerical grid resolution. Computational resources are allocated by considering the overall error based on a joint statistical-numerical analysis and optimizing the error model subject to a given computational constraint. The derived expression for the overall error explicitly takes into account the joint dependence between the discretization error of the physical space and the statistical error associated with Monte Carlo realizations. The performance of the framework is tested against computationally extensive simulations of flow and transport in spatially heterogeneous aquifers. Results show that modelers can achieve optimum physical and statistical resolutions while keeping a minimum error for a given computational time. The physical and statistical resolutions obtained through our analysis yield lower computational costs when compared to the results obtained with prevalent recommendations in the literature. Lastly, we highlight the significance of the geometrical characteristics of the contaminant source zone on the
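
    The trade-off described above can be caricatured with a simple error model: a discretization term scaling like h^p plus a sampling term scaling like 1/sqrt(N), minimized subject to a fixed CPU budget proportional to N h^-d. All exponents and constants below are assumptions, not the paper's calibrated error model.

      import numpy as np
      from scipy.optimize import minimize_scalar

      p, d = 2.0, 3.0                 # grid convergence order, spatial dimension (assumed)
      c_num, c_stat, c_cpu = 1.0, 1.0, 1.0
      budget = 1e6                    # total CPU time available (arbitrary units)

      def total_error(h):
          N = budget * h**d / c_cpu            # realizations affordable at grid size h
          return c_num * h**p + c_stat / np.sqrt(N)   # discretization + sampling error

      res = minimize_scalar(total_error, bounds=(1e-3, 1.0), method='bounded')
      h_opt = res.x
      print("optimal grid size:", h_opt, "realizations:", budget * h_opt**d)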

  18. Monte Carlo modeling and optimization of buffer gas positron traps

    NASA Astrophysics Data System (ADS)

    Marjanović, Srđan; Petrović, Zoran Lj

    2017-02-01

    Buffer gas positron traps have been used for over two decades as the prime source of slow positrons enabling a wide range of experiments. While their performance has been well understood through empirical studies, no theoretical attempt has been made to quantitatively describe their operation. In this paper we apply standard models as developed for physics of low temperature collision dominated plasmas, or physics of swarms to model basic performance and principles of operation of gas filled positron traps. The Monte Carlo model is equipped with the best available set of cross sections that were mostly derived experimentally by using the same type of traps that are being studied. Our model represents in realistic geometry and fields the development of the positron ensemble from the initial beam provided by the solid neon moderator through voltage drops between the stages of the trap and through different pressures of the buffer gas. The first two stages employ excitation of N2 with acceleration of the order of 10 eV so that the trap operates under conditions when excitation of the nitrogen reduces the energy of the initial beam to trap the positrons without giving them a chance to become annihilated following positronium formation. The energy distribution function develops from the assumed distribution leaving the moderator, it is accelerated by the voltage drops and forms beams at several distinct energies. In final stages the low energy loss collisions (vibrational excitation of CF4 and rotational excitation of N2) control the approach of the distribution function to a Maxwellian at room temperature but multiple non-Maxwellian groups persist throughout most of the thermalization. Optimization of the efficiency of the trap may be achieved by changing the pressure and voltage drops and also by selecting to operate in a two stage mode. The model allows quantitative comparisons and test of optimization as well as development of other properties.

  19. Multi-model groundwater-management optimization: reconciling disparate conceptual models

    NASA Astrophysics Data System (ADS)

    Timani, Bassel; Peralta, Richard

    2015-09-01

    Disagreement among policymakers often involves policy issues and differences between the decision makers' implicit utility functions. Significant disagreement can also exist concerning conceptual models of the physical system. Disagreement on the validity of a single simulation model delays discussion of policy issues and prevents the adoption of consensus management strategies. For such a contentious situation, the proposed multi-conceptual model optimization (MCMO) can help stakeholders reach a compromise strategy. MCMO computes mathematically optimal strategies that simultaneously satisfy analogous constraints and bounds in multiple numerical models that differ in boundary conditions, hydrogeologic stratigraphy, and discretization. Shadow prices and trade-offs guide the process of refining the first MCMO-developed 'multi-model' strategy into a realistic compromise management strategy. By employing automated cycling, MCMO is practical for linear and nonlinear aquifer systems. In this reconnaissance study, MCMO is applied to the multilayer Cache Valley (Utah and Idaho, USA) river-aquifer system, employing two simulation models with analogous background conditions but different vertical discretization and boundary conditions. The objective is to maximize additional safe pumping (beyond current pumping), subject to constraints on groundwater head and seepage from the aquifer to surface waters. MCMO application reveals that, in order to protect the local ecosystem, increased groundwater pumping can satisfy only 40% of the projected increase in water demand. To explore the possibility of increasing that pumping while protecting the ecosystem, MCMO clearly identifies localities requiring additional field data. MCMO is applicable to areas and optimization problems other than those used here. Steps to prepare comparable sub-models for MCMO use are area-dependent.
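
    A minimal linear caricature of the MCMO idea: a single pumping strategy must respect head-decline constraints predicted by two different (invented) linear response matrices, one per conceptual model, while total added pumping is maximized. Real MCMO handles nonlinear aquifer systems via automated cycling; this sketch covers only the one-shot linear case.

      import numpy as np
      from scipy.optimize import linprog

      R1 = np.array([[0.5, 0.2], [0.1, 0.6]])   # drawdown per unit pumping, model 1 (assumed)
      R2 = np.array([[0.7, 0.1], [0.2, 0.4]])   # same quantity, model 2 (assumed)
      max_drawdown = np.array([3.0, 3.0])       # allowed head decline at 2 control points

      # maximize total pumping  <=>  minimize -(q1 + q2),
      # subject to R1 q <= limit AND R2 q <= limit simultaneously
      res = linprog(c=[-1, -1],
                    A_ub=np.vstack([R1, R2]),
                    b_ub=np.concatenate([max_drawdown, max_drawdown]),
                    bounds=[(0, None)] * 2)
      print("compromise pumping rates:", res.x)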

  20. Modelling the Fermilab Collider to determine optimal running

    SciTech Connect

    McCrory, E.

    1994-12-01

    A Monte Carlo-type model of the Fermilab Collider has been constructed, the goal of which is to accurately represent the operation of the Collider, incorporating the aspects of the facility which affect operations, in order to determine how to run optimally. In particular, downtimes for the various parts of the complex are parameterized and included. Also, transfer efficiencies, emittance growth, changes in the luminosity lifetime and other effects are included and randomized in a reasonable manner. This Memo is an outgrowth of TM-1878, which presented an entirely analytical model of the Collider. That model provided a framework for developing intuition about the way major components of the Collider, such as the stacking rate and the shot setup time, affect the luminosity. However, without accurately including downtime effects, it is not possible to say with certainty that the analytical approach can produce accurate guidelines for optimizing the performance of the Collider; producing such guidelines is the goal of this analysis. We first discuss how the model is written, describing the object-oriented approach taken in C++, and the parameters of the simulation. The potential criteria for ending stores are then described and analyzed, a typical store and a typical week are derived, and a final conclusion on the best end-of-store criterion is made. Finally, ideas for future analysis are presented.

  1. Stochastic optimization algorithm for inverse modeling of air pollution

    NASA Astrophysics Data System (ADS)

    Yeo, Kyongmin; Hwang, Youngdeok; Liu, Xiao; Kalagnanam, Jayant

    2016-11-01

    A stochastic optimization algorithm to estimate a smooth source function from a limited number of observations is proposed in the context of air pollution, where the source-receptor relation is given by an advection-diffusion equation. First, the smooth source function is approximated by a set of Gaussian kernels on a rectangular mesh system. Then, a generalized polynomial chaos (gPC) expansion is used to represent the model uncertainty due to the choice of the mesh system. It is shown that the convolution of the gPC basis and the Gaussian kernel provides hierarchical basis functions for spectral function estimation. The spectral inverse model is formulated as a stochastic optimization problem, and we propose a regularization strategy based on the hierarchical nature of the basis polynomials. It is shown that the spectral inverse model is capable of providing a good estimate of the source function even when the number of unknown parameters (m) is much larger than the number of data (n), m/n > 50.
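
    A stripped-down sketch of the kernel-expansion step, with the advection-diffusion forward operator replaced by plain kernel evaluation: the source is a weighted sum of Gaussian kernels on a mesh, and the weights are recovered from a handful of observations by Tikhonov-regularized least squares (a simpler stand-in for the paper's hierarchical gPC regularization). The kernel width, mesh, and noise level are assumptions.

      import numpy as np

      rng = np.random.default_rng(5)
      centers = np.linspace(0, 1, 25)                  # kernel centers on a 1D mesh
      obs_x = rng.random(8)                            # only 8 receptor locations

      def kernel(x, c, width=0.08):
          return np.exp(-(x[:, None] - c[None, :])**2 / (2 * width**2))

      w_true = np.zeros(25)
      w_true[10:14] = [0.5, 1.0, 1.0, 0.5]             # a smooth localized source
      y = kernel(obs_x, centers) @ w_true + 0.01 * rng.normal(size=8)

      G = kernel(obs_x, centers)                       # 8 x 25: unknowns >> data
      lam = 1e-3                                       # regularization strength
      w_est = np.linalg.solve(G.T @ G + lam * np.eye(25), G.T @ y)
      print(np.round(w_est[8:16], 2))                  # recovered weights near the source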

  2. The role of optimization in structural model refinement

    NASA Technical Reports Server (NTRS)

    Lehman, L. L.

    1984-01-01

    To evaluate the role that optimization can play in structural model refinement, it is necessary to examine the existing environment for the structural design/structural modification process. The traditional approach to design, analysis, and modification is illustrated. Typically, a cyclical path is followed in evaluating and refining a structural system, with parallel paths existing between the real system and the analytical model of the system. The major failing of the existing approach is the rather weak link of communication between the cycle for the real system and the cycle for the analytical model. Only at the expense of much human effort can data sharing and comparative evaluation be enhanced for the two parallel cycles. Much of the difficulty can be traced to the lack of a user-friendly, rapidly reconfigurable engineering software environment for facilitating data and information exchange. Until this type of software environment becomes readily available to the majority of the engineering community, the role of optimization will not be able to reach its full potential and engineering productivity will continue to suffer. A key issue in current engineering design, analysis, and test is the definition and development of an integrated engineering software support capability. The data and solution flow for this type of integrated engineering analysis/refinement system is shown.

  3. Model-based optimal planning of hepatic radiofrequency ablation.

    PubMed

    Chen, Qiyong; Müftü, Sinan; Meral, Faik Can; Tuncali, Kemal; Akçakaya, Murat

    2016-07-19

    This article presents a model-based pre-treatment optimal planning framework for hepatic tumour radiofrequency (RF) ablation. Conventional hepatic RF ablation methods rely on a pre-specified input voltage and treatment length based on the tumour size. Using these experimentally obtained, pre-specified treatment parameters in RF ablation is not optimal for achieving the expected level of cell death and usually results in more healthy tissue damage than desired. In this study we present a pre-treatment planning framework that provides tools to control the levels of both healthy tissue preservation and tumour cell death. Over the geometry of the tumour and surrounding tissue, we formulate RF ablation planning as a constrained optimization problem. With specific constraints on the temperature profile (TP) in pre-determined areas of the target geometry, we consider two different cost functions, based on the history of the TP and on the Arrhenius index (AI) of the target location, respectively. We optimally compute the input voltage variation to minimize the damage to healthy tissue while ensuring complete cell death in the tumour and the immediate area covering the tumour. As an example, we use a simulation of a 1D symmetric target geometry mimicking the application of a single-electrode RF probe. Results demonstrate that, compared to the conventional methods, both cost functions improve healthy tissue preservation.

  4. Prehension synergies during nonvertical grasping, II: Modeling and optimization.

    PubMed

    Pataky, Todd C; Latash, Mark L; Zatsiorsky, Vladimir M

    2004-10-01

    This study examines various optimization criteria as potential sources of constraints that eliminate (or at least reduce the degree of) mechanical redundancy in prehension. A model of nonvertical grasping, mimicking the experimental conditions of Pataky et al. (current issue), was developed and numerically optimized. Several cost functions compared well with experimental data, including energy-like functions, entropy-like functions, and a ''motor command'' function; a tissue deformation function failed to predict finger forces. In the prehension literature, the ''safety margin'' (SM) measure has been used to describe grasp quality. We demonstrate here that the SM is an inappropriate measure for nonvertical grasps, and we introduce a new measure, the ''generalized safety margin'' (GSM), which reduces to the SM for vertical and two-digit grasps. It was found that a close-to-constant GSM accounts for many of the finger force patterns that are observed when grasping an object oriented arbitrarily with respect to the gravity field. It was hypothesized that, when determining finger forces, the CNS assumes that a grasped object is more slippery than it actually is: an ''operative friction coefficient'' of approximately 30% of the actual coefficient accounted for the offset between the experimental and optimized data. The data suggest that the CNS utilizes an optimization strategy when coordinating finger forces during grasping.

  5. Swimming simply: Minimal models and stroke optimization for biological systems

    NASA Astrophysics Data System (ADS)

    Burton, Lisa; Guasto, Jeffrey S.; Stocker, Roman; Hosoi, A. E.

    2012-11-01

    In this talk, we examine how to represent the kinematics of swimming biological systems. We present a new method of extracting optimal curvature-space basis modes from high-speed video microscopy images of motile spermatozoa by tracking their flagellar kinematics. Using as few as two basis modes to characterize the swimmer's shape, we apply resistive force theory to build a model and predict the swimming speed and net translational and rotational displacement of a sperm cell over any given stroke. This low-order representation of motility yields a complete visualization of the system dynamics. The visualization tools provide refined initialization and intuition for global stroke optimization and improve motion planning by taking advantage of symmetries in the shape space to design a stroke that produces a desired net motion. Comparing the predicted optimal strokes to those observed experimentally enables us to rationalize biological motion by identifying possible optimization goals of the organism. This approach is applicable to a wide array of systems at both low and high Reynolds numbers. Battelle Memorial Institute and NSF.

  6. Using Cotton Model Simulations to Estimate Optimally Profitable Irrigation Strategies

    NASA Astrophysics Data System (ADS)

    Mauget, S. A.; Leiker, G.; Sapkota, P.; Johnson, J.; Maas, S.

    2011-12-01

    In recent decades irrigation pumping from the Ogallala Aquifer has led to declines in saturated thickness that have not been compensated for by natural recharge, which has led to questions about the long-term viability of agriculture in the cotton producing areas of west Texas. Adopting irrigation management strategies that optimize profitability while reducing irrigation waste is one way of conserving the aquifer's water resource. Here, a database of modeled cotton yields generated under drip and center pivot irrigated and dryland production scenarios is used in a stochastic dominance analysis that identifies such strategies under varying commodity price and pumping cost conditions. This database and analysis approach will serve as the foundation for a web-based decision support tool that will help producers identify optimal irrigation treatments under specified cotton price, electricity cost, and depth to water table conditions.

  7. Vibroacoustic optimization using a statistical energy analysis model

    NASA Astrophysics Data System (ADS)

    Culla, Antonio; D'Ambrogio, Walter; Fregolent, Annalisa; Milana, Silvia

    2016-08-01

    In this paper, an optimization technique for medium-high frequency dynamic problems based on the Statistical Energy Analysis (SEA) method is presented. Using an SEA model, the subsystem energies are controlled by internal loss factors (ILFs) and coupling loss factors (CLFs), which in turn depend on the physical parameters of the subsystems. A preliminary sensitivity analysis of subsystem energy to the CLFs is performed to select the CLFs that are most effective on subsystem energies. Since the injected power depends not only on the external loads but also on the physical parameters of the subsystems, it must be taken into account under certain conditions. This is accomplished in the optimization procedure, where approximate relationships between CLFs, injected power, and physical parameters are derived. The approach is applied to a typical aeronautical structure: the cabin of a helicopter.
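
    The core SEA bookkeeping is a linear power balance that can be sketched in a few lines, assuming the standard steady-state form P_i = omega * (eta_i E_i + sum_j (eta_ij E_i - eta_ji E_j)); the loss factors below are illustrative, not the paper's values:

```python
import numpy as np

# Two-subsystem SEA power balance at band-center frequency omega.
omega = 2 * np.pi * 1000.0          # rad/s
eta = np.array([0.01, 0.02])        # internal loss factors (ILFs)
eta_c = np.array([[0.0, 0.003],     # coupling loss factors (CLFs),
                  [0.001, 0.0]])    # eta_c[i, j] = eta_ij
P_in = np.array([1.0, 0.0])         # injected power (W)

# Assemble omega * A @ E = P_in, with A_ii = eta_i + sum_j eta_ij
# and A_ij = -eta_ji for i != j, then solve for subsystem energies.
A = np.diag(eta + eta_c.sum(axis=1)) - eta_c.T
E = np.linalg.solve(omega * A, P_in)
print(E)
```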

  8. Design Oriented Structural Modeling for Airplane Conceptual Design Optimization

    NASA Technical Reports Server (NTRS)

    Livne, Eli

    1999-01-01

    The main goal of the research conducted with the support of this grant was to develop design-oriented structural optimization methods for the conceptual design of airplanes. Traditionally in conceptual design, airframe weight is estimated from statistical equations developed over years of fitting airplane weight data in databases of similar existing airplanes. Utilization of such regression equations for the design of new airplanes can be justified only if the new airplanes use structural technology similar to the technology of the airplanes in those weight databases. If any new structural technology is to be pursued or any new unconventional configurations designed, the statistical weight equations cannot be used. In such cases any structural weight estimation must be based on rigorous "physics based" structural analysis and optimization of the airframes under consideration. Work under this grant progressed to explore airframe design-oriented structural optimization techniques along two lines of research: methods based on "fast" design-oriented finite element technology, and methods based on equivalent plate / equivalent shell models of airframes, in which the vehicle is modelled as an assembly of plate and shell components, each simulating a lifting surface or nacelle / fuselage pieces. Since response to changes in geometry is essential in conceptual design of airplanes, as is the capability to optimize the shape itself, research supported by this grant sought to develop efficient techniques for parametrization of airplane shape and sensitivity analysis with respect to shape design variables. Towards the end of the grant period, a prototype automated structural analysis code designed to work with the NASA Aircraft Synthesis conceptual design code ACSYNT was delivered to NASA Ames.

  9. Proficient brain for optimal performance: the MAP model perspective

    PubMed Central

    di Fronso, Selenia; Filho, Edson; Conforto, Silvia; Schmid, Maurizio; Bortoli, Laura; Comani, Silvia; Robazza, Claudio

    2016-01-01

    Background. The main goal of the present study was to explore theta and alpha event-related desynchronization/synchronization (ERD/ERS) activity during shooting performance. We adopted the idiosyncratic framework of the multi-action plan (MAP) model to investigate different processing modes underpinning four types of performance. In particular, we were interested in examining the neural activity associated with optimal-automated (Type 1) and optimal-controlled (Type 2) performances. Methods. Ten elite shooters (6 male and 4 female) with extensive international experience participated in the study. ERD/ERS analysis was used to investigate cortical dynamics during performance. A 4 × 3 (performance types × time) repeated measures analysis of variance was performed to test the differences among the four types of performance during the three seconds preceding the shots for theta, low alpha, and high alpha frequency bands. The dependent variables were the ERD/ERS percentages in each frequency band (i.e., theta, low alpha, high alpha) for each electrode site across the scalp. This analysis was conducted on 120 shots for each participant in three different frequency bands, and the individual data were then averaged. Results. We found ERS to be mainly associated with optimal-automated performance, in agreement with the “neural efficiency hypothesis.” We also observed more ERD as related to optimal-controlled performance, in conditions of “neural adaptability” and proficient use of cortical resources. Discussion. These findings are congruent with the MAP conceptualization of four performance states, in which unique psychophysiological states underlie distinct performance-related experiences. From an applied point of view, our findings suggest that the MAP model can be used as a framework to develop performance enhancement strategies based on cognitive and neurofeedback techniques. PMID:27257557
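
    The ERD/ERS percentage itself is a simple band-power ratio. A minimal sketch, following the common Pfurtscheller-style convention in which negative values mark desynchronization (sign conventions vary across studies, so this is illustrative):

```python
import numpy as np

def erd_ers_percent(power_task, power_ref):
    """Band-power change in a task window relative to a reference
    window, in percent. With this sign convention, negative values
    indicate desynchronization (ERD) and positive values
    synchronization (ERS)."""
    return 100.0 * (power_task - power_ref) / power_ref

# Example: theta-band power before the shot vs. a resting reference.
print(erd_ers_percent(np.array([4.2, 6.1]), np.array([5.0, 5.0])))
# -> approximately [-16.  22.] (ERD at the first electrode, ERS at the second)
```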

  10. Comment on ``Analysis of optimal velocity model with explicit delay''

    NASA Astrophysics Data System (ADS)

    Davis, L. C.

    2002-09-01

    The effect of including an explicit delay time (due to driver reaction) on the optimal velocity model is studied. For a platoon of vehicles to avoid collisions, many-vehicle simulations demonstrate that delay times must be well below the critical delay time determined by a linear analysis for the response of a single vehicle. Safe platoons require rather small delay times, substantially smaller than typical reaction times of drivers. The present results do not support the conclusion of Bando et al. [M. Bando, K. Hasebe, K. Nakanishi, and A. Nakayama, Phys. Rev. E 58, 5429 (1998)] that explicit delay plays no essential role.

  11. Comment on "Analysis of optimal velocity model with explicit delay".

    PubMed

    Davis, L C

    2002-09-01

    The effect of including an explicit delay time (due to driver reaction) on the optimal velocity model is studied. For a platoon of vehicles to avoid collisions, many-vehicle simulations demonstrate that delay times must be well below the critical delay time determined by a linear analysis for the response of a single vehicle. Safe platoons require rather small delay times, substantially smaller than typical reaction times of drivers. The present results do not support the conclusion of Bando et al. [M. Bando, K. Hasebe, K. Nakanishi, and A. Nakayama, Phys. Rev. E 58, 5429 (1998)] that explicit delay plays no essential role.
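
    The model under discussion is compact enough to simulate directly. A minimal Euler-stepped sketch of the optimal velocity model with an explicit reaction delay tau, using a Bando-style OV function (parameters illustrative, not those of the Comment):

```python
import numpy as np
from collections import deque

# Optimal velocity model with an explicit reaction delay tau:
#   dv_n/dt = a * [ V(h_n(t - tau)) - v_n(t - tau) ],
# where h_n = x_{n-1} - x_n is the headway to the preceding car.
a, tau, dt, n_cars, steps = 1.0, 0.2, 0.01, 10, 20000
V = lambda h: np.tanh(h - 2.0) + np.tanh(2.0)   # Bando OV function

lag = int(round(tau / dt))
x = np.arange(n_cars)[::-1] * 2.5               # lead car (index 0) in front
v = np.full(n_cars, V(2.5))                     # steady-state speeds ...
v[-1] -= 0.5                                    # ... with the last car perturbed
hist = deque([(x, v)] * (lag + 1), maxlen=lag + 1)

for _ in range(steps):
    xd, vd = hist[0]                            # state at t - tau
    h = np.empty(n_cars)
    h[1:] = xd[:-1] - xd[1:]                    # headways within the platoon
    h[0] = 2.5                                  # lead car sees a fixed headway
    v = v + a * (V(h) - vd) * dt                # rebind; stored history intact
    x = x + v * dt
    hist.append((x, v))

print(v)   # does the perturbation damp out or grow?
```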

  12. Particle Swarm Optimization with Watts-Strogatz Model

    NASA Astrophysics Data System (ADS)

    Zhu, Zhuanghua

    Particle swarm optimization (PSO) is a popular swarm intelligence method that simulates animal social behaviors. Recent studies show that this type of social behavior is a complex system; however, in most variants of PSO all individuals lie in a fixed topology, which conflicts with this natural phenomenon. Therefore, in this paper a new variant of PSO combined with the Watts-Strogatz small-world topology model, called WSPSO, is proposed. In WSPSO, the topology is changed according to Watts-Strogatz rules throughout the whole evolutionary process. Simulation results show the proposed algorithm is effective and efficient.
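
    A toy sketch of the idea (illustrative, not the paper's WSPSO code): build a Watts-Strogatz neighborhood graph, let each particle follow the best position in its neighborhood, and re-draw the topology as the run proceeds:

```python
import numpy as np

rng = np.random.default_rng(1)

def watts_strogatz(n, k, p):
    """Boolean adjacency of a Watts-Strogatz small-world graph: a ring
    lattice with k neighbors per side, each edge rewired with probability p."""
    adj = np.zeros((n, n), dtype=bool)
    for i in range(n):
        for j in range(1, k + 1):
            t = (i + j) % n
            if rng.random() < p:            # rewire this lattice edge
                t = int(rng.integers(n))
                while t == i or adj[i, t]:
                    t = int(rng.integers(n))
            adj[i, t] = adj[t, i] = True
    return adj

def sphere(x):                               # toy objective to minimize
    return (x ** 2).sum(axis=-1)

n, dim, w, c1, c2 = 30, 10, 0.7, 1.5, 1.5
pos = rng.uniform(-5.0, 5.0, (n, dim))
vel = np.zeros((n, dim))
pbest, pcost = pos.copy(), sphere(pos)

for _ in range(500):
    adj = watts_strogatz(n, 2, 0.1)          # topology re-drawn during the run
    nbr = adj | np.eye(n, dtype=bool)        # neighborhoods include self
    lbest = np.empty_like(pos)
    for i in range(n):                       # best personal best in neighborhood
        idx = np.flatnonzero(nbr[i])
        lbest[i] = pbest[idx[np.argmin(pcost[idx])]]
    r1, r2 = rng.random((2, n, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (lbest - pos)
    pos = pos + vel
    cost = sphere(pos)
    better = cost < pcost
    pbest[better], pcost[better] = pos[better], cost[better]

print(pcost.min())
```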

  13. Numerical Modeling and Optimization of Warm-water Heat Sinks

    NASA Astrophysics Data System (ADS)

    Hadad, Yaser; Chiarot, Paul

    2015-11-01

    For cooling in large data-centers and supercomputers, water is increasingly replacing air as the working fluid in heat sinks. Utilizing water provides unique capabilities; for example: higher heat capacity, Prandtl number, and convection heat transfer coefficient. The use of warm, rather than chilled, water has the potential to provide increased energy efficiency. The geometric and operating parameters of the heat sink govern its performance. Numerical modeling is used to examine the influence of geometry and operating conditions on key metrics such as thermal and flow resistance. This model also facilitates studies on cooling of electronic chip hot spots and failure scenarios. We report on the optimal parameters for a warm-water heat sink to achieve maximum cooling performance.

  14. Mathematical programming models for determining the optimal location of beehives.

    PubMed

    Gavina, Maica Krizna A; Rabajante, Jomar F; Cervancia, Cleofas R

    2014-05-01

    Farmers frequently decide where to locate the colonies of their domesticated eusocial bees, especially given the following mutually exclusive scenarios: (i) there are limited nectar and pollen sources within the vicinity of the apiary that cause competition among foragers; and (ii) there are fewer pollinators compared to the number of inflorescences, which may lead to suboptimal pollination of crops. We hypothesize that optimally distributing the beehives in the apiary can help address the two scenarios stated above. In this paper, we develop quantitative models (specifically using linear programming) for addressing the two given scenarios. We formulate models involving the following factors: (i) fuzzy preference of the beekeeper; (ii) number of available colonies; (iii) unknown-but-bounded strength of colonies; (iv) probabilistic carrying capacity of the plant clusters; and (v) spatial orientation of the apiary.
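
    For flavor, a toy linear program in the same spirit (illustrative numbers; the paper's actual models also handle fuzzy preferences and probabilistic capacities, which are omitted here):

```python
from scipy.optimize import linprog

# Toy instance: allocate 10 available colonies among 4 plant clusters.
# v[j]: pollination value per colony at cluster j; cap[j]: carrying
# capacity of cluster j (most colonies it supports without forager
# competition). All numbers illustrative.
v = [5.0, 4.0, 2.5, 1.0]
cap = [3, 4, 6, 8]
n_hives = 10

res = linprog(
    c=[-vj for vj in v],                  # linprog minimizes, so negate
    A_ub=[[1, 1, 1, 1]],                  # total placed colonies ...
    b_ub=[n_hives],                       # ... cannot exceed those owned
    bounds=list(zip([0] * 4, cap)),       # respect carrying capacities
)
print(res.x)                              # e.g. [3. 4. 3. 0.]
```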

  15. Non-convex Statistical Optimization for Sparse Tensor Graphical Model

    PubMed Central

    Sun, Wei; Wang, Zhaoran; Liu, Han; Cheng, Guang

    2016-01-01

    We consider the estimation of sparse graphical models that characterize the dependency structure of high-dimensional tensor-valued data. To facilitate the estimation of the precision matrix corresponding to each way of the tensor, we assume the data follow a tensor normal distribution whose covariance has a Kronecker product structure. The penalized maximum likelihood estimation of this model involves minimizing a non-convex objective function. In spite of the non-convexity of this estimation problem, we prove that an alternating minimization algorithm, which iteratively estimates each sparse precision matrix while fixing the others, attains an estimator with the optimal statistical rate of convergence as well as consistent graph recovery. Notably, such an estimator achieves estimation consistency with only one tensor sample, a result not established in previous work. Our theoretical results are backed by thorough numerical studies.

  16. Recent developments in equivalent plate modeling for wing shape optimization

    NASA Technical Reports Server (NTRS)

    Livne, Eli

    1993-01-01

    A new technique for structural modeling of airplane wings is presented taking transverse shear effects into account. The kinematic assumptions of first order shear deformation plate theory in combination with numerical analysis based on simple polynomials which define geometry, construction and displacement approximations lead to analytical expressions for elements of the stiffness and mass matrices and load vector. Contributions from the cover skins, spar and rib caps and spar and rib webs are included as well as concentrated springs and concentrated masses. Limitations of current equivalent plate wing modeling techniques based on classical plate theory are discussed, and the improved accuracy of the new equivalent plate technique is demonstrated through comparison to finite element analysis and test results. Analytical derivatives of stiffness, mass and load terms with respect to wing shape lead to analytic sensitivities of displacements, stresses and natural modes with respect to planform shape and depth distribution. This makes the new capability an effective structural tool for wing shape optimization.

  17. Leukocyte Motility Models Assessed through Simulation and Multi-objective Optimization-Based Model Selection

    PubMed Central

    Bailey, Jacqueline; Timmis, Jon; Chtanova, Tatyana

    2016-01-01

    The advent of two-photon microscopy now reveals unprecedented, detailed spatio-temporal data on cellular motility and interactions in vivo. Understanding cellular motility patterns is key to gaining insight into the development and possible manipulation of the immune response. Computational simulation has become an established technique for understanding immune processes and evaluating hypotheses in the context of experimental data, and there is clear scope to integrate microscopy-informed motility dynamics. However, determining which motility model best reflects in vivo motility is non-trivial: 3D motility is an intricate process requiring several metrics to characterize. This complicates model selection and parameterization, which must be performed against several metrics simultaneously. Here we evaluate Brownian motion, Lévy walk and several correlated random walks (CRWs) against the motility dynamics of neutrophils and lymph node T cells under inflammatory conditions by simultaneously considering cellular translational and turn speeds, and meandering indices. Heterogeneous cells exhibiting a continuum of inherent translational speeds and directionalities comprise both datasets, a feature significantly improving capture of in vivo motility when simulated as a CRW. Furthermore, translational and turn speeds are inversely correlated, and the corresponding CRW simulation again improves capture of our in vivo data, albeit to a lesser extent. In contrast, Brownian motion poorly reflects our data. Lévy walk is competitive in capturing some aspects of neutrophil motility, but T cell directional persistence only, therein highlighting the importance of evaluating models against several motility metrics simultaneously. This we achieve through novel application of multi-objective optimization, wherein each model is independently implemented and then parameterized to identify optimal trade-offs in performance against each metric. The resultant Pareto fronts of optimal

  18. Leukocyte Motility Models Assessed through Simulation and Multi-objective Optimization-Based Model Selection.

    PubMed

    Read, Mark N; Bailey, Jacqueline; Timmis, Jon; Chtanova, Tatyana

    2016-09-01

    The advent of two-photon microscopy now reveals unprecedented, detailed spatio-temporal data on cellular motility and interactions in vivo. Understanding cellular motility patterns is key to gaining insight into the development and possible manipulation of the immune response. Computational simulation has become an established technique for understanding immune processes and evaluating hypotheses in the context of experimental data, and there is clear scope to integrate microscopy-informed motility dynamics. However, determining which motility model best reflects in vivo motility is non-trivial: 3D motility is an intricate process requiring several metrics to characterize. This complicates model selection and parameterization, which must be performed against several metrics simultaneously. Here we evaluate Brownian motion, Lévy walk and several correlated random walks (CRWs) against the motility dynamics of neutrophils and lymph node T cells under inflammatory conditions by simultaneously considering cellular translational and turn speeds, and meandering indices. Heterogeneous cells exhibiting a continuum of inherent translational speeds and directionalities comprise both datasets, a feature significantly improving capture of in vivo motility when simulated as a CRW. Furthermore, translational and turn speeds are inversely correlated, and the corresponding CRW simulation again improves capture of our in vivo data, albeit to a lesser extent. In contrast, Brownian motion poorly reflects our data. Lévy walk is competitive in capturing some aspects of neutrophil motility, but T cell directional persistence only, therein highlighting the importance of evaluating models against several motility metrics simultaneously. This we achieve through novel application of multi-objective optimization, wherein each model is independently implemented and then parameterized to identify optimal trade-offs in performance against each metric. The resultant Pareto fronts of optimal
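
    A minimal correlated-random-walk sketch with heterogeneous speeds, together with the meandering index used as one of the evaluation metrics (illustrative parameters, not the fitted ones):

```python
import numpy as np

rng = np.random.default_rng(7)

def crw_track(n_steps, mean_speed, persistence):
    """2D correlated random walk: successive headings are correlated
    through a persistence parameter in [0, 1); persistence = 0 gives
    uncorrelated (Brownian-like) headings."""
    turns = rng.normal(0.0, (1.0 - persistence) * np.pi, n_steps)
    heading = np.cumsum(turns)
    speeds = rng.gamma(4.0, mean_speed / 4.0, n_steps)  # heterogeneous speeds
    steps = speeds[:, None] * np.column_stack([np.cos(heading),
                                               np.sin(heading)])
    return np.vstack([[0.0, 0.0], np.cumsum(steps, axis=0)])

def meandering_index(track):
    """Net displacement over total path length (1 = perfectly straight)."""
    path = np.linalg.norm(np.diff(track, axis=0), axis=1).sum()
    return np.linalg.norm(track[-1] - track[0]) / path

track = crw_track(200, mean_speed=5.0, persistence=0.8)
print(meandering_index(track))
```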

  19. Three essays on multi-level optimization models and applications

    NASA Astrophysics Data System (ADS)

    Rahdar, Mohammad

    The general form of a multi-level mathematical programming problem is a set of nested optimization problems, in which each level controls a series of decision variables independently. However, the value of the decision variables may also impact the objective functions of other levels. A two-level model is called a bilevel model and can be considered as a Stackelberg game with a leader and a follower: the leader anticipates the response of the follower and optimizes its objective function, and the follower then reacts to the leader's action. The multi-level decision-making model has many real-world applications such as government decisions, energy policies, market economics, and network design. However, there is a lack of capable algorithms for solving medium- and large-scale problems of these types. The dissertation is devoted to both theoretical research and applications of multi-level mathematical programming models, and consists of three parts, each in a paper format. The first part studies the renewable energy portfolio under two major renewable energy policies. The potential competition for biomass for the growth of the renewable energy portfolio in the United States and other interactions between the two policies over the next twenty years are investigated. This problem mainly has two levels of decision makers: the government/policy makers and the biofuel producers/electricity generators/farmers. We focus on the lower-level problem to predict the amount of capacity expansion, fuel production, and power generation. In the second part, we address uncertainty over demand and lead time in a multi-stage mathematical programming problem. We propose a two-stage tri-level optimization model based on the rolling horizon approach to reduce the dimensionality of the multi-stage problem. In the third part of the dissertation, we introduce a new branch and bound algorithm to solve bilevel linear programming problems. The total time is reduced by solving a smaller relaxation
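
    In standard notation, the bilevel structure described above is the nested program below: the leader chooses x anticipating the follower's optimal response y*, with F, G the upper-level objective and constraints and f, g the lower-level ones:

```latex
\begin{aligned}
\min_{x \in X}\quad & F(x,\, y^{*}) \\
\text{s.t.}\quad    & G(x,\, y^{*}) \le 0, \\
                    & y^{*} \in \arg\min_{y \in Y}\,\{\, f(x, y) \;:\; g(x, y) \le 0 \,\}.
\end{aligned}
```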

  20. Computer model for characterizing, screening, and optimizing electrolyte systems

    SciTech Connect

    Gering, Kevin L.

    2015-06-15

    Electrolyte systems in contemporary batteries are tasked with operating under increasing performance requirements. All battery operation is in some way tied to the electrolyte and how it interacts with various regions within the cell environment. Because the electrolyte plays a crucial role in battery performance and longevity, it is imperative that accurate, physics-based models be developed that will characterize key electrolyte properties while keeping pace with the increasing complexity of these liquid systems. Advanced models are needed since laboratory measurements require significant resources to carry out for even a modest experimental matrix. The Advanced Electrolyte Model (AEM) developed at the INL is a proven capability designed to explore molecular-to-macroscale level aspects of electrolyte behavior, and can be used to drastically reduce the time required to characterize and optimize electrolytes. Although it is applied most frequently to lithium-ion battery systems, it is general in its theory and can be used toward numerous other targets and intended applications. This capability is unique, powerful, relevant to present and future electrolyte development, and without peer. It redefines electrolyte modeling for highly complex contemporary systems, wherein significant steps have been taken to capture the reality of electrolyte behavior in the electrochemical cell environment. This capability can have a very positive impact on accelerating domestic battery development to support aggressive vehicle and energy goals in the 21st century.

  1. Augmenting Parametric Optimal Ascent Trajectory Modeling with Graph Theory

    NASA Technical Reports Server (NTRS)

    Dees, Patrick D.; Zwack, Matthew R.; Edwards, Stephen; Steffens, Michael

    2016-01-01

    It has been well documented that decisions made in the early stages of Conceptual and Pre-Conceptual design commit up to 80% of total Life-Cycle Cost (LCC) while engineers know the least about the product they are designing [1]. Once within Preliminary and Detailed design, however, changes to the design become far more difficult to enact in terms of both cost and schedule. Primarily this has been due to a lack of detailed data usually uncovered later, during the Preliminary and Detailed design phases. In our current budget-constrained environment, making decisions within Conceptual and Pre-Conceptual design that minimize LCC while meeting requirements is paramount to a program's success. Within the arena of launch vehicle design, optimizing the ascent trajectory is critical for minimizing the costs present within such concerns as propellant, aerodynamic, aeroheating, and acceleration loads while meeting requirements such as payload delivered to a desired orbit. In order to optimize the vehicle design, its constraints and requirements must be known; however, as the design cycle proceeds it is all but inevitable that the conditions will change. Upon that change, the previously optimized trajectory may no longer be optimal, or may no longer meet design requirements. The current paradigm for adjusting to these updates is generating point solutions for every change in the design's requirements [2]. This can be a tedious, time-consuming task, as changes in virtually any piece of a launch vehicle's design can have a disproportionately large effect on the ascent trajectory, since the solution space of the trajectory optimization problem is both non-linear and multimodal [3]. In addition, an industry standard tool, Program to Optimize Simulated Trajectories (POST), requires an expert analyst to produce simulated trajectories that are feasible and optimal [4]. In a previous publication the authors presented a method for combating these challenges [5]. In order to bring more detailed information

  2. 20nm CMP model calibration with optimized metrology data and CMP model applications

    NASA Astrophysics Data System (ADS)

    Katakamsetty, Ushasree; Koli, Dinesh; Yeo, Sky; Hui, Colin; Ghulghazaryan, Ruben; Aytuna, Burak; Wilson, Jeff

    2015-03-01

    Chemical Mechanical Polishing (CMP) is an essential process for planarization of the wafer surface in semiconductor manufacturing. The CMP process helps to produce smaller ICs with more electronic circuits, improving chip speed and performance. CMP also helps to increase throughput and yield, which reduces an IC manufacturer's total production costs. A CMP simulation model helps to predict CMP manufacturing hotspots early and to minimize CMP and CMP-induced lithography and etch defects [2]. In the advanced process nodes, conventional dummy fill insertion for uniform density is not able to address all the CMP short-range, long-range, multi-layer stacking and other effects like pad conditioning, slurry selectivity, etc. In this paper, we present the flow for 20nm CMP modeling using Mentor Graphics CMP modeling tools to build a multilayer Cu-CMP model and study hotspots. We present the inputs required for good CMP model calibration, the challenges faced with metrology collections, and techniques to optimize the wafer cost. We showcase the CMP model validation results and the model applications to predict multilayer topography accumulation effects for hotspot detection. We provide the flow for early detection of CMP hotspots with Calibre CMPAnalyzer to improve Design-for-Manufacturability (DFM) robustness.

  3. Optimizing Crawler4j using MapReduce Programming Model

    NASA Astrophysics Data System (ADS)

    Siddesh, G. M.; Suresh, Kavya; Madhuri, K. Y.; Nijagal, Madhushree; Rakshitha, B. R.; Srinivasa, K. G.

    2016-08-01

    The World Wide Web is a decentralized system that consists of a repository of information in the form of web pages. These web pages act as a source of information or data in the present analytics world. Web crawlers are used for extracting useful information from web pages for different purposes. Firstly, they are used in web search engines, where web pages are indexed to form a corpus of information that users can query. Secondly, they are used for web archiving, where web pages are stored for later analysis phases. Thirdly, they can be used for web mining, where web pages are monitored for copyright purposes. The amount of information a web crawler can process needs to be increased by exploiting the capabilities of modern parallel processing technologies. To address the problems of parallelism and crawling throughput, this work proposes to optimize Crawler4j using the Hadoop MapReduce programming model by parallelizing the processing of large input data. Crawler4j is a web crawler that retrieves useful information about the pages that it visits. Crawler4j coupled with the data and computational parallelism of the Hadoop MapReduce programming model improves the throughput and accuracy of web crawling. The experimental results demonstrate that the proposed solution achieves significant improvements in performance and throughput. Hence the proposed approach intends to carve out a new methodology for optimizing web crawling by achieving significant performance gains.
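
    The decomposition can be illustrated with a tiny, self-contained map/shuffle/reduce pass in Python (Crawler4j itself is a Java library; this is only a schematic of the programming model, with made-up URLs):

```python
from collections import defaultdict
from itertools import chain

# Pages fetched by independent crawler instances (made-up URLs).
pages = [
    ("http://a.example/1", ["http://b.example/x", "http://a.example/2"]),
    ("http://b.example/x", ["http://a.example/1"]),
    ("http://a.example/2", []),
]

def map_phase(url, outlinks):
    """Map: emit each discovered outlink keyed by host, so frontier
    size and per-host load can be aggregated in parallel."""
    for link in outlinks:
        yield (link.split("/")[2], 1)

def reduce_phase(host, counts):
    """Reduce: total links discovered per host."""
    return host, sum(counts)

# Shuffle: group mapper output by key (Hadoop does this at scale).
groups = defaultdict(list)
for key, val in chain.from_iterable(map_phase(u, ls) for u, ls in pages):
    groups[key].append(val)

print([reduce_phase(h, c) for h, c in groups.items()])
# -> [('b.example', 1), ('a.example', 2)]
```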

  4. Modeling marine surface microplastic transport to assess optimal removal locations

    NASA Astrophysics Data System (ADS)

    Sherman, Peter; van Sebille, Erik

    2016-01-01

    Marine plastic pollution is an ever-increasing problem that demands immediate mitigation and reduction plans. Here, a model based on satellite-tracked buoy observations, scaled to a large data set of observations on microplastic from surface trawls, was used to simulate the transport of plastics floating on the ocean surface from 2015 to 2025, with the goal of assessing the optimal marine microplastic removal locations for two scenarios: removing the most surface microplastic, and reducing the impact on ecosystems, using plankton growth as a proxy. The simulations show that the optimal removal locations are primarily located off the coast of China and in the Indonesian Archipelago for both scenarios. Our estimates show that 31% of the modeled microplastic mass can be removed by 2025 using 29 plastic collectors operating at a 45% capture efficiency from these locations, compared to only 17% when the 29 plastic collectors are moored in the North Pacific garbage patch, between Hawaii and California. The overlap of ocean surface microplastics and phytoplankton growth can be reduced by 46% at our proposed locations, while sinks in the North Pacific can only reduce the overlap by 14%. These results are an indication that oceanic plastic removal might be more effective in removing a greater microplastic mass and in reducing potential harm to marine life when performed closer to shore than inside the plastic accumulation zones in the centers of the gyres.

  5. A new mathematical model in space optimization: A case study

    NASA Astrophysics Data System (ADS)

    Abdullah, Kamilah; Kamis, Nor Hanimah; Sha'ari, Nor Shahida; Muhammad Halim, Nurul Suhada; Hashim, Syaril Naqiah

    2013-04-01

    Most higher education institutions provide an area known as a learning centre where their students can study or hold group discussions. However, some learning centres are not provided with enough tables and seats to accommodate the students sufficiently. This study proposes a new mathematical model for optimizing the number of tables and seats at Laman Najib, Faculty of Computer and Mathematical Sciences, Universiti Teknologi MARA (UiTM) Shah Alam. An improvement of space capacity, accommodating the maximum number of students who can use Laman Najib at the same time, has been made by considering the type and size of tables that are appropriate for students' discussions. Our finding is compared with the result of the Simplex method of linear programming to ensure that our new model is valid and consistent with other existing approaches. In conclusion, we found that round tables with six seats provide the maximum number of students who can use Laman Najib for their discussions or group study. Both methods are also practical to use as alternative approaches for solving other space optimization problems.

  6. Model-driven optimization of multicomponent self-assembly processes.

    PubMed

    Korevaar, Peter A; Grenier, Christophe; Markvoort, Albert J; Schenning, Albertus P H J; de Greef, Tom F A; Meijer, E W

    2013-10-22

    Here, we report an engineering approach toward multicomponent self-assembly processes by developing a methodology to circumvent spurious, metastable assemblies. The formation of metastable aggregates often hampers self-assembly of molecular building blocks into the desired nanostructures. Strategies are explored to master the pathway complexity and avoid off-pathway aggregates by optimizing the rate of assembly along the correct pathway. As a model system, we study the coassembly of two monomers, the R- and S-chiral enantiomers of a π-conjugated oligo(p-phenylene vinylene) derivative. Coassembly kinetics are analyzed by developing a kinetic model, which reveals that the initial assembly of metastable structures buffers free monomers and thereby slows the formation of thermodynamically stable assemblies. These metastable assemblies exert greater influence on the thermodynamically favored self-assembly pathway as the ratio between the two monomers approaches 1:1, in agreement with experimental results. Moreover, competition by metastable assemblies is highly temperature dependent and hampers the assembly of equilibrium nanostructures most effectively at intermediate temperatures. We demonstrate that the rate of the assembly process may be optimized by tuning the cooling rate. Finally, it is shown by simulation that increasing the driving force for assembly stepwise, by changing the solvent composition, may circumvent metastable pathways and thereby force the assembly process directly into the correct pathway.

  7. Modeling and multidimensional optimization of a tapered free electron laser

    NASA Astrophysics Data System (ADS)

    Jiao, Y.; Wu, J.; Cai, Y.; Chao, A. W.; Fawley, W. M.; Frisch, J.; Huang, Z.; Nuhn, H.-D.; Pellegrini, C.; Reiche, S.

    2012-05-01

    Energy extraction efficiency of a free electron laser (FEL) can be greatly increased using a tapered undulator and self-seeding. However, the extraction rate is limited by various effects that eventually lead to saturation of the peak intensity and power. To better understand these effects, we develop a model extending the Kroll-Morton-Rosenbluth, one-dimensional theory to include the physics of diffraction, optical guiding, and radially resolved particle trapping. The predictions of the model agree well with that of the GENESIS single-frequency numerical simulations. In particular, we discuss the evolution of the electron-radiation interaction along the tapered undulator and show that the decreasing of refractive guiding is the major cause of the efficiency reduction, particle detrapping, and then saturation of the radiation power. With this understanding, we develop a multidimensional optimization scheme based on GENESIS simulations to increase the energy extraction efficiency via an improved taper profile and variation in electron beam radius. We present optimization results for hard x-ray tapered FELs, and the dependence of the maximum extractable radiation power on various parameters of the initial electron beam, radiation field, and the undulator system. We also study the effect of the sideband growth in a tapered FEL. Such growth induces increased particle detrapping and thus decreased refractive guiding that together strongly limit the overall energy extraction efficiency.

  8. Optimal spatiotemporal reduced order modeling for nonlinear dynamical systems

    NASA Astrophysics Data System (ADS)

    LaBryer, Allen

    Proposed in this dissertation is a novel reduced order modeling (ROM) framework called optimal spatiotemporal reduced order modeling (OPSTROM) for nonlinear dynamical systems. The OPSTROM approach is a data-driven methodology for the synthesis of multiscale reduced order models (ROMs) which can be used to enhance the efficiency and reliability of under-resolved simulations for nonlinear dynamical systems. In the context of nonlinear continuum dynamics, the OPSTROM approach relies on the concept of embedding subgrid-scale models into the governing equations in order to account for the effects due to unresolved spatial and temporal scales. Traditional ROMs neglect these effects, whereas most other multiscale ROMs account for these effects in ways that are inconsistent with the underlying spatiotemporal statistical structure of the nonlinear dynamical system. The OPSTROM framework presented in this dissertation begins with a general system of partial differential equations, which are modified for an under-resolved simulation in space and time with an arbitrary discretization scheme. Basic filtering concepts are used to demonstrate the manner in which residual terms, representing subgrid-scale dynamics, arise with a coarse computational grid. Models for these residual terms are then developed by accounting for the underlying spatiotemporal statistical structure in a consistent manner. These subgrid-scale models are designed to provide closure by accounting for the dynamic interactions between spatiotemporal macroscales and microscales which are otherwise neglected in a ROM. For a given resolution, the predictions obtained with the modified system of equations are optimal (in a mean-square sense) as the subgrid-scale models are based upon principles of mean-square error minimization, conditional expectations and stochastic estimation. Methods are suggested for efficient model construction, appraisal, error measure, and implementation with a couple of well-known time

  9. Optimizing cardiovascular benefits of exercise: a review of rodent models.

    PubMed

    Davis, Brittany; Moriguchi, Takeshi; Sumpio, Bauer

    2013-03-01

    Although research unanimously maintains that exercise can ward off cardiovascular disease (CVD), the optimal type, duration, intensity, and combination of forms are yet not clear. In our review of existing rodent-based studies on exercise and cardiovascular health, we attempt to find the optimal forms, intensities, and durations of exercise. Using Scopus and Medline, a literature review of English language comparative journal studies of cardiovascular benefits and exercise was performed. This review examines the existing literature on rodent models of aerobic, anaerobic, and power exercise and compares the benefits of various training forms, intensities, and durations. The rodent studies reviewed in this article correlate with reports on human subjects that suggest regular aerobic exercise can improve cardiac and vascular structure and function, as well as lipid profiles, and reduce the risk of CVD. Findings demonstrate an abundance of rodent-based aerobic studies, but a lack of anaerobic and power forms of exercise, as well as comparisons of these three components of exercise. Thus, further studies must be conducted to determine a truly optimal regimen for cardiovascular health.

  10. Optimization of Forward Wave Modeling on Contemporary HPC Architectures

    SciTech Connect

    Krueger, Jens; Micikevicius, Paulius; Williams, Samuel

    2012-07-20

    Reverse Time Migration (RTM) is one of the main approaches in the seismic processing industry for imaging the subsurface structure of the Earth. While RTM provides qualitative advantages over its predecessors, it has a high computational cost warranting implementation on HPC architectures. We focus on three progressively more complex kernels extracted from RTM: for isotropic (ISO), vertical transverse isotropic (VTI) and tilted transverse isotropic (TTI) media. In this work, we examine performance optimization of forward wave modeling, which describes the computational kernels used in RTM, on emerging multi- and manycore processors, and introduce a novel common subexpression elimination optimization for TTI kernels. We compare attained performance and energy efficiency in both the single-node and distributed-memory environments in order to satisfy industry's demands for fidelity, performance, and energy efficiency. Moreover, we discuss the interplay between architecture (chip and system) and optimizations (both on-node computation and inter-node communication), highlighting the importance of NUMA-aware approaches to MPI communication. Ultimately, our results show we can improve CPU energy efficiency by more than 10× on Magny Cours nodes, while acceleration via multiple GPUs can surpass the energy-efficient Intel Sandy Bridge by as much as 3.6×.

  11. Multi-level systems modeling and optimization for novel aircraft

    NASA Astrophysics Data System (ADS)

    Subramanian, Shreyas Vathul

    This research combines the disciplines of system-of-systems (SoS) modeling, platform-based design, optimization and evolving design spaces to achieve a novel capability for designing solutions to key aeronautical mission challenges. A central innovation in this approach is the confluence of multi-level modeling (from sub-systems to the aircraft system to aeronautical system-of-systems) in a way that coordinates the appropriate problem formulations at each level and enables parametric search in design libraries for solutions that satisfy level-specific objectives. The work here addresses the topic of SoS optimization and discusses problem formulation, solution strategy, the need for new algorithms that address special features of this problem type, and also demonstrates these concepts using two example application problems - a surveillance UAV swarm problem, and the design of noise optimal aircraft and approach procedures. This topic is critical since most new capabilities in aeronautics will be provided not just by a single air vehicle, but by aeronautical Systems of Systems (SoS). At the same time, many new aircraft concepts are pressing the boundaries of cyber-physical complexity through the myriad of dynamic and adaptive sub-systems that are rising up the TRL (Technology Readiness Level) scale. This compositional approach is envisioned to be active at three levels: validated sub-systems are integrated to form conceptual aircraft, which are further connected with others to perform a challenging mission capability at the SoS level. While these multiple levels represent layers of physical abstraction, each discipline is associated with tools of varying fidelity forming strata of 'analysis abstraction'. Further, the design (composition) will be guided by a suitable hierarchical complexity metric formulated for the management of complexity in both the problem (as part of the generative procedure and selection of fidelity level) and the product (i.e., is the mission

  12. Tool Steel Heat Treatment Optimization Using Neural Network Modeling

    NASA Astrophysics Data System (ADS)

    Podgornik, Bojan; Belič, Igor; Leskovšek, Vojteh; Godec, Matjaz

    2016-11-01

    Optimization of tool steel properties and the corresponding heat treatment is mainly based on a trial-and-error approach, which requires tremendous experimental work and resources. Therefore, there is a huge need for tools allowing prediction of the mechanical properties of tool steels as a function of composition and heat treatment process variables. The aim of the present work was to explore the potential and possibilities of artificial neural network-based modeling to select and optimize vacuum heat treatment conditions depending on the hot work tool steel composition and required properties. In the current case, training of the feedforward neural network, with an error backpropagation training scheme and four layers of neurons (8-20-20-2), was based on experimentally obtained tempering diagrams for ten different hot work tool steel compositions and at least two austenitizing temperatures. Results show that this type of modeling can be successfully used for detailed and multifunctional analysis of different influential parameters as well as to optimize the heat treatment process of hot work tool steels depending on the composition. In terms of composition, V was found to be the most beneficial alloying element, increasing both hardness and fracture toughness of hot work tool steel; Si, Mn, and Cr increase hardness but lead to reduced fracture toughness, while Mo has the opposite effect. An optimum concentration providing high KIc/HRC ratios would include 0.75 pct Si, 0.4 pct Mn, 5.1 pct Cr, 1.5 pct Mo, and 0.5 pct V, with the optimum heat treatment performed at lower austenitizing and intermediate tempering temperatures.
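
    A schematic of such a network in Python/scikit-learn, mirroring the 8-20-20-2 layout described above (scikit-learn's MLPRegressor is a backpropagation-trained feedforward net, though not the authors' implementation; the training data below are random placeholders, since the measured tempering diagrams are not reproduced here):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

# Eight inputs (composition in wt. pct plus austenitizing and
# tempering temperatures), two outputs (hardness HRC, fracture
# toughness KIc), two hidden layers of 20 neurons each.
rng = np.random.default_rng(0)
X = rng.uniform(
    low=[0.3, 0.2, 4.0, 1.0, 0.3, 0.3, 990.0, 500.0],
    high=[1.0, 0.5, 5.5, 3.0, 1.0, 0.6, 1060.0, 650.0],
    size=(200, 8),
)   # Si, Mn, Cr, Mo, V, C, T_aust, T_temper (placeholder ranges)
y = np.column_stack([                    # toy property surfaces
    55.0 - 0.02 * (X[:, 7] - 500.0) + 4.0 * X[:, 4],
    30.0 + 6.0 * X[:, 4] - 3.0 * X[:, 0],
])

scaler = StandardScaler().fit(X)
model = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000,
                     random_state=0).fit(scaler.transform(X), y)
print(model.predict(scaler.transform(X[:1])))   # [HRC, KIc] estimate
```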

  13. Optimizing nanomedicine pharmacokinetics using physiologically based pharmacokinetics modelling

    PubMed Central

    Moss, Darren Michael; Siccardi, Marco

    2014-01-01

    The delivery of therapeutic agents is characterized by numerous challenges including poor absorption, low penetration in target tissues and non-specific dissemination in organs, leading to toxicity or poor drug exposure. Several nanomedicine strategies have emerged as an advanced approach to enhance drug delivery and improve the treatment of several diseases. Numerous processes mediate the pharmacokinetics of nanoformulations, with the absorption, distribution, metabolism and elimination (ADME) being poorly understood and often differing substantially from traditional formulations. Understanding how nanoformulation composition and physicochemical properties influence drug distribution in the human body is of central importance when developing future treatment strategies. A helpful pharmacological tool to simulate the distribution of nanoformulations is represented by physiologically based pharmacokinetics (PBPK) modelling, which integrates system data describing a population of interest with drug/nanoparticle in vitro data through a mathematical description of ADME. The application of PBPK models for nanomedicine is in its infancy and characterized by several challenges. The integration of property–distribution relationships in PBPK models may benefit nanomedicine research, giving opportunities for innovative development of nanotechnologies. PBPK modelling has the potential to improve our understanding of the mechanisms underpinning nanoformulation disposition and allow for more rapid and accurate determination of their kinetics. This review provides an overview of the current knowledge of nanomedicine distribution and the use of PBPK modelling in the characterization of nanoformulations with optimal pharmacokinetics. PMID:24467481
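
    As a flavor of what a PBPK description looks like at its simplest, a two-compartment, flow-limited sketch in Python (illustrative parameters; real whole-body PBPK models use many tissue compartments and nanoparticle-specific processes):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Minimal flow-limited PBPK sketch: blood plus one lumped tissue,
# perfusion-limited exchange and first-order clearance from blood.
# All parameters illustrative; a nanoformulation would chiefly alter
# the partition coefficient Kp and the clearance CL.
Q, V_b, V_t = 0.5, 5.0, 20.0    # tissue blood flow (L/h), volumes (L)
Kp, CL = 4.0, 1.0               # tissue:blood partition coeff., clearance (L/h)

def pbpk(t, y):
    C_b, C_t = y                              # blood, tissue concentrations
    flux = Q * (C_b - C_t / Kp)               # perfusion-limited exchange
    return [(-flux - CL * C_b) / V_b, flux / V_t]

sol = solve_ivp(pbpk, (0.0, 24.0), [10.0 / V_b, 0.0],   # 10 mg IV bolus
                t_eval=np.linspace(0.0, 24.0, 7))
print(sol.y[0])    # blood concentration (mg/L) over 24 h
```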

  14. A canopy-type similarity model for wind farm optimization

    NASA Astrophysics Data System (ADS)

    Markfort, Corey D.; Zhang, Wei; Porté-Agel, Fernando

    2013-04-01

    The atmospheric boundary layer (ABL) flow through and over wind farms has been found to be similar to canopy-type flows, with characteristic flow development and shear penetration length scales (Markfort et al., 2012). Wind farms capture momentum from the ABL both at the leading edge and from above. We examine this further with an analytical canopy-type model. Within the flow development region, momentum is advected into the wind farm and wake turbulence draws excess momentum in from between turbines. This spatial heterogeneity of momentum within the wind farm is characterized by large dispersive momentum fluxes. Once the flow within the farm is developed, the area-averaged velocity profile exhibits a characteristic inflection point near the top of the wind farm, similar to that of canopy-type flows. The inflected velocity profile is associated with the presence of a dominant characteristic turbulence scale, which may be responsible for a significant portion of the vertical momentum flux. Prediction of this scale is useful for determining the amount of available power for harvesting. The new model is tested with results from wind tunnel experiments, which were conducted to characterize the turbulent flow in and above model wind farms in aligned and staggered configurations. The model is useful for representing wind farms in regional scale models, for the optimization of wind farms considering wind turbine spacing and layout configuration, and for assessing the impacts of upwind wind farms on nearby wind resources. Markfort CD, W Zhang and F Porté-Agel. 2012. Turbulent flow and scalar transport through and over aligned and staggered wind farms. Journal of Turbulence. 13(1) N33: 1-36. doi:10.1080/14685248.2012.709635.

  15. Modeling and optimization of energy storage system for microgrid

    NASA Astrophysics Data System (ADS)

    Qiu, Xin

    The vanadium redox flow battery (VRB) is well suited for microgrid and renewable energy applications. This thesis provides a practical analysis of the battery itself and of its application in microgrid systems. The first paper analyzes VRB use in a microgrid system. The first part of the paper develops a reduced-order circuit model of the VRB and analyzes its experimental performance efficiency during deployment. Statistical methods and neural network approximation are used to estimate the system parameters. The second part of the paper addresses the implementation issues of the VRB application in a photovoltaic-based microgrid system. A new dc-dc converter was proposed to provide improved charging performance. The paper was published in IEEE Transactions on Smart Grid, Vol. 5, No. 4, July 2014. The second paper studies VRB use within a microgrid system from a practical perspective. A reduced-order circuit model of the VRB is introduced that includes the losses from the balance of plant, including system and environmental controls. The proposed model includes the circulation pumps and the HVAC system that regulates the environment of the VRB enclosure. In this paper, the VRB model is extended to include the ESS environmental controls, providing a more realistic efficiency profile. The paper was submitted to IEEE Transactions on Sustainable Energy. The third paper discusses the optimal control strategy when the VRB works with another type of battery in a microgrid system. The work in the first paper is extended: a high-level control strategy is developed to coordinate a lead acid battery and a VRB with reinforcement learning. The paper is to be submitted to IEEE Transactions on Smart Grid.

  16. Optimal hemodynamic response model for functional near-infrared spectroscopy

    PubMed Central

    Kamran, Muhammad A.; Jeong, Myung Yung; Mannan, Malik M. N.

    2015-01-01

    Functional near-infrared spectroscopy (fNIRS) is an emerging non-invasive brain imaging technique that measures brain activity by means of near-infrared light of 650–950 nm wavelengths. The cortical hemodynamic response (HR) differs in its attributes at different brain regions and on repetition of trials, even if the experimental paradigm is kept exactly the same. Therefore, an HR model that can estimate such variations in the response is the objective of this research. The canonical hemodynamic response function (cHRF) is modeled by two Gamma functions with six unknown parameters (four to model the shape and the other two for scale and baseline, respectively). The measured response is modeled as a linear combination of the HRF, baseline, and physiological noises (the amplitudes and frequencies of the physiological noises are also treated as unknown). An objective function is formulated as the square of the residuals, with constraints on the 12 free parameters. The problem is solved using an iterative optimization algorithm to estimate the unknown parameters in the model. Inter-subject variations in HRF and physiological noises have been estimated for better cortical functional maps. The accuracy of the algorithm has been verified using 10 real and 15 simulated data sets. Ten healthy subjects participated in the experiment, and their HRFs for finger-tapping tasks have been estimated and analyzed. The statistical significance of the estimated activity strength parameters has been verified by statistical analysis (i.e., t-value > t-critical and p-value < 0.05). PMID:26136668
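
    The double-gamma cHRF with amplitude and baseline terms is easy to write down; a sketch with commonly used default shape parameters (illustrative, not the estimated subject-specific values):

```python
import numpy as np
from scipy.stats import gamma

def hrf(t, a1=6.0, b1=1.0, a2=16.0, b2=1.0, amp=1.0, base=0.0):
    """Double-gamma canonical HRF: a peak gamma density minus a
    scaled undershoot density, plus amplitude and baseline terms
    (six free parameters, as in the model described above; the
    default shapes follow common fMRI/fNIRS practice)."""
    peak = gamma.pdf(t, a1, scale=b1)
    undershoot = gamma.pdf(t, a2, scale=b2)
    return amp * (peak - undershoot / 6.0) + base

t = np.linspace(0.0, 30.0, 301)
print(t[np.argmax(hrf(t))])   # peak latency, ~5 s with the defaults
```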

  17. Development of a coupled model of a distributed hydrological model and a rice growth model for optimizing irrigation schedule

    NASA Astrophysics Data System (ADS)

    Tsujimoto, Kumiko; Homma, Koki; Koike, Toshio; Ohta, Tetsu

    2013-04-01

    A coupled model of a distributed hydrological model and a rice growth model was developed in this study. The distributed hydrological model used in this study is the Water and Energy Budget-based Distributed Hydrological Model (WEB-DHM) developed by Wang et al. (2009). This model includes a modified SiB2 (Simple Biosphere Model, Sellers et al., 1996) and the Geomorphology-Based Hydrological Model (GBHM), and thus it can physically calculate both water and energy fluxes. The rice growth model used in this study is the Simulation Model for Rice-Weather relations (SIMRIW)-rainfed developed by Homma et al. (2009). This is an updated version of the original SIMRIW (Horie et al., 1987) and can calculate rice growth while considering the yield reduction due to water stress. The purpose of the coupling is the integration of hydrology and crop science to develop a tool to support decision making 1) for determining the necessary agricultural water resources and 2) for allocating limited water resources to various sectors. Efficient water use and optimal water allocation in the agricultural sector are necessary to balance supply and demand of limited water resources. In addition, variations in available soil moisture are the main cause of variations in rice yield. In our model, soil moisture and the Leaf Area Index (LAI) are calculated inside SIMRIW-rainfed, so that these variables can be simulated dynamically and more precisely for rice than by the more general calculations in the original WEB-DHM. At the same time, by coupling SIMRIW-rainfed with WEB-DHM, the lateral flow of soil water, the increases in soil moisture and reduction of river discharge due to irrigation, and their effects on rice growth can be calculated. Agricultural information such as planting date, rice cultivar, and fertilization amount is given in a fully distributed manner. The coupled model was validated using LAI and soil moisture in a small basin in western Cambodia (Sangker River Basin). This

  18. Pulsed pumping process optimization using a potential flow model.

    PubMed

    Tenney, C M; Lastoskie, C M

    2007-08-15

    A computational model is applied to the optimization of pulsed pumping systems for efficient in situ remediation of groundwater contaminants. In the pulsed pumping mode of operation, periodic rather than continuous pumping is used. During the pump-off or trapping phase, natural gradient flow transports contaminated groundwater into a treatment zone surrounding a line of injection and extraction wells that transect the contaminant plume. Prior to breakthrough of the contaminated water from the treatment zone, the wells are activated and the pump-on or treatment phase ensues, wherein extracted water is augmented to stimulate pollutant degradation and recirculated for a sufficient period of time to achieve mandated levels of contaminant removal. An important design consideration in pulsed pumping groundwater remediation systems is the pumping schedule adopted to best minimize operational costs for the well grid while still satisfying treatment requirements. Using an analytic two-dimensional potential flow model, optimal pumping frequencies and pumping event durations have been investigated for a set of model aquifer-well systems with different well spacings and well-line lengths, and varying aquifer physical properties. The results for homogeneous systems with greater than five wells and moderate to high pumping rates are reduced to a single, dimensionless correlation. Results for heterogeneous systems are presented graphically in terms of dimensionless parameters to serve as an efficient tool for initial design and selection of the pumping regimen best suited for pulsed pumping operation for a particular well configuration and extraction rate. In the absence of significant retardation or degradation during the pump-off phase, average pumping rates for pulsed operation were found to be greater than the continuous pumping rate required to prevent contaminant breakthrough.

  19. Metroplex Optimization Model Expansion and Analysis: The Airline Fleet, Route, and Schedule Optimization Model (AFRS-OM)

    NASA Technical Reports Server (NTRS)

    Sherry, Lance; Ferguson, John; Hoffman, Karla; Donohue, George; Beradino, Frank

    2012-01-01

    This report describes the Airline Fleet, Route, and Schedule Optimization Model (AFRS-OM), which is designed to provide insights into airline decision-making with regard to markets served, the schedule of flights on these markets, the type of aircraft assigned to each scheduled flight, load factors, airfares, and airline profits. The main inputs to the model are hedged fuel prices, airport capacity limits, and candidate markets. Embedded in the model are aircraft performance and associated cost factors, and willingness-to-pay (i.e., demand vs. airfare) curves. Case studies demonstrate the application of the model for analysis of the effects of increased capacity and changes in operating costs (e.g., fuel prices). Although there are differences between airports (due to differences in the magnitude of travel demand and sensitivity to airfare), the system is more sensitive to changes in fuel prices than to capacity. Further, the benefits of modernization in the form of increased capacity could be undermined by increases in hedged fuel prices.

  20. Modeling digital breast tomosynthesis imaging systems for optimization studies

    NASA Astrophysics Data System (ADS)

    Lau, Beverly Amy

    Digital breast tomosynthesis (DBT) is a new imaging modality for breast imaging. In tomosynthesis, multiple images of the compressed breast are acquired at different angles, and the projection view images are reconstructed to yield images of slices through the breast. One of the main problems to be addressed in the development of DBT is determining the optimal parameter settings to obtain images ideal for the detection of cancer. Since it would be unethical to irradiate women multiple times to explore potentially optimum geometries for tomosynthesis, it is ideal to use a computer simulation to generate projection images. Existing tomosynthesis models have modeled scatter and the detector without accounting for the oblique angles of incidence that tomosynthesis introduces. Moreover, these models frequently use geometry-specific physical factors measured from real systems, which severely limits the robustness of their algorithms for optimization. The goal of this dissertation was to design the framework for a computer simulation of tomosynthesis that would produce images that are sensitive to changes in acquisition parameters, so that an optimization study would be feasible. A computer physics simulation of the tomosynthesis system was developed. The x-ray source was modeled as a polychromatic spectrum based on published spectral data, and the inverse-square law was applied. Scatter was applied using a convolution method with angle-dependent scatter point spread functions (sPSFs), followed by scaling using an angle-dependent scatter-to-primary ratio (SPR). Monte Carlo simulations were used to generate sPSFs for a 5-cm breast with a 1-cm air gap. Detector effects were included through geometric propagation of the image onto layers of the detector, which were blurred using depth-dependent detector point-spread functions (PRFs). Depth-dependent PRFs were calculated every 5 microns through a 200-micron-thick CsI detector using Monte Carlo simulations. Electronic noise was added as Gaussian noise as a

  1. Multiscale modelling of hydrothermal biomass pretreatment for chip size optimization.

    PubMed

    Hosseini, Seyed Ali; Shah, Nilay

    2009-05-01

    The objective of this work is to develop a relationship between biomass chip size and the energy requirement of hydrothermal pretreatment processes using a multiscale modelling approach. The severity factor or modified severity factor is currently used to characterize some hydrothermal pretreatment methods. Although these factors enable an easy comparison of experimental results to facilitate process design and operation, they are not representative of all the factors affecting the efficiency of pretreatment, because processes with the same temperature, residence time, and pH will not have the same effect on biomass chips of different sizes. In our study, a model based on the diffusion of liquid or steam in the biomass that takes into account the interrelationship between chip size and time is developed. With the aid of the developed model, a method to find the optimum chip size that minimizes the energy requirement of the grinding and pretreatment processes is proposed. We show that with the proposed optimization method, an average saving equivalent to a 5% improvement in the yield of the biomass-to-ethanol conversion process can be achieved.
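
    The trade-off that drives the optimum can be sketched with placeholder energy terms: grinding energy grows as chips shrink, while pretreatment energy grows roughly with the square of chip size through the diffusion time. Both functions below are invented stand-ins, not the paper's calibrated model.

    ```python
    # Toy chip-size optimization; both energy terms are invented placeholders.
    from scipy.optimize import minimize_scalar

    def grinding_energy(L_mm):        # rises steeply as chips get smaller
        return 120.0 / L_mm

    def pretreatment_energy(L_mm):    # heating time ~ diffusion time ~ L^2
        return 0.8 * L_mm**2

    res = minimize_scalar(lambda L: grinding_energy(L) + pretreatment_energy(L),
                          bounds=(0.5, 50.0), method="bounded")
    print(f"optimal chip size ~ {res.x:.1f} mm (arbitrary energy units)")
    ```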

  2. Modeling the minimum enzymatic requirements for optimal cellulose conversion

    NASA Astrophysics Data System (ADS)

    den Haan, R.; van Zyl, J. M.; Harms, T. M.; van Zyl, W. H.

    2013-06-01

    Hydrolysis of cellulose is achieved by the synergistic action of endoglucanases, exoglucanases and β-glucosidases. Most cellulolytic microorganisms produce a varied array of these enzymes and the relative roles of the components are not easily defined or quantified. In this study we have used partially purified cellulases produced heterologously in the yeast Saccharomyces cerevisiae to increase our understanding of the roles of some of these components. CBH1 (Cel7), CBH2 (Cel6) and EG2 (Cel5) were separately produced in recombinant yeast strains, allowing their isolation free of any contaminating cellulolytic activity. Binary and ternary mixtures of the enzymes at loadings ranging between 3 and 100 mg g-1 Avicel allowed us to illustrate the relative roles of the enzymes and their levels of synergy. A mathematical model was created to simulate the interactions of these enzymes on crystalline cellulose, under both isolated and synergistic conditions. Laboratory results from the various mixtures at a range of loadings of recombinant enzymes allowed refinement of the mathematical model. The model can further be used to predict the optimal synergistic mixes of the enzymes. This information can subsequently be applied to help to determine the minimum protein requirement for complete hydrolysis of cellulose. Such knowledge will be greatly informative for the design of better enzymatic cocktails or processing organisms for the conversion of cellulosic biomass to commodity products.

  3. Results of Satellite Brightness Modeling Using Kriging Optimized Interpolation

    NASA Astrophysics Data System (ADS)

    Weeden, C.; Hejduk, M.

    At the 2005 AMOS conference, Kriging Optimized Interpolation (KOI) was presented as a tool to model satellite brightness as a function of phase angle and solar declination angle (J.M. Okada and M.D. Hejduk). Since November 2005, this method has been used to support the tasking algorithm for all optical sensors in the Space Surveillance Network (SSN). The satellite brightness maps generated by the KOI program are compared to each sensor's ability to detect an object as a function of the brightness of the background sky and the angular rate of the object. This determines whether the sensor can technically detect an object, based on an explicit calculation of the object's probability of detection. In addition, recent upgrades at Ground-Based Electro-Optical Deep Space Surveillance (GEODSS) sites have increased the amount and quality of brightness data collected and therefore available for analysis. This in turn has provided enough data to study the modeling process in more detail in order to obtain the most accurate brightness prediction of satellites. Analyses of two years of brightness data gathered from optical sensors and modeled via KOI solutions are outlined in this paper. By comparison, geostationary (GEO) objects were tracked less than non-GEO objects but had higher-density tracking in phase angle due to artifacts of scheduling. A statistically significant fit to a deterministic model was possible less than half the time in both GEO and non-GEO tracks, showing that a stochastic model must often be used alone to produce brightness results, but such results are nonetheless serviceable. Within the Kriging solution, the exponential variogram model was the most frequently employed in both GEO and non-GEO tracks, indicating that monotonic brightness variation with both phase and solar declination angle is common and testifying to the suitability of regionalized variable theory for this particular problem. Finally, the average nugget value, or
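
    For readers unfamiliar with the machinery, the following is a minimal ordinary-kriging sketch using the exponential variogram model that the paper reports as most frequently selected. The nugget, sill, range, and the toy brightness data are illustrative assumptions, not fitted SSN values.

    ```python
    # Ordinary kriging with an exponential variogram; all numbers are toy values.
    import numpy as np

    def exp_variogram(h, nugget=0.05, sill=1.0, rng=30.0):
        return nugget + (sill - nugget) * (1.0 - np.exp(-h / rng))

    def ordinary_krige(xy, z, xy0):
        n = len(z)
        d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
        A = np.ones((n + 1, n + 1)); A[-1, -1] = 0.0     # unbiasedness row/column
        A[:n, :n] = exp_variogram(d)
        np.fill_diagonal(A[:n, :n], 0.0)                 # gamma(0) = 0 by convention
        b = np.ones(n + 1)
        b[:n] = exp_variogram(np.linalg.norm(xy - xy0, axis=1))
        w = np.linalg.solve(A, b)                        # weights + Lagrange multiplier
        return w[:n] @ z                                 # predicted brightness at xy0

    rng = np.random.default_rng(0)                       # toy (phase, declination) data
    xy = rng.uniform(0, 90, size=(25, 2))
    z = 8.0 + 0.02 * xy[:, 0] + rng.normal(0, 0.1, 25)   # brightness in magnitudes
    print(ordinary_krige(xy, z, np.array([45.0, 10.0])))
    ```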

  4. [Optimal models on sustainable management of oases ecosystem in southern margin of Taklamakan Desert].

    PubMed

    Li, X; Zhang, X; Wang, Y; Wu, Y

    2000-12-01

    On the basis of analyzing the distribution of water resources and the canal water utilization coefficient of oases in the southern margin of the Taklamakan Desert, observing the wind-prevention efficiency of shelterbelts through a simulation experiment in a wind tunnel, and 15 years of research on the comprehensive control of desertified land in the Cele Oasis, a series of optimal models for sustainable management of the oases ecosystem in the southern margin of the Taklamakan Desert was proposed, i.e., the optimal model of the "moderated oasis", the optimal model of wind-break structure, the optimal model of comprehensive control of desertified land, and the optimal model of planting structure of crops.

  5. An optimization model to agroindustrial sector in antioquia (Colombia, South America)

    NASA Astrophysics Data System (ADS)

    Fernandez, J.

    2015-06-01

    This paper proposes a general optimization model for the flower industry, defined by using discrete simulation and nonlinear optimization; the mathematical models have been solved using ProModel simulation tools and the GAMS optimization system. The operations that constitute the production and marketing of the sector are defined, data taken directly from each operation through field work are statistically validated, and the discrete simulation model of the operations and the optimization model of the entire industry chain are formulated. The model is solved with the tools described above, and the results are validated in a case study.

  6. Finite Element Modeling and Optimization of Mechanical Joining Technology

    NASA Astrophysics Data System (ADS)

    Chenot, Jean-Loup; Bouchard, Pierre-Olivier; Massoni, Elisabeth; Mocellin, Katia; Lasne, Patrice

    2011-05-01

    The main scientific ingredients for developing a general finite element code that accurately models large plastic deformation of metallic materials during joining processes are recalled. Multi-material contact is treated using the classical master-and-slave approach. Rupture may occur in joining processes, or may even be imposed as in self-piercing riveting, and it must be predicted to evaluate the ultimate strength of joints. Damage is introduced with a generalized uncoupled damage criterion, or by utilizing a coupled formulation with a Lemaître law. Several joining processes are briefly analyzed in terms of their specific scientific issues: riveting, self-piercing riveting, clinching, crimping, hemming and screwing. It is shown that not only can the joining process be successfully simulated and optimized, but the strength of the assembly can also be predicted in tension and in shearing.

  7. The spa as a model of an optimal healing environment.

    PubMed

    Frost, Gary J

    2004-01-01

    "Spa" is an acronym for salus per aqua, or health through water. There currently are approximately 10,000 spas of all types in the United States. Most now focus on eating and weight programs with subcategories of sports activities and nutrition most prominent. The main reasons stated by clients for their use are stress reduction, specific medical or other health issues, eating and weight loss, rest and relaxation, fitness and exercise, and pampering and beauty. A detailed description of the Canyon Ranch, a spa facility in Tucson, AZ, is presented as a case study in this paper. It appears that the three most critical factors in creating an optimal healing environment in a spa venue are (1) a dedicated caring staff at all levels, (2) a mission driven organization that will not compromise, and (3) a sound business model and leadership that will ensure permanency.

  8. Simulation and optimization models for emergency medical systems planning.

    PubMed

    Bettinelli, Andrea; Cordone, Roberto; Ficarelli, Federico; Righini, Giovanni

    2014-01-01

    The authors address strategic planning problems for emergency medical systems (EMS). In particular, the following three critical decisions are considered: i) how many ambulances to deploy in a given territory at any given point in time to meet the forecasted demand with an appropriate response time; ii) when ambulances should be used to serve non-urgent requests and when they should instead be kept idle for possible incoming urgent requests; iii) how to define an optimal mix of contracts for renting ambulances from private associations to meet the forecasted demand at minimum cost. Analytical models for decision support, based on queuing theory, discrete-event simulation, and integer linear programming, are presented. Computational experiments were performed on real data from the city of Milan, Italy.
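
    The queuing-theory flavor of decision i) can be illustrated with the classical Erlang-B loss formula, which sizes a fleet so that only a small fraction of urgent calls finds every ambulance busy. The arrival rate, service time and 1% threshold below are invented placeholders, not the Milan data.

    ```python
    # Erlang-B fleet-sizing sketch (M/M/c/c); all numbers are placeholders.
    def erlang_b(servers, offered_load):
        """Probability that an arriving call finds all units busy."""
        b = 1.0
        for m in range(1, servers + 1):
            b = offered_load * b / (m + offered_load * b)
        return b

    lam = 12.0          # urgent calls per hour (placeholder)
    service_h = 0.75    # mean ambulance occupation per call, in hours (placeholder)
    load = lam * service_h

    c = 1
    while erlang_b(c, load) > 0.01:   # require <1% of calls finding no free unit
        c += 1
    print("ambulances needed:", c)
    ```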

  9. Spatial optimization of watershed management practices for nitrogen load reduction using a modeling-optimization framework.

    PubMed

    Yang, Guoxiang; Best, Elly P H

    2015-09-15

    Best management practices (BMPs) can be used effectively to reduce nutrient loads transported from non-point sources to receiving water bodies. However, methodologies for cost-effective BMP selection and placement are needed to assist watershed management planners and stakeholders. We developed a novel modeling-optimization framework that can be used to find cost-effective solutions for BMP placement to attain nutrient load reduction targets. This was accomplished by integrating a GIS-based BMP siting method, a WQM-TMDL-N modeling approach to estimate total nitrogen (TN) loading, and a multi-objective optimization algorithm. Wetland restoration and buffer strip implementation were the two BMP categories used to explore the performance of this framework, both differing greatly in the complexity of spatial analysis for site identification. Minimizing TN load and BMP cost were the two objective functions for the optimization process. The performance of this framework was demonstrated in the Tippecanoe River watershed, Indiana, USA. Optimized scenario-based load reduction indicated that the wetland subset selected by the minimum scenario had the greatest N removal efficiency. Buffer strips were more effective for load removal than wetlands. The optimized solutions provided a range of trade-offs between the two objective functions for both BMPs. This framework can be expanded conveniently to a regional scale because the NHDPlus catchment serves as its spatial computational unit. The present study demonstrated the potential of this framework to find cost-effective solutions to meet a water quality target, such as a 20% TN load reduction, under different conditions.
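
    The two-objective trade-off can be pictured by brute-force enumeration of candidate BMP plans followed by extraction of the non-dominated set, as in the toy sketch below. The site costs and nitrogen removals are invented; the study itself uses a GIS-based siting method and a multi-objective search rather than enumeration.

    ```python
    # Toy Pareto-front extraction over BMP site subsets; data are invented.
    import numpy as np
    from itertools import product

    cost    = np.array([3.0, 5.0, 2.0, 8.0, 4.0, 6.0, 1.0, 7.0])   # cost per site
    removal = np.array([2.0, 6.0, 1.0, 9.0, 3.0, 5.0, 0.5, 8.0])   # kg N removed

    plans = []
    for mask in product([0, 1], repeat=len(cost)):
        m = np.array(mask, dtype=bool)
        plans.append((float(cost[m].sum()), float(removal[m].sum()), mask))

    def dominates(a, b):   # a is no more costly, removes no less N, better in one
        return a[0] <= b[0] and a[1] >= b[1] and (a[0] < b[0] or a[1] > b[1])

    pareto = sorted(p for p in plans if not any(dominates(q, p) for q in plans))
    for c, r, mask in pareto:
        print(f"cost={c:5.1f}  N removed={r:5.1f}  sites={mask}")
    ```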

  10. Multi-model Simulation for Optimal Control of Aeroacoustics.

    SciTech Connect

    Collis, Samuel Scott; Chen, Guoquan

    2005-05-01

    Flow-generated noise, especially rotorcraft noise, has been a serious concern for both commercial and military applications. A particularly important noise source for rotorcraft is Blade-Vortex-Interaction (BVI) noise, a high-amplitude, impulsive sound that often dominates other rotorcraft noise sources. Usually BVI noise is caused by the unsteady flow changes around various rotor blades due to interactions with vortices previously shed by the blades. A promising approach for reducing BVI noise is to use on-blade controls, such as suction/blowing, micro-flaps/jets, and smart structures. Because the design and implementation of experiments to evaluate such systems are very expensive, efficient computational tools coupled with optimal control systems are required to explore the relevant physics and evaluate the feasibility of using various micro-fluidic devices before committing to hardware. In this thesis the research is to formulate and implement efficient computational tools for the development and study of optimal control and design strategies for complex flow and acoustic systems, with emphasis on rotorcraft applications, especially the BVI noise control problem. The main purpose of aeroacoustic computations is to determine the sound intensity and directivity far away from the noise source. However, the computational cost of using a high-fidelity flow-physics model across the full domain is usually prohibitive, and it might also be less accurate because of numerical diffusion and other problems. Taking advantage of the multi-physics and multi-scale structure of this aeroacoustic problem, we develop a multi-model, multi-domain (near-field/far-field) method based on a discontinuous Galerkin discretization. In this approach the coupling of multi-domains and multi-models is achieved by weakly enforcing continuity of normal fluxes across a coupling surface. For our aeroacoustics control problem of interest, the adjoint equations that determine the sensitivity of the cost

  11. Source term identification in atmospheric modelling via sparse optimization

    NASA Astrophysics Data System (ADS)

    Adam, Lukas; Branda, Martin; Hamburger, Thomas

    2015-04-01

    Inverse modelling plays an important role in identifying the amount of harmful substances released into the atmosphere during major incidents such as power plant accidents or volcano eruptions. Another possible application of inverse modelling lies in monitoring CO2 emission limits, where only observations at certain places are available and the task is to estimate the total releases at given locations. This gives rise to minimizing the discrepancy between the observations and the model predictions. There are two standard ways of solving such problems. In the first one, this discrepancy is regularized by adding additional terms. Such terms may include Tikhonov regularization, distance from a priori information or a smoothing term. The resulting, usually quadratic, problem is then solved via standard optimization solvers. The second approach assumes that the error term has a (normal) distribution and makes use of Bayesian modelling to identify the source term. Instead of following the above-mentioned approaches, we utilize techniques from the field of compressive sensing. Such techniques look for the sparsest solution (the solution with the smallest number of nonzeros) of a linear system, where a maximal allowed error term may be added to this system. Even though this is a well-developed field with many possible solution techniques, most of them do not consider even the simplest constraints which are naturally present in atmospheric modelling. One such example is the nonnegativity of release amounts. We believe that the concept of a sparse solution is natural both in the problem of identifying the source location and in that of identifying the time process of the source release. In the first case, it is usually assumed that there are only a few release points and the task is to find them. In the second case, the time window is usually much longer than the duration of the actual release. In both cases, the optimal solution should contain a large number of zeros, giving rise to the
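
    A minimal sketch of the sparse, nonnegative recovery idea: minimize 1/2||Ax - b||^2 + lam*||x||_1 subject to x >= 0 with a projected ISTA iteration. The source-receptor matrix A, the observations b and lam are synthetic stand-ins for illustration.

    ```python
    # Projected ISTA for nonnegative sparse source-term recovery (toy data).
    import numpy as np

    def nonneg_ista(A, b, lam=0.1, iters=500):
        x = np.zeros(A.shape[1])
        L = np.linalg.norm(A, 2) ** 2       # Lipschitz constant of the gradient
        for _ in range(iters):
            grad = A.T @ (A @ x - b)
            # gradient step, then the prox of lam*||x||_1 restricted to x >= 0
            x = np.maximum(x - (grad + lam) / L, 0.0)
        return x

    rng = np.random.default_rng(1)
    A = rng.random((40, 200))               # few observations, many candidate releases
    x_true = np.zeros(200)
    x_true[[17, 95]] = [3.0, 1.5]           # sparse "true" source term
    b = A @ x_true + rng.normal(0, 0.01, 40)
    print(np.nonzero(nonneg_ista(A, b) > 0.1)[0])   # indices of recovered releases
    ```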

  12. Optimal Control of Distributed Energy Resources using Model Predictive Control

    SciTech Connect

    Mayhorn, Ebony T.; Kalsi, Karanjit; Elizondo, Marcelo A.; Zhang, Wei; Lu, Shuai; Samaan, Nader A.; Butler-Purry, Karen

    2012-07-22

    In an isolated power system (rural microgrid), Distributed Energy Resources (DERs) such as renewable energy resources (wind, solar), energy storage and demand response can be used to complement fossil-fueled generators. The uncertainty and variability due to high penetration of wind make reliable system operations and controls challenging. In this paper, an optimal control strategy is proposed to coordinate energy storage and diesel generators to maximize wind penetration while maintaining system economics and normal operation. The problem is formulated as a multi-objective optimization problem with the goals of minimizing fuel costs and changes in power output of diesel generators, minimizing costs associated with low battery life of energy storage, and maintaining system frequency at the nominal operating value. Two control modes are considered for controlling the energy storage to compensate either net load variability or wind variability. Model predictive control (MPC) is used to solve the aforementioned problem and the performance is compared to an open-loop look-ahead dispatch problem. Simulation studies using high and low wind profiles, as well as different MPC prediction horizons, demonstrate the efficacy of the closed-loop MPC in compensating for uncertainties in wind and demand.
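
    A receding-horizon step of this kind of dispatch problem can be sketched with the cvxpy modeling package, as below. The limits, forecasts and weights are invented, and the paper's actual formulation also includes battery-life and frequency terms.

    ```python
    # One MPC step for a toy diesel + storage microgrid; all numbers are invented.
    import numpy as np
    import cvxpy as cp

    T, dt = 12, 1.0                          # horizon length and step (placeholder)
    net_load = 2.0 + np.sin(np.arange(T))    # forecast load minus wind, MW (placeholder)

    p_diesel = cp.Variable(T)                # diesel output
    p_batt = cp.Variable(T)                  # battery power (+discharge / -charge)
    soc = cp.Variable(T + 1)                 # stored energy, MWh

    cost = cp.sum_squares(p_diesel) + 0.1 * cp.sum_squares(cp.diff(p_diesel))
    cons = [soc[0] == 2.5,
            soc[1:] == soc[:-1] - dt * p_batt,   # simple lossless storage dynamics
            p_diesel + p_batt == net_load,       # power balance
            0 <= soc, soc <= 5.0,
            -1.0 <= p_batt, p_batt <= 1.0,
            0 <= p_diesel, p_diesel <= 4.0]
    cp.Problem(cp.Minimize(cost), cons).solve()
    print(p_diesel.value[0], p_batt.value[0])    # apply only the first step, then re-solve
    ```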

  13. Managing and learning with multiple models: Objectives and optimization algorithms

    USGS Publications Warehouse

    Probert, William J. M.; Hauser, C.E.; McDonald-Madden, E.; Runge, M.C.; Baxter, P.W.J.; Possingham, H.P.

    2011-01-01

    The quality of environmental decisions should be gauged according to managers' objectives. Management objectives generally seek to maximize quantifiable measures of system benefit, for instance, population growth rate. Reaching these goals often requires a certain degree of learning about the system. Learning can occur by using management actions in combination with a monitoring system. Furthermore, actions can be chosen strategically to obtain specific kinds of information. Formal decision-making tools can choose actions to favor such learning in two ways: implicitly, via the optimization algorithm that is used when there is a management objective (for instance, when using adaptive management), or explicitly, by quantifying knowledge and using it as the fundamental project objective, an approach new to conservation. This paper outlines three conservation project objectives: a pure management objective, a pure learning objective, and an objective that is a weighted mixture of the two. We use eight optimization algorithms to choose actions that meet project objectives and illustrate them in a simulated conservation project. The algorithms provide a taxonomy of decision-making tools in conservation management when there is uncertainty surrounding competing models of system function. The algorithms build upon each other such that their differences are highlighted and practitioners may see where their decision-making tools can be improved. © 2010 Elsevier Ltd.

  14. Calibration Modeling Methodology to Optimize Performance for Low Range Applications

    NASA Technical Reports Server (NTRS)

    McCollum, Raymond A.; Commo, Sean A.; Parker, Peter A.

    2010-01-01

    Calibration is a vital process in characterizing the performance of an instrument in an application environment and seeks to obtain acceptable accuracy over the entire design range. Often, project requirements specify a maximum total measurement uncertainty expressed as a percent of full scale. However, in some applications enhanced performance is sought at the low end of the range, and therefore expressing the accuracy as a percent of reading should be considered as a modeling strategy. For example, it is common to want to use a force balance in multiple facilities or regimes, often well below its designed full-scale capacity. This paper presents a general statistical methodology for optimizing calibration mathematical models based on a percent-of-reading accuracy requirement, which has broad application in all types of transducer applications where low-range performance is required. A case study illustrates the proposed methodology for the Mars Entry Atmospheric Data System, which employs seven strain-gage-based pressure transducers mounted on the heatshield of the Mars Science Laboratory mission.

  15. Optimal control of an asymptotic model of flow separation

    NASA Astrophysics Data System (ADS)

    Qadri, Ubaid; Schmid, Peter; LFC-UK Team

    2015-11-01

    In the presence of surface imperfections, the boundary layer developing over an aircraft wing can separate and reattach, leading to a small separation bubble. We are interested in developing a low-order model that can be used to control the onset of separation at high Reynolds numbers typical of aircraft flight. In contrast to previous studies, we use a high-Reynolds-number asymptotic description of the Navier-Stokes equations to describe the motion of the fluid. We obtain a steady solution to the nonlinear triple-deck equations for the separated flow over a small bump at high Reynolds numbers. We derive for the first time the adjoint of the nonlinear triple-deck equations and use it to study optimal control of the separated flow. We calculate the sensitivity of the properties of the separation bubble to local base flow modifications and steady forcing. We assess the validity of using this simplified asymptotic model by comparing our results with those obtained using the full Navier-Stokes equations.

  16. 3D modeling and optimization of the ITER ICRH antenna

    NASA Astrophysics Data System (ADS)

    Louche, F.; Dumortier, P.; Durodié, F.; Messiaen, A.; Maggiora, R.; Milanesio, D.

    2011-12-01

    The prediction of the coupling properties of the ITER ICRH antenna necessitates the accurate evaluation of the resistance and reactance matrices. The latter are mostly dependent on the geometry of the array, and therefore a model as accurate as possible is needed to compute these matrices precisely. Furthermore, simulations have so far neglected the poloidal and toroidal profile of the plasma, and it is expected that the loading of individual straps will vary significantly due to varying strap-plasma distance. To take this curvature into account, some modifications of the alignment of the straps with respect to the toroidal direction are proposed. It is shown with CST Microwave Studio® [1] that considering two segments in the toroidal direction, i.e. a "V-shaped" toroidal antenna, is sufficient. A new CATIA model including this segmentation has been drawn and imported into both MWS and TOPICA [2] codes. Simulations show a good agreement of the impedance matrices in vacuum. Various modifications of the geometry are proposed in order to further optimize the coupling. In particular, we study the effect of the strap box parameters and the recess of the vertical septa.

  17. Culture optimization for the emergent zooplanktonic model organism Oikopleura dioica

    PubMed Central

    Bouquet, Jean-Marie; Spriet, Endy; Troedsson, Christofer; Otterå, Helen; Chourrout, Daniel; Thompson, Eric M.

    2009-01-01

    The pan-global marine appendicularian, Oikopleura dioica, shows considerable promise as a candidate model organism for cross-disciplinary research ranging from chordate genetics and evolution to molecular ecology. This urochordate has a simplified anatomical organization, remains transparent throughout an exceptionally short life cycle of less than 1 week, and exhibits high fecundity. At 70 Mb, the compact, sequenced genome ranks among the smallest known metazoan genomes, with both gene regulatory and intronic regions highly reduced in size. The organism occupies an important trophic role in marine ecosystems and is a significant contributor to global vertical carbon flux. Among the short list of bona fide biological model organisms, all share the property that they are amenable to long-term maintenance in laboratory cultures. Here, we tested diet regimes, spawn densities, dilutions and seawater treatments, leading to optimization of a detailed culture protocol that permits sustainable long-term maintenance of O. dioica, allowing continuous, uninterrupted production of source material for experimentation. The culture protocol can be quickly adapted in both coastal and inland laboratories and should promote rapid development of the many original research perspectives the animal offers. PMID:19461862

  18. In silico strain optimization by adding reactions to metabolic models.

    PubMed

    Correia, Sara; Rocha, Miguel

    2012-07-24

    Nowadays, concerns about the environment and the need to increase productivity at low cost demand the search for new ways to produce compounds of industrial interest. Based on the increasing knowledge of biological processes from genome sequencing projects and high-throughput experimental techniques, as well as the available computational tools, the use of microorganisms has been considered as an approach to produce desirable compounds. However, this usually requires manipulating these organisms by genetic engineering and/or changing the environmental conditions to make the production of these compounds possible. In many cases, it is necessary to enrich the genetic material of those microbes with heterologous pathways from other species, thereby adding the potential to produce novel compounds. This paper introduces a new plug-in for the OptFlux Metabolic Engineering platform, aimed at finding suitable sets of reactions to add to the genomes of selected microbes (wild-type strains), as well as complementary sets of deletions, so that the mutant becomes able to overproduce compounds of industrial interest while preserving its viability. The necessity of adding reactions to the metabolic model arises from gaps in the original model, or is motivated by the production of new compounds by the organism. The optimization methods used are metaheuristics such as Evolutionary Algorithms and Simulated Annealing. The usefulness of this plug-in is demonstrated by a case study regarding the production of vanillin by the bacterium E. coli.
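
    The reaction-addition search can be pictured as simulated annealing over subsets, as in the toy loop below. The fitness function is a stand-in for the model-based production score that the plug-in would obtain by simulating the metabolic model; the candidate list is likewise hypothetical.

    ```python
    # Toy simulated annealing over reaction subsets; fitness is a stand-in.
    import math, random

    CANDIDATES = list(range(50))       # indices of candidate heterologous reactions

    def fitness(subset):               # placeholder for a simulation-based score
        return sum(math.sin(i) for i in subset) - 0.2 * len(subset)

    def neighbour(subset):             # add or drop one reaction
        s = set(subset)
        if s and random.random() < 0.5:
            s.remove(random.choice(sorted(s)))
        else:
            s.add(random.choice(CANDIDATES))
        return frozenset(s)

    state, temp = frozenset(), 1.0
    for _ in range(5000):
        cand = neighbour(state)
        delta = fitness(cand) - fitness(state)
        if delta > 0 or random.random() < math.exp(delta / temp):
            state = cand               # accept improvements, sometimes worse moves
        temp *= 0.999                  # geometric cooling schedule
    print(sorted(state), fitness(state))
    ```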

  19. Modeling, design, and optimization of Mindwalker series elastic joint.

    PubMed

    Wang, Shiqian; Meijneke, Cor; van der Kooij, Herman

    2013-06-01

    Weight and power autonomy are limiting the daily use of wearable exoskeletons. Lightweight, efficient and powerful actuation systems are not easy to achieve, and choosing the right combination of existing technologies, such as batteries, gears and motors, is not a trivial task. In this paper, we propose an optimization framework built on a power-based quasi-static model of the exoskeleton joint drivetrain. The goal is to find the most efficient and lightweight combinations. This framework can be generalized to other similar applications by extending or adapting the model to their own needs. We also present the Mindwalker exoskeleton joint, for which a novel series elastic actuator, consisting of a ballscrew-driven linear actuator and a double spiral spring, was developed and tested. This linear actuator is capable of outputting 960 W of power, and the exoskeleton joint can output 100 Nm peak torque continuously. The double spiral spring can sense torque between 0.08 Nm and 100 Nm and exhibits linearity of 99.99%, with no backlash or hysteresis. The series elastic joint can track a chirp torque profile with an amplitude of 100 Nm over 6 Hz (large-torque bandwidth), and for small torque (2 Nm peak-to-peak) it has a bandwidth over 38 Hz. The integrated exoskeleton joint, including the ballscrew-driven linear actuator, the series spring, electronics and the metal housing which hosts these components, weighs 2.9 kg.

  20. Essays on Applied Resource Economics Using Bioeconomic Optimization Models

    NASA Astrophysics Data System (ADS)

    Affuso, Ermanno

    With rising demographic growth, there is increasing interest in analytical studies that assess alternative policies to provide an optimal allocation of scarce natural resources while ensuring environmental sustainability. This dissertation consists of three essays in applied resource economics that are methodologically interconnected within the field of agricultural production economics. The first chapter examines the sustainability of biofuels by simulating and evaluating an agricultural voluntary program that aims to increase land use efficiency in the production of first-generation biofuels in the state of Alabama. The results show that participatory decisions may increase the net energy value of biofuels by 208% and reduce emissions by 26%, significantly contributing to the state energy goals. The second chapter tests the hypothesis of overuse of fertilizers and pesticides in U.S. peanut farming with respect to other inputs and addresses genetic research to reduce the use of the most overused chemical input. The findings suggest that peanut producers overuse fungicide with respect to any other input and that fungi-resistant genetically engineered peanuts may increase producer welfare by up to 36.2%. The third chapter implements a bioeconomic model, consisting of a biophysical model and a stochastic dynamic recursive model, that is used to measure the potential economic and environmental welfare of cotton farmers derived from a rotation scheme that uses peanut as a complementary crop. The results show that the rotation scenario would lower farming costs by 14% due to nitrogen credits from prior peanut land use and reduce non-point source pollution from nitrogen runoff by 6.13% compared to continuous cotton farming.

  1. Clean wing airframe noise modeling for multidisciplinary design and optimization

    NASA Astrophysics Data System (ADS)

    Hosder, Serhat

    A new noise metric has been developed that may be used for optimization problems involving aerodynamic noise from a clean wing. The modeling approach uses a classical trailing edge noise theory as the starting point. The final form of the noise metric includes characteristic velocity and length scales that are obtained from three-dimensional, steady RANS simulations with a two-equation k-ω turbulence model. The noise metric is not the absolute value of the noise intensity, but an accurate relative noise measure, as shown in the validation studies. One of the unique features of the new noise metric is the modeling of the length scale, which is directly related to the turbulent structure of the flow at the trailing edge. The proposed noise metric model has been formulated so that it can capture the effect of different design variables on the clean wing airframe noise, such as the aircraft speed, lift coefficient, and wing geometry. It can also capture three-dimensional effects, which become important at high lift coefficients, since the characteristic velocity and length scales are allowed to vary along the span of the wing. Noise metric validation was performed with seven test cases that were selected from a two-dimensional NACA 0012 experimental database. The agreement between the experiment and the predictions obtained with the new noise metric was very good at various speeds, angles of attack, and Reynolds numbers, which showed that the noise metric is capable of capturing the variations in the trailing edge noise as a relative noise measure when different flow conditions and parameters are changed. Parametric studies were performed to investigate the effect of different design variables on the noise metric. Two-dimensional parametric studies were done using two symmetric NACA four-digit airfoils (NACA 0012 and NACA 0009) and two supercritical (SC(2)-0710 and SC(2)-0714) airfoils. The three-dimensional studies were performed with two versions of a conventional

  2. Reducing the model-data misfit in a marine ecosystem model using periodic parameters and linear quadratic optimal control

    NASA Astrophysics Data System (ADS)

    El Jarbi, M.; Rückelt, J.; Slawig, T.; Oschlies, A.

    2013-02-01

    This paper presents the application of the Linear Quadratic Optimal Control (LQOC) method to a parameter optimization problem for a one-dimensional marine ecosystem model of NPZD (N for dissolved inorganic nitrogen, P for phytoplankton, Z for zooplankton and D for detritus) type. This ecosystem model, developed by Oschlies and Garcon, simulates the distribution of nitrogen, phytoplankton, zooplankton and detritus in a water column and is driven by ocean circulation data. The LQOC method is used to introduce annually periodic model parameters in a linearized version of the model. We show that the resulting version of the model gives a significant reduction of the model-data misfit, compared to the one obtained for the original model with optimized constant parameters. The inner-annual variability found in the optimized parameters provides hints for improvement of the original model. We also use the obtained optimal periodic parameters in validation and prediction experiments with the original non-linear version of the model. In both cases, the results are significantly better than those obtained with optimized constant parameters.
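
    For readers unfamiliar with the linear-quadratic machinery behind LQOC, a generic continuous-time LQR solve looks like the sketch below; the matrices are illustrative and are not the linearized NPZD system of the paper.

    ```python
    # Generic LQR gain for dx/dt = Ax + Bu with cost x'Qx + u'Ru (toy matrices).
    import numpy as np
    from scipy.linalg import solve_continuous_are

    A = np.array([[0.0, 1.0], [-1.0, -0.5]])   # placeholder linearized dynamics
    B = np.array([[0.0], [1.0]])
    Q = np.eye(2)                              # state (misfit) weight
    R = np.array([[0.1]])                      # control (parameter-variation) weight

    P = solve_continuous_are(A, B, Q, R)       # algebraic Riccati equation
    K = np.linalg.inv(R) @ B.T @ P             # optimal feedback u = -K x
    print(K)
    ```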

  3. Optimal weighted combinatorial forecasting model of QT dispersion of ECGs in Chinese adults.

    PubMed

    Wen, Zhang; Miao, Ge; Xinlei, Liu; Minyi, Cen

    2016-07-01

    This study aims to provide a scientific basis for unifying the reference value standard of QT dispersion of ECGs in Chinese adults. Three predictive models, a regression model, a principal component model, and an artificial neural network model, are combined to establish the optimal weighted combination model. The optimal weighted combination model and the single models are verified and compared. The optimal weighted combinatorial model can reduce the prediction risk of a single model and improve prediction precision. The reference values of the geographical distribution of Chinese adults' QT dispersion were precisely mapped using kriging methods. When the geographical factors of a particular area are obtained, the reference value of QT dispersion of Chinese adults in that area can be estimated using the optimal weighted combinatorial model, and the reference value of QT dispersion of Chinese adults anywhere in China can be read from the geographical distribution map.

  4. Hydro-economic Modeling: Reducing the Gap between Large Scale Simulation and Optimization Models

    NASA Astrophysics Data System (ADS)

    Forni, L.; Medellin-Azuara, J.; Purkey, D.; Joyce, B. A.; Sieber, J.; Howitt, R.

    2012-12-01

    The integration of hydrological and socio-economic components into hydro-economic models has become essential for water resources policy and planning analysis. In this study we integrate the economic value of water in irrigated agricultural production using SWAP (a StateWide Agricultural Production model for California) and WEAP (Water Evaluation and Planning system), a climate-driven hydrological model. The integration of the models is performed using a step-function approximation of water demand curves from SWAP, and by relating the demand tranches to the priority scheme in WEAP. To do so, a modified version of SWAP, called SWEAP, was developed that has the Planning Area delimitations of WEAP, a maximum entropy model to estimate evenly sized steps (tranches) of derived water demand functions, and a translation of water tranches into cropland. In addition, a modified version of WEAP, called ECONWEAP, was created with minor structural changes for the incorporation of land decisions from SWEAP and a series of iterations run via an external VBA script. This paper shows the validity of this integration by comparing revenues from WEAP vs. ECONWEAP, as well as an assessment of the tranche approximation. Results show a significant increase in the resulting agricultural revenues for our case study in California's Central Valley using ECONWEAP while maintaining the same hydrology and regional water flows. These results highlight the gains from allocating water based on its economic value compared to priority-based water allocation systems. Furthermore, this work shows the potential of integrating optimization and simulation-based hydrologic models like ECONWEAP. [Figure: percentage difference in total agricultural revenues (ECONWEAP versus WEAP).]

  5. Optimization of a Parallel Ocean General Circulation Model

    NASA Technical Reports Server (NTRS)

    Chao, Yi

    1997-01-01

    Global climate modeling is one of the grand challenges of computational science, and ocean modeling plays an important role in both understanding the current climatic conditions and predicting future climate change.

  6. Heterogeneous Nuclear Reactor Models for Optimal Xenon Control.

    NASA Astrophysics Data System (ADS)

    Gondal, Ishtiaq Ahmad

    Nuclear reactors are generally modeled as homogeneous mixtures of fuel, control, and other materials, while in reality they are heterogeneous-homogeneous configurations comprising fuel and control rods along with other materials. Similarly, for space-time studies of a nuclear reactor, homogeneous models, usually based on one-group diffusion theory, are used, and the system equations are solved by either nodal or modal expansion approximations. Study of xenon-induced problems has also been carried out using similar models, with the help of dynamic programming, classical calculus of variations, or the minimum principle. In this study a thermal nuclear reactor is modeled as a two-dimensional lattice of fuel and control rods placed in an infinite moderator in plane geometry. The two-group diffusion theory approximation is used for neutron transport. Space-time neutron balance equations are written for the two groups and reduced to one space-time algebraic equation by using the two-dimensional Fourier transform. This equation is written at all fuel and control rod locations. Iodine-xenon and promethium-samarium dynamic equations are also written, at fuel rod locations only. These equations are then linearized about an equilibrium point, which is determined from the steady-state form of the original nonlinear system equations. After studying poisonless criticality, with and without control, and the stability of the open-loop system, and after checking its controllability, a performance criterion is defined for the xenon-induced spatial flux oscillation problem in the form of a functional to be minimized. Linear-quadratic optimal control theory is then applied to solve the problem. This formulation has potential for various extensions and variations enabling additional useful studies; for example, different problem geometries, with possible extension to three dimensions, or a heterogeneous-homogeneous formulation to include, for example, homogeneously

  7. Steady-state global optimization of metabolic non-linear dynamic models through recasting into power-law canonical models

    PubMed Central

    2011-01-01

    Background Design of newly engineered microbial strains for biotechnological purposes would greatly benefit from the development of realistic mathematical models for the processes to be optimized. Such models can then be analyzed and, with the development and application of appropriate optimization techniques, one could identify the modifications that need to be made to the organism in order to achieve the desired biotechnological goal. As appropriate models to perform such an analysis are necessarily non-linear and typically non-convex, finding their global optimum is a challenging task. Canonical modeling techniques, such as Generalized Mass Action (GMA) models based on the power-law formalism, offer a possible solution to this problem because they have a mathematical structure that enables the development of specific algorithms for global optimization. Results Based on the GMA canonical representation, we have developed in previous works a highly efficient optimization algorithm and a set of related strategies for understanding the evolution of adaptive responses in cellular metabolism. Here, we explore the possibility of recasting kinetic non-linear models into an equivalent GMA model, so that global optimization on the recast GMA model can be performed. With this technique, optimization is greatly facilitated and the results are transposable to the original non-linear problem. This procedure is straightforward for a particular class of non-linear models known as Saturable and Cooperative (SC) models that extend the power-law formalism to deal with saturation and cooperativity. Conclusions Our results show that recasting non-linear kinetic models into GMA models is indeed an appropriate strategy that helps overcoming some of the numerical difficulties that arise during the global optimization task. PMID:21867520

  8. Model optimization of orthotropic distributed-mode loudspeaker using attached masses.

    PubMed

    Lu, Guochao; Shen, Yong

    2009-11-01

    The orthotropic model of the plate is established and a genetic simulated annealing algorithm is developed for optimization of the mode distribution of the orthotropic plate. The experimental results indicate that the orthotropic model can simulate the real plate better. Optimization aimed at an equal distribution of the modes of the orthotropic model is then performed to improve the corresponding sound pressure responses.

  9. Pareto optimal calibration of highly nonlinear reactive transport groundwater models using particle swarm optimization

    NASA Astrophysics Data System (ADS)

    Siade, A. J.; Prommer, H.; Welter, D.

    2014-12-01

    Groundwater management and remediation require the implementation of numerical models in order to evaluate potential anthropogenic impacts on aquifer systems. In many situations, the numerical model must not only be able to simulate groundwater flow and transport, but also geochemical and biological processes. Each process being simulated carries with it a set of parameters that must be identified, along with differing potential sources of model-structure error. Various data types are often collected in the field and then used to calibrate the numerical model; however, these data types can represent very different processes and can subsequently be sensitive to the model parameters in extremely complex ways. Therefore, developing an appropriate weighting strategy to address the contributions of each data type to the overall least-squares objective function is not straightforward. This is further compounded by the presence of potential sources of model-structure errors that manifest themselves differently for each observation data type. Finally, reactive transport models are highly nonlinear, which can lead to convergence failure for algorithms operating on the assumption of local linearity. In this study, we propose a variation of the popular particle swarm optimization algorithm to address trade-offs associated with the calibration of one data type over another. This method removes the need to specify weights between observation groups and instead produces a multi-dimensional Pareto front that illustrates the trade-offs between data types. We use the PEST++ run manager, along with the standard PEST input/output structure, to implement parallel programming across multiple desktop computers using TCP/IP communications. This allows for very large swarms of particles without the need for a supercomputing facility. The method was applied to a case study in which modeling was used to gain insight into the mobilization of arsenic at a deep-well injection site.
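
    To fix ideas, a minimal global-best particle swarm for a single scalar objective is sketched below; the study's variant instead produces a multi-dimensional Pareto front over data-type trade-offs and parallelizes model runs through the PEST++ run manager.

    ```python
    # Minimal global-best PSO on a toy objective.
    import numpy as np

    def pso(f, dim, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
        rng = np.random.default_rng(0)
        x = rng.uniform(lo, hi, (n, dim))          # particle positions
        v = np.zeros((n, dim))                     # particle velocities
        pbest = x.copy()
        pval = np.apply_along_axis(f, 1, x)
        gbest = pbest[pval.argmin()]
        for _ in range(iters):
            r1, r2 = rng.random((2, n, dim))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = np.clip(x + v, lo, hi)
            val = np.apply_along_axis(f, 1, x)
            better = val < pval                    # update personal bests
            pbest[better], pval[better] = x[better], val[better]
            gbest = pbest[pval.argmin()]           # update global best
        return gbest, pval.min()

    print(pso(lambda p: np.sum((p - 1.3) ** 2), dim=4))
    ```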

  10. Optimization and evolution in metabolic pathways: global optimization techniques in Generalized Mass Action models.

    PubMed

    Sorribas, Albert; Pozo, Carlos; Vilaprinyo, Ester; Guillén-Gosálbez, Gonzalo; Jiménez, Laureano; Alves, Rui

    2010-09-01

    Cells are natural factories that can adapt to changes in external conditions. Their adaptive responses to specific stress situations are a result of evolution. In theory, many alternative sets of coordinated changes in the activity of the enzymes of each pathway could allow for an appropriate adaptive readjustment of metabolism in response to stress. However, experimental and theoretical observations show that actual responses to specific changes follow fairly well defined patterns that suggest an evolutionary optimization of that response. Thus, it is important to identify functional effectiveness criteria that may explain why certain patterns of change in cellular components and activities during adaptive response have been preferably maintained over evolutionary time. Those functional effectiveness criteria define sets of physiological requirements that constrain the possible adaptive changes and lead to different operation principles that could explain the observed response. Understanding such operation principles can also facilitate biotechnological and metabolic engineering applications. Thus, developing methods that enable the analysis of cellular responses from the perspective of identifying operation principles may have strong theoretical and practical implications. In this paper we present one such method that was designed based on nonlinear global optimization techniques. Our methodology can be used with a special class of nonlinear kinetic models known as GMA models and it allows for a systematic characterization of the physiological requirements that may underlie the evolution of adaptive strategies.

  11. Optimization of GM(1,1) power model

    NASA Astrophysics Data System (ADS)

    Luo, Dang; Sun, Yu-ling; Song, Bo

    2013-10-01

    The GM(1,1) power model is an expansion of the traditional GM(1,1) model and the Grey Verhulst model. Compared with the traditional models, the GM(1,1) power model has the following advantage: the power exponent that best matches the actual data values can be found by a suitable technique, so the GM(1,1) power model can reflect nonlinear features of the data and can simulate and forecast with high accuracy. It is very important to determine the best power exponent during the modeling process. In this paper, noting that the whitening equation of the GM(1,1) power model is a Bernoulli equation, we turn it through variable substitution into the linear whitening equation of the GM(1,1) model, build the grey differential equation accordingly, establish the GM(1,1) power model, and solve for its parameters with a pattern search method. Finally, we illustrate the effectiveness of the new method with the example of simulating and forecasting the promotion rates from senior secondary schools to higher education in China.
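
    For reference, the basic GM(1,1) fit-and-forecast procedure (accumulated generating operation, least-squares grey parameters, inverse AGO) can be sketched as below; the power model adds a Bernoulli-type exponent on top of this scheme. The data series is made up.

    ```python
    # Basic GM(1,1) fit and forecast on a made-up series.
    import numpy as np

    def gm11(x0, horizon=3):
        x1 = np.cumsum(x0)                         # accumulated generating operation
        z1 = 0.5 * (x1[1:] + x1[:-1])              # background values
        B = np.column_stack([-z1, np.ones(len(z1))])
        a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]   # grey parameters
        k = np.arange(len(x0) + horizon)
        x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a  # whitening-equation solution
        return np.diff(x1_hat, prepend=0.0)        # inverse AGO -> series forecast

    series = np.array([2.87, 3.28, 3.34, 3.62, 3.87])      # placeholder data
    print(gm11(series))
    ```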

  12. Estimating the Optimal Spatial Complexity of a Water Quality Model Using Multi-Criteria Methods

    NASA Astrophysics Data System (ADS)

    Meixner, T.

    2002-12-01

    Discretizing the landscape into multiple smaller units appears to be a necessary step for improving the performance of water quality models. However, there is a need for adequate falsification methods to discern between discretization that improves model performance and discretization that merely adds to model complexity. Multi-criteria optimization methods promise a way to increase the power of model discrimination and a path to increasing our ability to differentiate between good and bad model discretization methods. This study focuses on the optimal level of spatial discretization of a water quality model, the Alpine Hydrochemical Model of the Emerald Lake watershed in Sequoia National Park, California. The 5 models of the watershed differ in the degree of simplification that they represent from the real watershed. The simplest model is just a lumped model of the entire watershed. The most complex model takes the 5 main soil groups in the watershed and represents each with a modeling subunit, as well as having subunits for rock and talus areas in the watershed. Each of these models was calibrated using stream discharge and three chemical fluxes jointly as optimization criteria, using a Pareto optimization routine, MOCOM-UA. After optimization, the 5 models were compared for their performance using model criteria not used in calibration, the variability of model parameter estimates, and comparison to the mean of observations as a predictor of stream chemical composition. Based on these comparisons, the results indicate that the model with only 2 terrestrial subunits had the optimal level of model complexity. This result shows that increasing model complexity, even using detailed site-specific data, is not always rewarded with improved model performance. Additionally, this result indicates that the most important geographic element for modeling water quality in alpine watersheds is accurately delineating the boundary between areas of rock and areas containing either

  13. Forecasting of dissolved oxygen in the Guanting reservoir using an optimized NGBM (1,1) model.

    PubMed

    An, Yan; Zou, Zhihong; Zhao, Yanfei

    2015-03-01

    An optimized nonlinear grey Bernoulli model was proposed by using a particle swarm optimization algorithm to solve the parameter optimization problem. In addition, each item in the first-order accumulated generating sequence was set in turn as an initial condition to determine which alternative would yield the highest forecasting accuracy. To test the forecasting performance, the optimized models with different initial conditions were then used to simulate dissolved oxygen concentrations at the Guanting reservoir inlet and outlet (China). The empirical results show that the optimized model can remarkably improve forecasting accuracy, and the particle swarm optimization technique is a good tool for solving parameter optimization problems. Moreover, an optimized model whose initial condition performs well in in-sample simulation may not do as well in out-of-sample forecasting.

  14. Optimal SCR Control Using Data-Driven Models

    SciTech Connect

    Stevens, Andrew J.; Sun, Yannan; Lian, Jianming; Devarakonda, Maruthi N.; Parker, Gordon

    2013-04-16

    We present an optimal control solution for urea injection in a heavy-duty diesel (HDD) selective catalytic reduction (SCR) system. The approach taken here is useful beyond SCR and could be applied to any system where a control strategy is desired and input-output data are available. For example, the strategy could also be used for the diesel oxidation catalyst (DOC) system. In this paper, we identify and validate a one-step-ahead Kalman state-space estimator for downstream NOx using bench reactor data from an SCR core sample. The test data were acquired using a 2010 Cummins 6.7L ISB production engine with a 2010 Cummins production aftertreatment system. We used a surrogate HDD federal test procedure (FTP), developed at Michigan Technological University (MTU), which simulates the representative transients of the standard FTP cycle but has fewer engine speed/load points. The identified state-space model is then used to develop a tunable cost function that simultaneously minimizes NOx emissions and urea usage. The cost function is quadratic and univariate, so the minimum can be computed analytically. We show the performance of the closed-loop controller using a reduced-order discrete SCR simulator developed at MTU. Our experiments with the surrogate HDD-FTP data show that the strategy developed in this paper can be used to identify performance bounds for urea dose controllers.
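
    The closed-form minimization mentioned above can be sketched as follows: given a one-step-ahead linear predictor for downstream NOx, the quadratic, univariate cost in the urea dose has an analytic minimizer. The model matrices and weights below are placeholders, not the identified SCR model.

    ```python
    # Analytic one-step urea dose from a toy linear NOx predictor.
    import numpy as np

    A = np.array([[0.9, 0.05], [0.0, 0.8]])   # placeholder state transition
    B = np.array([-0.4, 0.0])                 # placeholder effect of dose on state
    C = np.array([1.0, 0.0])                  # maps state to downstream NOx
    q, r = 1.0, 0.05                          # NOx-emission vs urea-usage weights

    def optimal_dose(x):
        # J(u) = q*(C(Ax + Bu))^2 + r*u^2 is quadratic in u; set dJ/du = 0
        g = C @ B                             # scalar gain from dose to predicted NOx
        return -q * g * (C @ A @ x) / (q * g * g + r)

    x = np.array([0.6, 0.2])                  # current state estimate (e.g., from a KF)
    print(optimal_dose(x))
    ```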

  15. Polymer Electrolyte Membrane (PEM) Fuel Cells Modeling and Optimization

    NASA Astrophysics Data System (ADS)

    Zhang, Zhuqian; Wang, Xia; Shi, Zhongying; Zhang, Xinxin; Yu, Fan

    2006-11-01

    The performance of polymer electrolyte membrane (PEM) fuel cells depends on operating parameters and design parameters. Operating parameters mainly include temperature, pressure, humidity and the flow rate of the inlet reactants. Design parameters include reactant distributor patterns and dimensions, electrode dimensions, and electrode properties such as porosity and permeability. This work aims to investigate the effects of various design parameters on the performance of PEM fuel cells, and the optimum values will be determined under a given operating condition. A three-dimensional steady-state electrochemical mathematical model was established in which the mass, fluid and thermal transport processes are considered as well as the electrochemical reaction. A Powell multivariable optimization algorithm will be applied to investigate the optimum values of the design parameters. The objective function is defined as the maximum potential of the electrolyte fluid phase at the membrane/cathode interface at a typical value of the cell voltage. The robustness of the optimum design of the fuel cell under different cell potentials will be investigated using a statistical sensitivity analysis. Compared with the reference case, the results obtained here provide useful tools for a better design of fuel cells.

  16. Designing the optimal convolution kernel for modeling the motion blur

    NASA Astrophysics Data System (ADS)

    Jelinek, Jan

    2011-06-01

    Motion blur acts on an image like a two-dimensional low-pass filter, whose spatial frequency characteristic depends both on the trajectory of the relative motion between the scene and the camera and on the velocity vector variation along it. When motion during exposure is permitted, the conventional, static notions of both the image exposure and the scene-to-image mapping become unsuitable and must be revised to accommodate the image formation dynamics. This paper develops an exact image formation model for arbitrary object-camera relative motion with arbitrary velocity profiles. Moreover, for any motion the camera may operate in either continuous or flutter shutter exposure mode. Its result is a convolution kernel, which is optimally designed for both the given motion and the sensor array geometry, and hence permits the most accurate computational undoing of the blurring effects for the given camera, as required in forensic and high-security applications.
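
    One way to picture the kernel construction: each kernel pixel accumulates the exposure time the point image spends there along the motion trajectory, gated by the shutter state. The trajectory and flutter code below are invented examples, not the paper's optimal design.

    ```python
    # Build a blur kernel from a sampled trajectory and a shutter sequence (toy).
    import numpy as np

    def blur_kernel(traj_xy, shutter, size=15):
        """traj_xy: (T,2) positions in pixels; shutter: (T,) open=1 / closed=0."""
        k = np.zeros((size, size))
        c = size // 2
        for (dx, dy), s in zip(traj_xy, shutter):
            ix, iy = int(round(c + dx)), int(round(c + dy))
            if 0 <= ix < size and 0 <= iy < size:
                k[iy, ix] += s                 # dwell time weighted by shutter state
        return k / k.sum()

    t = np.linspace(0.0, 1.0, 200)
    traj = np.column_stack([6 * t**2 - 3, np.zeros_like(t)])   # accelerating motion
    flutter = (np.sin(40 * t) > 0).astype(float)               # flutter-shutter gating
    print(blur_kernel(traj, flutter).sum())                    # kernel sums to 1
    ```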

  17. Optimal Vaccination in a Stochastic Epidemic Model of Two Non-Interacting Populations

    DTIC Science & Technology

    2015-02-17

    Optimal Vaccination in a Stochastic Epidemic Model of Two Non-Interacting Populations. Edwin C. Yuan, David L. Alderson, Sean ... Infected-Recovered (SIR) model. Based on these results, we determine the optimal allocations of a limited quantity of vaccine between two non-interacting ... vaccine, the deterministic model is a poor estimate of the optimal strategy for the more realistic, stochastic case. Introduction: As rapid, long-range

  18. A Simplified Model of ARIS for Optimal Controller Design

    NASA Technical Reports Server (NTRS)

    Beech, Geoffrey S.; Hampton, R. David; Kross, Denny (Technical Monitor)

    2001-01-01

    Many space-science experiments require active vibration isolation. Boeing's Active Rack Isolation System (ARIS) isolates experiments at the rack (vs. experiment or sub-experiment) level, with multiple experiments per rack. An ARIS-isolated rack typically employs eight actuators and thirteen umbilicals; the umbilicals provide services such as power, data transmission, and cooling. Hampton et al. used "Kane's method" to develop an analytical, nonlinear, rigid-body model of ARIS that includes full actuator dynamics (inertias). This model, less the umbilicals, was first implemented for simulation by Beech and Hampton, who developed and tested their model using two commercial-off-the-shelf (COTS) software packages. Rupert et al. added umbilical-transmitted disturbances to this nonlinear model. Because the nonlinear model, even for the untethered system, is both exceedingly complex and "encapsulated" inside these COTS tools, it is largely inaccessible to ARIS controller designers. This paper shows that ISPR rattle-space constraints and small ARIS actuator masses permit considerable model simplification without significant loss of fidelity. First, for various loading conditions, comparisons are made between the dynamic responses of the nonlinear model (untethered) and a truth model. Then comparisons are made among nonlinear, linearized, and linearized reduced-mass models. It is concluded that all three models capture the significant rigid-body dynamics of the system, with the third being preferred due to its relative simplicity.

  20. Parameterization of Model Validating Sets for Uncertainty Bound Optimizations. Revised

    NASA Technical Reports Server (NTRS)

    Lim, K. B.; Giesy, D. P.

    2000-01-01

    Given measurement data, a nominal model and a linear fractional transformation uncertainty structure with an allowance on unknown but bounded exogenous disturbances, easily computable tests for the existence of a model validating uncertainty set are given. Under mild conditions, these tests are necessary and sufficient for the case of complex, nonrepeated, block-diagonal structure. For the more general case which includes repeated and/or real scalar uncertainties, the tests are only necessary but become sufficient if a collinearity condition is also satisfied. With the satisfaction of these tests, it is shown that a parameterization of all model validating sets of plant models is possible. The new parameterization is used as a basis for a systematic way to construct or perform uncertainty tradeoff with model validating uncertainty sets which have specific linear fractional transformation structure for use in robust control design and analysis. An illustrative example which includes a comparison of candidate model validating sets is given.

  1. Optimal mutation rates in dynamic environments: The eigen model

    NASA Astrophysics Data System (ADS)

    Ancliff, Mark; Park, Jeong-Man

    2011-03-01

    We consider the Eigen quasispecies model with a dynamic environment. For an environment with sharp-peak fitness in which the most-fit sequence moves by k spin-flips each period T, we find an asymptotic stationary state in which the quasispecies population changes periodically, following the periodic environmental change. From this stationary state we estimate the maximum and minimum mutation rates at which a quasispecies can survive in the changing environment and calculate the optimum mutation rate that maximizes the population growth. Interestingly, we find that the optimum mutation rate in the Eigen model is lower than that in the Crow-Kimura model, and at their respective optimum mutation rates the mean fitness in the Eigen model is lower than that in the Crow-Kimura model, suggesting that a mutation process occurring in parallel with replication, as in the Crow-Kimura model, confers an adaptive advantage in a changing environment.
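
    In the Eigen model, mutation is coupled to replication: each generation the population vector is multiplied by the mutation matrix applied to fitness-weighted frequencies. The discrete-generation sketch below illustrates this update on binary sequences; the sequence length, peak fitness, and mutation rate are illustrative values, not the paper's.

```python
# Hedged sketch of discrete-generation Eigen quasispecies dynamics:
# selection and mutation act jointly through x' = Q @ (f * x).
import numpy as np
from itertools import product

L, mu = 4, 0.05                                   # sequence length, per-site mutation rate
seqs = np.array(list(product([0, 1], repeat=L)))
f = np.ones(len(seqs)); f[0] = 5.0                # sharp fitness peak at all-zeros
d = (seqs[:, None, :] != seqs[None, :, :]).sum(-1)  # Hamming distances
Q = mu ** d * (1 - mu) ** (L - d)                 # Q[i, j]: j replicates into i

x = np.full(len(seqs), 1.0 / len(seqs))           # uniform initial population
for _ in range(200):
    x = Q @ (f * x)
    x /= x.sum()                                  # keep relative frequencies
print("quasispecies peak frequency:", x[0])
```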

  2. Optimization methods for thermal modeling of optomechanical systems

    NASA Technical Reports Server (NTRS)

    Papalexandris, M.; Milman, M.; Levine-West, M.

    2001-01-01

    The proposed numerical techniques are briefly described and compared to existing algorithms. Their accuracy and robustness are demonstrated through numerical tests with models from ongoing NASA missions.

  3. On application of optimal control to SEIR normalized models: Pros and cons.

    PubMed

    de Pinho, Maria do Rosario; Nogueira, Filipa Nunes

    2017-02-01

    In this work we normalize an SEIR model that incorporates exponential natural birth and death, as well as disease-caused death. We use optimal control to limit, via vaccination, the spread of a generic infectious disease described by the normalized model with an L1 cost. We discuss the pros and cons of normalized SEIR models compared with classical models when optimal control with L1 costs is considered. Our discussion highlights the role of the cost. Additionally, we partially validate our numerical solutions of the optimal control problem with normalized models using the Maximum Principle.
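
    As a minimal sketch of the kind of model being controlled (not the authors' code): a normalized SEIR system with exponential birth/death rate mu and a constant vaccination rate u moving susceptibles to the recovered class. All rate values are illustrative, and disease-induced death is omitted for brevity.

```python
# Illustrative normalized SEIR with vaccination rate u (assumed values).
from scipy.integrate import solve_ivp

beta, sigma, gamma, mu, u = 0.8, 0.25, 0.1, 0.01, 0.05

def seir(t, y):
    S, E, I, R = y
    dS = mu - beta * S * I - (mu + u) * S   # birth, infection, death, vaccination
    dE = beta * S * I - (sigma + mu) * E
    dI = sigma * E - (gamma + mu) * I
    dR = gamma * I + u * S - mu * R
    return [dS, dE, dI, dR]

sol = solve_ivp(seir, (0, 200), [0.95, 0.0, 0.05, 0.0])
print("final infected fraction:", sol.y[2, -1])
```

    An optimal-control treatment would replace the constant u with a time-varying control chosen to minimize an L1 cost; the constant-rate version above only shows the underlying dynamics.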

  4. Decomposition method of complex optimization model based on global sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Qiu, Qingying; Li, Bing; Feng, Peien; Gao, Yu

    2014-07-01

    Existing decomposition methods for complex optimization models are mostly based on disciplines, problems, or components. However, numerous coupling variables appear among the resulting sub-models, which makes decomposed optimization inefficient and ineffective. Although collaborative optimization methods have been proposed to handle the coupling variables, there is no planning strategy for reducing the coupling degree among sub-models at the moment a complex optimization model is first decomposed. This paper therefore proposes a decomposition method based on global sensitivity information. In this method, the complex optimization model is decomposed so as to minimize the total sensitivity between design functions and design variables that end up in different sub-models. Design functions and design variables that are sensitive to each other are assigned to the same sub-model as far as possible, reducing the impact on other sub-models when coupling variables change in one sub-model. Two collaborative optimization models of a gear reducer were built in the multidisciplinary design optimization software iSIGHT; the results show that the proposed decomposition method requires fewer analyses and increases computational efficiency by 29.6%. The method was also applied successfully to the complex optimization of hydraulic excavator working devices, confirming that it reduces the mutual coupling between sub-models. By minimizing the linkages among sub-models after decomposition, the method provides a reference for decomposing complex optimization models and has practical engineering significance.

  5. Regression Model Optimization for the Analysis of Experimental Data

    NASA Technical Reports Server (NTRS)

    Ulbrich, N.

    2009-01-01

    A candidate math model search algorithm was developed at Ames Research Center that determines a recommended math model for the multivariate regression analysis of experimental data. The search algorithm is applicable to classical regression analysis problems as well as wind tunnel strain gage balance calibration analysis applications. The algorithm compares the predictive capability of different regression models using the standard deviation of the PRESS residuals of the responses as a search metric. This search metric is minimized during the search. Singular value decomposition is used during the search to reject math models that lead to a singular solution of the regression analysis problem. Two threshold-dependent constraints are also applied. The first constraint rejects math models with insignificant terms. The second constraint rejects math models with near-linear dependencies between terms. The math term hierarchy rule may also be applied as an optional constraint during or after the candidate math model search. The final term selection of the recommended math model depends on the regressor and response values of the data set, the user's function class combination choice, the user's constraint selections, and the result of the search metric minimization. A frequently used regression analysis example from the literature is used to illustrate the application of the search algorithm to experimental data.
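
    The search metric named above, the standard deviation of the PRESS (leave-one-out) residuals, can be computed without refitting the model n times by using the diagonal of the hat matrix. The sketch below shows that computation under the assumption of a full-column-rank regressor matrix; it is an illustration of the metric, not NASA's search code.

```python
# Sketch: std of PRESS residuals for one candidate regression model.
import numpy as np

def press_std(X, y):
    H = X @ np.linalg.pinv(X)          # hat matrix X (X'X)^-1 X'
    e = y - H @ y                      # ordinary residuals
    e_press = e / (1.0 - np.diag(H))   # leave-one-out (PRESS) residuals
    return e_press.std(ddof=1)

# Candidate math models would be compared by press_std, keeping the smallest.
```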

  6. Modeling of urban growth using cellular automata (CA) optimized by Particle Swarm Optimization (PSO)

    NASA Astrophysics Data System (ADS)

    Khalilnia, M. H.; Ghaemirad, T.; Abbaspour, R. A.

    2013-09-01

    In this paper, two satellite images of Tehran, the capital city of Iran, taken by TM and ETM+ in 1988 and 2010, are used as the base information layers to study changes in the urban patterns of this metropolis. The urban growth patterns of Tehran are extracted over this period using cellular automata with logistic regression functions as transition rules. Furthermore, the weighting coefficients of the parameters affecting urban growth, i.e. distance from urban centers, distance from rural centers, distance from agricultural centers, and neighborhood effects, were selected using PSO. To evaluate the prediction, the percent-correct-match index is calculated. According to the results, by combining optimization techniques with a cellular automata model, urban growth patterns can be predicted with an accuracy of up to 75%.
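
    A bare-bones PSO loop of the kind that could tune such weighting coefficients is sketched below. The `fitness` function is a placeholder for running the CA and scoring it with the percent-correct-match index, and all hyperparameters (inertia, acceleration constants, swarm size) are assumptions.

```python
# Generic PSO sketch (not the paper's implementation).
import numpy as np

rng = np.random.default_rng(0)

def fitness(w):                 # placeholder: run CA with weights w, score match
    return -np.sum((w - 0.3) ** 2)

n_particles, dim, iters = 20, 4, 100
x = rng.uniform(0, 1, (n_particles, dim))       # candidate weight vectors
v = np.zeros_like(x)
pbest, pbest_val = x.copy(), np.array([fitness(p) for p in x])
gbest = pbest[pbest_val.argmax()]

for _ in range(iters):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = np.clip(x + v, 0, 1)
    vals = np.array([fitness(p) for p in x])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = x[improved], vals[improved]
    gbest = pbest[pbest_val.argmax()]
```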

  7. High-throughput generation, optimization and analysis of genome-scale metabolic models.

    SciTech Connect

    Henry, C. S.; DeJongh, M.; Best, A. A.; Frybarger, P. M.; Linsay, B.; Stevens, R. L.

    2010-09-01

    Genome-scale metabolic models have proven to be valuable for predicting organism phenotypes from genotypes. Yet efforts to develop new models are failing to keep pace with genome sequencing. To address this problem, we introduce the Model SEED, a web-based resource for high-throughput generation, optimization and analysis of genome-scale metabolic models. The Model SEED integrates existing methods and introduces techniques to automate nearly every step of this process, taking approximately 48 h to reconstruct a metabolic model from an assembled genome sequence. We apply this resource to generate 130 genome-scale metabolic models representing a taxonomically diverse set of bacteria. Twenty-two of the models were validated against available gene essentiality and Biolog data, with the average model accuracy determined to be 66% before optimization and 87% after optimization.

  8. Optimal mutation rates in dynamic environments: The Eigen model

    NASA Astrophysics Data System (ADS)

    Ancliff, Mark; Park, Jeong-Man

    2010-08-01

    We consider the Eigen quasispecies model with a dynamic environment. For an environment with sharp-peak fitness in which the most-fit sequence moves by k spin-flips each period T, we find an asymptotic stationary state in which the quasispecies population changes periodically, following the periodic environmental change. From this stationary state we estimate the maximum and minimum mutation rates at which a quasispecies can survive in the changing environment and calculate the optimum mutation rate that maximizes the population growth. Interestingly, we find that the optimum mutation rate in the Eigen model is lower than that in the Crow-Kimura model, and at their respective optimum mutation rates the mean fitness in the Eigen model is lower than that in the Crow-Kimura model, suggesting that a mutation process occurring in parallel with replication, as in the Crow-Kimura model, confers an adaptive advantage in a changing environment.

  9. Oneida Tribe of Indians of Wisconsin Energy Optimization Model

    SciTech Connect

    Troge, Michael

    2014-12-01

    Oneida Nation is located in Northeast Wisconsin. The reservation is approximately 96 square miles (8 miles x 12 miles), or 65,000 acres. The greater Green Bay area lies east of and adjacent to the reservation. A county line roughly splits the reservation in half; the west half is in Outagamie County and the east half is in Brown County. Land use is predominantly agricultural on the western 2/3 and suburban on the eastern 1/3 of the reservation. Nearly 5,000 tribally enrolled members live on the reservation, out of a total population of about 21,000. Tribal ownership is scattered across the reservation and amounts to about 23,000 acres. Currently, the Oneida Tribe of Indians of Wisconsin (OTIW) community members and facilities receive the vast majority of electrical and natural gas services from two of the largest investor-owned utilities in the state, WE Energies and Wisconsin Public Service. All urban and suburban buildings have access to natural gas. About 15% of the population and five Tribal facilities are in rural locations and therefore use propane as a primary heating fuel. Wood and oil are also used as primary or supplemental heat sources by a small percentage of the population. Very few renewable energy systems for generating electricity and heat have been installed on the Oneida Reservation. This project was an effort to develop a reasonable renewable energy portfolio that will help Oneida take a leadership role in developing a clean energy economy. The Energy Optimization Model (EOM) is an exploration of the energy opportunities available to the Tribe, intended to provide a decision framework that allows the Tribe to make the wisest energy investment choices, with an organizational desire to establish a renewable portfolio standard (RPS).

  10. Optimal harvesting for a predator-prey agent-based model using difference equations.

    PubMed

    Oremland, Matthew; Laubenbacher, Reinhard

    2015-03-01

    In this paper, a method known as Pareto optimization is applied to solve a multi-objective optimization problem. The system in question is an agent-based model (ABM) in which global dynamics emerge from local interactions. A system of discrete mathematical equations is formulated to capture the dynamics of the ABM; while the equation model is built up analytically from the rules of the ABM, the paper shows how minor changes to the ABM rule set can have a substantial effect on model dynamics. To address this issue, we introduce parameters into the equation model that track such changes. The equation model is amenable to mathematical theory; we show how stability analysis can be performed and validated using ABM data. We then reduce the equation model to a simpler version and implement changes that allow controls from the ABM to be tested using the equations. Cohen's weighted κ is proposed as a measure of similarity between the equation model and the ABM, particularly with respect to the optimization problem. The reduced equation model is used to solve a multi-objective optimization problem via Pareto optimization, a heuristic evolutionary technique. Results show that the equation model is a good fit for ABM data, and Pareto optimization provides a suite of solutions to the multi-objective optimization problem that can be implemented directly in the ABM.
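
    The core of any Pareto approach is extracting the non-dominated set from candidate solutions. The sketch below shows that filter for a minimization problem, assuming each row of `objectives` holds one candidate's objective values (e.g. cost and negated yield for a harvesting policy); it illustrates the concept, not the authors' evolutionary algorithm.

```python
# Sketch: extract the Pareto-optimal (non-dominated) set, minimizing all columns.
import numpy as np

def pareto_front(objectives):
    objectives = np.asarray(objectives, dtype=float)
    keep = np.ones(len(objectives), dtype=bool)
    for i in range(len(objectives)):
        if not keep[i]:
            continue
        # i is dominated if some point is <= in every objective and < in one
        dominated = (np.all(objectives <= objectives[i], axis=1) &
                     np.any(objectives < objectives[i], axis=1))
        if dominated.any():
            keep[i] = False
    return objectives[keep]

print(pareto_front([[1, 5], [2, 2], [3, 4], [4, 1]]))  # drops the dominated [3, 4]
```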

  11. Kernel Method Based Human Model for Enhancing Interactive Evolutionary Optimization

    PubMed Central

    Zhao, Qiangfu; Liu, Yong

    2015-01-01

    A fitness landscape describes the relationship between an individual and its reproductive success in evolutionary computation (EC). However, a discrete and approximate landscape in the original search space may not provide sufficient or accurate information for EC search, especially in interactive EC (IEC). The fitness landscape of human subjective evaluation in IEC is very difficult, if not impossible, to model, even with a hypothesis of what its definition might be. In this paper, we propose a method to establish a human model in a projected high-dimensional search space by kernel classification for enhancing IEC search. Because bivalent logic is the simplest perceptual paradigm, the human model is established on this principle. In the feature space, we design a linear classifier as a human model to obtain user preference knowledge that cannot be captured linearly in the original discrete search space. The human model established by this method predicts potential human perceptual knowledge. With the human model, we design an evolution control method to enhance IEC search. Experimental evaluation with a pseudo-IEC user shows that our proposed model and method can enhance IEC search significantly. PMID:25879050

  12. Kernel method based human model for enhancing interactive evolutionary optimization.

    PubMed

    Pei, Yan; Zhao, Qiangfu; Liu, Yong

    2015-01-01

    A fitness landscape describes the relationship between an individual and its reproductive success in evolutionary computation (EC). However, a discrete and approximate landscape in the original search space may not provide sufficient or accurate information for EC search, especially in interactive EC (IEC). The fitness landscape of human subjective evaluation in IEC is very difficult, if not impossible, to model, even with a hypothesis of what its definition might be. In this paper, we propose a method to establish a human model in a projected high-dimensional search space by kernel classification for enhancing IEC search. Because bivalent logic is the simplest perceptual paradigm, the human model is established on this principle. In the feature space, we design a linear classifier as a human model to obtain user preference knowledge that cannot be captured linearly in the original discrete search space. The human model established by this method predicts potential human perceptual knowledge. With the human model, we design an evolution control method to enhance IEC search. Experimental evaluation with a pseudo-IEC user shows that our proposed model and method can enhance IEC search significantly.

  13. The analysis of optimal singular controls for SEIR model of tuberculosis

    NASA Astrophysics Data System (ADS)

    Marpaung, Faridawaty; Rangkuti, Yulita M.; Sinaga, Marlina S.

    2014-12-01

    The optimality of singular controls for an SEIR model of tuberculosis is analyzed. The controls correspond to the timing of the vaccination and treatment schedules. The optimality of the singular controls is obtained by differentiating a switching function of the model. The results show that the vaccination and treatment controls are singular.

  14. Academic Optimism and Collective Responsibility: An Organizational Model of the Dynamics of Student Achievement

    ERIC Educational Resources Information Center

    Wu, Jason H.

    2013-01-01

    This study was designed to examine the construct of academic optimism and its relationship with collective responsibility in a sample of Taiwan elementary schools. The construct of academic optimism was tested using confirmatory factor analysis, and the whole structural model was tested with a structural equation modeling analysis. The data were…

  15. Optimal bispectrum constraints on single-field models of inflation

    SciTech Connect

    Anderson, Gemma J.; Regan, Donough; Seery, David E-mail: D.Regan@sussex.ac.uk

    2014-07-01

    We use WMAP 9-year bispectrum data to constrain the free parameters of an 'effective field theory' describing fluctuations in single-field inflation. The Lagrangian of the theory contains a finite number of operators associated with unknown mass scales. Each operator produces a fixed bispectrum shape, which we decompose into partial waves in order to construct a likelihood function. Based on this likelihood we are able to constrain four linearly independent combinations of the mass scales. As an example of our framework we specialize our results to the case of 'Dirac-Born-Infeld' and 'ghost' inflation and obtain the posterior probability for each model, which in Bayesian schemes is a useful tool for model comparison. Our results suggest that DBI-like models with two or more free parameters are disfavoured by the data by comparison with single-parameter models in the same class.

  16. Optimization Model for Irrigation Planning in Heterogenous Area

    NASA Astrophysics Data System (ADS)

    Kangrang, Anongrit; Phumphan, Anujit; Chaleeraktrakoon, Chavalit

    This study proposes an allocation LP model that takes into account the heterogeneity of a land area. The area is divided into several sub-areas according to the soil type suitable for each crop, representing heterogeneity in water requirements and crop yield. The proposed model was applied to find the dry-season (January-May) crop pattern of the Nong Wei Irrigation Project, located in the Northeast Region of Thailand. Records of seasonal flow, requested areas, crop water requirements, evaporation, and effective rainfall for the project were used for this illustrative application. Results showed that the proposed LP model gave the optimum crop pattern and net seasonal profit corresponding to the seasonal available water and required area. It provided a higher profit than the existing LP model, which treats the project area as homogeneous. The patterns obtained when considering heterogeneity corresponded to the available land areas of each suitable soil type.
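
    A toy version of such an allocation LP is sketched below: choose crop areas to maximize seasonal profit subject to a water budget and land limits. The profits, water demands, and limits are invented numbers, not the Nong Wei data, and the per-sub-area bounds stand in for the soil-type suitability constraints.

```python
# Illustrative crop-allocation LP (assumed coefficients, not the paper's data).
import numpy as np
from scipy.optimize import linprog

profit = np.array([12.0, 9.0, 15.0])     # profit per unit area, 3 crop/soil pairs
water = np.array([4.0, 2.5, 6.0])        # seasonal water demand per unit area
A_ub = np.vstack([water, np.ones(3)])    # rows: water budget, total land
b_ub = np.array([300.0, 90.0])

# linprog minimizes, so negate profit to maximize it.
res = linprog(-profit, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 60)] * 3)
print("areas:", res.x, "profit:", -res.fun)
```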

  17. Medical Evacuation and Treatment Capabilities Optimization Model (METCOM)

    DTIC Science & Technology

    2005-09-01

    [The indexed excerpt contains only table-of-contents fragments, referencing the Health Service Support (HSS) system, multiperiod/inter-temporal networks, evacuation, and the objectives and structure of the general model.]

  18. Modeling Illicit Drug Use Dynamics and Its Optimal Control Analysis

    PubMed Central

    2015-01-01

    The global burden of death and disability attributable to illicit drug use remains a significant threat to public health in both developed and developing nations. This paper presents a new mathematical modeling framework to investigate the effects of illicit drug use in the community. In our model the transmission process is captured as a social "contact" process between susceptible individuals and illicit drug users. We conduct both epidemic and endemic analyses, with a focus on the threshold dynamics characterized by the basic reproduction number. Using our model, we present illustrative numerical results with a case study in the Cape Town, Gauteng, Mpumalanga, and Durban communities of South Africa. In addition, the basic model is extended to incorporate time-dependent intervention strategies. PMID:26819625

  19. Cooperative recurrent modular neural networks for constrained optimization: a survey of models and applications

    PubMed Central

    2008-01-01

    Constrained optimization problems arise in a wide variety of scientific and engineering applications. Because single recurrent neural networks applied to constrained optimization problems in real-time engineering applications have shown some limitations, cooperative recurrent neural network approaches have been developed to overcome the drawbacks of single networks. This paper surveys in detail work on cooperative recurrent neural networks for solving constrained optimization problems and their engineering applications, and identifies the outstanding models from the viewpoint of both convergence to the optimal solution and model complexity. We provide examples and comparisons to show the advantages of these models in the given applications. PMID:19003467

  20. Optimal control for a tuberculosis model with reinfection and post-exposure interventions.

    PubMed

    Silva, Cristiana J; Torres, Delfim F M

    2013-08-01

    We apply optimal control theory to a tuberculosis model given by a system of ordinary differential equations. Optimal control strategies are proposed to minimize the cost of interventions, considering reinfection and post-exposure interventions. They depend on the parameters of the model and effectively reduce the number of active infectious and persistent latent individuals. The time that the optimal controls remain at the upper bound increases with the transmission coefficient. A general explicit expression for the basic reproduction number is obtained, and its sensitivity with respect to the model parameters is discussed. Numerical results show the usefulness of the optimization strategies.

  1. Isogeometric Analysis for Topology Optimization with a Phase Field Model

    DTIC Science & Technology

    2011-09-01

    have been successfully considered to provide mathematical models for problems in different disciplines; for example, there are models for crack... since ρ can assume intermediate values between 0 (void) and 1 (material). However, these situations, even if consistent with the mathematical... dimensional similitude can be applied. Also, it is possible to reduce the parametric dependence by setting D1 = 1 and hence choosing the

  2. Public Health Analysis Transport Optimization Model v. 1.0

    SciTech Connect

    Beyeler, Walt; Finley, Patrick; Walser, Alex; Frazier, Chris; Mitchell, Michael

    2016-10-05

    PHANTOM models the logistic functions of national public health systems. The system enables public health officials to visualize and coordinate options for public health surveillance, diagnosis, response, and administration in an integrated analytical environment. Users may simulate and analyze system performance under scenarios that represent current conditions or future contingencies, enabling what-if analyses of potential systemic improvements. Public health networks are visualized as interactive maps, with graphical displays of relevant system performance metrics as calculated by the simulation modeling components.

  3. Optimal Estimation with Two Process Models and No Measurements

    DTIC Science & Technology

    2015-08-01

    example is provided in which a process model based on the dynamics of a ballistic projectile is blended with an inertial navigation system. The... inertial navigation system. The results show that under certain conditions, the algorithm provides estimates of the projectile states with less... with such systems. This problem could present itself in navigation applications. In the example used here, one process model comes from projectile

  4. Optimization-driven identification of genetic perturbations accelerates the convergence of model parameters in ensemble modeling of metabolic networks.

    PubMed

    Zomorrodi, Ali R; Lafontaine Rivera, Jimmy G; Liao, James C; Maranas, Costas D

    2013-09-01

    The ensemble modeling (EM) approach has shown promise in capturing kinetic and regulatory effects in the modeling of metabolic networks. Efficacy of the EM procedure relies on the identification of model parameterizations that adequately describe all observed metabolic phenotypes upon perturbation. In this study, we propose an optimization-based algorithm for the systematic identification of genetic/enzyme perturbations to maximally reduce the number of models retained in the ensemble after each round of model screening. The key premise here is to design perturbations that will maximally scatter the predicted steady-state fluxes over the ensemble parameterizations. We demonstrate the applicability of this procedure for an Escherichia coli metabolic model of central metabolism by successively identifying single, double, and triple enzyme perturbations that cause the maximum degree of flux separation between models in the ensemble. Results revealed that optimal perturbations are not always located close to reaction(s) whose fluxes are measured, especially when multiple perturbations are considered. In addition, there appears to be a maximum number of simultaneous perturbations beyond which no appreciable increase in the divergence of flux predictions is achieved. Overall, this study provides a systematic way of optimally designing genetic perturbations for populating the ensemble of models with relevant model parameterizations.

  5. Optimal post-experiment estimation of poorly modeled dynamic systems

    NASA Technical Reports Server (NTRS)

    Mook, D. Joseph

    1988-01-01

    Recently, a novel strategy for post-experiment state estimation of discretely-measured dynamic systems has been developed. The method accounts for errors in the system dynamic model equations in a more general and rigorous manner than do filter-smoother algorithms. The dynamic model error terms do not require the usual process noise assumptions of zero-mean, symmetrically distributed random disturbances. Instead, the model error terms require no prior assumptions other than piecewise continuity. The resulting state estimates are more accurate than filters for applications in which the dynamic model error clearly violates the typical process noise assumptions, and the available measurements are sparse and/or noisy. Estimates of the dynamic model error, in addition to the states, are obtained as part of the solution of a two-point boundary value problem, and may be exploited for numerous reasons. In this paper, the basic technique is explained, and several example applications are given. Included among the examples are both state estimation and exploitation of the model error estimates.

  6. A Hydro System Modeling Hierarchy to Optimize the Operation of the BC Hydroelectric System

    NASA Astrophysics Data System (ADS)

    Shawwash, Z.

    2012-12-01

    We present the Hydro System Modeling Hierarchy that we have developed to optimize the operation of the BC Hydro system in British Columbia, Canada. The Hierarchy consists of a number of simulation and optimization models developed over the past twelve years in a research program under the Grant-in-Aid Agreement between BC Hydro and the Department of Civil Engineering at UBC. We first provide an overview of the BC Hydro system, then present our modeling framework and discuss a number of optimization modeling tools that we have developed and that are currently in use at BC Hydro, and briefly outline ongoing research and model development work supported by BC Hydro and leveraged by Natural Sciences and Engineering Research Council (NSERC) Collaborative Research and Development (CRD) grants.

  7. Modeling multiple experiments using regularized optimization: A case study on bacterial glucose utilization dynamics.

    PubMed

    Hartmann, András; Lemos, João M; Vinga, Susana

    2015-08-01

    The aim of inverse modeling is to capture a system's dynamics through a set of parameterized Ordinary Differential Equations (ODEs). Parameters are often required to fit multiple repeated measurements or different experimental conditions. This typically leads to a multi-objective optimization problem that can be formulated as a non-convex optimization problem. Modeling of glucose utilization by Lactococcus lactis bacteria is considered using in vivo Nuclear Magnetic Resonance (NMR) measurements in perturbation experiments. We propose an ODE model based on a modified time-varying exponential decay that is flexible enough to model several different experimental conditions. The starting point is an over-parameterized non-linear model that is then simplified through an optimization procedure with regularization penalties. For the parameter estimation, a stochastic global optimization method, particle swarm optimization (PSO), is used. A regularization term is introduced into the identification, imposing that parameters should be the same across several experiments in order to identify a general model. A function is then fitted to the remaining parameter that varies across experiments, so that new experiments can be predicted for any initial condition. The method is cross-validated by fitting the model to two experiments and validating on the third. Finally, the proposed model is integrated with existing models of glycolysis to reconstruct the remaining metabolites. The method was found useful as a general procedure for reducing the number of parameters of unidentifiable and over-parameterized models, thus supporting feature selection methods for parametric models.
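
    A hedged sketch of the regularization idea follows: fit the same decay model to several experiments while penalizing differences in the parameters that should be shared across them. The synthetic data, penalty weight, and simple exponential form are illustrative stand-ins for the paper's NMR data and modified decay model, and a local least-squares solver replaces PSO for brevity.

```python
# Illustrative regularized multi-experiment fit (assumed data and model form).
import numpy as np
from scipy.optimize import least_squares

t = np.linspace(0, 10, 50)
experiments = [5.0 * np.exp(-0.8 * t), 4.8 * np.exp(-0.75 * t)]  # synthetic data
lam = 10.0                                     # regularization strength

def residuals(theta):
    amp, ks = theta[0], theta[1:]              # shared amplitude, per-experiment rates
    res = [y - amp * np.exp(-k * t) for y, k in zip(experiments, ks)]
    res.append(lam * (ks - ks.mean()))         # penalty pulls the rates together
    return np.concatenate(res)

fit = least_squares(residuals, x0=np.array([1.0, 0.5, 0.5]))
print("amplitude and decay rates:", fit.x)
```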

  8. Process Cost Modeling for Multi-Disciplinary Design Optimization

    NASA Technical Reports Server (NTRS)

    Bao, Han P.; Freeman, William (Technical Monitor)

    2002-01-01

    For early design concepts, the conventional approach to cost is normally some kind of parametric weight-based cost model. There is now ample evidence that this approach can be misleading and inaccurate. By the nature of its development, a parametric cost model requires historical data and is valid only if the new design is analogous to those for which the model was derived. Advanced aerospace vehicles have no historical production data and are nowhere near the vehicles of the past. Using an existing weight-based cost model would only lead to errors and distortions of the true production cost. This report outlines the development of a process-based cost model in which the physical elements of the vehicle are costed according to a first-order dynamics model. This theoretical cost model, first advocated by early work at MIT, has been expanded to cover the basic structures of an advanced aerospace vehicle. Elemental costs based on the geometry of the design can be summed up to provide an overall estimation of the total production cost for a design configuration. This capability to directly link any design configuration to realistic cost estimation is a key requirement for high payoff MDO problems. Another important consideration in this report is the handling of part or product complexity. Here the concept of cost modulus is introduced to take into account variability due to different materials, sizes, shapes, precision of fabrication, and equipment requirements. The most important implication of the development of the proposed process-based cost model is that different design configurations can now be quickly related to their cost estimates in a seamless calculation process easily implemented on any spreadsheet tool. In successive sections, the report addresses the issues of cost modeling as follows. First, an introduction is presented to provide the background for the research work. Next, a quick review of cost estimation techniques is made with the intention to

  9. Optimizing modelling in iterative image reconstruction for preclinical pinhole PET

    NASA Astrophysics Data System (ADS)

    Goorden, Marlies C.; van Roosmalen, Jarno; van der Have, Frans; Beekman, Freek J.

    2016-05-01

    The recently developed versatile emission computed tomography (VECTor) technology enables high-energy SPECT and simultaneous SPECT and PET of small animals at sub-mm resolutions. VECTor uses dedicated clustered pinhole collimators mounted in a scanner with three stationary large-area NaI(Tl) gamma detectors. Here, we develop and validate dedicated image reconstruction methods that compensate for image degradation by incorporating accurate models for the transport of high-energy annihilation gamma photons. Ray tracing software was used to calculate photon transport through the collimator structures and into the gamma detector. Input to this code are several geometric parameters estimated from system calibration with a scanning 99mTc point source. Effects on reconstructed images of (i) modelling variable depth-of-interaction (DOI) in the detector, (ii) incorporating photon paths that go through multiple pinholes (‘multiple-pinhole paths’ (MPP)), and (iii) including various amounts of point spread function (PSF) tail were evaluated. Imaging 18F in resolution and uniformity phantoms showed that including large parts of PSFs is essential to obtain good contrast-noise characteristics and that DOI modelling is highly effective in removing deformations of small structures, together leading to 0.75 mm resolution PET images of a hot-rod Derenzo phantom. Moreover, MPP modelling reduced the level of background noise. These improvements were also clearly visible in mouse images. Performance of VECTor can thus be significantly improved by accurately modelling annihilation gamma photon transport.

  10. On the model-based optimization of secreting mammalian cell (GS-NS0) cultures.

    PubMed

    Kiparissides, A; Pistikopoulos, E N; Mantalaris, A

    2015-03-01

    The global bio-manufacturing industry requires improved process efficiency to satisfy the increasing demands for biochemicals, biofuels, and biologics. The use of model-based techniques can facilitate the reduction of unnecessary experimentation and reduce labor and operating costs by identifying the most informative experiments and providing strategies to optimize the bioprocess at hand. Herein, we investigate the potential of a research methodology that combines model development, parameter estimation, global sensitivity analysis, and selection of optimal feeding policies via dynamic optimization methods to improve the efficiency of an industrially relevant bioprocess. Data from a set of batch experiments was used to estimate values for the parameters of an unstructured model describing monoclonal antibody (mAb) production in GS-NS0 cell cultures. Global Sensitivity Analysis (GSA) highlighted parameters with a strong effect on the model output and data from a fed-batch experiment were used to refine their estimated values. Model-based optimization was used to identify a feeding regime that maximized final mAb titer. An independent fed-batch experiment was conducted to validate both the results of the optimization and the predictive capabilities of the developed model. The successful integration of wet-lab experimentation and mathematical model development, analysis, and optimization represents a unique, novel, and interdisciplinary approach that addresses the complicated research and industrial problem of model-based optimization of cell based processes.

  11. Reproducing Phenomenology of Peroxidation Kinetics via Model Optimization

    NASA Astrophysics Data System (ADS)

    Ruslanov, Anatole D.; Bashylau, Anton V.

    2010-06-01

    We studied mathematical modeling of lipid peroxidation using a biochemical model system of iron (II)-ascorbate-dependent lipid peroxidation of rat hepatocyte mitochondrial fractions. We found that antioxidants extracted from plants demonstrate a high intensity of peroxidation inhibition. We simplified the system of differential equations that describes the kinetics of the mathematical model to a first order equation, which can be solved analytically. Moreover, we endeavor to algorithmically and heuristically recreate the processes and construct an environment that closely resembles the corresponding natural system. Our results demonstrate that it is possible to theoretically predict both the kinetics of oxidation and the intensity of inhibition without resorting to analytical and biochemical research, which is important for cost-effective discovery and development of medical agents with antioxidant action from the medicinal plants.

  12. Optimized continuous pharmaceutical manufacturing via model-predictive control.

    PubMed

    Rehrl, Jakob; Kruisz, Julia; Sacher, Stephan; Khinast, Johannes; Horn, Martin

    2016-08-20

    This paper demonstrates the application of model-predictive control to a feeding blending unit used in continuous pharmaceutical manufacturing. The goal of this contribution is, on the one hand, to highlight the advantages of the proposed concept compared to conventional PI controllers, and, on the other hand, to present a step-by-step guide for controller synthesis. The derivation of the required mathematical plant model is given in detail and all the steps required to develop a model-predictive controller are shown. Compared to conventional concepts, the proposed approach conveniently handles constraints (e.g. mass hold-up in the blender) and offers a straightforward, easy-to-tune controller setup. The concept is implemented in a simulation environment. In order to realize it on a real system, additional aspects (e.g., state estimation, measurement equipment) will have to be investigated.
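
    The receding-horizon idea can be shown on a toy scalar hold-up model: at each step, solve a finite-horizon tracking problem over future feed rates, apply only the first move, and repeat. The model, weights, and numbers below are assumptions, and the unconstrained batch least-squares solution omits the constraint handling the paper emphasizes.

```python
# Minimal receding-horizon sketch for a scalar hold-up m[k+1] = m[k] + dt*(u[k] - d).
import numpy as np

dt, d, N = 1.0, 0.4, 10          # step, outflow demand, prediction horizon
q, r = 1.0, 0.1                  # tracking vs. control-effort weights
m, m_ref = 2.0, 5.0              # current and target hold-up

def mpc_step(m):
    # Minimize sum q*(m_k - m_ref)^2 + r*u_k^2 over u_0..u_{N-1}.
    # Since m_k = m + dt*sum(u_0..u_{k-1}) - k*dt*d, the problem is
    # linear-quadratic in the stacked inputs and solvable by least squares.
    L = dt * np.tril(np.ones((N, N)))             # cumulative-input map
    b = m_ref - m + dt * d * np.arange(1, N + 1)  # required net addition per step
    A = np.vstack([np.sqrt(q) * L, np.sqrt(r) * np.eye(N)])
    rhs = np.concatenate([np.sqrt(q) * b, np.zeros(N)])
    u = np.linalg.lstsq(A, rhs, rcond=None)[0]
    return u[0]                                   # apply only the first move

for _ in range(20):
    m = m + dt * (mpc_step(m) - d)
print("hold-up after 20 steps:", round(m, 3))
```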

  13. Optimal Estimation of Phenological Crop Model Parameters for Rice (Oryza sativa)

    NASA Astrophysics Data System (ADS)

    Sharifi, H.; Hijmans, R. J.; Espe, M.; Hill, J. E.; Linquist, B.

    2015-12-01

    Crop phenology models are important components of crop growth models. Generally only a few phenology parameters are calibrated while default cardinal temperatures are used, which can lead to a temperature-dependent systematic error in phenology prediction. Our objective was to evaluate different optimization approaches in the Oryza2000 and CERES-Rice phenology sub-models to assess the importance of optimizing cardinal temperatures for model performance and systematic error. We used two optimization approaches: the typical single-stage optimization (planting to heading) and a three-stage optimization (planting to panicle initiation (PI), PI to heading (HD), and HD to physiological maturity (MT)) in which all model parameters are optimized simultaneously. Data for this study were collected over three years and six locations on seven California rice cultivars. A temperature-dependent systematic error was found for all cultivars and stages; however, it was generally small (systematic error < 2.2). Both optimization approaches in both models resulted in only small changes in cardinal temperatures relative to the default values, and thus optimizing cardinal temperatures did not affect systematic error or model performance. Compared to single-stage optimization, three-stage optimization had little effect on determining the time to PI or HD but significantly improved the precision in determining the time from HD to MT: the RMSE was reduced from an average of 6 to 3.3 in Oryza2000 and from 6.6 to 3.8 in CERES-Rice. With regard to systematic error, we found a trade-off between RMSE and systematic error when the optimization objective was set to minimize either one. It is therefore important to find the limits within which the trade-off between RMSE and systematic error is acceptable, especially in climate change studies, where this can prevent erroneous conclusions.
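
    The kind of building block being calibrated here is a thermal response defined by cardinal temperatures, accumulated daily until a stage threshold is reached. The sketch below uses a simple piecewise-linear (triangular) response; the temperatures, synthetic weather, and threshold are illustrative assumptions, not values from Oryza2000 or CERES-Rice.

```python
# Sketch: cardinal-temperature response and thermal-time accumulation.
import numpy as np

def thermal_response(T, Tbase=8.0, Topt=30.0, Tmax=42.0):
    T = np.asarray(T, dtype=float)
    r = np.zeros_like(T)
    up = (T > Tbase) & (T <= Topt)
    down = (T > Topt) & (T < Tmax)
    r[up] = (T[up] - Tbase) / (Topt - Tbase)     # rises from Tbase to Topt
    r[down] = (Tmax - T[down]) / (Tmax - Topt)   # falls from Topt to Tmax
    return r

daily_T = 18 + 8 * np.sin(np.linspace(0, 3, 150))      # synthetic temperatures
development = np.cumsum(thermal_response(daily_T))
days_to_heading = int(np.argmax(development >= 60.0))  # hypothetical threshold
print("predicted days to heading:", days_to_heading)
```

    Calibration would then adjust Tbase, Topt, Tmax, and the stage thresholds to minimize the RMSE of predicted stage dates, per stage in the three-stage approach.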

  14. A Space-Time Flow Optimization Model for Neighborhood Evacuation

    DTIC Science & Technology

    2010-03-01

    We model the minimum-cost evacuation behavior through time with formulation SPACETIME below. Index sets: i, j ∈ L (locations); t ∈ T (time periods)... and ensures that there are no negative flows. C. THE MISSION CANYON EXAMPLE. We apply model SPACETIME to the Mission Canyon neighborhood. [The remainder of the indexed excerpt is an unrecoverable fragment of a results table of evacuation clearance times.]

  15. The Search for "Optimal" Cutoff Properties: Fit Index Criteria in Structural Equation Modeling

    ERIC Educational Resources Information Center

    Sivo, Stephen A.; Xitao, Fan; Witta, E. Lea; Willse, John T.

    2006-01-01

    This study is a partial replication of L. Hu and P. M. Bentler's (1999) fit criteria work. The purpose of this study was twofold: (a) to determine whether cut-off values vary according to which model is the true population model for a dataset and (b) to identify which of 13 fit indexes behave optimally by retaining all of the correct models while…

  16. A Linear Programming Model to Optimize Various Objective Functions of a Foundation Type State Support Program.

    ERIC Educational Resources Information Center

    Matzke, Orville R.

    The purpose of this study was to formulate a linear programming model to simulate a foundation type support program and to apply this model to a state support program for the public elementary and secondary school districts in the State of Iowa. The model was successful in producing optimal solutions to five objective functions proposed for…

  17. Optimization of monitoring networks based on uncertainty quantification of model predictions of contaminant transport

    NASA Astrophysics Data System (ADS)

    Vesselinov, V. V.; Harp, D.

    2010-12-01

    The process of decision making to protect groundwater resources requires a detailed estimation of the uncertainties in model predictions. Various uncertainties associated with modeling a natural system contribute to the uncertainties in the model predictions: (1) measurement and computational errors; (2) uncertainties in the conceptual model and model-parameter estimates; and (3) simplifications in model setup and in the numerical representation of governing processes. Due to this combination of factors, the sources of predictive uncertainty are generally difficult to quantify individually. Decision support related to the optimal design of monitoring networks requires (1) detailed analyses of the existing uncertainties in model predictions of groundwater flow and contaminant transport, and (2) optimization of the proposed monitoring network locations in terms of their efficiency in detecting contaminants and providing early warning. We apply existing and newly-proposed methods to quantify predictive uncertainties and to optimize well locations. An important aspect of the analysis is the application of a newly-developed optimization technique based on coupling the Particle Swarm and Levenberg-Marquardt optimization methods, which proved to be robust and computationally efficient. These techniques and algorithms are bundled in a software package called MADS. MADS (Model Analyses for Decision Support) is an object-oriented code capable of performing various types of model analyses and supporting model-based decision making. The code can be executed under different computational modes, which include (1) sensitivity analyses (global and local), (2) Monte Carlo analysis, (3) model calibration, (4) parameter estimation, (5) uncertainty quantification, and (6) model selection. The code can be externally coupled with any existing model simulator through integrated modules that read/write input and output files using a set of template and instruction files (consistent with the PEST

  18. 3D head model classification using optimized EGI

    NASA Astrophysics Data System (ADS)

    Tong, Xin; Wong, Hau-san; Ma, Bo

    2006-02-01

    With the general availability of 3D digitizers and scanners, 3D graphical models are used widely in a variety of applications. This has led to the development of search engines for 3D models. In particular, 3D head model classification and retrieval have received more and more attention in view of their many potential applications in criminal identification, computer animation, the movie industry, and the medical industry. This paper addresses the 3D head model classification problem using 2D subspace analysis methods such as 2D principal component analysis (2DPCA [3]) and 2D Fisher discriminant analysis (2DLDA [5]). It takes advantage of the fact that the histogram is a 2D image, from which the most useful information can be extracted to obtain a good result accordingly. As a result, there are two main advantages: first, the same classification rate can be obtained with less computation; second, the dimensionality can be reduced further than with PCA, giving higher efficiency.

  19. Optimizing technology investments: a broad mission model approach

    NASA Technical Reports Server (NTRS)

    Shishko, R.

    2003-01-01

    A long-standing problem in NASA is how to allocate scarce technology development resources across advanced technologies in order to best support a large set of future potential missions. Within NASA, two orthogonal paradigms have received attention in recent years: the real-options approach and the broad mission model approach. This paper focuses on the latter.

  20. Optimized feed-forward neural-network algorithm trained for cyclotron-cavity modeling

    NASA Astrophysics Data System (ADS)

    Mohamadian, Masoumeh; Afarideh, Hossein; Ghergherehchi, Mitra

    2017-01-01

    The cyclotron cavity presented in this paper is modeled by a feed-forward neural network trained with the authors' optimized back-propagation (BP) algorithm. The training samples were obtained from simulation results for a number of defined situations and parameters, generated parametrically using CST MWS software; the conventional BP algorithm, with different hidden-neuron numbers, structures, and other parameters such as the learning rate, was also applied for comparison. The present study shows that an optimized feed-forward network (FFN) can estimate the cyclotron-model parameters with an acceptable error function. A neural network trained by an optimized algorithm therefore provides a proper approximation and an acceptable ability for modeling the proposed structure. The cyclotron-cavity parameter-modeling results demonstrate that an FFN trained by the optimized algorithm could be a suitable method for estimating the design parameters in this case.
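
    For reference, a bare-bones feed-forward network trained by plain back-propagation looks like the sketch below. It stands in for the authors' optimized BP variant (their specific modifications are not reproduced), with one hidden layer, an MSE loss, and a synthetic 1-D target in place of the CST simulation data.

```python
# Plain back-propagation on a one-hidden-layer network (illustrative only).
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (200, 1))
y = np.sin(3 * X)                                  # toy stand-in for cavity data

W1, b1 = rng.normal(0, 0.5, (1, 16)), np.zeros(16)
W2, b2 = rng.normal(0, 0.5, (16, 1)), np.zeros(1)
lr = 0.05

for epoch in range(2000):
    h = np.tanh(X @ W1 + b1)                       # forward pass
    y_hat = h @ W2 + b2
    err = y_hat - y                                # derivative of MSE wrt y_hat
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)               # back-propagate through tanh
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2                 # gradient-descent updates
    W1 -= lr * gW1; b1 -= lr * gb1

print("final MSE:", float((err ** 2).mean()))
```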

  1. Forest and Agricultural Sector Optimization Model (FASOM): Model structure and policy applications. Forest Service research paper

    SciTech Connect

    Adams, D.M.; Alig, R.J.; Callaway, J.M.; McCarl, B.A.; Winnett, S.M.

    1996-09-01

    The Forest and Agricultural Sector Optimization Model (FASOM) is a dynamic, nonlinear programming model of the forest and agricultural sectors in the United States. The FASOM model was initially developed to evaluate the welfare and market impacts of alternative policies for sequestering carbon in trees but has also been applied to a wider range of forest and agricultural sector policy scenarios. The authors describe the model structure and give selected examples of policy applications. A summary of the data sources, the input data file format, and the methods used to develop the input data files is also provided.

  2. A universal optimization strategy for ant colony optimization algorithms based on the Physarum-inspired mathematical model.

    PubMed

    Zhang, Zili; Gao, Chao; Liu, Yuxin; Qian, Tao

    2014-09-01

    Ant colony optimization (ACO) algorithms often fall into local optima and have low search efficiency when solving the travelling salesman problem (TSP). To address these shortcomings, this paper proposes a universal optimization strategy for updating the pheromone matrix in ACO algorithms. The new strategy takes advantage of a unique feature of the Physarum-inspired mathematical model (PMM): critical paths are reserved during the evolution of its adaptive networks. The optimized algorithms, denoted PMACO algorithms, enhance the amount of pheromone on the critical paths and promote exploitation of the optimal solution. Experimental results on synthetic and real networks show that the PMACO algorithms are more efficient and robust than traditional ACO algorithms and are adaptable to solving the TSP with single or multiple objectives. We further analyse the influence of parameters on the performance of the PMACO algorithms and, based on these analyses, work out the best values of these parameters for the TSP.
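
    The pheromone-update step that such a strategy modifies is sketched below. The extra deposit on the best tour only loosely mimics how PMACO reinforces Physarum critical paths; the actual PMM coupling is not reproduced, and `elite_boost`, `rho`, and `Q` are assumed values.

```python
# Sketch of an ACO pheromone update with extra reinforcement of the best tour.
import numpy as np

def update_pheromone(tau, tours, lengths, rho=0.1, Q=1.0, elite_boost=2.0):
    tau *= (1.0 - rho)                       # evaporation on every edge
    best = int(np.argmin(lengths))
    for t, (tour, L) in enumerate(zip(tours, lengths)):
        w = Q / L * (elite_boost if t == best else 1.0)
        for i, j in zip(tour, tour[1:] + tour[:1]):  # closed tour's edges
            tau[i, j] += w                   # deposit along each used edge
            tau[j, i] += w                   # symmetric TSP
    return tau

tau = np.ones((4, 4))
tau = update_pheromone(tau, [[0, 1, 2, 3], [0, 2, 1, 3]], [10.0, 12.0])
```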

  3. Optimizing for minimum weight when two different finite element models and analyses are required

    NASA Technical Reports Server (NTRS)

    Hall, Jeffrey C.

    1989-01-01

    The Finite Element Structural Optimization Program's (FESOP) ability to perform minimum-weight optimization using two different finite element analyses and models is discussed. FESOP uses the ADS optimizer developed by Dr. Garret Vanderplaats to solve the nonlinear constrained optimization problem. The design optimization problem requires a response-spectrum analysis and model to evaluate the stress and displacement constraints, but a frequency analysis and model to calculate the natural frequencies used in evaluating the frequency-range constraints. The results of both the successful and unsuccessful approaches used to solve this difficult weight-minimization problem are summarized. They show that no single ADS optimization algorithm worked in all cases; however, the Sequential Convex Programming and Modified Method of Feasible Directions algorithms were the most successful.

  4. Delay Differential Model for Tumour-Immune Response with Chemoimmunotherapy and Optimal Control

    PubMed Central

    Rihan, F. A.; Abdelrahman, D. H.; Al-Maskari, F.; Ibrahim, F.; Abdeen, M. A.

    2014-01-01

    We present a delay differential model with optimal control that describes the interactions of tumour cells and immune response cells under external therapy. An intracellular delay is incorporated into the model to account for the time required to stimulate the effector cells. The optimal control variables are incorporated to identify the best treatment strategy with minimum side effects, by blocking the production of new tumour cells and keeping the number of normal cells above 75% of its carrying capacity. Existence of the optimal control pair is established and the optimality system is derived; Pontryagin's maximum principle is applied to characterize the optimal controls. The model displays a tumour-free steady state and up to three coexisting steady states. The numerical results show that the optimal treatment strategies reduce the tumour cell load and increase the effector cells after a few days of therapy. The combination protocol of immunochemotherapy performs better than the standard protocol of chemotherapy alone. PMID:25197319

  5. Land Surface Model and Particle Swarm Optimization Algorithm Based on the Model-Optimization Method for Improving Soil Moisture Simulation in a Semi-Arid Region.

    PubMed

    Yang, Qidong; Zuo, Hongchao; Li, Weidong

    2016-01-01

    Improving the capability of land-surface process models to simulate soil moisture assists in better understanding the atmosphere-land interaction. In semi-arid regions, limited near-surface observational data and large errors in large-scale parameters obtained by remote sensing lead to uncertainties in land-surface parameters, which can cause large offsets between the simulated results of land-surface process models and observations of soil moisture. In this study, observational data from the Semi-Arid Climate Observatory and Laboratory (SACOL) station on the semi-arid loess plateau of China were divided into three datasets: summer, autumn, and summer-autumn. By combining the particle swarm optimization (PSO) algorithm with the land-surface process model SHAW (Simultaneous Heat and Water), the soil and vegetation parameters that are related to soil moisture but difficult to observe directly were optimized using the three datasets. On this basis, the SHAW model was run with the optimized parameters to simulate the characteristics of the land-surface process on the semi-arid loess plateau; the default SHAW model was run with the same atmospheric forcing as a comparison test. Simulation results revealed the following: parameters optimized by the PSO algorithm improved the simulations of soil moisture and latent heat flux in all tests, clearly reducing the differences between simulated results and observations, but adopting the optimized parameters could not simultaneously improve the simulation of net radiation, sensible heat flux, and soil temperature. Optimized soil and vegetation parameters based on the different datasets have the same order of magnitude but are not identical; the soil parameters vary only slightly, whereas the vegetation parameters vary over a large range.

  6. Optimal cycling time trial position models: aerodynamics versus power output and metabolic energy.

    PubMed

    Fintelman, D M; Sterling, M; Hemida, H; Li, F-X

    2014-06-03

    The aerodynamic drag of a cyclist in time trial (TT) position is strongly influenced by the torso angle. While decreasing the torso angle reduces drag, it limits the physiological functioning of the cyclist. The aims of this study were therefore to predict the optimal TT cycling position as a function of cycling speed and to determine at which speed the aerodynamic power losses start to dominate. Two models were developed to determine the optimal torso angle: a 'Metabolic Energy Model' and a 'Power Output Model'. The Metabolic Energy Model minimises the required cycling energy expenditure, while the Power Output Model maximises the cyclists' power output. The input parameters were experimentally collected from 19 TT cyclists at different torso-angle positions (0-24°). The results showed that for both models the optimal torso angle depends strongly on the cycling speed, with decreasing torso angles at increasing speeds. The aerodynamic losses outweigh the power losses at cycling speeds above 46 km/h. However, a fully horizontal torso is not optimal: for speeds below 30 km/h, it is beneficial to ride in a more upright TT position. The two model outputs were not completely similar, due to the different model approaches. The Metabolic Energy Model is suited to endurance events, while the Power Output Model is more suitable for sprinting or variable conditions (wind, undulating course, etc.). It is suggested that, despite some limitations, the models give valuable information about improving cycling performance by optimising the TT position.
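
    A back-of-envelope version of the trade-off can be computed directly: aerodynamic power grows with v³, while a lower torso angle shrinks the drag area CdA but also shrinks the sustainable power. The CdA and power curves below are invented for illustration, not the paper's measured 19-rider data.

```python
# Toy torso-angle trade-off: maximize steady speed given fake CdA(angle)
# and sustainable-power(angle) curves (all coefficients are assumptions).
import numpy as np

angles = np.linspace(0, 24, 25)                      # torso angle, degrees
CdA = 0.20 + 0.0025 * angles                         # drag area rises with angle
P_max = 340 - 1.2 * (24 - angles)                    # power falls as torso drops
rho, crr, mass, g = 1.2, 0.004, 80.0, 9.81

def speed(cda, power):
    # Solve P = 0.5*rho*cda*v^3 + crr*m*g*v by fixed-point iteration.
    v = 10.0
    for _ in range(50):
        v = ((power - crr * mass * g * v) / (0.5 * rho * cda)) ** (1 / 3)
    return v

speeds = [speed(c, p) for c, p in zip(CdA, P_max)]
print("fastest torso angle (deg):", angles[int(np.argmax(speeds))])
```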

  7. Environmental optimal control strategies based on plant canopy photosynthesis responses and greenhouse climate model

    NASA Astrophysics Data System (ADS)

    Deng, Lujuan; Xie, Songhe; Cui, Jiantao; Liu, Tao

    2006-11-01

    Enhancing grower income and saving energy are the essential goals of intelligent greenhouse environment optimal control. Greenhouse environment control systems exhibit uncertainty, imprecision, nonlinearity, strong coupling, large inertia, and multiple time scales, which make optimal control, and especially model-based optimal control, difficult. This paper therefore addresses the optimal control problem of the plant environment in an intelligent greenhouse. A hierarchical greenhouse environment control system was constructed: in the first level, data measurement is carried out and the actuators are controlled; in the second level, the optimal setpoints of the controlled climate variables in the greenhouse are calculated and chosen; in the third level, market analysis and planning are completed. The problem of optimal setpoint selection is discussed in this paper. First, a model of plant canopy photosynthesis responses and a greenhouse climate model were constructed. Then, drawing on the experience of planting experts, the daytime optimization goals were set according to the principle of maximal photosynthesis rate, while at night, subject to conditions for good plant growth, the goals were set by the principle of energy saving. The optimal environmental control setpoints were then computed by a genetic algorithm (GA). Comparing the optimization results with recorded data from the real system shows that the method is reasonable and can achieve energy saving and a maximal photosynthesis rate in an intelligent greenhouse.

  8. Mathematical modeling and optimization of cellulase protein production using Trichoderma reesei RL-P37

    SciTech Connect

    Tholudur, A.; Ramirez, W.F.; McMillan, J.D.

    1999-07-01

    The enzyme cellulase, a multienzyme complex made up of several proteins, catalyzes the conversion of cellulose to glucose in an enzymatic hydrolysis-based biomass-to-ethanol process. Production of cellulase enzyme proteins in large quantities using the fungus Trichoderma reesei requires understanding the dynamics of growth and enzyme production. The method of neural network parameter function modeling, which combines the approximation capabilities of neural networks with fundamental process knowledge, is utilized to develop a mathematical model of this dynamic system. In addition, kinetic models are also developed. Laboratory data from bench-scale fermentations involving growth and protein production by T. reesei on lactose and xylose are used to estimate the parameters in these models. The relative performance of the various models and the results of optimizing these models on two different performance measures are presented. The neural network-based model yields an approximately 33% lower root-mean-squared error (RMSE) in protein predictions and about 40% lower total RMSE than the kinetic models; its RMSE in predicting optimal conditions for the two performance indices is about 67% and 40% lower, respectively. Thus, both model predictions and optimization results from the neural network-based model are found to be closer to the experimental data than those of the kinetic models developed in this work. It is shown that the neural network parameter function modeling method can be useful as a macromodeling technique to rapidly develop dynamic models of a process.

  9. Using models for the optimization of hydrologic monitoring

    USGS Publications Warehouse

    Fienen, Michael N.; Hunt, Randall J.; Doherty, John E.; Reeves, Howard W.

    2011-01-01

    Hydrologists are often asked what kind of monitoring network can most effectively support science-based water-resources management decisions. Currently (2011), hydrologic monitoring locations often are selected by addressing observation gaps in the existing network or non-science issues such as site access. A model might then be calibrated to available data and applied to a prediction of interest (regardless of how well-suited that model is for the prediction). However, modeling tools are available that can inform which locations and types of data provide the most 'bang for the buck' for a specified prediction. Put another way, the hydrologist can determine which observation data most reduce the model uncertainty around a specified prediction. An advantage of such an approach is the maximization of limited monitoring resources because it focuses on the difference in prediction uncertainty with or without additional collection of field data. Data worth can be calculated either through the addition of new data or through the subtraction of existing information by reducing monitoring efforts (Beven, 1993). The latter generally is not widely requested, as there is explicit recognition that the worth calculated is fundamentally dependent on the prediction specified. If a water manager needs a new prediction, the benefits of reducing the scope of a monitoring effort, based on an old prediction, may be erased by the loss of information important for the new prediction. This fact sheet focuses on the worth or value of new data collection by quantifying the reduction in prediction uncertainty achieved by adding a monitoring observation. This calculation of worth can be performed for multiple potential locations (and types) of observations, which then can be ranked for their effectiveness in reducing uncertainty around the specified prediction. This is implemented using a Bayesian approach with the PREDUNC utility in the parameter estimation software suite PEST (Doherty, 2010).
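
    The data-worth calculation sketched in this fact sheet has a compact linear-Bayesian form: for a prediction with parameter sensitivity y and prior parameter covariance C, adding one observation with sensitivity x and noise variance σ² reduces the prediction variance yᵀCy by (yᵀCx)²/(xᵀCx + σ²). The snippet below ranks synthetic candidate observations by this reduction; it is a schematic analogue of a PREDUNC-style analysis, not the PEST implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6                                    # number of model parameters
C = np.diag(rng.uniform(0.5, 2.0, n))    # prior parameter covariance (synthetic)
y = rng.standard_normal(n)               # sensitivity of the prediction to the parameters
sigma2 = 0.1                             # observation noise variance

def variance_reduction(x):
    """Drop in prediction variance from adding one observation with sensitivity x."""
    Cx = C @ x
    return float((y @ Cx) ** 2 / (x @ Cx + sigma2))

# Rank several candidate monitoring observations by their data worth.
candidates = rng.standard_normal((4, n))
worth = sorted(enumerate(map(variance_reduction, candidates)), key=lambda p: -p[1])

print("prior prediction variance:", float(y @ C @ y))
for i, w in worth:
    print(f"candidate {i}: variance reduction {w:.3f}")
```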

  10. KL-optimal experimental design for discriminating between two growth models applied to a beef farm.

    PubMed

    Campos-Barreiro, Santiago; López-Fidalgo, Jesús

    2016-02-01

    The body mass growth of organisms is usually represented in terms of what are known as ontogenetic growth models, which represent the dependence of body mass on time. The paper is concerned with the problem of finding an optimal experimental design for discriminating between two competing mass growth models applied to a beef farm. T-optimality was first introduced for discrimination between models, but in this paper KL-optimality, based on the Kullback-Leibler distance, is used to deal with correlated observations since, in this case, observations on a particular animal are not independent.

  11. Manual of phosphoric acid fuel cell power plant optimization model and computer program

    NASA Technical Reports Server (NTRS)

    Lu, C. Y.; Alkasab, K. A.

    1984-01-01

    An optimized cost and performance model for a phosphoric acid fuel cell (PAFC) power plant system was derived and developed into a modular FORTRAN computer code. Cost, energy, mass, and electrochemical analyses were combined to develop a mathematical model for optimizing the steam-to-methane ratio in the reformer, the hydrogen utilization in the PAFC, and the number of plates per stack. The nonlinear programming code COMPUTE was used to solve this model, in which the method of mixed penalty function combined with the Hooke and Jeeves pattern search was chosen to evaluate this specific optimization problem.

  12. Global stability and optimal control of an SIRS epidemic model on heterogeneous networks

    NASA Astrophysics Data System (ADS)

    Chen, Lijuan; Sun, Jitao

    2014-09-01

    In this paper, we consider an SIRS epidemic model with vaccination on heterogeneous networks. By constructing suitable Lyapunov functions, global stability of the disease-free equilibrium and the endemic equilibrium of the model is investigated. We also present the first study of an optimally controlled SIRS epidemic model on complex networks and show that an optimal control exists for the control problem. Finally, some examples are presented to show the global stability and the efficiency of this optimal control. These results can help in adopting pragmatic treatment strategies for diseases in structured populations.

  13. A linked simulation-optimization model for solving the unknown groundwater pollution source identification problems.

    PubMed

    Ayvaz, M Tamer

    2010-09-20

    This study proposes a linked simulation-optimization model for solving unknown groundwater pollution source identification problems. In the proposed model, the MODFLOW and MT3DMS packages are used to simulate the flow and transport processes in the groundwater system. These models are then integrated with an optimization model based on the heuristic harmony search (HS) algorithm. In the proposed simulation-optimization model, the locations and release histories of the pollution sources are treated as the explicit decision variables and determined through the optimization model. An implicit solution procedure is also proposed to determine the optimum number of pollution sources, which is an advantage of this model. The performance of the proposed model is evaluated on two hypothetical examples with simple and complex aquifer geometries, measurement error conditions, and different HS solution parameter sets. The results indicate that the proposed simulation-optimization model is effective and may be used to solve inverse pollution source identification problems.
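
    For readers unfamiliar with the algorithm, a minimal harmony search loop might look like the sketch below; the misfit function is a toy stand-in for the MODFLOW/MT3DMS simulation mismatch, and the parameter values (hmcr, par, bw) are illustrative defaults rather than the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(2)

def misfit(x):
    # Toy stand-in for the simulation misfit (e.g., observed vs. MT3DMS concentrations).
    return np.sum((x - np.array([3.0, -1.5])) ** 2)

dim, hms, hmcr, par, bw, iters = 2, 10, 0.9, 0.3, 0.2, 2000
lo, hi = -10.0, 10.0
memory = rng.uniform(lo, hi, (hms, dim))            # harmony memory
scores = np.apply_along_axis(misfit, 1, memory)

for _ in range(iters):
    new = np.empty(dim)
    for d in range(dim):
        if rng.random() < hmcr:                     # draw a value from memory...
            new[d] = memory[rng.integers(hms), d]
            if rng.random() < par:                  # ...with optional pitch adjustment
                new[d] += bw * rng.uniform(-1, 1)
        else:                                       # or improvise a fresh value
            new[d] = rng.uniform(lo, hi)
    f = misfit(new)
    worst = scores.argmax()
    if f < scores[worst]:                           # replace the worst harmony
        memory[worst], scores[worst] = new, f

print("best solution:", memory[scores.argmin()])
```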

  14. The application of temporal difference learning in optimal diet models.

    PubMed

    Teichmann, Jan; Broom, Mark; Alonso, Eduardo

    2014-01-07

    An experience-based aversive learning model of foraging behaviour in uncertain environments is presented. We use Q-learning as a model-free implementation of temporal difference learning, motivated by growing evidence for neural correlates in natural reinforcement settings. The predator has the choice of including aposematic prey in its diet or foraging on alternative food sources. We show how the predator's foraging behaviour and energy intake depend on the toxicity of the defended prey and the presence of Batesian mimics. We introduce the precondition of exploration of the action space for successful aversion formation and show how it predicts foraging behaviour in the presence of conflicting rewards: behaviour that is conditionally suboptimal in a fixed environment but allows better adaptation in changing environments.
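
    A toy version of the foraging decision can be written as single-state Q-learning with two actions, which is a deliberate simplification of the paper's model: the rewards, toxicity probability, and learning parameters below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Action 0: attack the aposematic prey (high energy, toxic penalty with some chance).
# Action 1: forage on the alternative food source (safe, modest energy).
def reward(action):
    if action == 0:
        return 1.0 - (4.0 if rng.random() < 0.3 else 0.0)  # toxicity hits 30% of the time
    return 0.4

alpha, gamma, epsilon = 0.1, 0.9, 0.1
Q = np.zeros(2)                      # a single foraging state, two actions

for _ in range(5000):
    # Epsilon-greedy exploration: without it, aversion may never form.
    a = rng.integers(2) if rng.random() < epsilon else int(Q.argmax())
    r = reward(a)
    # Temporal-difference update; the next state is the same foraging state.
    Q[a] += alpha * (r + gamma * Q.max() - Q[a])

print("learned action values:", Q)
print("prefers alternative food" if Q[1] > Q[0] else "prefers aposematic prey")
```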

  15. Experimental analysis of chaotic neural network models for combinatorial optimization under a unifying framework.

    PubMed

    Kwok, T; Smith, K A

    2000-09-01

    The aim of this paper is to study both the theoretical and experimental properties of chaotic neural network (CNN) models for solving combinatorial optimization problems. Previously we have proposed a unifying framework which encompasses the three main model types, namely, Chen and Aihara's chaotic simulated annealing (CSA) with decaying self-coupling, Wang and Smith's CSA with decaying timestep, and the Hopfield network with chaotic noise. Each of these models can be represented as a special case under the framework for certain conditions. This paper combines the framework with experimental results to provide new insights into the effect of the chaotic neurodynamics of each model. By solving the N-queen problem of various sizes with computer simulations, the CNN models are compared in different parameter spaces, with optimization performance measured in terms of feasibility, efficiency, robustness and scalability. Furthermore, characteristic chaotic neurodynamics crucial to effective optimization are identified, together with a guide to choosing the corresponding model parameters.

  16. Equation-based languages – A new paradigm for building energy modeling, simulation and optimization

    SciTech Connect

    Wetter, Michael; Bonvini, Marco; Nouidui, Thierry S.

    2016-04-01

    Most state-of-the-art building simulation programs implement models in imperative programming languages. This complicates modeling and excludes the use of certain efficient methods for simulation and optimization. In contrast, equation-based modeling languages declare relations among variables, thereby allowing the use of computer algebra to enable much simpler schematic modeling and to generate efficient code for simulation and optimization. We contrast the two approaches in this paper and explain how such manipulations support new use cases. In the first of two examples, we couple models of the electrical grid, multiple buildings, HVAC systems and controllers to test a controller that adjusts building room temperatures and PV inverter reactive power to maintain power quality. In the second example, we contrast the computing time for solving an optimal control problem for a room-level model predictive controller with and without symbolic manipulations. As a result, exploiting the equation-based language led to a 2,200-times faster solution.

  17. Modeling, Analysis, and Optimization Issues for Large Space Structures.

    DTIC Science & Technology

    1983-02-01

    Anderson, M. S.; and Greene, W. H.: Continuum Models for Beam- and Plate-Like Lattice Structures. AIAA J., vol. 16, no. 12, Dec. 1978, pp. 1219-1228. ... Workshop on Applications of Adaptive Control, Yale University, New Haven, CT, August 1979. 3. Ljung, L., Söderström, T., and Gustavsson, I. ... "Produce Unstable Feedback Controllers." Int. J. Control, Vol. 29, No. 4, April 1979, pp. 607-620. 29. Green, C., and Stein, G., "Inherent Damping

  18. Bottom friction optimization for a better barotropic tide modelling

    NASA Astrophysics Data System (ADS)

    Boutet, Martial; Lathuilière, Cyril; Son Hoang, Hong; Baraille, Rémy

    2015-04-01

    At a regional scale, barotropic tides are the dominant source of variability of currents and water heights. A precise representation of these processes is essential because of their great impact on human activities (submersion risks, marine renewable energies, ...). The identified sources of error for tide modelling at a regional scale are the following: bathymetry, boundary forcing, and dissipation due to bottom friction. Nevertheless, bathymetric databases are nowadays known with good accuracy, especially over shelves, and global tide model performances are better than ever. The most promising improvement is thus the bottom friction representation. The method used to estimate bottom friction is the simultaneous perturbation stochastic approximation (SPSA), which consists in approximating the gradient from a fixed number of cost function measurements, regardless of the dimension of the vector to be estimated. Indeed, each cost function measurement is obtained by randomly perturbing every component of the parameter vector. An important feature of SPSA is its relative ease of implementation. In particular, the method does not require the development of tangent linear and adjoint versions of the circulation model. Experiments are carried out to estimate bottom friction with the HYbrid Coordinate Ocean Model (HYCOM) in barotropic mode (one isopycnal layer). The study area is the Northeastern Atlantic margin, which is characterized by strong currents and intense dissipation. Bottom friction is parameterized with a quadratic term, and the friction coefficient is computed from the water height and the bottom roughness; the latter parameter is the one to be estimated. The assimilated data are the available tide gauge observations. First, the bottom roughness is estimated taking into account bottom sediment natures and bathymetric ranges. Then, it is estimated with geographical degrees of freedom. Finally, the impact of the estimation of a mixed quadratic/linear friction
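
    The SPSA estimator described here needs only two cost evaluations per iteration regardless of the parameter dimension: ĝ = [J(θ + cΔ) − J(θ − cΔ)] / (2cΔ), with Δ a random ±1 vector. The sketch below applies it to a toy quadratic misfit standing in for the HYCOM-versus-tide-gauge cost; the gain sequences follow common SPSA defaults, not the paper's tuning.

```python
import numpy as np

rng = np.random.default_rng(4)
target = np.array([0.5, -1.0, 2.0])   # "true" roughness parameters (synthetic)

def cost(theta):
    # Stand-in for the model-vs-tide-gauge misfit; one call = one model run.
    return np.sum((theta - target) ** 2)

theta = np.zeros(3)
for k in range(1, 501):
    a_k = 0.1 / k ** 0.602                              # standard SPSA gain sequences
    c_k = 0.1 / k ** 0.101
    delta = rng.choice([-1.0, 1.0], size=theta.size)    # Rademacher perturbation
    # Two cost evaluations estimate the full gradient, whatever the dimension.
    ghat = (cost(theta + c_k * delta) - cost(theta - c_k * delta)) / (2 * c_k * delta)
    theta -= a_k * ghat

print("estimated parameters:", theta)
```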

  19. Engineering models for merging wakes in wind farm optimization applications

    NASA Astrophysics Data System (ADS)

    Machefaux, E.; Larsen, G. C.; Murcia Leon, J. P.

    2015-06-01

    The present paper deals with the validation of four different engineering wake superposition approaches against detailed CFD simulations, covering different turbine interspacings, ambient turbulence intensities, and mean wind speeds. The first engineering model is a simple linear superposition of wake deficits, as applied in e.g. Fuga. The second approach is the square root of the sum of squares, which is applied in the widely used PARK program. The third approach, which is presently used with the Dynamic Wake Meandering (DWM) model, assumes the wake-affected downstream flow field to be determined by a superposition of the ambient flow field and the dominating wake among the contributions from all upstream turbines, at any spatial position and at any time. The last approach, developed by G.C. Larsen, is a newly developed model based on a parabolic type of approach, which combines wake deficits successively. The study indicates that wake interaction depends strongly on the relative wake deficit magnitude, i.e. the deficit magnitude normalized with respect to the ambient mean wind speed, and that the dominant-wake assumption within the DWM framework is the most accurate.
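
    For scalar deficits at a single point, the first three combination rules compared in the paper reduce to a sum, a root-sum-square, and a maximum; the sketch below evaluates all three for illustrative deficit values (the fourth, parabolic approach combines deficits successively and does not reduce to a one-liner, so it is omitted).

```python
import numpy as np

# Individual wake deficits at one point, as fractions of the ambient wind speed
# (values are illustrative, not taken from the paper's CFD cases).
deficits = np.array([0.30, 0.15, 0.05])

linear   = deficits.sum()                  # linear superposition (e.g., Fuga)
rss      = np.sqrt((deficits ** 2).sum())  # square root of sum of squares (PARK)
dominant = deficits.max()                  # dominant-wake assumption (DWM framework)

print(f"linear {linear:.3f}, RSS {rss:.3f}, dominant {dominant:.3f}")
```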

  20. Electric/hybrid vehicle model for establishing optimal battery requirements

    NASA Astrophysics Data System (ADS)

    Marr, W. W.; Walsh, W. J.

    1986-04-01

    A microcomputer program (HELEN) for establishing battery requirements for a heat engine/battery hybrid vehicle is described. The program permits least-cost analyses to identify the optimum combination of battery and heat engine characteristics for different vehicle types and missions. It can also be used for cost comparisons between heat-engine vehicles, all-electric (battery) vehicles, and hybrid vehicles. Simplified models are used for the transmission, motor/generator, controller, and other vehicle components, while a rather comprehensive model is employed for the battery. The heat engine performance model is based on engineering data for a production engine. A series/parallel configuration for the hybrid vehicle system is presently simulated. Energy management in the operation of the vehicle depends on the specified mission requirements, type and size of the battery, allowable battery depth of discharge, type and size of the heat engine, and the energy management strategy used. The program is written in PL/I language and can be run interactively on an IBM PC, COMPAQ, or other compatible microcomputer.

  1. Web-Based Model Visualization Tools to Aid in Model Optimization and Uncertainty Analysis

    NASA Astrophysics Data System (ADS)

    Alder, J.; van Griensven, A.; Meixner, T.

    2003-12-01

    Individuals applying hydrologic models need quick, easy-to-use visualization tools to assess and understand model performance. We present here the Interactive Hydrologic Modeling (IHM) visualization toolbox. The IHM utilizes high-speed Internet access, the portability of the web, and the increasing power of modern computers to provide an online toolbox for quick and easy visualization of model results. This visualization interface allows for the interpretation and analysis of Monte Carlo and batch model simulation results. Often a given project will generate several thousand or even hundreds of thousands of simulations, which creates a challenge for post-simulation analysis. IHM addresses this problem by loading all of the data into a database with a web interface that can dynamically generate graphs for the user according to their needs. IHM currently supports: a global sample statistics table (e.g., sum of squares error, sum of absolute differences, etc.), a top-ten-simulations table and graphs, graphs of an individual simulation using time-step data, objective-based dotty plots, threshold-based parameter cumulative density function graphs (as used in the regional sensitivity analysis of Spear and Hornberger), and 2D error-surface graphs of the parameter space. IHM scales from the simplest bucket model to the largest set of Monte Carlo simulations with a multi-dimensional parameter and model output space. By using a web interface, IHM offers users complete flexibility: they can be anywhere in the world, using any operating system. IHM can save the time and money spent producing graphs or conducting analyses that may not be informative, or purchasing expensive proprietary software. IHM is a simple, free method of interpreting and analyzing batch model results and is suitable for novice to expert hydrologic modelers.

  2. Adaptive surrogate model based multi-objective transfer trajectory optimization between different libration points

    NASA Astrophysics Data System (ADS)

    Peng, Haijun; Wang, Wei

    2016-10-01

    An adaptive surrogate model-based multi-objective optimization strategy that combines the benefits of invariant manifolds and low-thrust control is proposed in this paper for developing a low-computational-cost transfer trajectory between libration orbits around the L1 and L2 libration points in the Sun-Earth system. A new structure for the multi-objective transfer trajectory optimization model is established that divides the transfer trajectory into several segments and assigns the dominant role, invariant manifolds or low-thrust control, to different segments. To reduce the computational cost of multi-objective transfer trajectory optimization, an adaptive surrogate model based on a mixed sampling strategy is proposed. Numerical simulations show that the results obtained from the adaptive surrogate-based multi-objective optimization agree with those obtained using direct multi-objective optimization methods, while the computational workload of the adaptive surrogate-based optimization is only approximately 10% of that of direct optimization. Furthermore, the efficiency of generating Pareto points with the adaptive surrogate-based multi-objective optimization is approximately 8 times that of direct multi-objective optimization. Therefore, the proposed adaptive surrogate-based multi-objective optimization provides obvious advantages over direct multi-objective optimization methods.

  3. CVXPY: A Python-Embedded Modeling Language for Convex Optimization

    PubMed Central

    Diamond, Steven; Boyd, Stephen

    2016-01-01

    CVXPY is a domain-specific language for convex optimization embedded in Python. It allows the user to express convex optimization problems in a natural syntax that follows the math, rather than in the restrictive standard form required by solvers. CVXPY makes it easy to combine convex optimization with high-level features of Python such as parallelism and object-oriented design. CVXPY is available at http://www.cvxpy.org/ under the GPL license, along with documentation and examples. PMID:27375369

  4. CVXPY: A Python-Embedded Modeling Language for Convex Optimization.

    PubMed

    Diamond, Steven; Boyd, Stephen

    2016-04-01

    CVXPY is a domain-specific language for convex optimization embedded in Python. It allows the user to express convex optimization problems in a natural syntax that follows the math, rather than in the restrictive standard form required by solvers. CVXPY makes it easy to combine convex optimization with high-level features of Python such as parallelism and object-oriented design. CVXPY is available at http://www.cvxpy.org/ under the GPL license, along with documentation and examples.
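
    Since CVXPY is publicly documented, a small example of the "natural syntax" the abstract describes is easy to give; the problem below (non-negative weights summing to one, fit by least squares) is our own illustration, not one taken from the paper.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
b = rng.standard_normal(30)

x = cp.Variable(10)
objective = cp.Minimize(cp.sum_squares(A @ x - b))   # written the way the math reads
constraints = [x >= 0, cp.sum(x) == 1]               # e.g., weights on a simplex
problem = cp.Problem(objective, constraints)
problem.solve()                                       # solver chosen by CVXPY defaults

print("optimal value:", problem.value)
print("optimal x:", x.value)
```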

  5. Path Optimization for Single and Multiple Searchers: Models and Algorithms

    DTIC Science & Technology

    2008-09-01

    In the k-th iteration of Algorithm 11, the master problem MP4(k) defined below is solved. The optimal value and optimal solution of MP4(k) are denoted z(k) and y(k), respectively. In each iteration of Algorithm 11, U cuts are generated at once. Formulation of master problem MP4(k): min z = ∑u=1..U ... Solve the master problem MP4(k), and obtain its optimal value z(k) and optimal solution y(k). If z(k) > q, then q = z(k). Step 3. Calculate fu(y(k)) and fu(y(k

  6. Combining multi-objective optimization and bayesian model averaging to calibrate forecast ensembles of soil hydraulic models

    SciTech Connect

    Vrugt, Jasper A; Wohling, Thomas

    2008-01-01

    Most studies in vadose zone hydrology use a single conceptual model for predictive inference and analysis. Focusing on the outcome of a single model is prone to statistical bias and underestimation of uncertainty. In this study, we combine multi-objective optimization and Bayesian Model Averaging (BMA) to generate forecast ensembles of soil hydraulic models. To illustrate our method, we use observed tensiometric pressure head data at three different depths in a layered vadose zone of volcanic origin in New Zealand. A set of seven different soil hydraulic models is calibrated using a multi-objective formulation with three different objective functions that each measure the mismatch between observed and predicted soil water pressure head at one specific depth. The Pareto solution space corresponding to these three objectives is estimated with AMALGAM and used to generate four different model ensembles. These ensembles are post-processed with BMA and used for predictive analysis and uncertainty estimation. Our most important conclusions for the vadose zone under consideration are: (1) the mean BMA forecast exhibits similar predictive capabilities to the best individual performing soil hydraulic model, (2) the size of the BMA uncertainty ranges increases with increasing depth and dryness in the soil profile, (3) the best performing ensemble corresponds to the compromise (or balanced) solution of the three-objective Pareto surface, and (4) the combined multi-objective optimization and BMA framework proposed in this paper is very useful for generating forecast ensembles of soil hydraulic models.
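
    The BMA post-processing step can be illustrated in a few lines: each model's Gaussian forecast is weighted by its posterior probability, and the ensemble variance combines within-model and between-model spread. All numbers below are synthetic, and the paper estimates its weights from the full calibration record rather than from a single observation as done here.

```python
import numpy as np

# Synthetic predictive means/sigmas from three calibrated soil hydraulic models
mu    = np.array([-120.0, -135.0, -128.0])   # predicted pressure head (cm)
sigma = np.array([6.0, 9.0, 5.0])
obs   = -126.0                                # one held-out observation

# Posterior model weights from Gaussian likelihoods (equal priors assumed)
lik = np.exp(-0.5 * ((obs - mu) / sigma) ** 2) / sigma
w = lik / lik.sum()

bma_mean = np.sum(w * mu)
# Total variance = within-model spread + between-model spread
bma_var = np.sum(w * (sigma ** 2 + mu ** 2)) - bma_mean ** 2

print("weights:", np.round(w, 3))
print(f"BMA forecast: {bma_mean:.1f} +/- {np.sqrt(bma_var):.1f} cm")
```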

  7. Autonomous Modelling of X-ray Spectra Using Robust Global Optimization Methods

    NASA Astrophysics Data System (ADS)

    Rogers, Adam; Safi-Harb, Samar; Fiege, Jason

    2015-08-01

    The standard approach to model fitting in X-ray astronomy is by means of local optimization methods. However, these local optimizers suffer from a number of problems, such as a tendency for the fit parameters to become trapped in local minima, and can require an involved process of detailed user intervention to guide them through the optimization process. In this work we introduce a general GUI-driven global optimization method for fitting models to X-ray data, written in MATLAB, which searches for optimal models with minimal user interaction. We directly interface with the commonly used XSPEC libraries to access the full complement of pre-existing spectral models that describe a wide range of physics appropriate for modelling astrophysical sources, including supernova remnants and compact objects. Our algorithm is powered by the Ferret genetic algorithm and Locust particle swarm optimizer from the Qubist Global Optimization Toolbox, which are robust at finding families of solutions and identifying degeneracies. This technique will be particularly instrumental for multi-parameter models and high-fidelity data. In this presentation, we provide details of the code and use our techniques to analyze X-ray data obtained from a variety of astrophysical sources.

  8. Optimality models of phage life history and parallels in disease evolution.

    PubMed

    Bull, J J

    2006-08-21

    Optimality models constitute one of the simplest approaches to understanding phenotypic evolution. Yet they have shortcomings that are not easily evaluated in most organisms. Most importantly, the genetic basis of phenotype evolution is almost never understood, and phenotypic selection experiments are rarely possible. Both limitations can be overcome with bacteriophages. However, phages have such elementary life histories that few phenotypes seem appropriate for optimality approaches. Here we develop optimality models of two phage life history traits, lysis time and host range. The lysis time models show that the optimum is less sensitive to differences in host density than suggested by earlier analytical work. Host range evolution is approached from the perspective of whether the virus should avoid particular hosts, and the results match optimal foraging theory: there is an optimal "diet" in which host types are either strictly included or excluded, depending on their infection qualities. Experimental tests of both models are feasible, and phages provide concrete illustrations of many ways that optimality models can guide understanding and explanation. Phage genetic systems already support the perspective that lysis time and host range can evolve readily and evolve without greatly affecting other traits, one of the main tenets of optimality theory. The models can be extended to more general properties of infection, such as the evolution of virulence and tissue tropism.

  9. Optimization of global model composed of radial basis functions using the term-ranking approach

    SciTech Connect

    Cai, Peng; Tao, Chao; Liu, Xiao-Jun

    2014-03-15

    A term-ranking method is put forward to optimize a global model composed of radial basis functions in order to improve the predictability of the model. The effectiveness of the proposed method is examined with numerical simulations and experimental data. Numerical simulations indicate that this method can significantly lengthen the prediction time and decrease the Bayesian information criterion of the model. The application to a real voice signal shows that the optimized global model can capture more of the predictable component in chaos-like voice data and simultaneously reduce the predictable component (the periodic pitch) in the residual signal.

  10. Using a Model to Compute the Optimal Schedule of Practice

    ERIC Educational Resources Information Center

    Pavlik, Philip I.; Anderson, John R.

    2008-01-01

    By balancing the spacing effect against the effects of recency and frequency, this paper explains how practice may be scheduled to maximize learning and retention. In an experiment, an optimized condition using an algorithm determined with this method was compared with other conditions. The optimized condition showed significant benefits with…

  11. Continuously Optimized Reliable Energy (CORE) Microgrid: Models & Tools (Fact Sheet)

    SciTech Connect

    Not Available

    2013-07-01

    This brochure describes Continuously Optimized Reliable Energy (CORE), a trademarked process NREL employs to produce conceptual microgrid designs. This systems-based process enables designs to be optimized for economic value, energy surety, and sustainability. Capabilities NREL offers in support of microgrid design are explained.

  12. Calibration of an agricultural-hydrological model (RZWQM2) using surrogate global optimization

    NASA Astrophysics Data System (ADS)

    Xi, Maolong; Lu, Dan; Gui, Dongwei; Qi, Zhiming; Zhang, Guannan

    2017-01-01

    Robust calibration of an agricultural-hydrological model is critical for simulating crop yield and water quality and making reasonable agricultural management. However, calibration of the agricultural-hydrological system models is challenging because of model complexity, the existence of strong parameter correlation, and significant computational requirements. Therefore, only a limited number of simulations can be allowed in any attempt to find a near-optimal solution within an affordable time, which greatly restricts the successful application of the model. The goal of this study is to locate the optimal solution of the Root Zone Water Quality Model (RZWQM2) given a limited simulation time, so as to improve the model simulation and help make rational and effective agricultural-hydrological decisions. To this end, we propose a computationally efficient global optimization procedure using sparse-grid based surrogates. We first used advanced sparse grid (SG) interpolation to construct a surrogate system of the actual RZWQM2, and then we calibrate the surrogate model using the global optimization algorithm, Quantum-behaved Particle Swarm Optimization (QPSO). As the surrogate model is a polynomial with fast evaluation, it can be efficiently evaluated with a sufficiently large number of times during the optimization, which facilitates the global search. We calibrate seven model parameters against five years of yield, drain flow, and NO3-N loss data from a subsurface-drained corn-soybean field in Iowa. Results indicate that an accurate surrogate model can be created for the RZWQM2 with a relatively small number of SG points (i.e., RZWQM2 runs). Compared to the conventional QPSO algorithm, our surrogate-based optimization method can achieve a smaller objective function value and better calibration performance using a fewer number of expensive RZWQM2 executions, which greatly improves computational efficiency.
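
    The economics that motivate the surrogate approach are easy to demonstrate in one dimension: a handful of expensive model runs fit a cheap approximation that can then be searched exhaustively. The sketch below substitutes a polynomial fit and grid search for the paper's sparse-grid interpolant and QPSO, so it illustrates the idea rather than the actual method.

```python
import numpy as np

def expensive_model(x):
    # Toy stand-in for a full RZWQM2 run.
    return (x - 1.3) ** 2 + 0.1 * np.sin(8 * x)

# A small budget of "expensive" runs...
x_train = np.linspace(0, 3, 7)
y_train = expensive_model(x_train)

# ...fits a cheap polynomial surrogate,
surrogate = np.poly1d(np.polyfit(x_train, y_train, 4))

# which can then be evaluated thousands of times essentially for free.
x_dense = np.linspace(0, 3, 10_000)
x_best = x_dense[surrogate(x_dense).argmin()]
print(f"surrogate optimum at x = {x_best:.3f}; "
      f"true objective there = {expensive_model(x_best):.4f}")
```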

  13. Calibration of an agricultural-hydrological model (RZWQM2) using surrogate global optimization

    DOE PAGES

    Xi, Maolong; Lu, Dan; Gui, Dongwei; ...

    2016-11-27

    Robust calibration of an agricultural-hydrological model is critical for simulating crop yield and water quality and making reasonable agricultural management. However, calibration of the agricultural-hydrological system models is challenging because of model complexity, the existence of strong parameter correlation, and significant computational requirements. Therefore, only a limited number of simulations can be allowed in any attempt to find a near-optimal solution within an affordable time, which greatly restricts the successful application of the model. The goal of this study is to locate the optimal solution of the Root Zone Water Quality Model (RZWQM2) given a limited simulation time, so as to improve the model simulation and help make rational and effective agricultural-hydrological decisions. To this end, we propose a computationally efficient global optimization procedure using sparse-grid based surrogates. We first used advanced sparse grid (SG) interpolation to construct a surrogate system of the actual RZWQM2, and then we calibrate the surrogate model using the global optimization algorithm, Quantum-behaved Particle Swarm Optimization (QPSO). As the surrogate model is a polynomial with fast evaluation, it can be efficiently evaluated with a sufficiently large number of times during the optimization, which facilitates the global search. We calibrate seven model parameters against five years of yield, drain flow, and NO3-N loss data from a subsurface-drained corn-soybean field in Iowa. Results indicate that an accurate surrogate model can be created for the RZWQM2 with a relatively small number of SG points (i.e., RZWQM2 runs). Compared to the conventional QPSO algorithm, our surrogate-based optimization method can achieve a smaller objective function value and better calibration performance using a fewer number of expensive RZWQM2 executions, which greatly improves computational efficiency.

  14. Calibration of an agricultural-hydrological model (RZWQM2) using surrogate global optimization

    SciTech Connect

    Xi, Maolong; Lu, Dan; Gui, Dongwei; Qi, Zhiming; Zhang, Guannan

    2016-11-27

    Robust calibration of an agricultural-hydrological model is critical for simulating crop yield and water quality and making reasonable agricultural management. However, calibration of the agricultural-hydrological system models is challenging because of model complexity, the existence of strong parameter correlation, and significant computational requirements. Therefore, only a limited number of simulations can be allowed in any attempt to find a near-optimal solution within an affordable time, which greatly restricts the successful application of the model. The goal of this study is to locate the optimal solution of the Root Zone Water Quality Model (RZWQM2) given a limited simulation time, so as to improve the model simulation and help make rational and effective agricultural-hydrological decisions. To this end, we propose a computationally efficient global optimization procedure using sparse-grid based surrogates. We first used advanced sparse grid (SG) interpolation to construct a surrogate system of the actual RZWQM2, and then we calibrate the surrogate model using the global optimization algorithm, Quantum-behaved Particle Swarm Optimization (QPSO). As the surrogate model is a polynomial with fast evaluation, it can be efficiently evaluated with a sufficiently large number of times during the optimization, which facilitates the global search. We calibrate seven model parameters against five years of yield, drain flow, and NO3-N loss data from a subsurface-drained corn-soybean field in Iowa. Results indicate that an accurate surrogate model can be created for the RZWQM2 with a relatively small number of SG points (i.e., RZWQM2 runs). Compared to the conventional QPSO algorithm, our surrogate-based optimization method can achieve a smaller objective function value and better calibration performance using a fewer number of expensive RZWQM2 executions, which greatly improves computational efficiency.

  15. Sheet metal forming optimization by using surrogate modeling techniques

    NASA Astrophysics Data System (ADS)

    Wang, Hu; Ye, Fan; Chen, Lei; Li, Enying

    2017-01-01

    Surrogate-assisted optimization has been widely applied in sheet metal forming design due to its efficiency. To improve design efficiency and shorten the product development cycle, it is therefore important for scholars and engineers to have insight into the performance of each surrogate-assisted optimization method and to apply them flexibly in practice. For this purpose, the state-of-the-art surrogate-assisted optimization methods are investigated. Furthermore, in view of the bottlenecks and development of surrogate-assisted optimization in sheet metal forming design, some important issues on surrogate-assisted optimization in support of sheet metal forming design are analyzed and discussed, involving the description of the sheet metal forming design problem, off-line and online sampling strategies, the space mapping algorithm, high-dimensional problems, robust design, and some challenges and potentially feasible methods. Overall, this paper provides insightful observations into the performance and potential development of these methods in sheet metal forming design.

  16. Optimal symmetric networks in terms of minimizing average shortest path length and their sub-optimal growth model

    NASA Astrophysics Data System (ADS)

    Xuan, Qi; Li, Yanjun; Wu, Tie-Jun

    2009-04-01

    Homogeneous entangled networks, characterized by small-world behaviour, large girths, and no community structure, have attracted much attention due to some of their favorable properties. However, the optimization algorithm proposed by Donetti et al. is very time-consuming and loses its efficiency when the size of the target network becomes large. In this paper, an alternative optimization algorithm is provided to obtain optimal symmetric networks by minimizing the average shortest path length. It is shown that the synchronizability of a symmetric network is enhanced as the average shortest path length of the network is shortened during optimization, which suggests that optimal symmetric networks in terms of minimizing average shortest path length will be very close to entangled networks. In order to overcome the time-consuming nature of the optimization algorithms proposed by us and by Donetti et al., a growth model is proposed to obtain large-scale sub-optimal symmetric networks. Numerical simulations show that the symmetric networks derived by our growth model have the small-world property and, in addition, share many of the other favorable properties of entangled networks, e.g., robustness against errors and attacks, very good load-balancing ability, and strong synchronizability.
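
    The objective minimized here, average shortest path length over symmetric candidate topologies, is straightforward to evaluate with the networkx library; the circulant graphs below are a convenient vertex-transitive family chosen for illustration, not the networks produced by the paper's algorithm.

```python
import networkx as nx

n = 64
# Circulant graphs are vertex-transitive, so they serve as simple symmetric candidates.
candidates = {
    "ring + chords (1, 2)":  nx.circulant_graph(n, [1, 2]),
    "ring + chords (1, 9)":  nx.circulant_graph(n, [1, 9]),
    "ring + chords (1, 27)": nx.circulant_graph(n, [1, 27]),
}

# Score each candidate by the objective minimized in the paper.
for name, G in candidates.items():
    print(f"{name}: average shortest path = "
          f"{nx.average_shortest_path_length(G):.3f}")
```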

  17. Multiscale Modeling and Process Optimization for Engineered Microstructural Complexity

    DTIC Science & Technology

    2007-10-26

    2005. 25. R. T. Brewer, D. A. Boyd, M. Y. El-Naggar, S. W. Boland, Y.-B. Park, S. M. Haile, D. G. Goodwin, and H. A. Atwater, Growth of biaxially ... Zhang and K. Bhattacharya. A computational model of ferroelectric domains. Part II: Grain boundaries and defect pinning. Acta Materialia 53: 199 ... electromechanical actuation. Acta Materialia, 51, 5941-5960, 2003. 37. R.T. Brewer, H.A. Atwater, J.R. Groves and P.N. Arendt, Reflection high-energy electron

  18. A Preliminary Ship Design Model for Cargo Throughput Optimization

    DTIC Science & Technology

    2014-06-01

    displacement weight [LT] cargo ship’s weight excluding fuel and propulsion system [LT] fuel weight of the fuel [LT] prop weight of the propulsion system ...9 where  is the total ship’s displacement, prop is the weight of the propulsion system , fuel is the weight of the fuel, and cargo is the weight...prediction using spreadsheet model,” Kobe University of Mercantile Marine, Kobe, Japan, 2003. [3] C. B. McKesson, “A parametric method for

  19. Vaccination models and optimal control strategies to dengue.

    PubMed

    Rodrigues, Helena Sofia; Monteiro, M Teresa T; Torres, Delfim F M

    2014-01-01

    As the development of a dengue vaccine is ongoing, we simulate a hypothetical vaccine as an extra protection for the population. In a first phase, the vaccination process is studied as a new compartment in the model, and different ways of distributing the vaccines are investigated: pediatric and random mass vaccination, with distinct levels of efficacy and durability. In a second step, the vaccination is treated as a control variable in the epidemiological process. In both cases, epidemic and endemic scenarios are included in order to analyze distinct outbreak realities.

  20. Refinements to an Optimized Model-Driven Bathymetry Deduction Algorithm

    DTIC Science & Technology

    2001-09-01

    ... bathymetric deduction algorithm, we used the Korteweg-deVries (KdV) equation (Korteweg and deVries 1895) as the wave model. Throughout this study, we will be ... technique is explained in an appendix of the manuscript. In the interest of brevity, we simply write the matrix equation to be solved: Δη = T Δh + µ ... (the wavelength). Bell (1999) used phase speeds calculated from X-band radar imagery and Equation (1) to infer the bathymetry, with favorable

  1. Optimal measurement model is crucial to identify distinct constructs.

    PubMed

    Gandhi, Pranav

    2010-06-01

    This is a Brief Commentary in response to the article "Steinsbekk, S., Jozefiak, T., Ødegård, R., & Wichstrøm, L. (2009). Impaired parent-reported quality of life in treatment-seeking children with obesity is mediated with high levels of psychopathology. Quality of Life Research, 18(9), 1159-1167. doi: 10.1007/s11136-009-9535-6." The commentary states that the investigation of the hypothesis that quality of life and psychopathology are two separate constructs may have been hampered by the use of a suboptimal measurement model.

  2. Modeling of Euclidean braided fiber architectures to optimize composite properties

    NASA Technical Reports Server (NTRS)

    Armstrong-Carroll, E.; Pastore, C.; Ko, F. K.

    1992-01-01

    Three-dimensional braided fiber reinforcements are a very effective toughening mechanism for composite materials. The integral yarn path inherent to this fiber architecture allows for effective multidirectional dispersion of strain energy and negates delamination problems. In this paper a geometric model of Euclidean braid fiber architectures is presented. This information is used to determine the degree of geometric isotropy in the braids. This information, when combined with candidate material properties, can be used to quickly generate an estimate of the available load-carrying capacity of Euclidean braids at any arbitrary angle.

  3. Optimal Tuning for Disturbance Suppression Mechanism for Model Predictive Control

    NASA Astrophysics Data System (ADS)

    Tange, Yoshio; Nakazawa, Chikashi

    Disturbance suppression is one of the most important performance requirements in process control. We recently proposed a new disturbance suppression mechanism applicable to model predictive control in order to enhance disturbance suppression performance for ramp-like disturbances. The proposed method utilizes the prediction error of the controlled values and generates a disturbance compensation signal through a constant-gain feedback. In this paper, we propose an improved version of the disturbance suppression mechanism by applying a low-pass filter, together with parameter tuning methods, by which we can make the mechanism more tolerant of various disturbances such as ramp, step, and other plausible ones. We also show numerical simulation results for an oil distillation tower plant.

  4. An integrated prediction and optimization model of biogas production system at a wastewater treatment facility.

    PubMed

    Akbaş, Halil; Bilgen, Bilge; Turhan, Aykut Melih

    2015-11-01

    This study proposes an integrated prediction and optimization model using multi-layer perceptron neural network and particle swarm optimization techniques. Three different objective functions are formulated. The first is the maximization of the methane percentage with a single output. The second is the maximization of biogas production with a single output. The last is the maximization of biogas quality and biogas production with two outputs. The methane percentage, carbon dioxide percentage, and the percentages of other contents are used as the biogas quality criteria. Based on the formulated models and data from a wastewater treatment facility, optimal values of the input variables and their corresponding maximum output values are found for each model. It is expected that the application of the integrated prediction and optimization models will increase biogas production and biogas quality, and contribute to the quantity of electricity produced at the wastewater treatment facility.

  5. Ant colony optimization algorithm for continuous domains based on position distribution model of ant colony foraging.

    PubMed

    Liu, Liqiang; Dai, Yuntao; Gao, Jinyu

    2014-01-01

    Ant colony optimization for continuous domains is a major research direction in ant colony optimization algorithms. In this paper, we propose a position distribution model of ant colony foraging, derived from an analysis of the relationship between the position distribution and the food source in the process of ant colony foraging. We design a continuous-domain optimization algorithm based on this model and give the form of the solution for the algorithm, the distribution model of the pheromone, the update rules for the ant colony positions, and the processing method for constraint conditions. The performance of the algorithm is tested on a set of unconstrained and constrained optimization test functions, and the results are compared with those of other algorithms to verify the correctness and effectiveness of the proposed algorithm.

  6. Robust Optimization Model and Algorithm for Railway Freight Center Location Problem in Uncertain Environment

    PubMed Central

    He, Shi-wei; Song, Rui; Sun, Yang; Li, Hao-dong

    2014-01-01

    The railway freight center location problem is an important issue in railway freight transport programming. This paper focuses on the railway freight center location problem in an uncertain environment. Seeing that the expected value model ignores the negative influence of disadvantageous scenarios, a robust optimization model is proposed. The robust optimization model takes the expected cost and the deviation value across scenarios as the objective. A cloud adaptive clonal selection algorithm (C-ACSA) is presented; it combines an adaptive clonal selection algorithm with the Cloud Model, which can improve the convergence rate. The coding design and the procedure of the algorithm are given. The results of the example demonstrate that the model and algorithm are effective. Compared with the expected value cases, the number of disadvantageous scenarios in the robust model is reduced from 163 to 21, which proves that the result of the robust model is more reliable. PMID:25435867

  7. Robust optimization model and algorithm for railway freight center location problem in uncertain environment.

    PubMed

    Liu, Xing-Cai; He, Shi-Wei; Song, Rui; Sun, Yang; Li, Hao-Dong

    2014-01-01

    The railway freight center location problem is an important issue in railway freight transport programming. This paper focuses on the railway freight center location problem in an uncertain environment. Seeing that the expected value model ignores the negative influence of disadvantageous scenarios, a robust optimization model is proposed. The robust optimization model takes the expected cost and the deviation value across scenarios as the objective. A cloud adaptive clonal selection algorithm (C-ACSA) is presented; it combines an adaptive clonal selection algorithm with the Cloud Model, which can improve the convergence rate. The coding design and the procedure of the algorithm are given. The results of the example demonstrate that the model and algorithm are effective. Compared with the expected value cases, the number of disadvantageous scenarios in the robust model is reduced from 163 to 21, which proves that the result of the robust model is more reliable.
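
    The robust objective described in these two records, expected cost plus a deviation term over scenarios, can be evaluated directly once scenario costs are available; the sketch below uses synthetic costs and an illustrative deviation weight, not the paper's C-ACSA machinery.

```python
import numpy as np

rng = np.random.default_rng(5)
# Synthetic transport costs: 3 candidate freight-center sites x 200 demand scenarios
costs = rng.gamma(shape=4.0, scale=25.0, size=(3, 200)) \
        + np.array([[0.0], [15.0], [-5.0]])

lam = 1.0                                   # weight on scenario deviation (robustness knob)
expected = costs.mean(axis=1)
deviation = costs.std(axis=1)
robust_objective = expected + lam * deviation   # penalize spread over bad scenarios

print("expected cost per site:", np.round(expected, 1))
print("robust objective per site:", np.round(robust_objective, 1))
print("robust choice: site", int(robust_objective.argmin()))
```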

  8. Optimization of large-scale heterogeneous system-of-systems models.

    SciTech Connect

    Parekh, Ojas; Watson, Jean-Paul; Phillips, Cynthia Ann; Siirola, John; Swiler, Laura Painton; Hough, Patricia Diane; Lee, Herbert K. H.; Hart, William Eugene; Gray, Genetha Anne; Woodruff, David L.

    2012-01-01

    Decision makers increasingly rely on large-scale computational models to simulate and analyze complex man-made systems. For example, computational models of national infrastructures are being used to inform government policy, assess economic and national security risks, evaluate infrastructure interdependencies, and plan for the growth and evolution of infrastructure capabilities. A major challenge for decision makers is the analysis of national-scale models that are composed of interacting systems: effective integration of system models is difficult, there are many parameters to analyze in these systems, and fundamental modeling uncertainties complicate analysis. This project is developing optimization methods to effectively represent and analyze large-scale heterogeneous system of systems (HSoS) models, which have emerged as a promising approach for describing such complex man-made systems. These optimization methods enable decision makers to predict future system behavior, manage system risk, assess tradeoffs between system criteria, and identify critical modeling uncertainties.

  9. Performance Optimizing Multi-Objective Adaptive Control with Time-Varying Model Reference Modification

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.; Hashemi, Kelley E.; Yucelen, Tansel; Arabi, Ehsan

    2017-01-01

    This paper presents a new adaptive control approach that involves a performance optimization objective. The problem is cast as a multi-objective optimal control. The control synthesis involves the design of a performance optimizing controller from a subset of control inputs. The effect of the performance optimizing controller is to introduce an uncertainty into the system that can degrade tracking of the reference model. An adaptive controller from the remaining control inputs is designed to reduce the effect of the uncertainty while maintaining a notion of performance optimization in the adaptive control system.

  10. Derivative Free Optimization of Complex Systems with the Use of Statistical Machine Learning Models

    DTIC Science & Technology

    2015-09-12

    ... derivative-free setting. We extensively tested our software on a complex problem of protein alignment. The ridge regression models did not produce a noticeable ... (AFRL-AFOSR-VA-TR-2015-0278; final performance report covering 15-08-2011 to 14-08-2014.)

  11. On unified modeling, theory, and method for solving multi-scale global optimization problems

    NASA Astrophysics Data System (ADS)

    Gao, David Yang

    2016-10-01

    A unified model is proposed for general optimization problems in multi-scale complex systems. Based on this model and necessary assumptions in physics, the canonical duality theory is presented in a precise way to include traditional duality theories and popular methods as special applications. Two conjectures on NP-hardness are proposed, which should play important roles for correctly understanding and efficiently solving challenging real-world problems. Applications are illustrated for both nonconvex continuous optimization and mixed integer nonlinear programming.

  12. Bayesian Models of Cognition Revisited: Setting Optimality Aside and Letting Data Drive Psychological Theory.

    PubMed

    Tauber, Sean; Navarro, Daniel J; Perfors, Amy; Steyvers, Mark

    2017-03-30

    Recent debates in the psychological literature have raised questions about the assumptions that underpin Bayesian models of cognition and what inferences they license about human cognition. In this paper we revisit this topic, arguing that there are 2 qualitatively different ways in which a Bayesian model could be constructed. The most common approach uses a Bayesian model as a normative standard upon which to license a claim about optimality. In the alternative approach, a descriptive Bayesian model need not correspond to any claim that the underlying cognition is optimal or rational, and is used solely as a tool for instantiating a substantive psychological theory. We present 3 case studies in which these 2 perspectives lead to different computational models and license different conclusions about human cognition. We demonstrate how the descriptive Bayesian approach can be used to answer different sorts of questions than the optimal approach, especially when combined with principled tools for model evaluation and model selection. More generally we argue for the importance of making a clear distinction between the 2 perspectives. Considerable confusion results when descriptive models and optimal models are conflated, and if Bayesians are to avoid contributing to this confusion it is important to avoid making normative claims when none are intended.

  13. Locating monitoring wells in groundwater systems using embedded optimization and simulation models.

    PubMed

    Bashi-Azghadi, Seyyed Nasser; Kerachian, Reza

    2010-04-15

    In this paper, a new methodology is proposed for optimally locating monitoring wells in groundwater systems in order to identify an unknown pollution source using monitoring data. The methodology is comprised of two different single and multi-objective optimization models, a Monte Carlo analysis, MODFLOW, MT3D groundwater quantity and quality simulation models and a Probabilistic Support Vector Machine (PSVM). The single-objective optimization model, which uses the results of the Monte Carlo analysis and maximizes the reliability of contamination detection, provides the initial location of monitoring wells. The objective functions of the multi-objective optimization model are minimizing the monitoring cost, i.e. the number of monitoring wells, maximizing the reliability of contamination detection and maximizing the probability of detecting an unknown pollution source. The PSVMs are calibrated and verified using the results of the single-objective optimization model and the Monte Carlo analysis. Then, the PSVMs are linked with the multi-objective optimization model, which maximizes both the reliability of contamination detection and probability of detecting an unknown pollution source. To evaluate the efficiency and applicability of the proposed methodology, it is applied to Tehran Refinery in Iran.

  14. Optimization Control of the Color-Coating Production Process for Model Uncertainty.

    PubMed

    He, Dakuo; Wang, Zhengsong; Yang, Le; Mao, Zhizhong

    2016-01-01

    Optimized control of the color-coating production process (CCPP) aims at reducing production costs and improving economic efficiency while meeting quality requirements. However, because optimization control of the CCPP is hampered by model uncertainty, a strategy that considers model uncertainty is proposed. Previous work has introduced a mechanistic model of CCPP based on process analysis to simulate the actual production process and generate process data. The partial least squares method is then applied to develop predictive models of film thickness and economic efficiency. To manage the model uncertainty, the robust optimization approach is introduced to improve the feasibility of the optimized solution. Iterative learning control is then utilized to further refine the model uncertainty. The constrained film thickness is transformed into one of the tracked targets to overcome the drawback that traditional iterative learning control cannot address constraints. The goal setting of economic efficiency is updated continuously according to the film thickness setting until this reaches its desired value. Finally, fuzzy parameter adjustment is adopted to ensure that the economic efficiency and film thickness converge rapidly to their optimized values under the constraint conditions. The effectiveness of the proposed optimization control strategy is validated by simulation results.

  15. Optimized Equivalent Staggered-grid FD Method for Elastic Wave Modeling Based on Plane Wave Solutions

    NASA Astrophysics Data System (ADS)

    Yong, Peng; Huang, Jianping; Li, Zhenchun; Liao, Wenyuan; Qu, Luping; Li, Qingyang; Liu, Peijun

    2016-12-01

    In the finite difference (FD) method, numerical dispersion is the dominant factor influencing the accuracy of seismic modeling. Various optimized FD schemes for scalar wave modeling have been proposed to reduce grid dispersion, while optimized time-space domain FD schemes for elastic wave modeling have not yet been fully investigated. In this paper, an optimized FD scheme with an Equivalent Staggered Grid (ESG) for elastic modeling has been developed. We start from the constant P- and S-wave speed elastic wave equations and then deduce analytical plane wave solutions in the wavenumber domain with the eigenvalue decomposition method. Based on the elastic plane wave solutions, three new time-space domain dispersion relations of ESG elastic modeling are obtained, represented by three equations corresponding to the P-, S- and converted-wave terms in the elastic equations, respectively. Using these new relations, we can study the dispersion errors of the different spatial FD terms independently. The dispersion analysis showed that different spatial FD terms have different errors. It is therefore suggested that different FD coefficients be used to approximate the three spatial derivative terms. In addition, the relative dispersion error in the L2-norm is minimized by optimizing the FD coefficients using Newton's method. Synthetic examples demonstrate that the new optimized FD scheme has superior accuracy for elastic wave modeling compared with Taylor-series expansion and optimized space domain FD schemes.
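
    The coefficient-optimization idea is easy to illustrate on the simpler space-domain problem. The sketch below (a linear least-squares analogue, not the authors' time-space domain Newton iteration) fits staggered-grid first-derivative coefficients over a wavenumber band and compares them with the Taylor-series weights:

```python
import numpy as np

M = 4                                        # stencil half-length (8-point staggered stencil)
kh = np.linspace(1e-3, 0.8 * np.pi, 400)     # wavenumber band to optimize over

# Staggered-grid first derivative: response(kh) = 2 * sum_m c_m * sin((2m-1)*kh/2).
# We want response(kh) ~ kh over the whole band, not just near kh -> 0.
A = 2.0 * np.sin((2 * np.arange(1, M + 1) - 1)[None, :] * kh[:, None] / 2.0)
c_opt, *_ = np.linalg.lstsq(A, kh, rcond=None)

# Standard staggered-grid Taylor weights for M = 4 (exact derivatives at kh = 0 only).
c_taylor = np.array([1225 / 1024, -245 / 3072, 49 / 5120, -5 / 7168])

for name, c in [("Taylor", c_taylor), ("optimized", c_opt)]:
    err = np.max(np.abs(A @ c - kh) / kh)    # worst relative dispersion error in band
    print(f"{name:9s} max relative dispersion error: {err:.2e}")
```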

  16. Optimization Control of the Color-Coating Production Process for Model Uncertainty

    PubMed Central

    He, Dakuo; Wang, Zhengsong; Yang, Le; Mao, Zhizhong

    2016-01-01

    Optimized control of the color-coating production process (CCPP) aims at reducing production costs and improving economic efficiency while meeting quality requirements. However, because optimization control of the CCPP is hampered by model uncertainty, a strategy that considers model uncertainty is proposed. Previous work has introduced a mechanistic model of CCPP based on process analysis to simulate the actual production process and generate process data. The partial least squares method is then applied to develop predictive models of film thickness and economic efficiency. To manage the model uncertainty, the robust optimization approach is introduced to improve the feasibility of the optimized solution. Iterative learning control is then utilized to further refine the model uncertainty. The constrained film thickness is transformed into one of the tracked targets to overcome the drawback that traditional iterative learning control cannot address constraints. The goal setting of economic efficiency is updated continuously according to the film thickness setting until this reaches its desired value. Finally, fuzzy parameter adjustment is adopted to ensure that the economic efficiency and film thickness converge rapidly to their optimized values under the constraint conditions. The effectiveness of the proposed optimization control strategy is validated by simulation results. PMID:27247563

  17. Optimized equivalent staggered-grid FD method for elastic wave modelling based on plane wave solutions

    NASA Astrophysics Data System (ADS)

    Yong, Peng; Huang, Jianping; Li, Zhenchun; Liao, Wenyuan; Qu, Luping; Li, Qingyang; Liu, Peijun

    2017-02-01

    In the finite-difference (FD) method, numerical dispersion is the dominant factor influencing the accuracy of seismic modelling. Various optimized FD schemes for scalar wave modelling have been proposed to reduce grid dispersion, while optimized time-space domain FD schemes for elastic wave modelling have not yet been fully investigated. In this paper, an optimized FD scheme with an Equivalent Staggered Grid (ESG) for elastic modelling has been developed. We start from the constant P- and S-wave speed elastic wave equations and then deduce analytical plane wave solutions in the wavenumber domain with the eigenvalue decomposition method. Based on the elastic plane wave solutions, three new time-space domain dispersion relations of ESG elastic modelling are obtained, represented by three equations corresponding to the P-, S- and converted-wave terms in the elastic equations, respectively. Using these new relations, we can study the dispersion errors of the different spatial FD terms independently. The dispersion analysis showed that different spatial FD terms have different errors. It is therefore suggested that different FD coefficients be used to approximate the three spatial derivative terms. In addition, the relative dispersion error in the L2-norm is minimized by optimizing the FD coefficients using Newton's method. Synthetic examples demonstrate that the new optimized FD scheme has superior accuracy for elastic wave modelling compared with Taylor-series expansion and optimized space domain FD schemes.

  18. FinFET Doping; Material Science, Metrology, and Process Modeling Studies for Optimized Device Performance

    SciTech Connect

    Duffy, R.; Shayesteh, M.

    2011-01-07

    In this review paper, the challenges that face doping optimization in 3-dimensional (3D) thin-body silicon devices will be discussed, within the context of material science studies, metrology methodologies and process modeling insights, ultimately leading to optimized device performance. The focus will be on ion implantation as the method used to introduce the dopants into the target material.

  19. What's in a Grammar? Modeling Dominance and Optimization in Contact

    ERIC Educational Resources Information Center

    Sharma, Devyani

    2013-01-01

    Muysken's article is a timely call for us to seek deeper regularities in the bewildering diversity of language contact outcomes. His model provocatively suggests that most such outcomes can be subsumed under four speaker optimization strategies. I consider two aspects of the proposal here: the formalization in Optimality Theory (OT) and the…

  20. Systematic optimization of a detailed kinetic model using a methane ignition example

    NASA Technical Reports Server (NTRS)

    Frenklach, M.

    1984-01-01

    An approach to the systematic optimization of a large-scale dynamic model is proposed which consists of parameterizing simulation results as response surfaces. The optimization procedure is carried out using a second-order orthogonal design. The approach proposed here is demonstrated by an example involving the shock-initiated ignition of methane.
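
    A minimal sketch of the response-surface workflow, with a cheap analytic stand-in for the ignition simulation and a small factorial design (the paper uses a second-order orthogonal design on a real kinetic mechanism):

```python
import numpy as np
from scipy.optimize import minimize

# Stand-in for an expensive kinetic simulation (hypothetical; the paper's
# model is a detailed methane-ignition mechanism).
def simulate(x1, x2):
    return (x1 - 0.3) ** 2 + 2 * (x2 + 0.1) ** 2 + 0.2 * x1 * x2

# Evaluate the "simulator" on a small factorial design in [-1, 1]^2.
levels = np.array([-1.0, 0.0, 1.0])
X = np.array([(a, b) for a in levels for b in levels])
y = np.array([simulate(a, b) for a, b in X])

# Second-order response surface:
# y ~ b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
def features(x):
    x1, x2 = x[..., 0], x[..., 1]
    return np.stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2], axis=-1)

beta, *_ = np.linalg.lstsq(features(X), y, rcond=None)

# Optimize the cheap surrogate instead of the expensive simulator.
res = minimize(lambda x: features(np.asarray(x)) @ beta, x0=[0.0, 0.0])
print("surrogate optimum:", res.x)
```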

  1. Conceptual modeling to optimize the haul and transfer of municipal solid waste.

    PubMed

    Komilis, D P

    2008-11-01

    Two conceptual mixed integer linear optimization models were developed to optimize the haul and transfer of municipal solid waste (MSW) prior to landfilling. One model is based on minimizing time (h/d), whilst the second model is based on minimizing total cost (euro/d). Both models aim to calculate the optimum pathway to haul MSW from source nodes (waste production nodes, such as urban centers or municipalities) to sink nodes (landfills) via intermediate nodes (waste transfer stations). The models are applicable provided that the locations of the source, intermediate and sink nodes are fixed. The basic input data are distances among nodes, average vehicle speeds, haul cost coefficients (in euro/ton km), equipment and facilities' operating and investment cost, labor cost and tipping fees. The time based optimization model is easier to develop, since it is based on readily available data (distances among nodes). It can be used in cases in which no transfer stations are included in the system. The cost optimization model is more reliable compared to the time model provided that accurate cost data are available. The cost optimization model can be a useful tool to optimally allocate waste transfer stations in a region and can aid a community to investigate the threshold distance to a landfill above which the construction of a transfer station becomes financially beneficial. A sensitivity analysis reveals that queue times at the landfill or at the waste transfer station are key input variables. In addition, the waste transfer station ownership and the initial cost data affect the optimum path. A case study at the Municipality of Athens is used to illustrate the presented models.
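
    The fixed-charge structure of the cost model can be sketched as a tiny mixed-integer program. The instance below is wholly invented (two sources, one optional transfer station, one landfill) and assumes SciPy >= 1.9 for scipy.optimize.milp:

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Toy instance (all numbers invented). Variables: [x_s1L, x_s2L, x_s1T, x_s2T, x_TL, y],
# where y = 1 if the transfer station is operated.
w = [5.0, 5.0]                        # tons/day produced at each source
c = np.array([10, 10, 3, 3, 4, 20])   # haul costs (euro/ton) plus fixed cost on y
big_m = sum(w)

A = np.array([
    [1, 0, 1, 0,  0, 0],        # source 1 mass balance
    [0, 1, 0, 1,  0, 0],        # source 2 mass balance
    [0, 0, 1, 1, -1, 0],        # transfer station mass balance
    [0, 0, 0, 0,  1, -big_m],   # transfer flow allowed only if station is open
])
cons = LinearConstraint(A, [w[0], w[1], 0, -np.inf], [w[0], w[1], 0, 0])

res = milp(c=c, constraints=cons,
           integrality=[0, 0, 0, 0, 0, 1],           # y is binary
           bounds=Bounds(0, [big_m] * 5 + [1]))
print(res.x, "cost =", res.fun)   # here all waste is routed via the transfer station
```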

  2. Empirical Performance Model-Driven Data Layout Optimization and Library Call Selection for Tensor Contraction Expressions

    SciTech Connect

    Lu, Qingda; Gao, Xiaoyang; Krishnamoorthy, Sriram; Baumgartner, Gerald; Ramanujam, J.; Sadayappan, Ponnuswamy

    2012-03-01

    Empirical optimizers like ATLAS have been very effective in optimizing computational kernels in libraries. The best choice of parameters such as tile size and degree of loop unrolling is determined by executing different versions of the computation. In contrast, optimizing compilers use a model-driven approach to program transformation. While the model-driven approach of optimizing compilers is generally orders of magnitude faster than ATLAS-like library generators, its effectiveness can be limited by the accuracy of the performance models used. In this paper, we describe an approach where a class of computations is modeled in terms of constituent operations that are empirically measured, thereby allowing modeling of the overall execution time. The performance model with empirically determined cost components is used to perform data layout optimization together with the selection of library calls and layout transformations in the context of the Tensor Contraction Engine, a compiler for a high-level domain-specific language for expressing computational models in quantum chemistry. The effectiveness of the approach is demonstrated through experimental measurements on representative computations from quantum chemistry.

  3. Modelling and optimization of a recombinant BHK-21 cultivation process using hybrid grey-box systems.

    PubMed

    Teixeira, A; Cunha, A E; Clemente, J J; Moreira, J L; Cruz, H J; Alves, P M; Carrondo, M J T; Oliveira, R

    2005-08-22

    In this work a model-based optimization study of fed-batch BHK-21 cultures expressing the human fusion glycoprotein IgG1-IL2 was performed. It was concluded that due to the complexity of the BHK metabolism it is rather difficult to develop a kinetic model with sufficient accuracy for optimization studies. Many kinetic expressions and a large number of parameters are involved resulting in a complex identification problem. For this reason, an alternative more cost-effective methodology based on hybrid grey-box models was adopted. Several model structures combining the a priori reliable first principles knowledge with black-box models were investigated using data from batch and fed-batch experiments. It has been reported in previous studies that the BHK metabolism exhibits modulation particularities when compared to other mammalian cell lines. It was concluded that these mechanisms were effectively captured by the hybrid model, this being of crucial importance for the successful optimization of the process operation. A method was proposed to monitor the risk of hybrid model unreliability and to constraint the optimization results to acceptable risk levels. From the optimization study it was concluded that the process productivity may be considerably increased if the glutamine and glucose concentrations are maintained at low levels during the growth phase and then glutamine feeding is increased.

  4. Parameter estimation and uncertainty quantification in a biogeochemical model using optimal experimental design methods

    NASA Astrophysics Data System (ADS)

    Reimer, Joscha; Piwonski, Jaroslaw; Slawig, Thomas

    2016-04-01

    The statistical significance of any model-data comparison strongly depends on the quality of the data used and the criterion used to measure the model-to-data misfit. The statistical properties (such as mean values, variances and covariances) of the data should be taken into account by choosing a criterion such as, e.g., ordinary, weighted or generalized least squares. Moreover, the criterion can be restricted to regions or model quantities which are of special interest. This choice influences the quality of the model output (also for quantities that are not measured) and the results of a parameter estimation or optimization process. We have estimated the parameters of a three-dimensional and time-dependent marine biogeochemical model describing the phosphorus cycle in the ocean. For this purpose, we have developed a statistical model for measurements of phosphate and dissolved organic phosphorus. This statistical model includes variances and correlations varying with time and location of the measurements. We compared the obtained estimations of model output and parameters for different criteria. Another question is whether (and which) further measurements would increase the model's quality at all. Using experimental design criteria, the information content of measurements can be quantified. This may refer to the uncertainty in unknown model parameters as well as the uncertainty regarding which model is closer to reality. By (another) optimization, optimal measurement properties such as locations, time instants and quantities to be measured can be identified. We have optimized such properties for additional measurements for the parameter estimation of the marine biogeochemical model. For this purpose, we have quantified the uncertainty in the optimal model parameters and the model output itself regarding the uncertainty in the measurement data using the (Fisher) information matrix. Furthermore, we have calculated the uncertainty reduction by additional measurements depending on time
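
    The Fisher-information computation at the heart of such design criteria fits in a few lines. The sketch below uses a toy exponential-decay model and brute-force search over candidate measurement times, nothing like the authors' three-dimensional biogeochemical model:

```python
import numpy as np
from itertools import combinations

# Toy stand-in model y = a * exp(-b * t) with two unknown parameters.
a, b, sigma = 2.0, 0.5, 0.1                  # nominal parameters, noise std

def sensitivities(t):
    """Row of the Jacobian dy/d(a, b) at measurement time t."""
    return np.array([np.exp(-b * t), -a * t * np.exp(-b * t)])

candidates = np.linspace(0.1, 10.0, 25)      # candidate measurement times

def log_det_fim(times):
    S = np.array([sensitivities(t) for t in times])   # (n, 2) sensitivity matrix
    fim = S.T @ S / sigma**2                          # Fisher information matrix
    return np.linalg.slogdet(fim)[1]

# D-optimal choice of 3 measurement times by exhaustive search.
best = max(combinations(candidates, 3), key=log_det_fim)
print("D-optimal measurement times:", np.round(best, 2))
```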

  5. Comparison of Response Surface and Kriging Models for Multidisciplinary Design Optimization

    NASA Technical Reports Server (NTRS)

    Simpson, Timothy W.; Korte, John J.; Mauery, Timothy M.; Mistree, Farrokh

    1998-01-01

    In this paper, we compare and contrast the use of second-order response surface models and kriging models for approximating non-random, deterministic computer analyses. After reviewing the response surface method for constructing polynomial approximations, kriging is presented as an alternative approximation method for the design and analysis of computer experiments. Both methods are applied to the multidisciplinary design of an aerospike nozzle which consists of a computational fluid dynamics model and a finite-element model. Error analysis of the response surface and kriging models is performed along with a graphical comparison of the approximations, and four optimization problems are formulated and solved using both sets of approximation models. The second-order response surface models and kriging models (using a constant underlying global model and a Gaussian correlation function) yield comparable results.
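
    The comparison can be reproduced in miniature on a synthetic one-dimensional function (assuming scikit-learn for the kriging part; the paper's analyses are CFD and finite-element models):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Deterministic "computer analysis" stand-in (hypothetical test function).
f = lambda x: np.sin(3 * x) + 0.5 * x
X_train = np.linspace(0, 3, 8)
y_train = f(X_train)

# Second-order response surface: quadratic polynomial fit.
poly = np.polynomial.Polynomial.fit(X_train, y_train, deg=2)

# Kriging: constant global model with Gaussian (RBF) correlation.
gp = GaussianProcessRegressor(ConstantKernel() * RBF(), normalize_y=True)
gp.fit(X_train[:, None], y_train)

X_test = np.linspace(0, 3, 200)
rms_poly = np.sqrt(np.mean((poly(X_test) - f(X_test)) ** 2))
rms_krig = np.sqrt(np.mean((gp.predict(X_test[:, None]) - f(X_test)) ** 2))
print(f"RMS error  quadratic RSM: {rms_poly:.3f}   kriging: {rms_krig:.3f}")
```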

  6. MOGO: Model-Oriented Global Optimization of Petascale Applications

    SciTech Connect

    Malony, Allen D.; Shende, Sameer S.

    2012-09-14

    The MOGO project was initiated in 2008 under the DOE Program Announcement for Software Development Tools for Improved Ease-of-Use on Petascale systems (LAB 08-19). The MOGO team consisted of Oak Ridge National Lab, Argonne National Lab, and the University of Oregon. The overall goal of MOGO was to attack petascale performance analysis by developing a general framework in which empirical performance data could be efficiently and accurately compared with performance expectations at various levels of abstraction. This information could then be used to automatically identify and remediate performance problems. MOGO was based on performance models derived from application knowledge, performance experiments, and symbolic analysis. MOGO made a reasonable impact on existing DOE applications and systems. New tools and techniques were developed, which, in turn, were used on important DOE applications on DOE LCF systems to show significant performance improvements.

  7. A Markov decision model for determining optimal outpatient scheduling.

    PubMed

    Patrick, Jonathan

    2012-06-01

    Managing an efficient outpatient clinic can often be complicated by significant no-show rates and escalating appointment lead times. One method that has been proposed for avoiding the wasted capacity due to no-shows is called open or advanced access. The essence of open access is "do today's demand today". We develop a Markov Decision Process (MDP) model that demonstrates that a short booking window does significantly better than open access. We analyze a number of scenarios that explore the trade-off between patient-related measures (lead times) and physician- or system-related measures (revenue, overtime and idle time). Through simulation, we demonstrate that, over a wide variety of potential scenarios and clinics, the MDP policy does as well or better than open access in terms of minimizing costs (or maximizing profits) as well as providing more consistent throughput.
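
    The computational core of such a model is plain value iteration. A toy sketch with invented costs and demand distribution, far simpler than the paper's clinic MDP:

```python
import numpy as np

# Toy advance-booking MDP (all numbers hypothetical): the state is the
# number of patients waiting, the action is how many to see today.
max_queue, capacity = 20, 4
demand_p = [0.2, 0.5, 0.3]           # P(0, 1, 2 new requests per day)
wait_cost, overtime_cost, gamma = 1.0, 3.0, 0.95

states = np.arange(max_queue + 1)
V = np.zeros(max_queue + 1)

for _ in range(500):                 # value iteration to convergence
    V_new = np.empty_like(V)
    for s in states:
        best = np.inf
        for a in range(min(s, capacity + 2) + 1):   # allow a little overtime
            cost = wait_cost * (s - a) + overtime_cost * max(a - capacity, 0)
            exp_next = sum(p * V[min(s - a + d, max_queue)]
                           for d, p in enumerate(demand_p))
            best = min(best, cost + gamma * exp_next)
        V_new[s] = best
    V = V_new

print("value of an empty queue:", round(V[0], 2))
```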

  8. Optimization models and techniques for implementation and pricing of electricity markets

    NASA Astrophysics Data System (ADS)

    Madrigal Martinez, Marcelino

    Vertically integrated electric power systems extensively use optimization models and solution techniques to guide their optimal operation and planning. The advent of electric power systems re-structuring has created needs for new optimization tools and the revision of the ones inherited from the vertical integration era into the market environment. This thesis presents further developments on the use of optimization models and techniques for implementation and pricing of primary electricity markets. New models, solution approaches, and price setting alternatives are proposed. Three different modeling groups are studied. The first modeling group considers simplified continuous and discrete models for power pool auctions driven by central-cost minimization. The direct solution of the dual problems, and the use of a Branch-and-Bound algorithm to solve the primal, allows us to identify the effects of disequilibrium, and of different price setting alternatives, on the existence of multiple solutions. It is shown that particular pricing rules worsen the conflict of interest that arises when multiple solutions exist under disequilibrium. A price-setting alternative based on dual variables is shown to diminish such conflict. The second modeling group considers the unit commitment problem. An interior-point/cutting-plane method is proposed for the solution of the dual problem. The new method has better convergence characteristics and does not suffer from the parameter tuning drawback as previous methods do. The robustness characteristics of the interior-point/cutting-plane method, combined with a non-uniform price setting alternative, show that the conflict of interest is diminished when multiple near optimal solutions exist. The non-uniform price setting alternative is compared to a classic average pricing rule. The last modeling group concerns a new type of linear network-constrained clearing system model for daily markets for power and spinning reserve. A new model and

  9. Optimization of the selective frequency damping parameters using model reduction

    NASA Astrophysics Data System (ADS)

    Cunha, Guilherme; Passaggia, Pierre-Yves; Lazareff, Marc

    2015-09-01

    In the present work, an optimization methodology to compute the best control parameters, χ and Δ, for the selective frequency damping method is presented. The optimization does not presuppose any a priori knowledge of the flow physics or of the underlying numerical methods, and is especially suited for simulations requiring large numbers of grid elements and processors. It allows an optimal convergence rate to a steady state of the damped Navier-Stokes system to be obtained. This is achieved using the Dynamic Mode Decomposition, a snapshot-based method, to estimate the eigenvalues associated with the global unstable dynamics. Validation test cases are presented for the numerical configurations of a laminar flow past a 2D cylinder, a separated boundary layer over a shallow bump, and a 3D turbulent stratified-Poiseuille flow.

  10. Modeling, analysis and optimization of cylindrical stiffened panels for reusable launch vehicle structures

    NASA Astrophysics Data System (ADS)

    Venkataraman, Satchithanandam

    The design of reusable launch vehicles is driven by the need for minimum weight structures. Preliminary design of reusable launch vehicles requires many optimizations to select among competing structural concepts. Accurate models and analysis methods are required for such structural optimizations. Model, analysis, and optimization complexities have to be compromised to meet constraints on design cycle time and computational resources. Stiffened panels used in reusable launch vehicle tanks exhibit complex buckling failure modes. Using detailed finite element models for buckling analysis is too expensive for optimization. Many approximate models and analysis methods have been developed for design of stiffened panels. This dissertation investigates the use of approximate models and analysis methods implemented in PANDA2 software for preliminary design of stiffened panels. PANDA2 is also used for a trade study to compare weight efficiencies of stiffened panel concepts for a liquid hydrogen tank of a reusable launch vehicle. Optimum weights of stiffened panels are obtained for different materials, constructions and stiffener geometry. The study investigates the influence of modeling and analysis choices in PANDA2 on optimum designs. Complex structures usually require finite element analysis models to capture the details of their response. Design of complex structures must account for failure modes that are both global and local in nature. Often, different analysis models or computer programs are employed to calculate global and local structural response. Integration of different analysis programs is cumbersome and computationally expensive. Response surface approximation provides a global polynomial approximation that filters numerical noise present in discretized analysis models. The computational costs are transferred from optimization to development of approximate models. Using this process, the analyst can create structural response models that can be used by

  11. Using genetic algorithm to solve a new multi-period stochastic optimization model

    NASA Astrophysics Data System (ADS)

    Zhang, Xin-Li; Zhang, Ke-Cun

    2009-09-01

    This paper presents a new asset allocation model based on the CVaR risk measure and transaction costs. Institutional investors manage their strategic asset mix over time to achieve favorable returns subject to various uncertainties, policy and legal constraints, and other requirements. One may use a multi-period portfolio optimization model in order to determine an optimal asset mix. Recently, an alternative stochastic programming model with simulated paths was proposed by Hibiki [N. Hibiki, A hybrid simulation/tree multi-period stochastic programming model for optimal asset allocation, in: H. Takahashi, (Ed.) The Japanese Association of Financial Econometrics and Engineering, JAFFE Journal (2001) 89-119 (in Japanese); N. Hibiki, A hybrid simulation/tree stochastic optimization model for dynamic asset allocation, in: B. Scherer (Ed.), Asset and Liability Management Tools: A Handbook for Best Practice, Risk Books, 2003, pp. 269-294], which was called a hybrid model. However, transaction costs were not considered in that paper. In this paper, we improve Hibiki's model in the following aspects: (1) the risk measure CVaR is introduced to control the wealth-loss risk while maximizing the expected utility; (2) typical market imperfections, such as short-sale constraints and proportional transaction costs, are considered simultaneously; (3) the application of a genetic algorithm to solve the resulting model is discussed in detail. Numerical results show the suitability and feasibility of our methodology.
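
    Although the paper solves its multi-period model with a genetic algorithm, the CVaR building block itself is linear. A one-period Rockafellar-Uryasev sketch with synthetic return scenarios, assuming SciPy's linprog:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n_assets, n_scen, alpha, target = 4, 1000, 0.95, 0.02
R = rng.normal(0.03, 0.08, (n_scen, n_assets))    # simulated asset returns

# Variables: [w (n_assets), t, u (n_scen)]; minimize t + mean(u)/(1 - alpha),
# which equals CVaR_alpha of the loss -R @ w at the optimum.
c = np.r_[np.zeros(n_assets), 1.0, np.ones(n_scen) / ((1 - alpha) * n_scen)]

# u_j >= loss_j - t, i.e. -R[j] @ w - t - u_j <= 0, plus E[r . w] >= target.
row_u = np.hstack([-R, -np.ones((n_scen, 1)), -np.eye(n_scen)])
row_mu = np.r_[-R.mean(axis=0), 0.0, np.zeros(n_scen)]
A_ub = np.vstack([row_u, row_mu])
b_ub = np.r_[np.zeros(n_scen), -target]

A_eq = np.r_[np.ones(n_assets), 0.0, np.zeros(n_scen)][None, :]   # sum(w) = 1
bounds = [(0, None)] * n_assets + [(None, None)] + [(0, None)] * n_scen

res = linprog(c, A_ub, b_ub, A_eq, [1.0], bounds=bounds)
print("weights:", np.round(res.x[:n_assets], 3), " CVaR:", round(res.fun, 4))
```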

  12. Overview and application of the Model Optimization, Uncertainty, and SEnsitivity Analysis (MOUSE) toolbox

    Technology Transfer Automated Retrieval System (TEKTRAN)

    For several decades, optimization and sensitivity/uncertainty analysis of environmental models has been the subject of extensive research. Although much progress has been made and sophisticated methods developed, the growing complexity of environmental models to represent real-world systems makes it...

  13. The Model Optimization, Uncertainty, and SEnsitivity analysis (MOUSE) toolbox: overview and application

    Technology Transfer Automated Retrieval System (TEKTRAN)

    For several decades, optimization and sensitivity/uncertainty analysis of environmental models has been the subject of extensive research. Although much progress has been made and sophisticated methods developed, the growing complexity of environmental models to represent real-world systems makes it...

  14. Perceived and Implicit Ranking of Academic Journals: An Optimization Choice Model

    ERIC Educational Resources Information Center

    Xie, Frank Tian; Cai, Jane Z.; Pan, Yue

    2012-01-01

    A new system of ranking academic journals is proposed in this study, and an optimization choice model is used to analyze data collected from 346 faculty members in a business discipline. The ranking model uses the aggregation of perceived, implicit sequencing of academic journals by academicians, thereby eliminating several key shortcomings of previous…

  15. Optimal Calibration Designs for Tests of Polytomously Scored Items Described by Item Response Theory Models.

    ERIC Educational Resources Information Center

    Holman, Rebecca; Berger, Martijn P. F.

    2001-01-01

    Studied calibration designs that maximize the determinants of Fisher's information matrix on the item parameters for sets of polytomously scored items. Analyzed these items using a number of item response theory models. Results show that for the data and models used, a D-optimal calibration design for an answer or set of answers can reduce the…

  16. Can evolution provide perfectly optimal solutions for a universal model of reading?

    PubMed

    Behme, Christina

    2012-10-01

    Frost has given us good reason to question the universality of existing computational models of reading. Yet, he has not provided arguments showing that all languages share fundamental and invariant reading universals. His goal of outlining the blueprint principles for a universal model of reading is premature. Further, it is questionable whether natural evolution can provide the optimal solutions that Frost invokes.

  17. Model-Based Optimal Experimental Design for Complex Physical Systems

    DTIC Science & Technology

    2015-12-03

    …computational tools have been inadequate. Our goal has been to develop new mathematical formulations, estimation approaches, and approximation strategies…previous suboptimal approaches. Subject terms: computational mathematics; optimal experimental design; uncertainty quantification; Bayesian inference.

  18. Surrogate-based Multi-Objective Optimization and Uncertainty Quantification Methods for Large, Complex Geophysical Models

    NASA Astrophysics Data System (ADS)

    Gong, Wei; Duan, Qingyun

    2016-04-01

    The parameterization scheme has a significant influence on the simulation ability of large, complex dynamic geophysical models, such as distributed hydrological models, land surface models, and weather and climate models. With the growing knowledge of physical processes, dynamic geophysical models include more and more processes and produce more output variables. Consequently, the parameter optimization / uncertainty quantification algorithms should also be multi-objective compatible. Although such algorithms have long been available, they usually require a large number of model runs and are therefore computationally expensive for large, complex dynamic models. In this research, we have developed a surrogate-based multi-objective optimization method (MO-ASMO) and a surrogate-based Markov chain Monte Carlo method (MC-ASMO) for uncertainty quantification of these expensive dynamic models. The aim of MO-ASMO and MC-ASMO is to reduce the total number of model runs with an appropriate adaptive sampling strategy assisted by surrogate modeling. Moreover, we also developed a method that can steer the search process with the help of a prior parameterization scheme derived from the physical processes involved, so that all of the objectives can be improved simultaneously. The proposed algorithms have been evaluated with test problems and a land surface model, the Common Land Model (CoLM). The results demonstrate their effectiveness and efficiency.

  19. Discussion of skill improvement in marine ecosystem dynamic models based on parameter optimization and skill assessment

    NASA Astrophysics Data System (ADS)

    Shen, Chengcheng; Shi, Honghua; Liu, Yongzhi; Li, Fen; Ding, Dewen

    2016-07-01

    Marine ecosystem dynamic models (MEDMs) are important tools for the simulation and prediction of marine ecosystems. This article summarizes the methods and strategies used for the improvement and assessment of MEDM skill, and it attempts to establish a technical framework to inspire further ideas concerning MEDM skill improvement. The skill of MEDMs can be improved by parameter optimization (PO), which is an important step in model calibration. An efficient approach to solve the problem of PO constrained by MEDMs is the global treatment of both sensitivity analysis and PO. Model validation is an essential step following PO, which validates the efficiency of model calibration by analyzing and estimating the goodness-of-fit of the optimized model. Additionally, by focusing on the degree of impact of various factors on model skill, model uncertainty analysis can supply model users with a quantitative assessment of model confidence. Research on MEDMs is ongoing; however, improvement in model skill still lacks global treatments and its assessment is not integrated. Thus, the predictive performance of MEDMs is not strong and model uncertainties lack quantitative descriptions, limiting their application. Therefore, a large number of case studies concerning model skill should be performed to promote the development of a scientific and normative technical framework for the improvement of MEDM skill.

  20. A Stochastic Optimal Control Problem For Predation of Models And Mimics

    NASA Astrophysics Data System (ADS)

    Tsoularis, A.

    2007-11-01

    In Ecology, the term mimicry describes a situation in which one type of species, the mimic, shares common external features with another type of species, the model, with the sole purpose of confusing potential predators. In Batesian mimicry, named after Henry Walter Bates, the English naturalist, the mimics, which are palatable to predators, send signals similar to those of the model species, which are unpalatable. His theory of mimicry postulates that predators tend to avoid nauseous (in smell or taste) models and that the mimics derive some form of protection by resembling the models. This theory carries the assumption that models are more abundant than mimics, so that predators can learn to avoid them. In this work a stochastic optimal control problem for optimal predation is presented. The objective is to maximize the predator's net energetic benefit. Mimic consumption is beneficial (positive) whereas model consumption is detrimental (negative).

  1. Development of Plasma Equilibrium Response Model for Optimized Plasma Control of KSTAR tokamak

    NASA Astrophysics Data System (ADS)

    Jeon, Youngmu; Park, Jong-Kyu; Park, Young-Seok; Hwang, Y. S.

    2004-11-01

    Plasma equilibrium response models for an optimized control system design are developed for KSTAR tokamak configurations. In a simple filament model, the plasma column is assumed to be a single ring filament with rigid displacements, and constitutes circuits with external conductors (coils, passive plate, and vacuum vessel segments). The perturbed equilibrium response model, based on the CREATE-L deformable plasma response model [1], assumes that the plasma evolves through a sequence of MHD equilibria. Prediction characteristics of both models are described in terms of the open-loop characteristics of the vertical motion of the plasma, and validated by comparison with TSC (Tokamak Simulation Code) simulations. Additionally, applications of the plasma equilibrium response models to the design of optimal plasma controllers are described. [1] R. Albanese and F. Villone, Nucl. Fusion 38, 723 (1998)

  2. Model-based optimal design of polymer-coated chemical sensors.

    PubMed

    Phillips, Cynthia; Jakusch, Michael; Steiner, Hannes; Mizaikoff, Boris; Fedorov, Andrei G

    2003-03-01

    A model-based methodology for optimal design of polymer-coated chemical sensors is developed and is illustrated for the example of infrared evanescent field chemical sensors. The methodology is based on rigorous and computationally efficient modeling of combined fluid mechanics and mass transfer, including transport of multiple analytes. A simple algebraic equation for the optimal size of the sensor flow cell is developed to guide sensor design and validated by extensive CFD simulations. Based upon these calculations, optimized geometries of the sensor flow cell are proposed to further improve the response time of chemical sensors.

  3. Modeling the spread of bed bug infestation and optimal resource allocation for disinfestation.

    PubMed

    Gharouni, Ali; Wang, Lin

    2016-10-01

    A patch-structured, multigroup-like SIS epidemiological model is proposed to study the spread of the common bed bug infestation. It is shown that the model exhibits global threshold dynamics with the basic reproduction number as the threshold parameter. Costs associated with the disinfestation process are incorporated into the formulation of the optimization problems. Procedures are proposed and simulated for finding optimal resource allocation strategies to achieve the infestation-free state. Our analysis and simulations provide useful insights into how to efficiently distribute the available exterminators among the infested patches for optimal disinfestation management.
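
    The threshold behavior can be illustrated with a next-generation-matrix computation. All rates and patch sizes below are invented for a three-patch example and are not taken from the paper:

```python
import numpy as np

# Hypothetical 3-patch multigroup SIS model:
# dI_i/dt = S_i * sum_j B[i, j] * I_j - g[i] * I_i
N = np.array([100.0, 60.0, 40.0])             # patch sizes (infestable habitats)
B = np.array([[0.004, 0.001, 0.000],          # cross-patch transmission rates
              [0.001, 0.005, 0.002],
              [0.000, 0.002, 0.006]])
g = np.array([0.3, 0.3, 0.3])                 # removal rate (disinfestation effort)

def R0(g):
    """Basic reproduction number via the next-generation matrix F V^{-1}."""
    F = N[:, None] * B                        # new infections, linearized at the DFE
    V = np.diag(g)                            # removals
    return max(abs(np.linalg.eigvals(F @ np.linalg.inv(V))))

print("R0 =", round(R0(g), 3))                # > 1: infestation persists
# Crude allocation search: add effort where it lowers R0 the most.
for i in range(3):
    g2 = g.copy(); g2[i] += 0.2
    print(f"extra effort in patch {i}: R0 -> {R0(g2):.3f}")
```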

  4. Optimization of absorption placement using geometrical acoustic models and least squares.

    PubMed

    Saksela, Kai; Botts, Jonathan; Savioja, Lauri

    2015-04-01

    Given a geometrical model of a space, the problem of optimally placing absorption in a space to match a desired impulse response is in general nonlinear. This has led some to use costly optimization procedures. This letter reformulates absorption assignment as a constrained linear least-squares problem. Regularized solutions result in direct distribution of absorption in the room and can accommodate multiple frequency bands, multiple sources and receivers, and constraints on geometrical placement of absorption. The method is demonstrated using a beam tracing model, resulting in the optimal absorption placement on the walls and ceiling of a classroom.
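
    A minimal sketch of the constrained linear least-squares reformulation, with a random matrix standing in for the geometrical-acoustics (beam tracing) model and Tikhonov regularization as one possible regularizer:

```python
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(2)

# Hypothetical linearized acoustic model: each column of G maps the
# absorption coefficient of one surface patch to the sampled response;
# d is the desired response. A beam-tracing model would supply G.
n_samples, n_patches = 120, 12
G = rng.random((n_samples, n_patches))
alpha_true = rng.uniform(0.1, 0.9, n_patches)
d = G @ alpha_true

# Tikhonov regularization: append lam * I rows to keep the solution tame.
lam = 0.1
A = np.vstack([G, lam * np.eye(n_patches)])
b = np.r_[d, np.zeros(n_patches)]

# Physical constraint: absorption coefficients lie in [0, 1].
res = lsq_linear(A, b, bounds=(0.0, 1.0))
print("recovered absorption per patch:", np.round(res.x, 2))
```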

  5. Determining the optimal planting density and land expectation value -- a numerical evaluation of decision model

    SciTech Connect

    Gong, P. . Dept. of Forest Economics)

    1998-08-01

    Different decision models can be constructed and used to analyze a regeneration decision in even-aged stand management. However, the optimal decision and management outcomes determined in an analysis may depend on the decision model used in the analysis. This paper examines the proper choice of decision model for determining the optimal planting density and land expectation value (LEV) for a Scots pine (Pinus sylvestris L.) plantation in northern Sweden. First, a general adaptive decision model for determining the regeneration alternative that maximizes the LEV is presented. This model recognizes future stand state and timber price uncertainties by including multiple stand state and timber price scenarios, and assumes that the harvest decision in each future period will be made conditional on the observed stand state and timber prices. Alternative assumptions about future stand states, timber prices, and harvest decisions can be incorporated into this general decision model, resulting in several different decision models that can be used to analyze a specific regeneration problem. Next, the consequences of choosing different modeling assumptions are determined using the example Scots pine plantation problem. Numerical results show that the most important sources of uncertainty that affect the optimal planting density and LEV are variations of the optimal clearcut time due to short-term fluctuations of timber prices. It is appropriate to determine the optimal planting density and harvest policy using an adaptive decision model that recognizes uncertainty only in future timber prices. After the optimal decisions have been found, however, the LEV should be re-estimated by incorporating both future stand state and timber price uncertainties.

  6. Multi-objective optimization of gear forging process based on adaptive surrogate meta-models

    NASA Astrophysics Data System (ADS)

    Meng, Fanjuan; Labergere, Carl; Lafon, Pascal; Daniel, Laurent

    2013-05-01

    In the forging industry, net shape or near net shape forging of gears has been the subject of considerable research effort in the last few decades. In this paper, a multi-objective optimization methodology for net shape gear forging process design is discussed. The study is mainly done in four parts: building a parametric CAD geometry model, simulating the forging process, fitting surrogate meta-models and optimizing the process by using an advanced algorithm. In order to obtain meta-models that best approximate the real response, an adaptive meta-model based design strategy has been applied. This is a continuous process: first, build a preliminary version of the meta-models after the initial simulated calculations; second, improve the accuracy and update the meta-models by adding some new representative samplings. By using this iterative strategy, the number of initial sample points for real numerical simulations is greatly decreased and the time for the forged gear design is significantly shortened. Finally, an optimal design for an industrial application of a 27-tooth gear forging process is introduced, which includes three optimization variables and two objective functions. A 3D FE numerical simulation model is used to realize the process and an advanced thermo-elasto-visco-plastic constitutive equation is considered to represent the material behavior. The meta-model applied for this example is kriging and the optimization algorithm is NSGA-II. In the end, a relatively good Pareto optimal front (POF) is obtained by gradually improving the surrogate meta-models.
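
    The adaptive meta-model loop can be sketched generically: refit the surrogate, then spend the next expensive run where the surrogate is least certain. The example assumes scikit-learn's Gaussian-process regressor as the kriging meta-model and a cheap one-dimensional stand-in for the FE forging simulation:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

# Cheap stand-in for an expensive FE forging objective (hypothetical).
f = lambda x: np.sin(5 * x) * (1 - x) + x ** 2

X = np.linspace(0, 1, 4).tolist()      # small initial design
y = [f(x) for x in X]
grid = np.linspace(0, 1, 201)

# Adaptive strategy: refit the meta-model, then add the sample where the
# surrogate is most uncertain, so expensive runs go where they help most.
for _ in range(6):
    gp = GaussianProcessRegressor(normalize_y=True).fit(np.c_[X], y)
    _, std = gp.predict(np.c_[grid], return_std=True)
    x_new = grid[np.argmax(std)]
    X.append(float(x_new)); y.append(f(x_new))

gp = GaussianProcessRegressor(normalize_y=True).fit(np.c_[X], y)
mean = gp.predict(np.c_[grid])
print(f"{len(X)} expensive runs; surrogate minimum near x = {grid[np.argmin(mean)]:.3f}")
```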

  7. Optimal parameter and uncertainty estimation of a land surface model: Sensitivity to parameter ranges and model complexities

    NASA Astrophysics Data System (ADS)

    Xia, Youlong; Yang, Zong-Liang; Stoffa, Paul L.; Sen, Mrinal K.

    2005-01-01

    Most previous land-surface model calibration studies have defined global ranges for their parameters to search for optimal parameter sets. Little work has been conducted to study the impacts of realistic versus global ranges, as well as model complexities, on the calibration and uncertainty estimates. The primary purpose of this paper is to investigate these impacts by applying Bayesian Stochastic Inversion (BSI) to the Chameleon Surface Model (CHASM). The CHASM was designed to explore the general aspects of land-surface energy balance representation within a common modeling framework that can be run from a simple energy balance formulation to a complex mosaic type structure. The BSI is an uncertainty estimation technique based on Bayes' theorem, importance sampling, and very fast simulated annealing. The model forcing data and surface flux data were collected at seven sites representing a wide range of climate and vegetation conditions. For each site, four experiments were performed with simple and complex CHASM formulations as well as realistic and global parameter ranges. Twenty-eight experiments were conducted, and 50,000 parameter sets were used for each run. The results show that the use of global and realistic ranges gives similar simulations for both modes for most sites, but the global ranges tend to produce some unreasonable optimal parameter values. Comparison of simple and complex modes shows that the simple mode has more parameters with unreasonable optimal values. The choice of parameter ranges and model complexity has significant impacts on the frequency distributions of parameters, the marginal posterior probability density functions, and the estimates of uncertainty in simulated sensible and latent heat fluxes. Comparison between model complexity and parameter ranges shows that the former has more significant impacts on parameter and uncertainty estimation.

  8. Optimization of Evaporative Demand Models for Seasonal Drought Forecasting

    NASA Astrophysics Data System (ADS)

    McEvoy, D.; Huntington, J. L.; Hobbins, M.

    2015-12-01

    Providing reliable seasonal drought forecasts continues to pose a major challenge for scientists, end-users, and the water resources and agricultural communities. Precipitation (Prcp) forecasts beyond weather time scales are largely unreliable, so exploring new avenues to improve seasonal drought prediction is necessary to move towards applications and decision-making based on seasonal forecasts. A recent study has shown that evaporative demand (E0) anomaly forecasts from the Climate Forecast System Version 2 (CFSv2) are consistently more skillful than Prcp anomaly forecasts during drought events over CONUS, and E0 drought forecasts may be particularly useful during the growing season in the farming belts of the central and Midwestern CONUS. For that study, we used CFSv2 reforecasts to assess the skill of E0 and of its individual drivers (temperature, humidity, wind speed, and solar radiation), using the American Society of Civil Engineers Standardized Reference Evapotranspiration (ET0) Equation. Moderate skill was found in ET0, temperature, and humidity, with lesser skill in solar radiation, and no skill in wind. Therefore, forecasts of E0 based on models with no wind or solar radiation inputs may prove to be more skillful than the ASCE ET0. For this presentation we evaluate CFSv2 E0 reforecasts (1982-2009) from three different E0 models: (1) ASCE ET0; (2) Hargreaves and Samani (ET-HS), which is estimated from maximum and minimum temperature alone; and (3) Valiantzas (ET-V), which is a modified version of the Penman method for use when wind speed data are not available (or of poor quality) and is driven only by temperature, humidity, and solar radiation. The University of Idaho's gridded meteorological data (METDATA) were used as observations to evaluate CFSv2 and also to determine whether ET0, ET-HS, and ET-V identify similar historical drought periods. We focus specifically on CFSv2 lead times of one, two, and three months, and season-one forecasts, which are
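
    Of the three candidates, ET-HS is the simplest to state, since it needs only temperature extremes and extraterrestrial radiation. A sketch of the standard Hargreaves-Samani equation (illustrative inputs only):

```python
import numpy as np

def et0_hargreaves_samani(tmax, tmin, ra_mj):
    """Hargreaves-Samani reference evapotranspiration (mm/day).

    tmax, tmin : daily max/min air temperature (deg C)
    ra_mj      : extraterrestrial radiation (MJ m-2 day-1); dividing by
                 2.45 MJ/kg converts it to equivalent evaporation (mm/day).
    """
    tmean = (tmax + tmin) / 2.0
    return 0.0023 * (ra_mj / 2.45) * (tmean + 17.8) * np.sqrt(tmax - tmin)

# Example: a warm mid-latitude summer day (illustrative numbers only).
print(round(et0_hargreaves_samani(tmax=32.0, tmin=18.0, ra_mj=40.0), 2), "mm/day")
```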

  9. Wind Tunnel Management and Resource Optimization: A Systems Modeling Approach

    NASA Technical Reports Server (NTRS)

    Jacobs, Derya, A.; Aasen, Curtis A.

    2000-01-01

    Time, money, and personnel are becoming increasingly scarce resources within government agencies due to a reduction in funding and the desire to demonstrate responsible economic efficiency. The ability of an organization to plan and schedule resources effectively can provide the necessary leverage to improve productivity, provide continuous support to all projects, and ensure flexibility in a rapidly changing environment. Without adequate internal controls the organization is forced to rely on external support, waste precious resources, and risk an inefficient response to change. Management systems must be developed and applied that strive to maximize the utility of existing resources in order to achieve the goal of "faster, cheaper, better". An area of concern within NASA Langley Research Center was the scheduling, planning, and resource management of the Wind Tunnel Enterprise operations. Nine wind tunnels make up the Enterprise. Prior to this research, these wind tunnel groups did not employ a rigorous or standardized management planning system. In addition, each wind tunnel unit operated from a position of autonomy, with little coordination of clients, resources, or project control. For operating and planning purposes, each wind tunnel operating unit must balance inputs from a variety of sources. Although each unit is managed by individual Facility Operations groups, other stakeholders influence wind tunnel operations. These groups include, for example, the various researchers and clients who use the facility, the Facility System Engineering Division (FSED) tasked with wind tunnel repair and upgrade, the Langley Research Center (LaRC) Fabrication (FAB) group which fabricates repair parts and provides test model upkeep, the NASA and LaRC Strategic Plans, and unscheduled use of the facilities by important clients. Expanding these influences horizontally through nine wind tunnel operations and vertically along the NASA management structure greatly increases the

  10. Functional and Structural Optimality in Plant Growth: A Crop Modelling Case Study

    NASA Astrophysics Data System (ADS)

    Caldararu, S.; Purves, D. W.; Smith, M. J.

    2014-12-01

    Simple mechanistic models of vegetation processes are essential both to our understanding of plant behaviour and to our ability to predict future changes in vegetation. One concept that can take us closer to such models is that of plant optimality, the hypothesis that plants aim to achieve an optimal state. Conceptually, plant optimality can be either structural or functional optimality. A structural constraint would mean that plants aim to achieve a certain structural characteristic such as an allometric relationship or nutrient content that allows optimal function. A functional condition refers to plants achieving optimal functionality, in most cases by maximising carbon gain. Functional optimality conditions are applied on shorter time scales and lead to higher plasticity, making plants more adaptable to changes in their environment. In contrast, structural constraints are optimal given the specific environmental conditions that plants are adapted to and offer less flexibility. We exemplify these concepts using a simple model of crop growth. The model represents annual cycles of growth from sowing date to harvest, including both vegetative and reproductive growth and phenology. Structural constraints to growth are represented as an optimal C:N ratio in all plant organs, which drives allocation throughout the vegetative growing stage. Reproductive phenology - i.e. the onset of flowering and grain filling - is determined by a functional optimality condition in the form of maximising final seed mass, so that vegetative growth stops when the plant reaches maximum nitrogen or carbon uptake. We investigate the plants' response to variations in environmental conditions within these two optimality constraints and show that final yield is most affected by changes during vegetative growth which affect the structural constraint.

  11. The human operator in manual preview tracking /an experiment and its modeling via optimal control/

    NASA Technical Reports Server (NTRS)

    Tomizuka, M.; Whitney, D. E.

    1976-01-01

    A manual preview tracking experiment and its results are presented. The preview drastically improves the tracking performance compared with zero-preview tracking. Optimal discrete finite preview control is applied to determine the structure of a mathematical model of the manual preview tracking experiment. Variable parameters in the model are adjusted to values consistent with the published data in manual control. The model with the adjusted parameters is found to correlate well with the experimental results.

  12. Assessing FPAR Source and Parameter Optimization Scheme in Application of a Diagnostic Carbon Flux Model

    SciTech Connect

    Turner, D P; Ritts, W D; Wharton, S; Thomas, C; Monson, R; Black, T A

    2009-02-26

    The combination of satellite remote sensing and carbon cycle models provides an opportunity for regional to global scale monitoring of terrestrial gross primary production, ecosystem respiration, and net ecosystem production. FPAR (the fraction of photosynthetically active radiation absorbed by the plant canopy) is a critical input to diagnostic models; however, little is known about the relative effectiveness of FPAR products from different satellite sensors or about the sensitivity of flux estimates to different parameterization approaches. In this study, we used multiyear observations of carbon flux at four eddy covariance flux tower sites within the conifer biome to evaluate these factors. FPAR products from the MODIS and SeaWiFS sensors, and the effects of single-site vs. cross-site parameter optimization, were tested with the CFLUX model. The SeaWiFS FPAR product showed greater dynamic range across sites and resulted in slightly reduced flux estimation errors relative to the MODIS product when using cross-site optimization. With site-specific parameter optimization, the flux model was effective in capturing seasonal and interannual variation in the carbon fluxes at these sites. The cross-site prediction errors were lower when using parameters from a cross-site optimization compared to parameter sets from optimization at single sites. These results support the practice of multisite optimization within a biome for parameterization of diagnostic carbon flux models.

  13. Model-based optimal design of experiments - semidefinite and nonlinear programming formulations.

    PubMed

    Duarte, Belmiro P M; Wong, Weng Kee; Oliveira, Nuno M C

    2016-02-15

    We use mathematical programming tools, such as Semidefinite Programming (SDP) and Nonlinear Programming (NLP)-based formulations to find optimal designs for models used in chemistry and chemical engineering. In particular, we employ local design-based setups in linear models and a Bayesian setup in nonlinear models to find optimal designs. In the latter case, Gaussian Quadrature Formulas (GQFs) are used to evaluate the optimality criterion averaged over the prior distribution for the model parameters. Mathematical programming techniques are then applied to solve the optimization problems. Because such methods require the design space be discretized, we also evaluate the impact of the discretization scheme on the generated design. We demonstrate the techniques for finding D-, A- and E-optimal designs using design problems in biochemical engineering and show the method can also be directly applied to tackle additional issues, such as heteroscedasticity in the model. Our results show that the NLP formulation produces highly efficient D-optimal designs but is computationally less efficient than that required for the SDP formulation. The efficiencies of the generated designs from the two methods are generally very close and so we recommend the SDP formulation in practice.
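
    The convex core of D-optimal design on a discretized design space can be written down directly. The sketch below uses CVXPY as one possible front end (the authors work with SDP and NLP solvers directly) for a quadratic regression model on [-1, 1], where the known optimum puts weight 1/3 on each of {-1, 0, 1}:

```python
import numpy as np
import cvxpy as cp

# D-optimal design for y = b0 + b1*x + b2*x^2 on a discretized design space.
xs = np.linspace(-1, 1, 21)                      # candidate design points
F = np.stack([np.ones_like(xs), xs, xs**2], 1)   # model matrix rows f(x)

w = cp.Variable(len(xs), nonneg=True)            # design weights
M = sum(w[i] * np.outer(F[i], F[i]) for i in range(len(xs)))  # information matrix
prob = cp.Problem(cp.Maximize(cp.log_det(M)), [cp.sum(w) == 1])
prob.solve()

support = xs[w.value > 1e-4]
print("support points:", np.round(support, 3))   # expect {-1, 0, 1}, weight 1/3 each
```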

  14. Model-based optimal design of experiments - semidefinite and nonlinear programming formulations

    PubMed Central

    Duarte, Belmiro P.M.; Wong, Weng Kee; Oliveira, Nuno M.C.

    2015-01-01

    We use mathematical programming tools, such as Semidefinite Programming (SDP) and Nonlinear Programming (NLP)-based formulations to find optimal designs for models used in chemistry and chemical engineering. In particular, we employ local design-based setups in linear models and a Bayesian setup in nonlinear models to find optimal designs. In the latter case, Gaussian Quadrature Formulas (GQFs) are used to evaluate the optimality criterion averaged over the prior distribution for the model parameters. Mathematical programming techniques are then applied to solve the optimization problems. Because such methods require the design space be discretized, we also evaluate the impact of the discretization scheme on the generated design. We demonstrate the techniques for finding D–, A– and E–optimal designs using design problems in biochemical engineering and show the method can also be directly applied to tackle additional issues, such as heteroscedasticity in the model. Our results show that the NLP formulation produces highly efficient D–optimal designs but is computationally less efficient than that required for the SDP formulation. The efficiencies of the generated designs from the two methods are generally very close and so we recommend the SDP formulation in practice. PMID:26949279

  15. Modeling and cavity optimization of an external cavity semiconductor laser

    NASA Astrophysics Data System (ADS)

    Feies, Valentin I.; Montrosset, Ivo

    2004-09-01

    Semiconductor external cavity lasers (ECLs) have a wide range of applications in the field of DWDM and measurement systems. One of their most important features is continuous tuning without mode hopping over a wide wavelength range. In this paper we present a modelling approach for an ECL in the Littman-Metcalf configuration, carried out to optimise: 1) the laser diode position inside the cavity, in order to maximize the range of continuous wavelength tuning without mode hopping and without cavity-length adjustment; and 2) the choice of the detuning of the operating wavelength with respect to the Bragg condition, in order to minimize the four-wave mixing (FWM) effects and the effect of a non-perfect antireflection coating (ARC). A realistic example has been analyzed, and we therefore considered the wavelength dependence of the modal gain, linewidth enhancement factor and grating selectivity, as well as the modal refractive index change with carrier injection, operating wavelength and temperature. The implemented numerical tools also allow us to obtain specifications for the grating selectivity and the ARC design.

  16. Applying optimal model selection in principal stratification for causal inference.

    PubMed

    Odondi, Lang'o; McNamee, Roseanne

    2013-05-20

    Noncompliance with treatment allocation is a key source of complication for causal inference. Efficacy estimation is likely to be compounded by the presence of noncompliance in both treatment arms of clinical trials, where the intention-to-treat estimate provides a biased estimator for the true causal estimate even under the homogeneous treatment effects assumption. The principal stratification method has been developed to address such posttreatment complications. The present work extends a principal stratification method that adjusts for noncompliance in trials with two treatment arms by developing model selection for covariates predicting compliance with treatment in each arm. We apply the method to analyse data from the Esprit study, which was conducted to ascertain whether unopposed oestrogen (hormone replacement therapy) reduced the risk of further cardiac events in postmenopausal women who survive a first myocardial infarction. We adjust for noncompliance in both treatment arms under a Bayesian framework to produce causal risk ratio estimates for each principal stratum. For mild values of a sensitivity parameter and using separate predictors of compliance in each arm, principal stratification results suggested that compliance with hormone replacement therapy alone would reduce the risk of death and myocardial reinfarction by about 47% and 25%, respectively, whereas compliance with either treatment would reduce the risk of death by 13% and of reinfarction by 60% among the most compliant. However, the results were sensitive to the user-defined sensitivity parameter.

  17. An optimal control model approach to the design of compensators for simulator delay

    NASA Technical Reports Server (NTRS)

    Baron, S.; Lancraft, R.; Caglayan, A.

    1982-01-01

    The effects of display delay on pilot performance and workload, and the design of filters to ameliorate these effects, were investigated. The optimal control model for pilot/vehicle analysis was used both to determine the potential delay effects and to design the compensators. The model was applied to a simple roll tracking task and to a complex hover task. The results confirm that even small delays can degrade performance and impose a workload penalty. A time-domain compensator designed by using the optimal control model directly appears capable of providing extensive compensation for these effects, even in multi-input, multi-output problems.

  18. A MILP-Based Distribution Optimal Power Flow Model for Microgrid Operation

    SciTech Connect

    Liu, Guodong; Starke, Michael R; Zhang, Xiaohu; Tomsovic, Kevin

    2016-01-01

    This paper proposes a distribution optimal power flow (D-OPF) model for the operation of microgrids. The proposed model minimizes not only the operating cost, including fuel cost, purchasing cost and demand charge, but also several performance indices, including voltage deviation, network power loss and power factor. It co-optimizes the real and reactive power from distributed generators (DGs) and batteries, considering their capacity and power factor limits. The D-OPF is formulated as a mixed-integer linear program (MILP). Numerical simulation results show the effectiveness of the proposed model.
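    A toy single-period dispatch in the spirit of this MILP can be written with the PuLP library; the costs, limits, single-bus power balance and binary charge/discharge switch below are illustrative simplifications, not the paper's full D-OPF with network flow and voltage constraints.

```python
# Toy single-period microgrid dispatch MILP. All prices and limits are assumed.
import pulp

load = 120.0  # kW demand in this period (assumed)

prob = pulp.LpProblem("toy_microgrid_dispatch", pulp.LpMinimize)
p_dg   = pulp.LpVariable("p_dg", 0, 100)         # DG real power output, kW
p_grid = pulp.LpVariable("p_grid", 0, 200)       # power purchased from grid, kW
p_dis  = pulp.LpVariable("p_dis", 0, 50)         # battery discharge, kW
p_chg  = pulp.LpVariable("p_chg", 0, 50)         # battery charge, kW
u      = pulp.LpVariable("u_dis", cat="Binary")  # 1 = discharging mode

# Objective: fuel cost + purchase cost (assumed $/kWh prices).
prob += 0.30 * p_dg + 0.15 * p_grid

# Power balance; the binary keeps charging and discharging mutually exclusive.
prob += p_dg + p_grid + p_dis == load + p_chg
prob += p_dis <= 50 * u
prob += p_chg <= 50 * (1 - u)

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({v.name: v.varValue for v in prob.variables()})
```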

  19. Discrete-time dynamic user-optimal departure time/route choice model

    SciTech Connect

    Chen, H.K.; Hsueh, C.F.

    1998-05-01

    This paper concerns a discrete-time, link-based, dynamic user-optimal departure time/route choice model using the variational inequality approach. The model complies with a dynamic user-optimal equilibrium condition in which, for each origin-destination pair, the actual route travel times experienced by travelers, regardless of the departure time, are equal and minimal. A nested diagonalization procedure is proposed to solve the model. Numerical examples are then provided for demonstration, with detailed elaboration on multiple solutions and Braess's paradox.

  20. Algorithms of D-optimal designs for Morgan Mercer Flodin (MMF) models with three parameters

    NASA Astrophysics Data System (ADS)

    Widiharih, Tatik; Haryatmi, Sri; Gunardi, Wilandari, Yuciana

    2016-02-01

    The Morgan Mercer Flodin (MMF) model is used in many areas, including biological growth studies, animal husbandry, chemistry, finance, pharmacokinetics and pharmacodynamics. Locally D-optimal designs for MMF models with three parameters are investigated. We use the Generalized Equivalence Theorem of Kiefer and Wolfowitz to verify the D-optimality criterion. The number of roots of the standardized variance function is determined using the Tchebysheff system concept and is used to establish that the design is a minimally supported design. In these models, the designs are minimally supported designs with uniform weight on their support points, and the upper bound of the design region is a support point.

  1. Two New PRP Conjugate Gradient Algorithms for Minimization Optimization Models.

    PubMed

    Yuan, Gonglin; Duan, Xiabin; Liu, Wenjie; Wang, Xiaoliang; Cui, Zengru; Sheng, Zhou

    2015-01-01

    Two new PRP conjugate gradient algorithms are proposed in this paper based on two modified PRP conjugate gradient methods: the first algorithm is proposed for solving unconstrained optimization problems, and the second algorithm is proposed for solving nonlinear equations. The first method contains two aspects of information: the function value and the gradient value. The two methods both possess some good properties: (1) βk ≥ 0; (2) the search direction has the trust region property without the use of any line search method; and (3) the search direction has the sufficient descent property without the use of any line search method. Under some suitable conditions, we establish the global convergence of the two algorithms. We conduct numerical experiments to evaluate our algorithms. The numerical results indicate that the first algorithm is effective and competitive for solving unconstrained optimization problems and that the second algorithm is effective for solving large-scale nonlinear equations.
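    For illustration, a textbook PRP+ iteration (with β clipped at zero, matching property (1) above) combined with Armijo backtracking is sketched below; it is a generic variant for context, not the authors' exact algorithms.

```python
# Sketch of a PRP+ conjugate gradient method with Armijo backtracking.
import numpy as np

def prp_plus_cg(f, grad, x0, tol=1e-6, max_iter=2000):
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if g.dot(d) >= 0:                  # safeguard: restart with steepest descent
            d = -g
        t, fx = 1.0, f(x)                  # Armijo backtracking line search
        while f(x + t * d) > fx + 1e-4 * t * g.dot(d) and t > 1e-12:
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        beta = max(0.0, g_new.dot(g_new - g) / g.dot(g))   # PRP+: beta_k >= 0
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# Example: minimize the Rosenbrock function.
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([-2*(1 - x[0]) - 400*x[0]*(x[1] - x[0]**2),
                           200*(x[1] - x[0]**2)])
print(prp_plus_cg(f, grad, np.array([-1.2, 1.0])))
```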

  2. Optimized electrocaloric effect by field reversal: Analytical model

    NASA Astrophysics Data System (ADS)

    Ma, Yang-Bin; Novak, Nikola; Albe, Karsten; Xu, Bai-Xiang

    2016-11-01

    Applying a negative field to a positively poled ferroelectric sample can enhance electrocaloric cooling and is a promising method for optimizing the electrocaloric cycle. Experimental measurements show that the maximal cooling is obtained not when the electric field is removed, but when it is reversed to a value corresponding to the shoulder of the P-E loop. This phenomenon cannot be explained if a constant total entropy is assumed under adiabatic conditions. Thus, a direct analysis of entropy changes based on work loss is proposed in this work, which takes the entropy contribution of the irreversible process into account. The optimal reversed field determined by this approach agrees with the experimental observations. This study signifies the importance of considering irreversible processes in electrocaloric cycles.

  3. Two New PRP Conjugate Gradient Algorithms for Minimization Optimization Models

    PubMed Central

    Yuan, Gonglin; Duan, Xiabin; Liu, Wenjie; Wang, Xiaoliang; Cui, Zengru; Sheng, Zhou

    2015-01-01

    Two new PRP conjugate gradient algorithms are proposed in this paper based on two modified PRP conjugate gradient methods: the first algorithm is proposed for solving unconstrained optimization problems, and the second algorithm is proposed for solving nonlinear equations. The first method contains two aspects of information: the function value and the gradient value. The two methods both possess some good properties: (1) βk ≥ 0; (2) the search direction has the trust region property without the use of any line search method; and (3) the search direction has the sufficient descent property without the use of any line search method. Under some suitable conditions, we establish the global convergence of the two algorithms. We conduct numerical experiments to evaluate our algorithms. The numerical results indicate that the first algorithm is effective and competitive for solving unconstrained optimization problems and that the second algorithm is effective for solving large-scale nonlinear equations. PMID:26502409

  4. An effective model for ergonomic optimization applied to a new automotive assembly line

    NASA Astrophysics Data System (ADS)

    Duraccio, Vincenzo; Elia, Valerio; Forcina, Antonio

    2016-06-01

    An efficient ergonomic optimization can lead to a significant improvement in production performance and a considerable reduction of costs. In the present paper a new model for ergonomic optimization is proposed. The new approach is based on the criteria defined by the National Institute for Occupational Safety and Health, adapted to Italian legislation. The proposed model provides an ergonomic optimization by analyzing the ergonomic conditions of manual work performed in correct conditions. The model includes a schematic and systematic analysis method for the operations and identifies all possible ergonomic aspects to be evaluated. The proposed approach has been applied to an automotive assembly line, where the repeatability of the operations makes optimization fundamental. The application clearly demonstrates the effectiveness of the new approach.

  5. Tri-Level Optimization Models to Defend Critical Infrastructure

    DTIC Science & Technology

    2007-09-01

    …except those mentioned above, are randomly and uniformly distributed on [0,1] and [1,2], respectively. The XPRESS solver (v. 16.10) is used within GAMS for LPs and MIPs, with an absolute and relative MIP termination criterion of 0.0 and an allowable relative gap between bounds. … After 10 hours of execution, the lower bound is still only 0.58 (and not close to proving optimality, because the global upper bound is …

  6. Convex Optimization Methods for Graphs and Statistical Modeling

    DTIC Science & Technology

    2011-06-01

    …extraneous polylogarithmic factors. In the next section we describe a new mechanism for estimating Gaussian widths, which provides near-optimal guarantees. … the so-called Quadratic Assignment Problem (QAP) [32]. Solving QAP is hard in general, because it includes as a special case the Hamiltonian cycle problem … only if the graph contains a Hamiltonian cycle. However, there are well-studied spectral and semidefinite relaxations for QAP, which we discuss next.

  7. Improving flash flood forecasting with distributed hydrological model by parameter optimization

    NASA Astrophysics Data System (ADS)

    Chen, Yangbo

    2016-04-01

    In China, a flash flood is usually regarded as a flood occurring in a small or medium-sized watershed with a drainage area of less than 200 km2, mainly induced by heavy rain and occurring where hydrological observations are lacking. Flash floods are widely observed in China and are the floods causing the most casualties nowadays in the country. Due to hydrological data scarcity, lumped hydrological models, which require large amounts of observed hydrological data to calibrate model parameters, are difficult to employ for flash flood forecasting. Physically based distributed hydrological models discretize the terrain of the whole watershed into a number of grid cells at fine resolution, assimilate different terrain data and precipitation to different cells, and derive model parameters from the terrain properties, thus having the potential to be used in flash flood forecasting and to improve flash flood prediction capability. In this study, the Liuxihe Model, a physically based distributed hydrological model proposed mainly for watershed flood forecasting, is employed to simulate flash floods in the Ganzhou area in southeast China, and models have been set up in 5 watersheds. Model parameters have been derived from terrain properties including the DEM, soil type and land use type, but the results show that the flood simulation uncertainty is high, which may be caused by parameter uncertainty, so some kind of uncertainty control is needed before the model can be used in real-time flash flood forecasting. Considering that many small and medium-sized Chinese watersheds have now set up hydrological observation networks, and that a few flood events can be collected, these data may be used for model parameter optimization. For this reason, an automatic model parameter optimization algorithm using Particle Swarm Optimization (PSO) is developed to optimize the model parameters, and it has been found that model parameters optimized with even only one observed flood event can largely reduce the flood …
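    A minimal global-best PSO loop of the kind described can be sketched as follows, with a stand-in objective; in the study the objective would measure the mismatch between simulated and observed floods.

```python
# Minimal global-best PSO for bound-constrained parameter calibration.
# The sphere objective is a placeholder for a model-error measure.
import numpy as np

def pso(objective, lb, ub, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5):
    rng = np.random.default_rng(0)
    dim = len(lb)
    x = rng.uniform(lb, ub, (n_particles, dim))      # positions (parameter sets)
    v = np.zeros_like(x)                             # velocities
    pbest, pbest_val = x.copy(), np.array([objective(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lb, ub)                   # keep parameters in bounds
        vals = np.array([objective(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

best, val = pso(lambda p: np.sum(p**2),
                lb=np.array([-5.0] * 4), ub=np.array([5.0] * 4))
print(best, val)
```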

  8. Construction of high-resolution stochastic geological models and optimal upscaling to a simplified layer-type hydrogeological model

    NASA Astrophysics Data System (ADS)

    Quental, Paulo; Almeida, José António; Simões, Manuela

    2012-04-01

    Despite disparities in spatial resolution between stochastic geological models and flow simulator models, geostatistical algorithms are used for the characterisation of groundwater systems. From the available data to grid-block hydraulic parameters, workflows basically consist of the development of a detailed geostatistical model (morphology and properties) followed by upscaling. This work aims to design and test a two-step methodology encompassing the generation of a high-resolution 3D stochastic geological model and its simplification into a low-resolution groundwater layer-type model. First, a high-resolution 3D stochastic model of rock types or hydrofacies (sets of rock types with similar hydraulic characteristics) is generated using an enhanced version of sequential indicator simulation (SIS) with corrections for local probabilities and for two- and three-point template statistics. In a second step, the high-resolution geological model provided by SIS is optimally simplified into a small set of layers according to a supervised simulated annealing (SA) optimisation procedure, and at the end equivalent hydraulic properties are upscaled. Two outcomes are provided by this methodology: (1) a regular 2D mesh of the top and bottom limits of each hydrogeological unit or layer from a conceptual model and (2) for each layer, a 2D grid-block of equivalent hydraulic parameters ready to be input into an aquifer simulator. This methodology was tested for the upper aquifer area of SPEL (Sociedade Portuguesa de Explosivos), an explosives deactivation plant in Seixal municipality, Portugal.

  9. Finding Bayesian Optimal Designs for Nonlinear Models: A Semidefinite Programming-Based Approach.

    PubMed

    Duarte, Belmiro P M; Wong, Weng Kee

    2015-08-01

    This paper uses semidefinite programming (SDP) to construct Bayesian optimal designs for nonlinear regression models. The setup here extends the formulation of the optimal design problem as an SDP problem from linear to nonlinear models. Gaussian quadrature formulas (GQF) are used to compute the expectation in the Bayesian design criterion, such as D-, A- or E-optimality. As an illustrative example, we demonstrate the approach using the power-logistic model and compare our results with those in the literature. Additionally, we investigate how the optimal design is impacted by different discretising schemes for the design space, different amounts of uncertainty in the parameter values, different choices of GQF and different prior distributions for the vector of model parameters, including normal priors with and without correlated components. Further applications to find Bayesian D-optimal designs with two regressors for a logistic model and for a two-variable generalised linear model with a gamma-distributed response are discussed, and some limitations of our approach are noted.

  10. A system-level cost-of-energy wind farm layout optimization with landowner modeling

    SciTech Connect

    Chen, Le; MacDonald, Erin

    2013-10-01

    This work applies an enhanced levelized wind farm cost model, including landowner remittance fees, to determine optimal turbine placements under three landowner participation scenarios and two land-plot shapes. Instead of assuming that a continuous piece of land is available for the wind farm construction, as in most layout optimizations, the problem formulation represents landowner participation scenarios as a binary string variable, along with the number of turbines. The cost parameters and model are a combination of models from the National Renewable Energy Laboratory (NREL), Lawrence Berkeley National Laboratory, and Windustry. The system-level cost-of-energy (COE) optimization model is also tested under two land-plot shapes: equally sized square land plots and unequal rectangular land plots. The optimal COE results are compared to actual COE data and found to be realistic. The results show that landowner remittances account for approximately 10% of farm operating costs across all cases. Irregular land-plot shapes are easily handled by the model. We find that larger land plots do not necessarily receive higher remittance fees. The model can help site developers identify the most crucial land plots for project success and the optimal positions of turbines, with realistic estimates of costs and profitability.

  11. Review of Integrated Modeling and Optimization Software for Advance Concepts Branch Models

    DTIC Science & Technology

    2013-04-01

    …ignore any feature of a model that allows for parametric and Monte Carlo runs, since this will be controlled by the analysis phase of the IMO …

  12. Finite Element Model Optimization of the FalconSAT-5 Structural Engineering Model

    DTIC Science & Technology

    2009-03-01

    A low-level sine sweep was performed to measure structural natural frequencies. Random vibration and sine burst tests were performed to validate the … (remaining recoverable fragments from the report's table of contents: FE Model Results; SEM I Vibrometer Test; SEM II Shaker Vibration Test)

  13. Modeling and optimization of laser beam percussion drilling of thin aluminum sheet

    NASA Astrophysics Data System (ADS)

    Mishra, Sanjay; Yadava, Vinod

    2013-06-01

    Modeling and optimization of machining processes using coupled methodologies has been an area of interest for manufacturing engineers in recent times. The present paper deals with the development of a prediction model for Laser Beam Percussion Drilling (LBPD) using the coupled methodology of the Finite Element Method (FEM) and an Artificial Neural Network (ANN). First, 2D axisymmetric FEM-based thermal models for LBPD are developed, incorporating the temperature-dependent thermal properties, optical properties, and phase change phenomena of aluminum. The model is validated by comparing the results obtained using the FEM model with the authors' own experimental results in terms of hole taper. Second, sufficient input and output data generated using the FEM model are used for the training and testing of the ANN model. Further, Grey Relational Analysis (GRA) coupled with Principal Component Analysis (PCA) is effectively used for the multi-objective optimization of the LBPD process using data predicted by the trained ANN model. The developed ANN model predicts that hole taper and material removal rate are highly affected by pulse width, whereas the pulse frequency plays the most significant role in determining the extent of the HAZ. The optimal process parameter setting shows a reduction of hole taper by 67.5%, an increase of material removal rate by 605%, and a reduction of the extent of the HAZ by 3.24%.

  14. Optimal harvesting of prey-predator system with interval biological parameters: a bioeconomic model.

    PubMed

    Pal, D; Mahaptra, G S; Samanta, G P

    2013-02-01

    The paper presents a study of a one-prey, one-predator harvesting model with imprecise biological parameters. Due to the lack of precise numerical information on the biological parameters, such as the prey population growth rate, the predator population decay rate and the predation coefficients, we consider the model with imprecise data in the form of intervals. Many authors have studied prey-predator harvesting models in different forms; here we consider a simple prey-predator model under impreciseness, introduce a parametric functional form of an interval, and then study the model. We identify the equilibrium points of the model and discuss their stabilities. The existence of a bionomic equilibrium of the model is discussed. We study the optimal harvest policy and obtain the solution in the interior equilibrium using Pontryagin's maximum principle. Numerical examples are presented to support the proposed model.
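    A small sketch of the interval idea, assuming a Lotka-Volterra-type harvested system: the model is simulated at the lower and upper ends of interval-valued growth and decay rates. The functional form and all numbers are illustrative, not the paper's exact system.

```python
# Simulate a harvested prey-predator system at the endpoints of interval
# parameters. r = prey growth rate, d = predator decay rate, E = harvest effort.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, r, d, a, b, E):
    prey, pred = y
    dprey = r * prey - a * prey * pred - E * prey   # harvested prey dynamics
    dpred = -d * pred + b * prey * pred             # predator dynamics
    return [dprey, dpred]

y0, t_span = [1.0, 0.5], (0.0, 50.0)
for label, (r, d) in {"lower": (1.0, 0.4), "upper": (1.2, 0.6)}.items():
    sol = solve_ivp(rhs, t_span, y0, args=(r, d, 0.5, 0.3, 0.2))
    print(label, "final prey/predator:", sol.y[:, -1])
```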

  15. a Revised Stochastic Optimal Velocity Model Considering the Velocity Gap with a Preceding Vehicle

    NASA Astrophysics Data System (ADS)

    Shigaki, Keizo; Tanimoto, Jun; Hagishima, Aya

    The stochastic optimal velocity (SOV) model, a cellular automaton model, has been widely used because of its good reproducibility of the fundamental diagram, despite its simplicity. However, it has a drawback: in the SOV model, a vehicle that is temporarily stopped takes a long time to restart. This study proposes a revised SOV model that suppresses this particular defect; the basic concept of this model is derived from the car-following model, which considers the velocity gap between a particular vehicle and the preceding vehicle. A series of simulations identifies the model parameters and clarifies that the proposed model can reproduce all three traffic phases (free, jam, and even the synchronized phase, which cannot be reproduced by the conventional SOV model).

  16. Modelling of Microalgae Culture Systems with Applications to Control and Optimization.

    PubMed

    Bernard, Olivier; Mairet, Francis; Chachuat, Benoît

    2016-01-01

    Mathematical modeling is becoming ever more important to assess the potential, guide the design, and enable the efficient operation and control of industrial-scale microalgae culture systems (MCS). The development of overall, inherently multiphysics, models involves coupling separate submodels of (i) the intrinsic biological properties, including growth, decay, and biosynthesis as well as the effect of light and temperature on these processes, and (ii) the physical properties, such as the hydrodynamics, light attenuation, and temperature in the culture medium. When considering high-density microalgae culture, in particular, the coupling between biology and physics becomes critical. This chapter reviews existing models, with a particular focus on the Droop model, which is a precursor model, and it highlights the structure common to many microalgae growth models. It summarizes the main developments and difficulties towards multiphysics models of MCS as well as applications of these models for monitoring, control, and optimization purposes.
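    For reference, the Droop model mentioned above can be written as a three-state chemostat ODE; the sketch below integrates it with SciPy under assumed parameter values.

```python
# Droop (variable internal quota) model in a chemostat. Parameters are assumed.
import numpy as np
from scipy.integrate import solve_ivp

mu_max, q0 = 2.0, 0.05     # max growth rate (1/d), minimal internal quota
rho_max, ks = 0.08, 0.1    # max uptake rate, half-saturation constant
D, s_in = 0.5, 5.0         # dilution rate (1/d), inflow substrate concentration

def droop(t, y):
    s, q, x = y                                  # substrate, quota, biomass
    rho = rho_max * s / (s + ks)                 # Michaelis-Menten uptake
    mu = mu_max * (1.0 - q0 / q)                 # Droop growth rate
    return [D * (s_in - s) - rho * x,            # substrate balance
            rho - mu * q,                        # internal quota dynamics
            (mu - D) * x]                        # biomass balance

sol = solve_ivp(droop, (0, 40), [s_in, 2 * q0, 0.1], max_step=0.1)
print("steady state (s, q, x):", sol.y[:, -1])
```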

  17. Automated Optimization of Water–Water Interaction Parameters for a Coarse-Grained Model

    PubMed Central

    2015-01-01

    We have developed an automated parameter optimization software framework (ParOpt) that implements the Nelder–Mead simplex algorithm and applied it to a coarse-grained polarizable water model. The model employs a tabulated, modified Morse potential with decoupled short- and long-range interactions incorporating four water molecules per interaction site. Polarizability is introduced by the addition of a harmonic angle term defined among three charged points within each bead. The target function for parameter optimization was based on the experimental density, surface tension, electric field permittivity, and diffusion coefficient. The model was validated by comparison of statistical quantities with experimental observation. We found very good performance of the optimization procedure and good agreement of the model with experiment. PMID:24460506
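    A minimal sketch of this kind of target-function optimization with SciPy's Nelder-Mead implementation follows; model_properties() and the target values are hypothetical placeholders for the coarse-grained simulation and its experimental references, not ParOpt itself.

```python
# Nelder-Mead fit of model parameters to weighted experimental targets.
import numpy as np
from scipy.optimize import minimize

targets = {"density": 997.0, "surface_tension": 71.97, "diffusion": 2.3e-9}
weights = {"density": 1.0, "surface_tension": 0.5, "diffusion": 2.0}

def model_properties(params):
    # Placeholder: a real implementation would run a CG-MD simulation here.
    a, b = params
    return {"density": 990.0 + 5.0 * a,
            "surface_tension": 70.0 + 2.0 * b,
            "diffusion": 2.0e-9 + 1.0e-10 * a * b}

def objective(params):
    props = model_properties(params)
    return sum(weights[k] * ((props[k] - targets[k]) / targets[k]) ** 2
               for k in targets)

res = minimize(objective, x0=np.array([1.0, 1.0]), method="Nelder-Mead")
print(res.x, res.fun)
```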

  18. Interfacing MATLAB and Python Optimizers to Black-Box Environmental Simulation Models

    NASA Astrophysics Data System (ADS)

    Matott, L. S.; Leung, K.; Tolson, B.

    2009-12-01

    A common approach for utilizing environmental models in a management or policy-analysis context is to incorporate them into a simulation-optimization framework - where an underlying process-based environmental model is linked with an optimization search algorithm. The optimization search algorithm iteratively adjusts various model inputs (i.e. parameters or design variables) in order to minimize an application-specific objective function computed on the basis of model outputs (i.e. response variables). Numerous optimization algorithms have been applied to the simulation-optimization of environmental systems and this research investigated the use of optimization libraries and toolboxes that are readily available in MATLAB and Python - two popular high-level programming languages. Inspired by model-independent calibration codes (e.g. PEST and UCODE), a small piece of interface software (known as PIGEON) was developed. PIGEON allows users to interface Python and MATLAB optimizers with arbitrary black-box environmental models without writing any additional interface code. An initial set of benchmark tests (involving more than 20 MATLAB and Python optimization algorithms) were performed to validate the interface software - results highlight the need to carefully consider such issues as numerical precision in output files and enforcement (or not) of parameter limits. Additional benchmark testing considered the problem of fitting isotherm expressions to laboratory data - with an emphasis on dual-mode expressions combining non-linear isotherms with a linear partitioning component. With respect to the selected isotherm fitting problems, derivative-free search algorithms significantly outperformed gradient-based algorithms. Attempts to improve gradient-based performance, via parameter tuning and also via several alternative multi-start approaches, were largely unsuccessful.
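    The model-independent pattern that PIGEON automates can be sketched as a small wrapper, assuming hypothetical input/output file formats and a hypothetical ./mymodel executable:

```python
# Wrap a black-box simulation executable as an objective function:
# write parameters to an input file, run the model, parse the output.
import subprocess

def blackbox_objective(params):
    with open("model.in", "w") as f:                 # hypothetical input format
        f.write("\n".join(f"p{i} {v:.12g}" for i, v in enumerate(params)))
    subprocess.run(["./mymodel", "model.in"], check=True)
    with open("model.out") as f:                     # hypothetical output format
        simulated = [float(line.split()[1]) for line in f]
    observed = [1.2, 3.4, 5.6]                       # placeholder observations
    return sum((s - o) ** 2 for s, o in zip(simulated, observed))

# A derivative-free optimizer is a natural fit for noisy black-box output, e.g.:
# from scipy.optimize import minimize
# res = minimize(blackbox_objective, x0=[0.5, 0.5], method="Nelder-Mead")
```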

  19. Multi Objective Optimization for Calibration and Efficient Uncertainty Analysis of Computationally Expensive Watershed Models

    NASA Astrophysics Data System (ADS)

    Akhtar, T.; Shoemaker, C. A.

    2011-12-01

    Assessing the sensitivity of calibration results to different calibration criteria can be done through multi-objective optimization that considers multiple calibration criteria. This analysis can be extended to uncertainty analysis by comparing the results of model simulations with parameter sets from many points along a Pareto front. In this study we employ multi-objective optimization in order to understand which parameter values should be used for the flow parameters of a SWAT (Soil and Water Assessment Tool) model designed to simulate flow into the Cannonsville Reservoir in upstate New York. The comprehensive analysis procedure encapsulates identification of suitable objectives, analysis of the trade-offs obtained through multi-objective optimization, and the impact of the trade-offs on uncertainty. Examples of multiple criteria include (a) the quality of the fit in different seasons, (b) the quality of the fit for high-flow and low-flow events, and (c) the quality of the fit for different constituents (e.g., water versus nutrients). Many distributed watershed models are computationally expensive and include a large number of parameters that are to be calibrated. Efficient optimization algorithms are hence needed to find good solutions to multi-criteria calibration problems in a feasible amount of time. We apply a new algorithm called Gap Optimized Multi-Objective Optimization using Response Surfaces (GOMORS) for efficient multi-criteria optimization of the Cannonsville SWAT watershed calibration problem. GOMORS is a stochastic optimization method which makes use of radial basis functions for approximation of the computationally expensive objectives. GOMORS performance is also compared against other multi-objective algorithms, ParEGO and NSGA-II. ParEGO is a kriging-based efficient multi-objective optimization algorithm, whereas NSGA-II is a well-known multi-objective evolutionary optimization algorithm. GOMORS is more efficient than both ParEGO and NSGA-II in providing …
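    GOMORS itself is not sketched here, but the NSGA-II baseline it is compared against can be run in a few lines with the pymoo library (imports follow recent pymoo versions); the ZDT1 test problem is a stand-in for the expensive SWAT calibration objectives.

```python
# NSGA-II on a standard bi-objective test problem via pymoo. A real calibration
# would replace the test problem with seasonal / high-flow / low-flow error
# metrics evaluated by the watershed model.
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.problems import get_problem
from pymoo.optimize import minimize

problem = get_problem("zdt1")                 # stand-in for calibration objectives
algorithm = NSGA2(pop_size=40)
res = minimize(problem, algorithm, ("n_gen", 100), seed=1, verbose=False)
print(res.F[:5])                              # sample of the Pareto front found
```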

  20. Numerical computation of the optimal vector field: Exemplified by a fishery model

    PubMed Central

    Grass, D.

    2012-01-01

    Numerous optimal control models analyzed in economics are formulated as discounted infinite-time-horizon problems, where the defining functions are nonlinear in the states as well as in the controls. As a consequence, solutions can often only be found numerically. Moreover, the long-run optimal solutions are mostly limit sets, such as equilibria or limit cycles. Using these specific solutions, a BVP approach together with a continuation technique is used to calculate the parameter-dependent dynamic structure of the optimal vector field. We use a one-dimensional optimal control model of a fishery to exemplify the numerical techniques. However, these methods are applicable to a much wider class of optimal control problems with a moderate number of state and control variables. PMID:25505805

  1. Computational wing optimization and comparisons with experiment for a semi-span wing model

    NASA Technical Reports Server (NTRS)

    Waggoner, E. G.; Haney, H. P.; Ballhaus, W. F.

    1978-01-01

    A computational wing optimization procedure was developed and verified by an experimental investigation of a semi-span variable camber wing model in the NASA Ames Research Center 14 foot transonic wind tunnel. The Bailey-Ballhaus transonic potential flow analysis and Woodward-Carmichael linear theory codes were linked to Vanderplaats constrained minimization routine to optimize model configurations at several subsonic and transonic design points. The 35 deg swept wing is characterized by multi-segmented leading and trailing edge flaps whose hinge lines are swept relative to the leading and trailing edges of the wing. By varying deflection angles of the flap segments, camber and twist distribution can be optimized for different design conditions. Results indicate that numerical optimization can be both an effective and efficient design tool. The optimized configurations had as good or better lift to drag ratios at the design points as the best designs previously tested during an extensive parametric study.

  2. Dimension reduction of decision variables for multireservoir operation: A spectral optimization model

    NASA Astrophysics Data System (ADS)

    Chen, Duan; Leon, Arturo S.; Gibson, Nathan L.; Hosseini, Parnian

    2016-01-01

    Optimizing the operation of a multireservoir system is challenging due to the high dimension of the decision variables that lead to a large and complex search space. A spectral optimization model (SOM), which transforms the decision variables from time domain to frequency domain, is proposed to reduce the dimensionality. The SOM couples a spectral dimensionality-reduction method called Karhunen-Loeve (KL) expansion within the routine of Nondominated Sorting Genetic Algorithm (NSGA-II). The KL expansion is used to represent the decision variables as a series of terms that are deterministic orthogonal functions with undetermined coefficients. The KL expansion can be truncated into fewer significant terms, and consequently, fewer coefficients by a predetermined number. During optimization, operators of the NSGA-II (e.g., crossover) are conducted only on the coefficients of the KL expansion rather than the large number of decision variables, significantly reducing the search space. The SOM is applied to the short-term operation of a 10-reservoir system in the Columbia River of the United States. Two scenarios are considered herein, the first with 140 decision variables and the second with 3360 decision variables. The hypervolume index is used to evaluate the optimization performance in terms of convergence and diversity. The evaluation of optimization performance is conducted for both conventional optimization model (i.e., NSGA-II without KL) and the SOM with different number of KL terms. The results show that the number of decision variables can be greatly reduced in the SOM to achieve a similar or better performance compared to the conventional optimization model. For the scenario with 140 decision variables, the optimal performance of the SOM model is found with six KL terms. For the scenario with 3360 decision variables, the optimal performance of the SOM model is obtained with 11 KL terms.
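    A minimal sketch of the KL truncation idea, assuming a synthetic ensemble of candidate release trajectories: the basis is computed with an SVD, and an optimizer would then search over the few retained coefficients rather than every time step.

```python
# Build a truncated KL (PCA) basis from sample trajectories and encode/decode
# a candidate schedule with a handful of coefficients. The ensemble is synthetic.
import numpy as np

rng = np.random.default_rng(1)
T, n_samples, n_terms = 140, 500, 6
t = np.linspace(0.0, 1.0, T)

# Synthetic ensemble of smooth candidate release trajectories (stand-in data).
freqs = rng.uniform(0.5, 3.0, n_samples)
phases = rng.uniform(0.0, 1.0, n_samples)
samples = np.sin(2 * np.pi * (freqs[:, None] * t + phases[:, None]))

mean = samples.mean(axis=0)
_, _, Vt = np.linalg.svd(samples - mean, full_matrices=False)
basis = Vt[:n_terms]                       # truncated KL basis (n_terms x T)

# An optimizer now searches over 6 coefficients instead of 140 time steps.
coeffs = basis @ (samples[0] - mean)       # encode one trajectory
recon = mean + basis.T @ coeffs            # decode back to the time domain
print("reconstruction error:", np.linalg.norm(recon - samples[0]))
```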

  3. Reduced-Order Model for Dynamic Optimization of Pressure Swing Adsorption

    SciTech Connect

    Agarwal, Anshul; Biegler, L.T.; Zitney, S.E.

    2007-11-01

    The last few decades have seen a considerable increase in the applications of adsorptive gas separation technologies, such as pressure swing adsorption (PSA). From an economic and environmental point of view, hydrogen separation and carbon dioxide capture from flue gas streams are the most promising applications of PSA. With extensive industrial applications, there is significant interest in an efficient modeling, simulation, and optimization strategy. However, the design and optimization of PSA processes have largely remained an experimental effort because of the complex nature of the mathematical models describing practical PSA processes. The separation processes are based on solid-gas equilibrium and operate under periodic transient conditions. Models for PSA processes are therefore multiple instances of partial differential equations (PDEs) in time and space with periodic boundary conditions that link the processing steps together, and high nonlinearities arising from non-isothermal effects. The computational effort required to solve such systems is usually quite expensive and prohibitively time-consuming. Besides this, stringent product specifications, required by many industrial processes, often lead to convergence failures in the optimizers. The solution of this coupled stiff PDE system is governed by steep concentration and temperature fronts moving with time. As a result, the optimization of such systems for either design or operation represents a significant computational challenge to current differential algebraic equation (DAE) optimization techniques and nonlinear programming algorithms. Sophisticated optimization strategies have been developed and applied to PSA systems with significant improvement in the performance of the process. However, most of these approaches have been quite time-consuming. This gives a strong motivation to develop cost-efficient and robust optimization strategies for PSA processes. Moreover, in the case of flowsheet …

  4. Development of a multi-objective optimization algorithm using surrogate models for coastal aquifer management

    NASA Astrophysics Data System (ADS)

    Kourakos, George; Mantoglou, Aristotelis

    2013-02-01

    The demand for fresh water in coastal areas and islands can be very high due to increased local needs and tourism. A multi-objective optimization methodology is developed, involving minimization of economic and environmental costs while satisfying water demand. The methodology considers desalinization of pumped water and injection of treated water into the aquifer. Variable-density aquifer models are computationally intractable when integrated into optimization algorithms. In order to alleviate this problem, a multi-objective optimization algorithm is developed combining surrogate models based on Modular Neural Networks [MOSA(MNNs)]. The surrogate models are trained adaptively during optimization based on a genetic algorithm. In the crossover step, each pair of parents generates a pool of offspring which are evaluated using the fast surrogate model. Then, the most promising offspring are evaluated using the exact numerical model. This procedure eliminates errors in the Pareto solution due to imprecise predictions of the surrogate model. The method offers important advancements compared to previous methods, such as precise evaluation of the Pareto set and alleviation of the propagation of errors due to surrogate model approximations. The method is applied to an aquifer on the Greek island of Santorini. The results show that the new MOSA(MNN) algorithm offers a significant reduction in computational time compared to previous methods (in the case study it requires only 5% of the time required by other methods). Further, the Pareto solution is better than the solutions obtained by alternative algorithms.

  5. Optimization of an electromagnetic linear actuator using a network and a finite element model

    NASA Astrophysics Data System (ADS)

    Neubert, Holger; Kamusella, Alfred; Lienig, Jens

    2011-03-01

    Model-based design optimization leads to robust solutions only if the statistical deviations of design, load and ambient parameters from nominal values are considered. We describe an optimization methodology that treats these deviations as stochastic variables, for an exemplary electromagnetic actuator used to drive a Braille printer. A combined model simulates the dynamic behavior of the actuator and its non-linear load. It consists of a dynamic network model and a stationary magnetic finite element (FE) model. The network model utilizes lookup tables of the magnetic force and the flux linkage computed by the FE model. After a sensitivity analysis using design of experiments (DoE) methods and a nominal optimization based on gradient methods, a robust design optimization is performed. Selected design variables are included in the form of their density functions. In order to reduce the computational effort, we use response surfaces instead of the combined system model in all stochastic analysis steps. Thus, Monte Carlo simulations can be applied. As a result we found an optimal system design meeting our requirements with regard to function and reliability.

  6. Optimization of a Two-Fluid Hydrodynamic Model of Churn-Turbulent Flow

    SciTech Connect

    Donna Post Guillen

    2009-07-01

    A hydrodynamic model of two-phase, churn-turbulent flows is being developed using the computational multiphase fluid dynamics (CMFD) code NPHASE-CMFD. The numerical solutions obtained by this model are compared with experimental data obtained at the TOPFLOW facility of the Institute of Safety Research at the Forschungszentrum Dresden-Rossendorf. The TOPFLOW data are a high-quality experimental database of upward, co-current air-water flows in a vertical pipe, suitable for validation of computational fluid dynamics (CFD) codes. A five-field CMFD model was developed for the continuous liquid phase and four bubble size groups using mechanistic closure models for the ensemble-averaged Navier-Stokes equations. Mechanistic models for the drag and non-drag interfacial forces are implemented to include the governing physics describing the hydrodynamic forces controlling the gas distribution. The closure models provide the functional form of the interfacial forces, with user-defined coefficients to adjust the force magnitude. An optimization strategy was devised for these coefficients using commercial design optimization software. This paper demonstrates an approach to optimizing CMFD model parameters using design optimization software. Computed radial void fraction profiles predicted by the NPHASE-CMFD code are compared to experimental data for four bubble size groups.

  7. Optimization of cascade-resilient electrical infrastructures and its validation by power flow modeling.

    PubMed

    Fang, Yiping; Pedroni, Nicola; Zio, Enrico

    2015-04-01

    Large-scale outages on real-world critical infrastructures, although infrequent, are increasingly disastrous to our society. In this article, we are primarily concerned with power transmission networks, and we consider the problem of allocation of generation to distributors by rewiring links under the objectives of maximizing network resilience to cascading failure and minimizing investment costs. The combinatorial multiobjective optimization is carried out by a nondominated sorting binary differential evolution (NSBDE) algorithm. For each generators-distributors connection pattern considered in the NSBDE search, a computationally cheap, topological model of failure cascading in a complex network (named the Motter-Lai [ML] model) is used to simulate and quantify network resilience to cascading failures initiated by targeted attacks. The results on the 400 kV French power transmission network case study show that the proposed method allows us to identify optimal patterns of generators-distributors connection that improve cascading resilience at an acceptable cost. To verify the realistic character of the results obtained by the NSBDE with the embedded ML topological model, a more realistic but also more computationally expensive model of cascading failures is adopted, based on optimal power flow (namely, the ORNL-PSerc-Alaska (OPA) model). The consistent results between the two models provide impetus for the use of topological, complex network theory models for the analysis and optimization of large infrastructures against cascading failure, with the advantages of simplicity, scalability, and low computational cost.
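    A compact sketch of the topological Motter-Lai cascade evaluator using networkx follows; the Barabási-Albert test graph, the α value, and the highest-degree attack are illustrative assumptions, and in the study an evaluator of this kind is embedded inside the NSBDE search.

```python
# Motter-Lai cascade: node capacities are (1 + alpha) times initial betweenness
# loads; removing a node redistributes load and overloaded nodes fail iteratively.
import networkx as nx

def motter_lai_surviving_fraction(G, removed_node, alpha=0.2):
    load0 = nx.betweenness_centrality(G)               # initial loads
    cap = {n: (1 + alpha) * load0[n] for n in G}       # node capacities
    H = G.copy()
    H.remove_node(removed_node)
    while True:
        load = nx.betweenness_centrality(H)            # redistributed loads
        failed = [n for n in H if load[n] > cap[n]]
        if not failed:
            return H.number_of_nodes() / G.number_of_nodes()
        H.remove_nodes_from(failed)

G = nx.barabasi_albert_graph(100, 2, seed=0)           # stand-in network
hub = max(G.degree, key=lambda kv: kv[1])[0]           # attack the largest hub
print("surviving fraction:", motter_lai_surviving_fraction(G, hub, alpha=0.2))
```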

  8. Geometry optimization method versus predictive ability in QSPR modeling for ionic liquids.

    PubMed

    Rybinska, Anna; Sosnowska, Anita; Barycki, Maciej; Puzyn, Tomasz

    2016-02-01

    Computational techniques, such as Quantitative Structure-Property Relationship (QSPR) modeling, are very useful in predicting the physicochemical properties of various chemicals. Building QSPR models requires calculating molecular descriptors and properly choosing the geometry optimization method, which should be suited to the specific structure of the tested compounds. Herein, we examine the influence of the geometry optimization method for ionic liquids (ILs) on the predictive ability of QSPR models by comparing three models. The models were developed based on the same experimental density data collected for 66 ionic liquids, but employing molecular descriptors calculated from molecular geometries optimized at three different levels of theory, namely: (1) semi-empirical (PM7), (2) ab initio (HF/6-311+G*) and (3) density functional theory (B3LYP/6-311+G*). The model in which the descriptors were calculated using the ab initio HF/6-311+G* method showed the best predictive capability ([Formula: see text] = 0.87). However, the PM7-based model has comparable values of the quality parameters ([Formula: see text] = 0.84). The obtained results indicate that semi-empirical methods (faster and less expensive in terms of CPU time) can be successfully employed for geometry optimization in QSPR studies of ionic liquids.

  9. Optimized Finite-Difference Coefficients for Hydroacoustic Modeling

    NASA Astrophysics Data System (ADS)

    Preston, L. A.

    2014-12-01

    Responsible utilization of marine renewable energy sources through the use of current energy converter (CEC) and wave energy converter (WEC) devices requires an understanding of the noise generation and propagation from these systems in the marine environment. Acoustic noise produced by rotating turbines, for example, could adversely affect marine animals and human-related marine activities if not properly understood and mitigated. We are utilizing a 3-D finite-difference acoustic simulation code developed at Sandia that can accurately propagate noise in the complex bathymetry in the near-shore to open ocean environment. As part of our efforts to improve computation efficiency in the large, high-resolution domains required in this project, we investigate the effects of using optimized finite-difference coefficients on the accuracy of the simulations. We compare accuracy and runtime of various finite-difference coefficients optimized via criteria such as maximum numerical phase speed error, maximum numerical group speed error, and L-1 and L-2 norms of weighted numerical group and phase speed errors over a given spectral bandwidth. We find that those coefficients optimized for L-1 and L-2 norms are superior in accuracy to those based on maximal error and can produce runtimes of 10% of the baseline case, which uses Taylor Series finite-difference coefficients at the Courant time step limit. We will present comparisons of the results for the various cases evaluated as well as recommendations for utilization of the cases studied. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
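    The contrast between Taylor-series and error-optimized coefficients can be illustrated with a small least-squares computation; the sketch below minimizes the L2 effective-wavenumber (phase-speed) error of a 5-point first-derivative stencil over an assumed wavenumber band. This is the flavor of criterion described above, not Sandia's actual procedure.

```python
# Compare Taylor-series and L2-optimized antisymmetric stencil weights.
# f'(x0) ~ (1/h) * sum_j c_j (f_{+j} - f_{-j}); with f = exp(ikx) and h = 1,
# the effective wavenumber is k_eff(k) = 2 * sum_j c_j sin(j k).
import numpy as np

p = 2                                      # stencil half-width (5-point stencil)
k = np.linspace(0.01, 2.0, 400)            # assumed target wavenumber band

A = 2.0 * np.sin(np.outer(k, np.arange(1, p + 1)))
c_opt, *_ = np.linalg.lstsq(A, k, rcond=None)    # minimize ||k_eff - k||_2 on band

c_taylor = np.array([2.0 / 3.0, -1.0 / 12.0])    # classical 4th-order weights

for name, c in [("Taylor", c_taylor), ("optimized", c_opt)]:
    err = np.max(np.abs(A @ c - k) / k)          # relative wavenumber error
    print(f"{name}: c = {c}, max relative error on band = {err:.3e}")
```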

  10. Multidisciplinary Design Optimization Under Uncertainty: An Information Model Approach (PREPRINT)

    DTIC Science & Technology

    2011-03-01

    …and c ∈ R, which is easily solved using the MATLAB function fmincon. The reader is cautioned not to optimize over (t, p, c). Our approach requires a … would have to be expanded. The fifteen formulas can serve as the basis for numerical simulations, an easy task using MATLAB. …

  11. Optimal Determination of Respiratory Airflow Patterns Using a Nonlinear Multicompartment Model for a Lung Mechanics System

    PubMed Central

    Li, Hancao; Haddad, Wassim M.

    2012-01-01

    We develop optimal respiratory airflow patterns using a nonlinear multicompartment model for a lung mechanics system. Specifically, we use classical calculus of variations minimization techniques to derive an optimal airflow pattern for inspiratory and expiratory breathing cycles. The physiological interpretation of the optimality criteria used involves the minimization of work of breathing and lung volume acceleration for the inspiratory phase, and the minimization of the elastic potential energy and rapid airflow rate changes for the expiratory phase. Finally, we numerically integrate the resulting nonlinear two-point boundary value problems to determine the optimal airflow patterns over the inspiratory and expiratory breathing cycles. PMID:22719793

  12. Optimal control of an influenza model with seasonal forcing and age-dependent transmission rates.

    PubMed

    Lee, Jeehyun; Kim, Jungeun; Kwon, Hee-Dae

    2013-01-21

    This study considers an optimal intervention strategy for influenza outbreaks. Variations in the SEIAR model are considered to include seasonal forcing and age structure, and control strategies include vaccination, antiviral treatment, and social distancing such as school closures. We formulate an optimal control problem by minimizing the incidence of influenza outbreaks while considering intervention costs. We examine the effects of delays in vaccine production, seasonal forcing, and age-dependent transmission rates on the optimal control and suggest some optimal strategies through numerical simulations.

  13. Optimal determination of respiratory airflow patterns using a nonlinear multicompartment model for a lung mechanics system.

    PubMed

    Li, Hancao; Haddad, Wassim M

    2012-01-01

    We develop optimal respiratory airflow patterns using a nonlinear multicompartment model for a lung mechanics system. Specifically, we use classical calculus of variations minimization techniques to derive an optimal airflow pattern for inspiratory and expiratory breathing cycles. The physiological interpretation of the optimality criteria used involves the minimization of work of breathing and lung volume acceleration for the inspiratory phase, and the minimization of the elastic potential energy and rapid airflow rate changes for the expiratory phase. Finally, we numerically integrate the resulting nonlinear two-point boundary value problems to determine the optimal airflow patterns over the inspiratory and expiratory breathing cycles.

  14. An efficient surrogate-based simulation-optimization method for calibrating a regional MODFLOW model

    NASA Astrophysics Data System (ADS)

    Chen, Mingjie; Izady, Azizallah; Abdalla, Osman A.

    2017-01-01

    The simulation-optimization method entails a large number of model simulations, which is computationally intensive or even prohibitive if the model simulation is extremely time-consuming. Statistical models have been examined as surrogates of the high-fidelity physical model during the simulation-optimization process to tackle this problem. Among them, Multivariate Adaptive Regression Splines (MARS), a non-parametric adaptive regression method, is superior in overcoming problems of high dimensionality and discontinuities in the data. Furthermore, the stability and accuracy of the MARS model can be improved by bootstrap aggregating methods, namely bagging. In this paper, the Bagging MARS (BMARS) method is integrated into a surrogate-based simulation-optimization framework to calibrate a three-dimensional MODFLOW model, which was developed to simulate groundwater flow in an arid hardrock-alluvium region in northwestern Oman. The physical MODFLOW model is surrogated by the statistical model developed using the BMARS algorithm. The surrogate model, which is fitted and validated using a training dataset generated by the physical model, can approximate solutions rapidly. An efficient Sobol' method is employed to calculate the global sensitivities of head outputs to input parameters, which are used to analyze their importance for the model outputs spatiotemporally. Only sensitive parameters are included in the calibration process to further improve the computational efficiency. The normalized root mean square error (NRMSE) between measured and simulated heads at observation wells is used as the objective function to be minimized during optimization. The reasonable history match between the simulated and observed heads demonstrates the feasibility of this highly efficient calibration framework.

  15. Optimality criteria-based topology optimization of a bi-material model for acoustic-structural coupled systems

    NASA Astrophysics Data System (ADS)

    Shang, Linyuan; Zhao, Guozhong

    2016-06-01

    This article investigates topology optimization of a bi-material model for acoustic-structural coupled systems. The design variables are the volume fractions of the inclusion material in a bi-material model constructed by the microstructure-based design domain method (MDDM). The design objective is the minimization of the sound pressure level (SPL) in an interior acoustic medium. Sensitivities of the SPL with respect to the topological design variables are derived concretely by the adjoint method. A relaxed form of optimality criteria (OC) is developed for solving the acoustic-structural coupled optimization problem to find the optimal bi-material distribution. Based on the OC and the adjoint method, a topology optimization method able to deal with the large-scale computations arising in acoustic-structural coupled problems is proposed. Numerical examples are given to illustrate the application of topology optimization to a bi-material plate under a low single-frequency excitation and to an aerospace structure under a low frequency-band excitation, and to demonstrate the efficiency of the adjoint method and the relaxed form of the OC.

  16. Investigation of trunk muscle activities during lifting using a multi-objective optimization-based model and intelligent optimization algorithms.

    PubMed

    Ghiasi, Mohammad Sadegh; Arjmand, Navid; Boroushaki, Mehrdad; Farahmand, Farzam

    2016-03-01

    A six-degree-of-freedom musculoskeletal model of the lumbar spine was developed to predict the activity of trunk muscles during light, moderate and heavy lifting tasks in a standing posture. The model was formulated as a multi-objective optimization problem, minimizing the sum of the cubed muscle stresses and maximizing the spinal stability index. Two intelligent optimization algorithms, i.e., vector evaluated particle swarm optimization (VEPSO) and the nondominated sorting genetic algorithm (NSGA), were employed to solve the optimization problem. The optimal solution for each task was then found such that the corresponding in vivo intradiscal pressure could be reproduced. Results indicated that both algorithms predicted co-activity in the antagonistic abdominal muscles, as well as an increase in the stability index when going from the light to the heavy task. For all of the light, moderate and heavy tasks, the muscle activity predictions of VEPSO and NSGA were generally consistent and of the same order as the in vivo electromyography data. The proposed methodology is thought to provide improved estimations of muscle activities by considering spinal stability and incorporating the in vivo intradiscal pressure data.

  17. An Optimization Model for Plug-In Hybrid Electric Vehicles

    SciTech Connect

    Malikopoulos, Andreas; Smith, David E

    2011-01-01

    The necessity for environmentally conscious vehicle designs in conjunction with increasing concerns regarding U.S. dependency on foreign oil and climate change have induced significant investment towards enhancing the propulsion portfolio with new technologies. More recently, plug-in hybrid electric vehicles (PHEVs) have held great intuitive appeal and have attracted considerable attention. PHEVs have the potential to reduce petroleum consumption and greenhouse gas (GHG) emissions in the commercial transportation sector. They are especially appealing in situations where daily commuting is within a small amount of miles with excessive stop-and-go driving. The research effort outlined in this paper aims to investigate the implications of motor/generator and battery size on fuel economy and GHG emissions in a medium-duty PHEV. An optimization framework is developed and applied to two different parallel powertrain configurations, e.g., pre-transmission and post-transmission, to derive the optimal design with respect to motor/generator and battery size. A comparison between the conventional and PHEV configurations with equivalent size and performance under the same driving conditions is conducted, thus allowing an assessment of the fuel economy and GHG emissions potential improvement. The post-transmission parallel configuration yields higher fuel economy and less GHG emissions compared to pre-transmission configuration partly attributable to the enhanced regenerative braking efficiency.

  18. Optimal observation network design for conceptual model discrimination and uncertainty reduction

    NASA Astrophysics Data System (ADS)

    Pham, Hai V.; Tsai, Frank T.-C.

    2016-02-01

    This study expands the Box-Hill discrimination function to design an optimal observation network to discriminate conceptual models and, in turn, identify a most favored model. The Box-Hill discrimination function measures the expected decrease in Shannon entropy (for model identification) before and after the optimal design for one additional observation. This study modifies the discrimination function to account for multiple future observations that are assumed spatiotemporally independent and Gaussian-distributed. Bayesian model averaging (BMA) is used to incorporate existing observation data and quantify future observation uncertainty arising from conceptual and parametric uncertainties in the discrimination function. In addition, the BMA method is adopted to predict future observation data in a statistical sense. The design goal is to find optimal locations and least data via maximizing the Box-Hill discrimination function value subject to a posterior model probability threshold. The optimal observation network design is illustrated using a groundwater study in Baton Rouge, Louisiana, to collect additional groundwater heads from USGS wells. The sources of uncertainty creating multiple groundwater models are geological architecture, boundary condition, and fault permeability architecture. Impacts of considering homoscedastic and heteroscedastic future observation data and the sources of uncertainties on potential observation areas are analyzed. Results show that heteroscedasticity should be considered in the design procedure to account for various sources of future observation uncertainty. After the optimal design is obtained and the corresponding data are collected for model updating, total variances of head predictions can be significantly reduced by identifying a model with a superior posterior model probability.

  19. Optimization of a semianalytical ocean color model for global-scale applications.

    PubMed

    Maritorena, Stéphane; Siegel, David A; Peterson, Alan R

    2002-05-20

    Semianalytical (SA) ocean color models have advantages over conventional band-ratio algorithms in that multiple ocean properties can be retrieved simultaneously from a single water-leaving radiance spectrum. However, the complexity of SA models has stalled their development and operational implementation, as optimal SA parameter values are hard to determine because of limitations in development data sets and the lack of robust tuning procedures. We present a procedure for optimizing SA ocean color models for global applications. The SA model to be optimized retrieves simultaneous estimates of the chlorophyll (Chl) concentration, the absorption coefficient for dissolved and detrital materials [a(cdm)(443)], and the particulate backscatter coefficient [b(bp)(443)] from measurements of the normalized water-leaving radiance spectrum. Parameters for the model are tuned by simulated annealing as the global optimization protocol. We first evaluate the robustness of the tuning method using synthetic data sets, and we then apply the tuning procedure to an in situ data set. With the tuned SA parameters, the accuracy of retrievals found with the globally optimized model (the Garver-Siegel-Maritorena model version 1; hereafter GSM01) is excellent, and results are comparable with the current Sea-viewing Wide Field-of-view Sensor (SeaWiFS) algorithm for Chl. The advantage of the GSM01 model is that simultaneous retrievals of a(cdm)(443) and b(bp)(443) are made, which greatly extends the range of global applications that can be explored. Current limitations and further developments of the model are discussed.

  20. Simulation and optimization of pressure swing adsorption systems using reduced-order modeling

    SciTech Connect

    Agarwal, A.; Biegler, L.; Zitney, S.

    2009-01-01

    Over the past three decades, pressure swing adsorption (PSA) processes have been widely used as energy-efficient gas separation techniques, especially for high-purity hydrogen purification from refinery gases. Models for PSA processes are multiple instances of partial differential equations (PDEs) in time and space with periodic boundary conditions that link the processing steps together. The solution of this coupled stiff PDE system is governed by steep fronts moving with time. As a result, the optimization of such systems represents a significant computational challenge to current differential algebraic equation (DAE) optimization techniques and nonlinear programming algorithms. Model reduction is one approach to generate cost-efficient low-order models which can be used as surrogate models in optimization problems. This study develops a reduced-order model (ROM) based on proper orthogonal decomposition (POD), which is a low-dimensional approximation to a dynamic PDE-based model. The proposed method leads to a DAE system of significantly lower order, thus replacing the one obtained from spatial discretization and making the optimization problem computationally efficient. The method has been applied to the dynamic coupled PDE-based model of a two-bed four-step PSA process for separation of hydrogen from methane. Separate ROMs have been developed for each operating step, with different POD modes for each of them. A significant reduction in the order (the number of states) has been achieved. The reduced-order model has been successfully used to maximize hydrogen recovery by manipulating operating pressures, step times, and feed and regeneration velocities, while meeting product purity and tight bounds on these parameters. Current results indicate that the proposed ROM methodology is a promising surrogate modeling technique for cost-effective optimization.
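
    The core of a POD-based ROM is a truncated SVD of solution snapshots; the reduced system then evolves only the modal coefficients. A minimal sketch is below, assuming a snapshot matrix collected from the full-order PSA simulation; the per-step ROMs and the projected dynamics themselves are not shown.

    ```python
    import numpy as np

    def pod_basis(snapshots, energy=0.9999):
        """Compute a POD basis from a snapshot matrix (n_states x n_snapshots).
        Returns the modes capturing the requested fraction of snapshot energy."""
        mean = snapshots.mean(axis=1, keepdims=True)
        U, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)
        frac = np.cumsum(s**2) / np.sum(s**2)
        r = int(np.searchsorted(frac, energy)) + 1   # smallest r reaching the target
        return U[:, :r], mean

    # Reduced state: a(t) = U.T @ (x(t) - mean); reconstruction: x ~ mean + U @ a.
    ```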

  1. The Bremerton enrollment capacity model: an enrollment capacity model supporting the military health system optimization plan.

    PubMed

    Helmers, S

    2001-12-01

    The Department of Defense has launched several initiatives to improve efficiency and quality of care in the military health system. The goal of empaneling 1,300 to 1,500 patients per primary care manager did not correlate well with Naval Hospital Bremerton's experience and did not accurately account for military-specific requirements. The Bremerton Model Task Force was chartered to assess current business practices, identify areas for improvement, and develop a capacity model reflecting military readiness and residency training requirements. Methods included a 12-month review of patient visits and staff surveys of how providers spent their day, with time-and-motion analysis to verify assumptions. Our capacity results (average, 791 enrollees per primary care manager) demonstrated that objective measures at the local level do not support enrollment to Department of Defense-specified levels. Significant changes in "corporate culture" are necessary to accomplish the military health system goals.

  2. Advanced Modeling Reconciles Counterintuitive Decisions in Lead Optimization.

    PubMed

    Fernández, Ariel; Scott, L Ridgway

    2017-01-06

    Lead optimization (LO) is essential to fulfill the efficacy and safety requirements of drug-based targeted therapy. The ease with which water may be locally removed from around the target protein crucially influences LO decisions. However, inferred binding sites often defy intuition, and the resulting LO decisions are often counterintuitive, with nonpolar groups in the drug placed next to polar groups in the target. We first introduce biophysical advances to reconcile these apparent mismatches. We incorporate three-body energy terms that account for the net stabilization of preformed target structures upon removal of interfacial water concurrent with drug binding. These previously unexplored drug-induced environmental changes, which enhance the target electrostatics, are validated against drug-target affinity data, yielding the superior computational accuracy required to improve drug design.

  3. Local structural modeling for implementation of optimal active damping

    NASA Astrophysics Data System (ADS)

    Blaurock, Carl A.; Miller, David W.

    1993-09-01

    Local controllers are good candidates for active control of flexible structures. Local control generally consists of low-order, frequency-benign compensators using collocated hardware. Positive real compensators and plant transfer functions ensure that stability margins and performance robustness are high. The typical design consists of an experimentally chosen gain on a fixed-form controller such as rate feedback. The resulting compensator performs some combination of damping (dissipating energy) and structural modification (changing the energy flow paths). Recent research into structural impedance matching has shown how to optimize dissipation based on the local behavior of the structure. This paper investigates the possibility of improving performance by influencing global energy flow, with local controllers designed against a global performance metric.

  4. Modeling for deformable mirrors and the adaptive optics optimization program

    SciTech Connect

    Henesian, M.A.; Haney, S.W.; Trenholme, J.B.; Thomas, M.

    1997-03-18

    We discuss aspects of adaptive optics optimization for large fusion laser systems such as the 192-arm National Ignition Facility (NIF) at LLNL. By way of example, we considered the discrete actuator deformable mirror and Hartmann sensor system used on the Beamlet laser. Beamlet is a single-aperture prototype of the 11-0-5 slab amplifier design for NIF, and so we expect similar optical distortion levels and deformable mirror correction requirements. We are now in the process of developing a numerically efficient, object-oriented C++ implementation of our adaptive optics and wavefront sensor code, but this code is not yet operational. Results are based instead on the prototype algorithms, coded up in an interpreted array-processing computer language.

  5. Optimization of ultrasonic array inspections using an efficient hybrid model and real crack shapes

    NASA Astrophysics Data System (ADS)

    Felice, Maria V.; Velichko, Alexander; Wilcox, Paul D.; Barden, Tim; Dunhill, Tony

    2015-03-01

    Models which simulate the interaction of ultrasound with cracks can be used to optimize ultrasonic array inspections, but this approach can be time-consuming. To overcome this issue, an efficient hybrid model is implemented which includes a finite element method that requires only a single layer of elements around the crack shape. Scattering matrices are used to capture the scattering behavior of the individual cracks, and a discussion of the angular degrees of freedom of elastodynamic scatterers is included. Real crack shapes are obtained from X-ray Computed Tomography images of cracked parts, and these shapes are used as inputs to the hybrid model. The effect of using real crack shapes instead of straight notch shapes is demonstrated. An array optimization methodology which incorporates the hybrid model, an approximate single-scattering relative noise model, and the real crack shapes is then described.

  6. Optimal sinusoidal modelling of gear mesh vibration signals for gear diagnosis and prognosis

    NASA Astrophysics Data System (ADS)

    Man, Zhihong; Wang, Wenyi; Khoo, Suiyang; Yin, Juliang

    2012-11-01

    In this paper, the synchronous signal average of gear mesh vibration signals is modelled with multiple modulated sinusoidal representations. The signal model parameters are optimised against the measured signal averages using batch least-squares learning. With the optimal signal model, all components of a gear mesh vibration signal, including the amplitude modulations, the phase modulations, and the impulse vibration component induced by gear tooth cracking, are identified and analysed, giving insight into gear tooth crack development and propagation. In particular, the energy distribution of the impulse vibration signal, extracted from the optimal signal model, provides sufficient information for monitoring and diagnosing the evolution of the tooth cracking process, leading to the prognosis of gear tooth cracking. The new methodologies for gear mesh signal modelling and for diagnosing gear tooth fault development and propagation are validated with a set of rig test data, demonstrating excellent performance.
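
    Because the mesh-harmonic part of such a model is linear in its cos/sin coefficients, a least-squares fit separates it from the modulation and impulse content. The sketch below fits fixed mesh harmonics to one revolution of synchronous average; the paper's full model also fits the amplitude and phase modulations, which this simplified version leaves in the residual.

    ```python
    import numpy as np

    def fit_mesh_harmonics(avg, n_teeth, orders=(1, 2, 3)):
        """Linear least-squares fit of gear-mesh harmonics to one revolution of
        a synchronous signal average `avg`. The residual retains the modulation
        sidebands and any crack-induced impulse."""
        n = len(avg)
        theta = 2 * np.pi * np.arange(n) / n          # shaft angle over one revolution
        cols = [np.ones(n)]
        for k in orders:
            cols += [np.cos(k * n_teeth * theta), np.sin(k * n_teeth * theta)]
        A = np.column_stack(cols)
        coef, *_ = np.linalg.lstsq(A, avg, rcond=None)
        amp = np.hypot(coef[1::2], coef[2::2])        # per-harmonic amplitude
        phase = np.arctan2(coef[2::2], coef[1::2])    # term = amp*cos(k*Z*theta - phase)
        residual = avg - A @ coef                     # modulation + impulse content
        return amp, phase, residual
    ```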

  7. Optimization of ultrasonic array inspections using an efficient hybrid model and real crack shapes

    SciTech Connect

    Felice, Maria V.; Velichko, Alexander Wilcox, Paul D.; Barden, Tim; Dunhill, Tony

    2015-03-31

    Models which simulate the interaction of ultrasound with cracks can be used to optimize ultrasonic array inspections, but this approach can be time-consuming. To overcome this issue, an efficient hybrid model is implemented which includes a finite element method that requires only a single layer of elements around the crack shape. Scattering matrices are used to capture the scattering behavior of the individual cracks, and a discussion of the angular degrees of freedom of elastodynamic scatterers is included. Real crack shapes are obtained from X-ray Computed Tomography images of cracked parts, and these shapes are used as inputs to the hybrid model. The effect of using real crack shapes instead of straight notch shapes is demonstrated. An array optimization methodology which incorporates the hybrid model, an approximate single-scattering relative noise model, and the real crack shapes is then described.

  8. Modeling Reservoir-River Networks in Support of Optimizing Seasonal-Scale Reservoir Operations

    NASA Astrophysics Data System (ADS)

    Villa, D. L.; Lowry, T. S.; Bier, A.; Barco, J.; Sun, A.

    2011-12-01

    HydroSCOPE (Hydropower Seasonal Concurrent Optimization of Power and the Environment) is a seasonal time-scale tool for scenario analysis and optimization of reservoir-river networks. Developed in MATLAB, HydroSCOPE is an object-oriented model that simulates basin-scale dynamics with an objective of optimizing reservoir operations to maximize revenue from power generation, reliability in the water supply, environmental performance, and flood control. HydroSCOPE is part of a larger toolset that is being developed through a Department of Energy multi-laboratory project. This project's goal is to provide conventional hydropower decision makers with better information to execute their day-ahead and seasonal operations and planning activities by integrating water balance and operational dynamics across a wide range of spatial and temporal scales. This presentation details the modeling approach and functionality of HydroSCOPE. HydroSCOPE consists of a river-reservoir network model and an optimization routine. The river-reservoir network model simulates the heat and water balance of river-reservoir networks for time-scales up to one year. The optimization routine software, DAKOTA (Design Analysis Kit for Optimization and Terascale Applications - dakota.sandia.gov), is seamlessly linked to the network model and is used to optimize daily volumetric releases from the reservoirs to best meet a set of user-defined constraints, such as maximizing revenue while minimizing environmental violations. The network model uses 1-D approximations for both the reservoirs and river reaches and is able to account for surface and sediment heat exchange as well as ice dynamics for both models. The reservoir model also accounts for inflow, density, and withdrawal zone mixing, and diffusive heat exchange. Routing for the river reaches is accomplished using a modified Muskingum-Cunge approach that automatically calculates the internal timestep and sub-reach lengths to match the conditions of
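
    Routing in the river reaches uses a modified Muskingum-Cunge scheme; the fixed-parameter Muskingum sketch below shows the underlying recurrence. The Cunge variant derives K and X from channel hydraulics, and HydroSCOPE additionally adapts the internal timestep and sub-reach lengths, none of which is reproduced here.

    ```python
    def muskingum_route(inflow, K, X, dt):
        """Classic Muskingum channel routing: O2 = C0*I2 + C1*I1 + C2*O1,
        with K (storage time constant), X (weighting factor), dt (timestep)
        held fixed for the whole reach."""
        denom = 2 * K * (1 - X) + dt
        c0 = (dt - 2 * K * X) / denom
        c1 = (dt + 2 * K * X) / denom
        c2 = (2 * K * (1 - X) - dt) / denom          # c0 + c1 + c2 == 1
        out = [inflow[0]]                            # assume initial outflow = inflow
        for i_prev, i_now in zip(inflow, inflow[1:]):
            out.append(c0 * i_now + c1 * i_prev + c2 * out[-1])
        return out
    ```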

  9. An Optimal Hierarchical Decision Model for a Regional Logistics Network with Environmental Impact Consideration

    PubMed Central

    Zhang, Dezhi; Li, Shuangyan

    2014-01-01

    This paper proposes a new model for the simultaneous optimization of three-level logistics decisions, for logistics authorities, logistics operators, and logistics users, in a regional logistics network with environmental impact consideration. The proposed model addresses the interaction among the three logistics players in a completely competitive logistics service market with CO2 emission charges. We also explicitly incorporate the impacts of the scale economies of the logistics park and the logistics users' demand elasticity into the model. The logistics authorities aim to maximize the total social welfare of the system, considering the demand of green logistics development by two different methods: optimal location of logistics nodes and charging a CO2 emission tax. Logistics operators are assumed to compete on logistics service fare and frequency, while logistics users minimize their own perceived logistics disutility given the logistics operators' service fares and frequencies. A heuristic algorithm based on the multinomial logit model is presented for the three-level decision model, and a numerical example is given to illustrate the model and its algorithm. The proposed model provides a useful tool for modeling competitive logistics services and evaluating logistics policies at the strategic level. PMID:24977209
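
    The users' choice among competing services in the heuristic is governed by a multinomial logit split on perceived disutility. A small sketch of the logit kernel is below, assuming a single scale parameter theta; the paper's full three-level equilibrium is not reproduced.

    ```python
    import numpy as np

    def logit_shares(disutility, theta=1.0):
        """Multinomial-logit split of logistics users among competing services:
        P_i = exp(-theta*u_i) / sum_j exp(-theta*u_j), with u the perceived
        disutility (fare, time, frequency). Numerically stabilised softmax."""
        v = -theta * np.asarray(disutility, float)
        v -= v.max()                       # stabilise the exponentials
        e = np.exp(v)
        return e / e.sum()

    # Example: logit_shares([2.0, 2.5, 3.0], theta=1.5)
    # gives the highest share to the lowest-disutility service.
    ```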

  10. An optimal hierarchical decision model for a regional logistics network with environmental impact consideration.

    PubMed

    Zhang, Dezhi; Li, Shuangyan; Qin, Jin

    2014-01-01

    This paper proposes a new model for the simultaneous optimization of three-level logistics decisions, for logistics authorities, logistics operators, and logistics users, in a regional logistics network with environmental impact consideration. The proposed model addresses the interaction among the three logistics players in a completely competitive logistics service market with CO2 emission charges. We also explicitly incorporate the impacts of the scale economies of the logistics park and the logistics users' demand elasticity into the model. The logistics authorities aim to maximize the total social welfare of the system, considering the demand of green logistics development by two different methods: optimal location of logistics nodes and charging a CO2 emission tax. Logistics operators are assumed to compete on logistics service fare and frequency, while logistics users minimize their own perceived logistics disutility given the logistics operators' service fares and frequencies. A heuristic algorithm based on the multinomial logit model is presented for the three-level decision model, and a numerical example is given to illustrate the model and its algorithm. The proposed model provides a useful tool for modeling competitive logistics services and evaluating logistics policies at the strategic level.

  11. Acoustic resonators for noise control in enclosures: Modelling, design and optimization

    NASA Astrophysics Data System (ADS)

    Yu, Ganghua

    This work systematically investigates the acoustic interaction between an enclosure and resonators, and establishes design tools based upon the interaction theory to optimize the physical characteristics and the locations of resonators. A general theoretical model is first established to predict the acoustic performance of multiple resonators placed in an acoustic enclosure of arbitrary shape. Analytical solutions for the sound pressure inside the enclosure are obtained when a single resonator is installed, which provide insight into the physics of the acoustic interaction between the enclosure and resonators. The theoretical model is experimentally validated, demonstrating its effectiveness and reliability. Using the validated interaction model and the analytical solutions, the internal resistance of a resonator is optimized to improve its performance in a frequency band enclosing acoustic resonances. An energy reduction index is defined to conduct the optimization. The dual process of energy dissipation and radiation by the resonator is quantified. The optimal resistance and its physical effect on the enclosure-resonator interaction are numerically evaluated and categorized in terms of frequency bandwidth. Predictions of the resonator performance are confirmed by experiments. Comparisons with existing models based on different optimization criteria are also performed. It is shown that the proposed model serves as an effective design tool for determining the optimal internal resistance of the resonator in a chosen frequency band. Due to the multi-modal coupling, the resonator performance is also affected by its location in addition to its physical characteristics. When multiple resonators are used, the mutual interaction among resonators calls for a systematic optimization tool to determine their locations. In the present work, different optimization methodologies are explored. These include a sequential design

  12. Backtracking search algorithm in CVRP models for efficient solid waste collection and route optimization.

    PubMed

    Akhtar, Mahmuda; Hannan, M A; Begum, R A; Basri, Hassan; Scavino, Edgar

    2017-03-01

    Waste collection is an important part of waste management that involves environmental, economic, and social issues, among others. Waste collection optimization can reduce the waste collection budget and environmental emissions by reducing the collection route distance. This paper presents a modified Backtracking Search Algorithm (BSA) in capacitated vehicle routing problem (CVRP) models with the smart bin concept to find optimized waste collection routes. The objective function minimizes the sum of the waste collection route distances. The study introduces the concept of the threshold waste level (TWL) of waste bins to reduce the number of bins to be emptied by finding an optimal range, thus minimizing the distance travelled. A scheduling model is also introduced to compare the feasibility of the proposed model with that of the conventional collection system in terms of travel distance, collected waste, fuel consumption, fuel cost, efficiency, and CO2 emissions. The optimal TWL was found to be between 70% and 75% of the fill level of waste collection nodes and had the maximum tightness value across the different problem cases. The results for four days show a 36.80% distance reduction for 91.40% of the total waste collection, which increases the average waste collection efficiency by 36.78% and reduces the fuel consumption, fuel cost, and CO2 emissions by 50%, 47.77%, and 44.68%, respectively. Thus, the proposed optimization model can be considered a viable tool for optimizing waste collection routes to reduce economic costs and environmental impacts.
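
    The TWL idea is simply a pruning step before routing: bins below the threshold are skipped. The sketch below pairs that pruning with a greedy nearest-neighbour tour as a stand-in for the paper's modified BSA/CVRP solver; vehicle capacity and the metaheuristic itself are omitted, and all names are illustrative.

    ```python
    import math

    def route_bins(bins, depot, twl=0.70):
        """Greedy nearest-neighbour tour over bins whose fill level exceeds the
        threshold waste level (TWL). Illustrates only how the TWL prunes the
        collection nodes, not the full capacitated BSA optimization."""
        todo = [b for b in bins if b["fill"] >= twl]      # skip bins below the TWL
        pos, tour, dist = depot, [], 0.0
        while todo:
            nxt = min(todo, key=lambda b: math.dist(pos, b["xy"]))
            dist += math.dist(pos, nxt["xy"])
            pos = nxt["xy"]
            tour.append(nxt["id"])
            todo.remove(nxt)
        dist += math.dist(pos, depot)                     # return to depot
        return tour, dist

    # bins = [{"id": 1, "xy": (3, 4), "fill": 0.82}, ...]; route_bins(bins, (0, 0))
    ```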

  13. Subject-specific planning of femoroplasty: a combined evolutionary optimization and particle diffusion model approach.

    PubMed

    Basafa, Ehsan; Armand, Mehran

    2014-07-18

    A potentially effective treatment for the prevention of osteoporotic hip fractures is augmentation of the mechanical properties of the femur by injecting it with agents such as polymethylmethacrylate (PMMA) bone cement - femoroplasty. The operation, however, is still in the research stage and can benefit substantially from computer planning and optimization. We report the results of computational planning and optimization of the procedure for biomechanical evaluation. An evolutionary optimization method was used to optimally place the cement in finite element (FE) models of seven osteoporotic bone specimens. The optimization, with some inter-specimen variations, suggested that areas close to the cortex in the superior and inferior aspects of the neck and the supero-lateral aspect of the greater trochanter benefit from augmentation. We then used a particle-based model of bone cement diffusion to match the optimized pattern, taking into account the limitations of the actual surgery, including the limited injection volume needed to prevent thermal necrosis. Simulations showed that the yield load can be significantly increased, by more than 30%, using only 9 ml of bone cement. This increase is comparable to previous literature reports in which gross filling of the bone was employed instead, using more than 40 ml of cement. These findings, along with the differences in the optimized plans between specimens, emphasize the need for subject-specific models for effective planning of femoral augmentation.
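
    The accept/reject core of an evolutionary placement search can be stated in a few lines. The sketch below is a generic (1+1) evolution strategy, not the specific algorithm of the paper: `x` is assumed to encode per-element cement fractions in the FE model, and `fitness` stands for the FE-predicted yield load penalised for exceeding the allowed injection volume.

    ```python
    import numpy as np

    def evolve(fitness, x0, sigma=0.1, n_gen=200, seed=0):
        """Minimal (1+1) evolution strategy: perturb the current plan and keep
        the offspring whenever it is no worse. `fitness` is maximised."""
        rng = np.random.default_rng(seed)
        x, fx = np.asarray(x0, float), fitness(x0)
        for _ in range(n_gen):
            cand = np.clip(x + rng.normal(0.0, sigma, size=x.shape), 0.0, 1.0)
            fc = fitness(cand)
            if fc >= fx:              # keep the offspring only if it is no worse
                x, fx = cand, fc
        return x, fx
    ```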

  14. An actuator line model simulation with optimal body force projection length scales

    NASA Astrophysics Data System (ADS)

    Martinez-Tossas, Luis; Churchfield, Matthew J.; Meneveau, Charles

    2016-11-01

    In recent work (Martínez-Tossas et al., "Optimal smoothing length scale for actuator line models of wind turbine blades", preprint), an optimal body force projection length scale for an actuator line model was obtained. This optimization is based on 2-D aerodynamics and is done by comparing an analytical solution of inviscid linearized flow over a Gaussian body force to the potential flow solution of flow over a Joukowski airfoil. The optimization provides a non-dimensional optimal scale ɛ/c for different Joukowski airfoils, where ɛ is the width of the Gaussian kernel and c is the chord. A Gaussian kernel with different widths in the chord and thickness directions can further reduce the error. The 2-D theory is extended by simulating a full-scale rotor using the optimal body force projection length scales. Using these values, the tip losses are captured by the LES, and thus no additional explicit tip-loss correction is needed for the actuator line model. The simulation with the optimal values provides excellent agreement with Blade Element Momentum Theory. This research is supported by the National Science Foundation (Grant OISE-1243482, the WINDINSPIRE project).
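
    The projection step itself is just an (optionally anisotropic) Gaussian kernel spreading each sectional blade force onto the grid. A 2-D sketch is below; the specific ɛ/c value is illustrative only, standing in for the optimum reported in the cited work, and a smaller width in the thickness direction mirrors the anisotropic variant described above.

    ```python
    import numpy as np

    def spread_force(force, xc, yc, X, Y, eps_c, eps_t=None):
        """Project a sectional blade force at (xc, yc) onto a 2-D grid with a
        Gaussian kernel; eps_c and eps_t are the widths in the chord and
        thickness directions. The kernel integrates to 1 over the plane."""
        eps_t = eps_c if eps_t is None else eps_t
        kern = np.exp(-((X - xc) / eps_c) ** 2 - ((Y - yc) / eps_t) ** 2) \
               / (np.pi * eps_c * eps_t)
        return force * kern                    # force density field

    # X, Y = np.meshgrid(np.linspace(-2, 2, 128), np.linspace(-2, 2, 128))
    # f = spread_force(1.0, 0.0, 0.0, X, Y, eps_c=0.2)  # eps/c = 0.2 assumed, c = 1
    ```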

  15. Subject-Specific Planning of Femoroplasty: A Combined Evolutionary Optimization and Particle Diffusion Model Approach

    PubMed Central

    Basafa, Ehsan; Armand, Mehran

    2014-01-01

    A potentially effective treatment for the prevention of osteoporotic hip fractures is augmentation of the mechanical properties of the femur by injecting it with agents such as polymethylmethacrylate (PMMA) bone cement - femoroplasty. The operation, however, is still in the research stage and can benefit substantially from computer planning and optimization. We report the results of computational planning and optimization of the procedure for biomechanical evaluation. An evolutionary optimization method was used to optimally place the cement in finite element (FE) models of seven osteoporotic bone specimens. The optimization, with some inter-specimen variations, suggested that areas close to the cortex in the superior and inferior aspects of the neck and the supero-lateral aspect of the greater trochanter benefit from augmentation. We then used a particle-based model of bone cement diffusion to match the optimized pattern, taking into account the limitations of the actual surgery, including the limited injection volume needed to prevent thermal necrosis. Simulations showed that the yield load can be significantly increased, by more than 30%, using only 9 ml of bone cement. This increase is comparable to previous literature reports in which gross filling of the bone was employed instead, using more than 40 ml of cement. These findings, along with the differences in the optimized plans between specimens, emphasize the need for subject-specific models for effective planning of femoral augmentation. PMID:24856887

  16. Optimal input shaping for Fisher identifiability of control-oriented lithium-ion battery models

    NASA Astrophysics Data System (ADS)

    Rothenberger, Michael J.

    This dissertation examines the fundamental challenge of optimally shaping input trajectories to maximize parameter identifiability of control-oriented lithium-ion battery models. Identifiability is a property from information theory that determines the solvability of parameter estimation for mathematical models using input-output measurements. This dissertation creates a framework that exploits the Fisher information metric to quantify the level of battery parameter identifiability, optimizes this metric through input shaping, and facilitates faster and more accurate estimation. The popularity of lithium-ion batteries is growing significantly in the energy storage domain, especially for stationary and transportation applications. While these cells have excellent power and energy densities, they are plagued with safety and lifespan concerns. These concerns are often resolved in the industry through conservative current and voltage operating limits, which reduce the overall performance and still lack robustness in detecting catastrophic failure modes. New advances in automotive battery management systems mitigate these challenges through the incorporation of model-based control to increase performance, safety, and lifespan. To achieve these goals, model-based control requires accurate parameterization of the battery model. While many groups in the literature study a variety of methods to perform battery parameter estimation, a fundamental issue of poor parameter identifiability remains apparent for lithium-ion battery models. This fundamental challenge of battery identifiability is studied extensively in the literature, and some groups are even approaching the problem of improving the ability to estimate the model parameters. The first approach is to add additional sensors to the battery to gain more information that is used for estimation. The other main approach is to shape the input trajectories to increase the amount of information that can be gained from input
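
    Under additive Gaussian measurement noise, the Fisher information is assembled from output sensitivities along the input trajectory, and input shaping maximises a scalar function of it (e.g., D-optimality). A compact sketch under those assumptions follows; the battery model and the sensitivity computation are outside its scope, and the names are illustrative.

    ```python
    import numpy as np

    def fisher_information(sens, sigma):
        """Fisher information matrix for parameters of a model y(t; theta) with
        additive Gaussian noise: F = S.T @ S / sigma**2, where
        S[i, j] = d y(t_i) / d theta_j are output sensitivities along the input."""
        S = np.asarray(sens, float)
        return S.T @ S / sigma**2

    def d_optimality(sens, sigma):
        """log-det of the FIM: the usual scalar objective for input shaping."""
        sign, logdet = np.linalg.slogdet(fisher_information(sens, sigma))
        return logdet if sign > 0 else -np.inf
    ```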

  17. Oyster Creek cycle 10 nodal model parameter optimization study using PSMS

    SciTech Connect

    Dougher, J.D.

    1987-01-01

    The power shape monitoring system (PSMS) is an on-line core monitoring system that uses a three-dimensional nodal code (NODE-B) to perform nodal power calculations and compute thermal margins. The PSMS contains a parameter optimization function that improves the ability of NODE-B to accurately monitor core power distributions. This function iterates on the model normalization parameters (albedos and mixing factors) to obtain the best agreement between predicted and measured traversing in-core probe (TIP) readings on a statepoint-by-statepoint basis. Following several statepoint optimization runs, an average set of optimized normalization parameters can be determined and implemented in the current or a subsequent cycle core model for on-line core monitoring. A statistical analysis of 19 high-power steady-state statepoints throughout Oyster Creek cycle 10 operation has shown consistently poor virgin model performance. The normalization parameters used in the cycle 10 NODE-B model were based on a cycle 8 study, which evaluated only Exxon fuel types. The introduction of General Electric (GE) fuel into cycle 10 (172 assemblies) was a significant fuel/core design change that could have altered the optimum set of normalization parameters. Based on the need to evaluate a potential change in the model normalization parameters for cycle 11, and in an attempt to account for the poor cycle 10 model performance, a parameter optimization study was performed.

  18. A hydroeconomic modeling framework for optimal integrated management of forest and water

    NASA Astrophysics Data System (ADS)

    Garcia-Prats, Alberto; del Campo, Antonio D.; Pulido-Velazquez, Manuel

    2016-10-01

    Forests play a determinant role in the hydrologic cycle, with water being the most important ecosystem service they provide in semiarid regions. However, this contribution is usually neither quantified nor explicitly valued. The aim of this study is to develop a novel hydroeconomic modeling framework for assessing and designing the optimal integrated forest and water management for forested catchments. The optimization model explicitly integrates changes in water yield in the stands (increase in groundwater recharge) induced by forest management and the value of the additional water provided to the system. The model determines the optimal schedule of silvicultural interventions in the stands of the catchment in order to maximize the total net benefit in the system. Canopy cover and biomass evolution over time were simulated using growth and yield allometric equations specific for the species in Mediterranean conditions. Silvicultural operation costs according to stand density and canopy cover were modeled using local cost databases. Groundwater recharge was simulated using HYDRUS, calibrated and validated with data from the experimental plots. In order to illustrate the presented modeling framework, a case study was carried out in a planted pine forest (Pinus halepensis Mill.) located in south-western Valencia province (Spain). The optimized scenario increased groundwater recharge. This novel modeling framework can be used in the design of a "payment for environmental services" scheme in which water beneficiaries could contribute to fund and promote efficient forest management operations.
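
    The objective above values the extra recharge induced by each intervention against its silvicultural cost. A toy one-stand, one-period reduction of that trade-off is sketched below; all names are illustrative, and the paper's full catchment-scale scheduling over time is not reproduced.

    ```python
    def best_schedule(schedules, recharge_gain, op_cost, water_value):
        """Brute-force pick of the silvicultural schedule maximising net benefit:
        value of the extra groundwater recharge minus intervention costs."""
        def net(s):
            return water_value * recharge_gain(s) - op_cost(s)
        return max(schedules, key=net)
    ```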

  19. Modified optimal control pilot model for computer-aided design and analysis

    NASA Technical Reports Server (NTRS)

    Davidson, John B.; Schmidt, David K.

    1992-01-01

    This paper presents the theoretical development of a modified optimal control pilot model based upon the optimal control model (OCM) of the human operator developed by Kleinman, Baron, and Levison. This model is input-compatible with the OCM and retains other key aspects of the OCM, such as a linear quadratic solution for the pilot gains with inclusion of control rate in the cost function, a Kalman estimator, and the ability to account for attention allocation and perception threshold effects. An algorithm designed for ease of implementation in current dynamic systems analysis and design software is presented. Example results based upon the analysis of a tracking task using three basic dynamic systems are compared with measured results and with similar analyses performed with the OCM and two previously proposed simplified optimal pilot models. The pilot frequency responses and error statistics obtained with this modified optimal control model are shown to compare more favorably with the measured experimental results than those of the other previously proposed simplified models.
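
    The LQ core with control rate in the cost is obtained by augmenting the plant state with u and treating u-dot as the new input. A single-input sketch of that augmentation is below; the neuromuscular lag, Kalman estimator, and attention/threshold effects of the full model are omitted.

    ```python
    import numpy as np
    from scipy.linalg import solve_continuous_are

    def lqr_with_control_rate(A, B, Q, r_u, r_du):
        """LQR gains with control rate in the cost: augment the state with u and
        let v = u_dot be the new input, minimising
        J = integral of x'Qx + r_u*u^2 + r_du*v^2 dt. Single-input for brevity."""
        n = A.shape[0]
        Aa = np.block([[A, B], [np.zeros((1, n)), np.zeros((1, 1))]])
        Ba = np.vstack([np.zeros((n, 1)), np.ones((1, 1))])
        Qa = np.block([[Q, np.zeros((n, 1))], [np.zeros((1, n)), np.array([[r_u]])]])
        Ra = np.array([[r_du]])
        P = solve_continuous_are(Aa, Ba, Qa, Ra)
        return np.linalg.solve(Ra, Ba.T @ P)   # v = -K [x; u]
    ```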

  20. Parameter identification of a distributed runoff model by the optimization software Colleo

    NASA Astrophysics Data System (ADS)

    Matsumoto, Kazuhiro; Miyamoto, Mamoru; Yamakage, Yuzuru; Tsuda, Morimasa; Anai, Hirokazu; Iwami, Yoichi

    2015-04-01

    This paper introduces Colleo (Collection of Optimization software) and illustrates case studies of parameter identification for a distributed runoff model. To calculate river discharge accurately, distributed runoff models are widely used to account for heterogeneous land use, soil type, and rainfall distribution. A feasibility study of parameter optimization is best done in two steps: the first is to survey which optimization algorithms are suitable for the problem of interest; the second is to investigate the performance of a specific algorithm. Most previous studies focus on the second step; this study focuses on the first and complements them. Many optimization algorithms have been proposed in computational science, and a large number of optimization software packages with practically applicable performance and quality have been released to the public. It is well known that choosing an algorithm suited to the problem is important for obtaining good optimization results efficiently. Comparing algorithms readily requires optimization software that can benchmark many algorithms and connect to various simulation codes. Colleo was developed to satisfy these needs: it provides a unified user interface to several optimization packages, such as pyOpt, NLopt, inspyred, and R, and helps investigate the suitability of optimization algorithms. 74 implementations of optimization algorithms, including Nelder-Mead, Particle Swarm Optimization, and Genetic Algorithms, are available through Colleo. The effectiveness of Colleo was demonstrated on flood events in the Gokase River basin in Japan (1,820 km2). From 2002 to 2010, there were 15 flood events in which the discharge exceeded 1,000 m3/s. The discharge was calculated with the PWRI distributed hydrological model developed by ICHARM. The target
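
    The first-step survey described above amounts to running several algorithms on the same calibration problem and comparing the misfit each one reaches. The sketch below does this with scipy as a stand-in for the pyOpt/NLopt/inspyred/R backends that Colleo actually wraps; `simulate` represents a run of the distributed runoff model, and the loss is 1 - NSE (Nash-Sutcliffe efficiency), so minimising it maximises NSE.

    ```python
    import numpy as np
    from scipy.optimize import minimize, differential_evolution

    def nse_loss(params, simulate, observed):
        """Returns 1 - NSE of simulated vs observed discharge."""
        sim = simulate(params)
        return np.sum((observed - sim) ** 2) / np.sum((observed - observed.mean()) ** 2)

    def compare(simulate, observed, x0, bounds):
        """Survey two algorithms on the same calibration problem and report
        the misfit each one reaches."""
        results = {}
        nm = minimize(nse_loss, x0, args=(simulate, observed), method="Nelder-Mead")
        results["Nelder-Mead"] = nm.fun
        de = differential_evolution(nse_loss, bounds, args=(simulate, observed), seed=0)
        results["Differential Evolution"] = de.fun
        return results
    ```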