Science.gov

Sample records for optimal models model

  1. Modeling using optimization routines

    NASA Technical Reports Server (NTRS)

    Thomas, Theodore

    1995-01-01

    Modeling using mathematical optimization routines is a design tool used in magnetic suspension system development. MATLAB is used to calculate minimum cost subject to other desired constraints. The parameters to be measured are programmed into mathematical equations, and MATLAB calculates answers for each set of inputs, where the inputs cover the boundary limits of the design. A Magnetic Suspension System Using Electromagnets Mounted in a Planar Array is a design that makes use of optimization modeling.
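
    The record does not include the underlying routines; the sketch below illustrates the same workflow, minimizing a hypothetical coil cost over bounded design parameters (the cost terms, bounds, and levitation-force requirement are invented for illustration):

    ```python
    # Sketch of bounded design optimization in the spirit of the record above.
    # The cost model, force proxy, and bounds are hypothetical.
    import numpy as np
    from scipy.optimize import minimize

    def coil_cost(p):
        current, turns = p
        lift = current * turns                    # hypothetical suspension-force proxy
        penalty = max(0.0, 800.0 - lift) ** 2     # require enough force to levitate
        return 0.5 * current**2 * turns + 2.0 * turns + penalty

    bounds = [(0.1, 10.0),    # coil current (A): boundary limits of the design
              (10.0, 500.0)]  # number of turns

    result = minimize(coil_cost, x0=[1.0, 100.0], bounds=bounds, method="L-BFGS-B")
    print(result.x, result.fun)
    ```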

  2. Optimization in Cardiovascular Modeling

    NASA Astrophysics Data System (ADS)

    Marsden, Alison L.

    2014-01-01

    Fluid mechanics plays a key role in the development, progression, and treatment of cardiovascular disease. Advances in imaging methods and patient-specific modeling now reveal increasingly detailed information about blood flow patterns in health and disease. Building on these tools, there is now an opportunity to couple blood flow simulation with optimization algorithms to improve the design of surgeries and devices, incorporating more information about the flow physics in the design process to augment current medical knowledge. In doing so, a major challenge is the need for efficient optimization tools that are appropriate for unsteady fluid mechanics problems, particularly for the optimization of complex patient-specific models in the presence of uncertainty. This article reviews the state of the art in optimization tools for virtual surgery, device design, and model parameter identification in cardiovascular flow and mechanobiology applications. In particular, it reviews trade-offs between traditional gradient-based methods and derivative-free approaches, as well as the need to incorporate uncertainties. Key future challenges are outlined, which extend to the incorporation of biological response and the customization of surgeries and devices for individual patients.
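
    The gradient-based versus derivative-free trade-off discussed in the review can be seen in miniature with a generic optimizer library; in the sketch below a smooth test function stands in for an expensive unsteady flow simulation, and the evaluation counts illustrate why derivative-free methods are attractive when gradients are noisy or unavailable:

    ```python
    # Gradient-based vs. derivative-free optimization on a stand-in objective
    # (a Rosenbrock-type function in place of a cardiovascular simulation).
    import numpy as np
    from scipy.optimize import minimize

    def objective(x):
        return (x[0] - 1.0)**2 + 10.0 * (x[1] - x[0]**2)**2

    x0 = np.array([-1.0, 2.0])

    # Gradient-based: few iterations, but needs (numerical) derivatives, which
    # are costly and noise-prone when each evaluation is a full simulation.
    grad_based = minimize(objective, x0, method="BFGS")

    # Derivative-free: no gradients required, typically more evaluations.
    deriv_free = minimize(objective, x0, method="Nelder-Mead")

    print(grad_based.nfev, deriv_free.nfev)   # compare objective-evaluation counts
    ```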

  3. Boiler modeling optimizes sootblowing

    SciTech Connect

    Piboontum, S.J.; Swift, S.M.; Conrad, R.S.

    2005-10-01

    Controlling the cleanliness and limiting the fouling and slagging of heat-transfer surfaces are absolutely necessary to optimize boiler performance. The traditional way to clean heat-transfer surfaces is by sootblowing using air, steam, or water at regular intervals. But with the advent of fuel-switching strategies, such as switching to Powder River Basin (PRB) coal to reduce a plant's emissions, the control of heating-surface cleanliness has become more problematic for many owners of steam generators. Boiler modeling can help solve that problem. The article describes Babcock & Wilcox's Powerclean modeling system, which consists of heating-surface models that produce real-time cleanliness indexes. The Heat Transfer Manager (HTM) program is the core of the system and can be used on any make or model of boiler. A case study shows how the system was successfully used at the 1,350-MW Unit 2 of American Electric Power's Rockport Power Plant in Indiana, which fires a blend of eastern bituminous and PRB coal.

  4. NEMO Oceanic Model Optimization

    NASA Astrophysics Data System (ADS)

    Epicoco, I.; Mocavero, S.; Murli, A.; Aloisio, G.

    2012-04-01

    NEMO is an oceanic model used by the climate community for stand-alone or coupled experiments. Its parallel implementation, based on MPI, limits the exploitation of emerging computational infrastructures at peta- and exascale due to the weight of communications. As a case study we considered the MFS configuration developed at INGV, with a resolution of 1/16° tailored to the Mediterranean Basin. The work focuses on the analysis of the code on the MareNostrum cluster and on the optimization of critical routines. The first performance analysis of the model aimed at establishing how much the computational performance is influenced by the GPFS file system or the local disks, and which is the best domain decomposition. The results highlight that the exploitation of local disks can reduce the wall clock time by up to 40% and that the best performance is achieved with a 2D decomposition when the local domain has a square shape. A deeper performance analysis highlights that the obc_rad, dyn_spg and tra_adv routines are the most time-consuming. The obc_rad routine implements the evaluation of the open boundaries and was the first routine to be optimized. The communication pattern implemented in the obc_rad routine has been redesigned: before the introduction of the optimizations all processes were involved in the communication, but only the processes on the boundaries hold actual data to be exchanged, and only the data on the boundaries must be exchanged. Moreover, the data along the vertical levels are packed and sent with a single MPI_send invocation. The overall efficiency increases compared with the original version, as does the parallel speed-up; the execution time was reduced by about 33.81%. The second phase of optimization involved the SOR solver routine, implementing the Red-Black Successive Over-Relaxation method. The high frequency of data exchange among processes represents most of the overall communication time. The number of communication is
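
    For reference, the Red-Black Successive Over-Relaxation scheme named above updates the grid in two interleaved "colors", which is why halo exchanges are needed after each color sweep in the MPI version; a serial numpy sketch (our own illustration, not NEMO's Fortran routine) is:

    ```python
    # Serial sketch of Red-Black SOR for a 2D Poisson-type problem.
    import numpy as np

    def red_black_sor(phi, rhs, h, omega=1.7, sweeps=100):
        for _ in range(sweeps):
            for color in (0, 1):                  # red points, then black points
                for i in range(1, phi.shape[0] - 1):
                    for j in range(1, phi.shape[1] - 1):
                        if (i + j) % 2 != color:
                            continue
                        gauss_seidel = 0.25 * (phi[i-1, j] + phi[i+1, j] +
                                               phi[i, j-1] + phi[i, j+1]
                                               - h * h * rhs[i, j])
                        phi[i, j] += omega * (gauss_seidel - phi[i, j])
                # In the parallel version, halo values of the color just
                # updated would be exchanged with neighbor processes here.
        return phi

    phi = red_black_sor(np.zeros((32, 32)), np.ones((32, 32)), h=1.0 / 31)
    ```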

  5. Pyomo: Python Optimization Modeling Objects.

    SciTech Connect

    Siirola, John; Laird, Carl Damon; Hart, William Eugene; Watson, Jean-Paul

    2010-11-01

    The Python Optimization Modeling Objects (Pyomo) package [1] is an open source tool for modeling optimization applications within Python. Pyomo provides an object-oriented approach to optimization modeling, and it can be used to define symbolic problems, create concrete problem instances, and solve these instances with standard solvers. While Pyomo provides a capability that is commonly associated with algebraic modeling languages such as AMPL, AIMMS, and GAMS, Pyomo's modeling objects are embedded within a full-featured high-level programming language with a rich set of supporting libraries. Pyomo leverages the capabilities of the Coopr software library [2], which integrates Python packages (including Pyomo) for defining optimizers, modeling optimization applications, and managing computational experiments. A central design principle within Pyomo is extensibility. Pyomo is built upon a flexible component architecture [3] that allows users and developers to readily extend the core Pyomo functionality. Through these interface points, extensions and applications can have direct access to an optimization model's expression objects. This facilitates the rapid development and implementation of new modeling constructs as well as high-level solution strategies (e.g. using decomposition- and reformulation-based techniques). In this presentation, we will give an overview of the Pyomo modeling environment and model syntax, and present several extensions to the core Pyomo environment, including support for Generalized Disjunctive Programming (Coopr GDP), Stochastic Programming (PySP), a generic Progressive Hedging solver [4], and a tailored implementation of Benders decomposition.
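
    A minimal concrete model in the style described, written for this overview rather than taken from the presentation (assumes a solver such as GLPK is installed):

    ```python
    # Toy Pyomo LP: object-oriented modeling, concrete instance, standard solver.
    from pyomo.environ import (ConcreteModel, Var, Objective, Constraint,
                               NonNegativeReals, SolverFactory, minimize)

    model = ConcreteModel()
    model.x = Var(within=NonNegativeReals)
    model.y = Var(within=NonNegativeReals)
    model.cost = Objective(expr=2 * model.x + 3 * model.y, sense=minimize)
    model.demand = Constraint(expr=model.x + 2 * model.y >= 4)

    SolverFactory("glpk").solve(model)    # any installed LP solver works
    print(model.x(), model.y())           # optimal variable values
    ```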

  6. Risk modelling in portfolio optimization

    NASA Astrophysics Data System (ADS)

    Lam, W. H.; Jaaman, Saiful Hafizah Hj.; Isa, Zaidi

    2013-09-01

    Risk management is very important in portfolio optimization. The mean-variance model is used in portfolio optimization to minimize the portfolio risk, measured by the variance of returns, while achieving the target rate of return. The purpose of this study is to compare the portfolio composition as well as the performance of the optimal mean-variance portfolio against an equally weighted portfolio, in which equal proportions are invested in each asset. The results show that the compositions of the two portfolios differ, and that the mean-variance optimal portfolio performs better, giving a higher performance ratio than the equally weighted portfolio.
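
    For concreteness, the mean-variance program described (minimize portfolio variance subject to full investment and a target mean return) can be sketched with a generic solver; the return data below are synthetic:

    ```python
    # Mean-variance portfolio vs. equally weighted portfolio on synthetic data.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    returns = rng.normal(0.001, 0.02, size=(250, 5))   # synthetic return history
    mu, cov = returns.mean(axis=0), np.cov(returns.T)
    target = mu.mean()                                 # target rate of return

    n = len(mu)
    constraints = [{"type": "eq", "fun": lambda w: w.sum() - 1.0},     # fully invested
                   {"type": "eq", "fun": lambda w: w @ mu - target}]   # target return
    res = minimize(lambda w: w @ cov @ w, np.ones(n) / n,
                   bounds=[(0.0, 1.0)] * n, constraints=constraints)   # long-only

    print("mean-variance weights:", np.round(res.x, 3))
    print("equally weighted:     ", np.ones(n) / n)
    ```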

  7. Modeling and optimization of cryopreservation.

    PubMed

    Benson, James D.

    2015-01-01

    Modeling plays a critical role in understanding the biophysical processes behind cryopreservation. It facilitates understanding of the biophysical and some of the biochemical mechanisms of damage during all phases of cryopreservation including CPA equilibration, cooling, and warming. Modeling also provides a tool for optimization of cryopreservation protocols and has yielded a number of successes in this regard. While modern cryobiological modeling includes very detailed descriptions of the physical phenomena that occur during freezing, including ice growth kinetics and spatial gradients that define heat and mass transport models, here we reduce the complexity and approach only a small but classic subset of these problems. Namely, here we describe the process of building and using a mathematical model of a cell in suspension where spatial homogeneity is assumed for all quantities. We define the models that describe the critical cell quantities used to describe optimal and suboptimal protocols and then give an overview of classical methods of how to determine optimal protocols using these models. PMID:25428003
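
    The cell-in-suspension setting described is commonly formalized with a two-parameter membrane-transport model: water volume responds to the transmembrane osmolality difference through the hydraulic conductivity Lp, and a permeating CPA crosses at permeability Ps. The sketch below shows that standard formalism with invented parameter values; it illustrates the model class, not the chapter's worked protocol:

    ```python
    # Two-parameter cell membrane transport model (illustrative parameters).
    from scipy.integrate import solve_ivp

    Lp, Ps, A = 0.1, 0.05, 1.0     # hydraulic conductivity, CPA permeability, area
    Me_salt, Me_cpa = 0.15, 1.0    # extracellular osmolalities (salt, CPA)
    n_salt = 0.15                  # fixed intracellular osmoles of salt

    def rhs(t, y):
        Vw, n_cpa = y                               # water volume, intracellular CPA
        Mi = (n_salt + n_cpa) / Vw                  # intracellular osmolality
        dVw = -Lp * A * ((Me_salt + Me_cpa) - Mi)   # water follows osmotic gradient
        dn_cpa = Ps * A * (Me_cpa - n_cpa / Vw)     # CPA flux down its gradient
        return [dVw, dn_cpa]

    sol = solve_ivp(rhs, (0.0, 60.0), [1.0, 0.0])   # shrink-swell response
    print(sol.y[0, -1], sol.y[1, -1])
    ```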

  8. Optimal piecewise locally linear modeling

    NASA Astrophysics Data System (ADS)

    Harris, Chris J.; Hong, Xia; Feng, M.

    1999-03-01

    Associative memory networks such as Radial Basis Function, neurofuzzy and fuzzy logic networks used for modelling nonlinear processes suffer from the curse of dimensionality (COD): as the input dimension increases, the parameterization, computation cost, training data requirements, etc. increase exponentially. Here a new algorithm is introduced for the construction of Delaunay input-space-partitioned optimal piecewise locally linear models to overcome the COD and to generate locally linear models directly amenable to linear control and estimation algorithms. The training of the model is configured as a new mixture-of-experts network with a fast decision rule derived using convex set theory. A very fast simulated reannealing (VFSR) algorithm is utilized to search for a globally optimal solution of the Delaunay input space partition. A benchmark nonlinear time series is used to demonstrate the new approach.
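
    The central construction, one linear model per simplex of a Delaunay partition of the input space, can be illustrated with standard library tools; the sketch below omits the mixture-of-experts training and the VFSR search over partitions that the paper contributes:

    ```python
    # Piecewise locally linear prediction on a Delaunay partition of the inputs.
    import numpy as np
    from scipy.spatial import Delaunay

    rng = np.random.default_rng(1)
    X = rng.uniform(-1.0, 1.0, size=(30, 2))     # 2-D input samples
    y = np.sin(X[:, 0]) + X[:, 1]**2             # nonlinear target

    tri = Delaunay(X)

    def predict(x):
        s = int(tri.find_simplex(x[None])[0])
        if s < 0:
            raise ValueError("query point lies outside the partition")
        idx = tri.simplices[s]                   # the 3 vertices of the simplex
        plane = np.linalg.solve(np.c_[X[idx], np.ones(3)], y[idx])
        return np.r_[x, 1.0] @ plane             # exact local linear interpolant

    print(predict(np.array([0.1, 0.2])))
    ```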

  9. Optimal designs for copula models

    PubMed Central

    Perrone, E.; Müller, W.G.

    2016-01-01

    Copula modelling has in the past decade become a standard tool in many areas of applied statistics. However, a largely neglected aspect concerns the design of related experiments. Particularly the issue of whether the estimation of copula parameters can be enhanced by optimizing experimental conditions and how robust all the parameter estimates for the model are with respect to the type of copula employed. In this paper an equivalence theorem for (bivariate) copula models is provided that allows formulation of efficient design algorithms and quick checks of whether designs are optimal or at least efficient. Some examples illustrate that in practical situations considerable gains in design efficiency can be achieved. A natural comparison between different copula models with respect to design efficiency is provided as well. PMID:27453616

  10. Adaptive approximation models in optimization

    SciTech Connect

    Voronin, A.N.

    1995-05-01

    The paper proposes a method for optimization of functions of several variables that substantially reduces the number of objective function evaluations compared to traditional methods. The method is based on the property of iterative refinement of approximation models of the optimand function in approximation domains that contract to the extremum point. It does not require subjective specification of the starting point, step length, or other parameters of the search procedure. The method is designed for efficient optimization of unimodal functions of several (not more than 10-15) variables and can be applied to find the global extremum of polymodal functions and also for optimization of scalarized forms of vector objective functions.

  11. Optimal swimming of model ciliates

    NASA Astrophysics Data System (ADS)

    Michelin, Sebastien; Lauga, Eric

    2010-11-01

    In order to swim at low Reynolds numbers, microorganisms must undergo non-time-reversible shape changes. In ciliary locomotion, this symmetry breaking is achieved through the actuation of many flexible cilia distributed on the surface of the organism. Experimental studies have demonstrated the collective synchronization of neighboring cilia (metachronal waves), whose exact origin is still debated. Here we consider the hydrodynamic energetic cost of ciliary locomotion using an axisymmetric envelope model with prescribed tangential surface displacements. We show that the periodic strokes of this model ciliated swimmer that minimize the energy dissipation in the surrounding fluid achieve symmetry breaking at the organism level through the propagation of wave patterns similar to metachronal waves. We analyze the properties of the optimal strokes, in particular the impact on swimming performance introduced by a restriction on maximum cilia tip displacement due to the finite cilia length.

  12. Branch strategies - Modeling and optimization

    NASA Technical Reports Server (NTRS)

    Dubey, Pradeep K.; Flynn, Michael J.

    1991-01-01

    The authors provide a common platform for modeling different schemes for reducing the branch-delay penalty in pipelined processors, as well as for evaluating the associated increase in instruction bandwidth. Their objective is twofold: to develop a model for different approaches to the branch problem and to help select an optimal strategy after taking into account the additional i-traffic generated by branch strategies. The model provides a flexible tool for comparing different branch strategies in terms of the reduction each offers in average branch delay and the associated cost of wasted instruction fetches. This additional criterion turns out to be a valuable consideration in choosing between two strategies that perform almost equally; more importantly, it provides better insight into the expected overall system performance. Simple compiler-support-based low-implementation-cost strategies can be very effective under certain conditions. An active branch prediction scheme based on loop buffers can be as competitive as a branch-target-buffer based strategy.

  13. Optimal Appearance Model for Visual Tracking.

    PubMed

    Wang, Yuru; Jiang, Longkui; Liu, Qiaoyuan; Yin, Minghao

    2016-01-01

    Many studies argue that integrating multiple cues in an adaptive way increases tracking performance. However, what is the definition of adaptiveness and how to realize it remains an open issue. On the premise that the model with optimal discriminative ability is also optimal for tracking the target, this work realizes adaptiveness and robustness through the optimization of multi-cue integration models. Specifically, based on prior knowledge and current observation, a set of discrete samples are generated to approximate the foreground and background distribution. With the goal of optimizing the classification margin, an objective function is defined, and the appearance model is optimized by introducing optimization algorithms. The proposed optimized appearance model framework is embedded into a particle filter for a field test, and it is demonstrated to be robust against various kinds of complex tracking conditions. This model is general and can be easily extended to other parameterized multi-cue models. PMID:26789639

  14. How Optimal Is the Optimization Model?

    ERIC Educational Resources Information Center

    Heine, Bernd

    2013-01-01

    Pieter Muysken's article on modeling and interpreting language contact phenomena constitutes an important contribution. The approach chosen is a top-down one, building on the author's extensive knowledge of all matters relating to language contact. The paper aims at integrating a wide range of factors and levels of social, cognitive, and…

  15. Optimal Decision Making in Neural Inhibition Models

    ERIC Educational Resources Information Center

    van Ravenzwaaij, Don; van der Maas, Han L. J.; Wagenmakers, Eric-Jan

    2012-01-01

    In their influential "Psychological Review" article, Bogacz, Brown, Moehlis, Holmes, and Cohen (2006) discussed optimal decision making as accomplished by the drift diffusion model (DDM). The authors showed that neural inhibition models, such as the leaky competing accumulator model (LCA) and the feedforward inhibition model (FFI), can mimic the…
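
    For readers unfamiliar with the DDM, its accumulate-to-bound dynamics are easy to simulate; the following generic sketch (not code from the article) draws first-passage times of a noisy evidence accumulator:

    ```python
    # Generic drift diffusion model trial: evidence drifts at rate v with noise
    # until it reaches the upper bound a or the lower bound 0.
    import numpy as np

    def ddm_trial(v=0.3, a=1.0, z=0.5, sigma=1.0, dt=0.001, rng=None):
        rng = rng or np.random.default_rng()
        x, t = z * a, 0.0                        # start between the bounds
        while 0.0 < x < a:
            x += v * dt + sigma * np.sqrt(dt) * rng.normal()
            t += dt
        return t, x >= a                         # response time, choice

    rng = np.random.default_rng(4)
    trials = [ddm_trial(rng=rng) for _ in range(1000)]
    print(np.mean([t for t, _ in trials]))       # mean decision time
    ```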

  16. Portfolio optimization with mean-variance model

    NASA Astrophysics Data System (ADS)

    Hoe, Lam Weng; Siew, Lam Weng

    2016-06-01

    Investors wish to achieve the target rate of return at the minimum level of risk in their investment. Portfolio optimization is an investment strategy that can be used to minimize the portfolio risk while achieving the target rate of return. The mean-variance model has been proposed for portfolio optimization; it is an optimization model that aims to minimize the portfolio risk, which is the portfolio variance. The objective of this study is to construct the optimal portfolio using the mean-variance model. The data of this study consist of weekly returns of 20 component stocks of the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI). The results of this study show that the portfolio composition differs across the stocks, and that investors can obtain the target return at the minimum level of risk with the constructed optimal mean-variance portfolio.

  17. A DSN optimal spacecraft scheduling model

    NASA Technical Reports Server (NTRS)

    Webb, W. A.

    1982-01-01

    A computer model is described which uses mixed-integer linear programming to provide optimal DSN spacecraft schedules given a mission set and specified scheduling requirements. A solution technique is proposed which uses Benders' method and a heuristic starting algorithm.

  18. Modelling and Optimizing Mathematics Learning in Children

    ERIC Educational Resources Information Center

    Käser, Tanja; Busetto, Alberto Giovanni; Solenthaler, Barbara; Baschera, Gian-Marco; Kohn, Juliane; Kucian, Karin; von Aster, Michael; Gross, Markus

    2013-01-01

    This study introduces a student model and control algorithm, optimizing mathematics learning in children. The adaptive system is integrated into a computer-based training system for enhancing numerical cognition aimed at children with developmental dyscalculia or difficulties in learning mathematics. The student model consists of a dynamic…

  19. Enhanced index tracking modelling in portfolio optimization

    NASA Astrophysics Data System (ADS)

    Lam, W. S.; Hj. Jaaman, Saiful Hafizah; Ismail, Hamizun bin

    2013-09-01

    Enhanced index tracking is a popular form of passive fund management in the stock market. It is a dual-objective optimization problem: a trade-off between maximizing the mean return and minimizing the risk. Enhanced index tracking aims to generate excess return over that achieved by the index, without purchasing all of the stocks that make up the index, by establishing an optimal portfolio. The objective of this study is to determine the optimal portfolio composition and performance using a weighted model in enhanced index tracking. The weighted model focuses on the trade-off between the excess return and the risk. The results of this study show that the optimal portfolio for the weighted model is able to outperform the Malaysian market index, the Kuala Lumpur Composite Index, through a higher mean return and a lower risk without purchasing all the stocks in the market index.

  1. Model averaging, optimal inference, and habit formation

    PubMed Central

    FitzGerald, Thomas H. B.; Dolan, Raymond J.; Friston, Karl J.

    2014-01-01

    Postulating that the brain performs approximate Bayesian inference generates principled and empirically testable models of neuronal function—the subject of much current interest in neuroscience and related disciplines. Current formulations address inference and learning under a particular assumed model. In reality, organisms are often faced with an additional challenge—that of determining which model or models of their environment are the best for guiding behavior. Bayesian model averaging—which says that an agent should weight the predictions of different models according to their evidence—provides a principled way to solve this problem. Importantly, because model evidence is determined by both the accuracy and complexity of the model, optimal inference requires that these be traded off against one another. This means an agent's behavior should show an equivalent balance. We hypothesize that Bayesian model averaging plays an important role in cognition, given that it is both optimal and realizable within a plausible neuronal architecture. We outline model averaging and how it might be implemented, and then explore a number of implications for brain and behavior. In particular, we propose that model averaging can explain a number of apparently suboptimal phenomena within the framework of approximate (bounded) Bayesian inference, focusing particularly upon the relationship between goal-directed and habitual behavior. PMID:25018724
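
    The averaging rule invoked here is the standard Bayesian one: predictive distributions are weighted by posterior model probabilities, with the model evidence balancing accuracy against complexity:

    ```latex
    % Standard Bayesian model averaging over models m given data D
    p(y \mid D) = \sum_{m} p(y \mid m, D)\, p(m \mid D),
    \qquad
    p(m \mid D) = \frac{p(D \mid m)\, p(m)}{\sum_{m'} p(D \mid m')\, p(m')}
    ```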

  2. Making models match measurements: Model optimization for morphogen patterning networks

    PubMed Central

    Hengenius, JB; Gribskov, MR; Rundell, AE; Umulis, DM

    2015-01-01

    Mathematical modeling of developmental signaling networks has played an increasingly important role in the identification of regulatory mechanisms by providing a sandbox for hypothesis testing and experiment design. Whether these models consist of an equation with a few parameters or dozens of equations with hundreds of parameters, a prerequisite to model-based discovery is to bring simulated behavior into agreement with observed data via parameter estimation. These parameters provide insight into the system (e.g., enzymatic rate constants describe enzyme properties). Depending on the nature of the model fit desired - from qualitative (relative spatial positions of phosphorylation) to quantitative (exact agreement of spatial position and concentration of gene products) - different measures of data-model mismatch are used to estimate different parameter values, which contain different levels of usable information and/or uncertainty. To facilitate the adoption of modeling as a tool for discovery alongside other tools such as genetics, immunostaining, and biochemistry, careful consideration needs to be given to how well a model fits the available data, what the optimized parameter values mean in a biological context, and how the uncertainty in model parameters and predictions plays into experiment design. The core discussion herein pertains to the quantification of model-to-data agreement, which constitutes the first measure of a model's performance and future utility to the problem at hand. Integration of this experimental data and the appropriate choice of objective measures of data-model agreement will continue to drive modeling forward as a tool that contributes to experimental discovery. The Drosophila melanogaster gap gene system, in which model parameters are optimized against in situ immunofluorescence intensities, demonstrates the importance of error quantification, which is applicable to a wide array of developmental modeling studies. PMID:25016297

  3. An overview of the optimization modelling applications

    NASA Astrophysics Data System (ADS)

    Singh, Ajay

    2012-10-01

    The optimal use of available resources is of paramount importance against the backdrop of the increasing food, fiber, and other demands of the burgeoning global population and the shrinking resources. The optimal use of these resources can be determined by employing an optimization technique. This paper provides comprehensive reviews of the use of various programming techniques for the solution of different optimization problems. The past reviews are grouped into nine sections based on the solutions of theme-based real-world problems. The sections include: use of optimization modelling for conjunctive use planning, groundwater management, seawater intrusion management, irrigation management, achieving optimal cropping patterns, management of reservoir system operation, management of resources in arid and semi-arid regions, solid waste management, and miscellaneous uses, which comprise managing problems of hydropower generation and the sugar industry. Conclusions are drawn about where gaps exist and where more research needs to be focused.

  4. Improving Heliospheric Field Models with Optimized Coronal Models

    NASA Astrophysics Data System (ADS)

    Jones, S. I.; Davila, J. M.; Uritsky, V. M.

    2015-12-01

    The Solar Orbiter and Solar Probe Plus missions will travel closer to the Sun than any previous mission, collecting unprecedented in situ data. These data can provide insight into coronal structure, energy transport, and evolution in the inner heliosphere. However, in order to take full advantage of the data, researchers need quality models of the inner heliosphere to connect the in situ observations to their coronal and photospheric sources. Developing quality models for this region of space has proved difficult, in part because the only region of the field accessible to routine measurement is the photosphere. The photospheric field measurements, though somewhat problematic, are used as boundary conditions for coronal models, which often neglect or over-simplify chromospheric conditions, and these coronal models are then used as boundary conditions to drive heliospheric models. The result is a great deal of uncertainty about the accuracy and reliability of the heliospheric models. Here we present a technique we are developing for improving global coronal magnetic field models by optimizing the models to conform to the field morphology observed in coronal images. This agreement between the coronal model and the basic morphology of the corona is essential for creating accurate heliospheric models. We will present results of early tests of two implementations of this idea, and its first application to real-world data.

  5. Modeling the dynamics of ant colony optimization.

    PubMed

    Merkle, Daniel; Middendorf, Martin

    2002-01-01

    The dynamics of Ant Colony Optimization (ACO) algorithms is studied using a deterministic model that assumes an average expected behavior of the algorithms. The ACO optimization metaheuristic is an iterative approach where, in every iteration, artificial ants construct solutions randomly but guided by pheromone information stemming from former ants that found good solutions. The behavior of ACO algorithms and the ACO model are analyzed for certain types of permutation problems. It is shown analytically that the decisions of an ant are influenced in an intriguing way by the use of the pheromone information and the properties of the pheromone matrix. This explains why ACO algorithms can show complex dynamic behavior even when there is only one ant per iteration and no competition occurs. The ACO model is used to describe the algorithm behavior as a combination of situations with different degrees of competition between the ants. This helps to better understand the dynamics of the algorithm when there are several ants per iteration, as is always the case when using ACO algorithms for optimization. Simulations are done to compare the behavior of the ACO model with the ACO algorithm. Results show that the deterministic model describes essential features of the dynamics of ACO algorithms quite accurately, while other aspects of the algorithm's behavior cannot be found in the model. PMID:12227995
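
    As a concrete reference for the algorithm being modeled, one ACO iteration consists of pheromone-guided tour construction followed by evaporation and reinforcement; a bare-bones sketch with typical textbook parameter choices and one ant per iteration, as in the analysis above:

    ```python
    # Bare-bones Ant Colony Optimization on a synthetic permutation problem.
    import numpy as np

    rng = np.random.default_rng(2)
    D = rng.uniform(1.0, 10.0, size=(6, 6))       # synthetic cost matrix
    np.fill_diagonal(D, np.inf)
    tau = np.ones_like(D)                         # pheromone matrix
    alpha, beta, rho = 1.0, 2.0, 0.1

    def construct_tour():
        tour = [0]
        while len(tour) < len(D):
            i = tour[-1]
            w = tau[i]**alpha * D[i]**(-beta)     # pheromone times heuristic
            w[tour] = 0.0                         # exclude visited nodes
            tour.append(rng.choice(len(D), p=w / w.sum()))
        return tour

    for _ in range(50):
        tour = construct_tour()
        edges = list(zip(tour, tour[1:] + tour[:1]))
        length = sum(D[i, j] for i, j in edges)
        tau *= 1.0 - rho                          # evaporation
        for i, j in edges:
            tau[i, j] += 1.0 / length             # reinforce the constructed tour
    ```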

  6. Model test optimization using the virtual environment for test optimization

    SciTech Connect

    Klenke, S.E.; Reese, G.M.; Schoof, L.A.; Shierling, C.

    1995-11-01

    We present a software environment integrating analysis and test-based models to support optimal modal test design through a Virtual Environment for Test Optimization (VETO). The VETO assists analysis and test engineers to maximize the value of each modal test. It is particularly advantageous for structural dynamics model reconciliation applications. The VETO enables an engineer to interact with a finite element model of a test object to optimally place sensors and exciters and to investigate the selection of data acquisition parameters needed to conduct a complete modal survey. Additionally, the user can evaluate the use of different types of instrumentation, such as filters, amplifiers and transducers, for which models are available in the VETO. The dynamic response of most of the virtual instruments (including the device under test) is modeled in the state space domain. Design of modal excitation levels and appropriate test instrumentation is facilitated by the VETO's ability to simulate such features as unmeasured external inputs, A/D quantization effects, and electronic noise. Measures of the quality of the experimental design, including the Modal Assurance Criterion and the Normal Mode Indicator Function, are available. The VETO also integrates tools such as Effective Independence and minamac to assist in selection of optimal sensor locations. The software is designed around three distinct modules: (1) a main controller and GUI written in C++, (2) a visualization model, taken from FEAVR, running under AVS, and (3) a state space model and time integration module built in SIMULINK. These modules are designed to run as separate processes on interconnected machines.

  7. Modeling optimal mineral nutrition for hazelnut micropropagation

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Micropropagation of hazelnut (Corylus avellana L.) is typically difficult due to the wide variation in response among cultivars. This study was designed to overcome that difficulty by modeling the optimal mineral nutrients for micropropagation of C. avellana selections using a response surface design...

  8. Optimal Experimental Design for Model Discrimination

    PubMed Central

    Myung, Jay I.; Pitt, Mark A.

    2009-01-01

    Models of a psychological process can be difficult to discriminate experimentally because it is not easy to determine the values of the critical design variables (e.g., presentation schedule, stimulus structure) that will be most informative in differentiating them. Recent developments in sampling-based search methods in statistics make it possible to determine these values, and thereby identify an optimal experimental design. After describing the method, it is demonstrated in two content areas in cognitive psychology in which models are highly competitive: retention (i.e., forgetting) and categorization. The optimal design is compared with the quality of designs used in the literature. The findings demonstrate that design optimization has the potential to increase the informativeness of the experimental method. PMID:19618983

  9. Generalized mathematical models in design optimization

    NASA Technical Reports Server (NTRS)

    Papalambros, Panos Y.; Rao, J. R. Jagannatha

    1989-01-01

    The theory of optimality conditions of extremal problems can be extended to problems continuously deformed by an input vector. The connection between the sensitivity, well-posedness, stability and approximation of optimization problems is steadily emerging. The authors believe that the important realization here is that the underlying basis of all such work is still the study of point-to-set maps and of small perturbations, yet what has been identified previously as being just related to solution procedures is now being extended to study modeling itself in its own right. Many important studies related to the theoretical issues of parametric programming and large deformation in nonlinear programming have been reported in the last few years, and the challenge now seems to be in devising effective computational tools for solving these generalized design optimization models.

  10. Optimal hierarchies for fuzzy object models

    NASA Astrophysics Data System (ADS)

    Matsumoto, Monica M. S.; Udupa, Jayaram K.

    2013-03-01

    In radiologic clinical practice, the analysis underlying image examinations is qualitative, descriptive, and to some extent subjective. Quantitative radiology (QR) is valuable in clinical radiology, and computerized automatic anatomy recognition (AAR) is an essential step toward that goal. AAR is a body-wide organ recognition strategy. The AAR framework is based on fuzzy object models (FOMs), wherein the models for the different objects are encoded in a hierarchy. We investigated ways of optimally designing the hierarchy tree while building the models. The hierarchy among the objects is a core concept of AAR. The parent-offspring relationships have two main purposes in this context: (i) to bring into AAR more understanding and knowledge about the form, geography, and relationships among objects, and (ii) to foster guidance for object recognition and object delineation. In this approach, the relationship among objects is represented by a graph, where the vertices are the objects (organs) and the edges connect all pairs of vertices into a complete graph. Each pair of objects is assigned a weight determined by the spatial distance between them, their intensity profile differences, and their correlation in size, all estimated over a population. The optimal hierarchy tree is obtained by the shortest-path algorithm as an optimal spanning tree. To evaluate the optimal hierarchies, we have performed some preliminary tests involving the subsequent recognition step. The body region used for the initial investigation was the thorax.
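
    The construction described (a complete weighted graph over the organs from which an optimal spanning tree is extracted) maps directly onto standard graph routines; the weights below are invented placeholders for the distance, intensity, and size terms estimated over a population:

    ```python
    # Optimal hierarchy as a minimum spanning tree of a complete organ graph.
    import numpy as np
    from scipy.sparse.csgraph import minimum_spanning_tree

    organs = ["skin", "r_lung", "l_lung", "heart", "trachea"]
    rng = np.random.default_rng(5)
    W = rng.uniform(0.1, 1.0, size=(5, 5))        # invented pairwise weights
    W = np.triu(W, 1)                             # upper triangle: complete graph

    tree = minimum_spanning_tree(W).toarray()     # optimal spanning tree
    for i, j in zip(*np.nonzero(tree)):
        print(f"{organs[i]} -> {organs[j]}  (weight {tree[i, j]:.2f})")
    ```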

  11. Computer modeling for optimal placement of gloveboxes

    SciTech Connect

    Hench, K.W.; Olivas, J.D.; Finch, P.R.

    1997-08-01

    Reduction of the nuclear weapons stockpile and the general downsizing of the nuclear weapons complex has presented challenges for Los Alamos. One is to design an optimized fabrication facility to manufacture nuclear weapon primary components (pits) in an environment of intense regulation and shrinking budgets. Historically, the location of gloveboxes in a processing area has been determined without benefit of industrial engineering studies to ascertain the optimal arrangement. The opportunity exists for substantial cost savings and increased process efficiency through careful study and optimization of the proposed layout by constructing a computer model of the fabrication process. This paper presents an integrative two-stage approach to modeling the casting operation for pit fabrication. The first stage uses a mathematical technique for the formulation of the facility layout problem; the solution procedure uses an evolutionary heuristic technique. The best solutions to the layout problem are used as input to the second stage - a computer simulation model that assesses the impact of competing layouts on operational performance. The focus of the simulation model is to determine the layout that minimizes personnel radiation exposures and nuclear material movement, and maximizes the utilization of capacity for finished units.

  12. Probabilistic computer model of optimal runway turnoffs

    NASA Technical Reports Server (NTRS)

    Schoen, M. L.; Preston, O. W.; Summers, L. G.; Nelson, B. A.; Vanderlinden, L.; Mcreynolds, M. C.

    1985-01-01

    Landing delays are currently a problem at major air carrier airports, and many forecasters agree that airport congestion will get worse by the end of the century. It is anticipated that some types of delays can be reduced by an efficient optimal runway exit system allowing the increased approach volumes necessary at congested airports. A computerized Probabilistic Runway Turnoff Model, which locates exits and defines path geometry for a selected maximum occupancy time appropriate for each TERPS aircraft category, is defined. The model includes an algorithm for lateral ride comfort limits.

  13. Global optimization of bilinear engineering design models

    SciTech Connect

    Grossmann, I.; Quesada, I.

    1994-12-31

    Recently, Quesada and Grossmann proposed a global optimization algorithm for solving NLP problems involving linear fractional and bilinear terms. This work was motivated by a number of applications in process design. The proposed method relies on the derivation of a convex NLP underestimator problem that is used within a spatial branch-and-bound search. This paper explores the use of alternative bounding approximations for constructing the underestimator problem. These are applied in the global optimization of problems arising in different engineering areas, for which different relaxations are proposed depending on the mathematical structure of the models. These relaxations include linear and nonlinear underestimator problems. Reformulations that generate additional estimator functions are also employed. Examples from process design, structural design, portfolio investment and layout design are presented.
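
    The convex underestimators referred to for bilinear terms are, in the standard construction, the McCormick envelopes; for w = xy with x in [x^L, x^U] and y in [y^L, y^U] they read:

    ```latex
    % McCormick envelopes for the bilinear term w = xy on a box domain
    w \ge x^{L} y + x\, y^{L} - x^{L} y^{L}, \qquad
    w \ge x^{U} y + x\, y^{U} - x^{U} y^{U}, \\
    w \le x^{U} y + x\, y^{L} - x^{U} y^{L}, \qquad
    w \le x^{L} y + x\, y^{U} - x^{L} y^{U}
    ```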

  14. Combined optimization model for sustainable energization strategy

    NASA Astrophysics Data System (ADS)

    Abtew, Mohammed Seid

    Access to energy is a foundation for establishing a positive impact on multiple aspects of human development. Both developed and developing countries share a common concern: achieving a sustainable energy supply to fuel economic growth and improve the quality of life with minimal environmental impacts. The Least Developed Countries (LDCs), however, have different economic, social, and energy systems. Prevalence of power outages, lack of access to electricity, structural dissimilarity between rural and urban regions, and the dominance of traditional fuels for cooking, with the resultant health and environmental hazards, are some of the distinguishing characteristics of these nations. Most energy planning models have been designed for developed countries' socio-economic demographics and have missed the opportunity to address special features of the poor countries. An improved mixed-integer programming energy-source optimization model is developed to address limitations associated with using current energy optimization models for LDCs, tackle the development of sustainable energization strategies, and ensure diversification and risk management provisions in the selected energy mix. The model predicted a shift from a traditional-fuel-reliant and weather-vulnerable energy source mix to a least-cost and reliable portfolio of modern clean energy sources, a climb up the energy ladder, and multifaceted economic, social, and environmental benefits. At the same time, it represented a transition strategy that evolves to increasingly cleaner energy technologies with growth, as opposed to an expensive solution that leapfrogs immediately to the cleanest possible, overreaching technologies.

  15. Parameter optimization in S-system models

    PubMed Central

    Vilela, Marco; Chou, I-Chun; Vinga, Susana; Vasconcelos, Ana Tereza R; Voit, Eberhard O; Almeida, Jonas S

    2008-01-01

    Background: The inverse problem of identifying the topology of biological networks from their time series responses is a cornerstone challenge in systems biology. We tackle this challenge here through the parameterization of S-system models. It was previously shown that parameter identification can be performed as an optimization based on the decoupling of the differential S-system equations, which results in a set of algebraic equations. Results: A novel parameterization solution is proposed for the identification of S-system models from time series when no information about the network topology is known. The method is based on eigenvector optimization of a matrix formed from multiple regression equations of the linearized decoupled S-system. Furthermore, the algorithm is extended to the optimization of network topologies with constraints on metabolites and fluxes. These constraints rejoin the system in cases where it had been fragmented by decoupling. We demonstrate with synthetic time series why the algorithm can be expected to converge in most cases. Conclusion: A procedure was developed that facilitates automated reverse engineering tasks for biological networks using S-systems. The proposed method of eigenvector optimization constitutes an advancement over S-system parameter identification from time series using a recent method called alternating regression. The proposed method overcomes convergence issues encountered in alternating regression by identifying nonlinear constraints that restrict the search space to computationally feasible solutions. Because the parameter identification is still performed for each metabolite separately, the modularity and linear time characteristics of the alternating regression method are preserved. Simulation studies illustrate how the proposed algorithm identifies the correct network topology out of a collection of models which all fit the dynamical time series essentially equally well. PMID:18416837
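
    For reference, an S-system represents each state variable by a difference of two power laws, and the exponents are exactly the topology being identified:

    ```latex
    % Canonical S-system form: production minus degradation power laws;
    % nonzero exponents g_ij, h_ij define the network topology
    \frac{dX_i}{dt} = \alpha_i \prod_{j=1}^{n} X_j^{\,g_{ij}}
                    - \beta_i \prod_{j=1}^{n} X_j^{\,h_{ij}},
    \qquad i = 1, \dots, n
    ```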

  16. Optimal feeding vs. optimal swimming of model ciliates

    NASA Astrophysics Data System (ADS)

    Michelin, Sebastien; Lauga, Eric

    2011-11-01

    To swim at low Reynolds numbers, micro-organisms create flow fields that modify the transport of nutrients around them, thereby impacting their feeding rate. When the nutrient is a passive scalar, the feeding rate of a given micro-swimmer varies greatly with the Péclet number (Pe), a relative measure of advection and diffusion in nutrient transport that strongly depends on the nutrient species considered. Using an axisymmetric envelope model for ciliary locomotion and adjoint-based optimization, we determine the swimming (or possibly non-swimming) strokes maximizing the nutrient uptake of the micro-swimmer for a given energy cost. We show that, unlike the feeding rate, this optimal feeding stroke is essentially independent of the Péclet number (and, therefore, of the nutrient considered) and is identical to the stroke with maximum swimming efficiency.

  17. Modeling, Analysis, and Optimization Issues for Large Space Structures

    NASA Technical Reports Server (NTRS)

    Pinson, L. D. (Compiler); Amos, A. K. (Compiler); Venkayya, V. B. (Compiler)

    1983-01-01

    Topics concerning the modeling, analysis, and optimization of large space structures are discussed including structure-control interaction, structural and structural dynamics modeling, thermal analysis, testing, and design.

  18. Code Differentiation for Hydrodynamic Model Optimization

    SciTech Connect

    Henninger, R.J.; Maudlin, P.J.

    1999-06-27

    Use of a hydrodynamics code for experimental data fitting purposes (an optimization problem) requires information about how a computed result changes when the model parameters change. These so-called sensitivities provide the gradient that determines the search direction for modifying the parameters to find an optimal result. Here, the authors apply code-based automatic differentiation (AD) techniques in the forward and adjoint modes to two problems with 12 parameters to obtain these gradients, and compare the computational efficiency and accuracy of the various methods. They fit the pressure trace from a one-dimensional flyer-plate experiment and examine the accuracy for a two-dimensional jet-formation problem. For the flyer-plate experiment, the adjoint mode requires similar or less computer time than the forward methods. Additional parameters will not change the adjoint mode run time appreciably, which is a distinct advantage for this method. Obtaining accurate sensitivities for the jet problem parameters remains problematic.
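
    Forward-mode AD, the simpler of the two modes compared, can be made concrete with a tiny dual-number class; this is our own generic sketch, not code from the hydrodynamics application:

    ```python
    # Minimal forward-mode AD with dual numbers: derivatives propagate
    # alongside values through ordinary arithmetic.
    class Dual:
        def __init__(self, val, eps=0.0):
            self.val, self.eps = val, eps
        def __add__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.val + o.val, self.eps + o.eps)
        __radd__ = __add__
        def __mul__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.val * o.val,
                        self.val * o.eps + self.eps * o.val)   # product rule
        __rmul__ = __mul__

    def f(p):
        return 3.0 * p * p + 2.0 * p      # stand-in for a computed result

    y = f(Dual(2.0, 1.0))                 # seed d/dp = 1 at p = 2
    print(y.val, y.eps)                   # value 16.0, sensitivity dy/dp = 14.0
    ```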

  19. Optimal evolution models for quantum tomography

    NASA Astrophysics Data System (ADS)

    Czerwiński, Artur

    2016-02-01

    The research presented in this article concerns the stroboscopic approach to quantum tomography, an area where quantum physics and linear algebra overlap. In this article we introduce the algebraic structure of parameter-dependent quantum channels for 2-level and 3-level systems such that the generator of evolution corresponding with the Kraus operators has no degenerate eigenvalues. In such cases the index of cyclicity of the generator is equal to 1, which physically means that there exists one observable whose measurement, performed a sufficient number of times at distinct instants, provides enough data to reconstruct the initial density matrix and, consequently, the trajectory of the state. The necessary conditions for the parameters and relations between them are introduced. The results presented in this paper seem to have considerable potential applications in experiments due to the fact that one can perform quantum tomography by conducting only one kind of measurement. Therefore, the analyzed evolution models can be considered optimal in the context of quantum tomography. Finally, we introduce some remarks concerning optimal evolution models in the case of an n-dimensional Hilbert space.

  1. Designing Sensor Networks by a Generalized Highly Optimized Tolerance Model

    NASA Astrophysics Data System (ADS)

    Miyano, Takaya; Yamakoshi, Miyuki; Higashino, Sadanori; Tsutsui, Takako

    A variant of the highly optimized tolerance model is applied to a toy problem of bioterrorism to determine the optimal arrangement of hypothetical bio-sensors to avert an epidemic outbreak. A nonlinear loss function is utilized in searching for the optimal structure of the sensor network. The proposed method successfully averts disastrously large events, which cannot be achieved by the original highly optimized tolerance model.

  2. Application of simulation models for the optimization of business processes

    NASA Astrophysics Data System (ADS)

    Jašek, Roman; Sedláček, Michal; Chramcov, Bronislav; Dvořák, Jiří

    2016-06-01

    The paper deals with applications of modeling and simulation tools to the optimization of business processes, especially to optimizing signal flow in a security company. Simul8 was selected as the modeling tool; it supports process modeling based on discrete-event simulation and enables the creation of visual models of production and distribution processes.

  3. Model Identification for Optimal Diesel Emissions Control

    SciTech Connect

    Stevens, Andrew J.; Sun, Yannan; Song, Xiaobo; Parker, Gordon

    2013-06-20

    In this paper we develop a model-based controller for diesel emission reduction using system identification methods. Specifically, our method minimizes the downstream readings from a production NOx sensor while injecting a minimal amount of urea upstream. Based on the linear quadratic estimator, we derive the closed-form solution to a cost function that accounts for the case where some of the system inputs are not controllable. Our cost function can also be tuned to trade off between input usage and output optimization. Our approach performs better than a production controller in simulation: our NOx conversion efficiency was 92.7%, while the production controller achieved 92.4%. For NH3 conversion, our efficiency was 98.7% compared to 88.5% for the production controller.
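
    As background for the linear quadratic machinery used above, a textbook discrete-time LQR gain can be computed from the algebraic Riccati equation; the matrices below are illustrative stand-ins, and this is not the paper's closed-form solution for partially controllable inputs:

    ```python
    # Textbook discrete-time LQR: solve the Riccati equation, form the gain.
    import numpy as np
    from scipy.linalg import solve_discrete_are

    A = np.array([[1.0, 0.1],
                  [0.0, 0.9]])          # illustrative plant dynamics
    B = np.array([[0.0],
                  [0.1]])               # control input (e.g., urea injection rate)
    Q = np.diag([10.0, 1.0])            # weight on states (e.g., downstream NOx)
    R = np.array([[1.0]])               # weight on input usage

    P = solve_discrete_are(A, B, Q, R)                  # Riccati solution
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # optimal feedback gain
    print(K)                                            # control law: u = -K x
    ```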

  4. Optimized Markov state models for metastable systems

    NASA Astrophysics Data System (ADS)

    Guarnera, Enrico; Vanden-Eijnden, Eric

    2016-07-01

    A method is proposed to identify target states that optimize a metastability index amongst a set of trial states and to use these target states as milestones (or core sets) to build Markov State Models (MSMs). If the optimized metastability index is small, this automatically guarantees the accuracy of the MSM, in the sense that the transitions between the target milestones are indeed approximately Markovian. The method is simple to implement and use, does not require that the dynamics on the trial milestones be Markovian, and also offers the possibility to partition the system's state space by assigning every trial milestone to the target milestone it is most likely to visit next, and to identify transition state regions. Here the method is tested on the Gly-Ala-Gly peptide, where it is shown to correctly identify the expected metastable states in the dihedral angle space of the molecule without a priori information about these states. It is also applied to analyze the folding landscape of the Beta3s mini-protein, where it is shown to identify the folded basin as a connecting hub between a helix-rich region, which is entropically stabilized, and a beta-rich region, which is energetically stabilized and acts as a kinetic trap.
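
    Once target milestones are chosen, the MSM itself reduces to a row-stochastic transition matrix estimated from milestone-to-milestone transition counts at a chosen lag time; a generic sketch (synthetic discrete trajectory, not the peptide data):

    ```python
    # Generic MSM estimation: count lagged transitions, then row-normalize.
    import numpy as np

    def estimate_msm(dtraj, n_states, lag):
        counts = np.zeros((n_states, n_states))
        for a, b in zip(dtraj[:-lag], dtraj[lag:]):
            counts[a, b] += 1.0
        return counts / counts.sum(axis=1, keepdims=True)

    dtraj = np.random.default_rng(3).integers(0, 3, size=5000)  # stand-in trajectory
    T = estimate_msm(dtraj, n_states=3, lag=10)
    print(T)    # row-stochastic transition matrix between milestones
    ```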

  5. Optimization approaches to nonlinear model predictive control

    SciTech Connect

    Biegler, L.T.; Rawlings, J.B.

    1991-01-01

    With the development of sophisticated methods for nonlinear programming and powerful computer hardware, it now becomes useful and efficient to formulate and solve nonlinear process control problems through on-line optimization methods. This paper explores and reviews control techniques based on repeated solution of nonlinear programming (NLP) problems. Here several advantages present themselves. These include minimization of readily quantifiable objectives, coordinated and accurate handling of process nonlinearities and interactions, and systematic ways of dealing with process constraints. We motivate this NLP-based approach with small nonlinear examples and present a basic algorithm for optimization-based process control. As can be seen, this approach is a straightforward extension of popular model-predictive controllers (MPCs) that are used for linear systems. The statement of the basic algorithm raises a number of questions regarding stability and robustness of the method, efficiency of the control calculations, incorporation of feedback into the controller, and reliable ways of handling process constraints. Each of these will be treated through analysis and/or modification of the basic algorithm. To highlight and support this discussion, several examples are presented and key results are examined and further developed.
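
    The basic algorithm discussed (solve a finite-horizon NLP, inject the first control move, shift the horizon, repeat) can be outlined as follows; the scalar plant is a toy stand-in, not one of the paper's process examples:

    ```python
    # Receding-horizon (NMPC) loop: repeatedly solve a small NLP online.
    import numpy as np
    from scipy.optimize import minimize

    def plant(x, u):
        return x + 0.1 * (-x**3 + u)             # toy nonlinear dynamics

    def horizon_cost(u_seq, x0, x_ref=1.0):
        x, cost = x0, 0.0
        for u in u_seq:
            x = plant(x, u)
            cost += (x - x_ref)**2 + 0.01 * u**2   # tracking + input penalty
        return cost

    x, N = 0.0, 10                               # initial state, horizon length
    for _ in range(30):                          # closed-loop steps
        res = minimize(horizon_cost, np.zeros(N), args=(x,),
                       bounds=[(-2.0, 2.0)] * N)     # process (input) constraints
        x = plant(x, res.x[0])                   # apply only the first move
    print(x)                                     # state approaches the setpoint
    ```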

  6. Determining Reduced Order Models for Optimal Stochastic Reduced Order Models

    SciTech Connect

    Bonney, Matthew S.; Brake, Matthew R.W.

    2015-08-01

    The use of parameterized reduced order models (PROMs) within the stochastic reduced order model (SROM) framework is a logical progression for both methods. In this report, five different parameterized reduced order models are selected and critiqued against each other and against the truth model for the example of the Brake-Reuss beam. The models are: a Taylor series using finite differences, a proper orthogonal decomposition of the output, a Craig-Bampton representation of the model, a method that uses Hyper-Dual numbers to determine the sensitivities, and a Meta-Model method that uses the Hyper-Dual results and constructs a polynomial curve to better represent the output data. The methods are compared against a parameter sweep and a distribution propagation where the first four statistical moments are used as a comparison. Each method produces very accurate results, with the Craig-Bampton reduction being the least accurate. The models are also compared based on the time required to evaluate each model, where the Meta-Model requires the least computation time by a significant amount. Each of the five models provided accurate results in a reasonable time frame. The determination of which model to use depends on the availability of the high-fidelity model and how many evaluations can be performed. Analysis of the output distribution is examined by using a large Monte-Carlo simulation along with a reduced simulation using Latin Hypercube sampling and the stochastic reduced order model sampling technique. Both techniques produced accurate results. The stochastic reduced order modeling technique produced less error when compared to an exhaustive sampling for the majority of methods.

  7. Modeling and Optimizing RF Multipole Ion Traps

    NASA Astrophysics Data System (ADS)

    Fanghaenel, Sven; Asvany, Oskar; Schlemmer, Stephan

    2016-06-01

    Radio frequency (rf) ion traps are very well suited for spectroscopy experiments thanks to the long-term storage of the species of interest in a well-defined volume. The electrical potential of the ion trap is determined by the geometry of its electrodes and the applied voltages. In order to understand the behavior of trapped ions in realistic multipole traps it is necessary to characterize these trapping potentials. Commercial programs like SIMION or COMSOL, employing the finite difference and/or finite element method, are often used to model the electrical fields of the trap in order to design traps for various purposes, e.g. introducing light from a laser into the trap volume. For controlled trapping of ions, e.g. for low temperature trapping, the time-dependent electrical fields need to be known to high accuracy, especially at the minimum of the effective (mechanical) potential. The commercial programs are not optimized for these applications and suffer from a number of limitations. Therefore, in our approach the boundary element method (BEM) has been employed in home-built programs to generate numerical solutions of real trap geometries, e.g. from CAD drawings. In addition, the resulting fields are described by appropriate multipole expansions. As a consequence, the quality of a trap can be characterized by a small set of multipole parameters which are used to optimize the trap design. In this presentation a few example calculations will be discussed. In particular, the accuracy of the method and the benefits of describing the trapping potentials via multipole expansions will be illustrated. As one important application, heating effects of cold ions arising from non-ideal multipole fields can now be understood as a consequence of imperfect field configurations.
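
    The multipole description mentioned can be written explicitly; near the trap axis, the two-dimensional potential satisfying Laplace's equation has the standard expansion below, where r0 is a normalization radius and the coefficients a_n, b_n are the trap-quality parameters referred to:

    ```latex
    % General 2-D multipole expansion of a trapping potential about the trap axis
    \Phi(r, \varphi) = \sum_{n=0}^{\infty} \left(\frac{r}{r_0}\right)^{n}
    \left( a_n \cos n\varphi + b_n \sin n\varphi \right)
    ```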

  8. Response Surface Model Building and Multidisciplinary Optimization Using D-Optimal Designs

    NASA Technical Reports Server (NTRS)

    Unal, Resit; Lepsch, Roger A.; McMillin, Mark L.

    1998-01-01

    This paper discusses response surface methods for approximation model building and multidisciplinary design optimization. The response surface methods discussed are central composite designs, Bayesian methods, and D-optimal designs. An over-determined D-optimal design is applied to a configuration design and optimization study of a wing-body launch vehicle. Results suggest that over-determined D-optimal designs may provide an efficient approach for approximation model building and for multidisciplinary design optimization.
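
    For readers unfamiliar with the criterion, a D-optimal design maximizes det(X'X) over candidate subsets of runs. The sketch below does this by brute-force enumeration for a one-factor quadratic response surface; the candidate grid and subset size are arbitrary illustrative choices, and production tools use exchange algorithms rather than enumeration.

      # Brute-force D-optimal selection: choose 6 of 11 candidate runs that
      # maximize det(X'X) for a quadratic response-surface model.
      import numpy as np
      from itertools import combinations

      candidates = np.linspace(-1, 1, 11)

      def design_matrix(x):                  # model terms: 1, x, x^2
          return np.column_stack([np.ones_like(x), x, x ** 2])

      def d_criterion(idx):
          X = design_matrix(candidates[list(idx)])
          return np.linalg.det(X.T @ X)

      best = max(combinations(range(candidates.size), 6), key=d_criterion)
      print("D-optimal runs:", candidates[list(best)])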

  9. A Bayesian A-optimal and model robust design criterion.

    PubMed

    Zhou, Xiaojie; Joseph, Lawrence; Wolfson, David B; Bélisle, Patrick

    2003-12-01

    Suppose that the true model underlying a set of data is one of a finite set of candidate models, and that parameter estimation for this model is of primary interest. With this goal, optimal design must depend on a loss function across all possible models. A common method that accounts for model uncertainty is to average the loss over all models; this is the basis of what is known as Läuter's criterion. We generalize Läuter's criterion and show that it can be placed in a Bayesian decision theoretic framework, by extending the definition of Bayesian A-optimality. We use this generalized A-optimality to find optimal design points in an environmental safety setting. In estimating the smallest detectable trace limit in a water contamination problem, we obtain optimal designs that are quite different from those suggested by standard A-optimality.
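
    In standard notation (assumed here, since the abstract gives no formulas), the model-averaged generalization of Bayesian A-optimality selects the design d minimizing the prior-weighted expected posterior variance,

      \psi(d) \;=\; \sum_{m=1}^{M} \pi(m)\,
        \mathbb{E}_{y \mid m, d}\!\left[\operatorname{tr}\big(\operatorname{Var}(\theta_m \mid y, m, d)\big)\right],

    where \pi(m) is the prior probability of candidate model m and a weighting matrix may be placed inside the trace. Setting M = 1 recovers ordinary Bayesian A-optimality, and replacing the trace with a log-determinant gives the D-optimal analogue underlying Läuter's criterion.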

  10. Integrative systems modeling and multi-objective optimization

    EPA Science Inventory

    This presentation describes a number of algorithms, tools, and methods for utilizing multi-objective optimization within integrated systems modeling frameworks. We first present innovative methods using a genetic algorithm to optimally calibrate the VELMA and SWAT ecohydrological ...

  11. Quantitative Modeling and Optimization of Magnetic Tweezers

    PubMed Central

    Lipfert, Jan; Hao, Xiaomin; Dekker, Nynke H.

    2009-01-01

    Magnetic tweezers are a powerful tool to manipulate single DNA or RNA molecules and to study nucleic acid-protein interactions in real time. Here, we have modeled the magnetic fields of permanent magnets in magnetic tweezers and computed the forces exerted on superparamagnetic beads from first principles. For simple, symmetric geometries the magnetic fields can be calculated semianalytically using the Biot-Savart law. For complicated geometries and in the presence of an iron yoke, we employ a finite-element three-dimensional PDE solver to numerically solve the magnetostatic problem. The theoretical predictions are in quantitative agreement with direct Hall-probe measurements of the magnetic field and with measurements of the force exerted on DNA-tethered beads. Using these predictive theories, we systematically explore the effects of magnet alignment, magnet spacing, magnet size, and of adding an iron yoke to the magnets on the forces that can be exerted on tethered particles. We find that the optimal configuration for maximal stretching forces is a vertically aligned pair of magnets, with a minimal gap between the magnets and minimal flow cell thickness. Following these principles, we present a configuration that allows one to apply ≥40 pN stretching forces on ≈1-μm tethered beads. PMID:19527664
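
    A crude back-of-the-envelope version of the force calculation can be written in a few lines by replacing the magnets with point dipoles and using F = m dB/dz for a magnetically saturated bead. This is a rough sketch under strong assumptions (dipole approximation, on-axis field, invented magnet and bead parameters), not the paper's semianalytic Biot-Savart or finite-element treatment.

      # On-axis field of a vertically aligned magnet pair approximated by two
      # point dipoles, and the resulting force on a saturated ~1-um bead.
      import numpy as np

      MU0 = 4e-7 * np.pi
      m_dip, gap = 0.05, 1e-3        # dipole moment [A m^2] and gap [m] (assumed)
      m_sat = 2e-14                  # saturated bead moment [A m^2] (assumed)

      z = np.linspace(3e-3, 20e-3, 500)      # distance below the magnets [m]
      B = MU0 / (2 * np.pi) * (m_dip / z**3 + m_dip / (z + gap)**3)
      F = m_sat * np.gradient(B, z)          # F = m dB/dz; negative = upward pull
      print(f"max |F| ~ {np.abs(F).max() * 1e12:.1f} pN")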

  12. Quantitative modeling and optimization of magnetic tweezers.

    PubMed

    Lipfert, Jan; Hao, Xiaomin; Dekker, Nynke H

    2009-06-17

    Magnetic tweezers are a powerful tool to manipulate single DNA or RNA molecules and to study nucleic acid-protein interactions in real time. Here, we have modeled the magnetic fields of permanent magnets in magnetic tweezers and computed the forces exerted on superparamagnetic beads from first principles. For simple, symmetric geometries the magnetic fields can be calculated semianalytically using the Biot-Savart law. For complicated geometries and in the presence of an iron yoke, we employ a finite-element three-dimensional PDE solver to numerically solve the magnetostatic problem. The theoretical predictions are in quantitative agreement with direct Hall-probe measurements of the magnetic field and with measurements of the force exerted on DNA-tethered beads. Using these predictive theories, we systematically explore the effects of magnet alignment, magnet spacing, magnet size, and of adding an iron yoke to the magnets on the forces that can be exerted on tethered particles. We find that the optimal configuration for maximal stretching forces is a vertically aligned pair of magnets, with a minimal gap between the magnets and minimal flow cell thickness. Following these principles, we present a configuration that allows one to apply ≥40 pN stretching forces on ≈1-μm tethered beads. PMID:19527664

  13. Hybrid and adaptive meta-model-based global optimization

    NASA Astrophysics Data System (ADS)

    Gu, J.; Li, G. Y.; Dong, Z.

    2012-01-01

    As an efficient and robust technique for global optimization, meta-model-based search methods have been increasingly used in solving complex and computation-intensive design optimization problems. In this work, a hybrid and adaptive meta-model-based global optimization method that can automatically select appropriate meta-modelling techniques during the search process to improve search efficiency is introduced. The search initially applies three representative meta-models concurrently. The search then progresses towards a better-performing model by selecting sample data points adaptively according to the calculated values of the three meta-models, improving modelling accuracy and search efficiency. To demonstrate the superior performance of the new algorithm over existing search methods, the new method is tested using various benchmark global optimization problems and applied to a real industrial design optimization example involving vehicle crash simulation. The method is particularly suitable for design problems involving computation-intensive, black-box analyses and simulations.
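
    The concurrent-meta-model idea can be mimicked in a few lines: fit several cheap surrogates to the same samples and add the candidate point where they disagree most. The sketch below is only a stand-in for the paper's adaptive selection rule, with a toy objective and SciPy/scikit-learn surrogates chosen for convenience.

      # Adaptive sampling by surrogate disagreement: a quadratic fit, an RBF
      # interpolant, and a Gaussian process evaluated concurrently.
      import numpy as np
      from scipy.interpolate import RBFInterpolator
      from sklearn.gaussian_process import GaussianProcessRegressor

      f = lambda x: np.sin(3 * x) + 0.5 * x      # toy black-box objective
      X = np.linspace(-2, 2, 6)
      y = f(X)

      for _ in range(10):
          quad = np.polyfit(X, y, 2)
          rbf = RBFInterpolator(X[:, None], y)
          gp = GaussianProcessRegressor().fit(X[:, None], y)

          cand = np.linspace(-2, 2, 201)
          preds = np.vstack([np.polyval(quad, cand),
                             rbf(cand[:, None]),
                             gp.predict(cand[:, None])])
          x_new = cand[np.argmax(preds.max(0) - preds.min(0))]
          if np.any(np.isclose(X, x_new)):       # no informative point left
              break
          X, y = np.append(X, x_new), np.append(y, f(x_new))
      print(f"{X.size} samples placed")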

  14. Optimal estimator model for human spatial orientation

    NASA Technical Reports Server (NTRS)

    Borah, J.; Young, L. R.; Curry, R. E.

    1979-01-01

    A model is being developed to predict pilot dynamic spatial orientation in response to multisensory stimuli. Motion stimuli are first processed by dynamic models of the visual, vestibular, tactile, and proprioceptive sensors. Central nervous system function is then modeled as a steady-state Kalman filter which blends information from the various sensors to form an estimate of spatial orientation. Where necessary, this linear central estimator has been augmented with nonlinear elements to reflect more accurately some highly nonlinear human response characteristics. Computer implementation of the model has shown agreement with several important qualitative characteristics of human spatial orientation, and it is felt that with further modification and additional experimental data the model can be improved and extended. Possible means are described for extending the model to better represent the active pilot with varying skill and work load levels.
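
    The steady-state sensor-blending step can be illustrated with a scalar toy problem: two noisy channels observing one orientation state, with the fixed Kalman gain obtained from the discrete algebraic Riccati equation. The noise figures are invented and the sketch is far simpler than the multisensory model of the paper.

      # Steady-state Kalman gain for blending two sensors of one state.
      import numpy as np
      from scipy.linalg import solve_discrete_are

      A = np.array([[1.0]])                  # orientation persists step to step
      C = np.array([[1.0], [1.0]])           # both sensors observe the state
      Q = np.array([[0.01]])                 # process (motion) noise
      R = np.diag([0.5, 0.05])               # "vestibular" vs. "visual" noise

      P = solve_discrete_are(A.T, C.T, Q, R)         # prediction covariance
      K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)   # fixed blending gains
      print("steady-state gains (sensor 1, sensor 2):", K.ravel())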

  15. A stochastic optimization model under modeling uncertainty and parameter certainty for groundwater remediation design--part I. Model development.

    PubMed

    He, L; Huang, G H; Lu, H W

    2010-04-15

    Solving groundwater remediation optimization problems based on proxy simulators can usually yield optimal solutions differing from the "true" ones of the problem. This study presents a new stochastic optimization model under modeling uncertainty and parameter certainty (SOMUM) and the associated solution method for simultaneously addressing modeling uncertainty associated with simulator residuals and optimizing groundwater remediation processes. This is a new attempt, distinct from previous modeling efforts, which focused on addressing uncertainty in physical parameters (e.g. soil porosity); this work instead aims to deal with uncertainty in the mathematical simulator (arising from model residuals). Compared to the existing modeling approaches (where only parameter uncertainty is considered), the model has the advantages of providing mean-variance analysis for contaminant concentrations, mitigating the effects of modeling uncertainties on optimal remediation strategies, offering a confidence level for optimal remediation strategies to system designers, and reducing computational cost in optimization processes.

  16. Visual prosthesis wireless energy transfer system optimal modeling

    PubMed Central

    2014-01-01

    Background A wireless energy transfer system is an effective way to solve the energy supply problem of visual prostheses; theoretical modeling of the system is the prerequisite for optimal energy transfer system design. Methods On the basis of the ideal model of the wireless energy transfer system, and according to the visual prosthesis application conditions, the system model is optimized. During the optimal modeling, planar spiral coils are taken as the coupling devices between energy transmitter and receiver, the effect of the parasitic capacitance of the transfer coil is considered, and in particular the concept of biological capacitance is proposed to account for the influence of biological tissue on the energy transfer efficiency, making the optimized model more accurate for the actual application. Results The simulation data of the optimized model are compared with those of the previous ideal model; the results show that under high-frequency conditions, the parasitic capacitance of the inductance and the biological capacitance considered in the optimized model can have a great impact on the wireless energy transfer system. Further comparison with experimental data verifies the validity and accuracy of the optimized model proposed in this paper. Conclusions The optimized model proposed in this paper has high theoretical guiding significance for further research on wireless energy transfer systems, and provides a more precise model reference for solving the power supply problem in visual prosthesis clinical applications. PMID:24428906
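
    The coupled-coil modeling described above reduces, in its simplest form, to solving two mesh equations with a mutual-inductance term; parasitic and biological capacitances would enter as additional reactances. The sketch below is a generic series-resonant two-coil link with invented component values, not the authors' optimized model.

      # Series-resonant two-coil link: solve the mesh equations at the drive
      # frequency and report the fraction of input power reaching the load.
      import numpy as np

      w = 2 * np.pi * 1e6                    # drive frequency [rad/s] (assumed)
      L1 = L2 = 10e-6                        # coil inductances [H]
      C1 = C2 = 1 / (w**2 * L1)              # tuning capacitors at resonance
      R1, R2, RL = 0.5, 0.5, 50.0            # coil losses and load [ohm]
      M = 0.1 * np.sqrt(L1 * L2)             # mutual inductance for k = 0.1

      Z1 = R1 + 1j * (w * L1 - 1 / (w * C1))
      Z2 = R2 + RL + 1j * (w * L2 - 1 / (w * C2))
      Zm = 1j * w * M
      # mesh equations: [Z1, -Zm; -Zm, Z2] @ [I1, I2] = [V, 0], V = 1 V
      I1, I2 = np.linalg.solve([[Z1, -Zm], [-Zm, Z2]], [1.0, 0.0])
      eta = abs(I2) ** 2 * RL / (1.0 * np.conj(I1)).real   # P_load / P_in
      print(f"link efficiency ~ {eta:.1%}")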

  17. A MILP-Model for the Optimization of Transports

    NASA Astrophysics Data System (ADS)

    Björk, Kaj-Mikael

    2010-09-01

    This paper presents a work in developing a mathematical model for the optimization of transports. The decisions to be made are routing decisions, truck assignment and the determination of the pickup order for a set of loads and available trucks. The model presented takes these aspects into account simultaneously. The MILP model is implemented in the Microsoft Excel environment, utilizing the LP-solve freeware as the optimization engine and Visual Basic for Applications as the modeling interface.
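
    A toy version of the load-to-truck assignment can be written as a MILP in a few lines. The sketch uses Python's PuLP with its bundled CBC solver rather than the paper's Excel/LP-solve setup; loads, trucks, and costs are invented, and the routing and pickup-order aspects of the full model are omitted.

      # Assign each load to exactly one truck at minimum cost, subject to
      # truck capacities; x[load, truck] are binary assignment variables.
      import pulp

      loads, trucks = ["L1", "L2", "L3"], ["T1", "T2"]
      cost = {("L1", "T1"): 4, ("L1", "T2"): 6, ("L2", "T1"): 5,
              ("L2", "T2"): 3, ("L3", "T1"): 7, ("L3", "T2"): 5}
      cap = {"T1": 2, "T2": 2}               # max loads per truck

      prob = pulp.LpProblem("truck_assignment", pulp.LpMinimize)
      x = pulp.LpVariable.dicts("x", cost, cat="Binary")
      prob += pulp.lpSum(cost[k] * x[k] for k in cost)
      for l in loads:                        # each load assigned exactly once
          prob += pulp.lpSum(x[(l, t)] for t in trucks) == 1
      for t in trucks:                       # respect truck capacity
          prob += pulp.lpSum(x[(l, t)] for l in loads) <= cap[t]
      prob.solve(pulp.PULP_CBC_CMD(msg=False))
      print([k for k in cost if x[k].value() > 0.5])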

  18. Optimal Scaling of Interaction Effects in Generalized Linear Models

    ERIC Educational Resources Information Center

    van Rosmalen, Joost; Koning, Alex J.; Groenen, Patrick J. F.

    2009-01-01

    Multiplicative interaction models, such as Goodman's (1981) RC(M) association models, can be a useful tool for analyzing the content of interaction effects. However, most models for interaction effects are suitable only for data sets with two or three predictor variables. Here, we discuss an optimal scaling model for analyzing the content of…

  19. Integrating programming features with an algebraic modeling language for optimization

    SciTech Connect

    Fourer, R.; Gay, D.

    1994-12-31

    In describing optimization models to a computer, programming is best avoided. In using models as part of a larger scheme, however, programs must be written to specify how information is passed between models. We describe a programming environment for this purpose that has been integrated with the AMPL modeling language.

  20. On Optimal Input Design and Model Selection for Communication Channels

    SciTech Connect

    Li, Yanyan; Djouadi, Seddik M; Olama, Mohammed M

    2013-01-01

    In this paper, the optimal model (structure) selection and input design which minimize the worst-case identification error for communication systems are provided. The problem is formulated using metric complexity theory in a Hilbert space setting. It is pointed out that model selection and input design can be handled independently. The Kolmogorov n-width is used to characterize the representation error introduced by model selection, while the Gel'fand and time n-widths are used to represent the inherent error introduced by input design. After the model is selected, an optimal input which minimizes the worst-case identification error is shown to exist. In particular, it is proven that the optimal model for reducing the representation error is a Finite Impulse Response (FIR) model, and the optimal input is an impulse at the start of the observation interval. FIR models are widely popular in communication systems, for example in Orthogonal Frequency Division Multiplexing (OFDM) systems.

  1. Model Specification Searches Using Ant Colony Optimization Algorithms

    ERIC Educational Resources Information Center

    Marcoulides, George A.; Drezner, Zvi

    2003-01-01

    Ant colony optimization is a recently proposed heuristic procedure inspired by the behavior of real ants. This article applies the procedure to model specification searches in structural equation modeling and reports the results. The results demonstrate the capabilities of ant colony optimization algorithms for conducting automated searches.

  2. Stochastic Robust Mathematical Programming Model for Power System Optimization

    SciTech Connect

    Liu, Cong; Changhyeok, Lee; Haoyong, Chen; Mehrotra, Sanjay

    2016-01-01

    This paper presents a stochastic robust framework for two-stage power system optimization problems with uncertainty. The model optimizes the probabilistic expectation of different worst-case scenarios with different uncertainty sets. A case study of unit commitment shows the effectiveness of the proposed model and algorithms.

  3. Phase contrast radiography: Image modeling and optimization

    NASA Astrophysics Data System (ADS)

    Arhatari, Benedicta D.; Mancuso, Adrian P.; Peele, Andrew G.; Nugent, Keith A.

    2004-12-01

    We consider image formation for the phase-contrast radiography technique where the radiation source is extended and spatially incoherent. A model is developed for this imaging process which allows us to define an objective filtering criterion that can be applied to the recovery of quantitative phase images from data obtained at different propagation distances. We test our image model with experimental x-ray data. We then apply our filter to experimental neutron phase radiography data and demonstrate improved image quality.

  4. Mathematical Model For Engineering Analysis And Optimization

    NASA Technical Reports Server (NTRS)

    Sobieski, Jaroslaw

    1992-01-01

    Computational support for the engineering design process reveals the behavior of the designed system in response to external stimuli, and finds out how that behavior is modified by changing the physical attributes of the system. System-sensitivity analysis combined with extrapolation forms a model of the design complementary to the model of behavior, capable of directly simulating the effects of changes in design variables. The algorithms developed for this method are applicable to the design of large engineering systems, especially those consisting of several subsystems involving many disciplines.

  5. An optimization strategy for a biokinetic model of inhaled radionuclides

    SciTech Connect

    Shyr, L.J.; Griffith, W.C.; Boecker, B.B.

    1991-04-01

    Models for material disposition and dosimetry involve predictions of the biokinetics of the material among compartments representing organs and tissues in the body. Because of a lack of human data for most toxicants, many of the basic data are derived by modeling the results obtained from studies using laboratory animals. Such a biomathematical model is usually developed by adjusting the model parameters to make the model predictions match the measured retention and excretion data visually. The fitting process can be very time-consuming for a complicated model, and visual model selections may be subjective and easily biased by the scale or the data used. Due to the development of computerized optimization methods, manual fitting could benefit from an automated process. However, for a complicated model, an automated process without an optimization strategy will not be efficient, and may not produce fruitful results. In this paper, procedures for, and implementation of, an optimization strategy for a complicated mathematical model is demonstrated by optimizing a biokinetic model for 144Ce in fused aluminosilicate particles inhaled by beagle dogs. The optimized results using SimuSolv were compared to manual fitting results obtained previously using the model simulation software GASP. Also, statistical criteria provided by SimuSolv, such as likelihood function values, were used to help or verify visual model selections.
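
    The automated-fitting idea translates directly into modern tools: choose a retention function implied by the compartment structure and let a least-squares optimizer estimate the rate constants. This hedged sketch fits a synthetic two-exponential whole-body retention curve with SciPy rather than SimuSolv; the data and rate constants are invented.

      # Estimate compartmental rate constants from retention data by
      # nonlinear least squares on a two-exponential retention function.
      import numpy as np
      from scipy.optimize import curve_fit

      def retention(t, a, k1, k2):
          return a * np.exp(-k1 * t) + (1 - a) * np.exp(-k2 * t)

      days = np.array([2, 8, 16, 32, 64, 128, 256, 512])
      noise = np.random.default_rng(1).normal(1.0, 0.05, days.size)
      obs = retention(days, 0.3, 0.05, 0.004) * noise        # synthetic data

      p, cov = curve_fit(retention, days, obs, p0=(0.5, 0.1, 0.01))
      print("a, k1, k2 =", p, " 1-sigma =", np.sqrt(np.diag(cov)))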

  6. Multipurpose optimization models for high level waste vitrification

    SciTech Connect

    Hoza, M.

    1994-08-01

    Optimal Waste Loading (OWL) models have been developed as multipurpose tools for high-level waste studies for the Tank Waste Remediation Program at Hanford. Using nonlinear programming techniques, these models maximize the waste loading of the vitrified waste and optimize the glass-former composition such that the glass produced has the appropriate properties within the melter and the resultant vitrified waste form meets the requirements for disposal. The OWL model can be used for a single waste stream or for blended streams. The models can determine optimal continuous blends or optimal discrete blends of a number of different wastes. The OWL models have been used to identify the most restrictive constraints, to evaluate prospective waste pretreatment methods, to formulate and evaluate blending strategies, and to determine the impacts of variability in the wastes. The OWL models will be used to aid in the design of frits and to maximize the waste loading in the glass for High-Level Waste (HLW) vitrification.
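
    The core of the OWL formulation, maximizing waste loading subject to glass-property constraints, can be conveyed with a toy linearized version. The real models are nonlinear programs, and the two "property" constraints below are invented stand-ins for actual viscosity and durability models.

      # Maximize the waste mass fraction in glass subject to linearized
      # property limits; variables are fractions of [waste, frit A, frit B].
      import numpy as np
      from scipy.optimize import linprog

      c = np.array([-1.0, 0.0, 0.0])           # maximize waste fraction
      A_ub = np.array([[ 0.9, -0.2,  0.1],     # pseudo "viscosity" limit
                       [-0.5,  0.1,  0.8]])    # pseudo "durability" limit
      b_ub = np.array([0.4, 0.1])
      A_eq, b_eq = np.ones((1, 3)), np.array([1.0])   # fractions sum to 1

      res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1))
      print("optimal waste loading:", res.x[0])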

  7. Geomagnetic field modeling by optimal recursive filtering

    NASA Technical Reports Server (NTRS)

    1980-01-01

    Five individual 5 year mini-batch geomagnetic models were generated and two computer programs were developed to process the models. The first program computes statistics (mean sigma, weighted sigma) on the changes in the first derivatives (linear terms) of the spherical harmonic coefficients between mini-batches. The program ran successfully. The statistics are intended for use in computing the state noise matrix required in the information filter. The second program is the information filter. Most subroutines used in the filter were tested, but the coefficient statistics must be analyzed before the filter is run.

  8. COBRA-SFS modifications and cask model optimization

    SciTech Connect

    Rector, D.R.; Michener, T.E.

    1989-01-01

    Spent-fuel storage systems are complex systems and developing a computational model for one can be a difficult task. The COBRA-SFS computer code provides many capabilities for modeling the details of these systems, but these capabilities can also allow users to specify a more complex model than necessary. This report provides important guidance to users that dramatically reduces the size of the model while maintaining the accuracy of the calculation. A series of model optimization studies was performed, based on the TN-24P spent-fuel storage cask, to determine the optimal model geometry. Expanded modeling capabilities of the code are also described. These include adding fluid shear stress terms and a detailed plenum model. The mathematical models for each code modification are described, along with the associated verification results. 22 refs., 107 figs., 7 tabs.

  9. Simulation of Evapotranspiration using an Optimality-based Ecohydrological Model

    NASA Astrophysics Data System (ADS)

    Chen, Lajiao

    2014-05-01

    Accurate estimation of evapotranspiration (ET) is essential in understanding the effects of climate change and human activities on ecosystems and water resources. As an important tool for ET estimation, most traditional hydrological or ecohydrological models treat ET as a physical process controlled by energy, vapor pressure, and turbulence. This is at times questionable, as transpiration, the major component of ET, is a biological activity closely linked to photosynthesis through stomatal conductance. Optimality-based ecohydrological models consider the mutual interaction of ET and photosynthesis based on the optimality principle. However, as a rising generation of ecohydrological models, so far there are only a few applications of optimality-based models in different ecosystems. The ability and reliability of this kind of model for ecohydrological modeling need to be validated in more ecosystems. The objective of this study is to validate the optimality hypothesis for a water-limited ecosystem. To achieve this, the study applied the optimality-based Vegetation Optimality Model (VOM) to simulate ET and its components. The model is applied in a semiarid watershed. The simulated ET and soil water were compared with long-term measurement data at the Kendall and Lucky Hills sites in the watershed. The results showed that the temporal variations of simulated ET and soil water are in good agreement with observed data. The temporal dynamics of soil evaporation and transpiration and their response to precipitation events can be well captured by the model. This supports the conclusion that optimality-based ecohydrological models are a promising approach to simulating ET.

  10. Optimal Experimental Design for Model Discrimination

    ERIC Educational Resources Information Center

    Myung, Jay I.; Pitt, Mark A.

    2009-01-01

    Models of a psychological process can be difficult to discriminate experimentally because it is not easy to determine the values of the critical design variables (e.g., presentation schedule, stimulus structure) that will be most informative in differentiating them. Recent developments in sampling-based search methods in statistics make it…

  11. An integrated model for optimizing weld quality

    SciTech Connect

    Zacharia, T.; Radhakrishnan, B.; Paul, A.J.; Cheng, C.

    1995-06-01

    Welding has evolved in the last few decades from almost an empirical art to an activity embodying the most advanced tools of various basic and applied sciences. Significant progress has been made in understanding the welding process and welded materials. The improved knowledge base has been useful in automation and process control. In view of the large number of variables involved, creating an adequately large database to understand and control the welding process is expensive and time consuming, if not impractical. A recourse is to simulate welding processes through a set of mathematical equations representing the essential physical processes of welding. Results obtained from the phenomenological models depend crucially on the quality of the physical relations in the models and the trustworthiness of input data. In this paper, recent advances in the mathematical modeling of fundamental phenomena in welds are summarized. State-of-the-art mathematical models, advances in computational techniques, emerging high-performance computers, and experimental validation techniques have provided significant insight into the fundamental factors that control the development of the weldment. Current status and scientific issues in heat and fluid flow in welds, heat source-metal interaction, and solidification microstructure are assessed. Future research areas of major importance for understanding the fundamental phenomena in weld behavior are identified.

  12. Design Optimization of Coronary Stent Based on Finite Element Models

    PubMed Central

    Qiu, Tianshuang; Zhu, Bao; Wu, Jinying

    2013-01-01

    This paper presents an effective optimization method using the Kriging surrogate model combined with modified rectangular grid sampling to reduce the stent dogboning effect in the expansion process. An infilling sampling criterion named expected improvement (EI) is used to balance local and global searches in the optimization iteration. Four commonly used finite element models of stent dilation were used to investigate the stent dogboning rate. Thrombosis models of three typical shapes are built to test the effectiveness of the optimization results. Numerical results show that two finite element models dilated by pressure applied inside the balloon are available, one of which, including the artery and plaque, gives an optimal stent with better expansion behavior, while the model without the artery and plaque is more efficient and requires a smaller amount of computation. PMID:24222743
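
    For reference, the expected-improvement criterion named above has, in its standard form (notation assumed here, as the abstract gives none), the closed-form expression

      \mathrm{EI}(x) = \big(f_{\min} - \hat{y}(x)\big)\,\Phi(u) + s(x)\,\phi(u),
      \qquad u = \frac{f_{\min} - \hat{y}(x)}{s(x)},

    where \hat{y}(x) and s(x) are the Kriging predictor and its standard deviation, f_min is the best objective value sampled so far, and \Phi and \phi are the standard normal CDF and PDF. Maximizing EI balances exploiting the current best region against exploring where the surrogate is uncertain, which is the local/global balance the abstract refers to.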

  13. Optimization Research of Generation Investment Based on Linear Programming Model

    NASA Astrophysics Data System (ADS)

    Wu, Juan; Ge, Xueqian

    Linear programming is an important branch of operational research and a mathematical method to assist people in carrying out scientific management. GAMS is an advanced simulation and optimization modeling language which combines complex mathematical programming formulations, such as linear programming (LP), nonlinear programming (NLP), and mixed-integer programming (MIP), with system simulation. In this paper, based on a linear programming model, the optimized investment decision-making for generation is simulated and analyzed. Finally, the optimal installed capacity of power plants and the final total cost are obtained, which provides a rational decision-making basis for optimized investments.
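
    As a flavor of the kind of formulation described, the following toy capacity-expansion LP minimizes investment cost subject to meeting peak demand. It uses SciPy in place of GAMS, and all plant data are invented.

      # Choose installed capacities [MW] of (coal, gas, wind) minimizing
      # investment cost while covering 1200 MW of peak demand.
      import numpy as np
      from scipy.optimize import linprog

      invest = np.array([800.0, 450.0, 1100.0])   # relative cost weights
      A_ub = np.array([[-0.85, -0.90, -0.35]])    # negated availability factors
      b_ub = np.array([-1200.0])                  # effective capacity >= demand
      res = linprog(invest, A_ub=A_ub, b_ub=b_ub, bounds=(0, 1000))
      print("optimal capacities [MW]:", res.x, " total cost:", res.fun)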

  14. Conditioning of Model Identification Task in Immune Inspired Optimizer SILO

    NASA Astrophysics Data System (ADS)

    Wojdan, K.; Swirski, K.; Warchol, M.; Maciorowski, M.

    2009-10-01

    Methods which provide good conditioning of the model identification task in the immune-inspired, steady-state controller SILO (Stochastic Immune Layer Optimizer) are presented in this paper. These methods are implemented in a model-based optimization algorithm. The first method uses a safe model to assure that the gains of the process model can be estimated. The second method is responsible for the elimination of potential linear dependences between columns of the observation matrix. Moreover, new results from a SILO implementation in a Polish power plant are presented. They confirm the high efficiency of the presented solution in solving technical problems.

  15. Life cycle optimization of automobile replacement: model and application.

    PubMed

    Kim, Hyung Chul; Keoleian, Gregory A; Grande, Darby E; Bean, James C

    2003-12-01

    Although recent progress in automotive technology has reduced exhaust emissions per mile for new cars, the continuing use of inefficient, higher-polluting old cars as well as increasing vehicle miles driven are undermining the benefits of this progress. As a way to address the "inefficient old vehicle" contribution to this problem, a novel life cycle optimization (LCO) model is introduced and applied to the automobile replacement policy question. The LCO model determines optimal vehicle lifetimes, accounting for technology improvements of new models while considering deteriorating efficiencies of existing models. Life cycle inventories for different vehicle models that represent materials production, manufacturing, use, maintenance, and end-of-life environmental burdens are required as inputs to the LCO model. As a demonstration, the LCO model was applied to mid-sized passenger car models between 1985 and 2020. An optimization was conducted to minimize cumulative carbon monoxide (CO), non-methane hydrocarbon (NMHC), oxides of nitrogen (NOx), carbon dioxide (CO2), and energy use over the time horizon (1985-2020). For CO, NMHC, and NOx pollutants with 12000 mi of annual mileage, automobile lifetimes ranging from 3 to 6 yr are optimal for the 1980s and early 1990s model years while the optimal lifetimes are expected to be 7-14 yr for model year 2000s and beyond. On the other hand, a lifetime of 18 yr minimizes cumulative energy and CO2 based on driving 12000 miles annually. Optimal lifetimes are inversely correlated to annual vehicle mileage, especially for CO, NMHC, and NOx emissions. On the basis of the optimization results, policies improving durability of emission controls, retiring high-emitting vehicles, and improving fuel economies are discussed.

  16. First-Order Frameworks for Managing Models in Engineering Optimization

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia M.; Lewis, Robert Michael

    2000-01-01

    Approximation/model management optimization (AMMO) is a rigorous methodology for attaining solutions of high-fidelity optimization problems with minimal expense in high-fidelity function and derivative evaluation. First-order AMMO frameworks allow for a wide variety of models and underlying optimization algorithms. Recent demonstrations with aerodynamic optimization achieved three-fold savings in terms of high-fidelity function and derivative evaluation in the case of variable-resolution models and five-fold savings in the case of variable-fidelity physics models. The savings are problem dependent but certain trends are beginning to emerge. We give an overview of the first-order frameworks, current computational results, and an idea of the scope of applicability of the first-order framework.

  17. Modeling, Instrumentation, Automation, and Optimization of Water Resource Recovery Facilities.

    PubMed

    Sweeney, Michael W; Kabouris, John C

    2016-10-01

    A review of the literature published in 2015 on topics relating to water resource recovery facilities (WRRF) in the areas of modeling, automation, measurement and sensors, and optimization of wastewater treatment (or water resource reclamation) is presented. PMID:27620091

  18. An optimization model of a New Zealand dairy farm.

    PubMed

    Doole, Graeme J; Romera, Alvaro J; Adler, Alfredo A

    2013-04-01

    Optimization models are a key tool for the analysis of emerging policies, prices, and technologies within grazing systems. A detailed, nonlinear optimization model of a New Zealand dairy farming system is described. This framework is notable for its inclusion of pasture residual mass, pasture utilization, and intake regulation as key management decisions. Validation of the model shows that the detailed representation of key biophysical relationships in the model provides an enhanced capacity to provide reasonable predictions outside of calibrated scenarios. Moreover, the flexibility of management plans in the model enhances its stability when faced with significant perturbations. In contrast, the inherent rigidity present in a less-detailed linear programming model is shown to limit its capacity to provide reasonable predictions away from the calibrated baseline. A sample application also demonstrates how the model can be used to identify pragmatic strategies to reduce greenhouse gas emissions. PMID:23415534

  19. Optimizing Tsunami Forecast Model Accuracy

    NASA Astrophysics Data System (ADS)

    Whitmore, P.; Nyland, D. L.; Huang, P. Y.

    2015-12-01

    Recent tsunamis provide a means to determine the accuracy that can be expected of real-time tsunami forecast models. Forecast accuracy using two different tsunami forecast models are compared for seven events since 2006 based on both real-time application and optimized, after-the-fact "forecasts". Lessons learned by comparing the forecast accuracy determined during an event to modified applications of the models after-the-fact provide improved methods for real-time forecasting for future events. Variables such as source definition, data assimilation, and model scaling factors are examined to optimize forecast accuracy. Forecast accuracy is also compared for direct forward modeling based on earthquake source parameters versus accuracy obtained by assimilating sea level data into the forecast model. Results show that including assimilated sea level data into the models increases accuracy by approximately 15% for the events examined.

  20. Research on web performance optimization principles and models

    NASA Astrophysics Data System (ADS)

    Wang, Xin

    2013-03-01

    The high-speed development of the Internet has made the question of Web optimization more and more prominent, and Web performance optimization has therefore become inevitable. The first principle of Web performance optimization is to understand that every gain has a price and that returns diminish; optimization can also risk degrading Web performance, so it should start from the highest level, where the largest gains are obtained. Technical models to improve Web performance are: sharing costs, high-speed caching, profiles, parallel processing, and simplified processing. Based on this study, crucial Web performance optimization recommendations are given; improving the performance of Web usage and accelerating the efficient use of the Internet has important significance.

  1. Reducing long-term remedial costs by transport modeling optimization.

    PubMed

    Becker, David; Minsker, Barbara; Greenwald, Robert; Zhang, Yan; Harre, Karla; Yager, Kathleen; Zheng, Chunmiao; Peralta, Richard

    2006-01-01

    The Department of Defense (DoD) Environmental Security Technology Certification Program and the Environmental Protection Agency sponsored a project to evaluate the benefits and utility of contaminant transport simulation-optimization algorithms against traditional (trial and error) modeling approaches. Three pump-and-treat facilities operated by the DoD were selected for inclusion in the project. Three optimization formulations were developed for each facility and solved independently by three modeling teams (two using simulation-optimization algorithms and one applying trial-and-error methods). The results clearly indicate that simulation-optimization methods are able to search a wider range of well locations and flow rates and identify better solutions than current trial-and-error approaches. The solutions found were 5% to 50% better than those obtained using trial-and-error (measured using optimal objective function values), with an average improvement of approximately 20%. This translated into potential savings ranging from 600,000 dollars to 10,000,000 dollars for the three sites. In nearly all cases, the cost savings easily outweighed the costs of the optimization. To reduce computational requirements, in some cases the simulation-optimization groups applied multiple mathematical algorithms, solved a series of modified subproblems, and/or fit "meta-models" such as neural networks or regression models to replace time-consuming simulation models in the optimization algorithm. The optimal solutions did not account for the uncertainties inherent in the modeling process. This project illustrates that transport simulation-optimization techniques are practical for real problems. However, applying the techniques in an efficient manner requires expertise and should involve iterative modification to the formulations based on interim results. PMID:17087758

  2. Reducing long-term remedial costs by transport modeling optimization.

    PubMed

    Becker, David; Minsker, Barbara; Greenwald, Robert; Zhang, Yan; Harre, Karla; Yager, Kathleen; Zheng, Chunmiao; Peralta, Richard

    2006-01-01

    The Department of Defense (DoD) Environmental Security Technology Certification Program and the Environmental Protection Agency sponsored a project to evaluate the benefits and utility of contaminant transport simulation-optimization algorithms against traditional (trial and error) modeling approaches. Three pump-and-treat facilities operated by the DoD were selected for inclusion in the project. Three optimization formulations were developed for each facility and solved independently by three modeling teams (two using simulation-optimization algorithms and one applying trial-and-error methods). The results clearly indicate that simulation-optimization methods are able to search a wider range of well locations and flow rates and identify better solutions than current trial-and-error approaches. The solutions found were 5% to 50% better than those obtained using trial-and-error (measured using optimal objective function values), with an average improvement of approximately 20%. This translated into potential savings ranging from 600,000 dollars to 10,000,000 dollars for the three sites. In nearly all cases, the cost savings easily outweighed the costs of the optimization. To reduce computational requirements, in some cases the simulation-optimization groups applied multiple mathematical algorithms, solved a series of modified subproblems, and/or fit "meta-models" such as neural networks or regression models to replace time-consuming simulation models in the optimization algorithm. The optimal solutions did not account for the uncertainties inherent in the modeling process. This project illustrates that transport simulation-optimization techniques are practical for real problems. However, applying the techniques in an efficient manner requires expertise and should involve iterative modification to the formulations based on interim results.

  3. An Optimality-Based Fully-Distributed Watershed Ecohydrological Model

    NASA Astrophysics Data System (ADS)

    Chen, L., Jr.

    2015-12-01

    Watershed ecohydrological models are essential tools for assessing the impact of climate change and human activities on hydrological and ecological processes for watershed management. Existing models can be classified as empirically based, quasi-mechanistic, and mechanistic models. The empirically based and quasi-mechanistic models usually adopt empirical or quasi-empirical equations, which may be incapable of capturing non-stationary dynamics of the target processes. Mechanistic models that are designed to represent process feedbacks may capture vegetation dynamics, but often have more demanding spatial and temporal parameterization requirements to represent vegetation physiological variables. In recent years, optimality-based ecohydrological models have been proposed, which have the advantage of reducing the need for model calibration by assuming critical aspects of system behavior. However, this work has to date been limited to the plot scale, considering only one-dimensional exchange of soil moisture, carbon, and nutrients in vegetation parameterization, without lateral hydrological transport. Conceptual isolation of individual ecosystem patches from upslope and downslope flow paths compromises the ability to represent and test the relationships between hydrology and vegetation in mountainous and hilly terrain. This work presents an optimality-based watershed ecohydrological model which incorporates the influence of lateral hydrological processes on the hydrological flow-path patterns that emerge from the optimality assumption. The model has been tested in the Walnut Gulch watershed and shows good agreement with observed temporal and spatial patterns of evapotranspiration (ET) and gross primary productivity (GPP). The spatial variability of ET and GPP produced by the model matches the spatial distribution of TWI, SCA, and slope well over the area. Compared with the one-dimensional vegetation optimality model (VOM), we find that the distributed VOM (DisVOM) produces more reasonable spatial

  4. Optimal vaccination policies for an SIR model with limited resources.

    PubMed

    Zhou, Yinggao; Yang, Kuan; Zhou, Kai; Liang, Yiting

    2014-06-01

    The purpose of this paper is to use analytical methods and optimization tools to suggest a vaccination program intensity for a basic SIR epidemic model with limited resources for vaccination. We show that there are two different scenarios for optimal vaccination strategies, and obtain analytical solutions for the optimal control problem that minimizes the total cost of disease under the assumption of a limited daily vaccine supply. These solutions and their corresponding optimal control policies are derived explicitly in terms of initial conditions, model parameters, and resources for vaccination. With sufficient resources, the optimal control strategy is the normal bang-bang control. However, with limited resources, the optimal control strategy requires switching to time-variant vaccination.
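
    A minimal numerical companion to the analysis: integrate the SIR dynamics with a capped, bang-bang vaccination rate. The switching time here is hard-coded purely for illustration, whereas the paper derives the optimal switching structure analytically; all parameter values are assumed.

      # SIR dynamics with a capped vaccination rate v(t) switched off at a
      # fixed (illustrative, not optimized) time t_switch.
      import numpy as np
      from scipy.integrate import solve_ivp

      beta, gamma = 0.5, 0.1         # transmission and recovery rates (assumed)
      vmax, t_switch = 0.02, 25.0    # daily vaccine cap and switch time

      def sir(t, y):
          S, I, R = y
          v = vmax if (t < t_switch and S > 0) else 0.0   # bang-bang control
          return [-beta * S * I - v, beta * S * I - gamma * I, gamma * I + v]

      sol = solve_ivp(sir, (0, 120), [0.99, 0.01, 0.0])
      print("final susceptible fraction:", sol.y[0, -1])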

  5. Portfolio optimization for index tracking modelling in Malaysia stock market

    NASA Astrophysics Data System (ADS)

    Siew, Lam Weng; Jaaman, Saiful Hafizah; Ismail, Hamizun

    2016-06-01

    Index tracking is an investment strategy in portfolio management which aims to construct an optimal portfolio that generates a mean return similar to that of the stock market index without purchasing all of the stocks that make up the index. The objective of this paper is to construct an optimal portfolio using an optimization model which adopts a regression approach to tracking the benchmark stock market index return. In this study, the data consist of weekly prices of stocks in the Malaysia market index, the FTSE Bursa Malaysia Kuala Lumpur Composite Index, from January 2010 until December 2013. The results of this study show that the optimal portfolio is able to track the FBMKLCI Index with a minimum tracking error of 1.0027% and a 0.0290% excess mean return over the mean return of the FBMKLCI Index. The significance of this study is the construction of an optimal portfolio, using an optimization model adopting a regression approach, that tracks the stock market index without purchasing all index components.
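
    The regression approach to index tracking reduces to a constrained least-squares problem: choose nonnegative portfolio weights whose return series best reproduces the index returns. The sketch below uses simulated returns; with real data, the matrix rows would be weekly returns of FBMKLCI constituents.

      # Nonnegative least-squares tracking weights, normalized to a budget.
      import numpy as np
      from scipy.optimize import nnls

      rng = np.random.default_rng(0)
      stock_ret = rng.normal(0.001, 0.02, size=(200, 8))   # 200 weeks, 8 stocks
      index_ret = stock_ret @ np.full(8, 1 / 8) + rng.normal(0, 0.002, 200)

      w, resid = nnls(stock_ret, index_ret)    # min ||R w - r||, w >= 0
      w /= w.sum()                             # normalize weights to sum to 1
      print("weights:", np.round(w, 3), " residual norm:", resid)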

  6. Optimization of a new mathematical model for bacterial growth

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The objective of this research is to optimize a new mathematical equation as a primary model to describe the growth of bacteria under constant temperature conditions. An optimization algorithm was used in combination with a numerical (Runge-Kutta) method to solve the differential form of the new gr...

  7. Optimizing Classroom Acoustics Using Computer Model Studies.

    ERIC Educational Resources Information Center

    Reich, Rebecca; Bradley, John

    1998-01-01

    Investigates conditions relating to the maximum useful-to-detrimental sound ratios present in classrooms and determining the optimum conditions for speech intelligibility. Reveals that speech intelligibility is more strongly influenced by ambient noise levels and that the optimal location for sound absorbing material is on a classroom's upper…

  8. Groundwater modeling and remedial optimization design using graphical user interfaces

    SciTech Connect

    Deschaine, L.M.

    1997-05-01

    The ability to accurately predict the behavior of chemicals in groundwater systems under natural flow circumstances or remedial screening and design conditions is the cornerstone of the environmental industry. The ability to do this efficiently, and to effectively communicate the information to the client and regulators, is what differentiates effective consultants from ineffective ones. Recent advances in groundwater modeling graphical user interfaces (GUIs) are doing for numerical modeling what Windows™ did for DOS™. GUIs facilitate both the modeling process and the information exchange. This Test Drive evaluates the performance of two GUIs--Groundwater Vistas and ModIME--on an actual groundwater model calibration and remedial design optimization project. In the early days of numerical modeling, data input consisted of large arrays of numbers that required intensive labor to input and troubleshoot. Model calibration was also manual, as was interpreting the reams of computer output for each of the tens or hundreds of simulations required to calibrate and perform optimal groundwater remedial design. During this period, the majority of the modeler's effort (and budget) was spent just getting the model running, as opposed to solving the environmental challenge at hand. GUIs take the majority of the grunt work out of the modeling process, thereby allowing the modeler to focus on designing optimal solutions.

  9. Optimal vaccination and treatment of an epidemic network model

    NASA Astrophysics Data System (ADS)

    Chen, Lijuan; Sun, Jitao

    2014-08-01

    In this Letter, we first propose an epidemic network model incorporating two controls, vaccination and treatment. For constant controls, global stability of the disease-free equilibrium and the endemic equilibrium of the model is investigated by using a Lyapunov function. For non-constant controls, using optimal control theory, we discuss an optimal strategy to minimize the total number of the infected and the cost associated with vaccination and treatment. Table 1 and Figs. 1-5 are presented to show the global stability and the efficiency of this optimal control.

  10. Low Complexity Models to improve Incomplete Sensitivities for Shape Optimization

    NASA Astrophysics Data System (ADS)

    Stanciu, Mugurel; Mohammadi, Bijan; Moreau, Stéphane

    2003-01-01

    The present global platform for simulation and design of multi-model configurations treats shape optimization problems in aerodynamics. Flow solvers are coupled with optimization algorithms based on CAD-free and CAD-connected frameworks. Newton methods together with incomplete expressions of gradients are used. Such incomplete sensitivities are improved using reduced models based on physical assumptions. The validity and the application of this approach to real-life problems are presented. The numerical examples concern shape optimization for an airfoil, a business jet, and a car engine cooling axial fan.

  11. General model for boring tool optimization

    NASA Astrophysics Data System (ADS)

    Moraru, G. M.; Zerbes, M. V.; Popescu, L. G.

    2016-08-01

    Optimizing a tool (and therefore a boring tool) consists of improving its performance by maximizing the objective functions chosen by the designer and/or by the user. Numerous features and performance requirements demanded by tool users contribute to defining and implementing the proposed objective functions. The incorporation of new features makes the cutting tool competitive in the market and able to meet user requirements.

  12. An aircraft noise pollution model for trajectory optimization

    NASA Technical Reports Server (NTRS)

    Barkana, A.; Cook, G.

    1976-01-01

    A mathematical model describing the generation of aircraft noise is developed with the ultimate purpose of reducing noise (noise-optimizing landing trajectories) in terminal areas. While the model is for a specific aircraft (Boeing 737), the methodology would be applicable to a wide variety of aircraft. The model is used to obtain a footprint on the ground inside of which the noise level is at or above 70 dB.

  13. Fuzzy multiobjective models for optimal operation of a hydropower system

    NASA Astrophysics Data System (ADS)

    Teegavarapu, Ramesh S. V.; Ferreira, André R.; Simonovic, Slobodan P.

    2013-06-01

    Optimal operation models for a hydropower system using new fuzzy multiobjective mathematical programming models are developed and evaluated in this study. The models (i) use mixed integer nonlinear programming (MINLP) with binary variables and (ii) integrate a new turbine unit commitment formulation along with water quality constraints used for evaluation of reservoir downstream impairment. The Reardon method, used in the solution of genetic algorithm optimization problems, forms the basis for the development of a new fuzzy multiobjective hydropower system optimization model with the creation of Reardon-type fuzzy membership functions. The models are applied to a real-life hydropower reservoir system in Brazil. Genetic algorithms (GAs) are used to (i) solve the optimization formulations to avoid computational intractability and combinatorial problems associated with binary variables in unit commitment, (ii) efficiently address the Reardon method formulations, and (iii) deal with local optimal solutions obtained from the use of traditional gradient-based solvers. Decision makers' preferences are incorporated within the fuzzy mathematical programming formulations to obtain compromise operating rules for a multiobjective reservoir operation problem dominated by the conflicting goals of energy production, water quality, and conservation releases. Results provide insight into the compromise operation rules obtained using the new Reardon fuzzy multiobjective optimization framework and confirm its applicability to a variety of multiobjective water resources problems.

  14. Optimal schooling formations using a potential flow model

    NASA Astrophysics Data System (ADS)

    Tchieu, Andrew; Gazzola, Mattia; de Brauer, Alexia; Koumoutsakos, Petros

    2012-11-01

    A self-propelled, two-dimensional, potential flow model for agent-based swimmers is used to examine how fluid coupling affects schooling formation. The potential flow model accounts for fluid-mediated interactions between swimmers. The model is extended to include individual agent actions by means of modifying the circulation of each swimmer. A reinforcement algorithm is applied to allow the swimmers to learn how to school in specified lattice formations. Lastly, schooling lattice configurations are optimized by combining reinforcement learning and evolutionary optimization to minimize total control effort and energy expenditure.

  15. Assessment of optimized Markov models in protein fold classification.

    PubMed

    Lampros, Christos; Simos, Thomas; Exarchos, Themis P; Exarchos, Konstantinos P; Papaloukas, Costas; Fotiadis, Dimitrios I

    2014-08-01

    Protein fold classification is a challenging task strongly associated with the determination of proteins' structure. In this work, we tested an optimization strategy on a Markov chain and a recently introduced Hidden Markov Model (HMM) with reduced state-space topology. The proteins with unknown structure were scored against both these models. Then the derived scores were optimized following a local optimization method. The Protein Data Bank (PDB) and the annotation of the Structural Classification of Proteins (SCOP) database were used for the evaluation of the proposed methodology. The results demonstrated that the fold classification accuracy of the optimized HMM was substantially higher compared to that of the Markov chain or the reduced state-space HMM approaches. The proposed methodology achieved an accuracy of 41.4% on fold classification, while Sequence Alignment and Modeling (SAM), which was used for comparison, reached an accuracy of 38%. PMID:25152041

  16. AN OPTIMAL MAINTENANCE MANAGEMENT MODEL FOR AIRPORT CONCRETE PAVEMENT

    NASA Astrophysics Data System (ADS)

    Shimomura, Taizo; Fujimori, Yuji; Kaito, Kiyoyuki; Obama, Kengo; Kobayashi, Kiyoshi

    In this paper, an optimal management model is formulated for the performance-based rehabilitation/maintenance contract for airport concrete pavement, whereby two types of life cycle cost risks, i.e., ground consolidation risk and concrete depreciation risk, are explicitly considered. A non-homogeneous Markov chain model is formulated to represent the deterioration processes of concrete pavement, which are conditional upon the ground consolidation processes. An optimal non-homogeneous Markov decision model with multiple types of risk is presented to design the optimal rehabilitation/maintenance plans, together with a methodology to revise those plans based upon monitoring data by Bayesian updating rules. The validity of the methodology presented in this paper is examined based upon case studies carried out for the H airport.

  17. Model Assessment and Optimization Using a Flow Time Transformation

    NASA Astrophysics Data System (ADS)

    Smith, T. J.; Marshall, L. A.; McGlynn, B. L.

    2012-12-01

    Hydrologic modeling is a particularly complex problem that is commonly confronted with complications due to multiple dominant streamflow states, temporal switching of streamflow generation mechanisms, and dynamic responses to model inputs based on antecedent conditions. These complexities can inhibit the development of model structures and their fitting to observed data. As a result of these complexities and the heterogeneity that can exist within a catchment, optimization techniques are typically employed to obtain reasonable estimates of model parameters. However, when calibrating a model, the cost function itself plays a large role in determining the "optimal" model parameters. In this study, we introduce a transformation that allows for the estimation of model parameters in the "flow time" domain. The flow time transformation dynamically weights streamflows in the time domain, effectively stretching time during high streamflows and compressing time during low streamflows. Given the impact of cost functions on model optimization, such transformations focus on the hydrologic fluxes themselves rather than on equal time weighting common to traditional approaches. The utility of such a transform is of particular note to applications concerned with total hydrologic flux (water resources management, nutrient loading, etc.). The flow time approach can improve the predictive consistency of total fluxes in hydrologic models and provide insights into model performance by highlighting model strengths and deficiencies in an alternate modeling domain. Flow time transformations can also better remove positive skew from the streamflow time series, resulting in improved model fits, satisfaction of the normality assumption of model residuals, and enhanced uncertainty quantification. We illustrate the value of this transformation for two distinct sets of catchment conditions (snow-dominated and subtropical).
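
    One plausible reading of the transformation (an assumption on our part, not the authors' code) is to re-index the series by normalized cumulative discharge, so that each unit of the new "time" carries equal flux; residuals weighted per unit of flow time are then flux-weighted residuals in clock time.

      # Flow-time coordinate from cumulative discharge, and a flux-weighted
      # sum of squared errors evaluated in that domain.
      import numpy as np

      q_obs = np.array([1.0, 1.2, 8.0, 15.0, 4.0, 2.0, 1.5])   # streamflow
      tau = np.cumsum(q_obs) / q_obs.sum()     # flow time in (0, 1]
      dtau = np.diff(np.concatenate([[0.0], tau]))

      def flow_time_sse(q_sim):
          return np.sum(dtau * (q_sim - q_obs) ** 2)

      print(flow_time_sse(0.9 * q_obs))        # example: a biased simulation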

  18. Optimization of murine model for Besnoitia caprae.

    PubMed

    Oryan, A; Sadoughifar, R; Namavari, M

    2016-09-01

    It has been shown that mice, particularly the BALB/c ones, are susceptible to infection by some of the apicomplexan parasites. To compare the susceptibility of inbred BALB/c, outbred BALB/c, and C57 BL/6 mice to Besnoitia caprae inoculation and to determine the LD50, 30 male inbred BALB/c, 30 outbred BALB/c, and 30 C57 BL/6 mice were assigned into 18 groups of 5 mice. Each group was inoculated intraperitoneally with 12.5 × 10(3), 25 × 10(3), 5 × 10(4), 1 × 10(5), or 2 × 10(5) tachyzoites, or a control inoculum of DMEM, respectively. The inbred BALB/c was found to be the most susceptible of the tested mouse strains, and the LD50 per inbred BALB/c mouse was calculated as 12.5 × 10(3.6) tachyzoites, while the LD50 for the outbred BALB/c and C57 BL/6 was 25 × 10(3.4) and 5 × 10(4) tachyzoites per mouse, respectively. To investigate the impact of different routes of inoculation in the most susceptible mouse strain, another seventy-five male inbred BALB/c mice were inoculated with 2 × 10(5) tachyzoites of B. caprae via various inoculation routes, including subcutaneous, intramuscular, intraperitoneal, infraorbital, and oral. All the mice in the oral and infraorbital groups survived for 60 days, whereas the IM group showed quicker death and more severe pathologic lesions, followed by the SC and IP groups. Therefore, the BALB/c mouse is a proper laboratory model, and IM inoculation is an ideal method for inducing besnoitiosis and a candidate for treatment, prevention, and testing the efficacy of vaccines for besnoitiosis. PMID:27605770

  19. Methods for accurate homology modeling by global optimization.

    PubMed

    Joo, Keehyoung; Lee, Jinwoo; Lee, Jooyoung

    2012-01-01

    High accuracy protein modeling from its sequence information is an important step toward revealing the sequence-structure-function relationship of proteins and nowadays it becomes increasingly more useful for practical purposes such as in drug discovery and in protein design. We have developed a protocol for protein structure prediction that can generate highly accurate protein models in terms of backbone structure, side-chain orientation, hydrogen bonding, and binding sites of ligands. To obtain accurate protein models, we have combined a powerful global optimization method with traditional homology modeling procedures such as multiple sequence alignment, chain building, and side-chain remodeling. We have built a series of specific score functions for these steps, and optimized them by utilizing conformational space annealing, which is one of the most successful combinatorial optimization algorithms currently available.

  20. Optimization of Analytical Potentials for Coarse-Grained Biopolymer Models.

    PubMed

    Mereghetti, Paolo; Maccari, Giuseppe; Spampinato, Giulia Lia Beatrice; Tozzini, Valentina

    2016-08-25

    The increasing trend in the recent literature on coarse-grained (CG) models testifies to their impact in the study of complex systems. However, the CG model landscape is variegated: even at a given resolution level, the force fields are very heterogeneous and are optimized with very different parametrization procedures. As a step toward the standardization of CG models for biopolymers, here we describe a strategy to aid the building and optimization of statistics-based analytical force fields and its implementation in the software package AsParaGS (Assisted Parameterization platform for coarse Grained modelS). Our method is based on the use of analytical potentials, optimized by targeting the statistical distributions of internal variables by means of a combination of algorithms (relative-entropy-driven stochastic exploration of the parameter space and iterative Boltzmann inversion). This allows designing a custom model that endows the force field terms with a physically sound meaning. Furthermore, the level of transferability and accuracy can be tuned through the choice of the statistical data set composition. The method, illustrated by means of applications to helical polypeptides, also involves the analysis of two- and three-variable distributions and allows handling issues related to correlations among force field terms. AsParaGS is interfaced with general-purpose molecular dynamics codes and currently implements the "minimalist" subclass of CG models (i.e., one bead per amino acid, Cα based). Extensions to nucleic acids and different levels of coarse graining are under way. PMID:27150459
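
    A sketch of the iterative Boltzmann inversion ingredient named above; the grid, damping factor, and the simulation call are assumptions for illustration, not AsParaGS internals:

        import numpy as np

        KT = 2.494  # kJ/mol at 300 K (assumed temperature)

        def ibi_update(v, p_model, p_target, damping=0.2, eps=1e-12):
            """One iterative Boltzmann inversion step: nudge a tabulated
            potential so the model's distribution of an internal variable
            (bond length, angle, ...) moves toward the target distribution.
            v, p_model, p_target are arrays on a common grid; the damping
            factor stabilizes the iteration."""
            return v + damping * KT * np.log((p_model + eps) / (p_target + eps))

        # Sketch of the outer loop (run_cg_simulation is a placeholder for
        # the molecular dynamics call a parameterization platform dispatches):
        # for k in range(n_iters):
        #     p_model = histogram_of(run_cg_simulation(v))
        #     v = ibi_update(v, p_model, p_target)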

  1. Modeling urban air pollution with optimized hierarchical fuzzy inference system.

    PubMed

    Tashayo, Behnam; Alimohammadi, Abbas

    2016-10-01

    Environmental exposure assessments (EEA) and epidemiological studies require urban air pollution models with appropriate spatial and temporal resolutions. Uncertain available data and inflexible models can limit air pollution modeling techniques, particularly in developing countries. This paper develops a hierarchical fuzzy inference system (HFIS) to model air pollution under different land use, transportation, and meteorological conditions. To improve performance, the system treats the issue as a large-scale and high-dimensional problem and develops the proposed model using a three-step approach. In the first step, a geospatial information system (GIS) and probabilistic methods are used to preprocess the data. In the second step, a hierarchical structure is generated based on the problem. In the third step, the accuracy and complexity of the model are simultaneously optimized with a multiple-objective particle swarm optimization (MOPSO) algorithm. We examine the capabilities of the proposed model for predicting daily and annual mean PM2.5 and NO2 and compare the accuracy of the results with representative models from the existing literature. The benefits provided by the model features, including probabilistic preprocessing, multi-objective optimization, and hierarchical structure, are evaluated by comparing five different consecutive models in terms of accuracy and complexity criteria. Fivefold cross-validation is used to assess the performance of the generated models. The respective average RMSEs and coefficients of determination (R^2) for the test datasets using the proposed model are as follows: daily PM2.5 = (8.13, 0.78), annual mean PM2.5 = (4.96, 0.80), daily NO2 = (5.63, 0.79), and annual mean NO2 = (2.89, 0.83). The obtained results demonstrate that the developed hierarchical fuzzy inference system can be utilized for modeling air pollution in EEA and epidemiological studies.

  2. A stochastic optimization model under modeling uncertainty and parameter certainty for groundwater remediation design: part II. Model application.

    PubMed

    He, L; Huang, G H; Lu, H W

    2010-04-15

    A new stochastic optimization model under modeling uncertainty (SOMUM) and parameter certainty is applied to a practical site located in western Canada. Various groundwater remediation strategies under different significance levels are obtained from the SOMUM model. The impact of modeling uncertainty (proxy-simulator residuals) on optimal remediation strategies is compared to that of parameter uncertainty (arising from physical properties). The results show that the increased remediation cost for mitigating the impact of modeling uncertainty would be higher than that from models where the coefficient of variation of the input parameters approaches 40%. This provides new evidence that the modeling uncertainty in proxy-simulator residuals can hardly be ignored; there is thus a need to investigate and mitigate the impact of such uncertainties on groundwater remediation design. This work would be helpful for lowering the risk of system failure due to potential environmental-standard violations when determining optimal groundwater remediation strategies.

  3. Block-oriented modeling of superstructure optimization problems

    SciTech Connect

    Friedman, Z; Ingalls, J; Siirola, JD; Watson, JP

    2013-10-15

    We present a novel software framework for modeling large-scale engineered systems as mathematical optimization problems. A key motivating feature in such systems is their hierarchical, highly structured topology. Existing mathematical optimization modeling environments do not facilitate the natural expression and manipulation of hierarchically structured systems. Rather, the modeler is forced to "flatten" the system description, hiding structure that may be exploited by solvers, and obfuscating the system that the modeling environment is attempting to represent. To correct this deficiency, we propose a Python-based "block-oriented" modeling approach for representing the discrete components within the system. Our approach is an extension of the Pyomo library for specifying mathematical optimization problems. Through the use of a modeling components library, the block-oriented approach facilitates a clean separation of system superstructure from the details of individual components. This approach also naturally lends itself to expressing design and operational decisions as disjunctive expressions over the component blocks. By expressing a mathematical optimization problem in a block-oriented manner, inherent structure (e.g., multiple scenarios) is preserved for potential exploitation by solvers. In particular, we show that block-structured mathematical optimization problems can be straightforwardly manipulated by decomposition-based multi-scenario algorithmic strategies, specifically in the context of the PySP stochastic programming library. We illustrate our block-oriented modeling approach using a case study drawn from the electricity grid operations domain: unit commitment with transmission switching and N - 1 reliability constraints. Finally, we demonstrate that the overhead associated with block-oriented modeling only minimally increases model instantiation times, and need not adversely impact solver behavior.
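
    Pyomo's Block component is the library mechanism the abstract builds on. As a minimal sketch, a toy two-unit superstructure (unit names, capacities, demand, and costs are all hypothetical) keeps component-level constraints inside each block and system-level coupling outside:

        import pyomo.environ as pyo

        m = pyo.ConcreteModel()
        m.UNITS = pyo.Set(initialize=['u1', 'u2'])

        def unit_block(b, u):
            b.built = pyo.Var(domain=pyo.Binary)   # design decision per component
            b.flow = pyo.Var(bounds=(0, 100))      # operational decision
            b.capacity = pyo.Constraint(expr=b.flow <= 100 * b.built)

        m.unit = pyo.Block(m.UNITS, rule=unit_block)

        # System-level coupling lives outside the component blocks.
        m.demand = pyo.Constraint(expr=sum(m.unit[u].flow for u in m.UNITS) >= 120)
        m.cost = pyo.Objective(
            expr=sum(50 * m.unit[u].built + m.unit[u].flow for u in m.UNITS))
        # pyo.SolverFactory('glpk').solve(m)  # any MIP solver

    Because each block is a self-contained namespace, the superstructure can be grown, duplicated per scenario, or handed to decomposition strategies without flattening the model.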

  4. Optimization of Parameter Selection for Partial Least Squares Model Development

    NASA Astrophysics Data System (ADS)

    Zhao, Na; Wu, Zhi-Sheng; Zhang, Qiao; Shi, Xin-Yuan; Ma, Qun; Qiao, Yan-Jiang

    2015-07-01

    In multivariate calibration using a spectral dataset, it is difficult to optimize the nonsystematic parameters of a quantitative model, i.e., the spectral pretreatment, the number of latent factors, and the variable selection. In this study, we describe a novel and systematic approach that uses a processing trajectory to select three parameters: the spectral pretreatment, variable importance in the projection (VIP) for variable selection, and the number of latent factors in the partial least squares (PLS) model. The root mean square error of calibration (RMSEC), the root mean square error of prediction (RMSEP), the ratio of the standard error of prediction to the standard deviation (RPD), and the coefficients of determination for calibration (R^2_cal) and validation (R^2_pre) were assessed simultaneously to select the best modeling path. We used three different near-infrared (NIR) datasets, which illustrated that there is more than one modeling path that ensures good modeling. Whereas a conventional PLS model optimizes the modeling parameters step by step, the approach described here identifies robust models more efficiently than previously published methods.
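
    The VIP scores named above have a standard closed form. A sketch computing them from a fitted scikit-learn PLSRegression (synthetic data; the paper's processing trajectory additionally scans pretreatments and latent-factor counts around this step):

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        def vip_scores(pls):
            """Variable importance in projection for a fitted single-response
            PLSRegression, using the standard VIP formula."""
            t = pls.x_scores_                  # (n_samples, n_components)
            w = pls.x_weights_                 # (n_features, n_components)
            q = pls.y_loadings_                # (n_targets, n_components)
            p, a = w.shape
            ssy = np.sum(t ** 2, axis=0) * q[0] ** 2   # explained SS per component
            wnorm = w / np.linalg.norm(w, axis=0)      # normalized weight vectors
            return np.sqrt(p * (wnorm ** 2 @ ssy) / ssy.sum())

        # Hypothetical NIR-like data: keep variables with VIP > 1 (a common cutoff).
        X, y = np.random.rand(50, 200), np.random.rand(50)
        pls = PLSRegression(n_components=5).fit(X, y)
        keep = vip_scores(pls) > 1.0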

  5. Optimization models for flight test scheduling

    NASA Astrophysics Data System (ADS)

    Holian, Derreck

    As threats around the world increase with nations developing new generations of warfare technology, the United States is keen on maintaining its position at the top of the defense technology curve. This in turn means that the U.S. military/government must research, develop, procure, and sustain new systems in the defense sector to safeguard this position. Currently, the Lockheed Martin F-35 Joint Strike Fighter (JSF) Lightning II is being developed, tested, and deployed to the U.S. military at Low Rate Initial Production (LRIP). The simultaneous testing and deployment is due to the contracted procurement process, which is intended to provide a rapid Initial Operating Capability (IOC) release of the 5th-generation fighter. For this reason, many factors go into the determination of what is to be tested, in what order, and at which time, driven by the military requirements: a given system or envelope of the aircraft must be assessed prior to releasing that capability into service. The objective of this praxis is to aid in determining what testing can be achieved on an aircraft at a point in time. Furthermore, it defines the optimum allocation of test points to aircraft and determines a prioritization of restrictions to be mitigated so that the test program can be best supported. The system described in this praxis has been deployed across the F-35 test program and testing sites. It has discovered hundreds of available test points for an aircraft to fly when it was thought none existed, thus preventing an aircraft from being grounded. Additionally, it has saved hundreds of labor hours and greatly reduced the occurrence of test point reflight. Due to the proprietary nature of the JSF program, details regarding the actual test points, test plans, and all other program-specific information are not presented; generic, representative data is used for example and proof-of-concept purposes. Apart from the data correlation algorithms, the optimization associated...
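
    In the same spirit as the generic, representative data the praxis itself uses, a toy sketch of the two tasks described, finding test points available to an aircraft under its current restrictions and ranking restrictions by how many points they block, could be (all names hypothetical):

        from collections import Counter

        # Each aircraft carries a set of active restrictions; a test point is
        # flyable on an aircraft if none of its required capabilities are restricted.
        restrictions = {'AF-1': {'envelope_gt_0.9M'}, 'AF-2': set()}
        test_points = {
            'TP-101': {'envelope_gt_0.9M'},   # capabilities the point needs
            'TP-102': set(),
            'TP-103': set(),
        }

        def available_points(aircraft):
            """Test points flyable on an aircraft given its restrictions."""
            return {tp for tp, needs in test_points.items()
                    if not needs & restrictions[aircraft]}

        # Rank restrictions by how many test points they block fleet-wide,
        # i.e., a mitigation priority list of the kind the praxis describes.
        blocked = Counter()
        for ac, rset in restrictions.items():
            for tp, needs in test_points.items():
                for r in needs & rset:
                    blocked[r] += 1

        print(available_points('AF-1'))   # TP-102, TP-103
        print(blocked.most_common())      # restrictions ranked by points blocked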

  6. Optimal control of information epidemics modeled as Maki Thompson rumors

    NASA Astrophysics Data System (ADS)

    Kandhway, Kundan; Kuri, Joy

    2014-12-01

    We model the spread of information in a homogeneously mixed population using the Maki Thompson rumor model. We formulate an optimal control problem, from the perspective of a single campaigner, to maximize the spread of information when the campaign budget is fixed. Control signals, such as advertising in the mass media, attempt to convert ignorants and stiflers into spreaders. We show the existence of a solution to the optimal control problem when the campaigning incurs non-linear costs under the isoperimetric budget constraint. The solution employs Pontryagin's Minimum Principle and a modified version of the forward-backward sweep technique for numerical computation to accommodate the isoperimetric budget constraint. The techniques developed in this paper are general and can be applied to similar optimal control problems in other areas. We have allowed the spreading rate of the information epidemic to vary over the campaign duration to model practical situations in which the interest level of the population in the subject of the campaign changes with time. The shape of the optimal control signal is studied for different model parameters and spreading rate profiles. We have also studied the variation of the optimal campaigning costs with respect to various model parameters. Results indicate that, for some model parameters, significant improvements can be achieved by the optimal strategy compared to the static control strategy. The static strategy respects the same budget constraint as the optimal strategy and has a constant value throughout the campaign horizon. This work finds application in election and social awareness campaigns, product advertising, movie promotion and crowdfunding campaigns.
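
    A skeleton of the (unmodified) forward-backward sweep named above, for a generic scalar control problem; the paper's variant additionally enforces the isoperimetric budget constraint, which this sketch omits:

        import numpy as np

        def forward_backward_sweep(f, l_dot, u_opt, x0, T, n=1000,
                                   relax=0.5, iters=200, tol=1e-6):
            """Generic forward-backward sweep for x' = f(x, u) with adjoint
            l' = l_dot(x, u, l), transversality l(T) = 0, and pointwise
            optimality condition u = u_opt(x, l) from Pontryagin's principle."""
            t = np.linspace(0.0, T, n)
            dt = t[1] - t[0]
            u = np.zeros(n)
            x = np.zeros(n)
            lam = np.zeros(n)
            for _ in range(iters):
                x[0] = x0
                for k in range(n - 1):              # state forward in time
                    x[k + 1] = x[k] + dt * f(x[k], u[k])
                lam[-1] = 0.0
                for k in range(n - 1, 0, -1):       # adjoint backward in time
                    lam[k - 1] = lam[k] - dt * l_dot(x[k], u[k], lam[k])
                u_new = u_opt(x, lam)               # pointwise optimality update
                if np.max(np.abs(u_new - u)) < tol:
                    break
                u = relax * u_new + (1 - relax) * u  # damped update for stability
            return t, x, u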

  7. Hydro- abrasive jet machining modeling for computer control and optimization

    NASA Astrophysics Data System (ADS)

    Groppetti, R.; Jovane, F.

    1993-06-01

    Use of hydro-abrasive jet machining (HAJM) for machining a wide variety of materials (metals, polymers, ceramics, fiber-reinforced composites, metal-matrix composites, and bonded or hybridized materials), primarily for two- and three-dimensional cutting and also for drilling, turning, milling, and deburring, has been reported. However, the potential of this innovative process has not been fully explored. This article discusses process control, integration, and optimization of HAJM to establish a platform for the implementation of real-time adaptive control constraint (ACC), adaptive control optimization (ACO), and CAD/CAM integration. It presents the approach followed and the main results obtained during the development, implementation, automation, and integration of a HAJM cell and its computerized controller. The process variables and models reported in the literature were first critically analyzed in order to identify the relevant variables, to define a process model suitable for HAJM real-time control and optimization, to correlate process variables and parameters with machining results, and to avoid expensive and time-consuming experiments for determining optimal machining conditions; on this basis, a process prediction and optimization model was identified and implemented. The configuration of the HAJM cell and the architecture and multiprogramming operation of its controller were then analyzed in terms of monitoring, control, process result prediction, and process condition optimization. The prediction and optimization model selects optimal machining conditions using multi-objective programming: based on the definition of an economy function and a productivity function, with suitable constraints on required machining quality, required kerfing depth, and available resources, the model was applied to test cases based on experimental results.

  8. Concurrent subspace width optimization method for RBF neural network modeling.

    PubMed

    Yao, Wen; Chen, Xiaoqian; Zhao, Yong; van Tooren, Michel

    2012-02-01

    Radial basis function neural networks (RBFNNs) are widely used in nonlinear function approximation. One of the challenges in RBFNN modeling is determining how to effectively optimize width parameters to improve approximation accuracy. To solve this problem, a width optimization method, concurrent subspace width optimization (CSWO), is proposed based on a decomposition and coordination strategy. This method decomposes the large-scale width optimization problem into several subspace optimization (SSO) problems, each of which has a single optimization variable and smaller training and validation data sets so as to greatly simplify optimization complexity. These SSOs can be solved concurrently, thus computational time can be effectively reduced. With top-level system coordination, the optimization of SSOs can converge to a consistent optimum, which is equivalent to the optimum of the original width optimization problem. The proposed method is tested with four mathematical examples and one practical engineering approximation problem. The results demonstrate the efficiency and robustness of CSWO in optimizing width parameters over the traditional width optimization methods.
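
    A sequential stand-in for the subspace decomposition described above, optimizing each Gaussian width in its own one-dimensional subproblem against validation error (helper functions and bounds are hypothetical; the actual CSWO solves the subspaces concurrently with system-level coordination):

        import numpy as np
        from scipy.optimize import minimize_scalar

        def design_matrix(X, centers, widths):
            # Gaussian RBF features: phi_ij = exp(-||x_i - c_j||^2 / w_j^2)
            d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
            return np.exp(-d2 / widths[None, :] ** 2)

        def fit_weights(X, y, centers, widths):
            # Linear output weights by least squares for fixed widths.
            return np.linalg.lstsq(design_matrix(X, centers, widths), y, rcond=None)[0]

        def subspace_width_pass(Xt, yt, Xv, yv, centers, widths):
            """One coordination pass: each width is optimized in its own 1-D
            subspace while the others are held fixed."""
            for j in range(len(widths)):
                def val_err(wj):
                    w = widths.copy(); w[j] = wj
                    weights = fit_weights(Xt, yt, centers, w)
                    pred = design_matrix(Xv, centers, w) @ weights
                    return np.mean((pred - yv) ** 2)
                widths[j] = minimize_scalar(val_err, bounds=(0.05, 5.0),
                                            method='bounded').x
            return widths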

  9. Optimization of spectral printer modeling based on a modified cellular Yule-Nielsen spectral Neugebauer model.

    PubMed

    Liu, Qiang; Wan, Xiaoxia; Xie, Dehong

    2014-06-01

    The study presented here optimizes several steps in the spectral printer modeling workflow based on a cellular Yule-Nielsen spectral Neugebauer (CYNSN) model. First, a printer subdividing method was developed that reduces the number of sub-models while maintaining the maximum device gamut. Second, the forward spectral prediction accuracy of the CYNSN model for each subspace of the printer was improved using back propagation artificial neural network (BPANN) estimated n values. Third, a sequential gamut judging method was proposed, which clearly reduces the complexity of searching for the optimal sub-model and cell during printer backward modeling. After that, we further modified the color metric used in the modeling and comprehensively improved the spectral and perceptual accuracy of the spectral printer model. The experimental results show that the proposed optimization approaches provide clear improvements in modeling accuracy or efficiency at each of the corresponding steps, and an overall improvement of the optimized spectral printer modeling workflow was also demonstrated.

  10. Optimization of Time-Course Experiments for Kinetic Model Discrimination

    PubMed Central

    Lages, Nuno F.; Cordeiro, Carlos; Sousa Silva, Marta; Ponces Freire, Ana; Ferreira, António E. N.

    2012-01-01

    Systems biology relies heavily on the construction of quantitative models of biochemical networks. These models must have predictive power to help unveil the underlying molecular mechanisms of cellular physiology, but it is also paramount that they are consistent with the data resulting from key experiments. Often, it is possible to find several models that describe the data equally well but provide significantly different quantitative predictions regarding particular variables of the network. In those cases, one is faced with a problem of model discrimination, the procedure of rejecting inappropriate models from a set of candidates in order to elect one as the best model to use for prediction. In this work, a method is proposed to optimize the design of enzyme kinetic assays with the goal of selecting a model among a set of candidates. We focus on models with systems of ordinary differential equations as the underlying mathematical description. The method provides a design where an extension of the Kullback-Leibler distance, computed over the time courses predicted by the models, is maximized. Given the asymmetric nature of this measure, a generalized differential evolution algorithm for multi-objective optimization problems was used. The kinetics of yeast glyoxalase I (EC 4.4.1.5) was chosen as a difficult test case to evaluate the method. Although a single-substrate kinetic model is usually considered, a two-substrate mechanism has also been proposed for this enzyme. We designed an experiment capable of discriminating between the two models by optimizing the initial substrate concentrations of glyoxalase I, in the presence of the subsequent pathway enzyme, glyoxalase II (EC 3.1.2.6). This discriminatory experiment was conducted in the laboratory and the results indicate a two-substrate mechanism for the kinetics of yeast glyoxalase I. PMID:22403703

  11. The effect of model uncertainty on some optimal routing problems

    NASA Technical Reports Server (NTRS)

    Mohanty, Bibhu; Cassandras, Christos G.

    1991-01-01

    The effect of model uncertainties on optimal routing in a system of parallel queues is examined. The uncertainty arises in modeling the service time distribution for the customers (jobs, packets) to be served. For a Poisson arrival process and Bernoulli routing, the optimal mean system delay generally depends on the variance of this distribution. However, as the input traffic load approaches the system capacity the optimal routing assignment and corresponding mean system delay are shown to converge to a variance-invariant point. The implications of these results are examined in the context of gradient-based routing algorithms. An example of a model-independent algorithm using online gradient estimation is also included.

  12. A dynamic optimization model for solid waste recycling.

    PubMed

    Anghinolfi, Davide; Paolucci, Massimo; Robba, Michela; Taramasso, Angela Celeste

    2013-02-01

    Recycling is an important part of waste management (which involves environmental, technological, economic, legislative and social issues). Unlike many works in the literature, this paper focuses on recycling management and on the dynamic optimization of materials collection. The developed dynamic decision model is characterized by state variables, corresponding to the quantity of waste in each bin on each day, and control variables determining the quantity of material collected in the area each day and the routes of the collecting vehicles. The objective function minimizes the sum of costs minus benefits. The decision model is integrated in a GIS-based decision support system (DSS). A case study of the Cogoleto municipality is presented to show the effectiveness of the proposed model. The optimal results show that the net benefits of the optimized collection are about 2.5 times greater than those of the estimated current policy.

  13. Analysis of Sting Balance Calibration Data Using Optimized Regression Models

    NASA Technical Reports Server (NTRS)

    Ulbrich, N.; Bader, Jon B.

    2010-01-01

    Calibration data of a wind tunnel sting balance were processed using a candidate math model search algorithm that recommends an optimized regression model for the data analysis. During the calibration, the normal force and the moment at the balance moment center were selected as independent calibration variables. The sting balance itself had two moment gages. Therefore, after analyzing the connection between calibration loads and gage outputs, it was decided to choose the difference and the sum of the gage outputs as the two responses that best describe the behavior of the balance. The math model search algorithm was applied to these two responses, and an optimized regression model was obtained for each response. Classical strain gage balance load transformations and the equations for the deflection of a cantilever beam under load are used to show that the search algorithm's two optimized regression models are supported by a theoretical analysis of the relationship between the applied calibration loads and the measured gage outputs. The analysis of the sting balance calibration data set is a rare example of a situation in which the terms of a balance regression model can be derived directly from first principles of physics. In addition, it is interesting to note that the search algorithm recommended the correct regression model term combinations using only a set of statistical quality metrics applied to the experimental data during the algorithm's term selection process.

  14. Optimization of Operations Resources via Discrete Event Simulation Modeling

    NASA Technical Reports Server (NTRS)

    Joshi, B.; Morris, D.; White, N.; Unal, R.

    1996-01-01

    The resource levels required for operation and support of reusable launch vehicles are typically defined through discrete event simulation modeling. Minimizing these resources constitutes an optimization problem involving discrete variables and simulation. Conventional approaches to solve such optimization problems involving integer valued decision variables are the pattern search and statistical methods. However, in a simulation environment that is characterized by search spaces of unknown topology and stochastic measures, these optimization approaches often prove inadequate. In this paper, we have explored the applicability of genetic algorithms to the simulation domain. Genetic algorithms provide a robust search strategy that does not require continuity and differentiability of the problem domain. The genetic algorithm successfully minimized the operation and support activities for a space vehicle, through a discrete event simulation model. The practical issues associated with simulation optimization, such as stochastic variables and constraints, were also taken into consideration.

  15. An uncertain multidisciplinary design optimization method using interval convex models

    NASA Astrophysics Data System (ADS)

    Li, Fangyi; Luo, Zhen; Sun, Guangyong; Zhang, Nong

    2013-06-01

    This article proposes an uncertain multi-objective multidisciplinary design optimization methodology, which employs the interval model to represent the uncertainties of uncertain-but-bounded parameters. The interval number programming method is applied to transform each uncertain objective function into two deterministic objective functions, and a satisfaction degree of intervals is used to convert both the uncertain inequality and equality constraints to deterministic inequality constraints. In doing so, an unconstrained deterministic optimization problem will be constructed in association with the penalty function method. The design will be finally formulated as a nested three-loop optimization, a class of highly challenging problems in the area of engineering design optimization. An advanced hierarchical optimization scheme is developed to solve the proposed optimization problem based on the multidisciplinary feasible strategy, which is a well-studied method able to reduce the dimensions of multidisciplinary design optimization problems by using the design variables as independent optimization variables. In the hierarchical optimization system, the non-dominated sorting genetic algorithm II, sequential quadratic programming method and Gauss-Seidel iterative approach are applied to the outer, middle and inner loops of the optimization problem, respectively. Typical numerical examples are used to demonstrate the effectiveness of the proposed methodology.

  16. Multi-objective parameter optimization of common land model using adaptive surrogate modeling

    NASA Astrophysics Data System (ADS)

    Gong, W.; Duan, Q.; Li, J.; Wang, C.; Di, Z.; Dai, Y.; Ye, A.; Miao, C.

    2015-05-01

    Parameter specification usually has a significant influence on the performance of land surface models (LSMs). However, estimating the parameters properly is a challenging task for the following reasons: (1) LSMs usually have many adjustable parameters (20 to 100 or even more), leading to the curse of dimensionality in the parameter input space; (2) LSMs usually have many output variables involving water/energy/carbon cycles, so calibrating LSMs is actually a multi-objective optimization problem; (3) regional LSMs are expensive to run, while conventional multi-objective optimization methods need a large number of model runs (typically ~10^5-10^6), which makes parameter optimization computationally prohibitive. An uncertainty quantification framework was developed to meet these challenges, with the following steps: (1) parameter screening to reduce the number of adjustable parameters; (2) surrogate models to emulate the responses of the dynamic model to variations of the adjustable parameters; (3) an adaptive strategy to improve the efficiency of surrogate-modeling-based optimization; and (4) a weighting function to convert the multi-objective optimization into a single-objective optimization. In this study, we demonstrate the uncertainty quantification framework on a single-column application of an LSM, the Common Land Model (CoLM), and evaluate the effectiveness and efficiency of the proposed framework. The results indicate that this framework can achieve optimal parameters efficiently and effectively. Moreover, this result implies the possibility of calibrating other large complex dynamic models, such as regional-scale LSMs, atmospheric models and climate models.

  17. Optimal control design that accounts for model mismatch errors

    SciTech Connect

    Kim, T.J.; Hull, D.G.

    1995-02-01

    A new technique is presented in this paper that reduces the complexity of state differential equations while accounting for modeling assumptions. The mismatch controls are defined as the differences between the model equations and the true state equations. The performance index of the optimal control problem is formulated with a set of tuning parameters that are user-selected to tune the control solution in order to achieve the best results. Computer simulations demonstrate that the tuned control law outperforms the untuned controller and produces results that are comparable to a numerically-determined, piecewise-linear optimal controller.

  18. Optimization methods and silicon solar cell numerical models

    NASA Technical Reports Server (NTRS)

    Girardini, K.; Jacobsen, S. E.

    1986-01-01

    An optimization algorithm for use with numerical silicon solar cell models was developed. By coupling an optimization algorithm with a solar cell model, it is possible to simultaneously vary design variables such as impurity concentrations, front junction depth, back junction depth, and cell thickness to maximize the predicted cell efficiency. An optimization algorithm was developed and interfaced with the Solar Cell Analysis Program in 1 Dimension (SCAP1D). SCAP1D uses finite difference methods to solve the differential equations which, along with several relations from the physics of semiconductors, describe mathematically the performance of a solar cell. A major obstacle is that the numerical methods used in SCAP1D require a significant amount of computer time, and during an optimization the model is called iteratively until the design variables converge to the values associated with the maximum efficiency. This problem was alleviated by designing an optimization code specifically for use with numerically intensive simulations, to reduce the number of times the efficiency has to be calculated to achieve convergence to the optimal solution.

  19. Shell model of optimal passive-scalar mixing

    NASA Astrophysics Data System (ADS)

    Miles, Christopher; Doering, Charles

    2015-11-01

    Optimal mixing is significant to process engineering within industries such as food, chemical, pharmaceutical, and petrochemical. An important question in this field is "How should one stir to create a homogeneous mixture while being energetically efficient?" To answer this question, we consider an initially unmixed scalar field representing some concentration within a fluid on a periodic domain. This passive-scalar field is advected by the velocity field, our control variable, which is constrained by a physical quantity such as energy or enstrophy. We consider two objectives: local-in-time (LIT) optimization (what will maximize the mixing rate now?) and global-in-time (GIT) optimization (what will maximize mixing at the end time?). Throughout this work we use the H^-1 mix-norm to measure mixing. To gain a better understanding, we provide a simplified mixing model by using a shell model of passive-scalar advection. LIT optimization in this shell model gives perfect mixing in finite time for the energy-constrained case and exponential decay to the perfectly mixed state for the enstrophy-constrained case. Although we only enforce that the time-averaged energy (or enstrophy) equals a chosen value in GIT optimization, interestingly, the optimal control keeps this value constant over time.

  20. Applied topology optimization of vibro-acoustic hearing instrument models

    NASA Astrophysics Data System (ADS)

    Søndergaard, Morten Birkmose; Pedersen, Claus B. W.

    2014-02-01

    Designing hearing instruments remains an acoustic challenge, as users request small designs for comfortable wear and cosmetic appeal while requiring sufficient amplification from the device. To ensure proper amplification, a critical design challenge in the hearing instrument is to minimize the feedback between the outputs (generated sound and vibrations) from the receiver looping back into the microphones. Today, the feedback signal is minimized using time-consuming trial-and-error design procedures on physical prototypes and on virtual models using finite element analysis. In the present work it is demonstrated that structural topology optimization of vibro-acoustic finite element models can be used both to sufficiently minimize the feedback signal and to reduce the time-consuming trial-and-error design approach. The structural topology optimization of a vibro-acoustic finite element model is shown for an industrial full-scale model hearing instrument.

  1. The population model of bone remodelling employed the optimal control.

    PubMed

    Moroz, Adam

    2012-11-01

    Several models have been developed in recent years that apply population dynamics methods to describe the mechanisms of bone remodelling. This study combines a population kinetics model of bone turnover (including the osteocyte loop of regulation) with the optimal control technique. Model simulations were performed over a wide range of rate parameters using the Monte Carlo method. Regression was also used to investigate how the location of the equilibrium and the equilibrium/relaxation-time characteristics depend on the rate parameters employed. The dynamic optimal control outlook for the regulation of bone remodelling processes, in the context of the osteocyte-control population model, is discussed. Optimisation criteria are formulated from the perspective of energetic and metabolic losses in the tissue, with respect to the performance of the bone multicellular unit.

  2. Optimization Method for Solution Model of Laser Tracker Multilateration Measurement

    NASA Astrophysics Data System (ADS)

    Chen, Hongfang; Tan, Zhi; Shi, Zhaoyao; Song, Huixu; Yan, Hao

    2016-08-01

    Multilateration measurement using laser trackers suffers from a cumbersome solution method for high-precision measurements. Errors are induced by the self-calibration routines of the laser tracker software. This paper describes an optimization solution model for laser tracker multilateration measurement, which effectively inhibits the negative effect of this self-calibration, and further, analyzes the accuracy of the singular value decomposition for the described solution model. Experimental verification for the solution model based on laser tracker and coordinate measuring machine (CMM) was performed. The experiment results show that the described optimization model for laser tracker multilateration measurement has good accuracy control, and has potentially broad application in the field of laser tracker spatial localization.

  3. Vehicle Propulsion Systems: Introduction to Modeling and Optimization

    NASA Astrophysics Data System (ADS)

    Guzzella, Lino; Sciarretta, Antonio

    In this book the longitudinal behavior of road vehicles is analyzed. The main emphasis is on the analysis and minimization of the fuel and energy consumption. Most approaches to this problem enhance the complexity of the vehicle system by adding components such as electrical motors or storage devices. Such a complex system can only be designed by means of mathematical models. This text gives an introduction to the modeling and optimization problems typically encountered when designing new propulsion systems for passenger cars.

  4. Optimal experiment design for model selection in biochemical networks

    PubMed Central

    2014-01-01

    Background Mathematical modeling is often used to formalize hypotheses on how a biochemical network operates by discriminating between competing models. Bayesian model selection offers a way to determine the amount of evidence that data provides to support one model over the other while favoring simple models. In practice, the amount of experimental data is often insufficient to make a clear distinction between competing models. Often one would like to perform a new experiment that would discriminate between competing hypotheses. Results We developed a novel method to perform optimal experiment design to predict which experiments would most effectively allow model selection. A Bayesian approach is applied to infer model parameter distributions. These distributions are sampled and used to simulate from multivariate predictive densities. The method is based on a k-nearest-neighbor estimate of the Jensen-Shannon divergence between the multivariate predictive densities of competing models. Conclusions We show that the method successfully uses predictive differences to enable model selection by applying it to several test cases. Because the design criterion is based on predictive distributions, which can be computed for a wide range of model quantities, the approach is very flexible. The method reveals specific combinations of experiments which improve discriminability even in cases where data is scarce. The proposed approach can be used in conjunction with existing Bayesian methodologies where (approximate) posteriors have been determined, making use of relations that exist within the inferred posteriors. PMID:24555498
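
    The paper's criterion uses a k-nearest-neighbor estimate over multivariate predictive densities; a simplified one-dimensional, histogram-based version that conveys the same design idea (a sketch, not the authors' estimator) is:

        import numpy as np
        from scipy.spatial.distance import jensenshannon

        def js_design_score(samples_a, samples_b, bins=50):
            """Histogram-based Jensen-Shannon divergence between two models'
            predictive samples for one observable. The paper instead uses a
            kNN estimator over the full multivariate predictive densities;
            this marginal version only conveys the design criterion."""
            lo = min(samples_a.min(), samples_b.min())
            hi = max(samples_a.max(), samples_b.max())
            pa, _ = np.histogram(samples_a, bins=bins, range=(lo, hi), density=True)
            pb, _ = np.histogram(samples_b, bins=bins, range=(lo, hi), density=True)
            return jensenshannon(pa, pb) ** 2   # squared distance = divergence

        # An experiment design (e.g., initial substrate concentrations) is then
        # chosen to maximize this score between the candidate models' predictions.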

  5. Time dependent optimal switching controls in online selling models

    SciTech Connect

    Bradonjic, Milan; Cohen, Albert

    2010-01-01

    We present a method to incorporate dishonesty in online selling via a stochastic optimal control problem. In our framework, the seller wishes to maximize her average wealth level W at a fixed time T of her choosing. The corresponding Hamilton-Jacobi-Bellmann (HJB) equation is analyzed for a basic case. For more general models, the admissible control set is restricted to a jump process that switches between extreme values. We propose a new approach, where the optimal control problem is reduced to a multivariable optimization problem.

  6. Pumping Optimization Model for Pump and Treat Systems - 15091

    SciTech Connect

    Baker, S.; Ivarson, Kristine A.; Karanovic, M.; Miller, Charles W.; Tonkin, M.

    2015-01-15

    Pump and treat systems are being utilized to remediate contaminated groundwater in the Hanford 100 Areas adjacent to the Columbia River in eastern Washington. Design of the systems was supported by a three-dimensional (3D) fate and transport model. This model provided sophisticated simulation capabilities but requires many hours to calculate results for each simulation considered. Many simulations are required to optimize system performance, so a two-dimensional (2D) model was created to reduce run time. The 2D model was developed as an equivalent-property version of the 3D model that derives boundary conditions and aquifer properties from the 3D model. It produces predictions that are very close to those of the 3D model, allowing it to be used for comparative remedy analyses. Any potential system modification identified using the 2D version is verified by running the 3D model to confirm performance. The 2D model was incorporated into a comprehensive analysis system (the Pumping Optimization Model, POM) to simplify the analysis of multiple simulations. It allows rapid turnaround by utilizing a graphical user interface that (1) allows operators to create hypothetical scenarios for system operation, (2) feeds the input to the 2D fate and transport model, and (3) displays the scenario results for evaluating performance improvement. All of the above is accomplished within the user interface. Complex analyses can be completed within a few hours, and multiple simulations can be compared side by side. The POM utilizes standard office computing equipment and established groundwater modeling software.

  7. Aeroelastic Optimization Study Based on X-56A Model

    NASA Technical Reports Server (NTRS)

    Li, Wesley; Pak, Chan-Gi

    2014-01-01

    A design process which incorporates the object-oriented multidisciplinary design, analysis, and optimization (MDAO) tool and the aeroelastic effects of high fidelity finite element models to characterize the design space was successfully developed and established. Two multidisciplinary design optimization studies using an object-oriented MDAO tool developed at NASA Armstrong Flight Research Center were presented. The first study demonstrates the use of aeroelastic tailoring concepts to minimize the structural weight while meeting the design requirements including strength, buckling, and flutter. A hybrid and discretization optimization approach was implemented to improve accuracy and computational efficiency of a global optimization algorithm. The second study presents a flutter mass balancing optimization study. The results provide guidance to modify the fabricated flexible wing design and move the design flutter speeds back into the flight envelope so that the original objective of X-56A flight test can be accomplished.

  8. GRAVITATIONAL LENS MODELING WITH GENETIC ALGORITHMS AND PARTICLE SWARM OPTIMIZERS

    SciTech Connect

    Rogers, Adam; Fiege, Jason D.

    2011-02-01

    Strong gravitational lensing of an extended object is described by a mapping from source to image coordinates that is nonlinear and cannot generally be inverted analytically. Determining the structure of the source intensity distribution also requires a description of the blurring effect due to a point-spread function. This initial study uses an iterative gravitational lens modeling scheme based on the semilinear method to determine the linear parameters (source intensity profile) of a strongly lensed system. Our 'matrix-free' approach avoids construction of the lens and blurring operators while retaining the least-squares formulation of the problem. The parameters of an analytical lens model are found through nonlinear optimization by an advanced genetic algorithm (GA) and particle swarm optimizer (PSO). These global optimization routines are designed to explore the parameter space thoroughly, mapping model degeneracies in detail. We develop a novel method that determines the L-curve for each solution automatically, which represents the trade-off between the image χ² and regularization effects, and allows an estimate of the optimally regularized solution for each lens parameter set. In the final step of the optimization procedure, the lens model with the lowest χ² is used while the global optimizer solves for the source intensity distribution directly. This allows us to accurately determine the number of degrees of freedom in the problem to facilitate comparison between lens models and enforce positivity on the source profile. In practice, we find that the GA conducts a more thorough search of the parameter space than the PSO.
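
    For reference, a bare-bones PSO of the kind named above (the paper's GA and PSO implementations add degeneracy mapping and automatic L-curve regularization on top of this basic loop; the cost function and bounds are supplied by the lens model):

        import numpy as np

        def pso(cost, bounds, n_particles=40, iters=200, w=0.7, c1=1.5, c2=1.5):
            """Minimal particle swarm optimizer over box bounds.
            bounds is a list of (lo, hi) pairs, one per nonlinear parameter."""
            lo, hi = np.asarray(bounds, dtype=float).T
            rng = np.random.default_rng(0)
            x = rng.uniform(lo, hi, (n_particles, len(lo)))
            v = np.zeros_like(x)
            pbest, pbest_f = x.copy(), np.array([cost(p) for p in x])
            for _ in range(iters):
                g = pbest[pbest_f.argmin()]                  # swarm-best position
                r1, r2 = rng.random(x.shape), rng.random(x.shape)
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
                x = np.clip(x + v, lo, hi)                   # keep inside the box
                f = np.array([cost(p) for p in x])
                better = f < pbest_f
                pbest[better], pbest_f[better] = x[better], f[better]
            return pbest[pbest_f.argmin()], pbest_f.min()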

  9. Groundwater Pollution Source Identification using Linked ANN-Optimization Model

    NASA Astrophysics Data System (ADS)

    Ayaz, Md; Srivastava, Rajesh; Jain, Ashu

    2014-05-01

    Groundwater is the principal source of drinking water in several parts of the world. Contamination of groundwater has become a serious health and environmental problem today. Human activities, including industrial and agricultural activities, are generally responsible for this contamination. Identification of the groundwater pollution source is a major step in groundwater pollution remediation, and complete knowledge of the source characteristics is essential to adopt an effective remediation strategy. A groundwater pollution source is said to be identified completely when its characteristics (location, strength and release period) are known. Identification of an unknown groundwater pollution source is an ill-posed inverse problem. It becomes more difficult under real field conditions, when the lag time between the first reading at the observation well and the time at which the source becomes active is not known. We developed a linked ANN-optimization model for complete identification of an unknown groundwater pollution source. The model comprises two parts: an optimization model and an ANN model. The decision variables of the linked ANN-optimization model are the source location and the release period of the pollution source. An objective function is formulated using the spatial and temporal data of observed and simulated concentrations, and then minimized to identify the pollution source parameters. The formulation of the objective function requires the lag time, which is not known. An ANN model with one hidden layer is therefore trained using the Levenberg-Marquardt algorithm to find the lag time; different combinations of source locations and release periods are used as inputs, and the lag time is obtained as the output. Performance of the proposed model is evaluated for two- and three-dimensional cases with error-free and erroneous data. Erroneous data were generated by adding uniformly distributed random error (error level 0-10%) to the analytically computed concentrations.

  10. Geometry Modeling and Grid Generation for Design and Optimization

    NASA Technical Reports Server (NTRS)

    Samareh, Jamshid A.

    1998-01-01

    Geometry modeling and grid generation (GMGG) have played and will continue to play an important role in computational aerosciences. During the past two decades, tremendous progress has occurred in GMGG; however, GMGG is still the biggest bottleneck to routine applications for complicated Computational Fluid Dynamics (CFD) and Computational Structures Mechanics (CSM) models for analysis, design, and optimization. We are still far from incorporating GMGG tools in a design and optimization environment for complicated configurations. It is still a challenging task to parameterize an existing model in today's Computer-Aided Design (CAD) systems, and the models created are not always good enough for automatic grid generation tools. Designers may believe their models are complete and accurate, but unseen imperfections (e.g., gaps, unwanted wiggles, free edges, slivers, and transition cracks) often cause problems in gridding for CSM and CFD. Despite many advances in grid generation, the process is still the most labor-intensive and time-consuming part of the computational aerosciences for analysis, design, and optimization. In an ideal design environment, a design engineer would use a parametric model to evaluate alternative designs effortlessly and optimize an existing design for a new set of design objectives and constraints. For this ideal environment to be realized, the GMGG tools must have the following characteristics: (1) be automated, (2) provide consistent geometry across all disciplines, (3) be parametric, and (4) provide sensitivity derivatives. This paper will review the status of GMGG for analysis, design, and optimization processes, and it will focus on some emerging ideas that will advance the GMGG toward the ideal design environment.

  11. An internet graph model based on trade-off optimization

    NASA Astrophysics Data System (ADS)

    Alvarez-Hamelin, J. I.; Schabanel, N.

    2004-03-01

    This paper presents a new model for the Internet graph (AS graph) based on the concept of heuristic trade-off optimization, introduced by Fabrikant, Koutsoupias and Papadimitriou to grow a random tree with a heavily tailed degree distribution. We propose a generalization of this approach to generate a general graph, as a candidate for modeling the Internet. We present the results of our simulations and an analysis of the standard parameters measured in our model, compared with measurements from the physical Internet graph.
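
    The underlying Fabrikant-Koutsoupias-Papadimitriou tree, which the paper generalizes to a graph, is compact enough to sketch directly (a minimal illustration, not the paper's generalized model):

        import random

        def fkp_tree(n, alpha):
            """Grow the FKP trade-off tree: each new node i attaches to the
            existing node j minimizing alpha * dist(i, j) + hops(j), where
            hops(j) is j's hop distance to the root."""
            pts = [(random.random(), random.random()) for _ in range(n)]
            parent, hops = [None], [0]               # node 0 is the root
            for i in range(1, n):
                def cost(j):
                    dx, dy = pts[i][0] - pts[j][0], pts[i][1] - pts[j][1]
                    return alpha * (dx * dx + dy * dy) ** 0.5 + hops[j]
                j = min(range(i), key=cost)
                parent.append(j)
                hops.append(hops[j] + 1)
            return parent

        # Small alpha gives a star-like tree, large alpha a geometric one;
        # intermediate alpha yields the heavy-tailed degree distributions
        # the trade-off model is known for.
        degrees = {}
        for p in fkp_tree(2000, alpha=4.0)[1:]:
            degrees[p] = degrees.get(p, 0) + 1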

  12. Verifying and Validating Proposed Models for FSW Process Optimization

    NASA Technical Reports Server (NTRS)

    Schneider, Judith

    2008-01-01

    This slide presentation reviews friction stir welding (FSW) and the attempts to model the process in order to optimize and improve it. The studies are ongoing to validate and refine the model of metal flow in the FSW process. There are slides showing the conventional FSW process, a couple of weld tool designs, and how the design interacts with the metal flow path. The two basic components of the weld tool are shown, along with geometries of the shoulder design. Modeling of the FSW process is reviewed. Other topics include (1) microstructure features, (2) flow streamlines, (3) steady-state nature, and (4) grain refinement mechanisms.

  13. Fuzzy modelling of power system optimal load flow

    SciTech Connect

    Miranda, V.; Saraiva, J.T.

    1992-05-01

    In this paper, a fuzzy model for power system operation is presented. Uncertainties in loads and generations are modeled as fuzzy numbers. System behavior under known (while uncertain) injections is dealt with by a DC fuzzy power flow model. System optimal (while uncertain) operation is calculated with linear programming procedures where the problem nature and structure allows some efficient techniques such as Dantzig Wolfe decomposition and dual simplex to be used. Among the results, one obtains a fuzzy cost value for system operation and possibility distributions for branch power flows and power generations. Some risk analysis is possible, as system robustness and exposure indices can be derived and hedging policies can be investigated.

  14. Discover for Yourself: An Optimal Control Model in Insect Colonies

    ERIC Educational Resources Information Center

    Winkel, Brian

    2013-01-01

    We describe the enlightening path of self-discovery afforded to the teacher of undergraduate mathematics. This is demonstrated as we find and develop background material on an application of optimal control theory to model the evolutionary strategy of an insect colony to produce the maximum number of queen or reproducer insects in the colony at…

  15. Metabolic engineering with multi-objective optimization of kinetic models.

    PubMed

    Villaverde, Alejandro F; Bongard, Sophia; Mauch, Klaus; Balsa-Canto, Eva; Banga, Julio R

    2016-03-20

    Kinetic models have a great potential for metabolic engineering applications. They can be used for testing which genetic and regulatory modifications can increase the production of metabolites of interest, while simultaneously monitoring other key functions of the host organism. This work presents a methodology for increasing productivity in biotechnological processes exploiting dynamic models. It uses multi-objective dynamic optimization to identify the combination of targets (enzymatic modifications) and the degree of up- or down-regulation that must be performed in order to optimize a set of pre-defined performance metrics subject to process constraints. The capabilities of the approach are demonstrated on a realistic and computationally challenging application: a large-scale metabolic model of Chinese Hamster Ovary cells (CHO), which are used for antibody production in a fed-batch process. The proposed methodology manages to provide a sustained and robust growth in CHO cells, increasing productivity while simultaneously increasing biomass production, product titer, and keeping the concentrations of lactate and ammonia at low values. The approach presented here can be used for optimizing metabolic models by finding the best combination of targets and their optimal level of up/down-regulation. Furthermore, it can accommodate additional trade-offs and constraints with great flexibility. PMID:26826510

  16. Water-resources optimization model for Santa Barbara, California

    USGS Publications Warehouse

    Nishikawa, T.

    1998-01-01

    A simulation-optimization model has been developed for the optimal management of the city of Santa Barbara's water resources during a drought. The model, which links groundwater simulation with linear programming, has a planning horizon of 5 years. The objective is to minimize the cost of water supply subject to water demand constraints, hydraulic-head constraints to control seawater intrusion, and water capacity constraints. The decision variables are monthly water deliveries from surface water and groundwater; the state variables are hydraulic heads. The drought of 1947-51 is the city's worst drought on record, and simulated surface-water supplies for this period were used as a basis for testing optimal management of current water resources under drought conditions. The simulation-optimization model was applied using three reservoir operation rules. In addition, the model's sensitivity to demand, carryover (the storage of water in one year for use in a later year), head constraints, and capacity constraints was tested.

  17. Analytical models integrated with satellite images for optimized pest management

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The global field protection (GFP) system was developed to protect fields and optimize pest management resources, integrating satellite images for precise field demarcation with physical models of controlled-release pesticide devices to protect large fields. The GFP was implemented using a graphical user interface...

  18. Review of Optimization Methods in Groundwater Modeling and Management

    NASA Astrophysics Data System (ADS)

    Yeh, W. W.

    2001-12-01

    This paper surveys nonlinear optimization methods developed for groundwater modeling and management. The first part reviews algorithms used for model calibration, that is, the inverse problem of parameter estimation. In recent years, groundwater models have been combined with optimization models to identify the best management alternatives. Once the objectives and constraints are specified, most problems lend themselves to solution techniques developed in operations research, optimal control, and combinatorial optimization. The second part reviews methods developed for groundwater management. Algorithms and methods reviewed include quadratic programming, differential dynamic programming, nonlinear programming, mixed integer programming, stochastic programming, and non-gradient-based search algorithms. Advantages and drawbacks associated with each approach are discussed. A recent tendency has been toward combining gradient-based algorithms with non-gradient-based search algorithms, whereby a non-gradient-based search algorithm identifies a near-optimum solution and a gradient-based algorithm uses that near-optimum solution as its initial estimate for rapid convergence.

  19. To the optimization problem in minority game model

    SciTech Connect

    Yanishevsky, Vasyl

    2009-12-14

    The article presents results for the optimization problem in the minority game model in a Gaussian approximation using one-step replica symmetry breaking (1RSB). A comparison is made with the replica-symmetric (RS) approximation and with results from the literature obtained by other methods.

  1. A method of estimating optimal catchment model parameters

    NASA Astrophysics Data System (ADS)

    Ibrahim, Yaacob; Liong, Shie-Yui

    1993-09-01

    A review of a calibration method developed earlier (Ibrahim and Liong, 1992) is presented. The method generates optimal values for single events. It entails randomizing the calibration parameters over bounds such that a system response under consideration is bounded. Within the bounds, which are narrow and generated automatically, explicit response surface representation of the response is obtained using experimental design techniques and regression analysis. The optimal values are obtained by searching on the response surface for a point at which the predicted response is equal to the measured response and the value of the joint probability density function at that point in a transformed space is the highest. The method is demonstrated on a catchment in Singapore. The issue of global optimal values is addressed by applying the method on wider bounds. The results indicate that the optimal values arising from the narrow set of bounds are, indeed, global. Improvements which are designed to achieve comparably accurate estimates but with less expense are introduced. A linear response surface model is used. Two approximations of the model are studied. The first is to fit the model using data points generated from simple Monte Carlo simulation; the second is to approximate the model by a Taylor series expansion. Very good results are obtained from both approximations. Two methods of obtaining a single estimate from the individual event's estimates of the parameters are presented. The simulated and measured hydrographs of four verification storms using these estimates compare quite well.
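
    The first approximation, fitting a linear response surface to simple Monte Carlo samples, can be sketched as follows (a schematic assumption, not the authors' code): parameters are drawn uniformly within narrow bounds, a model response is computed for each draw, and ordinary least squares gives the surface coefficients.

      import numpy as np

      rng = np.random.default_rng(0)

      # Hypothetical narrow bounds for two calibration parameters.
      lo = np.array([0.8, 0.1])
      hi = np.array([1.2, 0.3])

      def model_response(p):
          # Stand-in for a rainfall-runoff simulation returning a peak flow.
          return 3.0 * p[0] + 12.0 * p[1] + 0.05 * p[0] * p[1]

      # Simple Monte Carlo sampling of the parameter space.
      P = rng.uniform(lo, hi, size=(200, 2))
      y = np.array([model_response(p) for p in P])

      # Fit a linear response surface y ~ b0 + b1*p1 + b2*p2 by least squares.
      X = np.column_stack([np.ones(len(P)), P])
      coef, *_ = np.linalg.lstsq(X, y, rcond=None)
      print(coef)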

  2. Optimal Control of a Dengue Epidemic Model with Vaccination

    NASA Astrophysics Data System (ADS)

    Rodrigues, Helena Sofia; Teresa, M.; Monteiro, T.; Torres, Delfim F. M.

    2011-09-01

    We present an SIR+ASI epidemic model to describe the interaction between the human and dengue fever mosquito populations. A control strategy in the form of vaccination is used to decrease the number of infected individuals. An optimal control approach is applied in order to find the best way to fight the disease.
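
    As a minimal sketch of the kind of dynamics involved (human SIR compartments with a constant vaccination rate u; the paper's coupled ASI mosquito compartments and the optimal control computation are omitted, and all parameter values are illustrative assumptions):

      import numpy as np
      from scipy.integrate import odeint

      beta, gamma, u = 0.5, 0.1, 0.05  # illustrative parameter values

      def sir(y, t):
          S, I, R = y
          dS = -beta * S * I - u * S        # vaccination moves S directly to R
          dI = beta * S * I - gamma * I
          dR = gamma * I + u * S
          return [dS, dI, dR]

      t = np.linspace(0, 100, 1000)
      S, I, R = odeint(sir, [0.99, 0.01, 0.0], t).T
      print("peak infected fraction:", I.max())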

  3. Effective and efficient algorithm for multiobjective optimization of hydrologic models

    NASA Astrophysics Data System (ADS)

    Vrugt, Jasper A.; Gupta, Hoshin V.; Bastidas, Luis A.; Bouten, Willem; Sorooshian, Soroosh

    2003-08-01

    Practical experience with the calibration of hydrologic models suggests that any single-objective function, no matter how carefully chosen, is often inadequate to properly measure all of the characteristics of the observed data deemed to be important. One strategy to circumvent this problem is to define several optimization criteria (objective functions) that measure different (complementary) aspects of the system behavior and to use multicriteria optimization to identify the set of nondominated, efficient, or Pareto optimal solutions. In this paper, we present an efficient and effective Markov Chain Monte Carlo sampler, entitled the Multiobjective Shuffled Complex Evolution Metropolis (MOSCEM) algorithm, which is capable of solving the multiobjective optimization problem for hydrologic models. MOSCEM is an improvement over the Shuffled Complex Evolution Metropolis (SCEM-UA) global optimization algorithm, using the concept of Pareto dominance (rather than direct single-objective function evaluation) to evolve the initial population of points toward a set of solutions stemming from a stable distribution (Pareto set). The efficacy of the MOSCEM-UA algorithm is compared with that of the original MOCOM-UA algorithm for three hydrologic modeling case studies of increasing complexity.
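
    Pareto dominance, the concept MOSCEM evolves its population with, is easy to illustrate: a point is nondominated if no other point is at least as good in every objective and strictly better in at least one. A toy NumPy filter follows (a generic illustration, not the MOSCEM sampler itself; the two objectives are hypothetical error measures):

      import numpy as np

      rng = np.random.default_rng(6)

      # Candidate parameter sets scored on two conflicting objectives to be
      # minimized, e.g. peak-flow error vs. low-flow error.
      F = rng.uniform(size=(200, 2))

      def nondominated(F):
          keep = []
          for i, f in enumerate(F):
              # g dominates f iff g <= f in all objectives and g < f in one.
              dominated = np.any(np.all(F <= f, axis=1) & np.any(F < f, axis=1))
              if not dominated:
                  keep.append(i)
          return np.array(keep)

      pareto = nondominated(F)
      print(len(pareto), "Pareto-optimal points out of", len(F))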

  4. Hyperopt: a Python library for model selection and hyperparameter optimization

    NASA Astrophysics Data System (ADS)

    Bergstra, James; Komer, Brent; Eliasmith, Chris; Yamins, Dan; Cox, David D.

    2015-01-01

    Sequential model-based optimization (also known as Bayesian optimization) is one of the most efficient methods (per function evaluation) of function minimization. This efficiency makes it appropriate for optimizing the hyperparameters of machine learning algorithms that are slow to train. The Hyperopt library provides algorithms and parallelization infrastructure for performing hyperparameter optimization (model selection) in Python. This paper presents an introductory tutorial on the usage of the Hyperopt library, including the description of search spaces, minimization (in serial and parallel), and the analysis of the results collected in the course of minimization. This paper also gives an overview of Hyperopt-Sklearn, a software project that provides automatic algorithm configuration of the Scikit-learn machine learning library. Following Auto-Weka, we take the view that the choice of classifier and even the choice of preprocessing module can be taken together to represent a single large hyperparameter optimization problem. We use Hyperopt to define a search space that encompasses many standard components (e.g. SVM, RF, KNN, PCA, TFIDF) and common patterns of composing them together. We demonstrate, using search algorithms in Hyperopt and standard benchmarking data sets (MNIST, 20-newsgroups, convex shapes), that searching this space is practical and effective. In particular, we improve on best-known scores for the model space for both MNIST and convex shapes. The paper closes with some discussion of ongoing and future work.
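
    The core Hyperopt workflow is compact. The sketch below uses the library's documented fmin/tpe/hp interface; the quadratic objective is an arbitrary stand-in for a real validation loss.

      from hyperopt import fmin, tpe, hp

      # Arbitrary stand-in objective; in practice this would train a model
      # and return a validation loss for the sampled hyperparameter.
      def objective(x):
          return (x - 3.0) ** 2

      space = hp.uniform("x", -10, 10)   # one-dimensional search space

      best = fmin(fn=objective, space=space, algo=tpe.suggest, max_evals=100)
      print(best)   # e.g. {'x': 2.97...}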

  5. Electrochemical model based charge optimization for lithium-ion batteries

    NASA Astrophysics Data System (ADS)

    Pramanik, Sourav; Anwar, Sohel

    2016-05-01

    In this paper, we propose the design of a novel optimal strategy for charging the lithium-ion battery based on an electrochemical battery model that is aimed at improved performance. A performance index is defined that minimizes the charging effort along with the deviation from the rated maximum thresholds for cell temperature and charging current. The method proposed in this paper aims at achieving a faster charging rate while maintaining safe limits for various battery parameters. Safe operation of the battery is achieved by including the battery bulk temperature as a control component in the performance index, which is of critical importance for electric vehicles. Another important aspect of the proposed performance objective is that it allows higher charging rates without compromising the internal electrochemical kinetics of the battery, preventing abusive conditions and thereby improving long-term durability. A more realistic model, based on battery electrochemistry, has been used for the design of the optimal algorithm as opposed to the conventional equivalent circuit models. To solve the optimization problem, Pontryagin's principle has been used, which is very effective for constrained optimization problems with both state and input constraints. Simulation results show that the proposed optimal charging algorithm is capable of shortening the charging time of a lithium-ion cell while maintaining the temperature constraint when compared with standard constant-current charging. The designed method also maintains the internal states within limits that avoid abusive operating conditions.
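
    The trade-off the abstract describes can be felt in a toy forward simulation (not the paper's electrochemical model or its Pontryagin solution; the charge/thermal model and all constants below are invented for illustration): constant-current charging finishes quickly but overshoots the temperature limit, while a temperature-capped policy respects it at the cost of charging time.

      import numpy as np

      # Toy model: SOC integrates current; temperature rises with I^2 losses
      # and cools toward ambient. A simple policy caps current near T_max.
      dt, T_amb, T_max = 1.0, 25.0, 40.0
      R, C_th, k_cool, Q = 1.0, 20.0, 0.01, 3600.0   # illustrative constants

      def charge(policy):
          soc, T, t = 0.0, T_amb, 0.0
          while soc < 1.0:
              I = policy(T)
              soc += I * dt / Q
              T += (R * I**2 / C_th - k_cool * (T - T_amb)) * dt
              t += dt
          return t, T

      cc = lambda T: 4.0                                   # constant current
      capped = lambda T: 4.0 if T < T_max - 1.0 else 1.0   # back off near T_max
      print("CC (time s, final T):", charge(cc))
      print("capped (time s, final T):", charge(capped))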

  6. Metabolic modeling of Saccharomyces cerevisiae using the optimal control of homeostasis: a cybernetic model definition.

    PubMed

    Giuseppin, M L; van Riel, N A

    2000-01-01

    A model is presented to describe the observed behavior of microorganisms that aim at metabolic homeostasis while growing and adapting to their environment in an optimal way. The cellular metabolism is seen as a network with a multiple-controller system with both feedback and feedforward control, i.e., a model based on dynamic optimal metabolic control. The dynamic network consists of aggregated pathways, each having a control setpoint for the metabolic states at a given growth rate. This set of strategies of the cell forms a true cybernetic model with a minimal number of assumptions. The cellular strategies and constraints were derived from metabolic flux analysis using an identified, biochemically relevant stoichiometry matrix derived from experimental data on the cellular composition of continuous cultures of Saccharomyces cerevisiae. Based on these data a cybernetic model was developed to study its dynamic behavior. The growth rate of the cell is determined by the structural compounds and fluxes of compounds related to central metabolism. In contrast to many other cybernetic models, the minimal model does not contain any assumed internal kinetic parameters or interactions. This necessitates the use of a stepwise integration with an optimization of the fluxes at every time interval. Some examples of the behavior of this model are given with respect to steady states and pulse responses. This model is very suitable for semiquantitatively describing the dynamics of global cellular metabolism and may form a useful framework for including structured and more detailed kinetic models. PMID:10935932

  7. Parallel Optimization of 3D Cardiac Electrophysiological Model Using GPU

    PubMed Central

    Xia, Yong; Wang, Kuanquan; Zhang, Henggui

    2015-01-01

    Large-scale 3D virtual heart model simulations are highly demanding in computational resources. This poses a major challenge to traditional CPU-based computing environments, which often cannot meet the full computational demand or are not easily available due to expensive costs. The GPU as a parallel computing environment therefore provides an alternative for solving the large-scale computational problems of whole-heart modeling. In this study, using a 3D sheep atrial model as a test bed, we developed a GPU-based simulation algorithm to simulate the conduction of electrical excitation waves in the 3D atria. In the GPU algorithm, the multicellular tissue model was split into two components: the single-cell model (ordinary differential equations) and the diffusion term of the monodomain model (partial differential equation). Such a decoupling enabled realization of the GPU parallel algorithm. Furthermore, several optimization strategies were proposed based on the features of the virtual heart model, which enabled a 200-fold speedup as compared to a CPU implementation. In conclusion, an optimized GPU algorithm has been developed that provides an economic and powerful platform for 3D whole-heart simulations. PMID:26581957
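
    The decoupling described, stepping the single-cell ODEs separately from the PDE diffusion term, is the standard operator-splitting pattern. Below is a minimal NumPy sketch on a 1D fiber with a toy cell model (the study's sheep atrial model and CUDA kernels are far more elaborate; the kinetics and constants here are invented):

      import numpy as np

      # Operator splitting for a 1D monodomain-style model: each step advances
      # the local cell ODE, then the diffusion PDE, on a fiber of N cells.
      N, dt, dx, D = 200, 0.01, 0.1, 0.1
      v = np.zeros(N)
      v[:10] = 1.0  # initial excitation at one end

      def cell_ode(v, dt):
          # Toy cubic excitable kinetics standing in for the atrial cell model.
          return v + dt * 5.0 * v * (1.0 - v) * (v - 0.1)

      for _ in range(2000):
          v = cell_ode(v, dt)                       # step 1: pointwise ODEs
          lap = np.zeros(N)
          lap[1:-1] = (v[2:] - 2 * v[1:-1] + v[:-2]) / dx**2
          v = v + dt * D * lap                      # step 2: explicit diffusion

      print((v > 0.5).sum(), "cells excited: the front has propagated")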

  8. Optimal thermalization in a shell model of homogeneous turbulence

    NASA Astrophysics Data System (ADS)

    Thalabard, Simon; Turkington, Bruce

    2016-04-01

    We investigate the turbulence-induced dissipation of the large scales in a statistically homogeneous flow using an ‘optimal closure’ that one of us (BT) has recently expounded in the context of Hamiltonian dynamics. This statistical closure employs a Gaussian model for the turbulent scales, with correspondingly vanishing third cumulant, and yet it captures an intrinsic damping. The key to this apparent paradox lies in a clear distinction between true ensemble averages and their proxies, most easily grasped when one works directly with the Liouville equation rather than the cumulant hierarchy. We focus on a simple problem for which the optimal closure can be fully and exactly worked out: the relaxation, arbitrarily far from equilibrium, of a single energy shell towards Gibbs equilibrium in an inviscid shell model of 3D turbulence. The predictions of the optimal closure are validated against DNS and contrasted with those derived from the EDQNM closure.

  9. Modeling of Biological Intelligence for SCM System Optimization

    PubMed Central

    Chen, Shengyong; Zheng, Yujun; Cattani, Carlo; Wang, Wanliang

    2012-01-01

    This article summarizes methods from biological intelligence for the modeling and optimization of supply chain management (SCM) systems, including genetic algorithms, evolutionary programming, differential evolution, swarm intelligence, artificial immune systems, and other biologically inspired methods. An SCM system is adaptive, dynamic, open, and self-organizing, maintained by flows of information, materials, goods, funds, and energy. Traditional methods for modeling and optimizing complex SCM systems require huge amounts of computing resources, and biological intelligence-based solutions can often provide valuable alternatives for efficiently solving such problems. The paper summarizes recent methods for the design and optimization of SCM systems, covering the most widely used genetic algorithms and other evolutionary algorithms. PMID:22162724
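
    As a flavor of these methods, here is a minimal genetic algorithm in NumPy (a generic sketch; the quadratic cost is an invented stand-in for an SCM objective such as total logistics cost):

      import numpy as np

      rng = np.random.default_rng(1)

      def cost(x):
          # Toy SCM-style objective: shipment quantities vs. a demand target.
          return np.sum((x - np.array([3.0, 1.0, 4.0])) ** 2)

      pop = rng.uniform(0, 5, size=(40, 3))          # initial population
      for gen in range(100):
          fit = np.array([cost(ind) for ind in pop])
          parents = pop[np.argsort(fit)[:20]]        # truncation selection
          # One-point crossover between random parent pairs.
          i, j = rng.integers(0, 20, 40), rng.integers(0, 20, 40)
          cut = rng.integers(1, 3, 40)
          children = np.where(np.arange(3) < cut[:, None], parents[i], parents[j])
          children += rng.normal(0, 0.1, children.shape)   # Gaussian mutation
          pop = children

      print(pop[np.argmin([cost(ind) for ind in pop])])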

  10. Modeling of biological intelligence for SCM system optimization.

    PubMed

    Chen, Shengyong; Zheng, Yujun; Cattani, Carlo; Wang, Wanliang

    2012-01-01

    This article summarizes methods from biological intelligence for the modeling and optimization of supply chain management (SCM) systems, including genetic algorithms, evolutionary programming, differential evolution, swarm intelligence, artificial immune systems, and other biologically inspired methods. An SCM system is adaptive, dynamic, open, and self-organizing, maintained by flows of information, materials, goods, funds, and energy. Traditional methods for modeling and optimizing complex SCM systems require huge amounts of computing resources, and biological intelligence-based solutions can often provide valuable alternatives for efficiently solving such problems. The paper summarizes recent methods for the design and optimization of SCM systems, covering the most widely used genetic algorithms and other evolutionary algorithms.

  11. Asymmetric optimal-velocity car-following model

    NASA Astrophysics Data System (ADS)

    Xu, Xihua; Pang, John; Monterola, Christopher

    2015-10-01

    Taking the asymmetric character of the velocity differences of vehicles into account, we present an asymmetric optimal velocity model for car-following theory. The asymmetry between acceleration and deceleration is represented by an exponential function with an asymmetry factor, which agrees with published experiments. This model avoids the unrealistically high accelerations that appear in previous models when the velocity difference becomes large. The model is simple and has only two independent parameters. The linear stability condition is derived, and a phase transition of the traffic flow appears beyond the critical density. The strength of interaction between clusters is shown to increase with the asymmetry factor in our model.

  12. Optimized volume models of earthquake-triggered landslides.

    PubMed

    Xu, Chong; Xu, Xiwei; Shen, Lingling; Yao, Qi; Tan, Xibin; Kang, Wenjun; Ma, Siyuan; Wu, Xiyan; Cai, Juntao; Gao, Mingxing; Li, Kang

    2016-01-01

    In this study, we proposed three optimized models for calculating the total volume of landslides triggered by the 2008 Wenchuan, China Mw 7.9 earthquake. First, we calculated the volume of each deposit of 1,415 landslides triggered by the quake based on pre- and post-quake DEMs at 20 m resolution. The samples were used to fit the conventional landslide "volume-area" power law relationship and the 3 optimized models we proposed, respectively. Two data fitting methods, i.e. log-transformed linear and original-data-based nonlinear least squares, were employed for the 4 models. Results show that original-data-based nonlinear least squares combined with an optimized model considering length, width, height, lithology, slope, peak ground acceleration, and slope aspect shows the best performance. This model was subsequently applied to the database of landslides triggered by the quake, except for the two largest ones with known volumes. It indicates that the total volume of the 196,007 landslides is about 1.2 × 10^10 m^3 in deposit materials and 1 × 10^10 m^3 in source areas, respectively. The result from the relationship between quake magnitude and total landslide volume per earthquake is much less than that from this study, which suggests the need to update the power-law relationship. PMID:27404212
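
    The two fitting approaches the abstract names can be contrasted in a few lines (a generic sketch with synthetic data, not the study's landslide inventory): a log-log linear fit versus a direct nonlinear least-squares fit of V = a * A^b.

      import numpy as np
      from scipy.optimize import curve_fit

      rng = np.random.default_rng(2)

      # Synthetic landslide areas (m^2) and volumes following V = a * A^b.
      A = 10 ** rng.uniform(2, 5, 500)
      V = 0.05 * A ** 1.3 * rng.lognormal(0, 0.3, 500)   # multiplicative noise

      # Method 1: log-transformed linear least squares.
      b1, loga = np.polyfit(np.log(A), np.log(V), 1)
      print("log-linear:", np.exp(loga), b1)

      # Method 2: nonlinear least squares on the original data.
      popt, _ = curve_fit(lambda A, a, b: a * A**b, A, V, p0=[0.05, 1.3])
      print("nonlinear :", popt)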

  13. Optimized volume models of earthquake-triggered landslides.

    PubMed

    Xu, Chong; Xu, Xiwei; Shen, Lingling; Yao, Qi; Tan, Xibin; Kang, Wenjun; Ma, Siyuan; Wu, Xiyan; Cai, Juntao; Gao, Mingxing; Li, Kang

    2016-07-12

    In this study, we proposed three optimized models for calculating the total volume of landslides triggered by the 2008 Wenchuan, China Mw 7.9 earthquake. First, we calculated the volume of each deposit of 1,415 landslides triggered by the quake based on pre- and post-quake DEMs at 20 m resolution. The samples were used to fit the conventional landslide "volume-area" power law relationship and the 3 optimized models we proposed, respectively. Two data fitting methods, i.e. log-transformed linear and original-data-based nonlinear least squares, were employed for the 4 models. Results show that original-data-based nonlinear least squares combined with an optimized model considering length, width, height, lithology, slope, peak ground acceleration, and slope aspect shows the best performance. This model was subsequently applied to the database of landslides triggered by the quake, except for the two largest ones with known volumes. It indicates that the total volume of the 196,007 landslides is about 1.2 × 10^10 m^3 in deposit materials and 1 × 10^10 m^3 in source areas, respectively. The result from the relationship between quake magnitude and total landslide volume per earthquake is much less than that from this study, which suggests the need to update the power-law relationship.

  14. Optimized volume models of earthquake-triggered landslides

    NASA Astrophysics Data System (ADS)

    Xu, Chong; Xu, Xiwei; Shen, Lingling; Yao, Qi; Tan, Xibin; Kang, Wenjun; Ma, Siyuan; Wu, Xiyan; Cai, Juntao; Gao, Mingxing; Li, Kang

    2016-07-01

    In this study, we proposed three optimized models for calculating the total volume of landslides triggered by the 2008 Wenchuan, China Mw 7.9 earthquake. First, we calculated the volume of each deposit of 1,415 landslides triggered by the quake based on pre- and post-quake DEMs at 20 m resolution. The samples were used to fit the conventional landslide “volume-area” power law relationship and the 3 optimized models we proposed, respectively. Two data fitting methods, i.e. log-transformed linear and original-data-based nonlinear least squares, were employed for the 4 models. Results show that original-data-based nonlinear least squares combined with an optimized model considering length, width, height, lithology, slope, peak ground acceleration, and slope aspect shows the best performance. This model was subsequently applied to the database of landslides triggered by the quake, except for the two largest ones with known volumes. It indicates that the total volume of the 196,007 landslides is about 1.2 × 10^10 m^3 in deposit materials and 1 × 10^10 m^3 in source areas, respectively. The result from the relationship between quake magnitude and total landslide volume per earthquake is much less than that from this study, which suggests the need to update the power-law relationship.

  15. Optimized volume models of earthquake-triggered landslides

    PubMed Central

    Xu, Chong; Xu, Xiwei; Shen, Lingling; Yao, Qi; Tan, Xibin; Kang, Wenjun; Ma, Siyuan; Wu, Xiyan; Cai, Juntao; Gao, Mingxing; Li, Kang

    2016-01-01

    In this study, we proposed three optimized models for calculating the total volume of landslides triggered by the 2008 Wenchuan, China Mw 7.9 earthquake. First, we calculated the volume of each deposit of 1,415 landslides triggered by the quake based on pre- and post-quake DEMs at 20 m resolution. The samples were used to fit the conventional landslide “volume-area” power law relationship and the 3 optimized models we proposed, respectively. Two data fitting methods, i.e. log-transformed linear and original-data-based nonlinear least squares, were employed for the 4 models. Results show that original-data-based nonlinear least squares combined with an optimized model considering length, width, height, lithology, slope, peak ground acceleration, and slope aspect shows the best performance. This model was subsequently applied to the database of landslides triggered by the quake, except for the two largest ones with known volumes. It indicates that the total volume of the 196,007 landslides is about 1.2 × 10^10 m^3 in deposit materials and 1 × 10^10 m^3 in source areas, respectively. The result from the relationship between quake magnitude and total landslide volume per earthquake is much less than that from this study, which suggests the need to update the power-law relationship. PMID:27404212

  16. Optimal control in a model of malaria with differential susceptibility

    NASA Astrophysics Data System (ADS)

    Hincapié, Doracelly; Ospina, Juan

    2014-06-01

    A malaria model with differential susceptibility is analyzed using the optimal control technique. In the model the human population is classified as susceptible, infected, and recovered. Susceptibility is assumed to depend on genetic, physiological, or social characteristics that vary between individuals. The model is described by a system of differential equations that relate the human and vector populations, so that the infection is transmitted to humans by vectors and to vectors by humans. The model is analyzed using the optimal control method, where the controls are the use of insecticide-treated nets and educational campaigns, and the optimality criterion is to minimize the number of infected humans while keeping the cost as low as possible. The first goal is to determine the effects of differential susceptibility on the proposed control mechanism; the second goal is to determine the algebraic form of the basic reproductive number of the model. All computations are performed using computer algebra, specifically Maple. It is claimed that the analytical results obtained are important for the design and implementation of control measures for malaria. Future investigations are suggested, such as the application of the method to other vector-borne diseases such as dengue or yellow fever, and the possible use of free computer algebra software such as Maxima.
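
    The basic reproductive number can be derived symbolically with the next-generation-matrix method; the SymPy sketch below does this for a simple Ross-Macdonald-style malaria model, which is an assumed, much simpler stand-in for the paper's differential-susceptibility model.

      import sympy as sp

      # Ross-Macdonald-style stand-in: infected human fraction I_h and
      # infected vector fraction I_v; all symbols assumed positive.
      a, b, c, m, r, g = sp.symbols("a b c m r g", positive=True)

      # Linearization at the disease-free equilibrium:
      # F holds new-infection terms, V holds recovery/mortality transitions.
      F = sp.Matrix([[0, a * b * m], [a * c, 0]])
      V = sp.Matrix([[r, 0], [0, g]])

      K = F * V.inv()                       # next-generation matrix
      R0 = [e for e in K.eigenvals() if e.is_positive][0]
      print(R0)                             # sqrt(a**2*b*c*m/(g*r))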

  17. Aeroelastic Optimization Study Based on the X-56A Model

    NASA Technical Reports Server (NTRS)

    Li, Wesley W.; Pak, Chan-Gi

    2014-01-01

    One way to increase aircraft fuel efficiency is to reduce structural weight while maintaining adequate structural airworthiness, both statically and aeroelastically. A design process which incorporates an object-oriented multidisciplinary design, analysis, and optimization (MDAO) tool and the aeroelastic effects of high-fidelity finite element models to characterize the design space was successfully developed and established. This paper presents two multidisciplinary design optimization studies using an object-oriented MDAO tool developed at NASA Armstrong Flight Research Center. The first study demonstrates the use of aeroelastic tailoring concepts to minimize structural weight while meeting design requirements including strength, buckling, and flutter. Such an approach exploits the anisotropic capabilities of the fiber composite materials chosen for this analytical exercise through the ply stacking sequence. A hybrid and discretization optimization approach improves the accuracy and computational efficiency of a global optimization algorithm. The second study presents a flutter mass-balancing optimization for the fabricated flexible wing of the X-56A model, since a desired flutter speed band is required for the active flutter suppression demonstration during flight testing. The results of the second study provide guidance to modify the wing design and move the design flutter speeds back into the flight envelope so that the original objective of the X-56A flight test can be accomplished successfully. The second case also demonstrates that the object-oriented MDAO tool can handle multiple analytical configurations in a single optimization run.

  18. A neural network model of reliably optimized spike transmission.

    PubMed

    Samura, Toshikazu; Ikegaya, Yuji; Sato, Yasuomi D

    2015-06-01

    We studied the detailed structure of a neuronal network model in which the spontaneous spike activity is optimized to match experimental data, and we discuss the reliability of the optimized spike transmission. Two stochastic properties of the spontaneous activity were calculated: the spike-count rate and the synchrony size. The synchrony size, expected to be an important factor for optimization of spike transmission in the network, represents the percentage of observed coactive neurons within a time bin, whose probability approximately follows a power law. We systematically investigated how these stochastic properties could be matched to those calculated from the experimental data in terms of the log-normally distributed synaptic weights between excitatory and inhibitory neurons and the synaptic background activity induced by input current noise in the network model. To ensure reliably optimized spike transmission, the synchrony size and the spike-count rate were optimized simultaneously. This required appropriately balanced log-normal distributions of synaptic weights between excitatory and inhibitory neurons and appropriately amplified synaptic background activity. Our results suggest that inhibitory neurons with a hub-like structure, driven by intensive feedback from excitatory neurons, are a key factor in the simultaneous optimization of the spike-count rate and synchrony size, regardless of the different spiking types of excitatory and inhibitory neurons.

  19. Health benefit modelling and optimization of vehicular pollution control strategies

    NASA Astrophysics Data System (ADS)

    Sonawane, Nayan V.; Patil, Rashmi S.; Sethi, Virendra

    2012-12-01

    This study asserts that the evaluation of pollution reduction strategies should be approached on the basis of health benefits. The framework presented could be used for decision making on the basis of cost effectiveness when the strategies are applied concurrently. Several vehicular pollution control strategies have been proposed in the literature for effective management of urban air pollution. The effectiveness of these strategies has mostly been studied one at a time, on the basis of the change in pollution concentration. The adequacy and practicality of such an approach is studied in the present work, and the respective benefits of these strategies are assessed when they are implemented simultaneously. An integrated model has been developed which can be used as a tool for optimal prioritization of various pollution management strategies. The model estimates the health benefits associated with specific control strategies. ISC-AERMOD View has been used to provide the cause-effect relation between control options and change in ambient air quality. BenMAP, developed by the U.S. EPA, has been applied for estimation of the health and economic benefits associated with various management strategies. Valuation of health benefits has been done for the impact indicators of premature mortality, hospital admissions, and respiratory syndrome. An optimization model has been developed to maximize overall social benefits by determining optimized percentage implementations for multiple strategies. The model has been applied to a suburban region of Mumbai for the vehicular sector. Several control scenarios have been considered, such as revised emission standards and electric, CNG, LPG, and hybrid vehicles. Reductions in concentration and the resultant health benefits for the pollutants CO, NOx, and particulate matter are estimated for the different control scenarios. Finally, an optimization model has been applied to determine optimized percentage implementation of specific
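
    The optimization step, choosing percentage implementation levels for several strategies to maximize total health benefit under a budget, has the shape of a small linear program. A hypothetical sketch with made-up benefit and cost coefficients (not the study's data):

      import numpy as np
      from scipy.optimize import linprog

      # Hypothetical per-strategy health benefits and costs (arbitrary units)
      # for, e.g., emission standards, CNG conversion, and hybrid incentives.
      benefit = np.array([50.0, 30.0, 20.0])
      cost = np.array([40.0, 15.0, 10.0])
      budget = 45.0

      # Maximize benefit @ x subject to cost @ x <= budget and 0 <= x <= 1,
      # where x_i is the fractional implementation level of strategy i.
      res = linprog(c=-benefit, A_ub=[cost], b_ub=[budget], bounds=[(0, 1)] * 3)
      print(res.x, -res.fun)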

  20. A model for HIV/AIDS pandemic with optimal control

    NASA Astrophysics Data System (ADS)

    Sule, Amiru; Abdullah, Farah Aini

    2015-05-01

    Human immunodeficiency virus and acquired immune deficiency syndrome (HIV/AIDS) is pandemic. It has affected nearly 60 million people since the detection of the disease in 1981. In this paper a basic deterministic HIV/AIDS model with a mass-action incidence function is developed. Stability analysis is carried out, and the disease-free equilibrium of the basic model is found to be locally asymptotically stable whenever the threshold parameter R0 is less than one, and unstable otherwise. The model is extended by introducing two optimal control strategies, namely CD4 counts and treatment for the infectives, using optimal control theory. Numerical simulation is carried out to illustrate the analytic results.

  1. Optimal Observation Network Design for Model Discrimination using Information Theory and Bayesian Model Averaging

    NASA Astrophysics Data System (ADS)

    Pham, H. V.; Tsai, F. T. C.

    2014-12-01

    Groundwater systems are complex and subject to multiple interpretations and conceptualizations due to a lack of sufficient information. As a result, multiple conceptual models are often developed, and their mean predictions are preferably used to avoid the biased predictions of a single conceptual model. Yet considering too many conceptual models may lead to high prediction uncertainty and may defeat the purpose of model development. In order to reduce the number of models, an optimal observation network design is proposed based on maximizing the Kullback-Leibler (KL) information to discriminate competing models. The KL discrimination function derived by Box and Hill [1967] for one additional observation datum at a time is expanded to account for multiple independent spatiotemporal observations. The Bayesian model averaging (BMA) method is used to incorporate existing data and to quantify future observation uncertainty arising from conceptual and parametric uncertainties in the discrimination function. To consider the future observation uncertainty, Monte Carlo realizations of BMA-predicted future observations are used to calculate the mean and variance of the posterior model probabilities of the competing models. The goal of the optimal observation network design is to find the number and location of observation wells and sampling rounds such that the highest posterior model probability of a model is larger than a desired probability criterion (e.g., 95%). The optimal observation network design is applied to a groundwater study in the Baton Rouge area, Louisiana, to collect new groundwater heads from USGS wells. The considered sources of uncertainty that create multiple groundwater models are the geological architecture, the boundary condition, and the fault permeability architecture. All possible design solutions are enumerated using high-performance computing systems. Results show that total model variance (the sum of within-model variance and between-model
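
    The Box-Hill discrimination idea can be illustrated in miniature (a schematic sketch, not the study's BMA machinery; all numbers are invented): for each candidate well, compare the two models' Gaussian predictive distributions and pick the site where they disagree most in KL terms.

      import numpy as np

      def kl_gauss(m1, s1, m2, s2):
          # Closed-form KL divergence KL( N(m1, s1^2) || N(m2, s2^2) ).
          return np.log(s2 / s1) + (s1**2 + (m1 - m2) ** 2) / (2 * s2**2) - 0.5

      # Hypothetical predictive means/std-devs of two rival models at 4 wells.
      mA = np.array([10.0, 12.0, 9.5, 11.0]); sA = np.array([1.0, 0.8, 1.2, 0.9])
      mB = np.array([10.2, 14.5, 9.6, 12.8]); sB = np.array([1.1, 0.9, 1.1, 1.0])

      # Symmetrized KL as a discrimination score; observe where it is largest.
      score = kl_gauss(mA, sA, mB, sB) + kl_gauss(mB, sB, mA, sA)
      print("most discriminating well:", int(np.argmax(score)))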

  2. Experimental Verification of Structural-Acoustic Modelling and Design Optimization

    NASA Astrophysics Data System (ADS)

    MARBURG, S.; BEER, H.-J.; GIER, J.; HARDTKE, H.-J.; RENNERT, R.; PERRET, F.

    2002-05-01

    A number of papers have been published on the simulation of structural-acoustic design optimization. However, extensive work is required to verify these results in practical applications. Herein, a steel box of 1.0 × 1.1 × 1.5 m with an external beam structure welded on three surface plates was investigated. This investigation included experimental modal analysis and experimental measurements of certain noise transfer functions (sound pressure at points inside the box due to force excitation at the beam structure). Using these experimental data, the finite element model of the structure was tuned to provide similar results. With a first structural mode at less than 20 Hz, the reliable frequency range was identified as up to about 60 Hz. Obviously, the finite element model could not be further improved only by mesh refinement. The tuning process is explained in detail, since a number of changes helped to improve the model while others did not. Although the box might be expected to be a rather simple structure, it can be considered a complex structure for simulation purposes. A defined modification of the physical model verified the simulation model. In a final step, the optimal location of stiffening beam structures was predicted by simulation, and their effect on the noise transfer function was experimentally verified. This paper critically discusses modelling techniques that are applied for the structural-acoustic simulation of sedan bodies.

  3. Parameter optimization in differential geometry based solvation models

    PubMed Central

    Wang, Bao; Wei, G. W.

    2015-01-01

    Differential geometry (DG) based solvation models are a new class of variational implicit solvent approaches that are able to avoid unphysical solvent-solute boundary definitions and associated geometric singularities, and to dynamically couple polar and non-polar interactions in a self-consistent framework. Our earlier study indicates that the DG based non-polar solvation model outperforms other methods in non-polar solvation energy predictions. However, the DG based full solvation model has not shown its superiority in solvation analysis, due to its difficulty of parametrization, which must ensure the stability of the solution of the strongly coupled nonlinear Laplace-Beltrami and Poisson-Boltzmann equations. In this work, we introduce new parameter learning algorithms based on perturbation and convex optimization theories to stabilize the numerical solution and thus achieve an optimal parametrization of the DG based solvation models. An interesting feature of the present DG based solvation model is that it provides accurate solvation free energy predictions for both polar and non-polar molecules in a unified formulation. Extensive numerical experiments demonstrate that the present DG based solvation model delivers some of the most accurate predictions of the solvation free energies for a large number of molecules. PMID:26450304

  4. Parameter optimization in differential geometry based solvation models.

    PubMed

    Wang, Bao; Wei, G W

    2015-10-01

    Differential geometry (DG) based solvation models are a new class of variational implicit solvent approaches that are able to avoid unphysical solvent-solute boundary definitions and associated geometric singularities, and to dynamically couple polar and non-polar interactions in a self-consistent framework. Our earlier study indicates that the DG based non-polar solvation model outperforms other methods in non-polar solvation energy predictions. However, the DG based full solvation model has not shown its superiority in solvation analysis, due to its difficulty of parametrization, which must ensure the stability of the solution of the strongly coupled nonlinear Laplace-Beltrami and Poisson-Boltzmann equations. In this work, we introduce new parameter learning algorithms based on perturbation and convex optimization theories to stabilize the numerical solution and thus achieve an optimal parametrization of the DG based solvation models. An interesting feature of the present DG based solvation model is that it provides accurate solvation free energy predictions for both polar and non-polar molecules in a unified formulation. Extensive numerical experiments demonstrate that the present DG based solvation model delivers some of the most accurate predictions of the solvation free energies for a large number of molecules.

  5. Linear versus quadratic portfolio optimization model with transaction cost

    NASA Astrophysics Data System (ADS)

    Razak, Norhidayah Bt Ab; Kamil, Karmila Hanim; Elias, Siti Masitah

    2014-06-01

    Optimization models have become one of the decision-making tools in investment. Hence, it is always a big challenge for investors to select the model that best fulfills their investment goals with respect to risk and return. In this paper we aim to discuss and compare the portfolio allocation and performance generated by quadratic and linear portfolio optimization models, namely the Markowitz and Maximin models respectively. The application of these models has proven significant and popular. However, transaction cost has been debated as one of the important aspects that should be considered in portfolio reallocation, as portfolio return can be significantly reduced when transaction costs are taken into consideration. Therefore, recognizing the importance of considering transaction costs when calculating portfolio returns, we formulate this paper using data from Shariah-compliant securities listed on Bursa Malaysia. It is expected that the results will justify the advantage of one model over the other and shed some light on the quest to find the best decision-making tool in investment for individual investors.
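
    A mean-variance (Markowitz-style) allocation with a proportional transaction-cost penalty can be written as a small convex program. The sketch below uses CVXPY with made-up data; it is a generic illustration of the quadratic model with transaction costs, not the paper's exact formulation.

      import cvxpy as cp
      import numpy as np

      rng = np.random.default_rng(7)
      n = 4
      mu = np.array([0.08, 0.10, 0.06, 0.12])        # expected returns (made up)
      G = rng.normal(size=(n, n))
      Sigma = G @ G.T / 10 + np.eye(n) * 0.01        # positive-definite covariance
      w0 = np.ones(n) / n                            # current holdings
      tc, lam = 0.002, 5.0                           # cost rate, risk aversion

      w = cp.Variable(n)
      net_return = mu @ w - tc * cp.norm1(w - w0)    # return net of transaction cost
      risk = cp.quad_form(w, Sigma)
      prob = cp.Problem(cp.Maximize(net_return - lam * risk),
                        [cp.sum(w) == 1, w >= 0])
      prob.solve()
      print(w.value)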

  6. The PDB_REDO server for macromolecular structure model optimization.

    PubMed

    Joosten, Robbie P; Long, Fei; Murshudov, Garib N; Perrakis, Anastassis

    2014-07-01

    The refinement and validation of a crystallographic structure model is the last step before the coordinates and the associated data are submitted to the Protein Data Bank (PDB). The success of the refinement procedure is typically assessed by validating the models against geometrical criteria and the diffraction data, and is an important step in ensuring the quality of the PDB public archive [Read et al. (2011), Structure, 19, 1395-1412]. The PDB_REDO procedure aims for 'constructive validation', aspiring to consistent and optimal refinement parameterization and pro-active model rebuilding, not only correcting errors but striving for optimal interpretation of the electron density. A web server for PDB_REDO has been implemented, allowing thorough, consistent and fully automated optimization of the refinement procedure in REFMAC and partial model rebuilding. The goal of the web server is to help practicing crystallographers to improve their model prior to submission to the PDB. For this, additional steps were implemented in the PDB_REDO pipeline, both in the refinement procedure, e.g. testing of resolution limits and k-fold cross-validation for small test sets, and as new validation criteria, e.g. the density-fit metrics implemented in EDSTATS and ligand validation as implemented in YASARA. Innovative ways to present the refinement and validation results to the user are also described, which together with auto-generated Coot scripts can guide users to subsequent model inspection and improvement. It is demonstrated that using the server can lead to substantial improvement of structure models before they are submitted to the PDB. PMID:25075342

  7. The PDB_REDO server for macromolecular structure model optimization

    PubMed Central

    Joosten, Robbie P.; Long, Fei; Murshudov, Garib N.; Perrakis, Anastassis

    2014-01-01

    The refinement and validation of a crystallographic structure model is the last step before the coordinates and the associated data are submitted to the Protein Data Bank (PDB). The success of the refinement procedure is typically assessed by validating the models against geometrical criteria and the diffraction data, and is an important step in ensuring the quality of the PDB public archive [Read et al. (2011), Structure, 19, 1395–1412]. The PDB_REDO procedure aims for ‘constructive validation’, aspiring to consistent and optimal refinement parameterization and pro-active model rebuilding, not only correcting errors but striving for optimal interpretation of the electron density. A web server for PDB_REDO has been implemented, allowing thorough, consistent and fully automated optimization of the refinement procedure in REFMAC and partial model rebuilding. The goal of the web server is to help practicing crystallographers to improve their model prior to submission to the PDB. For this, additional steps were implemented in the PDB_REDO pipeline, both in the refinement procedure, e.g. testing of resolution limits and k-fold cross-validation for small test sets, and as new validation criteria, e.g. the density-fit metrics implemented in EDSTATS and ligand validation as implemented in YASARA. Innovative ways to present the refinement and validation results to the user are also described, which together with auto-generated Coot scripts can guide users to subsequent model inspection and improvement. It is demonstrated that using the server can lead to substantial improvement of structure models before they are submitted to the PDB. PMID:25075342

  8. Modeling and Optimizing Space Networks for Improved Communication Capacity

    NASA Astrophysics Data System (ADS)

    Spangelo, Sara C.

    There are a growing number of individual and constellation small satellite missions seeking to download large quantities of science, observation, and surveillance data. The existing ground station infrastructure to support these missions constrains the potential data throughput because the stations are low-cost, are not always available because they are independently owned and operated, and their ability to collect data is often inefficient. The constraints of the small satellite form factor (e.g. mass, size, power) coupled with the ground network limitations lead to significant operational and communication scheduling challenges. Faced with these challenges, our goal is to maximize capacity, defined as the amount of data that is successfully downloaded from space to ground communication nodes. In this thesis, we develop models, tools, and optimization algorithms for spacecraft and ground network operations. First, we develop an analytical modeling framework and a high-fidelity simulation environment that capture the interaction of on-board satellite energy and data dynamics, ground stations, and the external space environment. Second, we perform capacity-based assessments to identify excess and deficient resources for comparison to mission-specific requirements. Third, we formulate and solve communication scheduling problems that maximize communication capacity for a satellite downloading to a network of globally and functionally heterogeneous ground stations. Numeric examples demonstrate the applicability of the models and tools to assess and optimize real-world existing and upcoming small satellite mission scenarios that communicate to global ground station networks as well as generic communication scheduling problem instances. We study properties of optimal satellite communication schedules and sensitivity of communication capacity to various deterministic and stochastic satellite vehicle and network parameters. The models, tools, and optimization techniques we
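
    One core subproblem here, choosing non-overlapping ground-station passes to maximize downloaded data, maps onto weighted interval scheduling, which dynamic programming solves exactly. A toy sketch follows (hypothetical pass times and data volumes; the thesis's energy and buffer dynamics are ignored):

      import bisect

      # Each pass: (start, end, data_downloaded). Hypothetical values.
      passes = sorted([(0, 10, 5.0), (8, 20, 9.0), (15, 25, 4.0), (22, 30, 6.0)],
                      key=lambda p: p[1])

      ends = [p[1] for p in passes]
      best = [0.0] * (len(passes) + 1)
      for k, (s, e, d) in enumerate(passes, start=1):
          # Index of the last earlier pass ending at or before this one's start.
          j = bisect.bisect_right(ends, s, 0, k - 1)
          best[k] = max(best[k - 1], best[j] + d)   # skip pass k, or take it

      print(best[-1])   # maximum total data over non-overlapping passes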

  9. A new adaptive hybrid electromagnetic damper: modelling, optimization, and experiment

    NASA Astrophysics Data System (ADS)

    Asadi, Ehsan; Ribeiro, Roberto; Behrad Khamesee, Mir; Khajepour, Amir

    2015-07-01

    This paper presents the development of a new electromagnetic hybrid damper which provides regenerative adaptive damping force for various applications. Recently, the introduction of electromagnetic technologies to damping systems has provided researchers with new opportunities for the realization of adaptive semi-active damping systems with the added benefit of energy recovery. In this research, a hybrid electromagnetic damper is proposed. The hybrid damper is configured to operate with viscous and electromagnetic subsystems. The viscous medium provides a bias and fail-safe damping force, while the electromagnetic component adds adaptability and the capacity for regeneration to the hybrid design. The electromagnetic component is modeled and analyzed using analytical (lumped equivalent magnetic circuit) and electromagnetic finite element method (FEM) (COMSOL® software package) approaches. By implementing both modeling approaches, an optimization of the geometric aspects of the electromagnetic subsystem is obtained. Based on the proposed electromagnetic hybrid damping concept and the preliminary optimization solution, a prototype is designed and fabricated. A good agreement is observed between the experimental and FEM results for the magnetic field distribution and electromagnetic damping forces. These results validate the accuracy of the modeling approach and the preliminary optimization solution. An analytical model is also presented for the viscous damping force and is compared with experimental results. The results show that the damper is able to produce damping coefficients of 1300 N s m-1 through the viscous component and 0-238 N s m-1 through the electromagnetic component.

  10. A Formal Approach to Empirical Dynamic Model Optimization and Validation

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G; Morelli, Eugene A.; Kenny, Sean P.; Giesy, Daniel P.

    2014-01-01

    A framework was developed for the optimization and validation of empirical dynamic models subject to an arbitrary set of validation criteria. The validation requirements imposed upon the model, which may involve several sets of input-output data and arbitrary specifications in time and frequency domains, are used to determine if model predictions are within admissible error limits. The parameters of the empirical model are estimated by finding the parameter realization for which the smallest of the margins of requirement compliance is as large as possible. The uncertainty in the value of this estimate is characterized by studying the set of model parameters yielding predictions that comply with all the requirements. Strategies are presented for bounding this set, studying its dependence on admissible prediction error set by the analyst, and evaluating the sensitivity of the model predictions to parameter variations. This information is instrumental in characterizing uncertainty models used for evaluating the dynamic model at operating conditions differing from those used for its identification and validation. A practical example based on the short period dynamics of the F-16 is used for illustration.
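
    The estimation rule described, maximizing the smallest requirement-compliance margin over the parameters, is a max-min program that can be sketched directly (a generic illustration with invented toy margins, not the NASA framework):

      import numpy as np
      from scipy.optimize import minimize

      # Toy compliance margins: positive means the requirement is satisfied.
      def margins(p):
          return np.array([1.0 - abs(p[0] - 2.0),      # time-domain error bound
                           0.8 - abs(p[1] + 1.0),      # frequency-domain bound
                           2.0 - p[0] * abs(p[1])])    # cross requirement

      # Maximize the smallest margin: minimize its negative. Nelder-Mead is
      # used because min() makes the objective non-smooth.
      res = minimize(lambda p: -margins(p).min(), x0=[0.0, 0.0],
                     method="Nelder-Mead")
      print(res.x, margins(res.x))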

  11. Multidisciplinary optimization of aeroservoelastic systems using reduced-size models

    NASA Technical Reports Server (NTRS)

    Karpel, Mordechay

    1992-01-01

    Efficient analytical and computational tools for simultaneous optimal design of the structural and control components of aeroservoelastic systems are presented. The optimization objective is to achieve aircraft performance requirements and sufficient flutter and control stability margins with a minimal weight penalty and without violating the design constraints. Analytical sensitivity derivatives facilitate an efficient optimization process which allows a relatively large number of design variables. Standard finite element and unsteady aerodynamic routines are used to construct a modal data base. Minimum State aerodynamic approximations and dynamic residualization methods are used to construct a high accuracy, low order aeroservoelastic model. Sensitivity derivatives of flutter dynamic pressure, control stability margins and control effectiveness with respect to structural and control design variables are presented. The performance requirements are utilized by equality constraints which affect the sensitivity derivatives. A gradient-based optimization algorithm is used to minimize an overall cost function. A realistic numerical example of a composite wing with four controls is used to demonstrate the modeling technique, the optimization process, and their accuracy and efficiency.

  12. Optimal inference with suboptimal models: Addiction and active Bayesian inference

    PubMed Central

    Schwartenbeck, Philipp; FitzGerald, Thomas H.B.; Mathys, Christoph; Dolan, Ray; Wurst, Friedrich; Kronbichler, Martin; Friston, Karl

    2015-01-01

    When casting behaviour as active (Bayesian) inference, optimal inference is defined with respect to an agent’s beliefs – based on its generative model of the world. This contrasts with normative accounts of choice behaviour, in which optimal actions are considered in relation to the true structure of the environment – as opposed to the agent’s beliefs about worldly states (or the task). This distinction shifts an understanding of suboptimal or pathological behaviour away from aberrant inference as such, to understanding the prior beliefs of a subject that cause them to behave less ‘optimally’ than our prior beliefs suggest they should behave. Put simply, suboptimal or pathological behaviour does not speak against understanding behaviour in terms of (Bayes optimal) inference, but rather calls for a more refined understanding of the subject’s generative model upon which their (optimal) Bayesian inference is based. Here, we discuss this fundamental distinction and its implications for understanding optimality, bounded rationality and pathological (choice) behaviour. We illustrate our argument using addictive choice behaviour in a recently described ‘limited offer’ task. Our simulations of pathological choices and addictive behaviour also generate some clear hypotheses, which we hope to pursue in ongoing empirical work. PMID:25561321

  13. Multiobjective sensitivity analysis and optimization of distributed hydrologic model MOBIDIC

    NASA Astrophysics Data System (ADS)

    Yang, J.; Castelli, F.; Chen, Y.

    2014-10-01

    Calibration of distributed hydrologic models usually involves dealing with a large number of distributed parameters and optimization problems with multiple, often conflicting objectives that arise in a natural fashion. This study presents a multiobjective sensitivity and optimization approach to handle these problems for the MOBIDIC (MOdello di Bilancio Idrologico DIstribuito e Continuo) distributed hydrologic model, which combines two sensitivity analysis techniques (the Morris method and the state-dependent parameter (SDP) method) with the multiobjective optimization (MOO) approach ɛ-NSGAII (Non-dominated Sorting Genetic Algorithm-II). This approach was implemented to calibrate MOBIDIC in its application to the Davidson watershed, North Carolina, with three objective functions, i.e., the standardized root mean square error (SRMSE) of logarithmic transformed discharge, the water balance index, and the mean absolute error of the logarithmic transformed flow duration curve, and its results were compared with those of a single-objective optimization (SOO) with the traditional Nelder-Mead simplex algorithm used in MOBIDIC, taking the objective function as the Euclidean norm of these three objectives. Results show that (1) the two sensitivity analysis techniques are effective and efficient for determining the sensitive processes and insensitive parameters: surface runoff and evaporation are very sensitive processes for all three objective functions, while groundwater recession and soil hydraulic conductivity are not sensitive and were excluded from the optimization. (2) Both MOO and SOO lead to acceptable simulations; e.g., for MOO, the average Nash-Sutcliffe value is 0.75 in the calibration period and 0.70 in the validation period. (3) Evaporation and surface runoff show similar importance for watershed water balance, while the contribution of baseflow can be ignored. (4) Compared to SOO, which was dependent on the initial starting location, MOO provides more
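
    The Morris method screens parameters by averaging elementary effects from one-at-a-time perturbations along random trajectories. A compact NumPy sketch on a toy function (not the MOBIDIC model; all values are illustrative) follows:

      import numpy as np

      rng = np.random.default_rng(3)

      def model(x):
          # Toy objective: x0 and x1 influential, x2 nearly inert.
          return 3.0 * x[0] + 2.0 * x[1] ** 2 + 0.01 * x[2]

      k, r, delta = 3, 50, 0.1
      effects = np.zeros((r, k))
      for t in range(r):
          x = rng.uniform(0, 1 - delta, k)
          base = model(x)
          for i in rng.permutation(k):            # one-at-a-time perturbations
              x_new = x.copy(); x_new[i] += delta
              effects[t, i] = (model(x_new) - base) / delta
              x, base = x_new, model(x_new)

      mu_star = np.abs(effects).mean(axis=0)      # Morris mu*: mean |effect|
      print(mu_star)                              # large values -> sensitive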

  14. Advanced Nuclear Fuel Cycle Transitions: Optimization, Modeling Choices, and Disruptions

    NASA Astrophysics Data System (ADS)

    Carlsen, Robert W.

    Many nuclear fuel cycle simulators have evolved over time to help understand the nuclear industry/ecosystem at a macroscopic level. Cyclus is one of the first fuel cycle simulators to accommodate larger-scale analysis with its liberal open-source licensing and first-class Linux support. Cyclus also has features that uniquely enable investigating the effects of modeling choices on fuel cycle simulators and scenarios. This work is divided into three experiments focusing on optimization, effects of modeling choices, and fuel cycle uncertainty. Effective optimization techniques are developed for automatically determining desirable facility deployment schedules with Cyclus. A novel method for mapping optimization variables to deployment schedules is developed. This allows relationships between reactor types and scenario constraints to be represented implicitly in the variable definitions, enabling the usage of optimizers lacking constraint support. It also prevents wasting computational resources evaluating infeasible deployment schedules. Deployed power capacity over time and deployment of non-reactor facilities are also included as optimization variables. There are many fuel cycle simulators built with different combinations of modeling choices, and comparing results between them is often difficult. Cyclus' flexibility allows comparing the effects of many such modeling choices. Reactor refueling cycle synchronization and inter-facility competition, among other effects, are compared in four cases, each using combinations of fleet-based or individually modeled reactors with 1-month or 3-month time steps. There are noticeable differences in results for the different cases. The largest differences occur during periods of constrained reactor fuel availability. This and similar work can help improve the quality of fuel cycle analysis generally. There is significant uncertainty associated with deploying new nuclear technologies, such as time-frames for technology availability and the cost of building advanced reactors

  15. Optimization of arterial age prediction models based in pulse wave

    NASA Astrophysics Data System (ADS)

    Scandurra, A. G.; Meschino, G. J.; Passoni, L. I.; Pra, A. L. Dai; Introzzi, A. R.; Clara, F. M.

    2007-11-01

    We propose the detection of early arterial ageing through a prediction model of arterial age, based on the assumed coherence between pulse wave morphology and the patient's chronological age. After evaluating several methods, a Sugeno fuzzy inference system was selected. Model optimization is approached using hybrid methods: parameter adaptation with Artificial Neural Networks and Genetic Algorithms. Feature selection was performed according to the features' projections on the main factors of a Principal Components Analysis. The model performance was tested using the .632E bootstrap error estimate. The model presented an error smaller than 8.5%. This result encourages including this process as a diagnosis module in the pulse analysis device that has been developed by the Bioengineering Laboratory staff.
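
    The .632 bootstrap error they report blends the apparent (resubstitution) error with out-of-bag error. A minimal NumPy sketch with a stand-in least-squares predictor (not the authors' Sugeno fuzzy system; the data are synthetic):

      import numpy as np

      rng = np.random.default_rng(4)

      # Synthetic stand-in data: pulse-wave-like features vs. age.
      X = rng.normal(size=(120, 5))
      y = X @ np.array([2.0, -1.0, 0.5, 0.0, 1.0]) + rng.normal(0, 0.5, 120)

      def fit_predict(Xtr, ytr, Xte):
          # Stand-in predictor: ordinary least squares instead of a fuzzy model.
          w, *_ = np.linalg.lstsq(Xtr, ytr, rcond=None)
          return Xte @ w

      err_app = np.mean((fit_predict(X, y, X) - y) ** 2)   # apparent error

      B, oob_errs = 100, []
      for _ in range(B):
          idx = rng.integers(0, len(y), len(y))            # bootstrap resample
          oob = np.setdiff1d(np.arange(len(y)), idx)       # out-of-bag samples
          pred = fit_predict(X[idx], y[idx], X[oob])
          oob_errs.append(np.mean((pred - y[oob]) ** 2))

      err632 = 0.368 * err_app + 0.632 * np.mean(oob_errs)  # .632 estimator
      print(err632)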

  16. An optimization model for long-range transmission expansion planning

    SciTech Connect

    Santos, A. Jr.; Franca, P.M.; Said, A.

    1989-02-01

    This paper presents a static network synthesis method applied to transmission expansion planning. The static synthesis problem is formulated as a mixed-integer network flow model that is solved by an implicit enumeration algorithm. The objective function seeks the most productive trade-off, resulting in low investment costs and good electrical performance. The load and generation nodal equations are included in the constraints of the model, and the power transmission law of DC load flow is implicit in the optimization model. Results of computational tests are presented, showing the advantage of this method over a heuristic procedure. The case studies compare the computational times and solution costs obtained for the Brazilian North-Northeast transmission system.

  17. A mathematical model on the optimal timing of offspring desertion.

    PubMed

    Seno, Hiromi; Endo, Hiromi

    2007-06-01

    We consider offspring desertion as the optimal strategy for the deserting parent, analyzing a mathematical model of its expected reproductive success. It is shown that the optimality of offspring desertion depends significantly on the offspring's birth timing within the mating season and on the other ecological parameters characterizing the innate nature of the animals considered. In particular, desertion is less likely to occur for offspring born in the later period of the mating season. It is also implied that offspring desertion after a period of partially biparental care would be observable only under specific conditions.

  19. Neighboring extremal optimal control design including model mismatch errors

    SciTech Connect

    Kim, T.J.; Hull, D.G.

    1994-11-01

    The mismatch control technique that is used to simplify model equations of motion in order to determine analytic optimal control laws is extended using neighboring extremal theory. The first variation optimal control equations are linearized about the extremal path to account for perturbations in the initial state and the final constraint manifold. A numerical example demonstrates that the tuning procedure inherent in the mismatch control method increases the performance of the controls to the level of a numerically-determined piecewise-linear controller.

  20. CPOPT : optimization for fitting CANDECOMP/PARAFAC models.

    SciTech Connect

    Dunlavy, Daniel M.; Kolda, Tamara Gibson; Acar, Evrim

    2008-10-01

    Tensor decompositions (i.e., higher-order analogues of matrix decompositions) are powerful tools for data analysis. In particular, the CANDECOMP/PARAFAC (CP) model has proved useful in many applications such as chemometrics, signal processing, and web analysis. The problem of computing the CP decomposition is typically solved using an alternating least squares (ALS) approach. We discuss the use of optimization-based algorithms for CP, including how to efficiently compute the derivatives necessary for the optimization methods. Numerical studies highlight the positive features of our CPOPT algorithms, as compared with ALS and Gauss-Newton approaches.
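
    For readers unfamiliar with the ALS baseline that CPOPT is compared against, a minimal NumPy sketch of CP-ALS for a third-order tensor follows; the conventions (unfoldings, Khatri-Rao ordering) are one common choice, not taken from the paper:

    ```python
    # Minimal alternating least squares for a rank-R CP model
    # T[i,j,k] ~ sum_r A[i,r] B[j,r] C[k,r]; illustration only.
    import numpy as np

    def khatri_rao(A, B):
        """Column-wise Kronecker product; rows indexed (i, j), j fastest."""
        return np.einsum("ir,jr->ijr", A, B).reshape(-1, A.shape[1])

    def cp_als(T, R, n_iter=50, seed=0):
        rng = np.random.default_rng(seed)
        I, J, K = T.shape
        A, B, C = (rng.standard_normal((n, R)) for n in (I, J, K))
        T0 = T.reshape(I, J * K)                      # mode-0 unfolding
        T1 = T.transpose(1, 0, 2).reshape(J, I * K)   # mode-1 unfolding
        T2 = T.transpose(2, 0, 1).reshape(K, I * J)   # mode-2 unfolding
        for _ in range(n_iter):
            A = T0 @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
            B = T1 @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
            C = T2 @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
        return A, B, C

    # quick check on a synthetic rank-2 tensor
    rng = np.random.default_rng(1)
    A0, B0, C0 = (rng.standard_normal((s, 2)) for s in (4, 5, 6))
    T = np.einsum("ir,jr,kr->ijk", A0, B0, C0)
    A, B, C = cp_als(T, R=2)
    print(np.linalg.norm(T - np.einsum("ir,jr,kr->ijk", A, B, C)))
    ```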

  1. Roll levelling semi-analytical model for process optimization

    NASA Astrophysics Data System (ADS)

    Silvestre, E.; Garcia, D.; Galdos, L.; Saenz de Argandoña, E.; Mendiguren, J.

    2016-08-01

    Roll levelling is a primary manufacturing process used to remove residual stresses and imperfections from metal strips in order to make them suitable for subsequent forming operations. In recent years the importance of this process has been underlined by the appearance of Ultra High Strength Steels with strengths above 900 MPa. The optimal setting of the machine, as well as a robust machine design, has become critical for the correct processing of these materials. Finite Element Method (FEM) analysis is the technique widely used for both aspects. However, in this case, FEM simulation times exceed what is admissible for both machine development and process optimization. In the present work, a semi-analytical model based on a discrete bending theory is presented. This model is able to calculate the critical levelling parameters, i.e. force, plastification rate, and residual stresses, in a few seconds. First, the semi-analytical model is presented. Next, some experimental industrial cases are analyzed by both the semi-analytical model and a conventional FEM model. Finally, results and computation times of both methods are compared.

  2. Fabrication, modeling and optimization of an ionic polymer gel actuator

    NASA Astrophysics Data System (ADS)

    Jo, Choonghee; Naguib, Hani E.; Kwon, Roy H.

    2011-04-01

    The modeling of the electro-active behavior of ionic polymer gel is studied and the optimum conditions that maximize the deflection of the gel are investigated. The bending deformation of polymer gel under an electric field is formulated by using chemo-electro-mechanical parameters. In the modeling, swelling and shrinking phenomena due to the differences in ion concentration at the boundary between the gel and solution are considered prior to the application of an electric field, and then bending actuation is applied. As the driving force of swelling, shrinking and bending deformation, differential osmotic pressure at the boundary of the gel and solution is considered. From this behavior, the strain or deflection of the gel is calculated. To find the optimum design parameter settings (electric voltage, thickness of gel, concentration of polyion in the gel, ion concentration in the solution, and degree of cross-linking in the gel) for bending deformation, a nonlinear constrained optimization model is formulated. In the optimization model, a bending deflection equation of the gel is used as an objective function, and a range of decision variables and their relationships are used as constraint equations. Also, actuation experiments are conducted using poly(2-acrylamido-2-methylpropane sulfonic acid) (PAMPS) gel and the optimum conditions predicted by the proposed model have been verified by the experiments.

  3. Optimization of wind farm performance using low-order models

    NASA Astrophysics Data System (ADS)

    Dabiri, John; Brownstein, Ian

    2015-11-01

    A low order model that captures the dominant flow behaviors in a vertical-axis wind turbine (VAWT) array is used to maximize the power output of wind farms utilizing VAWTs. The leaky Rankine body model (LRB) was shown by Araya et al. (JRSE 2014) to predict the ranking of individual turbine performances in an array to within measurement uncertainty as compared to field data collected from full-scale VAWTs. Further, this model is able to predict array performance with significantly less computational expense than higher fidelity numerical simulations of the flow, making it ideal for use in optimization of wind farm performance. This presentation will explore the ability of the LRB model to rank the relative power output of different wind turbine array configurations as well as the ranking of individual array performance over a variety of wind directions, using various complex configurations tested in the field and simpler configurations tested in a wind tunnel. Results will be presented in which the model is used to determine array fitness in an evolutionary algorithm seeking to find optimal array configurations given a number of turbines, area of available land, and site wind direction profile. Comparison with field measurements will be presented.

  4. Discrete-Time ARMAv Model-Based Optimal Sensor Placement

    SciTech Connect

    Song Wei; Dyke, Shirley J.

    2008-07-08

    This paper concentrates on the optimal sensor placement problem in ambient-vibration-based structural health monitoring. More specifically, the paper examines the covariance of estimated parameters during system identification using an auto-regressive and moving average vector (ARMAv) model. By utilizing the discrete-time steady-state Kalman filter, this paper realizes the structure's finite element (FE) model under broad-band white noise excitations as an ARMAv model. Based on the asymptotic distribution of the parameter estimates of the ARMAv model, both a theoretical closed form and a numerical estimate of the covariance of the estimates are obtained. Introducing the information entropy (differential entropy) measure, as well as various matrix norms, this paper attempts to find a reasonable measure of the uncertainties embedded in the ARMAv model estimates. Thus, it is possible to select the optimal sensor placement that leads to the smallest uncertainties during the ARMAv identification process. Two numerical examples are provided to demonstrate the methodology and compare the sensor placement results under various measures.

  5. Rapid Modeling, Assembly and Simulation in Design Optimization

    NASA Technical Reports Server (NTRS)

    Housner, Jerry

    1997-01-01

    A new capability for design is reviewed. This capability provides for rapid assembly of detailed finite element models early in the design process, where costs are most effectively impacted. This creates an engineering environment which enables comprehensive analysis and design optimization early in the design process. Graphical interactive computing makes it possible for the engineer to interact with the design while performing comprehensive design studies. This rapid assembly capability is enabled by the use of Interface Technology to couple independently created models, which can be archived and made accessible to the designer. Results are presented to demonstrate the capability.

  6. Utilization-Based Modeling and Optimization for Cognitive Radio Networks

    NASA Astrophysics Data System (ADS)

    Liu, Yanbing; Huang, Jun; Liu, Zhangxiong

    The cognitive radio technique promises to manage and allocate the scarce radio spectrum in highly varying and disparate modern environments. This paper considers a cognitive radio scenario composed of two queues, one for the primary (licensed) users and one for the cognitive (unlicensed) users. Based on the underlying Markov process, the system state equations are derived and an optimization model for the system is proposed. Next, the system performance is evaluated by calculations which confirm the soundness of the system model. Furthermore, the influence of different system parameters is discussed based on the experimental results.

  7. Optimality of partial adiabatic search and its circuit model

    NASA Astrophysics Data System (ADS)

    Mei, Ying; Sun, Jie; Lu, Songfeng; Gao, Chao

    2014-08-01

    In this paper, we first establish that a partial adiabatic quantum search with time complexity O(√(N/M)) is in fact optimal, where N is the total number of elements in an unstructured database and M (≥ 1) of them are marked. We then discuss how to implement a partial adiabatic search algorithm in the quantum circuit model. From the implementation procedure in the circuit model, we find that the number of approximating steps needed is always of the same order as the time complexity of the adiabatic algorithm.

  8. Modelling and Optimization of the Half Model of a Passenger Car with Magnetorheological Suspension System

    NASA Astrophysics Data System (ADS)

    Segla, S.

    The paper deals with modelling and optimization of the half model of a passenger car with three suspension variants: an ideal semi-active suspension, a semi-active suspension equipped with magnetorheological dampers, and a passive suspension equipped with uncontrolled hydraulic dampers; their dynamic characteristics are compared. Conventional skyhook control is used to control the semi-active dampers, taking the time delay into account. Selected parameters of the suspension systems are optimized for given road profiles using genetic algorithms. The results show that implementation of the magnetorheological dampers can lead to a significant improvement of the ride comfort and handling properties of passenger cars, provided that the time delay is low enough.
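
    The skyhook logic referenced above is compact enough to state directly. The sketch below shows the classical on-off skyhook law (ignoring the time delay the paper accounts for); variable names are illustrative:

    ```python
    # Minimal on-off skyhook law for a semi-active damper (illustrative).
    # v_body: absolute vertical velocity of the sprung mass;
    # v_rel:  relative velocity across the damper (body minus wheel).
    def skyhook_damping(v_body, v_rel, c_max, c_min):
        # Dissipate strongly when the damper force can oppose body motion;
        # otherwise fall back to the minimum achievable damping.
        if v_body * v_rel > 0:
            return c_max   # command high damping coefficient
        return c_min       # command low damping coefficient

    # force actually applied by the semi-active damper:
    # F = -skyhook_damping(v_body, v_rel, c_max, c_min) * v_rel
    ```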

  9. Analysis of Sting Balance Calibration Data Using Optimized Regression Models

    NASA Technical Reports Server (NTRS)

    Ulbrich, Norbert; Bader, Jon B.

    2009-01-01

    Calibration data of a wind tunnel sting balance was processed using a search algorithm that identifies an optimized regression model for the data analysis. The selected sting balance had two moment gages that were mounted forward and aft of the balance moment center. The difference and the sum of the two gage outputs were fitted in the least squares sense using the normal force and the pitching moment at the balance moment center as independent variables. The regression model search algorithm predicted that the difference of the gage outputs should be modeled using the intercept and the normal force. The sum of the two gage outputs, on the other hand, should be modeled using the intercept, the pitching moment, and the square of the pitching moment. Equations of the deflection of a cantilever beam are used to show that the search algorithm's two recommended math models can also be obtained after performing a rigorous theoretical analysis of the deflection of the sting balance under load. The analysis of the sting balance calibration data set is a rare example of a situation when regression models of balance calibration data can be derived directly from first principles of physics and engineering. In addition, it is interesting to see that the search algorithm recommended the same regression models for the data analysis using only a set of statistical quality metrics.

  10. Optimizing the lithography model calibration algorithms for NTD process

    NASA Astrophysics Data System (ADS)

    Hu, C. M.; Lo, Fred; Yang, Elvis; Yang, T. H.; Chen, K. C.

    2016-03-01

    As patterns shrink to the resolution limits of up-to-date ArF immersion lithography technology, the negative tone development (NTD) process has been an increasingly adopted technique for obtaining superior imaging quality, employing bright-field (BF) masks to print the critical dark-field (DF) metal and contact layers. However, from the fundamental materials and process interaction perspectives, several key differences inherently exist between the NTD process and the traditional positive tone development (PTD) system, especially the horizontal/vertical resist shrinkage and developer depletion effects; hence the traditional resist parameters developed for the typical PTD process no longer fit well in NTD process modeling. In order to cope with the inherent differences between PTD and NTD processes and accordingly improve NTD modeling accuracy, several NTD models with different combinations of complementary terms were built to account for the NTD-specific resist shrinkage, developer depletion and diffusion, and the wafer CD jump induced by sub-resolution assist feature (SRAF) effects. Each new complementary NTD term has a definite aim: to deal with a specific NTD phenomenon. In this study, the modeling accuracy is compared among different models for the specific patterning characteristics of various feature types. Multiple complementary NTD terms were finally proposed to address all the NTD-specific behaviors simultaneously and further optimize the NTD modeling accuracy. The new algorithm of multiple complementary NTD terms, tested on our critical dark-field layers, demonstrates consistent model accuracy improvement for both calibration and verification.

  11. The practical side of solute transport modelling for optimized remediation

    NASA Astrophysics Data System (ADS)

    Paster, Amir

    2015-04-01

    "Pump and Treat" (P&T) is a debated, yet common, practice for removing a (large) contaminant plume and treating it ex-situ. An optimal design of P&T usually involves a model for the fate and transport of contaminants in the aquifer. Different pumping setups are considered, and removal rates are calculated. The flow model is typically based on the available set of geological data, which is usually rather limited, and on data measured in wells, including well tests and historical measurements of head. The transport model, in turn, is typically based on an extremely limited number of concentration measurements and on various rough assumptions regarding the sources and sinks of the contaminant. Thus, the resulting model is suffering of large inaccuracies, and decision making based on such model is rather limited. In addition, such models usually use rather large numerical cells, and (accordingly) rather large value of longitudinal dispersivity (alpha_L). The calibration of this parameter is typically based on concentration data obtained after the discovery of the contaminant. It is common that when the contamination is discovered, production wells are shut down and the flow in the area of the plume becomes a regional one. Thus, it is reasonable to hypothesize that the prediction of transport close to the P&T wells may result in exaggerated mixing of the plume at this zone of radially converging flow. An example to such model, focused on a Perchlorate spill in the coastal aquifer of Israel, is discussed.

  12. Optimal symmetric flight with an intermediate vehicle model

    NASA Technical Reports Server (NTRS)

    Menon, P. K. A.; Kelley, H. J.; Cliff, E. M.

    1983-01-01

    Optimal flight in the vertical plane with a vehicle model intermediate in complexity between the point-mass and energy models is studied. Flight-path angle takes on the role of a control variable. Range-open problems feature subarcs of vertical flight and singular subarcs. The class of altitude-speed-range-time optimization problems with fuel expenditure unspecified is investigated and some interesting phenomena uncovered. The maximum-lift-to-drag glide appears as part of the family, final-time-open, with appropriate initial and terminal transient exceeding level-flight drag, some members exhibiting oscillations. Oscillatory paths generally fail the Jacobi test for durations exceeding a period and furnish a minimum only for short-duration problems.

  13. Modeling Microinverters and DC Power Optimizers in PVWatts

    SciTech Connect

    MacAlpine, S.; Deline, C.

    2015-02-01

    Module-level distributed power electronics including microinverters and DC power optimizers are increasingly popular in residential and commercial PV systems. Consumers are realizing their potential to increase design flexibility, monitor system performance, and improve energy capture. It is becoming increasingly important to accurately model PV systems employing these devices. This document summarizes existing published documents to provide uniform, impartial recommendations for how the performance of distributed power electronics can be reflected in NREL's PVWatts calculator (http://pvwatts.nrel.gov/).

  14. Software tool for the prosthetic foot modeling and stiffness optimization.

    PubMed

    Strbac, Matija; Popović, Dejan B

    2012-01-01

    We present the procedure for the optimization of the stiffness of the prosthetic foot. The procedure allows the selection of the elements of the foot and the materials used for the design. The procedure is based on an optimization where the cost function is the minimization of the difference between the knee joint torques of healthy walking and walking with the transfemoral prosthesis. We present a simulation environment that allows the user to interactively vary the foot geometry and track the changes in the knee torque that arise from these adjustments. The software allows the estimation of the optimal prosthetic foot elasticity and geometry. We show that altering model attributes such as the length of the elastic foot segment or its elasticity leads to significant changes in the estimated knee torque required for a given trajectory.
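
    The cost function described above can be sketched as a standard bounded minimization. In the sketch below, `predicted_knee_torque` is a placeholder for the paper's simulation environment, and the reference torque is synthetic:

    ```python
    # Sketch of the stiffness-optimization idea: minimize the gap between
    # a reference (healthy) knee-torque trajectory and the torque predicted
    # by a foot model. `predicted_knee_torque` stands in for the simulator.
    import numpy as np
    from scipy.optimize import minimize

    t = np.linspace(0.0, 1.0, 101)                 # normalized gait cycle
    tau_healthy = np.sin(np.pi * t)                # illustrative reference torque

    def predicted_knee_torque(stiffness, seg_length, t):
        # placeholder dynamics: a real implementation would run the gait model
        return stiffness * seg_length * np.sin(np.pi * t)

    def cost(params):
        stiffness, seg_length = params
        tau = predicted_knee_torque(stiffness, seg_length, t)
        return np.sum((tau - tau_healthy) ** 2)    # squared torque mismatch

    res = minimize(cost, x0=[2.0, 0.5], bounds=[(0.1, 10.0), (0.05, 0.3)],
                   method="L-BFGS-B")
    print(res.x)
    ```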

  15. Mathematical model of the metal mould surface temperature optimization

    NASA Astrophysics Data System (ADS)

    Mlynek, Jaroslav; Knobloch, Roman; Srb, Radek

    2015-11-01

    The article is focused on the problem of generating a uniform temperature field on the inner surface of shell metal moulds. Such moulds are used, e.g., in the automotive industry for artificial leather production. To produce artificial leather with uniform surface structure and colour shade, the temperature on the inner surface of the mould has to be as homogeneous as possible. The heating of the mould is realized by infrared heaters located above the outer mould surface. The conceived mathematical model allows us to optimize the locations of the infrared heaters over the mould so that an approximately uniform heat radiation intensity is generated. A version of the differential evolution algorithm, programmed in the Matlab development environment, was created by the authors for the optimization process. For the temperature calculations the ANSYS software system was used. A practical example of the optimization of heater locations and the calculation of the mould temperature is included at the end of the article.
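
    The optimization loop described above maps naturally onto an off-the-shelf differential evolution routine. The sketch below uses SciPy's implementation and a crude inverse-square intensity model in place of the authors' Matlab code and ANSYS computations:

    ```python
    # Sketch: choose (x, y) positions of N infrared heaters to make the
    # radiation intensity on a mould surface as uniform as possible.
    # The intensity model is an inverse-square stand-in, illustrative only.
    import numpy as np
    from scipy.optimize import differential_evolution

    N_HEATERS, HEIGHT = 4, 0.3
    gx, gy = np.meshgrid(np.linspace(0, 1, 20), np.linspace(0, 1, 20))

    def nonuniformity(flat_positions):
        xs, ys = flat_positions[0::2], flat_positions[1::2]
        intensity = np.zeros_like(gx)
        for x, y in zip(xs, ys):
            r2 = (gx - x) ** 2 + (gy - y) ** 2 + HEIGHT ** 2
            intensity += 1.0 / r2                  # inverse-square contribution
        return np.std(intensity) / np.mean(intensity)   # relative spread

    bounds = [(0.0, 1.0)] * (2 * N_HEATERS)
    result = differential_evolution(nonuniformity, bounds, seed=1, maxiter=200)
    print(result.x.reshape(-1, 2))                 # optimized heater positions
    ```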

  16. Optimization model of vaccination strategy for dengue transmission

    NASA Astrophysics Data System (ADS)

    Widayani, H.; Kallista, M.; Nuraini, N.; Sari, M. Y.

    2014-02-01

    Dengue fever is an emerging tropical and subtropical disease caused by dengue virus infection. Vaccination is a means of preventing an epidemic in a population. The host-vector model is modified to include a vaccination factor to prevent the occurrence of a dengue epidemic in a population. An optimal vaccination strategy using a non-linear objective function is proposed. Genetic algorithm programming techniques are combined with the fourth-order Runge-Kutta method to construct the optimal vaccination. In this paper, the appropriate vaccination strategy, using the optimal minimum-cost function to reduce the extent of the epidemic, is analyzed. A numerical simulation for some specific cases of the vaccination strategy is shown.
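
    The Runge-Kutta ingredient of the approach can be sketched as follows. The model below is a deliberately simplified host-vector system with a vaccination rate v, with illustrative parameters rather than the paper's; a genetic algorithm would minimize the final cost over v:

    ```python
    # RK4 integration of a simplified host-vector dengue model with a
    # vaccination rate v (illustrative parameters, not the paper's).
    import numpy as np

    beta_h, beta_v, gamma, mu_v = 0.4, 0.3, 0.1, 0.1

    def deriv(y, v):
        Sh, Ih, Iv = y                       # susceptible/infected hosts, infected vectors
        dSh = -beta_h * Sh * Iv - v * Sh     # infection plus vaccination removal
        dIh = beta_h * Sh * Iv - gamma * Ih
        dIv = beta_v * (1 - Iv) * Ih - mu_v * Iv
        return np.array([dSh, dIh, dIv])

    def rk4_epidemic(v, y0=(0.99, 0.01, 0.01), dt=0.1, steps=2000):
        y = np.array(y0, dtype=float)
        infected_total = 0.0
        for _ in range(steps):
            k1 = deriv(y, v)
            k2 = deriv(y + 0.5 * dt * k1, v)
            k3 = deriv(y + 0.5 * dt * k2, v)
            k4 = deriv(y + dt * k3, v)
            y += dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
            infected_total += y[1] * dt      # accumulate infected-host burden
        return infected_total

    # cost a genetic algorithm could minimize: infections + vaccination cost
    cost = lambda v: rk4_epidemic(v) + 50.0 * v ** 2
    print(cost(0.0), cost(0.05))
    ```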

  17. Software Tool for the Prosthetic Foot Modeling and Stiffness Optimization

    PubMed Central

    Štrbac, Matija; Popović, Dejan B.

    2012-01-01

    We present the procedure for the optimization of the stiffness of the prosthetic foot. The procedure allows the selection of the elements of the foot and the materials used for the design. The procedure is based on an optimization where the cost function is the minimization of the difference between the knee joint torques of healthy walking and walking with the transfemoral prosthesis. We present a simulation environment that allows the user to interactively vary the foot geometry and track the changes in the knee torque that arise from these adjustments. The software allows the estimation of the optimal prosthetic foot elasticity and geometry. We show that altering model attributes such as the length of the elastic foot segment or its elasticity leads to significant changes in the estimated knee torque required for a given trajectory. PMID:22536296

  18. Optimization in generalized linear models: A case study

    NASA Astrophysics Data System (ADS)

    Silva, Eliana Costa e.; Correia, Aldina; Lopes, Isabel Cristina

    2016-06-01

    The maximum likelihood method is usually chosen to estimate the regression parameters of Generalized Linear Models (GLM) and is also used for hypothesis testing and goodness-of-fit tests. The classical method for estimating GLM parameters is Fisher scoring. In this work we propose to compute the estimates of the parameters with two alternative methods: a derivative-based optimization method, namely the BFGS method, which is one of the most popular quasi-Newton algorithms, and the PSwarm derivative-free optimization method, which combines features of a pattern search optimization method with a global particle swarm scheme. As a case study we use a dataset of biological parameters (phytoplankton) and chemical and environmental parameters of the water column of a Portuguese reservoir. The results show that, for this dataset, the BFGS and PSwarm methods provided a better fit than the Fisher scoring method and can be good alternatives for finding the parameter estimates of a GLM.
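
    To make the BFGS alternative concrete, the sketch below fits a Poisson GLM (log link) by minimizing the negative log-likelihood directly with SciPy's BFGS; the data are simulated, not the reservoir dataset used in the paper:

    ```python
    # Estimating a Poisson GLM (log link) by direct optimization of the
    # negative log-likelihood with BFGS, as an alternative to Fisher scoring.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    X = np.column_stack([np.ones(200), rng.normal(size=200)])
    beta_true = np.array([0.5, -0.8])
    y = rng.poisson(np.exp(X @ beta_true))

    def negloglik(beta):
        eta = X @ beta
        return np.sum(np.exp(eta) - y * eta)   # Poisson NLL up to a constant

    res = minimize(negloglik, x0=np.zeros(2), method="BFGS")
    print(res.x)    # should be close to beta_true
    ```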

  19. Mathematical model of the metal mould surface temperature optimization

    SciTech Connect

    Mlynek, Jaroslav; Knobloch, Roman; Srb, Radek

    2015-11-30

    The article is focused on the problem of generating a uniform temperature field on the inner surface of shell metal moulds. Such moulds are used, e.g., in the automotive industry for artificial leather production. To produce artificial leather with uniform surface structure and colour shade, the temperature on the inner surface of the mould has to be as homogeneous as possible. The heating of the mould is realized by infrared heaters located above the outer mould surface. The conceived mathematical model allows us to optimize the locations of the infrared heaters over the mould so that an approximately uniform heat radiation intensity is generated. A version of the differential evolution algorithm, programmed in the Matlab development environment, was created by the authors for the optimization process. For the temperature calculations the ANSYS software system was used. A practical example of the optimization of heater locations and the calculation of the mould temperature is included at the end of the article.

  20. Ant Colony Optimization for Markowitz Mean-Variance Portfolio Model

    NASA Astrophysics Data System (ADS)

    Deng, Guang-Feng; Lin, Woo-Tsong

    This work presents Ant Colony Optimization (ACO), initially developed as a meta-heuristic for combinatorial optimization, for solving the cardinality-constrained Markowitz mean-variance portfolio model (a nonlinear mixed quadratic programming problem). To our knowledge, an efficient algorithmic solution for this problem has not been proposed until now; using heuristic algorithms in this case is therefore imperative. Numerical solutions are obtained for five analyses of weekly price data for the following indices for the period March 1992 to September 1997: Hang Seng 31 in Hong Kong, DAX 100 in Germany, FTSE 100 in the UK, S&P 100 in the USA and Nikkei 225 in Japan. The test results indicate that ACO is much more robust and effective than particle swarm optimization (PSO), especially for low-risk investment portfolios.
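
    Whatever metaheuristic is used (ACO or PSO), it needs a fitness function for a candidate portfolio. The sketch below shows one common way to score a cardinality-constrained mean-variance portfolio; the data, the risk-aversion weight, and the names are stand-ins for the paper's setup:

    ```python
    # Fitness for a cardinality-constrained mean-variance portfolio:
    # keep at most K assets, weights sum to one, and score
    # lam * risk - (1 - lam) * return (lower is better). Random data.
    import numpy as np

    rng = np.random.default_rng(0)
    n_assets, K, lam = 20, 5, 0.6
    mu = rng.normal(0.001, 0.0005, n_assets)      # expected weekly returns
    A = rng.normal(size=(n_assets, n_assets))
    cov = A @ A.T / n_assets * 1e-4               # positive semidefinite covariance

    def fitness(selection, raw_weights):
        """selection: indices of the K chosen assets; raw_weights: positive."""
        w = np.zeros(n_assets)
        w[selection] = raw_weights / raw_weights.sum()   # enforce sum(w) = 1
        risk = w @ cov @ w
        ret = mu @ w
        return lam * risk - (1.0 - lam) * ret

    sel = rng.choice(n_assets, size=K, replace=False)    # a candidate solution
    print(fitness(sel, rng.random(K)))
    ```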

  1. Model-based optimization of tapered free-electron lasers

    NASA Astrophysics Data System (ADS)

    Mak, Alan; Curbis, Francesca; Werin, Sverker

    2015-04-01

    The energy extraction efficiency is a figure of merit for a free-electron laser (FEL). It can be enhanced by the technique of undulator tapering, which enables the sustained growth of radiation power beyond the initial saturation point. In the development of a single-pass x-ray FEL, it is important to exploit the full potential of this technique and optimize the taper profile a_w(z). Our approach to the optimization is based on the theoretical model by Kroll, Morton, and Rosenbluth, whereby the taper profile a_w(z) is not a predetermined function (such as linear or exponential) but is determined by the physics of a resonant particle. For further enhancement of the energy extraction efficiency, we propose a modification to the model, which involves manipulations of the resonant particle's phase. Using the numerical simulation code GENESIS, we apply our model-based optimization methods to a case of the future FEL at the MAX IV Laboratory (Lund, Sweden), as well as a case of the LCLS-II facility (Stanford, USA).

  2. A model of optimal dosing of antibiotic treatment in biofilm.

    PubMed

    Imran, Mudassar; Smith, Hal L

    2014-06-01

    Biofilms are heterogeneous, matrix-enclosed micro-colonies of bacteria mostly found on moist surfaces. Biofilm formation is the primary cause of several persistent infections found in humans. We derive a mathematical model of biofilm and surrounding fluid dynamics to investigate the effect of a periodic dose of antibiotic on the elimination of the microbial population from the biofilm. The growth rate of bacteria in the biofilm is taken as Monod type for the limiting nutrient. The pharmacodynamic function is taken to depend on both the limiting nutrient and the antibiotic concentration. Assuming that the flow rate of the fluid compartment is large enough, we reduce the six-dimensional model to a three-dimensional one. Mathematically rigorous results are derived providing sufficient conditions for treatment success. Persistence theory is used to derive conditions under which the periodic solution for treatment failure is obtained. We also discuss the phenomenon of bi-stability, where both the infection-free state and the infection state are locally stable when antibiotic dosing is marginal. In addition, we derive optimal antibiotic application protocols for different scenarios using control theory and show that such treatments ensure bacteria elimination for a wide variety of cases. The results show that bacteria are successfully eliminated if the discrete treatment is given at an early stage of the infection or if the optimal protocol is adopted. Finally, we examine factors which, if changed, can result in treatment success for cases in which the non-optimal technique previously failed.

  3. Bayesian image reconstruction - The pixon and optimal image modeling

    NASA Technical Reports Server (NTRS)

    Pina, R. K.; Puetter, R. C.

    1993-01-01

    In this paper we describe the optimal image model, maximum residual likelihood method (OptMRL) for image reconstruction. OptMRL is a Bayesian image reconstruction technique for removing point-spread function blurring. OptMRL uses both a goodness-of-fit criterion (GOF) and an 'image prior', i.e., a function which quantifies the a priori probability of the image. Unlike standard maximum entropy methods, which typically reconstruct the image on the data pixel grid, OptMRL varies the image model in order to find the optimal functional basis with which to represent the image. We show how an optimal basis for image representation can be selected and in doing so, develop the concept of the 'pixon' which is a generalized image cell from which this basis is constructed. By allowing both the image and the image representation to be variable, the OptMRL method greatly increases the volume of solution space over which the image is optimized. Hence the likelihood of the final reconstructed image is greatly increased. For the goodness-of-fit criterion, OptMRL uses the maximum residual likelihood probability distribution introduced previously by Pina and Puetter (1992). This GOF probability distribution, which is based on the spatial autocorrelation of the residuals, has the advantage that it ensures spatially uncorrelated image reconstruction residuals.

  4. Optimization and Performance Modeling of Stencil Computations on Modern Microprocessors

    SciTech Connect

    Datta, Kaushik; Kamil, Shoaib; Williams, Samuel; Oliker, Leonid; Shalf, John; Yelick, Katherine

    2007-06-01

    Stencil-based kernels constitute the core of many important scientific applications on block-structured grids. Unfortunately, these codes achieve a low fraction of peak performance, due primarily to the disparity between processor and main memory speeds. In this paper, we explore the impact of trends in memory subsystems on a variety of stencil optimization techniques and develop performance models to analytically guide our optimizations. Our work targets cache reuse methodologies across single and multiple stencil sweeps, examining cache-aware algorithms as well as cache-oblivious techniques on the Intel Itanium2, AMD Opteron, and IBM Power5. Additionally, we consider stencil computations on the heterogeneous multicore design of the Cell processor, a machine with an explicitly managed memory hierarchy. Overall our work represents one of the most extensive analyses of stencil optimizations and performance modeling to date. Results demonstrate that recent trends in memory system organization have reduced the efficacy of traditional cache-blocking optimizations. We also show that a cache-aware implementation is significantly faster than a cache-oblivious approach, while the explicitly managed memory on Cell enables the highest overall efficiency: Cell attains 88% of algorithmic peak while the best competing cache-based processor achieves only 54% of algorithmic peak performance.
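
    The cache-blocking idea under discussion restructures the sweep so each tile of the grid stays cache-resident. The Python sketch below only illustrates the blocked loop order for a 5-point Jacobi stencil; production kernels of the kind studied in the paper are written in C or Fortran:

    ```python
    # Cache-blocking sketch: traverse the grid block by block so each
    # working set stays cache-resident. Illustrates loop order only.
    import numpy as np

    def jacobi_blocked(u, block=64):
        n, m = u.shape
        out = u.copy()
        for ib in range(1, n - 1, block):          # block loops over tiles
            for jb in range(1, m - 1, block):
                i1 = min(ib + block, n - 1)
                j1 = min(jb + block, m - 1)
                out[ib:i1, jb:j1] = 0.25 * (       # 5-point stencil on the tile
                    u[ib - 1:i1 - 1, jb:j1] + u[ib + 1:i1 + 1, jb:j1] +
                    u[ib:i1, jb - 1:j1 - 1] + u[ib:i1, jb + 1:j1 + 1])
        return out

    u = np.random.rand(512, 512)
    u_new = jacobi_blocked(u)
    ```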

  5. An optimization model for the US Air-Traffic System

    NASA Technical Reports Server (NTRS)

    Mulvey, J. M.

    1986-01-01

    A systematic approach for monitoring U.S. air traffic was developed in the context of system-wide planning and control. Towards this end, a network optimization model with nonlinear objectives was chosen as the central element in the planning/control system. The network representation was selected because: (1) it provides a comprehensive structure for depicting essential aspects of the air traffic system, (2) it can be solved efficiently for large scale problems, and (3) the design can be easily communicated to non-technical users through computer graphics. Briefly, the network planning models consider the flow of traffic through a graph as the basic structure. Nodes depict locations and time periods for either individual planes or for aggregated groups of airplanes. Arcs define variables as actual airplanes flying through space or as delays across time periods. As such, a special case of the network can be used to model the so called flow control problem. Due to the large number of interacting variables and the difficulty in subdividing the problem into relatively independent subproblems, an integrated model was designed which will depict the entire high level (above 29000 feet) jet route system for the 48 contiguous states in the U.S. As a first step in demonstrating the concept's feasibility a nonlinear risk/cost model was developed for the Indianapolis Airspace. The nonlinear network program --NLPNETG-- was employed in solving the resulting test cases. This optimization program uses the Truncated-Newton method (quadratic approximation) for determining the search direction at each iteration in the nonlinear algorithm. It was shown that aircraft could be re-routed in an optimal fashion whenever traffic congestion increased beyond an acceptable level, as measured by the nonlinear risk function.

  6. NWP model forecast skill optimization via closure parameter variations

    NASA Astrophysics Data System (ADS)

    Järvinen, H.; Ollinaho, P.; Laine, M.; Solonen, A.; Haario, H.

    2012-04-01

    We present results of a novel approach to tuning the predictive skill of numerical weather prediction (NWP) models. These models contain tunable parameters which appear in parameterization schemes of sub-grid scale physical processes. The current practice is to specify the numerical parameter values manually, based on expert knowledge. We recently developed a concept and method (QJRMS 2011) for on-line estimation of the NWP model parameters via closure parameter variations. The method, called EPPES ("Ensemble prediction and parameter estimation system"), utilizes the ensemble prediction infrastructure for parameter estimation in a very cost-effective way: practically no new computations are introduced. The approach provides an algorithmic decision-making tool for model parameter optimization in operational NWP. In EPPES, statistical inference about the NWP model tunable parameters is made by (i) generating an ensemble of predictions so that each member uses different model parameter values, drawn from a proposal distribution, and (ii) feeding back the relative merits of the parameter values to the proposal distribution, based on evaluation of a suitable likelihood function against verifying observations. In this presentation, the method is first illustrated in low-order numerical tests using a stochastic version of the Lorenz-95 model, which effectively emulates the principal features of ensemble prediction systems. The EPPES method correctly detects the unknown and wrongly specified parameter values, and leads to an improved forecast skill. Second, results with an ensemble prediction system emulator, based on the ECHAM5 atmospheric GCM, show that the model tuning capability of EPPES scales up to realistic models and ensemble prediction systems. Finally, preliminary results of EPPES in the context of the ECMWF forecasting system are presented.
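
    The EPPES loop can be caricatured as importance reweighting of a Gaussian proposal by forecast skill. The sketch below is a generic illustration of that idea, not the published algorithm; the skill function and all numbers are stand-ins:

    ```python
    # Generic sketch of the EPPES idea: draw one parameter vector per
    # ensemble member, score each member against verifying observations,
    # and pull the proposal distribution toward better-scoring values.
    import numpy as np

    rng = np.random.default_rng(0)
    mean, cov = np.array([1.0, 1.0]), np.eye(2) * 0.25
    n_members, truth = 50, np.array([0.6, 1.4])

    def forecast_skill(theta):
        # stand-in likelihood: real systems verify forecasts against observations
        return np.exp(-2.0 * np.sum((theta - truth) ** 2))

    for cycle in range(20):                       # one "cycle" per forecast window
        thetas = rng.multivariate_normal(mean, cov, size=n_members)
        w = np.array([forecast_skill(t) for t in thetas])
        w /= w.sum()
        mean = w @ thetas                         # reweighted proposal mean
        diff = thetas - mean
        cov = (w[:, None] * diff).T @ diff + 1e-4 * np.eye(2)
    print(mean)                                   # should drift toward `truth`
    ```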

  7. Parallelism and optimization of numerical ocean forecasting model

    NASA Astrophysics Data System (ADS)

    Xu, Jianliang; Pang, Renbo; Teng, Junhua; Liang, Hongtao; Yang, Dandan

    2016-10-01

    According to the characteristics of the Chinese marginal seas, the Marginal Sea Model of China (MSMC) has been developed independently in China. Because the model requires long simulation times, as a routine forecasting model, parallelism must be introduced into MSMC to improve its performance. However, some methods used in MSMC, such as the Successive Over-Relaxation (SOR) algorithm, are not directly suitable for parallelism. In this paper, methods are developed to solve the parallelization problem of the SOR algorithm in the following steps. First, based on a 3D computing grid system, an automatic data partition method is implemented to dynamically divide the computing grid according to the computing resources. Next, based on the characteristics of the numerical forecasting model, a parallel method is designed to solve the parallelization problem of the SOR algorithm. Lastly, a communication optimization method is provided to reduce the cost of communication. In the communication optimization method, the non-blocking communication of the Message Passing Interface (MPI) is used to implement the parallelism of MSMC with its complex physical equations, and the communication is overlapped with computation to improve the performance of the parallel MSMC. The experiments show that the parallel MSMC runs 97.2 times faster than the serial MSMC, and the root mean square error between the parallel MSMC and the serial MSMC is less than 0.01 for a 30-day simulation (172800 time steps), which meets the requirements of timeliness and accuracy for numerical ocean forecasting products.
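
    The overlap of communication and computation mentioned above is the standard non-blocking halo-exchange pattern. A minimal mpi4py sketch for a 1-D decomposition follows (illustrative, not MSMC code):

    ```python
    # Overlapping halo exchange with interior computation using
    # non-blocking MPI (mpi4py). Run with: mpiexec -n 4 python halo.py
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()
    local = np.random.rand(1000)                   # local 1-D subdomain
    left, right = np.empty(1), np.empty(1)         # halo receive buffers

    reqs = []
    if rank > 0:
        reqs.append(comm.Isend(local[0:1], dest=rank - 1, tag=0))
        reqs.append(comm.Irecv(left, source=rank - 1, tag=1))
    if rank < size - 1:
        reqs.append(comm.Isend(local[-1:], dest=rank + 1, tag=1))
        reqs.append(comm.Irecv(right, source=rank + 1, tag=0))

    interior = 0.5 * (local[2:] + local[:-2])      # compute while messages fly
    MPI.Request.Waitall(reqs)                      # halos needed only at the ends

    new = local.copy()
    new[1:-1] = interior
    if rank > 0:
        new[0] = 0.5 * (left[0] + local[1])
    if rank < size - 1:
        new[-1] = 0.5 * (right[0] + local[-2])
    ```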

  8. Dynamic imaging model and parameter optimization for a star tracker.

    PubMed

    Yan, Jinyun; Jiang, Jie; Zhang, Guangjun

    2016-03-21

    Under dynamic conditions, star spots move across the image plane of a star tracker and form a smeared star image. This smearing effect increases errors in star position estimation and degrades attitude accuracy. First, an analytical energy distribution model of a smeared star spot is established based on a line segment spread function because the dynamic imaging process of a star tracker is equivalent to the static imaging process of linear light sources. The proposed model, which has a clear physical meaning, explicitly reflects the key parameters of the imaging process, including incident flux, exposure time, velocity of a star spot in an image plane, and Gaussian radius. Furthermore, an analytical expression of the centroiding error of the smeared star spot is derived using the proposed model. An accurate and comprehensive evaluation of centroiding accuracy is obtained based on the expression. Moreover, analytical solutions of the optimal parameters are derived to achieve the best performance in centroid estimation. Finally, we perform numerical simulations and a night sky experiment to validate the correctness of the dynamic imaging model, the centroiding error expression, and the optimal parameters.

  9. Optimizing Muscle Parameters in Musculoskeletal Modeling Using Monte Carlo Simulations

    NASA Technical Reports Server (NTRS)

    Hanson, Andrea; Reed, Erik; Cavanagh, Peter

    2011-01-01

    Astronauts assigned to long-duration missions experience bone and muscle atrophy in the lower limbs. The use of musculoskeletal simulation software has become a useful tool for modeling joint and muscle forces during human activity in reduced gravity as access to direct experimentation is limited. Knowledge of muscle and joint loads can better inform the design of exercise protocols and exercise countermeasure equipment. In this study, the LifeModeler(TM) (San Clemente, CA) biomechanics simulation software was used to model a squat exercise. The initial model using default parameters yielded physiologically reasonable hip-joint forces. However, no activation was predicted in some large muscles such as rectus femoris, which have been shown to be active in 1-g performance of the activity. Parametric testing was conducted using Monte Carlo methods and combinatorial reduction to find a muscle parameter set that more closely matched physiologically observed activation patterns during the squat exercise. Peak hip joint force using the default parameters was 2.96 times body weight (BW) and increased to 3.21 BW in an optimized, feature-selected test case. The rectus femoris was predicted to peak at 60.1% activation following muscle recruitment optimization, compared to 19.2% activation with default parameters. These results indicate the critical role that muscle parameters play in joint force estimation and the need for exploration of the solution space to achieve physiologically realistic muscle activation.

  10. Parameter Optimization for the Gaussian Model of Folded Proteins

    NASA Astrophysics Data System (ADS)

    Erman, Burak; Erkip, Albert

    2000-03-01

    Recently, we proposed an analytical model of protein folding (B. Erman, K. A. Dill, J. Chem. Phys, 112, 000, 2000) and showed that this model successfully approximates the known minimum energy configurations of two dimensional HP chains. All attractions (covalent and non-covalent) as well as repulsions were treated as if the monomer units interacted with each other through linear spring forces. Since the governing potential of the linear springs is derived from a Gaussian potential, the model is called the "Gaussian Model". The predicted conformations from the model for the hexamer and various 9mer sequences all lie on the square lattice, although the model does not contain information about the lattice structure. Results of predictions for chains with 20 or more monomers also agreed well with corresponding known minimum energy lattice structures. However, these predicted conformations did not lie exactly on the square lattice. In the present work, we treat the specific problem of optimizing the potentials (the strengths of the spring constants) so that the predictions are in better agreement with the known minimum energy structures.

  11. Finite state aeroelastic model for use in rotor design optimization

    NASA Technical Reports Server (NTRS)

    He, Chengjian; Peters, David A.

    1993-01-01

    In this article, a rotor aeroelastic model based on a newly developed finite state dynamic wake, coupled with blade finite element analysis, is described. The analysis is intended for application in rotor blade design optimization. A coupled simultaneous system of differential equations combining blade structural dynamics and aerodynamics is established in a formulation well-suited for design sensitivity computation. Each blade is assumed to be an elastic beam undergoing flap bending, lead-lag bending, elastic twist, and axial deflections. Aerodynamic loads are computed from unsteady blade element theory where the rotor three-dimensional unsteady wake is described by a generalized dynamic wake model. Correlation of results obtained from the analysis with flight test data is provided to assess model accuracy.

  12. Optimal control of CPR procedure using hemodynamic circulation model

    DOEpatents

    Lenhart, Suzanne M.; Protopopescu, Vladimir A.; Jung, Eunok

    2007-12-25

    A method for determining a chest pressure profile for cardiopulmonary resuscitation (CPR) includes the steps of representing a hemodynamic circulation model based on a plurality of difference equations for a patient, applying an optimal control (OC) algorithm to the circulation model, and determining a chest pressure profile. The chest pressure profile defines a timing pattern of externally applied pressure to a chest of the patient to maximize blood flow through the patient. A CPR device includes a chest compressor, a controller communicably connected to the chest compressor, and a computer communicably connected to the controller. The computer determines the chest pressure profile by applying an OC algorithm to a hemodynamic circulation model based on the plurality of difference equations.

  13. An improved model for TPV performance predictions and optimization

    NASA Astrophysics Data System (ADS)

    Schroeder, K. L.; Rose, M. F.; Burkhalter, J. E.

    1997-03-01

    Previously, a model has been presented for calculating the performance of a TPV system. This model has been revised into a general-purpose algorithm, improved in fidelity, and is presented here. The basic model is an energy-based formulation and evaluates both the radiant and heat source elements of a combustion-based system. Improvements in the radiant calculations include the use of ray tracing formulations and view factors for evaluating various flat plate and cylindrical configurations. Calculation of photocell temperature and performance parameters as a function of position and incident power has also been incorporated. Heat source calculations have been fully integrated into the code by the incorporation of a modified version of the NASA Complex Chemical Equilibrium Compositions and Applications (CEA) code. Additionally, coding has been incorporated to allow optimization of various system parameters and configurations. Several example cases are presented and compared, and an optimum flat plate emitter/filter/photovoltaic configuration is also described.

  14. A comparison of motor submodels in the optimal control model

    NASA Technical Reports Server (NTRS)

    Lancraft, R. E.; Kleinman, D. L.

    1978-01-01

    Properties of several structural variations in the neuromotor interface portion of the optimal control model (OCM) are investigated. For example, it is known that commanding control-rate introduces an open-loop pole at s = 0 and will generate low-frequency phase and magnitude characteristics similar to experimental data. However, this gives rise to unusually high sensitivities with respect to motor and sensor noise-ratios, thereby reducing the model's predictive capabilities. Relationships for different motor submodels are discussed to show the sources of these sensitivities. The models investigated include both pseudo motor-noise and actual (system-driving) motor-noise characterizations. The effect of explicit proprioceptive feedback in the OCM is also examined. To show graphically the effects of each submodel on system outputs, sensitivity studies are included and compared to data obtained from other tests.

  15. Optimization Model for Web Based Multimodal Interactive Simulations

    PubMed Central

    Halic, Tansel; Ahn, Woojin; De, Suvranu

    2015-01-01

    This paper presents a technique for optimizing the performance of web based multimodal interactive simulations. For such applications where visual quality and the performance of simulations directly influence user experience, overloading of hardware resources may result in unsatisfactory reduction in the quality of the simulation and user satisfaction. However, optimization of simulation performance on individual hardware platforms is not practical. Hence, we present a mixed integer programming model to optimize the performance of graphical rendering and simulation performance while satisfying application specific constraints. Our approach includes three distinct phases: identification, optimization and update. In the identification phase, the computing and rendering capabilities of the client device are evaluated using an exploratory proxy code. This data is utilized in conjunction with user specified design requirements in the optimization phase to ensure best possible computational resource allocation. The optimum solution is used for rendering (e.g. texture size, canvas resolution) and simulation parameters (e.g. simulation domain) in the update phase. Test results are presented on multiple hardware platforms with diverse computing and graphics capabilities to demonstrate the effectiveness of our approach. PMID:26085713

  16. Automated Finite Element Modeling of Wing Structures for Shape Optimization

    NASA Technical Reports Server (NTRS)

    Harvey, Michael Stephen

    1993-01-01

    The displacement formulation of the finite element method is the most general and most widely used technique for structural analysis of airplane configurations. Modern structural synthesis techniques based on the finite element method have reached a certain maturity in recent years, and large airplane structures can now be optimized with respect to sizing-type design variables for many load cases subject to a rich variety of constraints including stress, buckling, frequency, stiffness and aeroelastic constraints (Refs. 1-3). These structural synthesis capabilities use gradient-based nonlinear programming techniques to search for improved designs. For these techniques to be practical, a major improvement was required in the computational cost of finite element analyses (needed repeatedly in the optimization process). Thus, associated with the progress in structural optimization, a new perspective of structural analysis has emerged, namely, structural analysis specialized for design optimization applications, or what is known as "design oriented structural analysis" (Ref. 4). This discipline includes approximation concepts and methods for obtaining behavior sensitivity information (Ref. 1), all needed to make the optimization of large structural systems (modeled by thousands of degrees of freedom and thousands of design variables) practical and cost effective.

  17. Spatial optimization of watershed management practices for nitrogen load reduction using a modeling-optimization framework

    EPA Science Inventory

    Best management practices (BMPs) are perceived as being effective in reducing nutrient loads transported from non-point sources (NPS) to receiving water bodies. The objective of this study was to develop a modeling-optimization framework that can be used by watershed management p...

  18. WE-D-BRE-04: Modeling Optimal Concurrent Chemotherapy Schedules

    SciTech Connect

    Jeong, J; Deasy, J O

    2014-06-15

    Purpose: Concurrent chemo-radiation therapy (CCRT) has become a more common cancer treatment option, with a better tumor control rate for several tumor sites, including head and neck and lung cancer. In this work, possible optimal chemotherapy schedules were investigated by implementing chemotherapy cell-kill into a tumor response model of RT. Methods: The chemotherapy effect has been added into a published model (Jeong et al., PMB (2013) 58:4897), in which the tumor response to RT can be simulated with the effects of hypoxia and proliferation. Based on the two-compartment pharmacokinetic model, the temporal concentration of the chemotherapy agent was estimated. Log cell-kill was assumed, and the cell-kill constant was estimated from the observed increase in local control due to concurrent chemotherapy. For a simplified two-cycle CCRT regime, several different starting times and intervals were simulated with a conventional RT regime (2 Gy/fx, 5 fx/wk). The effectiveness of CCRT was evaluated in terms of the reduction in radiation dose required for 50% control, to find the optimal chemotherapy schedule. Results: Assuming the typical slope of the dose response curve (γ50 = 2), the observed 10% increase in local control rate was evaluated to be equivalent to an extra RT dose of about 4 Gy, from which the cell-kill rate of chemotherapy was derived to be about 0.35. The best response was obtained when chemotherapy was started about 3 weeks after RT began. As the interval between the two cycles decreases, the efficacy of chemotherapy increases, with a broader range of optimal starting times. Conclusion: The effect of chemotherapy has been implemented into the resource-conservation tumor response model to investigate CCRT. The results suggest that concurrent chemotherapy might be more effective when delayed for about 3 weeks, due to lower tumor burden and a larger fraction of proliferating cells after reoxygenation.

  19. Model-Based Individualized Treatment of Chemotherapeutics: Bayesian Population Modeling and Dose Optimization.

    PubMed

    Jayachandran, Devaraj; Laínez-Aguirre, José; Rundell, Ann; Vik, Terry; Hannemann, Robert; Reklaitis, Gintaras; Ramkrishna, Doraiswami

    2015-01-01

    6-Mercaptopurine (6-MP) is one of the key drugs in the treatment of many pediatric cancers, autoimmune diseases and inflammatory bowel disease. 6-MP is a prodrug, converted to an active metabolite, 6-thioguanine nucleotide (6-TGN), through an enzymatic reaction involving thiopurine methyltransferase (TPMT). Pharmacogenomic variation observed in the TPMT enzyme produces significant variation in drug response among the patient population. Despite 6-MP's widespread use and the observed variation in treatment response, efforts at quantitative optimization of dose regimens for individual patients are limited. In addition, research efforts devoted to pharmacogenomics for predicting clinical responses have proved far from ideal. In this work, we present a Bayesian population modeling approach to develop a pharmacological model for 6-MP metabolism in humans. In the face of scarce data in clinical settings, a global sensitivity analysis based model reduction approach is used to minimize the parameter space. For accurate estimation of sensitive parameters, robust optimal experimental design based on the D-optimality criterion was exploited. With the patient-specific model, a model predictive control algorithm is used to optimize the dose schedule with the objective of maintaining the 6-TGN concentration within its therapeutic window. More importantly, for the first time, we show how the incorporation of information from different levels of the biological chain of response (i.e., gene expression, enzyme phenotype, drug phenotype) plays a critical role in determining the uncertainty in predicting the therapeutic target. The model and the control approach can be utilized in the clinical setting to individualize 6-MP dosing based on the patient's ability to metabolize the drug, instead of the traditional standard-dose-for-all approach. PMID:26226448

  20. Model-Based Individualized Treatment of Chemotherapeutics: Bayesian Population Modeling and Dose Optimization

    PubMed Central

    Jayachandran, Devaraj; Laínez-Aguirre, José; Rundell, Ann; Vik, Terry; Hannemann, Robert; Reklaitis, Gintaras; Ramkrishna, Doraiswami

    2015-01-01

    6-Mercaptopurine (6-MP) is one of the key drugs in the treatment of many pediatric cancers, autoimmune diseases and inflammatory bowel disease. 6-MP is a prodrug, converted to an active metabolite, 6-thioguanine nucleotide (6-TGN), through an enzymatic reaction involving thiopurine methyltransferase (TPMT). Pharmacogenomic variation observed in the TPMT enzyme produces significant variation in drug response among the patient population. Despite 6-MP's widespread use and the observed variation in treatment response, efforts at quantitative optimization of dose regimens for individual patients are limited. In addition, research efforts devoted to pharmacogenomics for predicting clinical responses have proved far from ideal. In this work, we present a Bayesian population modeling approach to develop a pharmacological model for 6-MP metabolism in humans. In the face of scarce data in clinical settings, a global sensitivity analysis based model reduction approach is used to minimize the parameter space. For accurate estimation of sensitive parameters, robust optimal experimental design based on the D-optimality criterion was exploited. With the patient-specific model, a model predictive control algorithm is used to optimize the dose schedule with the objective of maintaining the 6-TGN concentration within its therapeutic window. More importantly, for the first time, we show how the incorporation of information from different levels of the biological chain of response (i.e., gene expression, enzyme phenotype, drug phenotype) plays a critical role in determining the uncertainty in predicting the therapeutic target. The model and the control approach can be utilized in the clinical setting to individualize 6-MP dosing based on the patient's ability to metabolize the drug, instead of the traditional standard-dose-for-all approach. PMID:26226448

  1. Optimal aeroassisted coplanar orbital transfer using an energy model

    NASA Astrophysics Data System (ADS)

    Halyo, Nesim; Taylor, Deborah B.

    1989-05-01

    The atmospheric portion of the trajectories for aeroassisted coplanar orbit transfer is investigated. The equations of motion for the problem are expressed using a reduced-order model with the total vehicle energy, kinetic plus potential, as the independent variable rather than time. The order reduction is achieved analytically without approximation of the vehicle dynamics. In this model, the problem of coplanar orbit transfer is seen as one in which a given amount of energy must be transferred from the vehicle to the atmosphere during the trajectory without overheating the vehicle. An optimal control problem is posed where a linear combination of the integrated square of the heating rate and the vehicle drag is the cost function to be minimized. The necessary conditions for optimality are obtained. These result in a fourth-order two-point boundary-value problem. A parametric study of the optimal guidance trajectory, in which the proportion of the heating rate term versus the drag varies, is made. Simulations of the guidance trajectories are presented.

  2. Multi-objective optimization for model predictive control.

    PubMed

    Wojsznis, Willy; Mehta, Ashish; Wojsznis, Peter; Thiele, Dirk; Blevins, Terry

    2007-06-01

    This paper presents a technique of multi-objective optimization for Model Predictive Control (MPC) in which the optimization has three levels of the objective function, in order of priority: handling constraints, maximizing economics, and maintaining control. The greatest weights are assigned dynamically to control or constraint variables that are predicted to be out of their limits. The weights assigned for economics must outweigh those assigned for control objectives. Control variables (CVs) can be controlled at fixed targets or within one- or two-sided ranges around the targets. Manipulated variables (MVs) can have assigned targets too, which may be predefined values or current actual values. This MV functionality is extremely useful when economic objectives are not defined for some or all of the MVs. To achieve this complex operation, handle process outputs predicted to go out of limits, and have a guaranteed solution for any condition, the technique makes use of the priority structure, penalties on slack variables, and redefinition of the constraint and control model. An engineering implementation of this approach is shown in the MPC embedded in an industrial control system. The optimization and control of a distillation column, the standard Shell heavy oil fractionator (HOF) problem, is adequately achieved with this MPC. PMID:17382946
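
    A minimal sketch of the priority-by-weights idea under stated assumptions: a toy one-MV, one-CV static-gain model, with the constraint tier enforced through a penalized slack variable (all gains, limits, and weights are hypothetical; the industrial MPC described above is far richer):

      import numpy as np
      from scipy.optimize import minimize

      # Toy one-step MPC: a single MV (u) drives a single CV through a static
      # gain model. All numbers are hypothetical; the point is the weight tiers.
      GAIN, CV0 = 2.0, 1.0                    # predicted CV = CV0 + GAIN * u
      CV_HI = 8.0                             # constraint limit, enforced via slack
      CV_TARGET = 5.0                         # control target (lowest-priority tier)
      W_CON, W_ECON, W_CTRL = 1e6, 1e2, 1.0   # priority expressed by weight magnitude

      def cost(x):
          u, slack = x
          cv = CV0 + GAIN * u
          return (W_CON * slack**2                  # tier 1: limit violations
                  - W_ECON * u                      # tier 2: economics (maximize u)
                  + W_CTRL * (cv - CV_TARGET)**2)   # tier 3: hold CV near target

      # slack must cover any predicted constraint violation and stay nonnegative
      cons = [{'type': 'ineq', 'fun': lambda x: x[1] - (CV0 + GAIN * x[0] - CV_HI)},
              {'type': 'ineq', 'fun': lambda x: x[1]}]
      res = minimize(cost, x0=[0.0, 0.0], constraints=cons)
      print("MV move:", round(res.x[0], 3), "predicted CV:", round(CV0 + GAIN * res.x[0], 3))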

  3. Highly optimized tight-binding model of silicon

    SciTech Connect

    Lenosky, T.J.; Kress, J.D.; Kwon, I.; Voter, A.F.; Edwards, B.; Richards, D.F.; Yang, S.; Adams, J.B.

    1997-01-01

    We have fit an orthogonal tight-binding model of silicon with a minimal (s,p) basis and a repulsive pair potential. The pair potential and the tight-binding matrix elements are represented as cubic splines with a 5.24-Å fixed radial cutoff in order to allow maximum flexibility. Using a numerical procedure, the spline parameters were fit to simultaneously optimize agreement with ab initio force and energy data on clusters, liquid, and amorphous systems as well as experimental elastic constants, phonon frequencies, and Grüneisen parameter values. Many such fits were performed to obtain a potential that we judged to be optimal, within the implicit limitations of our potential form. The resulting optimized potential describes many properties very accurately and should be a useful model given its relative simplicity and speed. Our fitting method is not difficult to apply and should be applicable to many other systems. © 1997 The American Physical Society
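
    A minimal sketch of the fitting step under stated assumptions (knots, configurations, and target energies are hypothetical stand-ins for the ab initio data): since a cubic spline is linear in its knot values, a pair-potential fit of this kind reduces to linear least squares:

      import numpy as np
      from scipy.interpolate import CubicSpline

      RCUT = 5.24                        # fixed radial cutoff from the abstract (Angstrom)
      knots = np.linspace(2.0, RCUT, 6)  # spline knots; the count is a free choice here

      def basis_row(distances):
          """Spline evaluation is linear in the knot values, so the pair energy of
          a configuration is a dot product of a basis row with those values."""
          row = np.zeros(len(knots))
          for k in range(len(knots)):
              unit = np.zeros(len(knots)); unit[k] = 1.0
              s = CubicSpline(knots, unit, bc_type=((1, 0.0), (1, 0.0)))
              d = np.asarray(distances)
              row[k] = np.where(d < RCUT, s(np.clip(d, knots[0], RCUT)), 0.0).sum()
          return row

      # Hypothetical stand-ins for ab initio reference data: (pair distances, energy).
      configs = [([2.35, 2.35, 3.84], 1.9), ([2.5, 4.0], 1.1), ([3.0], 0.4),
                 ([2.2, 2.2], 2.6), ([4.5, 4.9], 0.1), ([2.8, 3.3, 3.9], 1.2)]

      A = np.array([basis_row(d) for d, _ in configs])
      b = np.array([e for _, e in configs])
      values, *_ = np.linalg.lstsq(A, b, rcond=None)   # least-squares spline fit
      print("fitted repulsive pair potential at knots:", np.round(values, 3))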

  4. Optimal aeroassisted coplanar orbital transfer using an energy model

    NASA Technical Reports Server (NTRS)

    Halyo, Nesim; Taylor, Deborah B.

    1989-01-01

    The atmospheric portion of the trajectories for the aeroassisted coplanar orbit transfer was investigated. The equations of motion for the problem are expressed using a reduced-order model, with total vehicle energy, kinetic plus potential, as the independent variable rather than time. The order reduction is achieved analytically without an approximation of the vehicle dynamics. In this model, the problem of coplanar orbit transfer is seen as one in which a given amount of energy must be transferred from the vehicle to the atmosphere during the trajectory without overheating the vehicle. An optimal control problem is posed where a linear combination of the integrated square of the heating rate and the vehicle drag is the cost function to be minimized. The necessary conditions for optimality are obtained; these result in a fourth-order two-point boundary-value problem. A parametric study of the optimal guidance trajectory is made in which the proportion of the heating-rate term to the drag term is varied. Simulations of the guidance trajectories are presented.

  5. Verification of immune response optimality through cybernetic modeling.

    PubMed

    Batt, B C; Kompala, D S

    1990-02-01

    An immune response cascade that is T cell independent begins with the stimulation of virgin lymphocytes by antigen to differentiate into large lymphocytes. These immune cells can either replicate themselves or differentiate into plasma cells or memory cells. Plasma cells produce antibody at a specific rate up to two orders of magnitude greater than large lymphocytes. However, plasma cells have short life-spans and cannot replicate. Memory cells produce only surface antibody, but in the event of a subsequent infection by the same antigen, memory cells revert rapidly to large lymphocytes. Immunologic memory is maintained throughout the organism's lifetime. Many immunologists believe that the optimal response strategy calls for large lymphocytes to replicate first, then differentiate into plasma cells and when the antigen has been nearly eliminated, they form memory cells. A mathematical model incorporating the concept of cybernetics has been developed to study the optimality of the immune response. Derived from the matching law of microeconomics, cybernetic variables control the allocation of large lymphocytes to maximize the instantaneous antibody production rate at any time during the response in order to most efficiently inactivate the antigen. A mouse is selected as the model organism and bacteria as the replicating antigen. In addition to verifying the optimal switching strategy, results showing how the immune response is affected by antigen growth rate, initial antigen concentration, and the number of antibodies required to eliminate an antigen are included. PMID:2338827
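
    A crude sketch of a cybernetic (matching-law) allocation inside an ODE model of this cascade; all rates are hypothetical and the allocation rule is a simple proxy, not the authors' formulation:

      import numpy as np
      from scipy.integrate import solve_ivp

      # Hypothetical rates: large-lymphocyte replication/differentiation, antibody
      # production (plasma cells ~100x large lymphocytes), antigen growth/kill.
      R_RATE, A_LYMPH, A_PLASMA = 1.0, 1.0, 100.0
      G_ANT, K_KILL, K_SWITCH = 0.8, 0.05, 2.0

      def rhs(t, y):
          L, P, Ag = y
          # Crude matching-law proxy: the "return" on replication dominates while
          # antigen is plentiful; differentiation to plasma cells dominates as the
          # antigen is cleared (the replicate-first strategy from the abstract).
          u_rep = Ag / (Ag + K_SWITCH)
          u_dif = 1.0 - u_rep
          antibody_rate = A_LYMPH * L + A_PLASMA * P
          dL = R_RATE * (u_rep - u_dif) * L
          dP = R_RATE * u_dif * L
          dAg = G_ANT * Ag - K_KILL * antibody_rate * Ag
          return [dL, dP, dAg]

      sol = solve_ivp(rhs, (0.0, 15.0), [1.0, 0.0, 10.0], method="LSODA")
      L, P, Ag = sol.y[:, -1]
      print(f"final antigen {Ag:.2e}, plasma cells {P:.1f}, large lymphocytes {L:.2f}")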

  6. Martian Radiative Transfer Modeling Using the Optimal Spectral Sampling Method

    NASA Technical Reports Server (NTRS)

    Eluszkiewicz, J.; Cady-Pereira, K.; Uymin, G.; Moncet, J.-L.

    2005-01-01

    The large volume of existing and planned infrared observations of Mars has prompted the development of a new martian radiative transfer model that could be used in the retrievals of atmospheric and surface properties. The model is based on the Optimal Spectral Sampling (OSS) method [1]. The method is a fast and accurate monochromatic technique applicable to a wide range of remote sensing platforms (from microwave to UV) and was originally developed for the real-time processing of infrared and microwave data acquired by instruments aboard the satellites forming part of the next-generation global weather satellite system NPOESS (National Polar-orbiting Operational Environmental Satellite System) [2]. As part of our on-going research related to the radiative properties of the martian polar caps, we have begun the development of a martian OSS model with the goal of using it to perform the self-consistent atmospheric corrections necessary to retrieve cap emissivity from Thermal Emission Spectrometer (TES) spectra. While the caps will provide the initial focus area for applying the new model, it is hoped that the model will be of interest to the wider Mars remote sensing community.

  7. Multisource modeling of flattening filter free (FFF) beam and the optimization of model parameters

    SciTech Connect

    Cho, Woong; Kielar, Kayla N.; Mok, Ed; Xing Lei; Park, Jeong-Hoon; Jung, Won-Gyun; Suh, Tae-Suk

    2011-04-15

    Purpose: With the introduction of flattening filter free (FFF) linear accelerators to radiation oncology, new analytical source models for FFF beams applicable to current treatment planning systems are needed. In this work, a multisource model for the FFF beam and the optimization of the involved model parameters were designed. Methods: The model is based on a previous three-source model proposed by Yang et al. ["A three-source model for the calculation of head scatter factors," Med. Phys. 29, 2024-2033 (2002)]. An off-axis ratio (OAR) of photon fluence was introduced to the primary source term to generate cone-shaped profiles. The parameters of the source model were determined from measured head scatter factors using a line-search optimization technique. The OAR of the photon fluence was determined from a measured dose profile of a 40×40 cm² field size with the same optimization technique, but a new method to acquire gradient terms for OARs was developed to enhance the speed of the optimization process. The improved model was validated with measured dose profiles from 3×3 to 40×40 cm² field sizes at 6 and 10 MV from a TrueBeam STx linear accelerator. Furthermore, planar dose distributions for clinically used radiation fields were also calculated and compared to measurements from a 2D array detector using the gamma index method. Results: All dose values for the calculated profiles agreed with the measured dose profiles within 0.5% at 6 and 10 MV, except for some low-dose regions for larger field sizes. A slight overestimation of 1%-4% was seen in the lower penumbra region near the field edge for the large field sizes. The planar dose calculations showed comparable passing rates (>98%) when the criterion of the gamma index method was selected to be 3%/3 mm. Conclusions: The developed source model showed good agreement between measured and calculated dose distributions. The model is easily applicable to any other linear accelerator using FFF beams.

  8. Traveling waves in an optimal velocity model of freeway traffic.

    PubMed

    Berg, P; Woods, A

    2001-03-01

    Car-following models provide both a tool to describe traffic flow and algorithms for autonomous cruise control systems. Recently developed optimal velocity models contain a relaxation term that assigns a desirable speed to each headway and a response time over which drivers adjust to optimal velocity conditions. These models predict traffic breakdown phenomena analogous to real traffic instabilities. In order to deepen our understanding of these models, in this paper, we examine the transition from a linear stable stream of cars of one headway into a linear stable stream of a second headway. Numerical results of the governing equations identify a range of transition phenomena, including monotonic and oscillating travelling waves and a time-dependent dispersive adjustment wave. However, for certain conditions, we find that the adjustment takes the form of a nonlinear traveling wave from the upstream headway to a third, intermediate headway, followed by either another traveling wave or a dispersive wave further downstream matching the downstream headway. This intermediate value of the headway is selected such that the nonlinear traveling wave is the fastest stable traveling wave which is observed to develop in the numerical calculations. The development of these nonlinear waves, connecting linear stable flows of two different headways, is somewhat reminiscent of stop-start waves in congested flow on freeways. The different types of adjustments are classified in a phase diagram depending on the upstream and downstream headway and the response time of the model. The results have profound consequences for autonomous cruise control systems. For an autocade of both identical and different vehicles, the control system itself may trigger formations of nonlinear, steep wave transitions. Further information is available [Y. Sugiyama, Traffic and Granular Flow (World Scientific, Singapore, 1995), p. 137]. PMID:11308709
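
    The optimal velocity model itself is compact: dv_n/dt = a [V(h_n) - v_n], where h_n is the headway of car n and V is a saturating optimal velocity function. A minimal ring-road simulation, assuming the usual Bando-type tanh form and hypothetical parameter values:

      import numpy as np

      # Bando-type optimal velocity function; parameter values are hypothetical
      # but follow the usual form V(h) = V0 * (tanh(h - hc) + tanh(hc)).
      V0, HC, A = 1.0, 2.0, 1.0      # speed scale, safe headway, response rate

      def V(h):
          return V0 * (np.tanh(h - HC) + np.tanh(HC))

      # N cars on a ring road, each relaxing toward the optimal velocity of its
      # current headway: dv/dt = a * (V(h) - v).
      N, ROAD, DT, STEPS = 50, 150.0, 0.05, 4000
      rng = np.random.default_rng(2)
      x = np.sort(np.linspace(0.0, ROAD, N, endpoint=False) + 0.1 * rng.normal(size=N))
      v = V(np.diff(x, append=x[0] + ROAD))

      for _ in range(STEPS):
          h = np.diff(x, append=x[0] + ROAD)   # headways; unwrapped ring coordinates
          v += DT * A * (V(h) - v)
          x += DT * v

      print(f"headway spread after relaxation: {h.std():.3f}")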

  9. Traveling waves in an optimal velocity model of freeway traffic.

    PubMed

    Berg, P; Woods, A

    2001-03-01

    Car-following models provide both a tool to describe traffic flow and algorithms for autonomous cruise control systems. Recently developed optimal velocity models contain a relaxation term that assigns a desirable speed to each headway and a response time over which drivers adjust to optimal velocity conditions. These models predict traffic breakdown phenomena analogous to real traffic instabilities. In order to deepen our understanding of these models, in this paper, we examine the transition from a linear stable stream of cars of one headway into a linear stable stream of a second headway. Numerical results of the governing equations identify a range of transition phenomena, including monotonic and oscillating travelling waves and a time-dependent dispersive adjustment wave. However, for certain conditions, we find that the adjustment takes the form of a nonlinear traveling wave from the upstream headway to a third, intermediate headway, followed by either another traveling wave or a dispersive wave further downstream matching the downstream headway. This intermediate value of the headway is selected such that the nonlinear traveling wave is the fastest stable traveling wave which is observed to develop in the numerical calculations. The development of these nonlinear waves, connecting linear stable flows of two different headways, is somewhat reminiscent of stop-start waves in congested flow on freeways. The different types of adjustments are classified in a phase diagram depending on the upstream and downstream headway and the response time of the model. The results have profound consequences for autonomous cruise control systems. For an autocade of both identical and different vehicles, the control system itself may trigger formations of nonlinear, steep wave transitions. Further information is available [Y. Sugiyama, Traffic and Granular Flow (World Scientific, Singapore, 1995), p. 137].

  10. Traveling waves in an optimal velocity model of freeway traffic

    NASA Astrophysics Data System (ADS)

    Berg, Peter; Woods, Andrew

    2001-03-01

    Car-following models provide both a tool to describe traffic flow and algorithms for autonomous cruise control systems. Recently developed optimal velocity models contain a relaxation term that assigns a desirable speed to each headway and a response time over which drivers adjust to optimal velocity conditions. These models predict traffic breakdown phenomena analogous to real traffic instabilities. In order to deepen our understanding of these models, in this paper, we examine the transition from a linear stable stream of cars of one headway into a linear stable stream of a second headway. Numerical results of the governing equations identify a range of transition phenomena, including monotonic and oscillating travelling waves and a time-dependent dispersive adjustment wave. However, for certain conditions, we find that the adjustment takes the form of a nonlinear traveling wave from the upstream headway to a third, intermediate headway, followed by either another traveling wave or a dispersive wave further downstream matching the downstream headway. This intermediate value of the headway is selected such that the nonlinear traveling wave is the fastest stable traveling wave which is observed to develop in the numerical calculations. The development of these nonlinear waves, connecting linear stable flows of two different headways, is somewhat reminiscent of stop-start waves in congested flow on freeways. The different types of adjustments are classified in a phase diagram depending on the upstream and downstream headway and the response time of the model. The results have profound consequences for autonomous cruise control systems. For an autocade of both identical and different vehicles, the control system itself may trigger formations of nonlinear, steep wave transitions. Further information is available [Y. Sugiyama, Traffic and Granular Flow (World Scientific, Singapore, 1995), p. 137].

  11. Considerations for parameter optimization and sensitivity in climate models.

    PubMed

    Neelin, J David; Bracco, Annalisa; Luo, Hao; McWilliams, James C; Meyerson, Joyce E

    2010-12-14

    Climate models exhibit high sensitivity in some respects, such as for differences in predicted precipitation changes under global warming. Despite successful large-scale simulations, regional climatology features prove difficult to constrain toward observations, with challenges including high-dimensionality, computationally expensive simulations, and ambiguity in the choice of objective function. In an atmospheric General Circulation Model forced by observed sea surface temperature or coupled to a mixed-layer ocean, many climatic variables yield rms-error objective functions that vary smoothly through the feasible parameter range. This smoothness occurs despite nonlinearity strong enough to reverse the curvature of the objective function in some parameters, and to imply limitations on multimodel ensemble means as an estimator of global warming precipitation changes. Low-order polynomial fits to the model output spatial fields as a function of parameter (quadratic in model field, fourth-order in objective function) yield surprisingly successful metamodels for many quantities and facilitate a multiobjective optimization approach. Tradeoffs arise as optima for different variables occur at different parameter values, but with agreement in certain directions. Optima often occur at the limit of the feasible parameter range, identifying key parameterization aspects warranting attention—here the interaction of convection with free tropospheric water vapor. Analytic results for spatial fields of leading contributions to the optimization help to visualize tradeoffs at a regional level, e.g., how mismatches between sensitivity and error spatial fields yield regional error under minimization of global objective functions. The approach is sufficiently simple to guide parameter choices and to aid intercomparison of sensitivity properties among climate models. PMID:21115841
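
    A minimal sketch of the metamodel idea, assuming a single parameter and hypothetical stand-ins for the expensive model runs (a quadratic fit to the output makes the squared-error objective fourth-order in the parameter):

      import numpy as np

      # Hypothetical stand-in: an expensive model output y(p) sampled at a few
      # parameter values; the metamodel is quadratic in the output field,
      # hence fourth-order in the rms-error objective.
      p_samples = np.array([0.5, 1.0, 1.5, 2.0, 2.5])
      y_samples = np.array([3.1, 2.2, 1.9, 2.3, 3.2])   # pretend model runs
      y_obs = 2.0                                       # observational target

      quad = np.polyfit(p_samples, y_samples, 2)        # quadratic metamodel
      objective = lambda p: (np.polyval(quad, p) - y_obs) ** 2   # quartic in p

      p_grid = np.linspace(0.5, 2.5, 401)
      p_opt = p_grid[np.argmin(objective(p_grid))]
      print(f"metamodel optimum at p = {p_opt:.3f}")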

  12. Considerations for parameter optimization and sensitivity in climate models

    PubMed Central

    Neelin, J. David; Bracco, Annalisa; Luo, Hao; McWilliams, James C.; Meyerson, Joyce E.

    2010-01-01

    Climate models exhibit high sensitivity in some respects, such as for differences in predicted precipitation changes under global warming. Despite successful large-scale simulations, regional climatology features prove difficult to constrain toward observations, with challenges including high-dimensionality, computationally expensive simulations, and ambiguity in the choice of objective function. In an atmospheric General Circulation Model forced by observed sea surface temperature or coupled to a mixed-layer ocean, many climatic variables yield rms-error objective functions that vary smoothly through the feasible parameter range. This smoothness occurs despite nonlinearity strong enough to reverse the curvature of the objective function in some parameters, and to imply limitations on multimodel ensemble means as an estimator of global warming precipitation changes. Low-order polynomial fits to the model output spatial fields as a function of parameter (quadratic in model field, fourth-order in objective function) yield surprisingly successful metamodels for many quantities and facilitate a multiobjective optimization approach. Tradeoffs arise as optima for different variables occur at different parameter values, but with agreement in certain directions. Optima often occur at the limit of the feasible parameter range, identifying key parameterization aspects warranting attention—here the interaction of convection with free tropospheric water vapor. Analytic results for spatial fields of leading contributions to the optimization help to visualize tradeoffs at a regional level, e.g., how mismatches between sensitivity and error spatial fields yield regional error under minimization of global objective functions. The approach is sufficiently simple to guide parameter choices and to aid intercomparison of sensitivity properties among climate models. PMID:21115841

  13. Ultradiscrete optimal velocity model: A cellular-automaton model for traffic flow and linear instability of high-flux traffic

    NASA Astrophysics Data System (ADS)

    Kanai, Masahiro; Isojima, Shin; Nishinari, Katsuhiro; Tokihiro, Tetsuji

    2009-05-01

    In this paper, we propose the ultradiscrete optimal velocity model, a cellular-automaton model for traffic flow, obtained by applying the ultradiscrete method to the optimal velocity model. The optimal velocity model, defined by a differential equation, is one of the most important traffic models; in particular, it successfully reproduces the instability of high-flux traffic. It is often pointed out that there is a close relation between the optimal velocity model and the modified Korteweg-de Vries (mKdV) equation, a soliton equation. Meanwhile, the ultradiscrete method enables one to reduce soliton equations to cellular automata which inherit the solitonic nature, such as an infinite number of conservation laws and soliton solutions. We find that the theory of soliton equations is applicable to generic differential equations, and the simulation results reveal that the model obtained reproduces both absolutely unstable and convectively unstable flows, as the optimal velocity model does.

  14. Optimal control model of arm configuration in a reaching task

    NASA Astrophysics Data System (ADS)

    Yamaguchi, Gary T.; Kakavand, Ali

    1996-05-01

    It was hypothesized that the configuration of the upper limb during a static hand positioning task could be predicted using a dynamic musculoskeletal model and an optimal control routine. Both rhesus monkey and human upper extremity models were formulated; each had seven degrees of freedom (7-DOF) and 39 musculotendon pathways. A variety of configurations were generated about a physiologically measured configuration using the dynamic models and perturbations. The pseudoinverse optimal control method was applied to compute the minimum cost C at each of the generated configurations. The cost function C is described by the Crowninshield-Brand (1981) criterion, which relates C (the sum of muscle stresses squared) to the endurance time of a physiological task. The configuration with the minimum cost was compared to the configurations chosen by one monkey (four trials) and by eight human subjects (eight trials each). Results are generally good, but not for all joint angles, suggesting that muscular effort is likely to be one major factor in choosing a preferred static arm posture.
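
    A minimal sketch of the pseudoinverse step under stated assumptions: minimizing the sum of squared muscle stresses subject to a joint-torque balance is a weighted minimum-norm problem (moment arms, cross sections, and moments below are hypothetical, and muscle-force nonnegativity is ignored):

      import numpy as np

      # Minimize a Crowninshield-Brand-style cost C = sum((F_i / A_i)^2) subject
      # to the joint-torque balance R @ F = M, via a weighted pseudoinverse.
      R = np.array([[0.04, -0.03, 0.02, 0.00],
                    [0.00,  0.02, -0.01, 0.03]])   # moment arms (m), hypothetical
      A = np.array([6.0, 4.0, 3.0, 5.0])           # cross-sectional areas (cm^2)
      M = np.array([2.0, 1.0])                     # required joint moments (N m)

      # Substituting s_i = F_i / A_i turns the problem into a plain minimum-norm
      # least-squares solve: (R * A) @ s = M, then F = A * s.
      s, *_ = np.linalg.lstsq(R * A, M, rcond=None)
      F = A * s
      print("muscle forces:", np.round(F, 2), " cost:", np.round(np.sum(s**2), 4))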

  15. Regional optimization model for locating supplemental recycling depots.

    PubMed

    Lin, Hung-Yueh; Chen, Guan-Hwa

    2009-05-01

    In Taiwan, vendors and businesses that sell products belonging to six classes of recyclable materials are required to provide recycling containers at their local retail stores. The integration of these private sector facilities with the recycling depots established by local authorities has the potential to significantly improve residential access to the recycling process. An optimization model is accordingly developed in this work to assist local authorities with the identification of regions that require additional recycling depots for better access and integration with private facilities. Spatial accessibility, population loading and integration efficiency indicators are applied to evaluate whether or not a geographic region is in need of new recycling depots. The program developed here uses a novel algorithm to obtain the optimal solution by a complete enumeration of all cells making up the study area. A case study of a region in Central Taiwan is presented to demonstrate the use of the proposed model and the three indicators. The case study identifies regions without recycling points, prioritizes them based on population density, and considers the option of establishing recycling centers that are able to collect multiple classes of recycling materials. The model is able to generate information suitable for the consideration of decision-makers charged with prioritizing the installation of new recycling facilities.

  16. Multi-model groundwater-management optimization: reconciling disparate conceptual models

    NASA Astrophysics Data System (ADS)

    Timani, Bassel; Peralta, Richard

    2015-09-01

    Disagreement among policymakers often involves policy issues and differences between the decision makers' implicit utility functions. Significant disagreement can also exist concerning conceptual models of the physical system. Disagreement on the validity of a single simulation model delays discussion on policy issues and prevents the adoption of consensus management strategies. For such a contentious situation, the proposed multi-conceptual model optimization (MCMO) can help stakeholders reach a compromise strategy. MCMO computes mathematically optimal strategies that simultaneously satisfy analogous constraints and bounds in multiple numerical models that differ in boundary conditions, hydrogeologic stratigraphy, and discretization. Shadow prices and trade-offs guide the process of refining the first MCMO-developed multi-model strategy into a realistic compromise management strategy. By employing automated cycling, MCMO is practical for linear and nonlinear aquifer systems. In this reconnaissance study, MCMO application to the multilayer Cache Valley (Utah and Idaho, USA) river-aquifer system employs two simulation models with analogous background conditions but different vertical discretization and boundary conditions. The objective is to maximize additional safe pumping (beyond current pumping), subject to constraints on groundwater head and seepage from the aquifer to surface waters. MCMO application reveals that in order to protect the local ecosystem, increased groundwater pumping can satisfy only 40% of the projected increase in water demand. To explore the possibility of increasing that pumping while protecting the ecosystem, MCMO clearly identifies localities requiring additional field data. MCMO is applicable to areas and optimization problems other than those used here. Steps to prepare comparable sub-models for MCMO use are area-dependent.
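
    A minimal sketch of the multi-model constraint idea as a linear program, with hypothetical drawdown-response coefficients standing in for the two simulation models:

      import numpy as np
      from scipy.optimize import linprog

      # Two conceptual models give different drawdown responses (per unit pumping)
      # at the same control locations; a compromise strategy must satisfy both.
      # All response coefficients and bounds are hypothetical.
      resp_model1 = np.array([[0.8, 0.3], [0.2, 0.9]])   # drawdown per pumping unit
      resp_model2 = np.array([[1.1, 0.4], [0.3, 0.7]])   # same locations, model 2
      max_drawdown = np.array([5.0, 5.0])                # head constraint per point

      # linprog minimizes, so negate to maximize total additional pumping q1 + q2.
      res = linprog(c=[-1.0, -1.0],
                    A_ub=np.vstack([resp_model1, resp_model2]),
                    b_ub=np.concatenate([max_drawdown, max_drawdown]),
                    bounds=[(0, None), (0, None)])
      print("compromise pumping rates:", np.round(res.x, 2))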

  17. Swimming simply: Minimal models and stroke optimization for biological systems

    NASA Astrophysics Data System (ADS)

    Burton, Lisa; Guasto, Jeffrey S.; Stocker, Roman; Hosoi, A. E.

    2012-11-01

    In this talk, we examine how to represent the kinematics of swimming biological systems. We present a new method of extracting optimal curvature-space basis modes from high-speed video microscopy images of motile spermatozoa by tracking their flagellar kinematics. Using as few as two basis modes to characterize the swimmer's shape, we apply resistive force theory to build a model and predict the swimming speed and net translational and rotational displacement of a sperm cell over any given stroke. This low-order representation of motility yields a complete visualization of the system dynamics. The visualization tools provide refined initialization and intuition for global stroke optimization and improve motion planning by taking advantage of symmetries in the shape space to design a stroke that produces a desired net motion. Comparing the predicted optimal strokes to those observed experimentally enables us to rationalize biological motion by identifying possible optimization goals of the organism. This approach is applicable to a wide array of systems at both low and high Reynolds numbers. Battelle Memorial Institute and NSF.

  18. Hypersonic vehicle structural weight prediction using parametric modeling, finite element modeling, and structural optimization

    NASA Astrophysics Data System (ADS)

    Ngo, Dung A.; Koshiba, David A.; Moses, Paul L.

    1993-04-01

    Detailed structural analysis/optimization is required in the conceptual design stage because of the combined aerodynamic and aerothermodynamic environment. This is a time- and manpower-consuming activity which is exacerbated by constant vehicle moldline changes as a configuration matures. A simple parametric math model is presented that takes into consideration static loads and the geometry and structural weight of a baseline hypersonic vehicle in predicting the structural weight of a new configuration scaled from the baseline. The approach in developing the math model was to consider a generic parametric cross-sectional geometry that could be used to approximate the baseline geometry and to predict the behavior of the baseline when it is scaled to provide performance and design benefits. This mathematical model, calibrated to finite element analysis and structural optimization sizing results, provides accurate weight prediction for a new configuration which has been moderately scaled from a thoroughly analyzed baseline configuration. This paper presents the structural optimization weight results and the math model weight predictions for a baseline configuration and 15 scaled configurations.

  19. Modeling and optimization for a prismatic snapshot imaging polarimeter.

    PubMed

    Luo, Haitao; Oka, Kazuhiko; Hagen, Nathan; Tkaczyk, Tomasz; Dereniak, Eustace L

    2006-11-20

    Thin birefringent prisms placed near an image plane introduce sinusoidal fringes onto a 2D polarized scene, making possible a snapshot imaging polarimeter which encodes polarization information into the modulation of the fringes. This approach was introduced by Oka and Kaneko [Opt. Express 11, 1510 (2003)], who analyzed the instrument through the Mueller calculus. We show that the plane-wave assumption adopted in the Mueller theory can introduce unnecessary error in a polarimeter design. To directly take into account prism effects such as beam splitting and beam deviation, we introduce a geometric imaging model, which allows for a versatile simulation of the birefringent prisms and provides a means for optimization. A calcite visible system is investigated as an example, which shows how each design parameter affects the overall image quality and how to modify the polarimeter design to optimize overall performance. The approach is applicable to any prismatic imaging polarimeter with different prism materials and different working wavelengths. PMID:17086247

  20. Timber harvest planning a combined optimization/simulation model

    SciTech Connect

    Arthur, J.L.; Dykstra, D.P.

    1980-11-01

    A special cascading fixed charge model can be used to characterize a forest management planning problem in which the objectives are to identify the optimal shape of forest harvest cutting units and simultaneously to assign facilities for logging those units. A four-part methodology was developed to assist forest managers in analyzing areas proposed for harvesting. This methodology: analyzes harvesting feasibility; computes the optimal solution to the cascading fixed charge problem; undertakes a GASP IV simulation to provide additional information about the proposed harvesting operation; and permits the forest manager to perform a time-cost analysis that may lead to a more realistic, and thus improved, solution. (5 diagrams, 16 references, 3 tables)

  1. Using Cotton Model Simulations to Estimate Optimally Profitable Irrigation Strategies

    NASA Astrophysics Data System (ADS)

    Mauget, S. A.; Leiker, G.; Sapkota, P.; Johnson, J.; Maas, S.

    2011-12-01

    In recent decades irrigation pumping from the Ogallala Aquifer has led to declines in saturated thickness that have not been compensated for by natural recharge, which has led to questions about the long-term viability of agriculture in the cotton producing areas of west Texas. Adopting irrigation management strategies that optimize profitability while reducing irrigation waste is one way of conserving the aquifer's water resource. Here, a database of modeled cotton yields generated under drip and center pivot irrigated and dryland production scenarios is used in a stochastic dominance analysis that identifies such strategies under varying commodity price and pumping cost conditions. This database and analysis approach will serve as the foundation for a web-based decision support tool that will help producers identify optimal irrigation treatments under specified cotton price, electricity cost, and depth to water table conditions.

  2. Vibroacoustic optimization using a statistical energy analysis model

    NASA Astrophysics Data System (ADS)

    Culla, Antonio; D'Ambrogio, Walter; Fregolent, Annalisa; Milana, Silvia

    2016-08-01

    In this paper, an optimization technique for medium-high frequency dynamic problems based on the Statistical Energy Analysis (SEA) method is presented. In a SEA model, the subsystem energies are controlled by internal loss factors (ILFs) and coupling loss factors (CLFs), which in turn depend on the physical parameters of the subsystems. A preliminary sensitivity analysis of subsystem energy to the CLFs is performed to select the CLFs that are most effective on subsystem energies. Since the injected power depends not only on the external loads but on the physical parameters of the subsystems as well, it must be taken into account under certain conditions. This is accomplished in the optimization procedure, where approximate relationships between CLFs, injected power, and physical parameters are derived. The approach is applied to a typical aeronautical structure: the cabin of a helicopter.
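
    A minimal sketch of the SEA power balance that underlies such a model, solved for the energies of two coupled subsystems (the ILF, CLF, input-power, and frequency values are hypothetical):

      import numpy as np

      # Two-subsystem SEA power balance, solved for subsystem energies:
      #   P_i / omega = eta_i * E_i + (eta_ij * E_i - eta_ji * E_j)
      omega = 2 * np.pi * 1000.0        # analysis band centre (rad/s)
      eta = np.array([0.02, 0.01])      # internal loss factors (ILFs)
      eta12, eta21 = 0.005, 0.003       # coupling loss factors (CLFs)
      P = np.array([1.0, 0.0])          # injected power (W): subsystem 1 driven

      L = np.array([[eta[0] + eta12, -eta21],
                    [-eta12,          eta[1] + eta21]])
      E = np.linalg.solve(omega * L, P)
      print("subsystem energies (J):", E)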

  3. Optimal dividends in the Brownian motion risk model with interest

    NASA Astrophysics Data System (ADS)

    Fang, Ying; Wu, Rong

    2009-07-01

    In this paper, we consider a Brownian motion risk model in which, in addition, the surplus earns investment income at a constant force of interest. The objective is to find a dividend policy that maximizes the expected discounted value of dividend payments. It is well known that optimality is achieved by using a barrier strategy for an unrestricted dividend rate. However, ultimate ruin of the company is certain if a barrier strategy is applied. In many circumstances this is not desirable. This consideration leads us to impose a restriction on the dividend stream. We assume that dividends are paid to the shareholders according to admissible strategies whose dividend rate is bounded by a constant. Under this additional constraint, we show that the optimal dividend strategy is formed by a threshold strategy.
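
    A minimal Monte Carlo sketch of valuing a threshold strategy, using a crude Euler discretization rather than the paper's analytical treatment; every parameter is hypothetical:

      import numpy as np

      rng = np.random.default_rng(1)

      # Pay dividends at the maximum admissible rate c whenever the surplus
      # exceeds the threshold b; stop at ruin or at a simulation horizon.
      MU, SIGMA, R_INT, DELTA = 1.0, 2.0, 0.02, 0.05   # drift, vol, interest, discount
      C_MAX, B, X0, DT, T_MAX = 0.8, 5.0, 5.0, 0.01, 50.0

      def discounted_dividends():
          x, t, total = X0, 0.0, 0.0
          while x > 0.0 and t < T_MAX:
              rate = C_MAX if x > B else 0.0               # threshold strategy
              dx = (MU + R_INT * x - rate) * DT + SIGMA * np.sqrt(DT) * rng.normal()
              total += np.exp(-DELTA * t) * rate * DT      # discounted payout
              x, t = x + dx, t + DT
          return total

      values = [discounted_dividends() for _ in range(300)]
      print(f"estimated value of threshold strategy: {np.mean(values):.3f}")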

  4. Design Oriented Structural Modeling for Airplane Conceptual Design Optimization

    NASA Technical Reports Server (NTRS)

    Livne, Eli

    1999-01-01

    The main goal for research conducted with the support of this grant was to develop design-oriented structural optimization methods for the conceptual design of airplanes. Traditionally in conceptual design, airframe weight is estimated based on statistical equations developed over years of fitting airplane weight data in databases of similar existing airplanes. Utilization of such regression equations for the design of new airplanes can be justified only if the new airplanes use structural technology similar to the technology of the airplanes in those weight databases. If any new structural technology is to be pursued or any new unconventional configurations designed, the statistical weight equations cannot be used. In such cases any structural weight estimation must be based on rigorous "physics based" structural analysis and optimization of the airframes under consideration. Work under this grant progressed to explore airframe design-oriented structural optimization techniques along two lines of research: methods based on "fast" design-oriented finite element technology, and methods based on equivalent plate / equivalent shell models of airframes, in which the vehicle is modelled as an assembly of plate and shell components, each simulating a lifting surface or nacelle / fuselage pieces. Since response to changes in geometry is essential in conceptual design of airplanes, as well as the capability to optimize the shape itself, research supported by this grant sought to develop efficient techniques for parametrization of airplane shape and sensitivity analysis with respect to shape design variables. Towards the end of the grant period a prototype automated structural analysis code designed to work with the NASA Aircraft Synthesis conceptual design code ACSYNT was delivered to NASA Ames.

  5. Proficient brain for optimal performance: the MAP model perspective.

    PubMed

    Bertollo, Maurizio; di Fronso, Selenia; Filho, Edson; Conforto, Silvia; Schmid, Maurizio; Bortoli, Laura; Comani, Silvia; Robazza, Claudio

    2016-01-01

    Background. The main goal of the present study was to explore theta and alpha event-related desynchronization/synchronization (ERD/ERS) activity during shooting performance. We adopted the idiosyncratic framework of the multi-action plan (MAP) model to investigate different processing modes underpinning four types of performance. In particular, we were interested in examining the neural activity associated with optimal-automated (Type 1) and optimal-controlled (Type 2) performances. Methods. Ten elite shooters (6 male and 4 female) with extensive international experience participated in the study. ERD/ERS analysis was used to investigate cortical dynamics during performance. A 4 × 3 (performance types × time) repeated measures analysis of variance was performed to test the differences among the four types of performance during the three seconds preceding the shots for theta, low alpha, and high alpha frequency bands. The dependent variables were the ERD/ERS percentages in each frequency band (i.e., theta, low alpha, high alpha) for each electrode site across the scalp. This analysis was conducted on 120 shots for each participant in three different frequency bands and the individual data were then averaged. Results. We found ERS to be mainly associated with optimal-automatic performance, in agreement with the "neural efficiency hypothesis." We also observed more ERD as related to optimal-controlled performance in conditions of "neural adaptability" and proficient use of cortical resources. Discussion. These findings are congruent with the MAP conceptualization of four performance states, in which unique psychophysiological states underlie distinct performance-related experiences. From an applied point of view, our findings suggest that the MAP model can be used as a framework to develop performance enhancement strategies based on cognitive and neurofeedback techniques. PMID:27257557
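
    The ERD/ERS quantity itself is a simple relative band-power change (negative values indicate ERD, positive values ERS); a minimal sketch with hypothetical power samples:

      import numpy as np

      # Classic band-power ERD/ERS percentage: power change in a frequency band
      # relative to a reference interval; negative = ERD, positive = ERS.
      def erd_ers_percent(band_power_task, band_power_reference):
          a = np.mean(band_power_task)
          r = np.mean(band_power_reference)
          return (a - r) / r * 100.0

      # Hypothetical alpha-band power samples (uV^2) before vs during aiming.
      reference = np.array([4.0, 4.2, 3.9, 4.1])
      task = np.array([4.8, 5.1, 4.9, 5.0])
      print(f"high-alpha ERS: {erd_ers_percent(task, reference):+.1f}%")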

  6. Proficient brain for optimal performance: the MAP model perspective

    PubMed Central

    di Fronso, Selenia; Filho, Edson; Conforto, Silvia; Schmid, Maurizio; Bortoli, Laura; Comani, Silvia; Robazza, Claudio

    2016-01-01

    Background. The main goal of the present study was to explore theta and alpha event-related desynchronization/synchronization (ERD/ERS) activity during shooting performance. We adopted the idiosyncratic framework of the multi-action plan (MAP) model to investigate different processing modes underpinning four types of performance. In particular, we were interested in examining the neural activity associated with optimal-automated (Type 1) and optimal-controlled (Type 2) performances. Methods. Ten elite shooters (6 male and 4 female) with extensive international experience participated in the study. ERD/ERS analysis was used to investigate cortical dynamics during performance. A 4 × 3 (performance types × time) repeated measures analysis of variance was performed to test the differences among the four types of performance during the three seconds preceding the shots for theta, low alpha, and high alpha frequency bands. The dependent variables were the ERD/ERS percentages in each frequency band (i.e., theta, low alpha, high alpha) for each electrode site across the scalp. This analysis was conducted on 120 shots for each participant in three different frequency bands and the individual data were then averaged. Results. We found ERS to be mainly associated with optimal-automatic performance, in agreement with the “neural efficiency hypothesis.” We also observed more ERD as related to optimal-controlled performance in conditions of “neural adaptability” and proficient use of cortical resources. Discussion. These findings are congruent with the MAP conceptualization of four performance states, in which unique psychophysiological states underlie distinct performance-related experiences. From an applied point of view, our findings suggest that the MAP model can be used as a framework to develop performance enhancement strategies based on cognitive and neurofeedback techniques. PMID:27257557

  7. Modeling and Optimization for Management of Intermittent Water Supply

    NASA Astrophysics Data System (ADS)

    Lieb, A. M.; Wilkening, J.; Rycroft, C.

    2014-12-01

    In many urban areas, piped water is supplied only intermittently, as valves direct water to different parts of the water distribution system at different times. The flow is transient and may transition between free-surface and pressurized, resulting in complex dynamical features with important consequences for water suppliers and users. These consequences include degradation of distribution system components, compromised water quality, and inequitable water availability. The goal of this work is to model the important dynamics and identify operating conditions that mitigate certain negative effects of intermittent water supply. Specifically, we will look at controlling valve parameters, which occur as boundary conditions in a network model of transient flow through closed pipes with free-surface/pressurized transitions. Gradient-based optimization will be used to find boundary values that minimize pressure gradients and ensure equitable water availability at system endpoints.

  8. Numerical Modeling and Optimization of Warm-water Heat Sinks

    NASA Astrophysics Data System (ADS)

    Hadad, Yaser; Chiarot, Paul

    2015-11-01

    For cooling in large data-centers and supercomputers, water is increasingly replacing air as the working fluid in heat sinks. Utilizing water provides unique capabilities; for example: higher heat capacity, Prandtl number, and convection heat transfer coefficient. The use of warm, rather than chilled, water has the potential to provide increased energy efficiency. The geometric and operating parameters of the heat sink govern its performance. Numerical modeling is used to examine the influence of geometry and operating conditions on key metrics such as thermal and flow resistance. This model also facilitates studies on cooling of electronic chip hot spots and failure scenarios. We report on the optimal parameters for a warm-water heat sink to achieve maximum cooling performance.

  9. Leukocyte Motility Models Assessed through Simulation and Multi-objective Optimization-Based Model Selection

    PubMed Central

    Read, Mark N; Bailey, Jacqueline; Timmis, Jon; Chtanova, Tatyana

    2016-01-01

    The advent of two-photon microscopy now reveals unprecedented, detailed spatio-temporal data on cellular motility and interactions in vivo. Understanding cellular motility patterns is key to gaining insight into the development and possible manipulation of the immune response. Computational simulation has become an established technique for understanding immune processes and evaluating hypotheses in the context of experimental data, and there is clear scope to integrate microscopy-informed motility dynamics. However, determining which motility model best reflects in vivo motility is non-trivial: 3D motility is an intricate process requiring several metrics to characterize. This complicates model selection and parameterization, which must be performed against several metrics simultaneously. Here we evaluate Brownian motion, Lévy walk and several correlated random walks (CRWs) against the motility dynamics of neutrophils and lymph node T cells under inflammatory conditions by simultaneously considering cellular translational and turn speeds, and meandering indices. Heterogeneous cells exhibiting a continuum of inherent translational speeds and directionalities comprise both datasets, a feature significantly improving capture of in vivo motility when simulated as a CRW. Furthermore, translational and turn speeds are inversely correlated, and the corresponding CRW simulation again improves capture of our in vivo data, albeit to a lesser extent. In contrast, Brownian motion poorly reflects our data. Lévy walk is competitive in capturing some aspects of neutrophil motility, but T cell directional persistence only, therein highlighting the importance of evaluating models against several motility metrics simultaneously. This we achieve through novel application of multi-objective optimization, wherein each model is independently implemented and then parameterized to identify optimal trade-offs in performance against each metric. The resultant Pareto fronts of optimal

  10. Leukocyte Motility Models Assessed through Simulation and Multi-objective Optimization-Based Model Selection.

    PubMed

    Read, Mark N; Bailey, Jacqueline; Timmis, Jon; Chtanova, Tatyana

    2016-09-01

    The advent of two-photon microscopy now reveals unprecedented, detailed spatio-temporal data on cellular motility and interactions in vivo. Understanding cellular motility patterns is key to gaining insight into the development and possible manipulation of the immune response. Computational simulation has become an established technique for understanding immune processes and evaluating hypotheses in the context of experimental data, and there is clear scope to integrate microscopy-informed motility dynamics. However, determining which motility model best reflects in vivo motility is non-trivial: 3D motility is an intricate process requiring several metrics to characterize. This complicates model selection and parameterization, which must be performed against several metrics simultaneously. Here we evaluate Brownian motion, Lévy walk and several correlated random walks (CRWs) against the motility dynamics of neutrophils and lymph node T cells under inflammatory conditions by simultaneously considering cellular translational and turn speeds, and meandering indices. Heterogeneous cells exhibiting a continuum of inherent translational speeds and directionalities comprise both datasets, a feature significantly improving capture of in vivo motility when simulated as a CRW. Furthermore, translational and turn speeds are inversely correlated, and the corresponding CRW simulation again improves capture of our in vivo data, albeit to a lesser extent. In contrast, Brownian motion poorly reflects our data. Lévy walk is competitive in capturing some aspects of neutrophil motility, but T cell directional persistence only, therein highlighting the importance of evaluating models against several motility metrics simultaneously. This we achieve through novel application of multi-objective optimization, wherein each model is independently implemented and then parameterized to identify optimal trade-offs in performance against each metric. The resultant Pareto fronts of optimal
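
    A minimal sketch of a correlated random walk with heterogeneous speeds, together with one of the metrics named above, the meandering index (displacement divided by path length); the persistence and speed values are hypothetical:

      import numpy as np

      rng = np.random.default_rng(0)

      def crw_3d(steps, mean_speed, persistence):
          """Correlated random walk: each step's direction is a blend of the
          previous direction and a fresh random unit vector; persistence in [0, 1)."""
          d = np.array([1.0, 0.0, 0.0])
          pos = np.zeros((steps + 1, 3))
          for t in range(steps):
              rand = rng.normal(size=3)
              rand /= np.linalg.norm(rand)
              d = persistence * d + (1 - persistence) * rand
              d /= np.linalg.norm(d)
              speed = max(0.0, rng.normal(mean_speed, 0.3 * mean_speed))  # heterogeneity
              pos[t + 1] = pos[t] + speed * d
          return pos

      track = crw_3d(steps=100, mean_speed=8.0, persistence=0.7)   # um/min, hypothetical
      path_len = np.sum(np.linalg.norm(np.diff(track, axis=0), axis=1))
      meander = np.linalg.norm(track[-1] - track[0]) / path_len    # meandering index
      print(f"meandering index: {meander:.2f}")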

  11. Computer model for characterizing, screening, and optimizing electrolyte systems

    SciTech Connect

    Gering, Kevin L.

    2015-06-15

    Electrolyte systems in contemporary batteries are tasked with operating under increasing performance requirements. All battery operation is in some way tied to the electrolyte and how it interacts with various regions within the cell environment. Given that the electrolyte plays a crucial role in battery performance and longevity, it is imperative that accurate, physics-based models be developed that will characterize key electrolyte properties while keeping pace with the increasing complexity of these liquid systems. Advanced models are needed since laboratory measurements require significant resources to carry out for even a modest experimental matrix. The Advanced Electrolyte Model (AEM) developed at the INL is a proven capability designed to explore molecular-to-macroscale level aspects of electrolyte behavior, and can be used to drastically reduce the time required to characterize and optimize electrolytes. Although it is applied most frequently to lithium-ion battery systems, it is general in its theory and can be used toward numerous other targets and intended applications. This capability is unique, powerful, relevant to present and future electrolyte development, and without peer. It redefines electrolyte modeling for highly-complex contemporary systems, wherein significant steps have been taken to capture the reality of electrolyte behavior in the electrochemical cell environment. This capability can have a very positive impact on accelerating domestic battery development to support aggressive vehicle and energy goals in the 21st century.

  12. Optimal vibration control of curved beams using distributed parameter models

    NASA Astrophysics Data System (ADS)

    Liu, Fushou; Jin, Dongping; Wen, Hao

    2016-12-01

    The design of a linear quadratic optimal controller using the spectral factorization method is studied for vibration suppression of curved beam structures modeled as distributed parameter systems. The equations of motion for active control of the in-plane vibration of a curved beam are first developed, considering its shear deformation and rotary inertia, and the state space model of the curved beam is then established directly from the partial differential equations of motion. The functional gains for the distributed parameter model of the curved beam are calculated by extending the spectral factorization method. Moreover, the response of the closed-loop control system is derived explicitly in the frequency domain. Finally, the suppression of vibration at the free end of a cantilevered curved beam by a point control moment is studied through numerical case studies, in which the benefit of the presented method is shown by comparison with a constant-gain velocity feedback control law, and the performance of the presented method in avoiding control spillover is demonstrated.

  13. Software for multimodal battlefield signal modeling and optimal sensor placement

    NASA Astrophysics Data System (ADS)

    Yamamoto, Kenneth K.; Vecherin, Sergey N.; Wilson, D. Keith; Borden, Christian T.; Bettencourt, Elizabeth; Pettit, Chris L.

    2012-05-01

    Effective use of passive and active sensors for surveillance, security, and intelligence must consider terrain and atmospheric effects on the sensor performance. Several years ago, U.S. Army ERDC undertook development of software for modeling environmental effects on target signatures, signal propagation, and battlefield sensors for many signal modalities (e.g., optical, acoustic, seismic, magnetic, radio-frequency, chemical, biological, and nuclear). Since its inception, the software, called Environmental Awareness for Sensor and Emitter Employment (EASEE), has matured and evolved significantly for simulating a broad spectrum of signal-transmission and sensing scenarios. The underlying software design involves a flexible, object-oriented approach to the various stages of signal modeling from emission through processing into inferences. A sensor placement algorithm has also been built in for optimizing sensor selections and placements based on specification of sensor supply limitations, coverage priorities, and wireless sensor communication requirements. Some recent and ongoing enhancements are described, including modeling of active sensing scenarios and signal reflections, directivity of signal emissions and sensors, improved handling of signal feature dependencies, extensions to realistically model additional signal modalities such as infrared and RF, and XML-based communication with other calculation and display engines.

  14. Optimization model for UV-Riboflavin corneal cross-linking

    NASA Astrophysics Data System (ADS)

    Schumacher, S.; Wernli, J.; Scherrer, S.; Bueehler, M.; Seiler, T.; Mrochen, M.

    2011-03-01

    Nowadays UV cross-linking is an established method for the treatment of keratectasia. Currently a standardized protocol is used for the cross-linking treatment. We now present a theoretical model which predicts the number of induced crosslinks in the corneal tissue as a function of the riboflavin concentration, the radiation intensity, the pre-treatment time, and the treatment time. The model is developed by merging the diffusion equation, the equation for the light distribution as a function of the absorbers in the tissue, and a rate equation for the polymerization process. A higher concentration of riboflavin solution as well as a higher irradiation intensity will increase the number of induced crosslinks. However, stress-strain experiments performed to support the model showed that higher riboflavin concentrations (> 0.125%) do not result in a further increase in stability of the corneal tissue. This is caused by the inhomogeneous distribution of induced crosslinks throughout the cornea due to the uneven absorption of the UV light. The new model offers the possibility to optimize the treatment individually for every patient depending on their corneal thickness, in terms of efficiency, safety, and treatment time.
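
    A minimal 1D sketch of the three merged ingredients, with every coefficient a hypothetical placeholder; it reproduces the qualitative point that uneven UV absorption front-loads the crosslink density:

      import numpy as np
      from scipy.special import erfc

      # 1D sketch: a riboflavin soak (diffusion) profile, Beer-Lambert attenuation
      # of the UV light, and a first-order rate for crosslink formation.
      z = np.linspace(0.0, 500e-6, 200)             # depth into the cornea (m)
      D, T_SOAK = 6.5e-11, 30 * 60                  # diffusivity (m^2/s), 30 min soak
      C0, I0 = 0.1, 3.0                             # surface conc. (%), irradiance
      EPS, K, T_UV = 2.0e4, 1.0e-3, 30 * 60         # absorption, rate, UV time

      c = C0 * erfc(z / (2.0 * np.sqrt(D * T_SOAK)))          # diffusion profile
      dz = z[1] - z[0]
      intensity = I0 * np.exp(-EPS * np.cumsum(c) * dz)       # Beer-Lambert decay
      crosslinks = K * intensity * c * T_UV                   # rate x treatment time

      print(f"front-to-back crosslink ratio: {crosslinks[0] / crosslinks[-1]:.1f}")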

  15. Performance Optimization of NEMO Oceanic Model at High Resolution

    NASA Astrophysics Data System (ADS)

    Epicoco, Italo; Mocavero, Silvia; Aloisio, Giovanni

    2014-05-01

    The NEMO oceanic model is based on the Navier-Stokes equations along with a nonlinear equation of state, which couples the two active tracers (temperature and salinity) to the fluid velocity. The code is written in Fortran 90 and parallelized using MPI. The resolution of the global ocean models used today for climate change studies limits the prediction accuracy. To overcome this limit, a new high-resolution global model, based on NEMO, simulating at 1/16° and 100 vertical levels has been developed at CMCC. The model is computational and memory intensive, so it requires many resources to be run and an optimization activity is needed. The strategy requires a preliminary analysis to highlight scalability bottlenecks. It has been performed on a Sandy Bridge architecture at CMCC. An efficiency of 48% on 7K cores (the maximum available) has been achieved. The analysis has also been carried out at routine level, so that the improvement actions could be designed for the entire code or for a single kernel. The analysis highlighted, for example, a loss of performance due to the routine used to implement the north fold algorithm (i.e., handling the points at the north pole of the 3-pole grid): an optimization of the routine implementation is needed. The folding is achieved considering only the last 4 rows at the top of the global domain and by applying a rotation pivoting on the point in the middle. During the folding, the point on the top left is updated with the value of the point on the bottom right, and so on. The current version of the parallel algorithm is based on the domain decomposition: each MPI process takes care of a block of points, and each process can update its points using values belonging to the symmetric process. In the current implementation, each received message is placed in a buffer with a number of elements equal to the total dimension of the global domain. Each process sweeps the entire buffer, but only a part of that computation is really useful for the

  16. Augmenting Parametric Optimal Ascent Trajectory Modeling with Graph Theory

    NASA Technical Reports Server (NTRS)

    Dees, Patrick D.; Zwack, Matthew R.; Edwards, Stephen; Steffens, Michael

    2016-01-01

    It has been well documented that decisions made in the early stages of Conceptual and Pre-Conceptual design commit up to 80% of total Life-Cycle Cost (LCC) while engineers know the least about the product they are designing [1]. Once within Preliminary and Detailed design however, making changes to the design becomes far more difficult to enact in both cost and schedule. Primarily this has been due to a lack of detailed data usually uncovered later during the Preliminary and Detailed design phases. In our current budget-constrained environment, making decisions within Conceptual and Pre-Conceptual design which minimize LCC while meeting requirements is paramount to a program's success. Within the arena of launch vehicle design, optimizing the ascent trajectory is critical for minimizing the costs present within such concerns as propellant, aerodynamic, aeroheating, and acceleration loads while meeting requirements such as payload delivered to a desired orbit. In order to optimize the vehicle design its constraints and requirements must be known, however as the design cycle proceeds it is all but inevitable that the conditions will change. Upon that change, the previously optimized trajectory may no longer be optimal, or meet design requirements. The current paradigm for adjusting to these updates is generating point solutions for every change in the design's requirements [2]. This can be a tedious, time-consuming task as changes in virtually any piece of a launch vehicle's design can have a disproportionately large effect on the ascent trajectory, as the solution space of the trajectory optimization problem is both non-linear and multimodal [3]. In addition, an industry standard tool, Program to Optimize Simulated Trajectories (POST), requires an expert analyst to produce simulated trajectories that are feasible and optimal [4]. In a previous publication the authors presented a method for combatting these challenges [5]. In order to bring more detailed information

  17. Optimizing Crawler4j using MapReduce Programming Model

    NASA Astrophysics Data System (ADS)

    Siddesh, G. M.; Suresh, Kavya; Madhuri, K. Y.; Nijagal, Madhushree; Rakshitha, B. R.; Srinivasa, K. G.

    2016-08-01

    The World Wide Web is a decentralized system consisting of a repository of information in the form of web pages. These web pages act as a source of information or data in the present analytics world. Web crawlers are used for extracting useful information from web pages for different purposes. Firstly, they are used in web search engines, where web pages are indexed to form a corpus of information and allow users to query the pages. Secondly, they are used for web archiving, where web pages are stored for later analysis. Thirdly, they can be used for web mining, where web pages are monitored for copyright purposes. The amount of information processed by a web crawler needs to be increased by using the capabilities of modern parallel processing technologies. In order to address the parallelism and throughput of crawling, this work proposes to optimize Crawler4j using the Hadoop MapReduce programming model by parallelizing the processing of large input data. Crawler4j is a web crawler that retrieves useful information about the pages that it visits. Crawler4j coupled with the data and computational parallelism of the Hadoop MapReduce programming model improves the throughput and accuracy of web crawling. The experimental results demonstrate that the proposed solution achieves significant improvements in performance and throughput. Hence the proposed approach carves out a new methodology for optimizing web crawling by achieving significant performance gains.
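
    As a sketch of the underlying idea, the map and reduce phases of a link-counting job over crawled pages can be simulated in a few lines (pure Python standing in for the Hadoop MapReduce flow; the page contents and the href-extraction regex are made up for illustration, and this is not Crawler4j's API):

      import re
      from collections import defaultdict

      # Toy crawled pages (in Hadoop these records would arrive from HDFS splits).
      pages = {
          "http://a.example": '<a href="http://b.example">b</a> <a href="http://c.example">c</a>',
          "http://b.example": '<a href="http://c.example">c</a>',
      }

      def mapper(url, html):
          # Emit (outlink, 1) for every link found on the page.
          for link in re.findall(r'href="([^"]+)"', html):
              yield link, 1

      def reducer(link, counts):
          # Aggregate per-link counts; in Hadoop this runs after shuffle/sort.
          return link, sum(counts)

      # Simulate the shuffle phase: group mapper output by key.
      groups = defaultdict(list)
      for url, html in pages.items():
          for key, value in mapper(url, html):
              groups[key].append(value)

      for link, counts in groups.items():
          print(reducer(link, counts))        # e.g. ('http://c.example', 2)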

  18. Modeling marine surface microplastic transport to assess optimal removal locations

    NASA Astrophysics Data System (ADS)

    Sherman, Peter; van Sebille, Erik

    2016-01-01

    Marine plastic pollution is an ever-increasing problem that demands immediate mitigation and reduction plans. Here, a model based on satellite-tracked buoy observations and scaled to a large data set of observations on microplastic from surface trawls was used to simulate the transport of plastics floating on the ocean surface from 2015 to 2025, with the goal to assess the optimal marine microplastic removal locations for two scenarios: removing the most surface microplastic and reducing the impact on ecosystems, using plankton growth as a proxy. The simulations show that the optimal removal locations are primarily located off the coast of China and in the Indonesian Archipelago for both scenarios. Our estimates show that 31% of the modeled microplastic mass can be removed by 2025 using 29 plastic collectors operating at a 45% capture efficiency from these locations, compared to only 17% when the 29 plastic collectors are moored in the North Pacific garbage patch, between Hawaii and California. The overlap of ocean surface microplastics and phytoplankton growth can be reduced by 46% at our proposed locations, while sinks in the North Pacific can only reduce the overlap by 14%. These results are an indication that oceanic plastic removal might be more effective in removing a greater microplastic mass and in reducing potential harm to marine life when closer to shore than inside the plastic accumulation zones in the centers of the gyres.

  19. A new mathematical model in space optimization: A case study

    NASA Astrophysics Data System (ADS)

    Abdullah, Kamilah; Kamis, Nor Hanimah; Sha'ari, Nor Shahida; Muhammad Halim, Nurul Suhada; Hashim, Syaril Naqiah

    2013-04-01

    Most higher education institutions provide an area known as a learning centre where students can study or hold group discussions. However, some learning centres are not provided with an optimal number of tables and seats to accommodate students sufficiently. This study proposes a new mathematical model for optimizing the number of tables and seats at Laman Najib, Faculty of Computer and Mathematical Sciences, Universiti Teknologi MARA (UiTM) Shah Alam. An improvement of space capacity, with the maximum number of students who can use Laman Najib at the same time, has been made by considering the type and size of tables that are appropriate for students' discussions. Our finding is compared with the result of the Simplex method of linear programming to ensure that the new model is valid and consistent with other existing approaches. In conclusion, we found that round tables with six seats provide the maximum number of students who can use Laman Najib for discussions or group study. Both methods are also practical as alternative approaches for solving other space optimization problems.
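
    A model of this kind reduces to a small linear program. The sketch below (scipy, with invented area and table-size figures rather than the paper's data) maximizes seated students subject to a floor-area budget, the same shape of problem the Simplex comparison addresses:

      from scipy.optimize import linprog

      # Decision variables: numbers of round tables (6 seats) and square
      # tables (4 seats). Hypothetical figures: a round table occupies 4 m^2,
      # a square one 3 m^2, and 120 m^2 of floor space is usable.
      seats = [-6, -4]                 # negated: linprog minimizes
      area = [[4, 3]]
      floor = [120]

      res = linprog(c=seats, A_ub=area, b_ub=floor,
                    bounds=[(0, None), (0, None)])
      print("tables (round, square):", res.x)
      print("max students seated   :", -res.fun)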

  20. Optimization of Forward Wave Modeling on Contemporary HPC Architectures

    SciTech Connect

    Krueger, Jens; Micikevicius, Paulius; Williams, Samuel

    2012-07-20

    Reverse Time Migration (RTM) is one of the main approaches in the seismic processing industry for imaging the subsurface structure of the Earth. While RTM provides qualitative advantages over its predecessors, it has a high computational cost warranting implementation on HPC architectures. We focus on three progressively more complex kernels extracted from RTM: for isotropic (ISO), vertical transverse isotropic (VTI), and tilted transverse isotropic (TTI) media. In this work, we examine performance optimization of forward wave modeling, which describes the computational kernels used in RTM, on emerging multi- and manycore processors, and introduce a novel common subexpression elimination optimization for TTI kernels. We compare attained performance and energy efficiency in both single-node and distributed-memory environments in order to satisfy industry's demands for fidelity, performance, and energy efficiency. Moreover, we discuss the interplay between architecture (chip and system) and optimizations of on-node computation, highlighting the importance of NUMA-aware approaches to MPI communication. Ultimately, our results show we can improve CPU energy efficiency by more than 10× on Magny Cours nodes, while acceleration via multiple GPUs can surpass the energy-efficient Intel Sandy Bridge by as much as 3.6×.
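
    The isotropic kernel at the base of that hierarchy is essentially a finite-difference stencil for the acoustic wave equation. A minimal NumPy sketch of one forward-modeling time loop is shown below (second-order in time and space for brevity; production RTM kernels use far higher-order stencils plus the optimizations discussed in the paper):

      import numpy as np

      nx, nz, nt = 200, 200, 50
      dx, dt, c = 10.0, 1e-3, 3000.0   # grid step [m], time step [s], velocity [m/s]
      prev = np.zeros((nz, nx))
      curr = np.zeros((nz, nx))
      curr[nz // 2, nx // 2] = 1.0     # impulsive source in the middle

      r2 = (c * dt / dx) ** 2          # CFL number squared (0.09, stable)
      for _ in range(nt):
          # Five-point Laplacian; np.roll gives periodic edges for brevity,
          # whereas real codes use absorbing boundaries.
          lap = (np.roll(curr, 1, 0) + np.roll(curr, -1, 0) +
                 np.roll(curr, 1, 1) + np.roll(curr, -1, 1) - 4.0 * curr)
          prev, curr = curr, 2.0 * curr - prev + r2 * lap   # leapfrog update

      print("wavefield energy:", float((curr ** 2).sum()))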

  1. Observer model optimization of a spectral mammography system

    NASA Astrophysics Data System (ADS)

    Fredenberg, Erik; Åslund, Magnus; Cederström, Björn; Lundqvist, Mats; Danielsson, Mats

    2010-04-01

    Spectral imaging is a method in medical x-ray imaging to extract information about the object constituents by the material-specific energy dependence of x-ray attenuation. Contrast-enhanced spectral imaging has been thoroughly investigated, but unenhanced imaging may be more useful because it comes as a bonus to the conventional non-energy-resolved absorption image at screening; there is no additional radiation dose and no need for contrast medium. We have used a previously developed theoretical framework and system model that include quantum and anatomical noise to characterize the performance of a photon-counting spectral mammography system with two energy bins for unenhanced imaging. The theoretical framework was validated with synthesized images. Optimal combination of the energy-resolved images for detecting large unenhanced tumors corresponded closely, but not exactly, to minimization of the anatomical noise, which is commonly referred to as energy subtraction. In that case, an ideal-observer detectability index could be improved close to 50% compared to absorption imaging. Optimization with respect to the signal-to-quantum-noise ratio, commonly referred to as energy weighting, deteriorated detectability. For small microcalcifications or tumors on uniform backgrounds, however, energy subtraction was suboptimal whereas energy weighting provided a minute improvement. The performance was largely independent of beam quality, detector energy resolution, and bin count fraction. It is clear that inclusion of anatomical noise and imaging task in spectral optimization may yield completely different results than an analysis based solely on quantum noise.
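
    To make the bin-combination step concrete, the sketch below forms the weighted difference of two energy-bin images and scans for the weight that minimizes anatomical background variance, i.e. the energy-subtraction criterion discussed above (synthetic NumPy data; the contrast scales and noise levels are invented, not the paper's system model):

      import numpy as np

      rng = np.random.default_rng(0)
      bg = rng.normal(0.0, 1.0, (256, 256))   # shared anatomical background

      # The two bins see the same anatomy with different (made-up) contrast
      # scales, plus independent quantum noise.
      lo = 1.00 * bg + rng.normal(0.0, 0.10, bg.shape)
      hi = 0.65 * bg + rng.normal(0.0, 0.12, bg.shape)

      # Scan the weight w in I = lo - w * hi and pick the w minimizing the
      # background (anatomical) variance, i.e. energy subtraction.
      ws = np.linspace(0.0, 3.0, 301)
      var = [np.var(lo - w * hi) for w in ws]
      w_best = ws[int(np.argmin(var))]
      print("anatomy-cancelling weight ~", round(w_best, 2))
      # Close to 1/0.65, pulled down slightly by the quantum noise terms.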

  2. Multi-level systems modeling and optimization for novel aircraft

    NASA Astrophysics Data System (ADS)

    Subramanian, Shreyas Vathul

    This research combines the disciplines of system-of-systems (SoS) modeling, platform-based design, optimization and evolving design spaces to achieve a novel capability for designing solutions to key aeronautical mission challenges. A central innovation in this approach is the confluence of multi-level modeling (from sub-systems to the aircraft system to aeronautical system-of-systems) in a way that coordinates the appropriate problem formulations at each level and enables parametric search in design libraries for solutions that satisfy level-specific objectives. The work here addresses the topic of SoS optimization and discusses problem formulation, solution strategy, the need for new algorithms that address special features of this problem type, and also demonstrates these concepts using two example application problems - a surveillance UAV swarm problem, and the design of noise optimal aircraft and approach procedures. This topic is critical since most new capabilities in aeronautics will be provided not just by a single air vehicle, but by aeronautical Systems of Systems (SoS). At the same time, many new aircraft concepts are pressing the boundaries of cyber-physical complexity through the myriad of dynamic and adaptive sub-systems that are rising up the TRL (Technology Readiness Level) scale. This compositional approach is envisioned to be active at three levels: validated sub-systems are integrated to form conceptual aircraft, which are further connected with others to perform a challenging mission capability at the SoS level. While these multiple levels represent layers of physical abstraction, each discipline is associated with tools of varying fidelity forming strata of 'analysis abstraction'. Further, the design (composition) will be guided by a suitable hierarchical complexity metric formulated for the management of complexity in both the problem (as part of the generative procedure and selection of fidelity level) and the product (i.e., is the mission

  3. Tool Steel Heat Treatment Optimization Using Neural Network Modeling

    NASA Astrophysics Data System (ADS)

    Podgornik, Bojan; Belič, Igor; Leskovšek, Vojteh; Godec, Matjaz

    2016-08-01

    Optimization of tool steel properties and the corresponding heat treatment is mainly based on a trial-and-error approach, which requires tremendous experimental work and resources. Therefore, there is a huge need for tools allowing prediction of the mechanical properties of tool steels as a function of composition and heat treatment process variables. The aim of the present work was to explore the potential and possibilities of artificial neural network-based modeling to select and optimize vacuum heat treatment conditions depending on the hot work tool steel composition and required properties. In the current case, training of a four-layer (8-20-20-2) feedforward neural network with an error-backpropagation training scheme was based on experimentally obtained tempering diagrams for ten different hot work tool steel compositions and at least two austenitizing temperatures. Results show that this type of modeling can be successfully used for detailed and multifunctional analysis of different influential parameters as well as to optimize the heat treatment process of hot work tool steels depending on composition. In terms of composition, V was found to be the most beneficial alloying element, increasing hardness and fracture toughness of hot work tool steel; Si, Mn, and Cr increase hardness but lead to reduced fracture toughness, while Mo has the opposite effect. The optimum concentration providing high KIc/HRC ratios would include 0.75 pct Si, 0.4 pct Mn, 5.1 pct Cr, 1.5 pct Mo, and 0.5 pct V, with the optimum heat treatment performed at lower austenitizing and intermediate tempering temperatures.
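
    A network of the quoted 8-20-20-2 shape is straightforward to reproduce. The sketch below uses scikit-learn on synthetic placeholder data (the tempering diagrams themselves are not given in the abstract), purely to illustrate the architecture and the two-output regression of hardness and fracture toughness:

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(0)
      # 8 inputs standing in for alloying contents and process variables.
      X = rng.uniform(0.0, 1.0, (200, 8))
      # 2 outputs standing in for hardness (HRC) and fracture toughness (KIc).
      y = np.column_stack([
          50 + 10 * X[:, 0] - 5 * X[:, 1],     # toy "hardness" response
          30 - 8 * X[:, 0] + 6 * X[:, 2],      # toy "toughness" response
      ])

      # 8-20-20-2: two hidden layers of 20 neurons between 8 inputs, 2 outputs.
      net = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000,
                         random_state=0)
      net.fit(X, y)
      print("predicted (HRC, KIc):", net.predict(X[:1]))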

  5. A canopy-type similarity model for wind farm optimization

    NASA Astrophysics Data System (ADS)

    Markfort, Corey D.; Zhang, Wei; Porté-Agel, Fernando

    2013-04-01

    The atmospheric boundary layer (ABL) flow through and over wind farms has been found to be similar to canopy-type flows, with characteristic flow development and shear penetration length scales (Markfort et al., 2012). Wind farms capture momentum from the ABL both at the leading edge and from above. We examine this further with an analytical canopy-type model. Within the flow development region, momentum is advected into the wind farm and wake turbulence draws excess momentum in from between turbines. This spatial heterogeneity of momentum within the wind farm is characterized by large dispersive momentum fluxes. Once the flow within the farm is developed, the area-averaged velocity profile exhibits a characteristic inflection point near the top of the wind farm, similar to that of canopy-type flows. The inflected velocity profile is associated with the presence of a dominant characteristic turbulence scale, which may be responsible for a significant portion of the vertical momentum flux. Prediction of this scale is useful for determining the amount of available power for harvesting. The new model is tested with results from wind tunnel experiments, which were conducted to characterize the turbulent flow in and above model wind farms in aligned and staggered configurations. The model is useful for representing wind farms in regional scale models, for the optimization of wind farms considering wind turbine spacing and layout configuration, and for assessing the impacts of upwind wind farms on nearby wind resources. Markfort CD, W Zhang and F Porté-Agel. 2012. Turbulent flow and scalar transport through and over aligned and staggered wind farms. Journal of Turbulence. 13(1) N33: 1-36. doi:10.1080/14685248.2012.709635.

  6. Optimal hemodynamic response model for functional near-infrared spectroscopy

    PubMed Central

    Kamran, Muhammad A.; Jeong, Myung Yung; Mannan, Malik M. N.

    2015-01-01

    Functional near-infrared spectroscopy (fNIRS) is an emerging non-invasive brain imaging technique that measures brain activity by means of near-infrared light of 650–950 nm wavelengths. The cortical hemodynamic response (HR) differs in its attributes at different brain regions and on repetition of trials, even if the experimental paradigm is kept exactly the same. Therefore, an HR model that can estimate such variations in the response is the objective of this research. The canonical hemodynamic response function (cHRF) is modeled by two Gamma functions with six unknown parameters (four to model the shape and the other two for scale and baseline, respectively). The measured signal is modeled as a linear combination of the HRF, baseline, and physiological noises (the amplitudes and frequencies of the physiological noises are supposed to be unknown). An objective function is developed as the square of the residuals with constraints on 12 free parameters. The formulated problem is solved by using an iterative optimization algorithm to estimate the unknown parameters in the model. Inter-subject variations in HRF and physiological noises have been estimated for better cortical functional maps. The accuracy of the algorithm has been verified using 10 real and 15 simulated data sets. Ten healthy subjects participated in the experiment and their HRFs for finger-tapping tasks have been estimated and analyzed. The statistical significance of the estimated activity strength parameters has been verified by statistical analysis (i.e., t-value > tcritical and p-value < 0.05). PMID:26136668
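
    For readers unfamiliar with the two-Gamma parameterization, the sketch below builds a canonical HRF and recovers its free parameters by least squares (scipy; the parameter values and noise level are illustrative defaults, not the authors' constrained 12-parameter formulation):

      import numpy as np
      from scipy.optimize import curve_fit
      from scipy.special import gamma as G

      def chrf(t, a1, b1, a2, b2, A, c):
          # Two-Gamma canonical HRF: peak minus undershoot, scaled plus baseline.
          g1 = t ** (a1 - 1) * b1 ** a1 * np.exp(-b1 * t) / G(a1)
          g2 = t ** (a2 - 1) * b2 ** a2 * np.exp(-b2 * t) / G(a2)
          return A * (g1 - g2 / 6.0) + c

      t = np.linspace(0.01, 30.0, 300)
      rng = np.random.default_rng(1)
      truth = chrf(t, 6.0, 1.0, 16.0, 1.0, 1.0, 0.0)
      noisy = truth + rng.normal(0.0, 0.002, t.size)  # crude noise proxy

      p0 = [5.0, 1.0, 15.0, 1.0, 0.8, 0.0]            # initial guess
      popt, _ = curve_fit(chrf, t, noisy, p0=p0)
      print("estimated shape/scale/baseline parameters:", np.round(popt, 2))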

  7. Modeling and optimization of energy storage system for microgrid

    NASA Astrophysics Data System (ADS)

    Qiu, Xin

    The vanadium redox flow battery (VRB) is well suited for microgrid and renewable energy applications. This thesis provides a practical analysis of the battery itself and its application in microgrid systems. The first paper analyzes VRB use in a microgrid system. The first part of the paper develops a reduced-order circuit model of the VRB and analyzes its experimental performance efficiency during deployment. Statistical methods and neural network approximation are used to estimate the system parameters. The second part of the paper addresses implementation issues of the VRB in a photovoltaic-based microgrid system. A new dc-dc converter was proposed to provide improved charging performance. The paper was published in IEEE Transactions on Smart Grid, Vol. 5, No. 4, July 2014. The second paper studies VRB use within a microgrid system from a practical perspective. A reduced-order circuit model of the VRB is introduced that includes the losses from the balance of plant, including system and environmental controls. The proposed model includes the circulation pumps and the HVAC system that regulates the environment of the VRB enclosure. In this paper, the VRB model is extended to include the ESS environmental controls, providing a more realistic efficiency profile. The paper was submitted to IEEE Transactions on Sustainable Energy. The third paper discusses the optimal control strategy when the VRB works with another type of battery in a microgrid system. The work in the first paper is extended: a high-level control strategy is developed to coordinate a lead-acid battery and a VRB with reinforcement learning. The paper is to be submitted to IEEE Transactions on Smart Grid.

  8. Optimizing nanomedicine pharmacokinetics using physiologically based pharmacokinetics modelling

    PubMed Central

    Moss, Darren Michael; Siccardi, Marco

    2014-01-01

    The delivery of therapeutic agents is characterized by numerous challenges including poor absorption, low penetration in target tissues and non-specific dissemination in organs, leading to toxicity or poor drug exposure. Several nanomedicine strategies have emerged as an advanced approach to enhance drug delivery and improve the treatment of several diseases. Numerous processes mediate the pharmacokinetics of nanoformulations, with the absorption, distribution, metabolism and elimination (ADME) being poorly understood and often differing substantially from traditional formulations. Understanding how nanoformulation composition and physicochemical properties influence drug distribution in the human body is of central importance when developing future treatment strategies. A helpful pharmacological tool to simulate the distribution of nanoformulations is represented by physiologically based pharmacokinetics (PBPK) modelling, which integrates system data describing a population of interest with drug/nanoparticle in vitro data through a mathematical description of ADME. The application of PBPK models for nanomedicine is in its infancy and characterized by several challenges. The integration of property–distribution relationships in PBPK models may benefit nanomedicine research, giving opportunities for innovative development of nanotechnologies. PBPK modelling has the potential to improve our understanding of the mechanisms underpinning nanoformulation disposition and allow for more rapid and accurate determination of their kinetics. This review provides an overview of the current knowledge of nanomedicine distribution and the use of PBPK modelling in the characterization of nanoformulations with optimal pharmacokinetics. PMID:24467481

  9. Pulsed pumping process optimization using a potential flow model

    NASA Astrophysics Data System (ADS)

    Tenney, C. M.; Lastoskie, C. M.

    2007-08-01

    A computational model is applied to the optimization of pulsed pumping systems for efficient in situ remediation of groundwater contaminants. In the pulsed pumping mode of operation, periodic rather than continuous pumping is used. During the pump-off or trapping phase, natural gradient flow transports contaminated groundwater into a treatment zone surrounding a line of injection and extraction wells that transect the contaminant plume. Prior to breakthrough of the contaminated water from the treatment zone, the wells are activated and the pump-on or treatment phase ensues, wherein extracted water is augmented to stimulate pollutant degradation and recirculated for a sufficient period of time to achieve mandated levels of contaminant removal. An important design consideration in pulsed pumping groundwater remediation systems is the pumping schedule adopted to best minimize operational costs for the well grid while still satisfying treatment requirements. Using an analytic two-dimensional potential flow model, optimal pumping frequencies and pumping event durations have been investigated for a set of model aquifer-well systems with different well spacings and well-line lengths, and varying aquifer physical properties. The results for homogeneous systems with greater than five wells and moderate to high pumping rates are reduced to a single, dimensionless correlation. Results for heterogeneous systems are presented graphically in terms of dimensionless parameters to serve as an efficient tool for initial design and selection of the pumping regimen best suited for pulsed pumping operation for a particular well configuration and extraction rate. In the absence of significant retardation or degradation during the pump-off phase, average pumping rates for pulsed operation were found to be greater than the continuous pumping rate required to prevent contaminant breakthrough.
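
    The flow model behind such an analysis can be sketched with complex potentials: a uniform natural-gradient flow superposed with line sinks standing for the extraction wells, as below (NumPy; the well strengths, spacing, and regional velocity are invented, and the snippet only probes the velocity field rather than reproducing the paper's dimensionless correlation):

      import numpy as np

      U = 1.0e-6                    # natural-gradient velocity [m/s] (invented)
      Q = 2.0e-4                    # extraction strength per well [m^2/s]
      wells = [complex(0.0, y) for y in (-20.0, 0.0, 20.0)]   # a 3-well line

      def velocity(z):
          # Derivative of the complex potential: uniform flow plus line sinks.
          dW = U + sum(-Q / (2.0 * np.pi * (z - zw)) for zw in wells)
          return np.conj(dW)        # gives vx + i*vy

      # Probe the midpoint between two wells: a negative vy means water there
      # is still drawn toward the well line, i.e. captured by the treatment zone.
      v = velocity(complex(0.0, 10.0))
      print("velocity between wells (vx, vy):", v.real, v.imag)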

  11. Metroplex Optimization Model Expansion and Analysis: The Airline Fleet, Route, and Schedule Optimization Model (AFRS-OM)

    NASA Technical Reports Server (NTRS)

    Sherry, Lance; Ferguson, John; Hoffman, Karla; Donohue, George; Beradino, Frank

    2012-01-01

    This report describes the Airline Fleet, Route, and Schedule Optimization Model (AFRS-OM), which is designed to provide insights into airline decision-making with regard to markets served, the schedule of flights on these markets, the type of aircraft assigned to each scheduled flight, load factors, airfares, and airline profits. The main inputs to the model are hedged fuel prices, airport capacity limits, and candidate markets. Embedded in the model are aircraft performance and associated cost factors, and willingness-to-pay (i.e. demand vs. airfare) curves. Case studies demonstrate the application of the model for analysis of the effects of increased capacity and changes in operating costs (e.g. fuel prices). Although there are differences between airports (due to differences in the magnitude of travel demand and sensitivity to airfare), the system is more sensitive to changes in fuel prices than to capacity. Further, the benefits of modernization in the form of increased capacity could be undermined by increases in hedged fuel prices.

  12. Modeling digital breast tomosynthesis imaging systems for optimization studies

    NASA Astrophysics Data System (ADS)

    Lau, Beverly Amy

    Digital breast tomosynthesis (DBT) is a new imaging modality for breast imaging. In tomosynthesis, multiple images of the compressed breast are acquired at different angles, and the projection view images are reconstructed to yield images of slices through the breast. One of the main problems to be addressed in the development of DBT is determining the optimal parameter settings to obtain images ideal for the detection of cancer. Since it would be unethical to irradiate women multiple times to explore potentially optimum geometries for tomosynthesis, it is ideal to use a computer simulation to generate projection images. Existing tomosynthesis models have modeled scatter and detector response without accounting for the oblique angles of incidence that tomosynthesis introduces. Moreover, these models frequently use geometry-specific physical factors measured from real systems, which severely limits the robustness of their algorithms for optimization. The goal of this dissertation was to design the framework for a computer simulation of tomosynthesis that would produce images that are sensitive to changes in acquisition parameters, so that an optimization study would be feasible. A computer physics simulation of the tomosynthesis system was developed. The x-ray source was modeled as a polychromatic spectrum based on published spectral data, and the inverse-square law was applied. Scatter was applied using a convolution method with angle-dependent scatter point spread functions (sPSFs), followed by scaling using an angle-dependent scatter-to-primary ratio (SPR). Monte Carlo simulations were used to generate sPSFs for a 5-cm breast with a 1-cm air gap. Detector effects were included through geometric propagation of the image onto layers of the detector, which were blurred using depth-dependent detector point-spread functions (PRFs). Depth-dependent PRFs were calculated every 5 microns through a 200-micron-thick CsI detector using Monte Carlo simulations. Electronic noise was added as Gaussian noise as a
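
    The scatter and noise stages of such a pipeline are compact to express. The sketch below convolves a toy primary image with a Gaussian stand-in for the angle-dependent sPSF, scales by an assumed SPR, and adds Gaussian electronic noise (scipy/NumPy; the kernel width, SPR, and noise level are placeholders, not the Monte Carlo-derived quantities of the dissertation):

      import numpy as np
      from scipy.ndimage import gaussian_filter

      rng = np.random.default_rng(0)
      primary = rng.uniform(0.4, 1.0, (128, 128))   # toy scatter-free image

      # Scatter estimate: convolve the primary with a broad kernel standing in
      # for the sPSF at one projection angle, then scale by the SPR.
      sPSF_sigma = 12.0            # placeholder kernel width [pixels]
      SPR = 0.4                    # placeholder scatter-to-primary ratio
      scatter = SPR * gaussian_filter(primary, sigma=sPSF_sigma)

      # Detected image: primary + scatter, plus Gaussian electronic noise.
      detected = primary + scatter + rng.normal(0.0, 0.01, primary.shape)
      print("mean detected signal:", round(float(detected.mean()), 3))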

  13. Computer model for characterizing, screening, and optimizing electrolyte systems

    2015-06-15

    Electrolyte systems in contemporary batteries are tasked with operating under increasing performance requirements. All battery operation is in some way tied to the electrolyte and how it interacts with various regions within the cell environment. Since the electrolyte plays a crucial role in battery performance and longevity, it is imperative that accurate, physics-based models be developed that will characterize key electrolyte properties while keeping pace with the increasing complexity of these liquid systems. Advanced models are needed since laboratory measurements require significant resources to carry out for even a modest experimental matrix. The Advanced Electrolyte Model (AEM) developed at the INL is a proven capability designed to explore molecular-to-macroscale level aspects of electrolyte behavior, and can be used to drastically reduce the time required to characterize and optimize electrolytes. Although it is applied most frequently to lithium-ion battery systems, it is general in its theory and can be used toward numerous other targets and intended applications. This capability is unique, powerful, relevant to present and future electrolyte development, and without peer. It redefines electrolyte modeling for highly-complex contemporary systems, wherein significant steps have been taken to capture the reality of electrolyte behavior in the electrochemical cell environment. This capability can have a very positive impact on accelerating domestic battery development to support aggressive vehicle and energy goals in the 21st century.

  14. Optimization modeling to maximize population access to comprehensive stroke centers

    PubMed Central

    Branas, Charles C.; Kasner, Scott E.; Wolff, Catherine; Williams, Justin C.; Albright, Karen C.; Carr, Brendan G.

    2015-01-01

    Objective: The location of comprehensive stroke centers (CSCs) is critical to ensuring rapid access to acute stroke therapies; we conducted a population-level virtual trial simulating change in access to CSCs using optimization modeling to selectively convert primary stroke centers (PSCs) to CSCs. Methods: Up to 20 certified PSCs per state were selected for conversion to maximize the population with 60-minute CSC access by ground and air. Access was compared across states based on region and the presence of state-level emergency medical service policies preferentially routing patients to stroke centers. Results: In 2010, there were 811 Joint Commission PSCs and 0 CSCs in the United States. Of the US population, 65.8% had 60-minute ground access to PSCs. After adding up to 20 optimally located CSCs per state, 63.1% of the US population had 60-minute ground access and 86.0% had 60-minute ground/air access to a CSC. Across states, median CSC access was 55.7% by ground (interquartile range 35.7%–71.5%) and 85.3% by ground/air (interquartile range 59.8%–92.1%). Ground access was lower in Stroke Belt states compared with non–Stroke Belt states (32.0% vs 58.6%, p = 0.02) and lower in states without emergency medical service routing policies (52.7% vs 68.3%, p = 0.04). Conclusion: Optimal system simulation can be used to develop efficient care systems that maximize accessibility. Under optimal conditions, a large proportion of the US population will be unable to access a CSC within 60 minutes. PMID:25740858
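
    The underlying selection problem is a maximal-covering location problem, for which a greedy heuristic is a common baseline. The sketch below (pure Python with invented coverage sets, not the authors' certified-PSC data or their exact algorithm) picks the conversions that add the most newly covered population at each step:

      # Candidate PSCs with the population units each would cover within
      # 60 minutes if converted to a CSC (all figures invented).
      coverage = {
          "PSC_A": {1, 2, 3, 4},
          "PSC_B": {3, 4, 5},
          "PSC_C": {6, 7},
          "PSC_D": {1, 6, 7, 8},
      }
      budget = 2        # conversions allowed (the paper uses up to 20 per state)

      covered, chosen = set(), []
      for _ in range(budget):
          # Greedy step: take the site adding the most not-yet-covered units.
          best = max(coverage, key=lambda s: len(coverage[s] - covered))
          chosen.append(best)
          covered |= coverage.pop(best)

      print("convert:", chosen, "-> covered units:", sorted(covered))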

  15. [Optimal models on sustainable management of oases ecosystem in southern margin of Taklamakan Desert].

    PubMed

    Li, X; Zhang, X; Wang, Y; Wu, Y

    2000-12-01

    On the basis of analyzing the distribution of water resources and the canal water utilization coefficient of oases in the southern margin of the Taklamakan Desert, observing the wind prevention efficiency of shelterbelts through a simulation experiment in a wind tunnel, and 15 years of research on the comprehensive control of desertified land in Cele Oasis, a series of optimal models on sustainable management of the oases ecosystem in the southern margin of the Taklamakan Desert were proposed, i.e., the optimal model of a "moderated oasis", the optimal model of shelterbelt structure, the optimal model of comprehensive control of desertified land, and the optimal model of crop planting structure.

  16. Modeling

    SciTech Connect

    Loth, E.; Tryggvason, G.; Tsuji, Y.; Elghobashi, S. E.; Crowe, Clayton T.; Berlemont, A.; Reeks, M.; Simonin, O.; Frank, Th; Onishi, Yasuo; Van Wachem, B.

    2005-09-01

    Slurry flows occur in many circumstances, including chemical manufacturing processes; pipeline transfer of coal, sand, and minerals; mud flows; and disposal of dredged materials. In this section we discuss slurry flow applications related to radioactive waste management. The Hanford tank waste solids and interstitial liquids will be mixed to form a slurry so it can be pumped out for retrieval and treatment. The waste is very complex chemically and physically. The ARIEL code is used to model the chemical interactions and fluid dynamics of the waste.

  17. An optimization model to agroindustrial sector in antioquia (Colombia, South America)

    NASA Astrophysics Data System (ADS)

    Fernandez, J.

    2015-06-01

    This paper develops a general optimization model for the flower industry, defined using discrete simulation and nonlinear optimization; the mathematical models have been solved using ProModel simulation tools and GAMS optimization. The paper defines the operations that constitute the production and marketing of the sector, presents statistically validated data taken directly from each operation through field work, and sets out the discrete simulation model of the operations and the optimization model of the entire industry chain. The model is solved with the tools described above, and the results are validated in a case study.

  18. Approximate Optimal Control as a Model for Motor Learning

    ERIC Educational Resources Information Center

    Berthier, Neil E.; Rosenstein, Michael T.; Barto, Andrew G.

    2005-01-01

    Current models of psychological development rely heavily on connectionist models that use supervised learning. These models adapt network weights when the network output does not match the target outputs computed by some agent. The authors present a model of motor learning in which the child uses exploration to discover appropriate ways of…

  19. Spatial optimization of watershed management practices for nitrogen load reduction using a modeling-optimization framework.

    PubMed

    Yang, Guoxiang; Best, Elly P H

    2015-09-15

    Best management practices (BMPs) can be used effectively to reduce nutrient loads transported from non-point sources to receiving water bodies. However, methodologies of BMP selection and placement in a cost-effective way are needed to assist watershed management planners and stakeholders. We developed a novel modeling-optimization framework that can be used to find cost-effective solutions of BMP placement to attain nutrient load reduction targets. This was accomplished by integrating a GIS-based BMP siting method, a WQM-TMDL-N modeling approach to estimate total nitrogen (TN) loading, and a multi-objective optimization algorithm. Wetland restoration and buffer strip implementation were the two BMP categories used to explore the performance of this framework, both differing greatly in complexity of spatial analysis for site identification. Minimizing TN load and BMP cost were the two objective functions for the optimization process. The performance of this framework was demonstrated in the Tippecanoe River watershed, Indiana, USA. Optimized scenario-based load reduction indicated that the wetland subset selected by the minimum scenario had the greatest N removal efficiency. Buffer strips were more effective for load removal than wetlands. The optimized solutions provided a range of trade-offs between the two objective functions for both BMPs. This framework can be expanded conveniently to a regional scale because the NHDPlus catchment serves as its spatial computational unit. The present study demonstrated the potential of this framework to find cost-effective solutions to meet a water quality target, such as a 20% TN load reduction, under different conditions. PMID:26188990
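
    One compact way to expose the cost-versus-load trade-off reported above is to enumerate small BMP portfolios and keep the nondominated ones. The sketch below does this for a handful of invented wetland and buffer options (placeholder removal and cost figures; the paper itself uses a multi-objective optimization algorithm rather than enumeration):

      from itertools import combinations

      # (name, TN removal [kg/yr], cost [$]) -- illustrative numbers only.
      bmps = [("wetland_1", 120, 50000), ("wetland_2", 90, 35000),
              ("buffer_1", 150, 20000), ("buffer_2", 110, 15000)]

      portfolios = []
      for r in range(1, len(bmps) + 1):
          for combo in combinations(bmps, r):
              removal = sum(b[1] for b in combo)
              cost = sum(b[2] for b in combo)
              portfolios.append((removal, cost, [b[0] for b in combo]))

      # Keep the Pareto set: no other portfolio removes at least as much
      # nitrogen for strictly less money (or more for the same money).
      pareto = [p for p in portfolios
                if not any(q[0] >= p[0] and q[1] <= p[1] and
                           (q[0] > p[0] or q[1] < p[1]) for q in portfolios)]
      for removal, cost, names in sorted(pareto):
          print(removal, "kg/yr", cost, "$", names)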

  1. Inverse modeling of FIB milling by dose profile optimization

    NASA Astrophysics Data System (ADS)

    Lindsey, S.; Waid, S.; Hobler, G.; Wanzenböck, H. D.; Bertagnolli, E.

    2014-12-01

    FIB technologies possess a unique ability to form topographies that are difficult or impossible to generate with binary etching through typical photo-lithography. The ability to arbitrarily vary the spatial dose distribution, and therefore the amount of milling, opens possibilities for the production of a wide range of functional structures with applications in biology, chemistry, and optics. In practice, however, the realization of these goals is made difficult by the angular dependence of the sputtering yield and by redeposition effects that vary as the topography evolves. An inverse modeling algorithm is presented that optimizes dose profiles, defined as the superposition of time-invariant pixel dose profiles (determined from the beam parameters and pixel dwell times). The response of the target to a set of pixel dwell times is modeled by numerical continuum simulations utilizing 1st and 2nd order sputtering and redeposition; the resulting surfaces are evaluated with respect to a target topography in an error minimization routine. Two algorithms for the parameterization of pixel dwell times are presented: a direct pixel dwell time method, and an abstracted method that uses a refineable piecewise linear cage function to generate pixel dwell times from a minimal number of parameters. The cage function method demonstrates great flexibility and efficiency compared to the direct fitting method, with performance enhancements exceeding ∼10× for medium to large simulation sets. Furthermore, the refineable nature of the cage function enables solutions to adapt to the desired target function. The optimization algorithm, although working with stationary dose profiles, is demonstrated to be applicable also outside the quasi-static approximation. Experimental data confirm the viability of the solutions for 5 × 7 μm deep lens-like structures defined by 90 pixel dwell times.
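
    Because the dose is a superposition of time-invariant per-pixel profiles, the inverse problem is linear in the dwell times at first order. The sketch below recovers nonnegative dwell times against a target depth profile (scipy; the Gaussian per-pixel milling response and the lens-like target are assumptions standing in for the paper's continuum simulation):

      import numpy as np
      from scipy.optimize import nnls

      x = np.linspace(-3.0, 3.0, 121)          # lateral position [um]
      pixels = np.linspace(-2.5, 2.5, 30)      # beam dwell positions [um]

      # First-order model: depth(x) = sum_j S(x, x_j) * t_j, with a Gaussian
      # per-pixel response standing in for the beam/sputter-yield profile.
      sigma = 0.4
      S = np.exp(-(x[:, None] - pixels[None, :]) ** 2 / (2.0 * sigma ** 2))

      target = 1.0 - (x / 3.0) ** 2            # lens-like target depth profile
      t, residual = nnls(S, target)            # dwell times must be >= 0

      print("max dwell weight:", round(float(t.max()), 3),
            " fit residual:", round(float(residual), 4))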

  2. Source term identification in atmospheric modelling via sparse optimization

    NASA Astrophysics Data System (ADS)

    Adam, Lukas; Branda, Martin; Hamburger, Thomas

    2015-04-01

    Inverse modelling plays an important role in identifying the amount of harmful substances released into the atmosphere during major incidents such as power plant accidents or volcano eruptions. Another possible application of inverse modelling lies in monitoring CO2 emission limits, where only observations at certain places are available and the task is to estimate the total releases at given locations. This gives rise to minimizing the discrepancy between the observations and the model predictions. There are two standard ways of solving such problems. In the first one, this discrepancy is regularized by adding additional terms. Such terms may include Tikhonov regularization, distance from a priori information, or a smoothing term. The resulting, usually quadratic, problem is then solved via standard optimization solvers. The second approach assumes that the error term has a (normal) distribution and makes use of Bayesian modelling to identify the source term. Instead of following the above-mentioned approaches, we utilize techniques from the field of compressive sensing. Such techniques look for the sparsest solution (the solution with the smallest number of nonzeros) of a linear system, where a maximal allowed error term may be added to this system. Even though this field is well developed with many possible solution techniques, most of them do not consider even the simplest constraints which are naturally present in atmospheric modelling. One such example is the nonnegativity of release amounts. We believe that the concept of a sparse solution is natural in both the problem of identifying the source location and that of the time process of the source release. In the first case, it is usually assumed that there are only a few release points and the task is to find them. In the second case, the time window is usually much longer than the duration of the actual release. In both cases, the optimal solution should contain a large number of zeros, giving rise to the
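
    With nonnegative releases, the sparsest-solution search reduces to a linear program, since the l1-norm of a nonnegative vector is simply its sum. The sketch below solves min sum(x) subject to |Ax - b| <= eps and x >= 0 with scipy (a random stand-in for the source-receptor matrix and a synthetic two-spike source, not real dispersion data):

      import numpy as np
      from scipy.optimize import linprog

      rng = np.random.default_rng(3)
      m, n = 20, 60                       # observations, candidate release terms
      A = rng.uniform(0.0, 1.0, (m, n))   # stand-in source-receptor matrix
      x_true = np.zeros(n)
      x_true[[7, 41]] = [2.0, 1.0]        # two actual releases
      b = A @ x_true
      eps = 1e-3

      # min sum(x)  s.t.  Ax - b <= eps,  -(Ax - b) <= eps,  x >= 0
      A_ub = np.vstack([A, -A])
      b_ub = np.concatenate([b + eps, eps - b])
      res = linprog(c=np.ones(n), A_ub=A_ub, b_ub=b_ub,
                    bounds=[(0, None)] * n)
      print("recovered nonzeros:", np.flatnonzero(res.x > 1e-4))  # ideally [7 41]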

  3. Multi-model Simulation for Optimal Control of Aeroacoustics.

    SciTech Connect

    Collis, Samuel Scott; Chen, Guoquan

    2005-05-01

    Flow-generated noise, especially rotorcraft noise, has been a serious concern for both commercial and military applications. A particularly important noise source for rotorcraft is Blade-Vortex-Interaction (BVI) noise, a high-amplitude, impulsive sound that often dominates other rotorcraft noise sources. Usually BVI noise is caused by the unsteady flow changes around various rotor blades due to interactions with vortices previously shed by the blades. A promising approach for reducing BVI noise is to use on-blade controls, such as suction/blowing, micro-flaps/jets, and smart structures. Because the design and implementation of experiments to evaluate such systems are very expensive, efficient computational tools coupled with optimal control systems are required to explore the relevant physics and evaluate the feasibility of using various micro-fluidic devices before committing to hardware. In this thesis the research is to formulate and implement efficient computational tools for the development and study of optimal control and design strategies for complex flow and acoustic systems, with emphasis on rotorcraft applications, especially the BVI noise control problem. The main purpose of aeroacoustic computations is to determine the sound intensity and directivity far away from the noise source. However, the computational cost of using a high-fidelity flow-physics model across the full domain is usually prohibitive, and it might also be less accurate because of numerical diffusion and other problems. Taking advantage of the multi-physics and multi-scale structure of this aeroacoustic problem, we develop a multi-model, multi-domain (near-field/far-field) method based on a discontinuous Galerkin discretization. In this approach the coupling of multiple domains and multiple models is achieved by weakly enforcing continuity of normal fluxes across a coupling surface. For the aeroacoustic control problem of interest, the adjoint equations that determine the sensitivity of the cost

  4. Managing and learning with multiple models: Objectives and optimization algorithms

    USGS Publications Warehouse

    Probert, William J. M.; Hauser, C.E.; McDonald-Madden, E.; Runge, M.C.; Baxter, P.W.J.; Possingham, H.P.

    2011-01-01

    The quality of environmental decisions should be gauged according to managers' objectives. Management objectives generally seek to maximize quantifiable measures of system benefit, for instance population growth rate. Reaching these goals often requires a certain degree of learning about the system. Learning can occur by using management action in combination with a monitoring system. Furthermore, actions can be chosen strategically to obtain specific kinds of information. Formal decision making tools can choose actions to favor such learning in two ways: implicitly via the optimization algorithm that is used when there is a management objective (for instance, when using adaptive management), or explicitly by quantifying knowledge and using it as the fundamental project objective, an approach new to conservation. This paper outlines three conservation project objectives - a pure management objective, a pure learning objective, and an objective that is a weighted mixture of these two. We use eight optimization algorithms to choose actions that meet project objectives and illustrate them in a simulated conservation project. The algorithms provide a taxonomy of decision making tools in conservation management when there is uncertainty surrounding competing models of system function. The algorithms build upon each other such that their differences are highlighted and practitioners may see where their decision making tools can be improved. © 2010 Elsevier Ltd.

  5. Optimal Control of Distributed Energy Resources using Model Predictive Control

    SciTech Connect

    Mayhorn, Ebony T.; Kalsi, Karanjit; Elizondo, Marcelo A.; Zhang, Wei; Lu, Shuai; Samaan, Nader A.; Butler-Purry, Karen

    2012-07-22

    In an isolated power system (rural microgrid), Distributed Energy Resources (DERs) such as renewable energy resources (wind, solar), energy storage, and demand response can be used to complement fossil-fueled generators. The uncertainty and variability due to high penetration of wind make reliable system operations and controls challenging. In this paper, an optimal control strategy is proposed to coordinate energy storage and diesel generators to maximize wind penetration while maintaining system economics and normal operation. The problem is formulated as a multi-objective optimization problem with the goals of minimizing fuel costs and changes in power output of diesel generators, minimizing costs associated with low battery life of energy storage, and maintaining system frequency at the nominal operating value. Two control modes are considered for controlling the energy storage to compensate either net load variability or wind variability. Model predictive control (MPC) is used to solve the aforementioned problem and the performance is compared to an open-loop look-ahead dispatch problem. Simulation studies using high and low wind profiles, as well as different MPC prediction horizons, demonstrate the efficacy of the closed-loop MPC in compensating for uncertainties in wind and demand.
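
    At its core, the closed-loop MPC repeatedly solves a short-horizon dispatch and applies only the first step. The minimal sketch below does this with one linear program per step (scipy; the cost coefficients, limits, and net-load forecast are invented, and the battery-life and frequency terms of the real formulation are omitted):

      import numpy as np
      from scipy.optimize import linprog

      H = 4                                    # prediction horizon [steps]
      net_load = np.array([3.0, 4.5, 2.0, 5.0, 3.5, 4.0, 2.5, 4.5])  # kW (toy)
      fuel_cost, batt_cost = 1.0, 0.2          # $/kWh equivalents (placeholders)
      p_max, s_max, e_max = 6.0, 2.0, 3.0      # diesel/storage power, energy budget

      setpoints = []
      for k in range(len(net_load) - H + 1):
          # Variables per window: [diesel_0..H-1, battery_0..H-1] (discharge only).
          c = np.concatenate([fuel_cost * np.ones(H), batt_cost * np.ones(H)])
          A_eq = np.hstack([np.eye(H), np.eye(H)])        # diesel + battery = load
          b_eq = net_load[k:k + H]
          A_ub = np.hstack([np.zeros(H), np.ones(H)])[None, :]  # storage energy
          res = linprog(c, A_ub=A_ub, b_ub=[e_max], A_eq=A_eq, b_eq=b_eq,
                        bounds=[(0, p_max)] * H + [(0, s_max)] * H)
          setpoints.append(res.x[0])           # apply only the first diesel step

      print("closed-loop diesel setpoints:", np.round(setpoints, 2))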

  6. Sandfish numerical model reveals optimal swimming in sand

    NASA Astrophysics Data System (ADS)

    Maladen, Ryan; Ding, Yang; Kamor, Adam; Slatton, Andrew; Goldman, Daniel

    2009-11-01

    Motivated by experiment and theory examining the undulatory swimming of the sandfish lizard within granular media (Maladen et al., Science, 325, 314, 2009), we study a numerical model of the sandfish as it swims within a validated soft-sphere Molecular Dynamics granular media simulation. We hypothesize that features of its morphology and undulatory kinematics, and of the granular media, contribute to effective sand swimming. Our results agree with a resistive force model of the sandfish and show that speed and transport cost are optimized at a ratio of wave amplitude to wavelength of 0.2, irrespective of media properties and preparation. At this ratio, the entry of the animal into the media is fastest at an angle of 20°, close to the angle of repose. We also find that the sandfish cross-sectional body shape reduces motion-induced buoyancy within the granular media and that wave efficiency is sensitive to body-particle friction but independent of particle-particle friction.

  7. A stochastic optimization model for shift scheduling in emergency departments.

    PubMed

    El-Rifai, Omar; Garaix, Thierry; Augusto, Vincent; Xie, Xiaolan

    2015-09-01

    Excessive waiting time in Emergency Departments (ED) can be both a cause of frustration and, more importantly, a health concern for patients. Waiting time arises when the demand for work goes beyond the facility's service capacity. ED service capacity mainly depends on human resources and on beds available for patients. In this paper, we focus on human resources organization in an ED and seek the best balance between service quality and working conditions. More specifically, we address the personnel scheduling problem in order to optimize the shift distribution among employees and minimize the total expected patients' waiting time. The problem is also characterized by a multi-stage re-entrant service process. With an appropriate approximation of patients' waiting times, we first propose a stochastic mixed-integer programming model that is solved by a sample average approximation (SAA) approach. The resulting personnel schedules are then evaluated using a discrete-event simulation model. Numerical experiments are then performed with data from a French hospital to compare different personnel scheduling strategies.
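
    The sample average approximation step can be illustrated compactly: draw demand scenarios once, then pick the staffing level whose sampled average cost is lowest. The sketch below uses a deliberately crude waiting proxy (pure Python/NumPy; the arrival rate, costs, and service capacity are invented rather than taken from the hospital data, and the real model is a multi-stage mixed-integer program):

      import numpy as np

      rng = np.random.default_rng(7)
      scenarios = rng.poisson(lam=30, size=500)   # sampled patients per shift

      wage, wait_penalty, capacity = 100.0, 40.0, 8.0   # toy parameters

      def sampled_cost(servers):
          # Waiting proxy: demand beyond capacity, averaged over the sample.
          backlog = np.maximum(scenarios - servers * capacity, 0)
          return wage * servers + wait_penalty * backlog.mean()

      # SAA: optimize the sampled objective over candidate staffing levels.
      best = min(range(1, 10), key=sampled_cost)
      print("staff per shift:", best,
            " estimated cost:", round(sampled_cost(best), 1))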

  8. Calibration Modeling Methodology to Optimize Performance for Low Range Applications

    NASA Technical Reports Server (NTRS)

    McCollum, Raymond A.; Commo, Sean A.; Parker, Peter A.

    2010-01-01

    Calibration is a vital process in characterizing the performance of an instrument in an application environment and seeks to obtain acceptable accuracy over the entire design range. Often, project requirements specify a maximum total measurement uncertainty, expressed as a percent of full-scale. However in some applications, we seek to obtain enhanced performance at the low range, therefore expressing the accuracy as a percent of reading should be considered as a modeling strategy. For example, it is common to desire to use a force balance in multiple facilities or regimes, often well below its designed full-scale capacity. This paper presents a general statistical methodology for optimizing calibration mathematical models based on a percent of reading accuracy requirement, which has broad application in all types of transducer applications where low range performance is required. A case study illustrates the proposed methodology for the Mars Entry Atmospheric Data System that employs seven strain-gage based pressure transducers mounted on the heatshield of the Mars Science Laboratory mission.
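
    One way to realize a percent-of-reading criterion is to weight the calibration residuals inversely by the reading, so that relative rather than absolute errors are minimized. The sketch below contrasts the two fits on synthetic transducer data (NumPy; the sensor response and noise are invented, not MEADS data):

      import numpy as np

      rng = np.random.default_rng(2)
      applied = np.linspace(5.0, 100.0, 40)                      # reference loads
      reading = 1.02 * applied + 0.5 + rng.normal(0.0, 0.3, 40)  # toy transducer

      # Ordinary fit minimizes absolute residuals (a full-scale viewpoint).
      fs_fit = np.polyfit(reading, applied, 1)

      # np.polyfit weights multiply the residuals, so w = 1/reading turns the
      # objective into relative error: a percent-of-reading criterion.
      pr_fit = np.polyfit(reading, applied, 1, w=1.0 / reading)

      for name, coef in (("full-scale", fs_fit), ("pct-of-reading", pr_fit)):
          err = np.polyval(coef, reading) - applied
          print(name, "max |error|/applied:",
                round(float(np.max(np.abs(err / applied))), 4))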

  9. Optimal control of an asymptotic model of flow separation

    NASA Astrophysics Data System (ADS)

    Qadri, Ubaid; Schmid, Peter; LFC-UK Team

    2015-11-01

    In the presence of surface imperfections, the boundary layer developing over an aircraft wing can separate and reattach, leading to a small separation bubble. We are interested in developing a low-order model that can be used to control the onset of separation at high Reynolds numbers typical of aircraft flight. In contrast to previous studies, we use a high Reynolds number asymptotic description of the Navier-Stokes equations to describe the motion of the fluid. We obtain a steady solution to the nonlinear triple-deck equations for the separated flow over a small bump at high Reynolds numbers. We derive for the first time the adjoint of the nonlinear triple-deck equations and use it to study optimal control of the separated flow. We calculate the sensitivity of the properties of the separation bubble to local base flow modifications and steady forcing. We assess the validity of using this simplified asymptotic model by comparing our results with those obtained using the full Navier-Stokes equations.

  10. 3D modeling and optimization of the ITER ICRH antenna

    NASA Astrophysics Data System (ADS)

    Louche, F.; Dumortier, P.; Durodié, F.; Messiaen, A.; Maggiora, R.; Milanesio, D.

    2011-12-01

    The prediction of the coupling properties of the ITER ICRH antenna necessitates the accurate evaluation of the resistance and reactance matrices. The latter are mostly dependent on the geometry of the array and therefore a model as accurate as possible is needed to precisely compute these matrices. Furthermore simulations have so far neglected the poloidal and toroidal profile of the plasma, and it is expected that the loading by individual straps will vary significantly due to varying strap-plasma distance. To take this curvature into account, some modifications of the alignment of the straps with respect to the toroidal direction are proposed. It is shown with CST Microwave Studio® [1] that considering two segments in the toroidal direction, i.e. a "V-shaped" toroidal antenna, is sufficient. A new CATIA model including this segmentation has been drawn and imported into both MWS and TOPICA [2] codes. Simulations show a good agreement of the impedance matrices in vacuum. Various modifications of the geometry are proposed in order to further optimize the coupling. In particular we study the effect of the strap box parameters and the recess of the vertical septa.

  11. Modelling Optimal Control of Cholera in Communities Linked by Migration.

    PubMed

    Njagarah, J B H; Nyabadza, F

    2015-01-01

    A mathematical model for the dynamics of cholera transmission with permissible controls between two connected communities is developed and analysed. The dynamics of the disease in the adjacent communities are assumed to be similar, with the main differences only reflected in the transmission and disease related parameters. This assumption is based on the fact that adjacent communities often have different living conditions and movement is inclined toward the community with better living conditions. Community specific reproduction numbers are given assuming movement of those susceptible, infected, and recovered, between communities. We carry out sensitivity analysis of the model parameters using the Latin Hypercube Sampling scheme to ascertain the degree of effect the parameters and controls have on progression of the infection. Using principles from optimal control theory, a temporal relationship between the distribution of controls and severity of the infection is ascertained. Our results indicate that implementation of controls such as proper hygiene, sanitation, and vaccination across both affected communities is likely to annihilate the infection within half the time it would take through self-limitation. In addition, although an infection may still break out in the presence of controls, it may be up to 8 times less devastating when compared with the case when no controls are in place. PMID:26246850
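
    The two-community structure analyzed above can be sketched as a coupled patch model. The code below integrates a heavily simplified version (scipy; the transmission, recovery, and migration rates are invented, the bacterial compartment of cholera models is dropped, and no controls are applied, so this only illustrates the coupled-patch mechanics):

      import numpy as np
      from scipy.integrate import solve_ivp

      beta = np.array([0.30, 0.45])    # per-patch transmission rates (invented)
      gamma = 0.10                     # recovery rate
      m = np.array([[-0.02, 0.01],     # migration matrix: columns sum to zero,
                    [0.02, -0.01]])    # so total population is conserved

      def rhs(t, y):
          S, I, R = y[:2], y[2:4], y[4:]
          new_inf = beta * S * I / (S + I + R)
          dS = -new_inf + m @ S
          dI = new_inf - gamma * I + m @ I
          dR = gamma * I + m @ R
          return np.concatenate([dS, dI, dR])

      y0 = [990.0, 995.0, 10.0, 5.0, 0.0, 0.0]   # (S1, S2, I1, I2, R1, R2)
      sol = solve_ivp(rhs, (0.0, 120.0), y0)
      print("final infected per patch:", np.round(sol.y[2:4, -1], 1))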

  12. Essays on Applied Resource Economics Using Bioeconomic Optimization Models

    NASA Astrophysics Data System (ADS)

    Affuso, Ermanno

    With rising demographic growth, there is increasing interest in analytical studies that assess alternative policies to provide an optimal allocation of scarce natural resources while ensuring environmental sustainability. This dissertation consists of three essays in applied resource economics that are methodologically interconnected within the agricultural production sector. The first chapter examines the sustainability of biofuels by simulating and evaluating an agricultural voluntary program that aims to increase land use efficiency in the production of first-generation biofuels in the state of Alabama. The results show that participatory decisions may increase the net energy value of biofuels by 208% and reduce emissions by 26%, significantly contributing to the state energy goals. The second chapter tests the hypothesis of overuse of fertilizers and pesticides in U.S. peanut farming with respect to other inputs and addresses genetic research to reduce the use of the most overused chemical input. The findings suggest that peanut producers overuse fungicide with respect to any other input and that fungi-resistant genetically engineered peanuts may increase producer welfare by up to 36.2%. The third chapter implements a bioeconomic model, consisting of a biophysical model and a stochastic dynamic recursive model, that is used to measure the potential economic and environmental welfare of cotton farmers derived from a rotation scheme that uses peanut as a complementary crop. The results show that the rotation scenario would lower farming costs by 14% due to nitrogen credits from prior peanut land use and reduce non-point source pollution from nitrogen runoff by 6.13% compared to continuous cotton farming.

  13. A Geospatial Model for Remedial Design Optimization and Performance Evaluation

    SciTech Connect

    Madrid, V M; Demir, Z; Gregory, S; Valett, J; Halden, R U

    2002-02-19

    Soil and ground water remediation projects require collection and interpretation of large amounts of spatial data. Two-dimensional (2D) mapping techniques are often inadequate for characterizing complex subsurface conditions at contaminated sites. To interpret data from these sites, we developed a methodology that allows integration of multiple, three-dimensional (3D) data sets for spatial analysis. This methodology was applied to the Department of Energy (DOE) Building 834 Operable Unit at Lawrence Livermore National Laboratory Site 300, in central California. This site is contaminated with a non-aqueous phase liquid (NAPL) mixture consisting of trichloroethene (TCE) and tetrakis (2-ethylbutoxy) silane (TKEBS). In the 1960s and 1970s, releases of this heat-exchange fluid to the environment resulted in TCE concentrations up to 970 mg/kg in soil and dissolved-phase concentrations approaching the solubility limit in a shallow, perched water-bearing zone. A geospatial model was developed using site hydrogeological data, and monitoring data for volatile organic compounds (VOCs) and biogeochemical parameters. The model was used to characterize the distribution of contamination in different geologic media, and to track changes in subsurface contaminant mass related to treatment facility operation and natural attenuation processes. Natural attenuation occurs mainly as microbial reductive dechlorination of TCE which is dependent on the presence of TKEBS, whose fermentation provides the hydrogen required for microbial reductive dechlorination of VOCs. Output of the geospatial model shows that soil vapor extraction (SVE) is incompatible with anaerobic VOC transformation, presumably due to temporary microbial inhibition caused by oxygen influx into the subsurface. Geospatial analysis of monitoring data collected over a three-year period allowed for generation of representative monthly VOC plume maps and dissolved-phase mass estimates. The latter information proved to be

  14. A model based technique for the design of flight directors. [optimal control models

    NASA Technical Reports Server (NTRS)

    Levison, W. H.

    1973-01-01

    A new technique for designing flight directors is discussed. This technique uses the optimal-control pilot/vehicle model to determine the appropriate control strategy. The dynamics of this control strategy are then incorporated into the director control laws, thereby enabling the pilot to operate at a significantly lower workload. A preliminary design of a control director for maintaining a STOL vehicle on the approach path in the presence of random air turbulence is evaluated. By selecting model parameters in terms of allowable path deviations and pilot workload levels, a set of director laws is achieved which allows improved system performance at reduced workload levels. The pilot acts essentially as a proportional controller with regard to the director signals, and control motions are compatible with those appropriate to status-only displays.

  15. Hydro-economic Modeling: Reducing the Gap between Large Scale Simulation and Optimization Models

    NASA Astrophysics Data System (ADS)

    Forni, L.; Medellin-Azuara, J.; Purkey, D.; Joyce, B. A.; Sieber, J.; Howitt, R.

    2012-12-01

    The integration of hydrological and socio-economic components into hydro-economic models has become essential for water resources policy and planning analysis. In this study we integrate the economic value of water in irrigated agricultural production using SWAP (a StateWide Agricultural Production Model for California) and WEAP (Water Evaluation and Planning System), a climate-driven hydrological model. The integration of the models is performed using a step function approximation of water demand curves from SWAP, and by relating the demand tranches to the priority scheme in WEAP. To do so, a modified version of SWAP, called SWEAP, was developed that has the Planning Area delimitations of WEAP, a Maximum Entropy Model to estimate evenly sized steps (tranches) of derived water demand functions, and the translation of water tranches into crop land. In addition, a modified version of WEAP, called ECONWEAP, was created with minor structural changes for the incorporation of land decisions from SWEAP and a series of iterations run via an external VBA script. This paper shows the validity of this integration by comparing revenues from WEAP versus ECONWEAP as well as by assessing the tranche approximation. Results show a significant increase in the resulting agricultural revenues for our case study in California's Central Valley using ECONWEAP while maintaining the same hydrology and regional water flows. These results highlight the gains from allocating water based on its economic value compared to priority-based water allocation systems. Furthermore, this work shows the potential of integrating optimization and simulation-based hydrologic models like ECONWEAP.

  16. Optimal weighted combinatorial forecasting model of QT dispersion of ECGs in Chinese adults

    NASA Astrophysics Data System (ADS)

    Wen, Zhang; Miao, Ge; Xinlei, Liu; Minyi, Cen

    2016-07-01

    This study aims to provide a scientific basis for unifying the reference value standard of QT dispersion of ECGs in Chinese adults. Three predictive models, a regression model, a principal component model, and an artificial neural network model, are combined to establish the optimal weighted combination model. The optimal weighted combination model and the single models are verified and compared. The optimal weighted combinatorial model reduces the prediction risk of a single model and improves prediction precision. The geographical distribution of the reference values of Chinese adults' QT dispersion was mapped precisely using kriging methods. Once the geographical factors of a particular area are obtained, the reference value of QT dispersion of Chinese adults in that area can be estimated with the optimal weighted combinatorial model, and the reference value anywhere in China can be read from the geographical distribution map as well.
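
    One common way to fit such combination weights, assuming they are chosen to minimize in-sample squared forecast error and to sum to one, is to solve the corresponding KKT linear system; the forecasts below are synthetic stand-ins for the three models:

        import numpy as np

        rng = np.random.default_rng(0)
        y = rng.normal(50.0, 5.0, size=40)  # synthetic "observed" QT dispersion
        # Column-stacked forecasts from three imperfect single models.
        F = np.column_stack([y + rng.normal(0, s, 40) for s in (1.0, 2.0, 3.0)])

        # Minimize ||F w - y||^2 subject to sum(w) = 1 via the KKT system.
        n = F.shape[1]
        A = np.zeros((n + 1, n + 1))
        A[:n, :n] = 2.0 * F.T @ F
        A[:n, n] = 1.0
        A[n, :n] = 1.0
        b = np.concatenate([2.0 * F.T @ y, [1.0]])
        w = np.linalg.solve(A, b)[:n]
        print("combination weights:", np.round(w, 3), "sum:", w.sum())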

  17. Androgyny and Attachment Security: Two Related Models of Optimal Personality.

    ERIC Educational Resources Information Center

    Shaver, Phillip R.; And Others

    1996-01-01

    Three studies explore similarities between attachment style typologies and sex role typologies. Both are defined by pairs of dimensions: self model and other model (attachment styles); masculinity, or agency, and femininity, or communion (sex role orientations). Discusses results. (KW)

  18. Heterogeneous Nuclear Reactor Models for Optimal Xenon Control.

    NASA Astrophysics Data System (ADS)

    Gondal, Ishtiaq Ahmad

    Nuclear reactors are generally modeled as homogeneous mixtures of fuel, control, and other materials while in reality they are heterogeneous-homogeneous configurations comprised of fuel and control rods along with other materials. Similarly, for space-time studies of a nuclear reactor, homogeneous, usually one-group diffusion theory, models are used, and the system equations are solved by either nodal or modal expansion approximations. Study of xenon-induced problems has also been carried out using similar models and with the help of dynamic programming or classical calculus of variations or the minimum principle. In this study a thermal nuclear reactor is modeled as a two-dimensional lattice of fuel and control rods placed in an infinite moderator in plane geometry. The two-group diffusion theory approximation is used for neutron transport. Space-time neutron balance equations are written for two groups and reduced to one space-time algebraic equation by using the two-dimensional Fourier transform. This equation is written at all fuel and control rod locations. Iodine-xenon and promethium-samarium dynamic equations are also written at fuel rod locations only. These equations are then linearized about an equilibrium point which is determined from the steady-state form of the original nonlinear system equations. After studying poisonless criticality, with and without control, and the stability of the open-loop system and after checking its controllability, a performance criterion is defined for the xenon-induced spatial flux oscillation problem in the form of a functional to be minimized. Linear-quadratic optimal control theory is then applied to solve the problem. To perform a variety of different additional useful studies, this formulation has potential for various extensions and variations; for example, different geometry of the problem, with possible extension to three dimensions, heterogeneous-homogeneous formulation to include, for example, homogeneously
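
    As a minimal sketch of the linear-quadratic step, assuming a hypothetical two-state linearization rather than the thesis' actual Fourier-transformed equations, the optimal feedback gain follows from the continuous algebraic Riccati equation:

        import numpy as np
        from scipy.linalg import solve_continuous_are

        # Hypothetical 2-state linearized xenon/flux dynamics (illustrative numbers).
        A = np.array([[-0.10, 0.05],
                      [0.02, -0.20]])
        B = np.array([[0.0],
                      [1.0]])
        Q = np.diag([10.0, 1.0])  # penalize flux-oscillation states
        R = np.array([[1.0]])     # penalize control-rod effort

        P = solve_continuous_are(A, B, Q, R)
        K = np.linalg.solve(R, B.T @ P)  # optimal feedback: u = -K x
        print("feedback gain K =", K)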

  19. Pareto optimal calibration of highly nonlinear reactive transport groundwater models using particle swarm optimization

    NASA Astrophysics Data System (ADS)

    Siade, A. J.; Prommer, H.; Welter, D.

    2014-12-01

    Groundwater management and remediation requires the implementation of numerical models in order to evaluate the potential anthropogenic impacts on aquifer systems. In many situations, the numerical model must be able to simulate not only groundwater flow and transport but also geochemical and biological processes. Each process being simulated carries with it a set of parameters that must be identified, along with differing potential sources of model-structure error. Various data types are often collected in the field and then used to calibrate the numerical model; however, these data types can represent very different processes and can subsequently be sensitive to the model parameters in extremely complex ways. Therefore, developing an appropriate weighting strategy to address the contributions of each data type to the overall least-squares objective function is not straightforward. This is further compounded by the presence of potential sources of model-structure errors that manifest themselves differently for each observation data type. Finally, reactive transport models are highly nonlinear, which can lead to convergence failure for algorithms operating on the assumption of local linearity. In this study, we propose a variation of the popular particle swarm optimization algorithm to address trade-offs associated with the calibration of one data type over another. This method removes the need to specify weights between observation groups and instead produces a multi-dimensional Pareto front that illustrates the trade-offs between data types. We use the PEST++ run manager, along with the standard PEST input/output structure, to implement parallel programming across multiple desktop computers using TCP/IP communications. This allows for very large swarms of particles without the need of a supercomputing facility. The method was applied to a case study in which modeling was used to gain insight into the mobilization of arsenic at a deepwell injection site.
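
    For orientation, the sketch below implements the standard single-objective PSO update on a toy misfit function; the study's variant extends this idea to trace out a multi-dimensional Pareto front across observation groups:

        import numpy as np

        def pso(f, lo, hi, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
            rng = np.random.default_rng(seed)
            d = len(lo)
            x = rng.uniform(lo, hi, (n_particles, d))
            v = np.zeros((n_particles, d))
            pbest, pval = x.copy(), np.array([f(p) for p in x])
            g = pbest[pval.argmin()].copy()  # global best
            for _ in range(iters):
                r1, r2 = rng.random((2, n_particles, d))
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
                x = np.clip(x + v, lo, hi)
                val = np.array([f(p) for p in x])
                better = val < pval
                pbest[better], pval[better] = x[better], val[better]
                g = pbest[pval.argmin()].copy()
            return g, pval.min()

        # Toy calibration misfit standing in for the reactive transport model.
        target = np.array([1.2, -0.5])
        best, fbest = pso(lambda p: np.sum((p - target) ** 2),
                          lo=np.array([-5.0, -5.0]), hi=np.array([5.0, 5.0]))
        print(best, fbest)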

  20. BDO-RFQ Program Complex of Modelling and Optimization of Charged Particle Dynamics

    NASA Astrophysics Data System (ADS)

    Ovsyannikov, D. A.; Ovsyannikov, A. D.; Antropov, I. V.; Kozynchenko, V. A.

    2016-09-01

    The article describes the BDO Code program complex used for modelling and optimization of charged particle dynamics, taking into account particle interaction in RFQ accelerating structures. The structure of the program complex and its functionality are described; mathematical models of charged particle dynamics, interaction models, and methods of optimization are given.

  1. Model optimization of orthotropic distributed-mode loudspeaker using attached masses.

    PubMed

    Lu, Guochao; Shen, Yong

    2009-11-01

    The orthotropic model of the plate is established and a genetic simulated annealing algorithm is developed for optimization of the mode distribution of the orthotropic plate. The experimental results indicate that the orthotropic model simulates the real plate better. Optimization aimed at an equal distribution of the modes in the orthotropic model is then performed to improve the corresponding sound pressure responses.

  2. Decision Models for Determining the Optimal Life Test Sampling Plans

    NASA Astrophysics Data System (ADS)

    Nechval, Nicholas A.; Nechval, Konstantin N.; Purgailis, Maris; Berzins, Gundars; Strelchonok, Vladimir F.

    2010-11-01

    A life test sampling plan is a technique which consists of sampling, inspection, and decision making in determining the acceptance or rejection of a batch of products by experiments for examining the continuous usage time of the products. In life testing studies, the lifetime is usually assumed to be distributed as either a one-parameter exponential distribution, or a two-parameter Weibull distribution with the assumption that the shape parameter is known. Such oversimplified assumptions can facilitate the follow-up analyses, but may overlook the fact that the lifetime distribution can significantly affect the estimation of the failure rate of a product. Moreover, sampling costs, inspection costs, warranty costs, and rejection costs are all essential, and ought to be considered in choosing an appropriate sampling plan. The choice of an appropriate life test sampling plan is a crucial decision problem because a good plan not only helps producers save testing time and reduce testing cost, but can also positively affect the image of the product and thus attract more consumers to buy it. This paper develops frequentist (non-Bayesian) decision models for determining the optimal life test sampling plans with an aim of cost minimization by identifying the appropriate number of product failures in a sample that should be used as a threshold in judging the rejection of a batch. The two-parameter exponential and Weibull distributions with two unknown parameters are assumed to be appropriate for modelling the lifetime of a product. A practical numerical application is employed to demonstrate the proposed approach.

  3. Optimization of GM(1,1) power model

    NASA Astrophysics Data System (ADS)

    Luo, Dang; Sun, Yu-ling; Song, Bo

    2013-10-01

    The GM(1,1) power model is an extension of the traditional GM(1,1) model and the Grey Verhulst model. Compared with the traditional models, the GM(1,1) power model has the following advantage: the power exponent that best matches the actual data values can be found by a suitable search technique, so the GM(1,1) power model can reflect nonlinear features of the data and simulate and forecast with high accuracy. Determining the best power exponent during the modelling process is therefore essential. In this paper, noting that the whitening equation of the GM(1,1) power model is a Bernoulli equation, we transform it by variable substitution into the linear form of the GM(1,1) whitening equation, construct the corresponding grey differential equation to establish the GM(1,1) power model, and solve for its parameters with a pattern search method. Finally, we illustrate the effectiveness of the new method with the example of simulating and forecasting the promotion rates from senior secondary schools to higher education in China.
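
    For reference, a minimal sketch of the basic GM(1,1) recipe (accumulate, fit the grey differential equation by least squares, forecast, then difference back) on invented data; the power model additionally carries the exponent that turns the whitening equation into a Bernoulli equation:

        import numpy as np

        def gm11(x0, n_forecast=3):
            x0 = np.asarray(x0, dtype=float)
            x1 = np.cumsum(x0)                 # accumulated generating series
            z1 = 0.5 * (x1[1:] + x1[:-1])      # background values
            B = np.column_stack([-z1, np.ones(len(z1))])
            a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
            k = np.arange(1, len(x0) + n_forecast)
            x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
            # Restore the original series by first-order differencing.
            return np.concatenate([[x0[0]], np.diff(np.concatenate([[x0[0]], x1_hat]))])

        rates = [82.9, 86.1, 88.7, 92.4, 95.1]  # made-up promotion-rate data
        print(gm11(rates))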

  4. The optimal licensing contract in a differentiated Stackelberg model.

    PubMed

    Hong, Xianpei; Yang, Lijun; Zhang, Huaige; Zhao, Dan

    2014-01-01

    This paper extends the work of Wang (2002) by considering a differentiated Stackelberg model in which the leader firm is an inside innovator and licenses its new technology through three options, that is, fixed-fee licensing, royalty licensing, and two-part tariff licensing. The main contributions and conclusions of this paper are threefold. First of all, this paper derives a very different result from Wang (2002). We show that, with a nondrastic innovation, royalty licensing is always better than fixed-fee licensing for the innovator; with a drastic innovation, royalty licensing is superior to fixed-fee licensing for small values of the substitution coefficient d; however, when d approaches 1, neither fee nor royalty licensing will occur. Secondly, this paper shows that the innovator is always better off with two-part tariff licensing than with fixed-fee licensing no matter what the innovation size is. Thirdly, the innovator always prefers to license its nondrastic innovation by means of a two-part tariff instead of licensing by means of a royalty; however, with a drastic innovation, the optimal licensing strategy can be either a two-part tariff or a royalty, depending upon the differentiation of the goods. PMID:24683342

  5. Optimal SCR Control Using Data-Driven Models

    SciTech Connect

    Stevens, Andrew J.; Sun, Yannan; Lian, Jianming; Devarakonda, Maruthi N.; Parker, Gordon

    2013-04-16

    We present an optimal control solution for the urea injection for a heavy-duty diesel (HDD) selective catalytic reduction (SCR) system. The approach taken here is useful beyond SCR and could be applied to any system where a control strategy is desired and input-output data is available. For example, the strategy could also be used for the diesel oxidation catalyst (DOC) system. In this paper, we identify and validate a one-step-ahead Kalman state-space estimator for downstream NOx using the bench reactor data of an SCR core sample. The test data was acquired using a 2010 Cummins 6.7L ISB production engine with a 2010 Cummins production aftertreatment system. We used a surrogate HDD federal test procedure (FTP), developed at Michigan Technological University (MTU), which simulates the representative transients of the standard FTP cycle but has fewer engine speed/load points. The identified state-space model is then used to develop a tunable cost function that simultaneously minimizes NOx emissions and urea usage. The cost function is quadratic and univariate, thus the minimum can be computed analytically. We show the performance of the closed-loop controller using a reduced-order discrete SCR simulator developed at MTU. Our experiments with the surrogate HDD-FTP data show that the strategy developed in this paper can be used to identify performance bounds for urea dose controllers.
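
    The closed-form minimization can be illustrated as follows, assuming a hypothetical affine one-step NOx prediction nox = alpha + beta*u and illustrative weights (none of these values come from the paper):

        def optimal_dose(alpha, beta, w_nox=1.0, w_urea=0.1):
            """Minimize J(u) = w_nox*(alpha + beta*u)**2 + w_urea*u**2.

            alpha, beta: hypothetical one-step-ahead NOx prediction coefficients
            from an identified state-space model. Setting dJ/du = 0 gives the
            closed-form minimizer."""
            u = -w_nox * alpha * beta / (w_nox * beta ** 2 + w_urea)
            return max(u, 0.0)  # urea dosing cannot be negative

        print(optimal_dose(alpha=120.0, beta=-35.0))  # NOx falls with dose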

  6. Designing the optimal convolution kernel for modeling the motion blur

    NASA Astrophysics Data System (ADS)

    Jelinek, Jan

    2011-06-01

    Motion blur acts on an image like a two dimensional low pass filter, whose spatial frequency characteristic depends both on the trajectory of the relative motion between the scene and the camera and on the velocity vector variation along it. When motion during exposure is permitted, the conventional, static notions of both the image exposure and the scene-to-image mapping become unsuitable and must be revised to accommodate the image formation dynamics. This paper develops an exact image formation model for arbitrary object-camera relative motion with arbitrary velocity profiles. Moreover, for any motion the camera may operate in either continuous or flutter shutter exposure mode. Its result is a convolution kernel, which is optimally designed for both the given motion and sensor array geometry, and hence permits the most accurate computational undoing of the blurring effects for the given camera required in forensic and high security applications. The theory has been implemented and a few examples are shown in the paper.
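
    A simplified sketch of the continuous-shutter case: sampling the trajectory uniformly in time and accumulating dwell time per pixel automatically gives slower segments more weight. The trajectory is hypothetical, and the paper's kernel additionally accounts for sensor-array geometry and flutter-shutter gating:

        import numpy as np

        def motion_blur_kernel(traj, size=15, n_samples=2000):
            """traj: t in [0, 1] -> (x, y) displacement in pixels."""
            k = np.zeros((size, size))
            c = size // 2
            for t in np.linspace(0.0, 1.0, n_samples):
                x, y = traj(t)
                i, j = int(round(c + y)), int(round(c + x))
                if 0 <= i < size and 0 <= j < size:
                    k[i, j] += 1.0   # time spent over this pixel
            return k / k.sum()       # normalize to unit exposure

        # Decelerating horizontal motion: velocity 2-2t, displacement 2t-t^2.
        kernel = motion_blur_kernel(lambda t: (6.0 * (2.0 * t - t * t), 0.0))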

  7. Bottom friction optimization for barotropic tide modelling using the HYbrid Coordinate Ocean Model

    NASA Astrophysics Data System (ADS)

    Boutet, Martial; Lathuilière, Cyril; Baraille, Rémy; Son Hoang, Hong; Morel, Yves

    2014-05-01

    We can list several ways to improve tide modelling at a regional or coastal scale: a more precise and refined bathymetry, better boundary conditions (the way they are implemented and the precision of global tide atlases used) and the representation of the dissipation linked to the bottom friction. Nevertheless, the most promising improvement is the bottom friction representation. Indeed, bathymetric databases, especially in coastal areas, are more and more precise and global tide model performances are better than ever (mean discrepancy between models and tide gauges is about 1 cm for the M2 tide). Bottom friction is often parameterized with a quadratic term and a constant coefficient generally taken between 2.5 × 10^-3 and 3.0 × 10^-3. Consequently, we need a more physically consistent approach to improve bottom friction in coastal areas. The first improvement is to enable the computation of a time- and space-dependent friction coefficient. It is obtained by vertical integration of a turbulent horizontal velocity profile. The new parameter to be prescribed for the computation is the bottom roughness, z0, which depends on a large panel of physical properties and processes (sediment properties, existence of ripples and dunes, wave-current interactions, ...). The context of increasing computer resources and data availability enables the possibility to use new methods of data assimilation and optimization. The method used for this study is the simultaneous perturbation stochastic approximation (SPSA), which approximates the gradient based on a fixed number of cost function measurements, regardless of the dimension of the vector to be estimated. Indeed, each cost function measurement is obtained by randomly perturbing every component of the parameter vector. An important feature of SPSA is its relative ease of implementation. In particular, the method does not require the development of linear and adjoint versions of the circulation model. The algorithm is
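
    A textbook SPSA iteration of the kind described above uses only two cost evaluations per step, whatever the dimension of the roughness vector; the gain sequences and toy misfit below are illustrative rather than the study's configuration:

        import numpy as np

        def spsa(cost, theta0, iters=200, a=0.1, c=0.1, alpha=0.602, gamma=0.101, seed=0):
            rng = np.random.default_rng(seed)
            theta = np.asarray(theta0, dtype=float)
            for k in range(iters):
                ak = a / (k + 1) ** alpha          # step-size sequence
                ck = c / (k + 1) ** gamma          # perturbation-size sequence
                delta = rng.choice([-1.0, 1.0], size=theta.shape)
                ghat = (cost(theta + ck * delta) - cost(theta - ck * delta)) / (2.0 * ck * delta)
                theta -= ak * ghat
            return theta

        # Toy misfit standing in for the model-versus-tide-gauge discrepancy,
        # with theta playing the role of a (log-)roughness parameter field.
        print(spsa(lambda th: np.sum((th - 0.3) ** 2), theta0=np.zeros(4)))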

  8. A Simplified Model of ARIS for Optimal Controller Design

    NASA Technical Reports Server (NTRS)

    Beech, Geoffrey S.; Hampton, R. David; Kross, Denny (Technical Monitor)

    2001-01-01

    Many space-science experiments require active vibration isolation. Boeing's Active Rack Isolation System (ARIS) isolates experiments at the rack (vs. experiment or sub-experiment) level, with multiple experiments per rack. An ARIS-isolated rack typically employs eight actuators and thirteen umbilicals; the umbilicals provide services such as power, data transmission, and cooling. Hampton, et al., used "Kane's method" to develop an analytical, nonlinear, rigid-body model of ARIS that includes full actuator dynamics (inertias). This model, less the umbilicals, was first implemented for simulation by Beech and Hampton; they developed and tested their model using two commercial-off-the-shelf (COTS) software packages. Rupert, et al., added umbilical-transmitted disturbances to this nonlinear model. Because the nonlinear model, even for the untethered system, is both exceedingly complex and "encapsulated" inside these COTS tools, it is largely inaccessible to ARIS controller designers. This paper shows that ISPR rattle-space constraints and small ARIS actuator masses permit considerable model simplification, without significant loss of fidelity. First, for various loading conditions, comparisons are made between the dynamic responses of the nonlinear model (untethered) and a truth model. Then comparisons are made among nonlinear, linearized, and linearized reduced-mass models. It is concluded that these three models all capture the significant system rigid-body dynamics, with the third being preferred due to its relative simplicity.

  9. Parameterization of Model Validating Sets for Uncertainty Bound Optimizations. Revised

    NASA Technical Reports Server (NTRS)

    Lim, K. B.; Giesy, D. P.

    2000-01-01

    Given measurement data, a nominal model and a linear fractional transformation uncertainty structure with an allowance on unknown but bounded exogenous disturbances, easily computable tests for the existence of a model validating uncertainty set are given. Under mild conditions, these tests are necessary and sufficient for the case of complex, nonrepeated, block-diagonal structure. For the more general case which includes repeated and/or real scalar uncertainties, the tests are only necessary but become sufficient if a collinearity condition is also satisfied. With the satisfaction of these tests, it is shown that a parameterization of all model validating sets of plant models is possible. The new parameterization is used as a basis for a systematic way to construct or perform uncertainty tradeoff with model validating uncertainty sets which have specific linear fractional transformation structure for use in robust control design and analysis. An illustrative example which includes a comparison of candidate model validating sets is given.

  10. Computationally efficient calibration of WATCLASS Hydrologic models using surrogate optimization

    NASA Astrophysics Data System (ADS)

    Kamali, M.; Ponnambalam, K.; Soulis, E. D.

    2007-07-01

    In this approach, exploration of the cost function space was performed with an inexpensive surrogate function rather than the expensive original function. The Design and Analysis of Computer Experiments (DACE) surrogate, an approximate model that uses a correlation function for the error term, was employed. Results for Monte Carlo sampling, Latin hypercube sampling, and the DACE approximate model were compared. The results show that the DACE model has good potential for predicting the trend of simulation results. The case study was the calibration of the WATCLASS hydrologic model on the Smokey River watershed.
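
    A minimal surrogate-optimization sketch in the same spirit, substituting a Gaussian-process (kriging-type) regressor for the DACE surrogate; the one-parameter cost function below stands in for a full WATCLASS run:

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, ConstantKernel

        def expensive_cost(x):                    # one call = one model run
            return (x - 0.42) ** 2 + 0.05 * np.sin(25.0 * x)

        X_train = np.linspace(0.0, 1.0, 12).reshape(-1, 1)   # small design
        y_train = expensive_cost(X_train).ravel()

        gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=0.1),
                                      normalize_y=True)
        gp.fit(X_train, y_train)

        X_grid = np.linspace(0.0, 1.0, 1001).reshape(-1, 1)
        y_hat = gp.predict(X_grid)               # cheap search on the surrogate
        print("surrogate minimizer:", float(X_grid[np.argmin(y_hat)][0]))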

  11. Optimal selection of Orbital Replacement Unit on-orbit spares - A Space Station system availability model

    NASA Technical Reports Server (NTRS)

    Schwaab, Douglas G.

    1991-01-01

    A mathematical programming model is presented to optimize the selection of Orbital Replacement Unit on-orbit spares for the Space Station. The model maximizes system availability under the constraints of logistics resupply-cargo weight and volume allocations.
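
    A toy stand-in for such a program, which (unrealistically) treats availability gains as additive, exhaustively searches subsets of invented spares under weight and volume caps; the actual model maximizes system availability rather than a simple sum:

        from itertools import combinations

        # (name, availability gain, weight in kg, volume in m^3) - all invented.
        candidates = [("pump", 0.030, 40, 0.20), ("gyro", 0.025, 25, 0.10),
                      ("valve", 0.010, 10, 0.05), ("battery", 0.020, 55, 0.15),
                      ("radio", 0.015, 30, 0.12)]
        W_MAX, V_MAX = 100, 0.40   # cargo weight and volume allocations

        best_gain, best_set = 0.0, ()
        for r in range(len(candidates) + 1):
            for subset in combinations(candidates, r):
                w = sum(c[2] for c in subset)
                v = sum(c[3] for c in subset)
                gain = sum(c[1] for c in subset)
                if w <= W_MAX and v <= V_MAX and gain > best_gain:
                    best_gain, best_set = gain, subset

        print(best_gain, [c[0] for c in best_set])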

  12. Models and optimization of solar-control automotive glasses

    NASA Astrophysics Data System (ADS)

    Blume, Russell Dale

    Efforts to develop automotive glasses with enhanced solar control characteristics have been motivated by the desire for increased consumer comfort, reduced air-conditioning loads, and improved fuel economy associated with a reduction in the total solar energy transmitted into the automotive interior. In the current investigation, the base soda-lime-silicate glass (72.7 wt.% SiO2, 14.2% Na2O, 10.0% CaO, 2.5% MgO, 0.6% Al2O3, with 0.3% Na2SO4 added to the batch as a fining agent) was modified with Fe2O3 (0.0 to 0.8%), NiO (0.0 to 0.15%), CoO (0.0 to 0.15%), V2O5 (0.0 to 0.225%), TiO2 (0.0 to 1.5%), SnO (0.0 to 3.0%), ZnS (0.0 to 0.09%), ZnO (0.0 to 2.0%), CaF2 (0.0 to 2.0%), and P2O5 (0.0 to 2.0%) to exploit reported non-linear mechanistic interactions among the dopants by which the solar-control characteristics of the base glass can be modified. Due to the large number of experimental variables under consideration, a D-optimal experimental design methodology was utilized to model the solar-optical properties as a function of batch composition. The independent variables were defined as the calculated batch concentrations of the primary (Fe2O3, NiO, CoO, V2O5) and interactive (CaF2, P2O5, SnO, ZnS, ZnO, TiO2) dopants in the glass. The dependent variable was defined as the apparent optical density over a wavelength range of 300-2700 nm at 10 nm intervals. The model form relating the batch composition to the apparent optical density was a modified Lambert-Beer absorption law, which, in addition to the linear terms, contained quadratic terms of the primary dopants and a series of binary and ternary non-linear interactions amongst the primary and interactive dopants. Utilizing the developed model, exceptional fit in terms of both the discrete response (the transmission curves) and the integrated response (visible and solar transmittance) was realized. Glasses utilizing Fe2O3, CoO, NiO, V2O5, ZnO and P2O5 have generated innovative glasses with substantially improved

  13. Rigorous valid ranges for optimally reduced kinetic models

    SciTech Connect

    Oluwole, Oluwayemisi O.; Bhattacharjee, Binita; Tolsma, John E.; Barton, Paul I.; Green, William H.

    2006-07-15

    Reduced chemical kinetic models are often used in place of a detailed mechanism because of the computational expense of solving the complete set of equations describing the reacting system. Mathematical methods for model reduction are usually associated with a nominal set of reaction conditions for which the model is reduced. The important effects of variability in these nominal conditions are often ignored because there is no convenient way to deal with them. In this work, we introduce a method to identify rigorous valid ranges for reduced models; i.e., the reduced models are guaranteed to replicate the full model to within an error tolerance under all conditions in the identified valid range. Previous methods have estimated valid ranges using a limited set of variables (usually temperature and a few species compositions) and cannot guarantee that the reduced model is accurate at all points in the estimated range. The new method is demonstrated by identifying valid ranges for models reduced from the GRI-Mech 3.0 mechanism with 53 species and 325 reactions, and a truncated propane mechanism with 94 species and 505 reactions based on the comprehensive mechanism of Marinov et al. A library of reduced models is also generated for several prespecified ranges composing a desired state space. The use of these reduced models with error control in reacting flow simulations is demonstrated through an Adaptive Chemistry example. By using the reduced models in the simulation only when they are valid the Adaptive Chemistry solution matches the solution obtained using the detailed mechanism. (author)

  14. Regression Model Optimization for the Analysis of Experimental Data

    NASA Technical Reports Server (NTRS)

    Ulbrich, N.

    2009-01-01

    A candidate math model search algorithm was developed at Ames Research Center that determines a recommended math model for the multivariate regression analysis of experimental data. The search algorithm is applicable to classical regression analysis problems as well as wind tunnel strain gage balance calibration analysis applications. The algorithm compares the predictive capability of different regression models using the standard deviation of the PRESS residuals of the responses as a search metric. This search metric is minimized during the search. Singular value decomposition is used during the search to reject math models that lead to a singular solution of the regression analysis problem. Two threshold dependent constraints are also applied. The first constraint rejects math models with insignificant terms. The second constraint rejects math models with near-linear dependencies between terms. The math term hierarchy rule may also be applied as an optional constraint during or after the candidate math model search. The final term selection of the recommended math model depends on the regressor and response values of the data set, the user's function class combination choice, the user's constraint selections, and the result of the search metric minimization. A frequently used regression analysis example from the literature is used to illustrate the application of the search algorithm to experimental data.
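
    The search metric can be computed without refitting the model n times by using the hat-matrix identity e_press,i = e_i / (1 - h_ii); the sketch below compares a linear and a quadratic candidate model on synthetic data:

        import numpy as np

        def press_sd(X, y):
            """Standard deviation of PRESS residuals for y ~ X (the search metric)."""
            H = X @ np.linalg.solve(X.T @ X, X.T)   # hat matrix
            e = y - H @ y
            return np.std(e / (1.0 - np.diag(H)), ddof=1)

        rng = np.random.default_rng(0)
        x = rng.uniform(-1.0, 1.0, 30)
        y = 2.0 + 1.5 * x + 0.3 * x ** 2 + rng.normal(0.0, 0.05, 30)

        for cols in ([np.ones_like(x), x],            # linear candidate
                     [np.ones_like(x), x, x ** 2]):   # quadratic candidate
            print(press_sd(np.column_stack(cols), y))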

  15. Modeling of urban growth using cellular automata (CA) optimized by Particle Swarm Optimization (PSO)

    NASA Astrophysics Data System (ADS)

    Khalilnia, M. H.; Ghaemirad, T.; Abbaspour, R. A.

    2013-09-01

    In this paper, two satellite images of Tehran, the capital city of Iran, taken by TM and ETM+ in 1988 and 2010, are used as the base information layers to study changes in the urban patterns of this metropolis. The patterns of urban growth for the city of Tehran are extracted over this period using cellular automata with logistic regression functions as transition functions. Furthermore, the weighting coefficients of the parameters affecting urban growth, i.e. distance from urban centers, distance from rural centers, distance from agricultural centers, and neighborhood effects, were selected using PSO. To evaluate the prediction results, the percent correct match index is calculated. According to the results, by combining optimization techniques with the cellular automata model, urban growth patterns can be predicted with an accuracy of up to 75%.

  16. Optimizing rejection readouts in a corneal allograft transplantation model

    PubMed Central

    Hildebrand, Antonia; Böhringer, Daniel; Betancor, Paola Kammrath; Schlunck, Günther; Reinhard, Thomas

    2016-01-01

    Purpose To evaluate the feasibility of anterior segment spectral domain optic coherence tomography (ASOCT) as a rejection readout in a keratoplasty mouse model and to compare ASOCT against the current standard (i.e., a clinical score system). Furthermore, to compare both approaches with respect to intra- and inter-individual observer variability and to calculate a critical point that distinguishes between rejection and non-rejection in ASOCT analysis. Methods Allogeneic penetrating keratoplasties (PKs) were performed using C3H/He donor mice and BALB/c recipient mice; syngeneic transplantations using BALB/c donors and recipients served as controls. Corneal graft rejection was determined with a clinical score. ASOCT was used to determine the central thickness of the corneal grafts in the same animals. The rejection status was corroborated with histopathological examination. Results The median survival time (MST) of the corneal allografts in the wild-type BALB/c mice was 12 days. Allogeneic transplantation led to a 100% rejection rate, whereas signs of rejection after syngeneic transplantation appeared in up to 20% of the mice. Central corneal thickness (CCT) determination via customized software revealed a direct correlation with the clinical score. Receiver operating curve (ROC) analysis confirmed CCT as a valid surrogate for rejection. Calculation of the area under the curve (AUC) revealed a value of 0.88 with an optimal cut-off at 267 pixels. Conclusions An increase in the CCT during acute allogeneic corneal graft rejection significantly correlated with the clinical surrogate parameter “corneal opacity.” ASOCT not only generates source data; analysis of the ASOCT data also shows lower readout variability and fewer interpreter variations than the clinical score commonly used to define the time point of graft rejection in mice. PMID:27777504
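
    The ROC step can be reproduced on synthetic central-corneal-thickness readings as follows (the distributions are invented; the paper reports an AUC of 0.88 and a cut-off of 267 pixels):

        import numpy as np
        from sklearn.metrics import roc_curve, roc_auc_score

        rng = np.random.default_rng(0)
        cct = np.concatenate([rng.normal(230, 25, 40),    # non-rejecting grafts
                              rng.normal(300, 30, 40)])   # rejecting grafts
        rejected = np.concatenate([np.zeros(40), np.ones(40)])

        fpr, tpr, thresholds = roc_curve(rejected, cct)
        print("AUC =", roc_auc_score(rejected, cct))
        print("cut-off =", thresholds[np.argmax(tpr - fpr)])  # Youden's J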

  17. Oneida Tribe of Indians of Wisconsin Energy Optimization Model

    SciTech Connect

    Troge, Michael

    2014-12-01

    Oneida Nation is located in Northeast Wisconsin. The reservation is approximately 96 square miles (8 miles x 12 miles), or 65,000 acres. The greater Green Bay area is east of and adjacent to the reservation. A county line roughly splits the reservation in half; the west half is in Outagamie County and the east half is in Brown County. Land use is predominantly agriculture on the west 2/3 and suburban on the east 1/3 of the reservation. Nearly 5,000 tribally enrolled members live on the reservation, which has a total population of about 21,000. Tribal ownership is scattered across the reservation and amounts to about 23,000 acres. Currently, the Oneida Tribe of Indians of Wisconsin (OTIW) community members and facilities receive the vast majority of electrical and natural gas services from two of the largest investor-owned utilities in the state, WE Energies and Wisconsin Public Service. All urban and suburban buildings have access to natural gas. About 15% of the population and five Tribal facilities are in rural locations and therefore use propane as a primary heating fuel. Wood and oil are also used as primary or supplemental heat sources for a small percentage of the population. Very few renewable energy systems, used to generate electricity and heat, have been installed on the Oneida Reservation. This project was an effort to develop a reasonable renewable energy portfolio that will help Oneida take a leadership role in developing a clean energy economy. The Energy Optimization Model (EOM) is an exploration of energy opportunities available to the Tribe and is intended to provide a decision framework to allow the Tribe to make the wisest choices in energy investment, with an organizational desire to establish a renewable portfolio standard (RPS).

  18. A general method for exploiting QSAR models in lead optimization.

    PubMed

    Lewis, Richard A

    2005-03-10

    Computer-aided drug design tools can generate many useful and powerful models that explain structure-activity relationship (SAR) observations in a quantitative manner. These models can use many different descriptors, functional forms, and methods from simple linear equations through to multilayer neural nets. Using a model, a medicinal chemist can compute an activity, given a structure, but it is much harder to work out what changes are needed to make a structure more active. The impact of a model on the design process would be greatly enhanced if the model were more interpretable to the bench chemist. This paper describes a new protocol for performing automated iterative quantitative structure-activity relationship (QSAR) studies and presents the results of experiments on two QSAR sets from the literature. The fundamental goal of this work is to try to assist the chemist in his search for what to make next.

  19. A Framework for the Optimization of Discrete-Event Simulation Models

    NASA Technical Reports Server (NTRS)

    Joshi, B. D.; Unal, R.; White, N. H.; Morris, W. D.

    1996-01-01

    With the growing use of computer modeling and simulation in all aspects of engineering, the scope of traditional optimization has to be extended to include simulation models. Some unique aspects have to be addressed while optimizing via stochastic simulation models. The optimization procedure has to explicitly account for the randomness inherent in the stochastic measures predicted by the model. This paper outlines a general-purpose framework for optimization of terminating discrete-event simulation models. The methodology combines a chance constraint approach for problem formulation, together with standard statistical estimation and analysis techniques. The applicability of the optimization framework is illustrated by minimizing the operation and support resources of a launch vehicle, through a simulation model.
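
    One generic way to handle such a chance constraint from terminating-simulation replications, sketched here with invented outputs, is to compare an upper confidence bound on the exceedance probability against the allowed risk; the paper's framework is more general:

        import numpy as np
        from scipy import stats

        def chance_constraint_ok(samples, threshold, alpha=0.05, conf=0.95):
            """Accept if P(response > threshold) <= alpha with confidence conf."""
            samples = np.asarray(samples)
            n, k = len(samples), int(np.sum(samples > threshold))
            # One-sided upper Clopper-Pearson bound on the exceedance probability.
            upper = stats.beta.ppf(conf, k + 1, n - k) if k < n else 1.0
            return upper <= alpha

        rng = np.random.default_rng(1)
        cycle_times = rng.gamma(shape=20.0, scale=1.0, size=200)  # sim outputs
        print(chance_constraint_ok(cycle_times, threshold=30.0))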

  20. High-throughput generation, optimization and analysis of genome-scale metabolic models.

    SciTech Connect

    Henry, C. S.; DeJongh, M.; Best, A. A.; Frybarger, P. M.; Linsay, B.; Stevens, R. L.

    2010-09-01

    Genome-scale metabolic models have proven to be valuable for predicting organism phenotypes from genotypes. Yet efforts to develop new models are failing to keep pace with genome sequencing. To address this problem, we introduce the Model SEED, a web-based resource for high-throughput generation, optimization and analysis of genome-scale metabolic models. The Model SEED integrates existing methods and introduces techniques to automate nearly every step of this process, taking approximately 48 h to reconstruct a metabolic model from an assembled genome sequence. We apply this resource to generate 130 genome-scale metabolic models representing a taxonomically diverse set of bacteria. Twenty-two of the models were validated against available gene essentiality and Biolog data, with the average model accuracy determined to be 66% before optimization and 87% after optimization.

  1. Energy expenditure during human gait. I - An optimized model.

    PubMed

    Rodrigo, Silvia; Garcia, Isabel; Franco, Marian; Alonso-Vazquez, Ana; Ambrosio, Jorge

    2010-01-01

    Within the framework of multibody dynamics, a 3D large-scale neuromusculoskeletal model of the human body is presented. To characterize the dynamics of skeletal muscle, a phenomenological model of energy expenditure was developed for estimating energy consumption during normal locomotion. Such a model is able to predict thermal and mechanical energy liberation under the submaximal activation, muscle fiber types, and varying contractile conditions typically observed in human motion. Future formulations of the indeterminate biomechanical problem, solved through the physiological criterion of minimization of the metabolic cost of transport during gait, should consider the role of muscle groups in coordinating multijoint motion. Such an approach is presented in part II of the paper.

  2. Optimal bispectrum constraints on single-field models of inflation

    SciTech Connect

    Anderson, Gemma J.; Regan, Donough; Seery, David E-mail: D.Regan@sussex.ac.uk

    2014-07-01

    We use WMAP 9-year bispectrum data to constrain the free parameters of an 'effective field theory' describing fluctuations in single-field inflation. The Lagrangian of the theory contains a finite number of operators associated with unknown mass scales. Each operator produces a fixed bispectrum shape, which we decompose into partial waves in order to construct a likelihood function. Based on this likelihood we are able to constrain four linearly independent combinations of the mass scales. As an example of our framework we specialize our results to the case of 'Dirac-Born-Infeld' and 'ghost' inflation and obtain the posterior probability for each model, which in Bayesian schemes is a useful tool for model comparison. Our results suggest that DBI-like models with two or more free parameters are disfavoured by the data by comparison with single-parameter models in the same class.

  3. Modeling Illicit Drug Use Dynamics and Its Optimal Control Analysis.

    PubMed

    Mushayabasa, Steady; Tapedzesa, Gift

    2015-01-01

    The global burden of death and disability attributable to illicit drug use remains a significant threat to public health for both developed and developing nations. This paper presents a new mathematical modeling framework to investigate the effects of illicit drug use in the community. In our model the transmission process is captured as a social "contact" process between the susceptible individuals and illicit drug users. We conduct both epidemic and endemic analysis, with a focus on the threshold dynamics characterized by the basic reproduction number. Using our model, we present illustrative numerical results with a case study in the Cape Town, Gauteng, Mpumalanga, and Durban communities of South Africa. In addition, the basic model is extended to incorporate time-dependent intervention strategies. PMID:26819625

  4. Academic Optimism and Collective Responsibility: An Organizational Model of the Dynamics of Student Achievement

    ERIC Educational Resources Information Center

    Wu, Jason H.

    2013-01-01

    This study was designed to examine the construct of academic optimism and its relationship with collective responsibility in a sample of Taiwan elementary schools. The construct of academic optimism was tested using confirmatory factor analysis, and the whole structural model was tested with a structural equation modeling analysis. The data were…

  5. The analysis of optimal singular controls for SEIR model of tuberculosis

    NASA Astrophysics Data System (ADS)

    Marpaung, Faridawaty; Rangkuti, Yulita M.; Sinaga, Marlina S.

    2014-12-01

    The optimality of singular controls for an SEIR model of tuberculosis is analyzed. There are controls that correspond to the timing of the vaccination and treatment schedules. The optimality of the singular control is obtained by differentiating a switching function of the model. The result shows that the vaccination and treatment controls are singular.

  6. Cooperative recurrent modular neural networks for constrained optimization: a survey of models and applications.

    PubMed

    Kamel, Mohamed S; Xia, Youshen

    2009-03-01

    Constrained optimization problems arise in a wide variety of scientific and engineering applications. Since several single recurrent neural networks, when applied to solve constrained optimization problems for real-time engineering applications, have shown some limitations, cooperative recurrent neural network approaches have been developed to overcome their drawbacks. This paper surveys in detail work on cooperative recurrent neural networks for solving constrained optimization problems and their engineering applications, and points out their standing models from the viewpoint of both convergence to the optimal solution and model complexity. We provide examples and comparisons to show the advantages of these models in the given applications.

  7. Optimization-driven identification of genetic perturbations accelerates the convergence of model parameters in ensemble modeling of metabolic networks.

    PubMed

    Zomorrodi, Ali R; Lafontaine Rivera, Jimmy G; Liao, James C; Maranas, Costas D

    2013-09-01

    The ensemble modeling (EM) approach has shown promise in capturing kinetic and regulatory effects in the modeling of metabolic networks. Efficacy of the EM procedure relies on the identification of model parameterizations that adequately describe all observed metabolic phenotypes upon perturbation. In this study, we propose an optimization-based algorithm for the systematic identification of genetic/enzyme perturbations to maximally reduce the number of models retained in the ensemble after each round of model screening. The key premise here is to design perturbations that will maximally scatter the predicted steady-state fluxes over the ensemble parameterizations. We demonstrate the applicability of this procedure for an Escherichia coli metabolic model of central metabolism by successively identifying single, double, and triple enzyme perturbations that cause the maximum degree of flux separation between models in the ensemble. Results revealed that optimal perturbations are not always located close to reaction(s) whose fluxes are measured, especially when multiple perturbations are considered. In addition, there appears to be a maximum number of simultaneous perturbations beyond which no appreciable increase in the divergence of flux predictions is achieved. Overall, this study provides a systematic way of optimally designing genetic perturbations for populating the ensemble of models with relevant model parameterizations.

  8. OPTIMIZING MODEL PERFORMANCE: VARIABLE SIZE RESOLUTION IN CLOUD CHEMISTRY MODELING. (R826371C005)

    EPA Science Inventory

    Under many conditions size-resolved aqueous-phase chemistry models predict higher sulfate production rates than comparable bulk aqueous-phase models. However, there are special circumstances under which bulk and size-resolved models offer similar predictions. These special con...

  9. Process Cost Modeling for Multi-Disciplinary Design Optimization

    NASA Technical Reports Server (NTRS)

    Bao, Han P.; Freeman, William (Technical Monitor)

    2002-01-01

    For early design concepts, the conventional approach to cost is normally some kind of parametric weight-based cost model. There is now ample evidence that this approach can be misleading and inaccurate. By the nature of its development, a parametric cost model requires historical data and is valid only if the new design is analogous to those for which the model was derived. Advanced aerospace vehicles have no historical production data and are nowhere near the vehicles of the past. Using an existing weight-based cost model would only lead to errors and distortions of the true production cost. This report outlines the development of a process-based cost model in which the physical elements of the vehicle are costed according to a first-order dynamics model. This theoretical cost model, first advocated by early work at MIT, has been expanded to cover the basic structures of an advanced aerospace vehicle. Elemental costs based on the geometry of the design can be summed up to provide an overall estimation of the total production cost for a design configuration. This capability to directly link any design configuration to realistic cost estimation is a key requirement for high payoff MDO problems. Another important consideration in this report is the handling of part or product complexity. Here the concept of cost modulus is introduced to take into account variability due to different materials, sizes, shapes, precision of fabrication, and equipment requirements. The most important implication of the development of the proposed process-based cost model is that different design configurations can now be quickly related to their cost estimates in a seamless calculation process easily implemented on any spreadsheet tool. In successive sections, the report addresses the issues of cost modeling as follows. First, an introduction is presented to provide the background for the research work. Next, a quick review of cost estimation techniques is made with the intention to

  10. Optimizing modelling in iterative image reconstruction for preclinical pinhole PET

    NASA Astrophysics Data System (ADS)

    Goorden, Marlies C.; van Roosmalen, Jarno; van der Have, Frans; Beekman, Freek J.

    2016-05-01

    The recently developed versatile emission computed tomography (VECTor) technology enables high-energy SPECT and simultaneous SPECT and PET of small animals at sub-mm resolutions. VECTor uses dedicated clustered pinhole collimators mounted in a scanner with three stationary large-area NaI(Tl) gamma detectors. Here, we develop and validate dedicated image reconstruction methods that compensate for image degradation by incorporating accurate models for the transport of high-energy annihilation gamma photons. Ray tracing software was used to calculate photon transport through the collimator structures and into the gamma detector. Input to this code are several geometric parameters estimated from system calibration with a scanning 99mTc point source. Effects on reconstructed images of (i) modelling variable depth-of-interaction (DOI) in the detector, (ii) incorporating photon paths that go through multiple pinholes (‘multiple-pinhole paths’ (MPP)), and (iii) including various amounts of point spread function (PSF) tail were evaluated. Imaging 18F in resolution and uniformity phantoms showed that including large parts of PSFs is essential to obtain good contrast-noise characteristics and that DOI modelling is highly effective in removing deformations of small structures, together leading to 0.75 mm resolution PET images of a hot-rod Derenzo phantom. Moreover, MPP modelling reduced the level of background noise. These improvements were also clearly visible in mouse images. Performance of VECTor can thus be significantly improved by accurately modelling annihilation gamma photon transport.

  14. On the model-based optimization of secreting mammalian cell (GS-NS0) cultures.

    PubMed

    Kiparissides, A; Pistikopoulos, E N; Mantalaris, A

    2015-03-01

    The global bio-manufacturing industry requires improved process efficiency to satisfy the increasing demands for biochemicals, biofuels, and biologics. The use of model-based techniques can facilitate the reduction of unnecessary experimentation and reduce labor and operating costs by identifying the most informative experiments and providing strategies to optimize the bioprocess at hand. Herein, we investigate the potential of a research methodology that combines model development, parameter estimation, global sensitivity analysis, and selection of optimal feeding policies via dynamic optimization methods to improve the efficiency of an industrially relevant bioprocess. Data from a set of batch experiments were used to estimate values for the parameters of an unstructured model describing monoclonal antibody (mAb) production in GS-NS0 cell cultures. Global Sensitivity Analysis (GSA) highlighted parameters with a strong effect on the model output, and data from a fed-batch experiment were used to refine their estimated values. Model-based optimization was used to identify a feeding regime that maximized final mAb titer. An independent fed-batch experiment was conducted to validate both the results of the optimization and the predictive capabilities of the developed model. The successful integration of wet-lab experimentation and mathematical model development, analysis, and optimization represents a unique, novel, and interdisciplinary approach that addresses the complicated research and industrial problem of model-based optimization of cell-based processes.

  15. Optimal speech motor control and token-to-token variability: a Bayesian modeling approach.

    PubMed

    Patri, Jean-François; Diard, Julien; Perrier, Pascal

    2015-12-01

    The remarkable capacity of the speech motor system to adapt to various speech conditions is due to an excess of degrees of freedom, which enables producing similar acoustical properties with different sets of control strategies. To explain how the central nervous system selects one of the possible strategies, a common approach, in line with optimal motor control theories, is to model speech motor planning as the solution of an optimality problem based on cost functions. Despite the success of this approach, one of its drawbacks is the intrinsic contradiction between the concept of optimality and the observed experimental intra-speaker token-to-token variability. The present paper proposes an alternative approach by formulating feedforward optimal control in a probabilistic Bayesian modeling framework. This is illustrated by controlling a biomechanical model of the vocal tract for speech production and by comparing it with an existing optimal control model (GEPPETO). The essential elements of this optimal control model are presented first. From them the Bayesian model is constructed in a progressive way. Performance of the Bayesian model is evaluated based on computer simulations and compared to the optimal control model. This approach is shown to be appropriate for solving the speech planning problem while accounting for variability in a principled way. PMID:26497359

  16. Towards the geometric optimization of potential field models - A new spatial operator tool and applications

    NASA Astrophysics Data System (ADS)

    Haase, Claudia; Götze, Hans-Jürgen

    2014-05-01

    We present a new method for automated geometric modifications of potential field models. Computational developments and the increasing amount of available potential field data, especially gradient data from the satellite missions, lead to increasingly complex models and integrated modelling tools. Editing of these models becomes more difficult. Our approach presents an optimization tool that is designed to modify vertex-based model geometries (e.g. polygons, polyhedrons, triangulated surfaces) by applying spatial operators to the model that use an adaptive, on-the-fly model discretization. These operators deform the existing model via vertex-dragging, aiming at a minimized misfit between measured and modelled potential field anomaly. The parameters that define the operators are subject to an optimization process. This kind of parametrization provides a means for the reduction of unknowns (dimensionality of the search space), allows a variety of possible modifications and ensures that geometries are not destroyed by crossing polygon lines or punctured planes. We implemented a particle swarm optimization as a global searcher with restart option for the task of finding optimal operator parameters. This approach provides us with an ensemble of model solutions that allows a selection and geologically reasonable interpretations. The applicability of the tool is demonstrated in two 2D case studies that provide models of different extent and with different objectives. The first model is a synthetic salt structure in a horizontally layered background model. Expected geometry modifications are considerably small and localized and the initial models contain rather little information on the intended salt structure. A large scale example is given in the second study. Here, the optimization is applied to a sedimentary basin model that is based on seismic interpretation. With the aim to evaluate the seismically derived model, large scale operators are applied that mainly cause

  17. Optimized continuous pharmaceutical manufacturing via model-predictive control.

    PubMed

    Rehrl, Jakob; Kruisz, Julia; Sacher, Stephan; Khinast, Johannes; Horn, Martin

    2016-08-20

    This paper demonstrates the application of model-predictive control to a feeding blending unit used in continuous pharmaceutical manufacturing. The goal of this contribution is, on the one hand, to highlight the advantages of the proposed concept compared to conventional PI-controllers, and, on the other hand, to present a step-by-step guide for controller synthesis. The derivation of the required mathematical plant model is given in detail and all the steps required to develop a model-predictive controller are shown. Compared to conventional concepts, the proposed approach allows constraints (e.g. mass hold-up in the blender) to be considered conveniently and offers a straightforward, easy-to-tune controller setup. The concept is implemented in a simulation environment. In order to realize it on a real system, additional aspects (e.g., state estimation, measurement equipment) will have to be investigated. PMID:27317987
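
    To make the receding-horizon idea concrete, here is a toy sketch of one constrained MPC solve for a blender mass hold-up, written with CVXPY; the single-integrator plant, the outflow, and all limits and weights are invented for illustration and are not the paper's plant model:

```python
# Toy MPC step: track a hold-up setpoint subject to actuator and hold-up
# constraints. Plant: m[k+1] = m[k] + dt*(u[k] - d). All numbers are invented.
import cvxpy as cp

N, dt = 20, 1.0                # horizon (steps), sample time
m0, m_ref, d = 2.0, 5.0, 0.3   # initial hold-up, setpoint, assumed outflow (kg/step)

u = cp.Variable(N)             # feed rate over the horizon
m = cp.Variable(N + 1)         # predicted hold-up trajectory

constraints = [m[0] == m0]
for k in range(N):
    constraints += [m[k + 1] == m[k] + dt * (u[k] - d),  # mass balance
                    0 <= u[k], u[k] <= 1.0,              # actuator limits
                    0 <= m[k + 1], m[k + 1] <= 6.0]      # hold-up constraint

cost = cp.sum_squares(m[1:] - m_ref) + 0.1 * cp.sum_squares(u)
cp.Problem(cp.Minimize(cost), constraints).solve()
print("first move applied to the plant:", round(float(u.value[0]), 3))
```

    In closed loop, only the first move is applied and the problem is re-solved at the next sample; that re-solving is where the disturbance rejection of MPC comes from.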

  18. Reproducing Phenomenology of Peroxidation Kinetics via Model Optimization

    NASA Astrophysics Data System (ADS)

    Ruslanov, Anatole D.; Bashylau, Anton V.

    2010-06-01

    We studied mathematical modeling of lipid peroxidation using a biochemical model system of iron (II)-ascorbate-dependent lipid peroxidation of rat hepatocyte mitochondrial fractions. We found that antioxidants extracted from plants demonstrate a high intensity of peroxidation inhibition. We simplified the system of differential equations that describes the kinetics of the mathematical model to a first order equation, which can be solved analytically. Moreover, we endeavor to algorithmically and heuristically recreate the processes and construct an environment that closely resembles the corresponding natural system. Our results demonstrate that it is possible to theoretically predict both the kinetics of oxidation and the intensity of inhibition without resorting to analytical and biochemical research, which is important for cost-effective discovery and development of medical agents with antioxidant action from the medicinal plants.

  1. Optimal Estimation of Phenological Crop Model Parameters for Rice (Oryza sativa)

    NASA Astrophysics Data System (ADS)

    Sharifi, H.; Hijmans, R. J.; Espe, M.; Hill, J. E.; Linquist, B.

    2015-12-01

    Crop phenology models are important components of crop growth models. In the case of phenology models, generally only a few parameters are calibrated and default cardinal temperatures are used, which can lead to a temperature-dependent systematic phenology prediction error. Our objective was to evaluate different optimization approaches in the Oryza2000 and CERES-Rice phenology sub-models to assess the importance of optimizing cardinal temperatures on model performance and systematic error. We used two optimization approaches: the typical single-stage optimization (planting to heading) and a three-stage optimization (planting to panicle initiation (PI), PI to heading (HD), and HD to physiological maturity (MT)) in which all model parameters are optimized simultaneously. Data for this study were collected over three years and six locations on seven California rice cultivars. A temperature-dependent systematic error was found for all cultivars and stages; however, it was generally small (systematic error < 2.2). Both optimization approaches in both models resulted in only small changes in cardinal temperatures relative to the default values, and thus optimization of cardinal temperatures did not affect systematic error or model performance. Compared to single-stage optimization, three-stage optimization had little effect on determining time to PI or HD but significantly improved the precision in determining the time from HD to MT: the RMSE was reduced from an average of 6 to 3.3 in Oryza2000 and from 6.6 to 3.8 in CERES-Rice. With regard to systematic error, we found a trade-off between RMSE and systematic error when the optimization objective was set to minimize either RMSE or systematic error. Therefore, it is important to find the limits within which the trade-offs between RMSE and systematic error are acceptable, especially in climate change studies, where this can prevent erroneous conclusions.
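
    As a sketch of what the cardinal temperatures do inside such a phenology sub-model, the toy code below accrues daily thermal time with a triangular response between base, optimum and maximum temperatures, and calibrates a single stage's thermal-time requirement by grid search on RMSE in days; the response shape and all numbers are generic placeholders, not the Oryza2000 or CERES-Rice formulations:

```python
# Toy phenology sub-model: thermal time accrues between cardinal temperatures
# Tbase < Topt < Tmax; a stage completes when accumulated thermal time reaches
# a cultivar-specific target. All values are illustrative placeholders.
import numpy as np

def daily_thermal_time(T, Tbase=8.0, Topt=30.0, Tmax=42.0):
    T = np.asarray(T, dtype=float)
    rise = np.clip((T - Tbase) / (Topt - Tbase), 0.0, 1.0)
    fall = np.clip((Tmax - T) / (Tmax - Topt), 0.0, 1.0)
    return (Topt - Tbase) * np.minimum(rise, fall)      # degree-day equivalent

def days_to_stage(temps, tt_target):
    tt = np.cumsum(daily_thermal_time(temps))
    return int(np.searchsorted(tt, tt_target)) + 1      # day the stage is reached

rng = np.random.default_rng(1)
seasons = [22 + 5 * np.sin(np.linspace(0, 3, 150)) + rng.normal(0, 1, 150)
           for _ in range(3)]                           # three synthetic plantings
obs = np.array([days_to_stage(s, 900.0) for s in seasons])  # pretend observations

grid = np.linspace(600, 1200, 121)                      # candidate requirements
rmse = [np.sqrt(np.mean((np.array([days_to_stage(s, g) for s in seasons]) - obs) ** 2))
        for g in grid]
print("fitted thermal-time requirement:", grid[int(np.argmin(rmse))])
```

    Holding Tbase/Topt/Tmax at defaults and fitting only the thermal-time targets is the common practice the abstract describes; whether that leaves a temperature-dependent residual is what the systematic-error analysis checks.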

  2. The Search for "Optimal" Cutoff Properties: Fit Index Criteria in Structural Equation Modeling

    ERIC Educational Resources Information Center

    Sivo, Stephen A.; Xitao, Fan; Witta, E. Lea; Willse, John T.

    2006-01-01

    This study is a partial replication of L. Hu and P. M. Bentler's (1999) fit criteria work. The purpose of this study was twofold: (a) to determine whether cut-off values vary according to which model is the true population model for a dataset and (b) to identify which of 13 fit indexes behave optimally by retaining all of the correct models while…

  3. Optimizing replacement of dairy cows: modeling the effects of diseases.

    PubMed

    Gröhn, Y T; Rajala-Schultz, P J; Allore, H G; DeLorenzo, M A; Hertl, J A; Galligan, D T

    2003-09-30

    We modified an existing dairy management decision model by including economically important dairy cattle diseases, and illustrated how their inclusion changed culling recommendations. Nine common diseases having treatment and veterinary costs, and affecting milk yield, fertility and survival, were considered important in the culling decision process. A sequence of stages was established during which diseases were considered significant: mastitis and lameness, any time during lactation; dystocia, milk fever and retained placenta, 0-4 days of lactation; displaced abomasum, 5-30 days; ketosis and metritis, 5-60 days; and cystic ovaries, 61-120 days. Some diseases were risk factors for others. Baseline incidences and disease effects were obtained from the literature. The effects of various disease combinations on milk yield, fertility, survival and economics were estimated. Adding diseases into the model did not increase the voluntary or total culling rate. However, diseased animals were recommended for culling much more often than healthy cows, regardless of parity or production level. Cows in the highest production level were not recommended for culling even if they contracted a disease. The annuity per cow decreased and herdlife increased when diseases were in the model. Higher replacement cost also increased herdlife and decreased the annuity and voluntary culling rate.

  4. Optimizing technology investments: a broad mission model approach

    NASA Technical Reports Server (NTRS)

    Shishko, R.

    2003-01-01

    A long-standing problem in NASA is how to allocate scarce technology development resources across advanced technologies in order to best support a large set of future potential missions. Within NASA, two orthogonal paradigms have received attention in recent years: the real-options approach and the broad mission model approach. This paper focuses on the latter.

  5. Optimized transformation of the glottal motion into a mechanical model.

    PubMed

    Triep, M; Brücker, C; Stingl, M; Döllinger, M

    2011-03-01

    During phonation the human vocal folds exhibit a complex self-sustained oscillation which is a result of the transglottic pressure difference, of the characteristics of the tissue of the folds and of the flow in the gap between the vocal folds (Van den Berg J. Myoelastic-aerodynamic theory of voice production. J Speech Hearing Res 1958;1:227-44 [1]). Obviously, extensive experiments cannot be performed in vivo. Therefore, the literature contains a variety of model experiments that try to replicate vocal fold kinematics for specific studies within the vocal tract. Here, we present an experimental model to visualize the fluid dynamics that result from the complex motions of real human vocal folds. An existing up-scaled glottal cam model with approximate glottal kinematics is extended to replicate observed glottal closure types more realistically. This extension of the model is a further step in understanding the fluid dynamical mechanisms contributing to the quality of the human voice during phonation, in particular the cause (changed glottal kinematics) and its effect (changed aero-acoustic field). For four typical glottal closure types, cam geometries of varying profile are generated. Two counter-rotating cams covered with a silicone membrane reproduce the observed glottal movements as closely as possible.

  6. Building Restoration Operations Optimization Model Beta Version 1.0

    SciTech Connect

    2007-05-31

    The Building Restoration Operations Optimization Model (BROOM), developed by Sandia National Laboratories, is a software product designed to aid in the restoration of large facilities contaminated by a biological material. BROOM’s integrated data collection, data management, and visualization software improves the efficiency of cleanup operations, minimizes facility downtime, and provides a transparent basis for reopening the facility. Secure remote access to building floor plans: Floor plan drawings and knowledge of the HVAC system are critical to the design and implementation of effective sampling plans. In large facilities, access to these data may be complicated by their sheer abundance and the disorganized state in which they are often stored. BROOM avoids potentially costly delays by providing a means of organizing and storing mechanical and floor plan drawings in a secure remote database that is easily accessed. Sampling design tools: BROOM provides an array of tools to answer the question of where to sample and how many samples to take. In addition to simple judgmental and random sampling plans, the software includes two sophisticated methods of adaptively developing a sampling strategy. Both tools strive to choose sampling locations that best satisfy a specified objective (e.g., minimizing kriging variance) but use numerically different strategies to do so. Surface samples are collected early in the restoration process to characterize the extent of contamination and then again later to verify that the facility is safe to reenter. BROOM supports sample collection using a ruggedized PDA equipped with a barcode scanner and laser range finder. The PDA displays building floor drawings, sampling plans, and electronic forms for data entry. Barcodes are placed on sample containers for the purpose of tracking the specimen and linking acquisition data (e.g., location, surface type, texture) to laboratory results. Sample location is determined by activating the integrated laser

  7. Applying Mathematical Optimization Methods to an ACT-R Instance-Based Learning Model

    PubMed Central

    Said, Nadia; Engelhart, Michael; Kirches, Christian; Körkel, Stefan; Holt, Daniel V.

    2016-01-01

    Computational models of cognition provide an interface to connect advanced mathematical tools and methods to empirically supported theories of behavior in psychology, cognitive science, and neuroscience. In this article, we consider a computational model of instance-based learning, implemented in the ACT-R cognitive architecture. We propose an approach for obtaining mathematical reformulations of such cognitive models that improve their computational tractability. For the well-established Sugar Factory dynamic decision making task, we conduct a simulation study to analyze central model parameters. We show how mathematical optimization techniques can be applied to efficiently identify optimal parameter values with respect to different optimization goals. Beyond these methodological contributions, our analysis reveals the sensitivity of this particular task with respect to initial settings and yields new insights into how average human performance deviates from potential optimal performance. We conclude by discussing possible extensions of our approach as well as future steps towards applying more powerful derivative-based optimization methods. PMID:27387139

  8. Applying Mathematical Optimization Methods to an ACT-R Instance-Based Learning Model.

    PubMed

    Said, Nadia; Engelhart, Michael; Kirches, Christian; Körkel, Stefan; Holt, Daniel V

    2016-01-01

    Computational models of cognition provide an interface to connect advanced mathematical tools and methods to empirically supported theories of behavior in psychology, cognitive science, and neuroscience. In this article, we consider a computational model of instance-based learning, implemented in the ACT-R cognitive architecture. We propose an approach for obtaining mathematical reformulations of such cognitive models that improve their computational tractability. For the well-established Sugar Factory dynamic decision making task, we conduct a simulation study to analyze central model parameters. We show how mathematical optimization techniques can be applied to efficiently identify optimal parameter values with respect to different optimization goals. Beyond these methodological contributions, our analysis reveals the sensitivity of this particular task with respect to initial settings and yields new insights into how average human performance deviates from potential optimal performance. We conclude by discussing possible extensions of our approach as well as future steps towards applying more powerful derivative-based optimization methods. PMID:27387139

  9. A universal optimization strategy for ant colony optimization algorithms based on the Physarum-inspired mathematical model.

    PubMed

    Zhang, Zili; Gao, Chao; Liu, Yuxin; Qian, Tao

    2014-09-01

    Ant colony optimization (ACO) algorithms often fall into local optimal solutions and have low search efficiency when solving the travelling salesman problem (TSP). To address these shortcomings, this paper proposes a universal optimization strategy for updating the pheromone matrix in ACO algorithms. The new optimization strategy takes advantage of a unique feature of the Physarum-inspired mathematical model (PMM): critical paths are reserved in the process of evolving adaptive networks. The optimized algorithms, denoted PMACO algorithms, enhance the amount of pheromone on the critical paths and promote exploitation of the optimal solution. Experimental results on synthetic and real networks show that the PMACO algorithms are more efficient and robust than traditional ACO algorithms and are adaptable to solving the TSP with single or multiple objectives. Meanwhile, we further analyse the influence of parameters on the performance of the PMACO algorithms. Based on these analyses, the best values of these parameters are worked out for the TSP.
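
    The abstract is specific enough to sketch the flavour of the update: a standard ACO evaporation/deposit step plus extra reinforcement of the edges on the current best tour (the "critical path" in the Physarum analogy). The bonus rule and constants below are invented for illustration and are not the paper's equations:

```python
# ACO pheromone update with a Physarum-style critical-path bonus (illustrative).
import numpy as np

def update_pheromone(tau, tours, lengths, rho=0.1, q=1.0, bonus=0.5):
    """tau: n x n pheromone matrix; tours: list of city orderings."""
    tau *= (1.0 - rho)                               # evaporation
    for tour, L in zip(tours, lengths):              # standard deposit
        for a, b in zip(tour, np.roll(tour, -1)):
            tau[a, b] += q / L
            tau[b, a] += q / L
    best = tours[int(np.argmin(lengths))]            # extra deposit on the
    for a, b in zip(best, np.roll(best, -1)):        # current critical path
        tau[a, b] += bonus * q / min(lengths)
        tau[b, a] += bonus * q / min(lengths)
    return tau

n = 5
tau = np.ones((n, n))
tours = [np.array([0, 1, 2, 3, 4]), np.array([0, 2, 1, 4, 3])]
lengths = [10.0, 12.5]
print(update_pheromone(tau, tours, lengths).round(3))
```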

  10. Comparison of optimization methods for the hyperspectral semi-analytical model

    NASA Astrophysics Data System (ADS)

    Du, KePing; Xi, Ying; Sun, LiRan; Zhang, Xuegang

    2009-01-01

    During recent years, more and more effort has been focused on developing new models based on ocean optics theory to retrieve water bio-geo-chemical parameters or inherent optical properties (IOPs) from either ocean color imagery or in situ measurements. These models are sophisticated and hard to invert directly, so look-up table (LUT) techniques or optimization methods are employed to retrieve the unknown parameters, e.g., chlorophyll concentration, CDOM absorption, etc. Many researchers prefer to use time-consuming global optimization methods, e.g., genetic or evolutionary algorithms. In this study, three categories of optimization methods, namely smooth nonlinear optimization (NLP), global optimization (GO), and nonsmooth optimization (NSP), are compared on the sophisticated hyperspectral semi-analytical (SA) algorithm developed by Lee et al., and retrieval accuracy and performance are evaluated. It is found that retrieval accuracy does not differ much between the methods; the performance difference, however, is much larger, and NLP works very well for the SA model. For a given model, it is better to first determine whether the problem is linear, nonlinear, or nonsmooth (and sometimes whether it is convex), or to linearize nonsmooth behaviour caused by conditional branches, and then to select an optimization method of the corresponding category. Selection of initial values is a big issue in optimization; here, simple statistical models (e.g., OC2 or OC4) are used to retrieve the unknowns as initial values.
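
    The inversion setup being compared can be sketched in a few lines: a forward model maps the unknowns to a spectrum, a smooth NLP solver fits it to the observation, and a band-ratio style statistical estimate seeds the initial values. The forward model and all coefficients below are toy stand-ins, not Lee et al.'s SA model:

```python
# Toy semi-analytical inversion: fit (chl, cdom) to an observed spectrum with
# L-BFGS-B, starting from a crude band-ratio guess. The physics is invented.
import numpy as np
from scipy.optimize import minimize

wl = np.linspace(400, 700, 31)                      # wavelengths (nm)

def forward(chl, cdom):
    pigment = 0.02 * np.exp(-0.5 * ((wl - 550 - 5 * chl) / 60) ** 2)
    return pigment / (1.0 + cdom * np.exp(-0.014 * (wl - 440)))

truth = forward(3.0, 0.4)
obs = truth * (1 + 0.02 * np.random.default_rng(2).normal(size=wl.size))

ratio = obs[np.abs(wl - 550).argmin()] / obs[np.abs(wl - 490).argmin()]
chl0 = 2.0 * ratio                                  # OC2/OC4-flavoured first guess

res = minimize(lambda p: np.sum((forward(*p) - obs) ** 2),
               x0=[chl0, 0.1], method="L-BFGS-B",
               bounds=[(0.01, 30.0), (0.0, 2.0)])
print("retrieved (chl, cdom):", res.x.round(3))
```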

  11. Stochastic optimization for modeling physiological time series: application to the heart rate response to exercise

    NASA Astrophysics Data System (ADS)

    Zakynthinaki, M. S.; Stirling, J. R.

    2007-01-01

    Stochastic optimization is applied to the problem of optimizing the fit of a model to the time series of raw physiological (heart rate) data. The physiological response to exercise has been recently modeled as a dynamical system. Fitting the model to a set of raw physiological time series data is, however, not a trivial task. For this reason and in order to calculate the optimal values of the parameters of the model, the present study implements the powerful stochastic optimization method ALOPEX IV, an algorithm that has been proven to be fast, effective and easy to implement. The optimal parameters of the model, calculated by the optimization method for the particular athlete, are very important as they characterize the athlete's current condition. The present study applies the ALOPEX IV stochastic optimization to the modeling of a set of heart rate time series data corresponding to different exercises of constant intensity. An analysis of the optimization algorithm, together with an analytic proof of its convergence (in the absence of noise), is also presented.
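
    The ALOPEX family is simple enough to sketch: each parameter's next move is correlated with whether its last move lowered the misfit, plus annealed noise. The toy quadratic below stands in for the heart-rate model fit, and the step sizes and schedule are illustrative choices rather than the paper's ALOPEX IV settings:

```python
# ALOPEX-style correlation update on a toy misfit (illustrative settings).
import numpy as np

rng = np.random.default_rng(3)
target = np.array([1.5, -0.7, 2.0])                 # "true" model parameters
cost = lambda p: float(np.sum((p - target) ** 2))   # stand-in for the data misfit

p = np.zeros(3)
dp = rng.normal(0, 0.05, 3)                         # initial random step
E_prev, step, sigma = cost(p), 0.02, 0.05
for _ in range(3000):
    p_trial = p + dp
    E_new = cost(p_trial)
    dE = E_new - E_prev
    p, E_prev = p_trial, E_new
    # keep the direction of moves that lowered the cost, reverse the others
    dp = step * np.sign(-dp * dE) + rng.normal(0, sigma, 3)
    step *= 0.999
    sigma *= 0.999                                  # anneal step and noise

print("recovered parameters:", p.round(2), " final cost:", round(cost(p), 4))
```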

  12. Land Surface Model and Particle Swarm Optimization Algorithm Based on the Model-Optimization Method for Improving Soil Moisture Simulation in a Semi-Arid Region.

    PubMed

    Yang, Qidong; Zuo, Hongchao; Li, Weidong

    2016-01-01

    Improving the capability of land-surface process models to simulate soil moisture assists in better understanding the atmosphere-land interaction. In semi-arid regions, due to limited near-surface observational data and large errors in large-scale parameters obtained by the remote sensing method, there exist uncertainties in land surface parameters, which can cause large offsets between the simulated results of land-surface process models and the observational data for the soil moisture. In this study, observational data from the Semi-Arid Climate Observatory and Laboratory (SACOL) station in the semi-arid loess plateau of China were divided into three datasets: summer, autumn, and summer-autumn. By combining the particle swarm optimization (PSO) algorithm and the land-surface process model SHAW (Simultaneous Heat and Water), the soil and vegetation parameters that are related to the soil moisture but difficult to obtain by observations are optimized using three datasets. On this basis, the SHAW model was run with the optimized parameters to simulate the characteristics of the land-surface process in the semi-arid loess plateau. Simultaneously, the default SHAW model was run with the same atmospheric forcing as a comparison test. Simulation results revealed the following: parameters optimized by the particle swarm optimization algorithm in all simulation tests improved simulations of the soil moisture and latent heat flux; differences between simulated results and observational data are clearly reduced, but simulation tests involving the adoption of optimized parameters cannot simultaneously improve the simulation results for the net radiation, sensible heat flux, and soil temperature. Optimized soil and vegetation parameters based on different datasets have the same order of magnitude but are not identical; soil parameters only vary to a small degree, but the variation range of vegetation parameters is large. PMID:26991786

  13. Land Surface Model and Particle Swarm Optimization Algorithm Based on the Model-Optimization Method for Improving Soil Moisture Simulation in a Semi-Arid Region

    PubMed Central

    Yang, Qidong; Zuo, Hongchao; Li, Weidong

    2016-01-01

    Improving the capability of land-surface process models to simulate soil moisture assists in better understanding the atmosphere-land interaction. In semi-arid regions, due to limited near-surface observational data and large errors in large-scale parameters obtained by the remote sensing method, there exist uncertainties in land surface parameters, which can cause large offsets between the simulated results of land-surface process models and the observational data for the soil moisture. In this study, observational data from the Semi-Arid Climate Observatory and Laboratory (SACOL) station in the semi-arid loess plateau of China were divided into three datasets: summer, autumn, and summer-autumn. By combining the particle swarm optimization (PSO) algorithm and the land-surface process model SHAW (Simultaneous Heat and Water), the soil and vegetation parameters that are related to the soil moisture but difficult to obtain by observations are optimized using three datasets. On this basis, the SHAW model was run with the optimized parameters to simulate the characteristics of the land-surface process in the semi-arid loess plateau. Simultaneously, the default SHAW model was run with the same atmospheric forcing as a comparison test. Simulation results revealed the following: parameters optimized by the particle swarm optimization algorithm in all simulation tests improved simulations of the soil moisture and latent heat flux; differences between simulated results and observational data are clearly reduced, but simulation tests involving the adoption of optimized parameters cannot simultaneously improve the simulation results for the net radiation, sensible heat flux, and soil temperature. Optimized soil and vegetation parameters based on different datasets have the same order of magnitude but are not identical; soil parameters only vary to a small degree, but the variation range of vegetation parameters is large. PMID:26991786
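
    The calibration loop in the two records above is easy to sketch: particles move through the soil/vegetation parameter space so as to minimize the RMSE between the model's simulated soil moisture and observations. A two-parameter toy "model" stands in for SHAW below, and the PSO constants are common textbook defaults, not the paper's settings:

```python
# Bare-bones PSO calibration of a toy soil-moisture "model" (stand-in for SHAW).
import numpy as np

rng = np.random.default_rng(4)
t = np.linspace(0, 10, 200)

def model(theta):                  # toy drying curve: porosity * exp(-decay * t)
    porosity, decay = theta
    return porosity * np.exp(-decay * t)

obs = model([0.45, 0.25]) + rng.normal(0, 0.005, t.size)
rmse = lambda th: np.sqrt(np.mean((model(th) - obs) ** 2))

lo, hi = np.array([0.1, 0.01]), np.array([0.6, 1.0])
n = 20
x = rng.uniform(lo, hi, (n, 2))
v = np.zeros((n, 2))
pbest = x.copy()
pbest_f = np.array([rmse(p) for p in x])
g = pbest[pbest_f.argmin()].copy()

for _ in range(100):
    r1, r2 = rng.random((n, 2)), rng.random((n, 2))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)   # PSO velocity
    x = np.clip(x + v, lo, hi)
    f = np.array([rmse(p) for p in x])
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    g = pbest[pbest_f.argmin()].copy()

print("calibrated (porosity, decay):", g.round(3))
```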

  14. Delay Differential Model for Tumour-Immune Response with Chemoimmunotherapy and Optimal Control

    PubMed Central

    Rihan, F. A.; Abdelrahman, D. H.; Al-Maskari, F.; Ibrahim, F.; Abdeen, M. A.

    2014-01-01

    We present a delay differential model with optimal control that describes the interactions of the tumour cells and immune response cells with external therapy. The intracellular delay is incorporated into the model to justify the time required to stimulate the effector cells. The optimal control variables are incorporated to identify the best treatment strategy with minimum side effects by blocking the production of new tumour cells and keeping the number of normal cells above 75% of its carrying capacity. Existence of the optimal control pair and optimality system are established. Pontryagin's maximum principle is applicable to characterize the optimal controls. The model displays a tumour-free steady state and up to three coexisting steady states. The numerical results show that the optimal treatment strategies reduce the tumour cells load and increase the effector cells after a few days of therapy. The performance of combination therapy protocol of immunochemotherapy is better than the standard protocol of chemotherapy alone. PMID:25197319

  15. A method for generating numerical pilot opinion ratings using the optimal pilot model

    NASA Technical Reports Server (NTRS)

    Hess, R. A.

    1976-01-01

    A method for generating numerical pilot opinion ratings using the optimal pilot model is introduced. The method is contained in a rating hypothesis which states that the numerical rating which a human pilot assigns to a specific vehicle and task can be directly related to the numerical value of the index of performance resulting from the optimal pilot modeling procedure as applied to that vehicle and task. The hypothesis is tested using the data from four piloted simulations. The results indicate that the hypothesis is reasonable, but that the predictive capability of the method is a strong function of the accuracy of the pilot model itself. This accuracy is, in turn, dependent upon the parameters which define the optimal modeling problem. A procedure for specifying the parameters for the optimal pilot model in the absence of experimental data is suggested.

  16. Thermal modeling of grinding for process optimization and durability improvements

    NASA Astrophysics Data System (ADS)

    Hanna, Ihab M.

    Both thermal and mechanical aspects of the grinding process are investigated in detail in an effort to predict grinding induced residual stresses. An existing thermal model is used as a foundation for computing heat partitions and temperatures in surface grinding. By numerically processing data from IR temperature measurements of the grinding zone, characterizations are made of the grinding zone heat flux. It is concluded that the typical heat flux profile in the grinding zone is triangular in shape, supporting this often-used assumption found in the literature. Further analyses of the computed heat flux profiles have revealed that actual grinding zone contact lengths exceed geometric contact lengths by an average of 57% for the cases considered. By integrating the resulting heat flux profiles, workpiece energy partitions are computed for several cases of dry conventional grinding of hardened steel. The average workpiece energy partition for the cases considered was 37%. In an effort to more accurately predict grinding zone temperatures and heat fluxes, refinements are made to the existing thermal model. These include consideration of contact length extensions due to local elastic deformations, variations of the assumed contact area ratio as a function of grinding process parameters, consideration of coolant latent heat of vaporization and its effect on heat transfer beyond the coolant boiling point, and incorporation of coolant-workpiece convective heat flux effects outside the grinding zone. The result of the model refinements accounting for contact length extensions and process-dependent contact area ratios is excellent agreement with IR temperature measurements over a wide range of grinding conditions. By accounting for latent heat of vaporization effects, grinding zone temperature profiles are shown to be capable of reproducing measured profiles found in the literature for cases on the verge of thermal surge conditions. Computed peak grinding zone temperatures

  17. Using models for the optimization of hydrologic monitoring

    USGS Publications Warehouse

    Fienen, Michael N.; Hunt, Randall J.; Doherty, John E.; Reeves, Howard W.

    2011-01-01

    Hydrologists are often asked what kind of monitoring network can most effectively support science-based water-resources management decisions. Currently (2011), hydrologic monitoring locations often are selected by addressing observation gaps in the existing network or non-science issues such as site access. A model might then be calibrated to available data and applied to a prediction of interest (regardless of how well-suited that model is for the prediction). However, modeling tools are available that can inform which locations and types of data provide the most 'bang for the buck' for a specified prediction. Put another way, the hydrologist can determine which observation data most reduce the model uncertainty around a specified prediction. An advantage of such an approach is the maximization of limited monitoring resources because it focuses on the difference in prediction uncertainty with or without additional collection of field data. Data worth can be calculated either through the addition of new data or subtraction of existing information by reducing monitoring efforts (Beven, 1993). The latter generally is not widely requested as there is explicit recognition that the worth calculated is fundamentally dependent on the prediction specified. If a water manager needs a new prediction, the benefits of reducing the scope of a monitoring effort, based on an old prediction, may be erased by the loss of information important for the new prediction. This fact sheet focuses on the worth or value of new data collection by quantifying the reduction in prediction uncertainty achieved by adding a monitoring observation. This calculation of worth can be performed for multiple potential locations (and types) of observations, which then can be ranked for their effectiveness for reducing uncertainty around the specified prediction. This is implemented using a Bayesian approach with the PREDUNC utility in the parameter estimation software suite PEST (Doherty, 2010). The
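
    For linear(ized) models the calculation behind this ranking is compact: with prior parameter covariance C, an observation sensitivity row J with noise variance R, and a prediction sensitivity vector s, the prediction variance falls from s^T C s to s^T (C - C J^T (J C J^T + R)^-1 J C) s. The sketch below ranks candidate observations this way; the matrices are random placeholders for Jacobians that would come from a calibrated model, so this illustrates the idea behind PREDUNC rather than reproducing its code:

```python
# Linear-Bayes data-worth ranking: variance reduction of a prediction from
# assimilating each candidate observation alone. Matrices are placeholders.
import numpy as np

rng = np.random.default_rng(5)
n_par = 8
C = np.diag(rng.uniform(0.5, 2.0, n_par))      # prior parameter covariance
s = rng.normal(size=n_par)                     # prediction sensitivity vector
J_cand = rng.normal(size=(5, n_par))           # candidate observation sensitivities
r = 0.1                                        # observation error variance

def post_pred_var(J, R):
    """Posterior prediction variance after assimilating observations J."""
    G = C @ J.T @ np.linalg.inv(J @ C @ J.T + R)
    return s @ (C - G @ J @ C) @ s

prior = s @ C @ s
for i, j in enumerate(J_cand):
    post = post_pred_var(j[None, :], np.array([[r]]))
    print(f"candidate {i}: prediction-variance reduction = {prior - post:.3f}")
```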

  18. Environmental optimal control strategies based on plant canopy photosynthesis responses and greenhouse climate model

    NASA Astrophysics Data System (ADS)

    Deng, Lujuan; Xie, Songhe; Cui, Jiantao; Liu, Tao

    2006-11-01

    The essential goals of intelligent greenhouse environment optimal control are to enhance grower income and to save energy. Greenhouse environment control systems exhibit uncertainty, imprecision, nonlinearity, strong coupling, large inertia, and multiple time scales. Greenhouse environment optimal control is therefore not easy, and model-based optimal control methods are especially difficult. This work studied the optimal control problem of the plant environment in an intelligent greenhouse. A hierarchical greenhouse environment control system was constructed. In the first level, data measuring was carried out and the actuators were controlled. Optimal setting points of the climate-controlled variables in the greenhouse were calculated and chosen in the second level. Market analysis and planning were completed in the third level. The problem of the optimal setting points is discussed in this paper. Firstly, a model of plant canopy photosynthesis responses and a greenhouse climate model were constructed. Afterwards, drawing on the experience of planting experts, the daytime optimal goals were decided according to the principle of maximal photosynthesis rate, while at night, under conditions ensuring good plant growth, the optimal goals were decided by the principle of energy saving. The environment optimal control setting points were then computed by a genetic algorithm (GA). Comparing the optimal results with data recorded in the real system shows that the method is reasonable and can achieve energy saving and the maximal photosynthesis rate in an intelligent greenhouse.

  19. Identifying Ensembles of Signal Transduction Models using Pareto Optimal Ensemble Techniques (POETs)

    PubMed Central

    Song, Sang Ok; Chakrabarti, Anirikh; Varner, Jeffrey D.

    2010-01-01

    Mathematical modeling of complex gene expression programs is an emerging tool for understanding disease mechanisms. However, identification of large models sometimes requires training using qualitative, conflicting or even contradictory data sets. One strategy to address this challenge is to estimate experimentally constrained model ensembles using multiobjective optimization. In this study, we used Pareto Optimal Ensemble Techniques (POETs) to identify a family of proof-of-concept signal transduction models. POETs integrate Simulated Annealing (SA) with Pareto optimality to identify models near the optimal tradeoff surface between competing training objectives. We modeled a prototypical-signaling network using mass action kinetics within an ordinary differential equation (ODE) framework (64-ODEs in total). The true model was used to generate synthetic immunoblots from which the POET algorithm identified the 117 unknown model parameters. POET generated an ensemble of signaling models, which collectively exhibited population-like behavior. For example, scaled gene expression levels were approximately normally distributed over the ensemble following the addition of extracellular ligand. Also, the ensemble recovered robust and fragile features of the true model, despite significant parameter uncertainty. Taken together, these results suggest that experimentally constrained model ensembles could capture qualitatively important network features without exact parameter information. PMID:20665647

  1. Ensembles of signal transduction models using Pareto Optimal Ensemble Techniques (POETs).

    PubMed

    Song, Sang Ok; Chakrabarti, Anirikh; Varner, Jeffrey D

    2010-07-01

    Mathematical modeling of complex gene expression programs is an emerging tool for understanding disease mechanisms. However, identification of large models sometimes requires training using qualitative, conflicting or even contradictory data sets. One strategy to address this challenge is to estimate experimentally constrained model ensembles using multiobjective optimization. In this study, we used Pareto Optimal Ensemble Techniques (POETs) to identify a family of proof-of-concept signal transduction models. POETs integrate Simulated Annealing (SA) with Pareto optimality to identify models near the optimal tradeoff surface between competing training objectives. We modeled a prototypical-signaling network using mass-action kinetics within an ordinary differential equation (ODE) framework (64 ODEs in total). The true model was used to generate synthetic immunoblots from which the POET algorithm identified the 117 unknown model parameters. POET generated an ensemble of signaling models, which collectively exhibited population-like behavior. For example, scaled gene expression levels were approximately normally distributed over the ensemble following the addition of extracellular ligand. Also, the ensemble recovered robust and fragile features of the true model, despite significant parameter uncertainty. Taken together, these results suggest that experimentally constrained model ensembles could capture qualitatively important network features without exact parameter information.
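
    The mechanics of POETs can be sketched compactly: a simulated-annealing walk whose acceptance is based on Pareto rank against an archive of non-dominated parameter sets, with the archive itself becoming the ensemble. The two toy objectives below stand in for the competing training objectives; the real study identified 117 ODE parameters against synthetic immunoblots:

```python
# POETs-flavoured sketch: SA walk with Pareto-rank acceptance and an archive
# of non-dominated parameter vectors (the "ensemble"). Objectives are toys.
import numpy as np

rng = np.random.default_rng(6)
objectives = lambda p: np.array([(p[0] - 1) ** 2 + p[1] ** 2,
                                 p[0] ** 2 + (p[1] - 1) ** 2])   # competing fits

def dominates(g, f):
    return bool(np.all(g <= f) and np.any(g < f))

p = rng.normal(size=2)
archive, ensemble = [objectives(p)], [p]
T = 1.0
for _ in range(5000):
    q = p + rng.normal(0, 0.1, 2)
    fq = objectives(q)
    rank = sum(dominates(g, fq) for g in archive)        # members dominating q
    if rank == 0 or rng.random() < np.exp(-rank / T):    # SA acceptance on rank
        p = q
        if not any(dominates(g, fq) for g in archive):
            archive = [g for g in archive if not dominates(fq, g)] + [fq]
            ensemble.append(q)
    T *= 0.999

print("ensemble size:", len(ensemble))
```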

  2. Optimal cycling time trial position models: aerodynamics versus power output and metabolic energy.

    PubMed

    Fintelman, D M; Sterling, M; Hemida, H; Li, F-X

    2014-06-01

    The aerodynamic drag of a cyclist in time trial (TT) position is strongly influenced by the torso angle. While decreasing the torso angle reduces the drag, it limits the physiological functioning of the cyclist. Therefore, the aims of this study were to predict the optimal TT cycling position as a function of the cycling speed and to determine at which speed the aerodynamic power losses start to dominate. Two models were developed to determine the optimal torso angle: a 'Metabolic Energy Model' and a 'Power Output Model'. The Metabolic Energy Model minimised the required cycling energy expenditure, while the Power Output Model maximised the cyclists' power output. The input parameters were experimentally collected from 19 TT cyclists at different torso angle positions (0-24°). The results showed that for both models, the optimal torso angle depends strongly on the cycling speed, with decreasing torso angles at increasing speeds. The aerodynamic losses outweigh the power losses at cycling speeds above 46 km/h. However, a fully horizontal torso is not optimal. For speeds below 30 km/h, it is beneficial to ride in a more upright TT position. The two model outputs were not completely similar, due to the different model approaches. The Metabolic Energy Model could be applied for endurance events, while the Power Output Model is more suitable in sprinting or in variable conditions (wind, undulating course, etc.). It is suggested that despite some limitations, the models give valuable information about improving the cycling performance by optimising the TT cycling position.
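
    A toy version of the Power Output Model trade-off fits in a few lines: scan torso angles, trading a drag area that grows as the torso lifts against a deliverable power that falls as the hip angle closes. Both relations below are invented smooth curves purely for illustration; the study fitted them to measurements from the 19 riders:

```python
# Toy torso-angle optimization: maximize surplus power (deliverable - aero).
import numpy as np

angles = np.linspace(0, 24, 241)                  # torso angle (deg)
for v in (8.0, 11.0, 13.0):                       # riding speeds (m/s)
    CdA = 0.25 + 0.0025 * angles                  # made-up: drag area rises with angle
    P_aero = 0.5 * 1.2 * CdA * v ** 3             # power to overcome drag
    P_max = 380.0 - 0.15 * (24.0 - angles) ** 2   # made-up: power drops when very flat
    best = angles[np.argmax(P_max - P_aero)]
    print(f"{v * 3.6:4.1f} km/h -> optimal torso angle {best:4.1f} deg")
```

    Even with invented curves the qualitative result matches the abstract: the optimal angle falls as speed rises, because the cubic growth of aerodynamic power eventually dominates the physiological penalty.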

  3. Mathematical modeling and optimization of cellulase protein production using Trichoderma reesei RL-P37

    SciTech Connect

    Tholudur, A.; Ramirez, W.F.; McMillan, J.D.

    1999-07-01

    The enzyme cellulase, a multienzyme complex made up of several proteins, catalyzes the conversion of cellulose to glucose in an enzymatic hydrolysis-based biomass-to-ethanol process. Production of cellulase enzyme proteins in large quantities using the fungus Trichoderma reesei requires understanding the dynamics of growth and enzyme production. The method of neural network parameter function modeling, which combines the approximation capabilities of neural networks with fundamental process knowledge, is utilized to develop a mathematical model of this dynamic system. In addition, kinetic models are also developed. Laboratory data from bench-scale fermentations involving growth and protein production by T. reesei on lactose and xylose are used to estimate the parameters in these models. The relative performance of the various models and the results of optimizing these models on two different performance measures are presented. An approximately 33% lower root-mean-squared error (RMSE) in protein predictions and about 40% lower total RMSE are obtained with the neural network-based model; the RMSE in predicting optimal conditions for the two performance indices is about 67% and 40% lower, respectively, when compared with the kinetic models. Thus, both model predictions and optimization results from the neural network-based model are found to be closer to the experimental data than those of the kinetic models developed in this work. It is shown that the neural network parameter function modeling method can be useful as a macromodeling technique to rapidly develop dynamic models of a process.

  4. The application of temporal difference learning in optimal diet models.

    PubMed

    Teichmann, Jan; Broom, Mark; Alonso, Eduardo

    2014-01-01

    An experience-based aversive learning model of foraging behaviour in uncertain environments is presented. We use Q-learning as a model-free implementation of temporal difference learning, motivated by growing evidence for neural correlates in natural reinforcement settings. The predator has the choice of including an aposematic prey in its diet or foraging on alternative food sources. We show how the predator's foraging behaviour and energy intake depend on the toxicity of the defended prey and the presence of Batesian mimics. We introduce exploration of the action space as a precondition for successful aversion formation and show how it predicts foraging behaviour in the presence of conflicting rewards; this behaviour is conditionally suboptimal in a fixed environment but allows better adaptation in changing environments. PMID:24036204
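
    The learning rule itself is the standard temporal-difference update. A minimal Q-learning version of the foraging choice, with one state and two actions, and with Batesian mimics making the defended prey only sometimes toxic, looks as follows; the reward values and mimic frequency are invented, and the paper's model is richer than this:

```python
# Minimal Q-learning forager: attack the aposematic prey or take the
# alternative food. Rewards and rates are invented for illustration.
import numpy as np

rng = np.random.default_rng(7)
alpha, gamma, eps = 0.1, 0.9, 0.1      # learning rate, discount, exploration
Q = np.zeros(2)                        # Q[0] = attack defended prey, Q[1] = alternative
mimic_freq, toxin_cost, prey_gain, alt_gain = 0.3, -3.0, 2.0, 0.5

for _ in range(3000):
    a = rng.integers(2) if rng.random() < eps else int(np.argmax(Q))
    if a == 0:                         # attack pays off only against a mimic
        r = prey_gain if rng.random() < mimic_freq else prey_gain + toxin_cost
    else:
        r = alt_gain
    Q[a] += alpha * (r + gamma * Q.max() - Q[a])   # temporal-difference update

print("learned action values (attack, alternative):", Q.round(2))
```

    With these numbers the expected reward of attacking is negative, so a forager that explores enough learns the aversion; with eps = 0 it can lock in early, which is the exploration precondition the abstract highlights.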

  5. Optimization of a wearable pudendal nerve stimulator using computational models.

    PubMed

    Shiraz, Arsam N; Leaker, Brian; Demosthenous, Andreas

    2015-01-01

    After spinal cord injury, lower urinary tract functions may be disrupted. Trans-rectal stimulation of the pudendal nerve may enable patients to regain these functions via minimally invasive means. Using a finite element model of a wearable trans-rectal stimulator in the pelvic region, and a computational model of mammalian nerve fiber, various electrode configurations and the corresponding required current levels were studied. A configuration requiring considerably lower current level than previously reported was identified. For this configuration, the strength-duration curve was simulated and the effect of different stimulus waveforms on the required current was studied. In addition, the study examined whether a multi-electrode device could selectively activate different terminal branches of the pudendal nerve. PMID:26737021

  6. Manual of phosphoric acid fuel cell power plant optimization model and computer program

    NASA Technical Reports Server (NTRS)

    Lu, C. Y.; Alkasab, K. A.

    1984-01-01

    An optimized cost and performance model for a phosphoric acid fuel cell power plant system was derived and developed into a modular FORTRAN computer code. Cost, energy, mass, and electrochemical analyses were combined to develop a mathematical model for optimizing the steam-to-methane ratio in the reformer, the hydrogen utilization in the PAFC, and the number of plates per stack. The nonlinear programming code, COMPUTE, was used to solve this model, in which the method of a mixed penalty function combined with Hooke and Jeeves pattern search was chosen to evaluate this specific optimization problem.
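
    The named search strategy is classical and easy to sketch: Hooke-Jeeves alternates exploratory coordinate moves with a pattern move and shrinks the step when neither helps, while a penalty term folds the constraints into the objective. The toy cost and penalty below are invented and are not the COMPUTE code or the fuel-cell model:

```python
# Hooke-Jeeves pattern search on a toy penalized objective (illustrative).
import numpy as np

def penalized(x):
    cost = (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2
    return cost + 100.0 * max(0.0, 3.0 - x[0] - x[1]) ** 2   # penalty: x0 + x1 >= 3

def explore(f, x, h):                       # exploratory moves, one axis at a time
    x = x.copy()
    for i in range(x.size):
        for s in (h, -h):
            trial = x.copy()
            trial[i] += s
            if f(trial) < f(x):
                x = trial
                break
    return x

def hooke_jeeves(f, x0, h=0.5, tol=1e-6):
    base = np.asarray(x0, dtype=float)
    while h > tol:
        new = explore(f, base, h)
        if f(new) < f(base):
            cand = explore(f, new + (new - base), h)   # pattern move, then explore
            base = cand if f(cand) < f(new) else new
        else:
            h *= 0.5                                   # no improvement: shrink step
    return base

print("minimum near:", hooke_jeeves(penalized, [0.0, 0.0]).round(3))
```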

  7. Global stability and optimal control of an SIRS epidemic model on heterogeneous networks

    NASA Astrophysics Data System (ADS)

    Chen, Lijuan; Sun, Jitao

    2014-09-01

    In this paper, we consider an SIRS epidemic model with vaccination on heterogeneous networks. By constructing suitable Lyapunov functions, global stability of the disease-free equilibrium and the endemic equilibrium of the model is investigated. We also present, for the first time, a study of an optimally controlled SIRS epidemic model on complex networks. We show that an optimal control exists for the control problem. Finally, some examples are presented to show the global stability and the efficiency of this optimal control. These results can help in adopting pragmatic treatment strategies for diseases in structured populations.
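
    A degree-based mean-field version of such a model is short enough to sketch: one (S, I) pair per degree class, infection proportional to the degree times the probability that a random link points at an infective, plus recovery, waning immunity, and a simple uniform vaccination term. The rates and degree distribution below are illustrative, not the paper's system:

```python
# Degree-based mean-field SIRS with vaccination on a heterogeneous network.
import numpy as np
from scipy.integrate import solve_ivp

k = np.arange(1, 11)                        # degree classes 1..10
Pk = k ** -2.5 / np.sum(k ** -2.5)          # heavy-tailed degree distribution
lam, mu, delta, psi = 0.3, 1.0, 0.2, 0.05   # infection, recovery, waning, vaccination

def rhs(t, y):
    S, I = y[:10], y[10:]
    R = 1.0 - S - I
    theta = np.sum(k * Pk * I) / np.sum(k * Pk)   # link points to an infective
    dS = -lam * k * S * theta + delta * R - psi * S
    dI = lam * k * S * theta - mu * I
    return np.concatenate([dS, dI])

y0 = np.concatenate([np.full(10, 0.99), np.full(10, 0.01)])
sol = solve_ivp(rhs, (0, 200), y0)
I_end = sol.y[10:, -1]
print("endemic infected fraction:", round(float(np.sum(Pk * I_end)), 4))
```

    Lyapunov arguments of the kind the abstract mentions establish when such a system settles to the disease-free versus the endemic equilibrium; the optimal-control extension then makes the vaccination or treatment rate time-varying.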

  8. Study on modeling of multispectral emissivity and optimization algorithm.

    PubMed

    Yang, Chunling; Yu, Yong; Zhao, Dongyang; Zhao, Guoliang

    2006-01-01

    A target's spectral emissivity varies in complex ways, and obtaining a target's continuous spectral emissivity is a difficult problem that has yet to be solved well. In this letter, an activation-function-tunable neural network is established, and a multistep searching method that can be used to train the model is proposed. The proposed method can effectively calculate an object's continuous spectral emissivity from multispectral radiation information. It is a universal method, which can be used to realize on-line emissivity calibration.

  9. Simulation-Optimization Model for Seawater Intrusion Management at Pingtung Coastal Area, Taiwan

    NASA Astrophysics Data System (ADS)

    Huang, P. S.; Chiu, Y.

    2015-12-01

    In the 1970s, agriculture and aquaculture developed rapidly in the Pingtung coastal area of southern Taiwan. The groundwater aquifers were over-pumped, causing seawater intrusion. In order to remediate the contaminated groundwater and find the best strategies for groundwater usage, a management model that searches for optimal groundwater operational strategies is developed in this study. The objective function minimizes the total amount of injected water, and a set of constraints is applied to ensure that the groundwater levels and concentrations are satisfied. A three-dimensional density-dependent flow and transport simulation model, SEAWAT, developed by the U.S. Geological Survey, is selected to simulate the phenomenon of seawater intrusion. The simulation model is well calibrated against field measurements and is replaced by a surrogate model of trained artificial neural networks (ANNs) to reduce the computational time. The ANNs are embedded in the management model to link the simulation and optimization models, and the global optimizer of differential evolution (DE) is applied to solve the management model. The optimal results show that the fully trained ANNs can substitute for the original simulation model and greatly reduce computational time. Under appropriate settings of the objective function and constraints, DE can find the optimal injection rates at predefined barriers. The concentrations at the target locations decrease by more than 50 percent within the planning horizon of 20 years. Keywords: seawater intrusion, groundwater management, numerical model, artificial neural networks, differential evolution
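
    The surrogate-plus-global-optimizer pattern described here can be sketched directly: a cheap stand-in (below, an analytic function; in the study, trained ANNs replacing SEAWAT) is minimized by differential evolution, with a penalty enforcing the concentration target. The pretend surrogate, rates, and bounds are all invented:

```python
# Surrogate-based management sketch: minimize total injection subject to a
# salinity target, using SciPy's differential evolution. Numbers are invented.
import numpy as np
from scipy.optimize import differential_evolution

def surrogate_concentration(q):
    """Pretend ANN: salinity at a target well vs three injection rates."""
    return 35.0 * np.exp(-0.002 * (2.0 * q[0] + q[1] + 1.5 * q[2]))

def objective(q):
    penalty = 1e3 * max(0.0, surrogate_concentration(q) - 17.5) ** 2  # <= 50% target
    return np.sum(q) + penalty                 # minimize total injected water

bounds = [(0.0, 500.0)] * 3                    # rate limits per barrier well
res = differential_evolution(objective, bounds, seed=8, tol=1e-8)
print("optimal injection rates:", res.x.round(1), " total:", round(res.x.sum(), 1))
```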

  10. Vertical slot fishways: Mathematical modeling and optimal management

    NASA Astrophysics Data System (ADS)

    Alvarez-Vázquez, L. J.; Martínez, A.; Vázquez-Méndez, M. E.; Vilar, M. A.

    2008-09-01

    Fishways are the main type of hydraulic device currently used to facilitate the migration of fish past obstructions (dams, waterfalls, rapids,...) in rivers. In this paper we present a mathematical formulation of an optimal control problem related to the optimal management of a vertical slot fishway, where the state system is given by the shallow water equations, the control is the flux of inflow water, and the cost function reflects the need for rest areas for fish and for a water velocity suitable for fish leaping and swimming capabilities. We give a first-order optimality condition for characterizing the optimal solutions of this problem. From a numerical point of view, we use a characteristic-Galerkin method for solving the shallow water equations, and we use an optimization algorithm for the computation of the optimal control. Finally, we present numerical results obtained for the realistic case of a standard nine-pool fishway.

  11. Engineering models for merging wakes in wind farm optimization applications

    NASA Astrophysics Data System (ADS)

    Machefaux, E.; Larsen, G. C.; Murcia Leon, J. P.

    2015-06-01

    The present paper deals with the validation of four different engineering wake superposition approaches against detailed CFD simulations, covering different turbine interspacings, ambient turbulence intensities and mean wind speeds. The first engineering model is a simple linear superposition of wake deficits as applied in e.g. Fuga. The second approach is the square root of the sum of squares approach, which is applied in the widely used PARK program. The third approach, which is presently used with the Dynamic Wake Meandering (DWM) model, assumes the wake-affected downstream flow field to be determined by a superposition of the ambient flow field and the dominating wake among the contributions from all upstream turbines, at any spatial position and at any time. The last approach, newly developed by G.C. Larsen, is based on a parabolic type of approach which combines wake deficits successively. The study indicates that wake interaction depends strongly on the relative wake deficit magnitude, i.e. the deficit magnitude normalized with respect to the ambient mean wind speed, and that the dominant wake assumption within the DWM framework is the most accurate.

  13. CVXPY: A Python-Embedded Modeling Language for Convex Optimization

    PubMed Central

    Diamond, Steven; Boyd, Stephen

    2016-01-01

    CVXPY is a domain-specific language for convex optimization embedded in Python. It allows the user to express convex optimization problems in a natural syntax that follows the math, rather than in the restrictive standard form required by solvers. CVXPY makes it easy to combine convex optimization with high-level features of Python such as parallelism and object-oriented design. CVXPY is available at http://www.cvxpy.org/ under the GPL license, along with documentation and examples. PMID:27375369
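
    The natural syntax the abstract refers to looks like the following small constrained least-squares problem, written in the documented CVXPY idiom (variable, objective, constraints, problem, solve); the data here are random placeholders.

```python
# Constrained least squares in CVXPY: minimize ||Ax - b||^2 over the simplex.
import cvxpy as cp
import numpy as np

np.random.seed(0)
A, b = np.random.randn(20, 5), np.random.randn(20)

x = cp.Variable(5)
objective = cp.Minimize(cp.sum_squares(A @ x - b))
constraints = [x >= 0, cp.sum(x) == 1]  # nonnegativity + simplex constraint
prob = cp.Problem(objective, constraints)
prob.solve()
print(prob.value, x.value)
```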

  14. Web-Based Model Visualization Tools to Aid in Model Optimization and Uncertainty Analysis

    NASA Astrophysics Data System (ADS)

    Alder, J.; van Griensven, A.; Meixner, T.

    2003-12-01

    Individuals applying hydrologic models need quick, easy-to-use visualization tools to assess and understand model performance. We present here the Interactive Hydrologic Modeling (IHM) visualization toolbox. The IHM utilizes high-speed Internet access, the portability of the web and the increasing power of modern computers to provide an online toolbox for quick and easy visualization of model results. This visualization interface allows for the interpretation and analysis of Monte Carlo and batch model simulation results. Often a given project will generate thousands or even hundreds of thousands of simulations, and this large number creates a challenge for post-simulation analysis. IHM addresses this problem by loading all of the data into a database with a web interface that can dynamically generate graphs for the user according to their needs. IHM currently supports: a global sample statistics table (e.g. sum of squared errors, sum of absolute differences, etc.), a top-ten-simulations table and graphs, graphs of an individual simulation using time-step data, objective-based dotty plots, threshold-based parameter cumulative density function graphs (as used in the regional sensitivity analysis of Spear and Hornberger), and 2D error-surface graphs of the parameter space. IHM scales from the simplest bucket model to the largest set of Monte Carlo simulations with a multi-dimensional parameter and model output space. By using a web interface, IHM offers the user complete flexibility: users can work anywhere in the world on any operating system. IHM can be a time- and money-saving alternative to spending time producing graphs or conducting analyses that may not be informative, or to purchasing expensive proprietary software. IHM is a simple, free method of interpreting and analyzing batch model results, and is suitable for novice to expert hydrologic modelers.

  15. Adaptive surrogate model based multi-objective transfer trajectory optimization between different libration points

    NASA Astrophysics Data System (ADS)

    Peng, Haijun; Wang, Wei

    2016-10-01

    An adaptive surrogate-model-based multi-objective optimization strategy that combines the benefits of invariant manifolds and low-thrust control is proposed in this paper to develop, at low computational cost, transfer trajectories between libration orbits around the L1 and L2 libration points in the Sun-Earth system. A new structure for the multi-objective transfer trajectory optimization model is established that divides the transfer trajectory into several segments and assigns the dominant roles to invariant manifolds or low-thrust control in the different segments. To reduce the computational cost of multi-objective transfer trajectory optimization, an adaptive surrogate model based on a mixed sampling strategy is proposed. Numerical simulations show that the results obtained from the adaptive surrogate-based multi-objective optimization agree with those obtained using direct multi-objective optimization methods, while the computational workload of the surrogate-based approach is only approximately 10% of that of direct optimization. Furthermore, the generating efficiency of the Pareto points of the surrogate-based approach is approximately 8 times that of direct optimization. The proposed adaptive surrogate-based multi-objective optimization therefore provides clear advantages over direct multi-objective optimization methods.

  16. Combining multi-objective optimization and bayesian model averaging to calibrate forecast ensembles of soil hydraulic models

    SciTech Connect

    Vrugt, Jasper A; Wohling, Thomas

    2008-01-01

    Most studies in vadose zone hydrology use a single conceptual model for predictive inference and analysis. Focusing on the outcome of a single model is prone to statistical bias and underestimation of uncertainty. In this study, we combine multi-objective optimization and Bayesian Model Averaging (BMA) to generate forecast ensembles of soil hydraulic models. To illustrate our method, we use observed tensiometric pressure head data at three different depths in a layered vadose zone of volcanic origin in New Zealand. A set of seven different soil hydraulic models is calibrated using a multi-objective formulation with three different objective functions that each measure the mismatch between observed and predicted soil water pressure head at one specific depth. The Pareto solution space corresponding to these three objectives is estimated with AMALGAM and used to generate four different model ensembles. These ensembles are post-processed with BMA and used for predictive analysis and uncertainty estimation. Our most important conclusions for the vadose zone under consideration are: (1) the mean BMA forecast exhibits similar predictive capabilities as the best individual performing soil hydraulic model, (2) the size of the BMA uncertainty ranges increases with increasing depth and dryness in the soil profile, (3) the best performing ensemble corresponds to the compromise (or balanced) solution of the three-objective Pareto surface, and (4) the combined multi-objective optimization and BMA framework proposed in this paper is very useful for generating forecast ensembles of soil hydraulic models.
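
    A minimal sketch of the BMA combination step is shown below: member forecasts are averaged with likelihood-based weights. In full BMA the weights and variances are typically estimated by an EM algorithm; here, for brevity, the weights come from simple Gaussian likelihoods, and all data are hypothetical.

```python
# BMA post-processing sketch: weighted mean forecast from an ensemble.
import numpy as np

# predictions of 4 ensemble members at 5 time steps (hypothetical)
preds = np.array([[1.0, 1.2, 0.9, 1.1, 1.3],
                  [0.8, 1.0, 1.0, 1.2, 1.1],
                  [1.1, 1.3, 0.8, 1.0, 1.2],
                  [0.9, 1.1, 1.0, 1.1, 1.0]])
obs = np.array([1.0, 1.15, 0.95, 1.1, 1.2])

# member weights from (unnormalized) Gaussian likelihoods
sigma = 0.1
loglik = -0.5 * ((preds - obs) ** 2).sum(axis=1) / sigma**2
w = np.exp(loglik - loglik.max())
w /= w.sum()

bma_mean = w @ preds                    # weighted mean forecast
between = w @ (preds - bma_mean) ** 2   # between-member spread per step
print(bma_mean, between)
```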

  17. Autonomous Modelling of X-ray Spectra Using Robust Global Optimization Methods

    NASA Astrophysics Data System (ADS)

    Rogers, Adam; Safi-Harb, Samar; Fiege, Jason

    2015-08-01

    The standard approach to model fitting in X-ray astronomy is by means of local optimization methods. However, these local optimizers suffer from a number of problems, such as a tendency for the fit parameters to become trapped in local minima, and can require an involved process of detailed user intervention to guide them through the optimization process. In this work we introduce a general GUI-driven global optimization method for fitting models to X-ray data, written in MATLAB, which searches for optimal models with minimal user interaction. We directly interface with the commonly used XSPEC libraries to access the full complement of pre-existing spectral models that describe a wide range of physics appropriate for modelling astrophysical sources, including supernova remnants and compact objects. Our algorithm is powered by the Ferret genetic algorithm and Locust particle swarm optimizer from the Qubist Global Optimization Toolbox, which are robust at finding families of solutions and identifying degeneracies. This technique will be particularly instrumental for multi-parameter models and high-fidelity data. In this presentation, we provide details of the code and use our techniques to analyze X-ray data obtained from a variety of astrophysical sources.

  18. Continuously Optimized Reliable Energy (CORE) Microgrid: Models & Tools (Fact Sheet)

    SciTech Connect

    Not Available

    2013-07-01

    This brochure describes Continuously Optimized Reliable Energy (CORE), a trademarked process NREL employs to produce conceptual microgrid designs. This systems-based process enables designs to be optimized for economic value, energy surety, and sustainability. Capabilities NREL offers in support of microgrid design are explained.

  19. Review: Simulation-optimization models for the management and monitoring of coastal aquifers

    NASA Astrophysics Data System (ADS)

    Sreekanth, J.; Datta, Bithin

    2015-09-01

    The literature on the application of simulation-optimization approaches for management and monitoring of coastal aquifers is reviewed. Both sharp- and dispersive-interface modeling approaches have been applied in conjunction with optimization algorithms in the past to develop management solutions for saltwater intrusion. Simulation-optimization models based on the sharp-interface approximation are often based on the Ghyben-Herzberg relationship and provide an efficient framework for preliminary designs of saltwater-intrusion management schemes. Models based on dispersive-interface numerical models have wider applicability but are challenged by the computational burden involved when applied in the simulation-optimization framework. The use of surrogate models to substitute for the physically based model during optimization has been found to be successful in many cases. Scalability remains a challenge for the surrogate modeling approach, as the computational advantage accrued is traded off against the training time required for the surrogate models as the problem size increases. Few studies have attempted to solve stochastic coastal-aquifer management problems considering model prediction uncertainty. Approaches that have been reported in the wider groundwater management literature need to be extended and adapted to address the challenges posed by the stochastic coastal-aquifer management problem. Similarly, while abundant literature is available on simulation-optimization methods for the optimal design of groundwater monitoring networks, applications targeting coastal aquifer systems are rare. Methods to optimize compliance monitoring strategies for coastal aquifers need to be developed, considering the importance of monitoring feedback information in improving the management strategies.

  20. Optimization of global model composed of radial basis functions using the term-ranking approach

    SciTech Connect

    Cai, Peng; Tao, Chao Liu, Xiao-Jun

    2014-03-15

    A term-ranking method is put forward to optimize global models composed of radial basis functions and improve their predictability. The effectiveness of the proposed method is examined using numerical simulation and experimental data. Numerical simulations indicate that this method can significantly lengthen the prediction time and decrease the Bayesian information criterion of the model. The application to a real voice signal shows that the optimized global model can capture more of the predictable component in chaos-like voice data and simultaneously reduce the predictable component (periodic pitch) in the residual signal.

  1. Modeling hydrogen diffusion for solar cell passivation and process optimization

    NASA Astrophysics Data System (ADS)

    Zhang, Yi

    A diffusion model for hydrogen (H) in crystalline silicon was established which takes into account the charged-state conversion, junction field, mobile traps, and complex formation and dissociation at dopant and trap sites. Carrier exchange among the various charged species is a "fast" process compared to the diffusion process. A numerical method was developed to solve for the densities of the various charged species from Poisson's equation involving shallow-level dopants and one "negative-U" impurity, e.g., H. A time-domain implicit method was adopted in the finite-difference scheme to solve the fully coupled equations. Limiting versions of the model were applied to problems of interest to photovoltaics. A simplified trap-limited model was used to describe the low-temperature diffusion profiles, assuming process-induced traps, a constant bulk trap level, and trapping/detrapping mechanisms. The results of the simulation agreed with those obtained from experiments. The best fit yielded a low surface free H concentration, Cs (~10^14 cm^-3), from the high-temperature extrapolated diffusivity value. In the case of ion-beam hydrogenation, mobile traps needed to be considered. PAS analysis showed the existence of vacancy-type defects in implanted Si substrates. Simulation of hydrogen diffusion in a p-n junction was first attempted in this work. The order of magnitude of Cs (~10^14 cm^-3) was confirmed. Simulation results showed that the preferred charged state of H is H- (H+) on the n- (p-) side of the junction. An accumulation of the H- (H+) species on the n+ (p+) side of the n+-p (p+-n) junction was observed, which could retard diffusion in the junction. The diffusion of hydrogen through the heavily doped region in a junction is trap-limited. Several popular hydrogenation techniques were evaluated by means of modeling and experimental observations. In particular, PECVD followed by RTP hydrogenation was found to be a two-step process: PECVD deposition serves as a predeposition step of H

  2. Optimization of the high-shear wet granulation wetting process using fuzzy logic modeling.

    PubMed

    Belohlav, Zdenek; Brenkova, Lucie; Kalcikova, Jana; Hanika, Jiri; Durdil, Petr; Tomasek, Vaclav; Palatova, Marta

    2007-01-01

    A fuzzy model has been developed for the optimization of high-shear wet granulation wetting on a plant scale, as a function of the characteristics of the pharmaceutical active substance particles. The model, optimized on the basis of experimental data, involves a set of rules obtained from expert knowledge and full-scale process data. The skewness coefficient of the particle size distribution and the tapped density of the granulated mixture were chosen as the model input variables. The output of the fuzzy rule system is the optimal quantity of wetting liquid. In comparison with manufacturing practice, fuzzy modeling identified a very strong sensitivity of the optimal quantity of added wetting liquid to the size and shape of the active substance particles. PMID:17763139

  3. An optimal spacecraft scheduling model for the NASA deep space network

    NASA Technical Reports Server (NTRS)

    Webb, W. A.

    1985-01-01

    A computer model is described which uses mixed-integer linear programming to provide optimal DSN spacecraft schedules given a mission set and specified scheduling requirements. A solution technique is proposed which uses Benders' method and a heuristic starting algorithm.

  4. Ant colony optimization algorithm for continuous domains based on position distribution model of ant colony foraging.

    PubMed

    Liu, Liqiang; Dai, Yuntao; Gao, Jinyu

    2014-01-01

    The ant colony optimization algorithm for continuous domains is a major research direction for ant colony optimization. In this paper, we propose a position distribution model of ant colony foraging, based on an analysis of the relationship between the position distribution and the food source in the ant colony foraging process. We design a continuous-domain optimization algorithm based on this model and give the form of the solution for the algorithm, the distribution model of the pheromone, the update rules for the ant colony positions, and the treatment of constraint conditions. The algorithm's performance was tested on a set of unconstrained optimization test functions and a set of constrained optimization test functions, and the results were compared with those of other algorithms to verify the correctness and effectiveness of the proposed algorithm. PMID:24955402
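
    The sketch below illustrates the general idea of continuous-domain ACO in the spirit of the abstract: an archive of good solutions induces a Gaussian position distribution from which new candidates are sampled. This is a generic ACO-R-style illustration on a sphere test function, not the authors' exact update rules.

```python
# Continuous-domain ACO sketch: sample new positions from Gaussians
# centered on good archive members, keep the best of old and new.
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):  # test function to minimize
    return (x ** 2).sum(axis=-1)

dim, archive_size, ants, iters = 5, 10, 20, 200
archive = rng.uniform(-5, 5, (archive_size, dim))

for _ in range(iters):
    archive = archive[np.argsort(sphere(archive))]      # best first
    sigma = archive.std(axis=0) + 1e-12                 # archive spread
    centers = archive[rng.integers(0, archive_size // 2, ants)]
    candidates = centers + rng.normal(0.0, sigma, (ants, dim))
    pool = np.vstack([archive, candidates])
    archive = pool[np.argsort(sphere(pool))[:archive_size]]

print(sphere(archive[0]), archive[0])
```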

  6. Optimization of large-scale heterogeneous system-of-systems models.

    SciTech Connect

    Parekh, Ojas; Watson, Jean-Paul; Phillips, Cynthia Ann; Siirola, John; Swiler, Laura Painton; Hough, Patricia Diane; Lee, Herbert K. H.; Hart, William Eugene; Gray, Genetha Anne; Woodruff, David L.

    2012-01-01

    Decision makers increasingly rely on large-scale computational models to simulate and analyze complex man-made systems. For example, computational models of national infrastructures are being used to inform government policy, assess economic and national security risks, evaluate infrastructure interdependencies, and plan for the growth and evolution of infrastructure capabilities. A major challenge for decision makers is the analysis of national-scale models that are composed of interacting systems: effective integration of system models is difficult, there are many parameters to analyze in these systems, and fundamental modeling uncertainties complicate analysis. This project is developing optimization methods to effectively represent and analyze large-scale heterogeneous system of systems (HSoS) models, which have emerged as a promising approach for describing such complex man-made systems. These optimization methods enable decision makers to predict future system behavior, manage system risk, assess tradeoffs between system criteria, and identify critical modeling uncertainties.

  7. Robust Optimization Model and Algorithm for Railway Freight Center Location Problem in Uncertain Environment

    PubMed Central

    He, Shi-wei; Song, Rui; Sun, Yang; Li, Hao-dong

    2014-01-01

    The railway freight center location problem is an important issue in railway freight transport programming. This paper focuses on the railway freight center location problem in an uncertain environment. Because the expected-value model ignores the negative influence of disadvantageous scenarios, a robust optimization model is proposed that takes the expected cost and the deviation value of the scenarios as its objective. A cloud adaptive clonal selection algorithm (C-ACSA) is presented, combining an adaptive clonal selection algorithm with the Cloud Model to improve the convergence rate. The encoding design and the workflow of the algorithm are described. Results of an example demonstrate that the model and algorithm are effective. Compared with the expected-value case, the number of disadvantageous scenarios in the robust model falls from 163 to 21, which shows that the result of the robust model is more reliable. PMID:25435867

  8. Improving flood forecasting capability of physically based distributed hydrological models by parameter optimization

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Li, J.; Xu, H.

    2016-01-01

    Physically based distributed hydrological models (hereafter referred to as PBDHMs) divide the terrain of the whole catchment into a number of grid cells at fine resolution and assimilate different terrain data and precipitation to different cells. They are regarded as having the potential to improve catchment hydrological process simulation and prediction capability. In the early stage, physically based distributed hydrological models were assumed to derive their parameters from terrain properties directly, so that there was no need to calibrate model parameters. Unfortunately, the uncertainties associated with this derivation are very high, which has impacted their application in flood forecasting, so parameter optimization may still be necessary. This study has two main purposes: the first is to propose a parameter optimization method for physically based distributed hydrological models in catchment flood forecasting using the particle swarm optimization (PSO) algorithm, to test its competence and to improve its performance; the second is to explore the possibility of improving the capability of physically based distributed hydrological models in catchment flood forecasting by parameter optimization. In this paper, based on the scalar concept, a general framework for parameter optimization of PBDHMs for catchment flood forecasting is first proposed that can be used for all PBDHMs. Then, with the Liuxihe model, a physically based distributed hydrological model proposed for catchment flood forecasting, as the study model, the improved PSO algorithm is developed for parameter optimization of the Liuxihe model in catchment flood forecasting. The improvements include adoption of a linearly decreasing inertia-weight strategy to change the inertia weight, and an arccosine function strategy to adjust the acceleration coefficients. This method has been tested in two catchments in southern China with different sizes, and the results show
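
    A minimal sketch of PSO with the linearly decreasing inertia-weight strategy mentioned above is given below (the arccosine acceleration-coefficient adjustment is omitted for brevity); the objective function and bounds are hypothetical stand-ins for a hydrological model's calibration misfit.

```python
# PSO with linearly decreasing inertia weight on a stand-in objective.
import numpy as np

rng = np.random.default_rng(42)

def objective(x):  # stand-in for model-data misfit
    return ((x - 0.3) ** 2).sum(axis=-1)

n, dim, iters = 30, 4, 100
w_max, w_min, c1, c2 = 0.9, 0.4, 2.0, 2.0

pos = rng.uniform(0, 1, (n, dim))
vel = np.zeros((n, dim))
pbest, pbest_f = pos.copy(), objective(pos)
gbest = pbest[pbest_f.argmin()].copy()

for t in range(iters):
    w = w_max - (w_max - w_min) * t / (iters - 1)  # linear decrease
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0, 1)
    f = objective(pos)
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print(pbest_f.min(), gbest)
```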

  9. Optimization of Experimental Design for Estimating Groundwater Pumping Using Model Reduction

    NASA Astrophysics Data System (ADS)

    Ushijima, T.; Cheng, W.; Yeh, W. W.

    2012-12-01

    An optimal experimental design algorithm is developed to choose locations for a network of observation wells for estimating unknown groundwater pumping rates in a confined aquifer. The design problem can be expressed as an optimization problem which employs a maximal information criterion to choose among competing designs subject to specified design constraints. Because of the combinatorial search required, given a realistic, large-scale groundwater model the dimensionality of the optimal design problem becomes very large, and it can be difficult if not impossible to solve using mathematical programming techniques such as integer programming or the simplex method with relaxation. Global search techniques, such as genetic algorithms (GAs), can be used to solve this type of combinatorial optimization problem; however, because a GA requires an inordinately large number of calls to the groundwater model, this approach may still be infeasible for finding the optimal design in a realistic groundwater model. Proper Orthogonal Decomposition (POD) is therefore applied to the groundwater model to reduce the model space and thereby reduce the computational burden of solving the optimization problem. Results for a one-dimensional test case show identical results among the GA, integer programming, and an exhaustive search, demonstrating that the GA is a valid method for global optimum search with potential for solving large-scale optimal design problems. Additional results show that the algorithm using the GA with POD model reduction is several orders of magnitude faster, in the time required to find the optimal solution, than the same algorithm without POD model reduction. Application of the proposed methodology is being made to a large-scale, real-world groundwater problem.
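
    The POD step itself is compact. The sketch below builds a reduced basis from snapshot data via the SVD and truncates it by an energy criterion; the snapshot matrix here is synthetic, standing in for heads produced by a groundwater model.

```python
# POD sketch: SVD of a snapshot matrix, truncated to a reduced basis.
import numpy as np

rng = np.random.default_rng(7)

n_nodes, n_snapshots = 500, 40
# synthetic low-rank snapshots plus noise (stand-in for model states)
snapshots = rng.standard_normal((n_nodes, 3)) @ rng.standard_normal((3, n_snapshots))
snapshots += 0.01 * rng.standard_normal((n_nodes, n_snapshots))

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s ** 2) / np.sum(s ** 2)
r = int(np.searchsorted(energy, 0.999)) + 1  # modes for 99.9% energy
basis = U[:, :r]                             # reduced basis

# project a full-order state to reduced coordinates and reconstruct
x = snapshots[:, 0]
x_reduced = basis.T @ x
print(r, np.linalg.norm(x - basis @ x_reduced) / np.linalg.norm(x))
```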

  10. When the Optimal Is Not the Best: Parameter Estimation in Complex Biological Models

    PubMed Central

    Fernández Slezak, Diego; Suárez, Cecilia; Cecchi, Guillermo A.; Marshall, Guillermo; Stolovitzky, Gustavo

    2010-01-01

    Background The vast computational resources that became available during the past decade enabled the development and simulation of increasingly complex mathematical models of cancer growth. These models typically involve many free parameters whose determination is a substantial obstacle to model development. Direct measurement of biochemical parameters in vivo is often difficult and sometimes impracticable, while fitting them under data-poor conditions may result in biologically implausible values. Results We discuss different methodological approaches to estimate parameters in complex biological models. We make use of the high computational power of the Blue Gene technology to perform an extensive study of the parameter space in a model of avascular tumor growth. We explicitly show that the landscape of the cost function used to optimize the model to the data has a very rugged surface in parameter space. This cost function has many local minima with unrealistic solutions, including the global minimum corresponding to the best fit. Conclusions The case studied in this paper shows one example in which model parameters that optimally fit the data are not necessarily the best ones from a biological point of view. To avoid force-fitting a model to a dataset, we propose that the best model parameters should be found by choosing, among suboptimal parameters, those that match criteria other than the ones used to fit the model. We also conclude that the model, data and optimization approach form a new complex system and point to the need of a theory that addresses this problem more generally. PMID:21049094

  11. Effects of noise variance model on optimal feedback design and actuator placement

    NASA Technical Reports Server (NTRS)

    Ruan, Mifang; Choudhury, Ajit K.

    1994-01-01

    In optimal placement of actuators for stochastic systems, it is commonly assumed that the actuator noise variances are not related to the feedback matrix and the actuator locations. In this paper, we will discuss the limitation of that assumption and develop a more practical noise variance model. Various properties associated with optimal actuator placement under the assumption of this noise variance model are discovered through the analytical study of a second order system.

  12. Modeling the BOD of Danube River in Serbia using spatial, temporal, and input variables optimized artificial neural network models.

    PubMed

    Šiljić Tomić, Aleksandra N; Antanasijević, Davor Z; Ristić, Mirjana Đ; Perić-Grujić, Aleksandra A; Pocajt, Viktor V

    2016-05-01

    This paper describes the application of artificial neural network models for the prediction of biological oxygen demand (BOD) levels in the Danube River. Eighteen regularly monitored water quality parameters at 17 stations on the river stretch passing through Serbia were used as input variables. The optimization of the model was performed in three consecutive steps: firstly, the spatial influence of a monitoring station was examined; secondly, the monitoring period necessary to reach satisfactory performance was determined; and lastly, correlation analysis was applied to evaluate the relationship among water quality parameters. Root-mean-square error (RMSE) was used to evaluate model performance in the first two steps, whereas in the last step, multiple statistical indicators of performance were utilized. As a result, two optimized models were developed, a general regression neural network model (labeled GRNN-1) that covers the monitoring stations from the Danube inflow to the city of Novi Sad and a GRNN model (labeled GRNN-2) that covers the stations from the city of Novi Sad to the border with Romania. Both models demonstrated good agreement between the predicted and actually observed BOD values.
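
    A GRNN is structurally a Gaussian-kernel (Nadaraya-Watson) regression, so its one-pass prediction rule fits in a few lines. The sketch below uses synthetic data with 18 inputs, echoing the models above; the smoothing parameter and data are illustrative only.

```python
# GRNN prediction sketch: kernel-weighted average of training targets.
import numpy as np

rng = np.random.default_rng(3)
X_train = rng.uniform(0, 1, (50, 18))                     # 18 inputs
y_train = X_train[:, 0] + 0.1 * rng.standard_normal(50)   # toy target

def grnn_predict(X, X_train, y_train, sigma=0.2):
    # pairwise squared distances between query and training patterns
    d2 = ((X[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma ** 2))    # pattern-layer activations
    return (K @ y_train) / K.sum(axis=1)  # weighted-average output

X_test = rng.uniform(0, 1, (5, 18))
print(grnn_predict(X_test, X_train, y_train))
```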

  13. Parameterized data-driven fuzzy model based optimal control of a semi-batch reactor.

    PubMed

    Kamesh, Reddi; Rani, K Yamuna

    2016-09-01

    A parameterized data-driven fuzzy (PDDF) model structure is proposed for semi-batch processes, and its application for optimal control is illustrated. The orthonormally parameterized input trajectories, initial states and process parameters are the inputs to the model, which predicts the output trajectories in terms of Fourier coefficients. Fuzzy rules are formulated based on the signs of a linear data-driven model, while the defuzzification step incorporates a linear regression model to shift the domain from input to output domain. The fuzzy model is employed to formulate an optimal control problem for single rate as well as multi-rate systems. Simulation study on a multivariable semi-batch reactor system reveals that the proposed PDDF modeling approach is capable of capturing the nonlinear and time-varying behavior inherent in the semi-batch system fairly accurately, and the results of operating trajectory optimization using the proposed model are found to be comparable to the results obtained using the exact first principles model, and are also found to be comparable to or better than parameterized data-driven artificial neural network model based optimization results.

  14. Comparison of Various Optimization Methods for Calibration of Conceptual Rainfall-Runoff Models

    NASA Astrophysics Data System (ADS)

    Bhatt, Divya; Jain, Ashu

    2010-05-01

    Runoff forecasts are needed in many water resources activities, such as flood and drought management, irrigation practices, and water distribution systems. Runoff is generally forecasted using rainfall-runoff models driven by hydrologic data from the catchment. Computer-based hydrologic models have become popular with practicing hydrologists and water resources engineers for performing hydrologic forecasts and for managing water systems. The Rainfall-Runoff Library (RRL) is computer software developed by the Cooperative Research Centre for Catchment Hydrology (CRCCH), Australia. The RRL consists of five different conceptual rainfall-runoff models and has been in operation in many water resources applications in Australia. RRL is designed to simulate catchment runoff using daily rainfall and evapotranspiration data. In this paper, results are presented from an investigation of different optimization methods for the calibration of the conceptual rainfall-runoff models available in the RRL toolkit. Of the five conceptual models in the RRL toolkit, the AWBM (Australian Water Balance Model) has been employed. Seven optimization methods are investigated for the calibration of the AWBM: uniform random sampling, pattern search, multi-start pattern search, Rosenbrock search, Rosenbrock multi-start search, Shuffled Complex Evolution (SCE-UA) and the Genetic Algorithm (GA). Trial-and-error procedures were employed to arrive at the best values of the various parameters involved in each optimizer used to develop the AWBM. The results obtained from the best configuration of the AWBM are presented here for all optimization methods. The daily rainfall and runoff data used to develop all the models were derived from Bird Creek Basin, Oklahoma, USA. A wide range of error statistics have been used to evaluate the performance of all the models developed in this study. It has been found that

  15. Shape Optimization and Supremal Minimization Approaches in Landslides Modeling

    SciTech Connect

    Hassani, Riad Ionescu, Ioan R. Lachand-Robert, Thomas

    2005-10-15

    The steady-state unidirectional (anti-plane) flow of a Bingham fluid is considered. We take into account the inhomogeneous yield limit of the fluid, which is well adjusted to the description of landslides. The blocking property is analyzed and we introduce the safety factor, which is connected to two optimization problems stated in terms of velocities and stresses. Concerning the velocity analysis, the minimum problem in BV(Ω) is equivalent to a shape-optimization problem. The optimal set is the part of the land which slides whenever the loading parameter exceeds the safety factor. This is proved in the one-dimensional case and conjectured for the two-dimensional flow. For the stress-optimization problem we give a stream-function formulation in order to deduce a minimum problem in W^{1,∞}(Ω), and we prove the existence of a minimizer. An L^p(Ω) approximation technique is used to obtain a sequence of minimum problems for smooth functionals. We propose two numerical approaches following the two analyses presented above. First, we describe a numerical method to compute the safety factor through the equivalence with the shape-optimization problem. Then a finite-element approach and a Newton method are used to obtain a numerical scheme for the stress formulation. Some numerical results are given in order to compare the two methods. The shape-optimization method is sharp in detecting the sliding zones, but its convergence is very sensitive to the choice of the parameters. The stress-optimization method is more robust and gives precise safety factors, but the results cannot easily be compiled to obtain the sliding zone.

  16. Optimization Control of the Color-Coating Production Process for Model Uncertainty.

    PubMed

    He, Dakuo; Wang, Zhengsong; Yang, Le; Mao, Zhizhong

    2016-01-01

    Optimized control of the color-coating production process (CCPP) aims at reducing production costs and improving economic efficiency while meeting quality requirements. However, because optimization control of the CCPP is hampered by model uncertainty, a strategy that considers model uncertainty is proposed. Previous work has introduced a mechanistic model of CCPP based on process analysis to simulate the actual production process and generate process data. The partial least squares method is then applied to develop predictive models of film thickness and economic efficiency. To manage the model uncertainty, the robust optimization approach is introduced to improve the feasibility of the optimized solution. Iterative learning control is then utilized to further refine the model uncertainty. The constrained film thickness is transformed into one of the tracked targets to overcome the drawback that traditional iterative learning control cannot address constraints. The goal setting of economic efficiency is updated continuously according to the film thickness setting until this reaches its desired value. Finally, fuzzy parameter adjustment is adopted to ensure that the economic efficiency and film thickness converge rapidly to their optimized values under the constraint conditions. The effectiveness of the proposed optimization control strategy is validated by simulation results. PMID:27247563

  17. Locating monitoring wells in groundwater systems using embedded optimization and simulation models.

    PubMed

    Bashi-Azghadi, Seyyed Nasser; Kerachian, Reza

    2010-04-15

    In this paper, a new methodology is proposed for optimally locating monitoring wells in groundwater systems in order to identify an unknown pollution source using monitoring data. The methodology is comprised of two different single and multi-objective optimization models, a Monte Carlo analysis, MODFLOW, MT3D groundwater quantity and quality simulation models and a Probabilistic Support Vector Machine (PSVM). The single-objective optimization model, which uses the results of the Monte Carlo analysis and maximizes the reliability of contamination detection, provides the initial location of monitoring wells. The objective functions of the multi-objective optimization model are minimizing the monitoring cost, i.e. the number of monitoring wells, maximizing the reliability of contamination detection and maximizing the probability of detecting an unknown pollution source. The PSVMs are calibrated and verified using the results of the single-objective optimization model and the Monte Carlo analysis. Then, the PSVMs are linked with the multi-objective optimization model, which maximizes both the reliability of contamination detection and probability of detecting an unknown pollution source. To evaluate the efficiency and applicability of the proposed methodology, it is applied to Tehran Refinery in Iran.
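
    As a rough stand-in for the PSVM component, the sketch below trains a support vector classifier with calibrated probability outputs (scikit-learn's SVC with probability=True) on synthetic Monte-Carlo-style features; it is not the authors' formulation, only an illustration of probabilistic SVM classification.

```python
# Probabilistic SVM sketch: classify detection vs. non-detection and
# report class probabilities for candidate monitoring configurations.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(5)
X = rng.standard_normal((300, 6))                # synthetic features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # detected / not detected

psvm = SVC(kernel="rbf", probability=True).fit(X, y)
print(psvm.predict_proba(X[:3]))                 # detection probabilities
```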

  20. MOGO: Model-Oriented Global Optimization of Petascale Applications

    SciTech Connect

    Malony, Allen D.; Shende, Sameer S.

    2012-09-14

    The MOGO project was initiated in 2008 under the DOE Program Announcement for Software Development Tools for Improved Ease-of-Use on Petascale Systems (LAB 08-19). The MOGO team consisted of Oak Ridge National Lab, Argonne National Lab, and the University of Oregon. The overall goal of MOGO was to attack petascale performance analysis by developing a general framework in which empirical performance data could be efficiently and accurately compared with performance expectations at various levels of abstraction. This information could then be used to automatically identify and remediate performance problems. MOGO was based on performance models derived from application knowledge, performance experiments, and symbolic analysis. MOGO made a reasonable impact on existing DOE applications and systems. New tools and techniques were developed which, in turn, were used on important DOE applications on DOE LCF systems to show significant performance improvements.

  1. A comparison between gradient descent and stochastic approaches for parameter optimization of a sea ice model

    NASA Astrophysics Data System (ADS)

    Sumata, H.; Kauker, F.; Gerdes, R.; Köberle, C.; Karcher, M.

    2013-07-01

    Two types of optimization methods were applied to a parameter optimization problem in a coupled ocean-sea ice model of the Arctic, and applicability and efficiency of the respective methods were examined. One optimization utilizes a finite difference (FD) method based on a traditional gradient descent approach, while the other adopts a micro-genetic algorithm (μGA) as an example of a stochastic approach. The optimizations were performed by minimizing a cost function composed of model-data misfit of ice concentration, ice drift velocity and ice thickness. A series of optimizations were conducted that differ in the model formulation ("smoothed code" versus standard code) with respect to the FD method and in the population size and number of possibilities with respect to the μGA method. The FD method fails to estimate optimal parameters due to the ill-shaped nature of the cost function caused by the strong non-linearity of the system, whereas the genetic algorithms can effectively estimate near optimal parameters. The results of the study indicate that the sophisticated stochastic approach (μGA) is of practical use for parameter optimization of a coupled ocean-sea ice model with a medium-sized horizontal resolution of 50 km × 50 km as used in this study.
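
    For reference, the finite-difference ingredient of the first method can be sketched in a few lines: central differences approximate the cost gradient, which then drives a plain descent iteration. The cost function below is a smooth stand-in; as the abstract notes, the real sea-ice cost function is ill-shaped and defeats this approach.

```python
# Finite-difference gradient descent sketch on a stand-in cost function.
import numpy as np

def cost(p):  # stand-in for the model-data misfit
    return ((p - 1.0) ** 2).sum()

def fd_gradient(f, p, h=1e-5):
    g = np.zeros_like(p)
    for i in range(p.size):
        e = np.zeros_like(p)
        e[i] = h
        g[i] = (f(p + e) - f(p - e)) / (2 * h)  # central difference
    return g

p = np.array([3.0, -2.0, 0.5])
for _ in range(200):
    p -= 0.1 * fd_gradient(cost, p)  # plain gradient descent step
print(p, cost(p))
```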

  2. What's in a Grammar? Modeling Dominance and Optimization in Contact

    ERIC Educational Resources Information Center

    Sharma, Devyani

    2013-01-01

    Muysken's article is a timely call for us to seek deeper regularities in the bewildering diversity of language contact outcomes. His model provocatively suggests that most such outcomes can be subsumed under four speaker optimization strategies. I consider two aspects of the proposal here: the formalization in Optimality Theory (OT) and the…

  3. FinFET Doping; Material Science, Metrology, and Process Modeling Studies for Optimized Device Performance

    SciTech Connect

    Duffy, R.; Shayesteh, M.

    2011-01-07

    In this review paper the challenges that face doping optimization in 3-dimensional (3D) thin-body silicon devices will be discussed, within the context of materials science studies, metrology methodologies, and process modeling insight, ultimately leading to optimized device performance. The focus will be on ion implantation as the method of introducing dopants into the target material.

  4. Model-based treatment optimization of a novel VEGFR inhibitor

    PubMed Central

    Keizer, Ron J.; Gupta, Anubha; Shumaker, Robert; Beijnen, Jos H.; Schellens, Jan H. M.; Huitema, Alwin D. R.

    2012-01-01

    AIM To evaluate dosing and intervention strategies for the phase II programme of a VEGF receptor inhibitor using PK–PD modelling and simulation, with the aim of maximizing (i) the number of patients on treatment and (ii) the average dose level during treatment. METHODS A previously developed PK–PD model for lenvatinib (E7080) was updated and parameters were re-estimated (141 patients, once daily and twice daily regimens). Treatment with lenvatinib was simulated for 16 weeks, initiated at 25 mg once daily. Outcome measures included the number of patients on treatment and overall drug exposure. A hypertension intervention design proposed for phase II studies was evaluated, including antihypertensive treatment and dose de-escalation. Additionally, a within-patient dose escalation was investigated, titrating up to 50 mg once daily unless unacceptable toxicity occurred. RESULTS Using the proposed antihypertension intervention design, 82% of patients could remain on treatment, and the mean dose administered was 21.5 mg/day. The adverse event (AE) guided dose titration increased the average dose by 4.6 mg/day, while only marginally increasing the percentage of patients dropping out due to toxicity (from 18% to 20.8%). CONCLUSIONS The proposed hypertension intervention design is expected to be effective in maintaining patients on treatment with lenvatinib. The AE-guided dose titration with blood pressure as a biomarker yielded a higher overall dose level, without relevant increases in toxicity. Since increased exposure to lenvatinib seems correlated with increased treatment efficacy, the adaptive treatment design may thus be a valid approach to improve treatment outcome. PMID:22295876

  5. Conceptual modeling to optimize the haul and transfer of municipal solid waste.

    PubMed

    Komilis, D P

    2008-11-01

    Two conceptual mixed integer linear optimization models were developed to optimize the haul and transfer of municipal solid waste (MSW) prior to landfilling. One model is based on minimizing time (h/d), whilst the second model is based on minimizing total cost (euro/d). Both models aim to calculate the optimum pathway to haul MSW from source nodes (waste production nodes, such as urban centers or municipalities) to sink nodes (landfills) via intermediate nodes (waste transfer stations). The models are applicable provided that the locations of the source, intermediate and sink nodes are fixed. The basic input data are distances among nodes, average vehicle speeds, haul cost coefficients (in euro/ton km), equipment and facilities' operating and investment cost, labor cost and tipping fees. The time based optimization model is easier to develop, since it is based on readily available data (distances among nodes). It can be used in cases in which no transfer stations are included in the system. The cost optimization model is more reliable compared to the time model provided that accurate cost data are available. The cost optimization model can be a useful tool to optimally allocate waste transfer stations in a region and can aid a community to investigate the threshold distance to a landfill above which the construction of a transfer station becomes financially beneficial. A sensitivity analysis reveals that queue times at the landfill or at the waste transfer station are key input variables. In addition, the waste transfer station ownership and the initial cost data affect the optimum path. A case study at the Municipality of Athens is used to illustrate the presented models.
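
    A toy instance of the haul-and-transfer structure can be written as a linear program, as sketched below (the paper's models are mixed integer; this sketch drops the fixed transfer-station costs). The node layout, tonnages and unit costs are hypothetical.

```python
# Tiny min-cost haul/transfer LP: two sources, one transfer station, one
# landfill. Decision variables are tonnes/day on each arc.
import numpy as np
from scipy.optimize import linprog

# arcs: s1->T, s2->T, s1->L, s2->L, T->L
cost = np.array([2.0, 2.5, 6.0, 7.0, 1.5])  # haul cost (euro per tonne)

# equality constraints: waste produced at s1 and s2; flow balance at T
A_eq = np.array([
    [1, 0, 1, 0, 0],    # s1: all of source 1's waste leaves on its arcs
    [0, 1, 0, 1, 0],    # s2
    [1, 1, 0, 0, -1],   # transfer station: inflow = outflow
])
b_eq = np.array([120.0, 80.0, 0.0])  # tonnes/day produced, balance

res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 5)
print(res.x, res.fun)  # optimal arc flows and total cost (euro/day)
```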

  6. Model reduction using new optimal Routh approximant technique

    NASA Technical Reports Server (NTRS)

    Hwang, Chyi; Guo, Tong-Yi; Sheih, Leang-San

    1992-01-01

    An optimal Routh approximant of a single-input single-output dynamic system is a reduced-order transfer function of which the denominator is obtained by the Routh approximation method while the numerator is determined by minimizing a time-response integral-squared-error (ISE) criterion. In this paper, a new elegant approach is presented for obtaining the optimal Routh approximants for linear time-invariant continuous-time systems. The approach is based on the Routh canonical expansion, which is a finite-term orthogonal series of rational basis functions, and minimization of the ISE criterion. A procedure for combining the above approach with the bilinear transformation is also presented in order to obtain the optimal bilinear Routh approximants of linear time-invariant discrete-time systems. The proposed technique is simple in formulation and is amenable to practical implementation.

  7. Mixed integer model for optimizing equipment scheduling and overburden transport in a surface coal mining operation

    SciTech Connect

    Goodman, G.V.R.

    1987-01-01

    The lack of available techniques prompted the development of a mixed integer model to optimize the scheduling of equipment and the distribution of overburden in a typical mountaintop removal operation. In this format, a (0-1) integer model and a transportation model were constructed to determine the optimal equipment schedule and the optimal overburden distribution, respectively. To solve the mixed integer program, the model was partitioned into its binary and real-valued components. Each problem was successively solved and their values added to form estimates of the value of the mixed integer program. Optimal convergence was indicated when the difference between two successive estimates satisfied a pre-specified accuracy value. The performance of the mixed integer model was tested against actual field data to determine its practical applications. To provide the necessary input information, production data were obtained from a single-seam, mountaintop removal operation located in the Appalachian coal field. To analyze the resultant equipment schedule, the total idle time was calculated for each machine type and each lift location. The final overburden assignments were also analyzed by determining the distribution of spoil material for various overburden removal productivities. Subsequent validation of the mixed integer model was conducted in two distinct areas. The first dealt with changes in algorithmic data and their effects on the optimality of the model. The second area concerned variations in problem structure, specifically changes in problem size and in other user-supplied values such as equipment productivities or required reclamation.

  8. Parameter estimation and uncertainty quantification in a biogeochemical model using optimal experimental design methods

    NASA Astrophysics Data System (ADS)

    Reimer, Joscha; Piwonski, Jaroslaw; Slawig, Thomas

    2016-04-01

    The statistical significance of any model-data comparison strongly depends on the quality of the data used and on the criterion used to measure the model-to-data misfit. The statistical properties of the data (such as mean values, variances and covariances) should be taken into account by choosing an appropriate criterion, e.g., ordinary, weighted or generalized least squares. Moreover, the criterion can be restricted to regions or model quantities of special interest. This choice influences the quality of the model output (also for quantities that are not measured) and the results of a parameter estimation or optimization process. We have estimated the parameters of a three-dimensional, time-dependent marine biogeochemical model describing the phosphorus cycle in the ocean. For this purpose, we have developed a statistical model for measurements of phosphate and dissolved organic phosphorus. This statistical model includes variances and correlations varying with the time and location of the measurements. We compared the resulting estimates of model output and parameters for different criteria. Another question is whether (and which) further measurements would increase the model's quality. Using experimental design criteria, the information content of measurements can be quantified. This may refer to the uncertainty in unknown model parameters as well as to the uncertainty regarding which model is closer to reality. By a further optimization, optimal measurement properties such as locations, time instants and quantities to be measured can be identified. We have optimized such properties for additional measurements for the parameter estimation of the marine biogeochemical model. For this purpose, we have quantified the uncertainty in the optimal model parameters and in the model output itself with respect to the uncertainty in the measurement data, using the (Fisher) information matrix. Furthermore, we have calculated the uncertainty reduction by additional measurements depending on time

  9. Intelligent machining of rough components from optimized CAD models

    NASA Astrophysics Data System (ADS)

    Lewis, Geoff; Thompson, William

    1995-08-01

    This paper describes a technique for automatically generating NC machine programs from CAD images of a rough workpiece and an optimally positioned component. The paper briefly compares the generative and variant methods of automatic machine program development and then presents a technique based on the variant method, in which a reference machine program is transformed to machine the optimized component. The transformed machine program is examined to remove any redundant cutter motions and correct any invalid cutter motions. The research is part of a larger project on intelligent manufacturing systems and is being conducted at the CIM Centre, Swinburne University of Technology, Hawthorn, Australia.

  10. Comparison of Response Surface and Kriging Models for Multidisciplinary Design Optimization

    NASA Technical Reports Server (NTRS)

    Simpson, Timothy W.; Korte, John J.; Mauery, Timothy M.; Mistree, Farrokh

    1998-01-01

    In this paper, we compare and contrast the use of second-order response surface models and kriging models for approximating non-random, deterministic computer analyses. After reviewing the response surface method for constructing polynomial approximations, kriging is presented as an alternative approximation method for the design and analysis of computer experiments. Both methods are applied to the multidisciplinary design of an aerospike nozzle, which consists of a computational fluid dynamics model and a finite-element model. Error analysis of the response surface and kriging models is performed, along with a graphical comparison of the approximations, and four optimization problems are formulated and solved using both sets of approximation models. The second-order response surface models and the kriging models (using a constant underlying global model and a Gaussian correlation function) yield comparable results.
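
    The two approximation types can be contrasted in a few lines, as below: a quadratic response surface via polynomial regression, and a kriging-style Gaussian process with a constant trend and Gaussian (RBF) correlation, both fitted with scikit-learn on a synthetic two-variable response standing in for the expensive analyses.

```python
# Second-order response surface vs. kriging (Gaussian process) sketch.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (30, 2))          # 30 design points, 2 variables
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2   # deterministic response

# quadratic response surface: degree-2 polynomial regression
rsm = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
rsm.fit(X, y)

# kriging-style model: constant trend scaled by Gaussian correlation
krig = GaussianProcessRegressor(kernel=ConstantKernel() * RBF([1.0, 1.0]))
krig.fit(X, y)

X_new = rng.uniform(-1, 1, (5, 2))
print(rsm.predict(X_new))
print(krig.predict(X_new))
```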

  11. Stochastic approach to reconstruction of dynamical systems: optimal model selection criterion

    NASA Astrophysics Data System (ADS)

    Gavrilov, A.; Mukhin, D.; Loskutov, E. M.; Feigin, A. M.

    2011-12-01

    Most known observable systems are complex and high-dimensional, which does not allow exact long-term forecasts of their behavior. The stochastic approach to the reconstruction of such systems offers hope of describing the important qualitative features of their behavior in a low-dimensional way, while all other dynamics is modelled as stochastic disturbance. This report is devoted to the application of Bayesian evidence to optimal stochastic model selection when reconstructing the evolution operator of an observed system. The idea of Bayesian evidence is to find a compromise between the predictiveness of the model and the quality of its fit to the data. We represent the evolution operator of the investigated system in the form of a random dynamical system including deterministic and stochastic parts, both parameterized by an artificial neural network. We then use the Bayesian evidence criterion to estimate the optimal complexity of the model, i.e. both the number of parameters and the dimension corresponding to the most probable model given the data. We demonstrate on a number of model examples that a model with a non-uniformly distributed stochastic part (corresponding to non-Gaussian perturbations of the evolution operator) is optimal in the general case. Further, we show that a simple stochastic model can be the most preferred for reconstructing the evolution operator underlying complex observed dynamics even in the case of a deterministic high-dimensional system. The workability of the suggested approach for the modeling and prognosis of real measured geophysical dynamics is investigated.

  12. Coastal aquifer management under parameter uncertainty: Ensemble surrogate modeling based simulation-optimization

    NASA Astrophysics Data System (ADS)

    Janardhanan, S.; Datta, B.

    2011-12-01

    Surrogate models are widely used to develop computationally efficient simulation-optimization models for solving complex groundwater management problems. Artificial-intelligence-based models are most often used for this purpose, trained on predictor-predictand data obtained from a numerical simulation model. Most often this is implemented under the assumption that the parameters and boundary conditions used in the numerical simulation model are perfectly known. However, in most practical situations these values are uncertain, which limits the applicability of such approximation surrogates. In our study we develop a surrogate-model-based coupled simulation-optimization methodology for determining optimal pumping strategies for coastal aquifers under parameter uncertainty. An ensemble surrogate modeling approach is used together with multiple-realization optimization. The methodology is applied to a multi-objective coastal aquifer management problem with two conflicting objectives. The hydraulic conductivity and the aquifer recharge are treated as uncertain. The three-dimensional coupled flow and transport simulation model FEMWATER is used to simulate the aquifer responses for a number of scenarios corresponding to Latin hypercube samples of the pumping rates and the uncertain parameters, generating input-output patterns for training the surrogate models. Non-parametric bootstrap sampling of this original data set is used to generate multiple data sets which belong to different regions of the multi-dimensional decision and parameter space. These data sets are used to train and test multiple surrogate models based on genetic programming. The ensemble of surrogate models is then linked to a multi-objective genetic algorithm to solve the pumping optimization problem. Two conflicting objectives, viz. maximizing total pumping from beneficial wells and minimizing total pumping from barrier wells for hydraulic control of
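
    The ensemble-surrogate construction reduces to a simple loop, sketched below: bootstrap resamples of one input-output data set each train a surrogate, and the spread across surrogates signals predictive uncertainty. Small regression trees stand in for the genetic-programming surrogates, and all data are synthetic.

```python
# Bootstrap ensemble of surrogates: mean prediction plus ensemble spread.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(11)
X = rng.uniform(0, 1, (200, 3))   # pumping rates + uncertain parameters
y = X @ np.array([0.5, -1.0, 2.0]) + 0.05 * rng.standard_normal(200)

surrogates = []
for _ in range(20):
    idx = rng.integers(0, len(X), len(X))  # non-parametric bootstrap sample
    surrogates.append(DecisionTreeRegressor(max_depth=5).fit(X[idx], y[idx]))

X_new = rng.uniform(0, 1, (4, 3))
preds = np.array([s.predict(X_new) for s in surrogates])
print(preds.mean(axis=0), preds.std(axis=0))  # ensemble mean and spread
```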

  13. Modeling, analysis and optimization of cylindrical stiffened panels for reusable launch vehicle structures

    NASA Astrophysics Data System (ADS)

    Venkataraman, Satchithanandam

    The design of reusable launch vehicles is driven by the need for minimum weight structures. Preliminary design of reusable launch vehicles requires many optimizations to select among competing structural concepts. Accurate models and analysis methods are required for such structural optimizations. Model, analysis, and optimization complexities have to be compromised to meet constraints on design cycle time and computational resources. Stiffened panels used in reusable launch vehicle tanks exhibit complex buckling failure modes. Using detailed finite element models for buckling analysis is too expensive for optimization. Many approximate models and analysis methods have been developed for design of stiffened panels. This dissertation investigates the use of approximate models and analysis methods implemented in PANDA2 software for preliminary design of stiffened panels. PANDA2 is also used for a trade study to compare weight efficiencies of stiffened panel concepts for a liquid hydrogen tank of a reusable launch vehicle. Optimum weights of stiffened panels are obtained for different materials, constructions and stiffener geometry. The study investigates the influence of modeling and analysis choices in PANDA2 on optimum designs. Complex structures usually require finite element analysis models to capture the details of their response. Design of complex structures must account for failure modes that are both global and local in nature. Often, different analysis models or computer programs are employed to calculate global and local structural response. Integration of different analysis programs is cumbersome and computationally expensive. Response surface approximation provides a global polynomial approximation that filters numerical noise present in discretized analysis models. The computational costs are transferred from optimization to development of approximate models. Using this process, the analyst can create structural response models that can be used by

  14. Abstinence-Conflict Model: Toward an Optimal Animal Model for Screening Medications Promoting Drug Abstinence.

    PubMed

    Peck, J A

    2016-01-01

    Drug addiction is a significant health and societal problem for which there is no highly effective long-term behavioral or pharmacological treatment. A rising concern is the use of illegal opiates such as heroin and the misuse of legally available pain relievers, which have led to serious deleterious health effects and even death. Therefore, treatment strategies that prolong opiate abstinence should be the primary focus of opiate treatment. Further, because the factors that support abstinence in humans and laboratory animals are similar, several animal models of abstinence and relapse have been developed. Here, we review a few animal models of abstinence and relapse and evaluate their validity and utility in addressing the human behavior that leads to long-term drug abstinence. Then, a novel abstinence "conflict" model that more closely mimics human drug-seeking episodes by incorporating negative consequences for drug seeking (as are typical in humans, e.g., incarceration and job loss) while the drug remains readily available is discussed. Additionally, recent research investigating both cocaine and heroin seeking in rats using the animal conflict model is presented and the implications for heroin treatments are examined. Finally, it is argued that the use of animal abstinence/relapse models that more closely approximate human drug addiction, such as the abstinence-conflict model, could lead to a better understanding of the neurobiological and environmental factors that support long-term drug abstinence. In turn, this will lead to the development of more effective environmental and pharmacotherapeutic interventions to treat opiate addiction and addiction to other drugs of abuse. PMID:27055619

  15. Perceived and Implicit Ranking of Academic Journals: An Optimization Choice Model

    ERIC Educational Resources Information Center

    Xie, Frank Tian; Cai, Jane Z.; Pan, Yue

    2012-01-01

    A new system of ranking academic journals is proposed in this study and optimization choice model used to analyze data collected from 346 faculty members in a business discipline. The ranking model uses the aggregation of perceived, implicit sequencing of academic journals by academicians, therefore eliminating several key shortcomings of previous…

  16. The Model Optimization, Uncertainty, and SEnsitivity analysis (MOUSE) toolbox: overview and application

    Technology Transfer Automated Retrieval System (TEKTRAN)

    For several decades, optimization and sensitivity/uncertainty analysis of environmental models has been the subject of extensive research. Although much progress has been made and sophisticated methods developed, the growing complexity of environmental models to represent real-world systems makes it...

  17. Using genetic algorithm to solve a new multi-period stochastic optimization model

    NASA Astrophysics Data System (ADS)

    Zhang, Xin-Li; Zhang, Ke-Cun

    2009-09-01

    This paper presents a new asset allocation model based on the CVaR risk measure and transaction costs. Institutional investors manage their strategic asset mix over time to achieve favorable returns subject to various uncertainties, policy and legal constraints, and other requirements. One may use a multi-period portfolio optimization model in order to determine an optimal asset mix. Recently, an alternative stochastic programming model with simulated paths was proposed by Hibiki [N. Hibiki, A hybrid simulation/tree multi-period stochastic programming model for optimal asset allocation, in: H. Takahashi, (Ed.) The Japanese Association of Financial Econometrics and Engineering, JAFFE Journal (2001) 89-119 (in Japanese); N. Hibiki, A hybrid simulation/tree stochastic optimization model for dynamic asset allocation, in: B. Scherer (Ed.), Asset and Liability Management Tools: A Handbook for Best Practice, Risk Books, 2003, pp. 269-294], which was called a hybrid model. However, transaction costs were not considered in that paper. In this paper, we improve Hibiki's model in the following aspects: (1) the risk measure CVaR is introduced to control the wealth-loss risk while maximizing the expected utility; (2) typical market imperfections such as short-sale constraints and proportional transaction costs are considered simultaneously; (3) the application of a genetic algorithm to solve the resulting model is discussed in detail. Numerical results show the suitability and feasibility of our methodology.
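
    The CVaR measure introduced in item (1) is commonly estimated from simulated scenario losses; a minimal sketch of the empirical tail-mean estimator (synthetic losses, not the paper's data):

```python
# Empirical CVaR: expected loss in the worst (1 - alpha) tail of the
# scenario distribution, here checked against a standard normal.
import numpy as np

def cvar(losses, alpha=0.95):
    var = np.quantile(losses, alpha)          # Value-at-Risk threshold
    return losses[losses >= var].mean()       # mean of the tail beyond VaR

rng = np.random.default_rng(1)
losses = rng.normal(0.0, 1.0, size=100_000)   # stand-in portfolio losses
print(cvar(losses, alpha=0.95))               # ~2.06 for a standard normal
```

    In a genetic algorithm such as the one discussed in item (3), this estimate would enter the fitness function as a constraint or penalty term.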

  18. Wind Tunnel Management and Resource Optimization: A Systems Modeling Approach

    NASA Technical Reports Server (NTRS)

    Jacobs, Derya, A.; Aasen, Curtis A.

    2000-01-01

    Time, money, and personnel are becoming increasingly scarce resources within government agencies due to a reduction in funding and the desire to demonstrate responsible economic efficiency. The ability of an organization to plan and schedule resources effectively can provide the necessary leverage to improve productivity, provide continuous support to all projects, and ensure flexibility in a rapidly changing environment. Without adequate internal controls the organization is forced to rely on external support, waste precious resources, and risk an inefficient response to change. Management systems must be developed and applied that strive to maximize the utility of existing resources in order to achieve the goal of "faster, cheaper, better". An area of concern within NASA Langley Research Center was the scheduling, planning, and resource management of the Wind Tunnel Enterprise operations. Nine wind tunnels make up the Enterprise. Prior to this research, these wind tunnel groups did not employ a rigorous or standardized management planning system. In addition, each wind tunnel unit operated from a position of autonomy, with little coordination of clients, resources, or project control. For operating and planning purposes, each wind tunnel operating unit must balance inputs from a variety of sources. Although each unit is managed by individual Facility Operations groups, other stakeholders influence wind tunnel operations. These groups include, for example, the various researchers and clients who use the facility, the Facility System Engineering Division (FSED) tasked with wind tunnel repair and upgrade, the Langley Research Center (LaRC) Fabrication (FAB) group which fabricates repair parts and provides test model upkeep, the NASA and LaRC Strategic Plans, and unscheduled use of the facilities by important clients. Expanding these influences horizontally through nine wind tunnel operations and vertically along the NASA management structure greatly increases the

  19. Optimization of Evaporative Demand Models for Seasonal Drought Forecasting

    NASA Astrophysics Data System (ADS)

    McEvoy, D.; Huntington, J. L.; Hobbins, M.

    2015-12-01

    Providing reliable seasonal drought forecasts continues to pose a major challenge for scientists, end-users, and the water resources and agricultural communities. Precipitation (Prcp) forecasts beyond weather time scales are largely unreliable, so exploring new avenues to improve seasonal drought prediction is necessary to move towards applications and decision-making based on seasonal forecasts. A recent study has shown that evaporative demand (E0) anomaly forecasts from the Climate Forecast System Version 2 (CFSv2) are consistently more skillful than Prcp anomaly forecasts during drought events over CONUS, and E0 drought forecasts may be particularly useful during the growing season in the farming belts of the central and Midwestern CONUS. For that study, we used CFSv2 reforecasts to assess the skill of E0 and of its individual drivers (temperature, humidity, wind speed, and solar radiation), using the American Society of Civil Engineers (ASCE) Standardized Reference Evapotranspiration (ET0) Equation. Moderate skill was found in ET0, temperature, and humidity, with lesser skill in solar radiation, and no skill in wind. Therefore, forecasts of E0 based on models with no wind or solar radiation inputs may prove more skillful than the ASCE ET0. For this presentation we evaluate CFSv2 E0 reforecasts (1982-2009) from three different E0 models: (1) ASCE ET0; (2) Hargreaves and Samani (ET-HS), which is estimated from maximum and minimum temperature alone; and (3) Valiantzas (ET-V), a modified version of the Penman method for use when wind speed data are not available (or of poor quality), driven only by temperature, humidity, and solar radiation. The University of Idaho's gridded meteorological data (METDATA) were used as observations to evaluate CFSv2 and also to determine whether ET0, ET-HS, and ET-V identify similar historical drought periods. We focus specifically on CFSv2 lead times of one, two, and three months, and season one forecasts; which are
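
    The temperature-only ET-HS model mentioned above has a compact standard form; a sketch assuming extraterrestrial radiation Ra is supplied in mm/day of evaporation equivalent (not the authors' exact implementation):

```python
# Hargreaves-Samani reference evapotranspiration from daily temperature
# extremes only, as used for the ET-HS model in the abstract.
def et_hargreaves_samani(tmax_c, tmin_c, ra_mm_day):
    """ET0 (mm/day) from daily max/min temperature (deg C) and Ra (mm/day)."""
    tmean = 0.5 * (tmax_c + tmin_c)
    return 0.0023 * ra_mm_day * (tmean + 17.8) * (tmax_c - tmin_c) ** 0.5

print(et_hargreaves_samani(30.0, 15.0, 12.0))  # ~4.3 mm/day
```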

  20. Discussion of skill improvement in marine ecosystem dynamic models based on parameter optimization and skill assessment

    NASA Astrophysics Data System (ADS)

    Shen, Chengcheng; Shi, Honghua; Liu, Yongzhi; Li, Fen; Ding, Dewen

    2016-07-01

    Marine ecosystem dynamic models (MEDMs) are important tools for the simulation and prediction of marine ecosystems. This article summarizes the methods and strategies used for the improvement and assessment of MEDM skill, and it attempts to establish a technical framework to inspire further ideas concerning MEDM skill improvement. The skill of MEDMs can be improved by parameter optimization (PO), which is an important step in model calibration. An efficient approach to solve the problem of PO constrained by MEDMs is the global treatment of both sensitivity analysis and PO. Model validation is an essential step following PO, which validates the efficiency of model calibration by analyzing and estimating the goodness-of-fit of the optimized model. Additionally, by focusing on the degree of impact of various factors on model skill, model uncertainty analysis can supply model users with a quantitative assessment of model confidence. Research on MEDMs is ongoing; however, improvement in model skill still lacks global treatments and its assessment is not integrated. Thus, the predictive performance of MEDMs is not strong and model uncertainties lack quantitative descriptions, limiting their application. Therefore, a large number of case studies concerning model skill should be performed to promote the development of a scientific and normative technical framework for the improvement of MEDM skill.

  1. An approach to the multi-axis problem in manual control. [optimal pilot model

    NASA Technical Reports Server (NTRS)

    Harrington, W. W.

    1977-01-01

    The multiaxis control problem is addressed within the context of the optimal pilot model. The problem is developed to provide efficient adaptation of the optimal pilot model to complex aircraft systems and real world, multiaxis tasks. This is accomplished by establishing separability of the longitudinal and lateral control problems subject to the constraints of multiaxis attention and control allocation. Control solution adaptation to the constrained single axis attention allocations is provided by an optimal control frequency response algorithm. An algorithm is developed to solve the multiaxis control problem. The algorithm is then applied to an attitude hold task for a bare airframe fighter aircraft case with interesting multiaxis properties.

  2. Optimal harvesting of a stochastic delay logistic model with Lévy jumps

    NASA Astrophysics Data System (ADS)

    Qiu, Hong; Deng, Wenmin

    2016-10-01

    The optimal harvesting problem of a stochastic time delay logistic model with Lévy jumps is considered in this article. We first show that the model has a unique global positive solution and discuss the uniform boundedness of its pth moment with harvesting. Then we prove that the system is globally attractive and asymptotically stable in distribution under our assumptions. Furthermore, we obtain the existence of the optimal harvesting effort by the ergodic method, and then we give the explicit expression of the optimal harvesting policy and maximum yield.

  3. Models for optimal harvest with convex function of growth rate of a population

    SciTech Connect

    Lyashenko, O.I.

    1995-12-10

    Two models for growth of a population, which are described by a Cauchy problem for an ordinary differential equation with right-hand side depending on the population size and time, are investigated. The first model is time-discrete, i.e., the moments of harvest are fixed and discrete. The second model is time-continuous, i.e., a crop is harvested continuously in time. For autonomous systems, the second model is a particular case of the variational model for optimal control with constraints investigated in. However, the prerequisites and the method of investigation are somewhat different, for they are based on Lemma 1 presented below. In this paper, the existence and uniqueness theorem for the solution of the discrete and continuous problems of optimal harvest is proved, and the corresponding algorithms are presented. The results obtained are illustrated by a model for growth of the light-requiring green alga Chlorella.

  4. Determining the optimal planting density and land expectation value -- a numerical evaluation of decision model

    SciTech Connect

    Gong, P. . Dept. of Forest Economics)

    1998-08-01

    Different decision models can be constructed and used to analyze a regeneration decision in even-aged stand management. However, the optimal decision and management outcomes determined in an analysis may depend on the decision model used in the analysis. This paper examines the proper choice of decision model for determining the optimal planting density and land expectation value (LEV) for a Scots pine (Pinus sylvestris L.) plantation in northern Sweden. First, a general adaptive decision model for determining the regeneration alternative that maximizes the LEV is presented. This model recognizes future stand state and timber price uncertainties by including multiple stand state and timber price scenarios, and assumes that the harvest decision in each future period will be made conditional on the observed stand state and timber prices. Alternative assumptions about future stand states, timber prices, and harvest decisions can be incorporated into this general decision model, resulting in several different decision models that can be used to analyze a specific regeneration problem. Next, the consequences of choosing different modeling assumptions are determined using the example Scots pine plantation problem. Numerical results show that the most important sources of uncertainty that affect the optimal planting density and LEV are variations of the optimal clearcut time due to short-term fluctuations of timber prices. It is appropriate to determine the optimal planting density and harvest policy using an adaptive decision model that recognizes uncertainty only in future timber prices. After the optimal decisions have been found, however, the LEV should be re-estimated by incorporating both future stand state and timber price uncertainties.
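
    For orientation, the deterministic baseline that such adaptive decision models generalize is the classical Faustmann land expectation value; a standard statement (not the paper's adaptive formulation):

```latex
% LEV for infinitely repeated rotations of length T, net harvest revenue
% R(T), regeneration (planting) cost c, and discount rate r:
\mathrm{LEV}(T) = \frac{R(T)\, e^{-rT} - c}{1 - e^{-rT}}
```

    Planting density enters through both the regeneration cost c and the harvest revenue R(T); the adaptive model replaces the fixed rotation age T with harvest decisions conditional on observed stand states and timber prices.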

  5. Two New PRP Conjugate Gradient Algorithms for Minimization Optimization Models

    PubMed Central

    Yuan, Gonglin; Duan, Xiabin; Liu, Wenjie; Wang, Xiaoliang; Cui, Zengru; Sheng, Zhou

    2015-01-01

    Two new PRP conjugate gradient algorithms are proposed in this paper based on two modified PRP conjugate gradient methods: the first algorithm is proposed for solving unconstrained optimization problems, and the second algorithm is proposed for solving nonlinear equations. The first method contains two aspects of information: function value and gradient value. The two methods both possess some good properties, as follows: (1) βk ≥ 0; (2) the search direction has the trust-region property without the use of any line search method; (3) the search direction has the sufficient descent property without the use of any line search method. Under some suitable conditions, we establish the global convergence of the two algorithms. We conduct numerical experiments to evaluate our algorithms. The numerical results indicate that the first algorithm is effective and competitive for solving unconstrained optimization problems and that the second algorithm is effective for solving large-scale nonlinear equations. PMID:26502409
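
    A generic PRP iteration with the βk ≥ 0 restriction (often written PRP+) can be sketched as follows; note that the backtracking Armijo line search here is a stand-in, since the paper's methods obtain their descent and trust-region properties without any line search:

```python
# Generic PRP+ conjugate gradient iteration (not the authors' exact method).
import numpy as np

def prp_plus(f, grad, x0, tol=1e-8, max_iter=500):
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                        # start with steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if g @ d >= 0:                            # safeguard: restart on non-descent
            d = -g
        t = 1.0                                   # backtracking Armijo line search
        while f(x + t * d) > f(x) + 1e-4 * t * (g @ d):
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))  # PRP+ restriction beta_k >= 0
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# Usage: minimize the Rosenbrock function.
f = lambda x: (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2
grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0] ** 2),
                           200 * (x[1] - x[0] ** 2)])
print(prp_plus(f, grad, [-1.2, 1.0]))             # converges near [1, 1]
```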

  6. CFD modeling could optimize sorbent injection system efficiency

    SciTech Connect

    Blankinship, S.

    2006-01-15

    Several technologies will probably be needed to remove mercury from coal-plant stack emissions as mandated by new mercury emission control legislation in the USA. One of the most promising mercury removal approaches is the injection of a sorbent, such as powdered activated carbon (PAC), to make it much more controllable. ADA-ES recently simulated field tests of sorbent injection at New England Power Company's Brayton Point Power Plant in Somerset, Mass., where activated carbon sorbent was injected using a set of eight lances upstream of the second of two electrostatic precipitators (ESPs). Consultants from Fluent created a computational model of the ductwork and injection lances. The simulation results showed that the flue gas flow was poorly distributed at the sorbent injection plane, and that a small region of reverse flow occurred, a result of the flow pattern at the exit of the first ESP. The results also illustrated that the flow was predominantly in the lower half of the duct, and affected by some upstream turning vanes. The simulations demonstrated the value of CFD as a diagnostic tool. They were performed in a fraction of the time and cost required for the physical tests yet provided far more diagnostic information, such as the distribution of mercury and sorbent at each point in the computational domain. 1 fig.

  7. The human operator in manual preview tracking /an experiment and its modeling via optimal control/

    NASA Technical Reports Server (NTRS)

    Tomizuka, M.; Whitney, D. E.

    1976-01-01

    A manual preview tracking experiment and its results are presented. The preview drastically improves tracking performance compared to zero-preview tracking. Optimal discrete finite preview control is applied to determine the structure of a mathematical model of the manual preview tracking experiment. Variable parameters in the model are adjusted to values consistent with published data in manual control. The model with the adjusted parameters is found to correlate well with the experimental results.

  8. Quantum Well Infrared Photodetectors (QWIPs) Optimization Based on Dark Current Models Evaluation

    NASA Astrophysics Data System (ADS)

    Favero, Priscila P.; Tanaka, Roberto Y.; Vieira, Gustavo S.; Muraro, Ademar; Abe, Nancy M.; Passaro, Angelo

    2011-12-01

    In this work we evaluate different dark current models applied to QWIPs. In order to avoid empirical parameters we calculate the sequential tunneling probability of the lowest energy state to reproduce the dark current properties. Using the same arguments, the photocurrent is replaced by the tunneling probability modulated by the optical matrix element. Considering the different tunneling models presented in the literature, we apply a genetic algorithm to evaluate the models' role in the selection of the optimized infrared sensor.

  9. Assessing FPAR Source and Parameter Optimization Scheme in Application of a Diagnostic Carbon Flux Model

    SciTech Connect

    Turner, D P; Ritts, W D; Wharton, S; Thomas, C; Monson, R; Black, T A

    2009-02-26

    The combination of satellite remote sensing and carbon cycle models provides an opportunity for regional to global scale monitoring of terrestrial gross primary production, ecosystem respiration, and net ecosystem production. FPAR (the fraction of photosynthetically active radiation absorbed by the plant canopy) is a critical input to diagnostic models; however, little is known about the relative effectiveness of FPAR products from different satellite sensors or about the sensitivity of flux estimates to different parameterization approaches. In this study, we used multiyear observations of carbon flux at four eddy covariance flux tower sites within the conifer biome to evaluate these factors. FPAR products from the MODIS and SeaWiFS sensors, and the effects of single-site vs. cross-site parameter optimization, were tested with the CFLUX model. The SeaWiFS FPAR product showed greater dynamic range across sites and resulted in slightly reduced flux estimation errors relative to the MODIS product when using cross-site optimization. With site-specific parameter optimization, the flux model was effective in capturing seasonal and interannual variation in the carbon fluxes at these sites. The cross-site prediction errors were lower when using parameters from a cross-site optimization compared to parameter sets from optimization at single sites. These results support the practice of multisite optimization within a biome for parameterization of diagnostic carbon flux models.

  10. Algorithms of D-optimal designs for Morgan Mercer Flodin (MMF) models with three parameters

    NASA Astrophysics Data System (ADS)

    Widiharih, Tatik; Haryatmi, Sri; Gunardi, Wilandari, Yuciana

    2016-02-01

    The Morgan Mercer Flodin (MMF) model is used in many areas including biological growth studies, animal husbandry, chemistry, finance, pharmacokinetics and pharmacodynamics. Locally D-optimal designs for Morgan Mercer Flodin (MMF) models with three parameters are investigated. We use the Generalized Equivalence Theorem of Kiefer and Wolfowitz to establish the D-optimality criterion. The number of roots of the standardized variance is determined using the Tchebysheff system concept and is used to establish that the design is a minimally supported design. In these models, the designs are minimally supported designs with uniform weights on their supports, and the upper bound of the design region is a support point.
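
    The D-optimality criterion being maximized can be evaluated numerically from the linearized information matrix; a hedged sketch for a hypothetical three-parameter MMF-type mean function (the exact parameterization in the paper may differ):

```python
# Evaluate log det M(xi) for a nonlinear mean function via the Fisher
# information matrix linearized at a local parameter guess theta.
import numpy as np

def mmf(x, theta):
    a, b, c = theta                        # hypothetical 3-parameter MMF-type form
    return (a * b + c * x**2) / (b + x**2)

def log_det_information(design_pts, weights, theta, h=1e-6):
    p = len(theta)
    M = np.zeros((p, p))
    for x, w in zip(design_pts, weights):
        g = np.empty(p)                    # d f / d theta by central differences
        for j in range(p):
            tp, tm = np.array(theta, float), np.array(theta, float)
            tp[j] += h; tm[j] -= h
            g[j] = (mmf(x, tp) - mmf(x, tm)) / (2 * h)
        M += w * np.outer(g, g)            # information matrix M(xi)
    return np.linalg.slogdet(M)[1]

# Minimally supported design: p support points with uniform weights 1/p.
print(log_det_information([0.5, 2.0, 6.0], [1/3] * 3, theta=(1.0, 4.0, 10.0)))
```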

  11. A MILP-Based Distribution Optimal Power Flow Model for Microgrid Operation

    SciTech Connect

    Liu, Guodong; Starke, Michael R; Zhang, Xiaohu; Tomsovic, Kevin

    2016-01-01

    This paper proposes a distribution optimal power flow (D-OPF) model for the operation of microgrids. The proposed model minimizes not only the operating cost, including fuel cost, purchasing cost and demand charge, but also several performance indices, including voltage deviation, network power loss and power factor. It co-optimizes the real and reactive power from distributed generators (DGs) and batteries considering their capacity and power factor limits. The D-OPF is formulated as a mixed-integer linear programming (MILP) problem. Numerical simulation results show the effectiveness of the proposed model.
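
    The MILP structure of such a D-OPF can be illustrated with a toy commitment-and-dispatch problem; all numbers are invented and the formulation is far simpler than the paper's (no reactive power, losses, or voltage indices). Requires SciPy >= 1.9 for scipy.optimize.milp:

```python
# Toy MILP: two dispatchable DGs with on/off binaries and minimum outputs,
# plus grid purchase, serving a fixed demand at minimum cost.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Variables: x = [p1, p2, pgrid, u1, u2]; u's are binary commitments.
cost = np.array([30.0, 45.0, 60.0, 5.0, 8.0])    # $/MWh marginal + fixed on-costs

demand = 10.0
power_balance = LinearConstraint([[1, 1, 1, 0, 0]], demand, demand)  # p1+p2+pgrid = demand

# Capacity coupling: Pmin*u <= p <= Pmax*u (Pmin = 1, Pmax = 6 for both DGs)
A_cap = [[ 1, 0, 0, -6,  0],     # p1 - 6 u1 <= 0
         [ 0, 1, 0,  0, -6],     # p2 - 6 u2 <= 0
         [-1, 0, 0,  1,  0],     # u1 - p1 <= 0
         [ 0,-1, 0,  0,  1]]     # u2 - p2 <= 0
capacity = LinearConstraint(A_cap, -np.inf, 0.0)

res = milp(c=cost,
           constraints=[power_balance, capacity],
           integrality=[0, 0, 0, 1, 1],          # last two variables are integer
           bounds=Bounds([0, 0, 0, 0, 0], [6, 6, 20, 1, 1]))
print(res.x)                                     # expected dispatch [6, 4, 0, 1, 1]
```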

  12. An integrative and practical evolutionary optimization for a complex, dynamic model of biological networks.

    PubMed

    Maeda, Kazuhiro; Fukano, Yuya; Yamamichi, Shunsuke; Nitta, Daichi; Kurata, Hiroyuki

    2011-05-01

    Computer simulation is an important technique for capturing the dynamics of biochemical networks. Numerical optimization is the key to estimating the values of kinetic parameters so that the dynamic model reproduces the behaviors of the existing experimental data. General strategies are required for the optimization of complex biochemical networks with a huge search space of parameters, under the condition that kinetic and quantitative data are hardly available. We propose an integrative and practical strategy for optimizing a complex dynamic model by using qualitative and incomplete experimental data. The key technologies are the divide and conquer method for reducing the search space, handling of multiple objective functions representing different types of biological behaviors, and design of rule-based objective functions that are suitable for qualitative and error-prone experimental data. This strategy is applied to optimizing a dynamic model of the yeast cell cycle to demonstrate its feasibility.

  13. An effective model for ergonomic optimization applied to a new automotive assembly line

    NASA Astrophysics Data System (ADS)

    Duraccio, Vincenzo; Elia, Valerio; Forcina, Antonio

    2016-06-01

    An efficient ergonomic optimization can lead to a significant improvement in production performance and a considerable reduction of costs. In the present paper a new model for ergonomic optimization is proposed. The new approach is based on the criteria defined by the National Institute for Occupational Safety and Health, adapted to Italian legislation. The proposed model provides an ergonomic optimization by analyzing the ergonomic relations of manual work performed under correct conditions. The model includes a schematic and systematic method for analyzing the operations, and identifies all possible ergonomic aspects to be evaluated. The proposed approach has been applied to an automotive assembly line, where the repeatability of operations makes optimization fundamental. The application clearly demonstrates the effectiveness of the new approach.

  14. Improving flash flood forecasting with distributed hydrological model by parameter optimization

    NASA Astrophysics Data System (ADS)

    Chen, Yangbo

    2016-04-01

    In China, a flash flood is usually regarded as a flood occurring in a small or medium-sized watershed with a drainage area of less than 200 km2, mainly induced by heavy rain, in areas where hydrological observations are lacking. Flash floods are widely observed in China and are the floods causing the most casualties nowadays in the country. Due to hydrological data scarcity, lumped hydrological models are difficult to employ for flash flood forecasting, as they require large amounts of observed hydrological data to calibrate model parameters. Physically based distributed hydrological models discretize the terrain of the whole watershed into a number of grid cells at fine resolution, assimilate different terrain data and precipitation to different cells, and derive model parameters from the terrain properties, thus having the potential to be used in flash flood forecasting and improving flash flood prediction capability. In this study, the Liuxihe Model, a physically based distributed hydrological model proposed mainly for watershed flood forecasting, is employed to simulate flash floods in the Ganzhou area in southeast China, and models have been set up for 5 watersheds. Model parameters have been derived from the terrain properties, including the DEM, soil type and land use type, but the results show that the flood simulation uncertainty is high, which may be caused by parameter uncertainty; some kind of uncertainty control is needed before the model can be used in real-time flash flood forecasting. Considering that many small and medium-sized watersheds in China have now set up hydrological observation networks, from which a few flood events can be collected, these data may be used for model parameter optimization. For this reason, an automatic model parameter optimization algorithm using Particle Swarm Optimization (PSO) is developed to optimize the model parameters, and it has been found that model parameters optimized with even only one observed flood event could largely reduce the flood
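
    A bare-bones global-best PSO of the kind used for such parameter calibration is sketched below; the objective is a synthetic stand-in for the error measure between simulated and observed floods:

```python
# Minimal global-best particle swarm optimizer for bounded parameters.
import numpy as np

def pso(objective, lb, ub, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    rng = np.random.default_rng(42)
    dim = len(lb)
    x = rng.uniform(lb, ub, size=(n_particles, dim))     # positions
    v = np.zeros_like(x)                                 # velocities
    pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lb, ub)                       # keep within parameter bounds
        f = np.array([objective(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

# Stand-in objective (sphere); a real use would run the hydrological model.
print(pso(lambda p: float(np.sum(p**2)), lb=np.array([-5.0]*4), ub=np.array([5.0]*4)))
```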

  15. A system-level cost-of-energy wind farm layout optimization with landowner modeling

    SciTech Connect

    Chen, Le; MacDonald, Erin

    2013-10-01

    This work applies an enhanced levelized wind farm cost model, including landowner remittance fees, to determine optimal turbine placements under three landowner participation scenarios and two land-plot shapes. Instead of assuming that a continuous piece of land is available for the wind farm construction, as in most layout optimizations, the problem formulation represents landowner participation scenarios as a binary string variable, along with the number of turbines. The cost parameters and model are a combination of models from the National Renewable Energy Laboratory (NREL), Lawrence Berkeley National Laboratory, and Windustry. The system-level cost-of-energy (COE) optimization model is also tested under two land-plot shapes: equally-sized square land plots and unequal rectangular land plots. The optimal COE results are compared to actual COE data and found to be realistic. The results show that landowner remittances account for approximately 10% of farm operating costs across all cases. Irregular land-plot shapes are easily handled by the model. We find that larger land plots do not necessarily receive higher remittance fees. The model can help site developers identify the most crucial land plots for project success and the optimal positions of turbines, with realistic estimates of costs and profitability.

  16. Modeling and optimization of the line-driver power consumption in xDSL systems

    NASA Astrophysics Data System (ADS)

    Wolkerstorfer, Martin; Trautmann, Steffen; Nordström, Tomas; Putra, Bakti D.

    2012-12-01

    Optimization of the power spectrum alleviates the crosstalk noise in digital subscriber lines (DSL) and thereby reduces their power consumption. In order to truly assess the DSL system power consumption, this article presents realistic line driver (LD) power consumption models. These are applicable to any DSL system and extend previous models by parameterizing various circuit-level non-idealities. Based on the model of a class-AB LD we analyze the multi-user power spectrum optimization problem and propose novel algorithms for its global or approximate solution. The simulation results thereby obtained support our claim that this problem can be simplified with negligible performance loss by neglecting the LD model. This motivates the usage of established spectral optimization algorithms, which are shown to significantly reduce the LD power consumption compared to static spectrum management.

  17. Modelling and optimal placement of piezoelectric actuators in isotropic plates using genetic algorithms

    NASA Astrophysics Data System (ADS)

    Sadri, A. M.; Wright, J. R.; Wynne, R. J.

    1999-08-01

    Theoretical modelling of the vibration of plate components of a space structure incorporating piezoelectric actuators is presented. The equations governing the dynamics of the plate, relating the strains in the piezoelectric elements to the strain induced in the system, are derived for isotropic plates using the Rayleigh-Ritz method. The developed model was applied to a simply supported plate. The results show that the model can predict the natural frequencies of the plate very accurately. Two criteria for the optimal placement of piezoelectric actuators are suggested, using modal controllability and the controllability Grammian. The model was then used to predict the closed-loop frequency response of the plate for active vibration control studies, with optimal locations of actuators successfully obtained using genetic algorithms. Significant vibration suppression was demonstrated using the optimal actuator placement algorithm developed.

  18. Modelling of Microalgae Culture Systems with Applications to Control and Optimization.

    PubMed

    Bernard, Olivier; Mairet, Francis; Chachuat, Benoît

    2016-01-01

    Mathematical modeling is becoming ever more important to assess the potential, guide the design, and enable the efficient operation and control of industrial-scale microalgae culture systems (MCS). The development of overall, inherently multiphysics, models involves coupling separate submodels of (i) the intrinsic biological properties, including growth, decay, and biosynthesis as well as the effect of light and temperature on these processes, and (ii) the physical properties, such as the hydrodynamics, light attenuation, and temperature in the culture medium. When considering high-density microalgae culture, in particular, the coupling between biology and physics becomes critical. This chapter reviews existing models, with a particular focus on the Droop model, which is a precursor model, and it highlights the structure common to many microalgae growth models. It summarizes the main developments and difficulties towards multiphysics models of MCS as well as applications of these models for monitoring, control, and optimization purposes.

  19. Optimal Complexity in Reservoir Modeling of an Eolian Sandstone for Carbon Sequestration Simulation

    NASA Astrophysics Data System (ADS)

    Li, S.; Zhang, Y.; Zhang, X.

    2011-12-01

    Geologic Carbon Sequestration (GCS) is a proposed means to reduce atmospheric concentrations of carbon dioxide (CO2). Given the type, abundance, and accessibility of geologic characterization data, different reservoir modeling techniques can be utilized to build a site model. However, petrophysical properties of a formation can be modeled with simplifying assumptions or with greater detail, the latter requiring sophisticated modeling techniques supported by additional data. In GCS, where the cost of data collection needs to be minimized, will detailed (expensive) reservoir modeling efforts lead to much improved model predictive capability? Is there an optimal level of detail in the reservoir model sufficient for prediction purposes? In Wyoming, GCS into the Nugget Sandstone is proposed. This formation is a deep (>13,000 ft) saline aquifer deposited in eolian environments, exhibiting permeability heterogeneity at multiple scales. Based on a set of characterization data, this study utilizes multiple, increasingly complex reservoir modeling techniques to create a suite of reservoir models including a multiscale, non-stationary heterogeneous model conditioned to a soft depositional model (i.e., training image), a geostatistical (stationary) facies model without conditioning, a geostatistical (stationary) petrophysical model ignoring facies, and finally, a homogeneous model ignoring all aspects of sub-aquifer heterogeneity. All models are built at regional scale with a high-resolution grid (245,133,140 cells) from which a set of local simulation models (448,000 grid cells) are extracted. These are considered alternative conceptual models with which pilot-scale CO2 injection is simulated (50 year duration at 1/10 Mt per year). A computationally efficient sensitivity analysis (SA) is conducted for all models based on a Plackett-Burman Design of Experiment metric. The SA systematically varies key parameters of the models (e.g., variogram structure and principal axes of intrinsic

  20. Interfacing MATLAB and Python Optimizers to Black-Box Environmental Simulation Models

    NASA Astrophysics Data System (ADS)

    Matott, L. S.; Leung, K.; Tolson, B.

    2009-12-01

    A common approach for utilizing environmental models in a management or policy-analysis context is to incorporate them into a simulation-optimization framework - where an underlying process-based environmental model is linked with an optimization search algorithm. The optimization search algorithm iteratively adjusts various model inputs (i.e. parameters or design variables) in order to minimize an application-specific objective function computed on the basis of model outputs (i.e. response variables). Numerous optimization algorithms have been applied to the simulation-optimization of environmental systems and this research investigated the use of optimization libraries and toolboxes that are readily available in MATLAB and Python - two popular high-level programming languages. Inspired by model-independent calibration codes (e.g. PEST and UCODE), a small piece of interface software (known as PIGEON) was developed. PIGEON allows users to interface Python and MATLAB optimizers with arbitrary black-box environmental models without writing any additional interface code. An initial set of benchmark tests (involving more than 20 MATLAB and Python optimization algorithms) were performed to validate the interface software - results highlight the need to carefully consider such issues as numerical precision in output files and enforcement (or not) of parameter limits. Additional benchmark testing considered the problem of fitting isotherm expressions to laboratory data - with an emphasis on dual-mode expressions combining non-linear isotherms with a linear partitioning component. With respect to the selected isotherm fitting problems, derivative-free search algorithms significantly outperformed gradient-based algorithms. Attempts to improve gradient-based performance, via parameter tuning and also via several alternative multi-start approaches, were largely unsuccessful.
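
    The model-independent interfacing pattern described here reduces, in essence, to wrapping the black-box executable in an objective function; a minimal sketch with hypothetical file and executable names:

```python
# The optimizer sees only a function that writes a parameter file, runs
# the black-box model, and reads the objective back from its output.
import subprocess
import numpy as np
from scipy.optimize import minimize

def run_model(params):
    np.savetxt("params.in", params)                              # write decision variables
    subprocess.run(["./my_env_model", "params.in"], check=True)  # hypothetical model binary
    return float(np.loadtxt("model.out"))                        # objective from model output

# Derivative-free search, consistent with the benchmark findings above
# (uncomment once a real model executable is in place):
# res = minimize(run_model, x0=np.array([1.0, 0.5]), method="Nelder-Mead")
```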

  1. Optimized Finite-Difference Coefficients for Hydroacoustic Modeling

    NASA Astrophysics Data System (ADS)

    Preston, L. A.

    2014-12-01

    Responsible utilization of marine renewable energy sources through the use of current energy converter (CEC) and wave energy converter (WEC) devices requires an understanding of the noise generation and propagation from these systems in the marine environment. Acoustic noise produced by rotating turbines, for example, could adversely affect marine animals and human-related marine activities if not properly understood and mitigated. We are utilizing a 3-D finite-difference acoustic simulation code developed at Sandia that can accurately propagate noise in the complex bathymetry in the near-shore to open ocean environment. As part of our efforts to improve computation efficiency in the large, high-resolution domains required in this project, we investigate the effects of using optimized finite-difference coefficients on the accuracy of the simulations. We compare accuracy and runtime of various finite-difference coefficients optimized via criteria such as maximum numerical phase speed error, maximum numerical group speed error, and L-1 and L-2 norms of weighted numerical group and phase speed errors over a given spectral bandwidth. We find that those coefficients optimized for L-1 and L-2 norms are superior in accuracy to those based on maximal error and can produce runtimes of 10% of the baseline case, which uses Taylor Series finite-difference coefficients at the Courant time step limit. We will present comparisons of the results for the various cases evaluated as well as recommendations for utilization of the cases studied. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
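
    The idea of bandwidth-optimized coefficients can be illustrated for a first-derivative stencil: choose the weights that minimize the L2 error of the numerical wavenumber over a target band, and compare against Taylor coefficients. This is only a sketch of the concept, not Sandia's implementation:

```python
# Least-squares optimal antisymmetric first-derivative stencil weights
# over a target wavenumber band, compared with Taylor coefficients.
import numpy as np

M, h = 2, 1.0                                   # half-stencil width, grid spacing
k = np.linspace(0.01, 0.85 * np.pi / h, 400)    # target band (fraction of Nyquist)

# Numerical wavenumber of the stencil: k_num = (2/h) * sum_m a_m sin(m k h)
A = np.stack([(2.0 / h) * np.sin(m * k * h) for m in range(1, M + 1)], axis=1)
a_opt, *_ = np.linalg.lstsq(A, k, rcond=None)   # minimize ||k_num - k||_2 over the band

a_taylor = np.array([2 / 3, -1 / 12])           # classical 4th-order coefficients
for name, a in [("taylor", a_taylor), ("optimized", a_opt)]:
    err = np.max(np.abs(A @ a - k) / k)         # worst relative phase-speed error
    print(name, a, f"max relative error {err:.2e}")
```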

  2. Computational wing optimization and comparisons with experiment for a semi-span wing model

    NASA Technical Reports Server (NTRS)

    Waggoner, E. G.; Haney, H. P.; Ballhaus, W. F.

    1978-01-01

    A computational wing optimization procedure was developed and verified by an experimental investigation of a semi-span variable camber wing model in the NASA Ames Research Center 14 foot transonic wind tunnel. The Bailey-Ballhaus transonic potential flow analysis and Woodward-Carmichael linear theory codes were linked to Vanderplaats constrained minimization routine to optimize model configurations at several subsonic and transonic design points. The 35 deg swept wing is characterized by multi-segmented leading and trailing edge flaps whose hinge lines are swept relative to the leading and trailing edges of the wing. By varying deflection angles of the flap segments, camber and twist distribution can be optimized for different design conditions. Results indicate that numerical optimization can be both an effective and efficient design tool. The optimized configurations had as good or better lift to drag ratios at the design points as the best designs previously tested during an extensive parametric study.

  3. Numerical computation of the optimal vector field: Exemplified by a fishery model

    PubMed Central

    Grass, D.

    2012-01-01

    Numerous optimal control models analyzed in economics are formulated as discounted infinite time horizon problems, where the defining functions are nonlinear in both the states and the controls. As a consequence, solutions can often only be found numerically. Moreover, the long-run optimal solutions are mostly limit sets such as equilibria or limit cycles. Using these specific solutions, a BVP approach together with a continuation technique is used to calculate the parameter-dependent dynamic structure of the optimal vector field. We use a one-dimensional optimal control model of a fishery to exemplify the numerical techniques. But these methods are applicable to a much wider class of optimal control problems with a moderate number of state and control variables. PMID:25505805

  4. Integrated optimal allocation model for complex adaptive system of water resources management (I): Methodologies

    NASA Astrophysics Data System (ADS)

    Zhou, Yanlai; Guo, Shenglian; Xu, Chong-Yu; Liu, Dedi; Chen, Lu; Ye, Yushi

    2015-12-01

    Due to the adaptive, dynamic and multi-objective characteristics of complex water resources systems, it is a considerable challenge to manage water resources in an efficient, equitable and sustainable way. An integrated optimal allocation model is proposed for a complex adaptive system of water resources management. The model consists of three modules: (1) an agent-based module for revealing the evolution mechanism of the complex adaptive system using agent-based, system dynamics and non-dominated sorting genetic algorithm II methods; (2) an optimization module for deriving the decision set of water resources allocation using a multi-objective genetic algorithm; and (3) a multi-objective evaluation module for evaluating the efficiency of the optimization module and selecting the optimal water resources allocation scheme using the projection pursuit method. This study provides a theoretical framework for adaptive allocation, dynamic allocation and multi-objective optimization for a complex adaptive system of water resources management.

  5. Optimization of biomass torrefaction conditions by the gain and loss method and regression model analysis.

    PubMed

    Lee, Soo Min; Lee, Jae-Won

    2014-11-01

    In this study, the optimal conditions for biomass torrefaction were determined by comparing the gain in energy content to the weight loss of biomass in the final products. Torrefaction experiments were performed at temperatures ranging from 220 to 280°C using 20-80 min reaction times. Polynomial regression models ranging from the 1st to the 3rd order were used to determine the relationship between the severity factor (SF) and calorific value or weight loss. The intersection of the two regression models for calorific value and weight loss was determined and taken as the optimized SF. The optimized SFs for the studied biomasses ranged from 6.056 to 6.372. Optimized torrefaction conditions were determined at reaction times of 15, 30, and 60 min. The average optimized temperature across the studied biomasses was 248.55°C when torrefaction was performed for 60 min.
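
    The intersection construction is straightforward to reproduce; a sketch with synthetic placeholder data in place of the measured calorific gains and weight losses:

```python
# Fit one polynomial to calorific gain vs. severity factor (SF) and one
# to weight loss vs. SF, then take their intersection as the optimized SF.
import numpy as np

sf = np.array([5.2, 5.6, 6.0, 6.4, 6.8])
gain = np.array([2.0, 4.5, 7.5, 9.0, 10.0])     # % energy-content gain (synthetic)
loss = np.array([25.0, 17.0, 11.0, 5.0, 2.5])   # % weight loss (synthetic)

p_gain = np.polyfit(sf, gain, 2)                # 2nd-order regression models
p_loss = np.polyfit(sf, loss, 2)

roots = np.roots(p_gain - p_loss)               # where the two curves intersect
real = roots[np.isreal(roots)].real
print(real[(real > sf.min()) & (real < sf.max())])   # optimized SF, ~6.2 here
```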

  6. Reduced-Order Model for Dynamic Optimization of Pressure Swing Adsorption

    SciTech Connect

    Agarwal, Anshul; Biegler, L.T.; Zitney, S.E.

    2007-11-01

    The last few decades have seen a considerable increase in the applications of adsorptive gas separation technologies, such as pressure swing adsorption (PSA). From an economic and environmental point of view, hydrogen separation and carbon dioxide capture from flue gas streams are the most promising applications of PSA. With extensive industrial applications, there is a significant interest for an efficient modeling, simulation, and optimization strategy. However, the design and optimization of the PSA processes have largely remained an experimental effort because of the complex nature of the mathematical models describing practical PSA processes. The separation processes are based on solid-gas equilibrium and operate under periodic transient conditions. Models for PSA processes are therefore multiple instances of partial differential equations (PDEs) in time and space with periodic boundary conditions that link the processing steps together and high nonlinearities arising from non-isothermal effects. The computational effort required to solve such systems is usually quite expensive and prohibitively time consuming. Besides this, stringent product specifications, required by many industrial processes, often lead to convergence failures of the optimizers. The solution of this coupled stiff PDE system is governed by steep concentrations and temperature fronts moving with time. As a result, the optimization of such systems for either design or operation represents a significant computational challenge to current differential algebraic equation (DAE) optimization techniques and nonlinear programming algorithms. Sophisticated optimization strategies have been developed and applied to PSA systems with significant improvement in the performance of the process. However, most of these approaches have been quite time consuming. This gives a strong motivation to develop cost-efficient and robust optimization strategies for PSA processes. Moreover, in case of flowsheet

  7. Optimization of an electromagnetic linear actuator using a network and a finite element model

    NASA Astrophysics Data System (ADS)

    Neubert, Holger; Kamusella, Alfred; Lienig, Jens

    2011-03-01

    Model based design optimization leads to robust solutions only if the statistical deviations of design, load and ambient parameters from nominal values are considered. We describe an optimization methodology that involves these deviations as stochastic variables for an exemplary electromagnetic actuator used to drive a Braille printer. A combined model simulates the dynamic behavior of the actuator and its non-linear load. It consists of a dynamic network model and a stationary magnetic finite element (FE) model. The network model utilizes lookup tables of the magnetic force and the flux linkage computed by the FE model. After a sensitivity analysis using design of experiment (DoE) methods and a nominal optimization based on gradient methods, a robust design optimization is performed. Selected design variables are involved in form of their density functions. In order to reduce the computational effort we use response surfaces instead of the combined system model obtained in all stochastic analysis steps. Thus, Monte-Carlo simulations can be applied. As a result we found an optimum system design meeting our requirements with regard to function and reliability.

  8. Optimization of a Two-Fluid Hydrodynamic Model of Churn-Turbulent Flow

    SciTech Connect

    Donna Post Guillen

    2009-07-01

    A hydrodynamic model of two-phase, churn-turbulent flows is being developed using the computational multiphase fluid dynamics (CMFD) code, NPHASE-CMFD. The numerical solutions obtained by this model are compared with experimental data obtained at the TOPFLOW facility of the Institute of Safety Research at the Forschungszentrum Dresden-Rossendorf. The TOPFLOW data is a high quality experimental database of upward, co-current air-water flows in a vertical pipe suitable for validation of computational fluid dynamics (CFD) codes. A five-field CMFD model was developed for the continuous liquid phase and four bubble size groups using mechanistic closure models for the ensemble-averaged Navier-Stokes equations. Mechanistic models for the drag and non-drag interfacial forces are implemented to include the governing physics to describe the hydrodynamic forces controlling the gas distribution. The closure models provide the functional form of the interfacial forces, with user defined coefficients to adjust the force magnitude. An optimization strategy was devised for these coefficients using commercial design optimization software. This paper demonstrates an approach to optimizing CMFD model parameters using a design optimization approach. Computed radial void fraction profiles predicted by the NPHASE-CMFD code are compared to experimental data for four bubble size groups.

  9. An Optimization Model for Plug-In Hybrid Electric Vehicles

    SciTech Connect

    Malikopoulos, Andreas; Smith, David E

    2011-01-01

    The necessity for environmentally conscious vehicle designs, in conjunction with increasing concerns regarding U.S. dependency on foreign oil and climate change, has induced significant investment towards enhancing the propulsion portfolio with new technologies. More recently, plug-in hybrid electric vehicles (PHEVs) have held great intuitive appeal and have attracted considerable attention. PHEVs have the potential to reduce petroleum consumption and greenhouse gas (GHG) emissions in the commercial transportation sector. They are especially appealing in situations where daily commuting covers a small number of miles with excessive stop-and-go driving. The research effort outlined in this paper aims to investigate the implications of motor/generator and battery size on fuel economy and GHG emissions in a medium-duty PHEV. An optimization framework is developed and applied to two different parallel powertrain configurations, i.e., pre-transmission and post-transmission, to derive the optimal design with respect to motor/generator and battery size. A comparison between the conventional and PHEV configurations with equivalent size and performance under the same driving conditions is conducted, allowing an assessment of the potential improvement in fuel economy and GHG emissions. The post-transmission parallel configuration yields higher fuel economy and lower GHG emissions than the pre-transmission configuration, partly attributable to the enhanced regenerative braking efficiency.

  10. Optimal Determination of Respiratory Airflow Patterns Using a Nonlinear Multicompartment Model for a Lung Mechanics System

    PubMed Central

    Li, Hancao; Haddad, Wassim M.

    2012-01-01

    We develop optimal respiratory airflow patterns using a nonlinear multicompartment model for a lung mechanics system. Specifically, we use classical calculus of variations minimization techniques to derive an optimal airflow pattern for inspiratory and expiratory breathing cycles. The physiological interpretation of the optimality criteria used involves the minimization of work of breathing and lung volume acceleration for the inspiratory phase, and the minimization of the elastic potential energy and rapid airflow rate changes for the expiratory phase. Finally, we numerically integrate the resulting nonlinear two-point boundary value problems to determine the optimal airflow patterns over the inspiratory and expiratory breathing cycles. PMID:22719793

  11. Optimality criteria-based topology optimization of a bi-material model for acoustic-structural coupled systems

    NASA Astrophysics Data System (ADS)

    Shang, Linyuan; Zhao, Guozhong

    2016-06-01

    This article investigates topology optimization of a bi-material model for acoustic-structural coupled systems. The design variables are volume fractions of inclusion material in a bi-material model constructed by the microstructure-based design domain method (MDDM). The design objective is the minimization of sound pressure level (SPL) in an interior acoustic medium. Sensitivities of SPL with respect to topological design variables are derived concretely by the adjoint method. A relaxed form of optimality criteria (OC) is developed for solving the acoustic-structural coupled optimization problem to find the optimum bi-material distribution. Based on OC and the adjoint method, a topology optimization method to deal with large calculations in acoustic-structural coupled problems is proposed. Numerical examples are given to illustrate the applications of topology optimization for a bi-material plate under a low single-frequency excitation and an aerospace structure under a low frequency-band excitation, and to prove the efficiency of the adjoint method and the relaxed form of OC.

  12. Bayesian geostatistical design: Task-driven optimal site investigation when the geostatistical model is uncertain

    NASA Astrophysics Data System (ADS)

    Nowak, W.; de Barros, F. P. J.; Rubin, Y.

    2010-03-01

    Geostatistical optimal design optimizes subsurface exploration for maximum information toward task-specific prediction goals. Until recently, most geostatistical design studies have assumed that the geostatistical description (i.e., the mean, trends, covariance models and their parameters) is given a priori. This contradicts, as emphasized by Rubin and Dagan (1987a), the fact that only few or even no data at all offer support for such assumptions prior to the bulk of exploration effort. We believe that geostatistical design should (1) avoid unjustified a priori assumptions on the geostatistical description, (2) instead reduce geostatistical model uncertainty as secondary design objective, (3) rate this secondary objective optimal for the overall prediction goal, and (4) be robust even under inaccurate geostatistical assumptions. Bayesian Geostatistical Design follows these guidelines by considering uncertain covariance model parameters. We transfer this concept from kriging-like applications to geostatistical inverse problems. We also deem it inappropriate to consider parametric uncertainty only within a single covariance model. The Matérn family of covariance functions has an additional shape parameter. Controlling model shape by a parameter converts covariance model selection to parameter identification and resembles Bayesian model averaging over a continuous spectrum of covariance models. This is appealing since it generalizes Bayesian model averaging from a finite number to an infinite number of models. We illustrate how our approach fulfills the above four guidelines in a series of synthetic test cases. The underlying scenarios are to minimize the prediction variance of (1) contaminant concentration or (2) arrival time at an ecologically sensitive location by optimal placement of hydraulic head and log conductivity measurements. Results highlight how both the impact of geostatistical model uncertainty and the sampling network design vary according to the
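
    The Matérn family with its shape parameter ν, central to the continuous model averaging described above, has the standard form sketched below (σ² is the variance, ρ the range; ν = 0.5 recovers the exponential model and ν → ∞ approaches the Gaussian):

```python
# Standard Matern covariance with shape parameter nu.
import numpy as np
from scipy.special import gamma, kv

def matern(h, sigma2=1.0, rho=1.0, nu=0.5):
    h = np.asarray(h, dtype=float)
    c = np.full_like(h, sigma2)                  # C(0) = sigma2
    pos = h > 0
    s = np.sqrt(2 * nu) * h[pos] / rho
    c[pos] = sigma2 * (2 ** (1 - nu) / gamma(nu)) * s ** nu * kv(nu, s)
    return c

h = np.linspace(0.0, 3.0, 7)
for nu in (0.5, 1.5, 10.0):                      # exponential -> nearly Gaussian
    print(nu, np.round(matern(h, nu=nu), 4))
```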

  13. An Improved Swarm Optimization for Parameter Estimation and Biological Model Selection

    PubMed Central

    Abdullah, Afnizanfaizal; Deris, Safaai; Mohamad, Mohd Saberi; Anwar, Sohail

    2013-01-01

    One of the key aspects of computational systems biology is the investigation of the dynamic biological processes within cells. Computational models are often required to elucidate the mechanisms and principles driving the processes because of their nonlinearity and complexity. The models usually incorporate a set of parameters that signify the physical properties of the actual biological systems. In most cases, these parameters are estimated by fitting the model outputs to the corresponding experimental data. However, this is a challenging task because the available experimental data are frequently noisy and incomplete. In this paper, a new hybrid optimization method is proposed to estimate these parameters from the noisy and incomplete experimental data. The proposed method, called Swarm-based Chemical Reaction Optimization, integrates the evolutionary search strategy of Chemical Reaction Optimization into the neighbourhood search strategy of the Firefly Algorithm. The effectiveness of the method was evaluated using a simulated nonlinear model and two biological models: synthetic transcriptional oscillators, and extracellular protease production models. The results showed that the accuracy and computational speed of the proposed method were better than those of the existing Differential Evolution, Firefly Algorithm and Chemical Reaction Optimization methods. The reliability of the estimated parameters was statistically validated, which suggests that the model outputs produced by these parameters were valid even when noisy and incomplete experimental data were used. Additionally, the Akaike Information Criterion was employed to evaluate the model selection, which highlighted the capability of the proposed method in choosing a plausible model based on the experimental data. In conclusion, this paper presents the effectiveness of the proposed method for parameter estimation and model selection problems using noisy and incomplete experimental data. This

  14. Modeling for deformable mirrors and the adaptive optics optimization program

    SciTech Connect

    Henesian, M.A.; Haney, S.W.; Trenholme, J.B.; Thomas, M.

    1997-03-18

    We discuss aspects of adaptive optics optimization for large fusion laser systems such as the 192-arm National Ignition Facility (NIF) at LLNL. By way of example, we considered the discrete actuator deformable mirror and Hartmann sensor system used on the Beamlet laser. Beamlet is a single-aperture prototype of the 11-0-5 slab amplifier design for NIF, and so we expect similar optical distortion levels and deformable mirror correction requirements. We are now in the process of developing a numerically efficient object oriented C++ language implementation of our adaptive optics and wavefront sensor code, but this code is not yet operational. Results are based instead on the prototype algorithms, coded-up in an interpreted array processing computer language.

  15. Optimal observation network design for conceptual model discrimination and uncertainty reduction

    NASA Astrophysics Data System (ADS)

    Pham, Hai V.; Tsai, Frank T.-C.

    2016-02-01

    This study expands the Box-Hill discrimination function to design an optimal observation network to discriminate conceptual models and, in turn, identify a most favored model. The Box-Hill discrimination function measures the expected decrease in Shannon entropy (for model identification) before and after the optimal design for one additional observation. This study modifies the discrimination function to account for multiple future observations that are assumed spatiotemporally independent and Gaussian-distributed. Bayesian model averaging (BMA) is used to incorporate existing observation data and quantify future observation uncertainty arising from conceptual and parametric uncertainties in the discrimination function. In addition, the BMA method is adopted to predict future observation data in a statistical sense. The design goal is to find the optimal locations and the least amount of additional data by maximizing the Box-Hill discrimination function value subject to a posterior model probability threshold. The optimal observation network design is illustrated using a groundwater study in Baton Rouge, Louisiana, to collect additional groundwater heads from USGS wells. The sources of uncertainty creating multiple groundwater models are geological architecture, boundary condition, and fault permeability architecture. Impacts of considering homoscedastic and heteroscedastic future observation data and the sources of uncertainties on potential observation areas are analyzed. Results show that heteroscedasticity should be considered in the design procedure to account for various sources of future observation uncertainty. After the optimal design is obtained and the corresponding data are collected for model updating, total variances of head predictions can be significantly reduced by identifying a model with a superior posterior model probability.

  16. Simulation and optimization of pressure swing adsorption systems using reduced-order modeling

    SciTech Connect

    Agarwal, A.; Biegler, L.; Zitney, S.

    2009-01-01

    Over the past three decades, pressure swing adsorption (PSA) processes have been widely used as energy-efficient gas separation techniques, especially for high purity hydrogen purification from refinery gases. Models for PSA processes are multiple instances of partial differential equations (PDEs) in time and space with periodic boundary conditions that link the processing steps together. The solution of this coupled stiff PDE system is governed by steep fronts moving with time. As a result, the optimization of such systems represents a significant computational challenge to current differential algebraic equation (DAE) optimization techniques and nonlinear programming algorithms. Model reduction is one approach to generate cost-efficient low-order models which can be used as surrogate models in the optimization problems. This study develops a reduced-order model (ROM) based on proper orthogonal decomposition (POD), which is a low-dimensional approximation to a dynamic PDE-based model. The proposed method leads to a DAE system of significantly lower order, thus replacing the one obtained from spatial discretization and making the optimization problem computationally efficient. The method has been applied to the dynamic coupled PDE-based model of a two-bed four-step PSA process for separation of hydrogen from methane. Separate ROMs have been developed for each operating step, with different POD modes for each of them. A significant reduction in the number of state variables has been achieved. The reduced-order model has been successfully used to maximize hydrogen recovery by manipulating operating pressures, step times and feed and regeneration velocities, while meeting product purity and tight bounds on these parameters. Current results indicate the proposed ROM methodology is a promising surrogate modeling technique for cost-effective optimization purposes.
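
    The core of a POD-based ROM is a truncated singular value decomposition of a snapshot matrix of simulated states. A minimal sketch of how such a basis is typically extracted (generic POD, not the authors' per-step PSA formulation; names are illustrative):

```python
import numpy as np

def pod_basis(snapshots, energy=0.9999):
    """POD basis from a snapshot matrix whose columns are spatial
    states at successive times. Keeps the smallest number of modes
    capturing the requested fraction of fluctuation energy."""
    mean = snapshots.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)
    cum = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(cum, energy)) + 1
    return mean, U[:, :r]

# Reduced coordinates and reconstruction for a full state x:
#   a = U_r.T @ (x - mean);   x_approx = mean + U_r @ a
# The PDE system is then Galerkin-projected onto the r retained modes.
```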

  17. Initial Conditions for Optimal Growth in a Coupled Ocean-Atmosphere Model of ENSO*.

    NASA Astrophysics Data System (ADS)

    Thompson, C. J.

    1998-02-01

    Several studies have examined the conditions in the equatorial Pacific basin that lead to the maximum growth over a fixed time period τ. These studies have the purpose of finding the characteristic precursor to an ENSO warm event, or more generally to explore error growth and predictability of the coupled ocean-atmosphere system. This paper develops a linearized version of the Battisti model (similar to the Zebiak-Cane model) with a time-invariant background state. The optimal initial conditions for time period τ (τ-optimals) were computed for a range of τ and for a selection of background states. A number of interesting characteristics of the τ-optimals emerged: 1) The τ-optimals grow more quickly than even the most unstable mode (the ENSO mode) of the system. 2) The τ-optimals develop quickly into the ENSO mode, in around 90 days. 3) The ENSO mode produced by a given τ-optimal does not in general peak at time τ. For τ less than 360 days the ENSO modes peak after time τ, and for τ greater than 360 days the ENSO mode first peaks before τ. At τ = 360 days, designated τ_max, the ENSO mode peaks at τ: this is also the τ-optimal which produces the most growth. 4) Optimals were produced that used the SST only (T-optimals) and that used only the ocean dynamics (r-optimals). It is shown that for τ greater than 60 days, these two optimals both produce ENSO modes (of the same phase). This result makes a comparison of the relative importance of the SST versus the ocean dynamics straightforward: a T-optimal pattern with a 0.1 degree anomaly produces the same size ENSO as an r-optimal pattern with a 1.2-m thermocline anomaly. 5) It is shown that the full optimal is the linear combination of these two suboptimals, where their relative sizes are determined by their relative weights (in the norm used). The paper also experiments with a neutral and a damped version of the model
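
    For a linear model dx/dt = A x with a time-invariant background state, the τ-optimal is the leading right singular vector of the propagator exp(Aτ). A toy sketch of that computation (a generic linear-systems illustration, not the Battisti model itself):

```python
import numpy as np
from scipy.linalg import expm, svd

def tau_optimal(A, tau):
    """Optimal initial condition for growth over lead time tau in the
    linear system dx/dt = A x: the leading right singular vector of
    the propagator exp(A * tau), with energy growth s[0]**2."""
    U, s, Vt = svd(expm(A * tau))
    return Vt[0], s[0] ** 2

# toy non-normal (stable) system: transient growth exceeds what the
# eigenvalues (-0.1, -0.2) alone would allow
A = np.array([[-0.1, 5.0], [0.0, -0.2]])
x0, growth = tau_optimal(A, tau=2.0)
print(x0, growth)
```

    The toy matrix is non-normal, so the leading singular value of the propagator exceeds the modal growth rate, mirroring the finding that τ-optimals outgrow even the most unstable mode.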

  18. Optimization of ultrasonic array inspections using an efficient hybrid model and real crack shapes

    SciTech Connect

    Felice, Maria V.; Velichko, Alexander; Wilcox, Paul D.; Barden, Tim; Dunhill, Tony

    2015-03-31

    Models which simulate the interaction of ultrasound with cracks can be used to optimize ultrasonic array inspections, but this approach can be time-consuming. To overcome this issue an efficient hybrid model is implemented which includes a finite element method that requires only a single layer of elements around the crack shape. Scattering Matrices are used to capture the scattering behavior of the individual cracks and a discussion on the angular degrees of freedom of elastodynamic scatterers is included. Real crack shapes are obtained from X-ray Computed Tomography images of cracked parts and these shapes are inputted into the hybrid model. The effect of using real crack shapes instead of straight notch shapes is demonstrated. An array optimization methodology which incorporates the hybrid model, an approximate single-scattering relative noise model and the real crack shapes is then described.

  19. Surrogate modelling and optimization using shape-preserving response prediction: A review

    NASA Astrophysics Data System (ADS)

    Leifsson, Leifur; Koziel, Slawomir

    2016-03-01

    Computer simulation models are ubiquitous in modern engineering design. In many cases, they are the only way to evaluate a given design with sufficient fidelity. Unfortunately, an added computational expense is associated with higher fidelity models. Moreover, the systems being considered are often highly nonlinear and may feature a large number of designable parameters. Therefore, it may be impractical to solve the design problem with conventional optimization algorithms. A promising approach to alleviate these difficulties is surrogate-based optimization (SBO). Among proven SBO techniques, the methods utilizing surrogates constructed from corrected physics-based low-fidelity models are, in many cases, the most efficient. This article reviews a particular technique of this type, namely, shape-preserving response prediction (SPRP), which works on the level of the model responses to correct the underlying low-fidelity models. The formulation and limitations of SPRP are discussed. Applications to several engineering design problems are provided.

  20. Cancer risk assessment: Optimizing human health through linear dose-response models.

    PubMed

    Calabrese, Edward J; Shamoun, Dima Yazji; Hanekamp, Jaap C

    2015-07-01

    This paper proposes that generic cancer risk assessments be based on the integration of the Linear Non-Threshold (LNT) and hormetic dose-responses, since optimal hormetic beneficial responses are estimated to occur at the dose associated with a 10^-4 risk level based on the use of an LNT model as applied to animal cancer studies. The adoption of the 10^-4 risk estimate provides a theoretical and practical integration of two competing risk assessment models whose predictions cannot be validated in human population studies or with standard chronic animal bioassay data. This model integration offers both substantial protection of the population from cancer effects (i.e. the functional utility of the LNT model) and the possibility of significant reductions in cancer incidence should the hormetic dose-response model predictions be correct. The dose yielding the 10^-4 cancer risk therefore yields the optimized toxicologically based "regulatory sweet spot". PMID:25916915

  1. Using ILOG OPL-CPLEX and ILOG Optimization Decision Manager (ODM) to Develop Better Models

    NASA Astrophysics Data System (ADS)

    2008-10-01

    This session will provide an in-depth overview of building state-of-the-art decision support applications and models. You will learn how to harness the full power of the ILOG OPL-CPLEX-ODM Development System (ODMS) to develop optimization models and decision support applications that solve complex problems ranging from near real-time scheduling to long-term strategic planning. We will demonstrate how to use ILOG's Optimization Programming Language (OPL) to quickly model problems solved by ILOG CPLEX, and how to use ILOG ODM to gain further insight about the model. By the end of the session, attendees will understand how to take advantage of the powerful combination of ILOG OPL (to describe an optimization model) and ILOG ODM (to understand the relationships between data, decision variables and constraints).

  2. An optimal hierarchical decision model for a regional logistics network with environmental impact consideration.

    PubMed

    Zhang, Dezhi; Li, Shuangyan; Qin, Jin

    2014-01-01

    This paper proposes a new model of simultaneous optimization of three-level logistics decisions, for logistics authorities, logistics operators, and logistics users, for a regional logistics network with environmental impact considerations. The proposed model addresses the interaction among the three logistics players in a completely competitive logistics service market with CO2 emission charges. We also explicitly incorporate the impacts of the scale economies of the logistics park and the logistics users' demand elasticity into the model. The logistics authorities aim to maximize the total social welfare of the system, addressing the demand for green logistics development by two different methods: optimal location of logistics nodes and charging a CO2 emission tax. Logistics operators are assumed to compete on logistics service fare and frequency, while logistics users minimize their own perceived logistics disutility given the operators' service fare and frequency. A heuristic algorithm based on the multinomial logit model is presented for the three-level decision model, and a numerical example is given to illustrate the above optimal model and its algorithm. The proposed model provides a useful tool for modeling competitive logistics services and evaluating logistics policies at the strategic level. PMID:24977209

  3. An Optimal Hierarchical Decision Model for a Regional Logistics Network with Environmental Impact Consideration

    PubMed Central

    Zhang, Dezhi; Li, Shuangyan

    2014-01-01

    This paper proposes a new model of simultaneous optimization of three-level logistics decisions, for logistics authorities, logistics operators, and logistics users, for a regional logistics network with environmental impact considerations. The proposed model addresses the interaction among the three logistics players in a completely competitive logistics service market with CO2 emission charges. We also explicitly incorporate the impacts of the scale economies of the logistics park and the logistics users' demand elasticity into the model. The logistics authorities aim to maximize the total social welfare of the system, addressing the demand for green logistics development by two different methods: optimal location of logistics nodes and charging a CO2 emission tax. Logistics operators are assumed to compete on logistics service fare and frequency, while logistics users minimize their own perceived logistics disutility given the operators' service fare and frequency. A heuristic algorithm based on the multinomial logit model is presented for the three-level decision model, and a numerical example is given to illustrate the above optimal model and its algorithm. The proposed model provides a useful tool for modeling competitive logistics services and evaluating logistics policies at the strategic level. PMID:24977209

  4. An optimal hierarchical decision model for a regional logistics network with environmental impact consideration.

    PubMed

    Zhang, Dezhi; Li, Shuangyan; Qin, Jin

    2014-01-01

    This paper proposes a new model of simultaneous optimization of three-level logistics decisions, for logistics authorities, logistics operators, and logistics users, for a regional logistics network with environmental impact considerations. The proposed model addresses the interaction among the three logistics players in a completely competitive logistics service market with CO2 emission charges. We also explicitly incorporate the impacts of the scale economies of the logistics park and the logistics users' demand elasticity into the model. The logistics authorities aim to maximize the total social welfare of the system, addressing the demand for green logistics development by two different methods: optimal location of logistics nodes and charging a CO2 emission tax. Logistics operators are assumed to compete on logistics service fare and frequency, while logistics users minimize their own perceived logistics disutility given the operators' service fare and frequency. A heuristic algorithm based on the multinomial logit model is presented for the three-level decision model, and a numerical example is given to illustrate the above optimal model and its algorithm. The proposed model provides a useful tool for modeling competitive logistics services and evaluating logistics policies at the strategic level.

  5. Stochastic modelling of turbulent combustion for design optimization of gas turbine combustors

    NASA Astrophysics Data System (ADS)

    Mehanna Ismail, Mohammed Ali

    The present work covers the development and the implementation of an efficient algorithm for the design optimization of gas turbine combustors. The purpose is to explore the possibilities and indicate constructive suggestions for optimization techniques as alternative methods for designing gas turbine combustors. The algorithm is general to the extent that no constraints are imposed on the combustion phenomena or on the combustor configuration. The optimization problem is broken down into two elementary problems: the first is the optimum search algorithm, and the second is the turbulent combustion model used to determine the combustor performance parameters. These performance parameters constitute the objective and physical constraints in the optimization problem formulation. The examination of both turbulent combustion phenomena and the gas turbine design process suggests that the turbulent combustion model represents a crucial part of the optimization algorithm. The basic requirements needed for a turbulent combustion model to be successfully used in a practical optimization algorithm are discussed. In principle, the combustion model should comply with the conflicting requirements of high fidelity, robustness and computational efficiency. To that end, the problem of turbulent combustion is discussed and the current state of the art of turbulent combustion modelling is reviewed. According to this review, turbulent combustion models based on the composition PDF transport equation are found to be good candidates for application in the present context. However, these models are computationally expensive. To overcome this difficulty, two different models based on the composition PDF transport equation were developed: an improved Lagrangian Monte Carlo composition PDF algorithm and the generalized stochastic reactor model. Improvements in the Lagrangian Monte Carlo composition PDF model performance and its computational efficiency were achieved through the

  6. A Study of Mechanical Optimization Strategy for Cardiac Resynchronization Therapy Based on an Electromechanical Model

    PubMed Central

    Dou, Jianhong; Xia, Ling; Deng, Dongdong; Zang, Yunliang; Shou, Guofa; Bustos, Cesar; Tu, Weifeng; Liu, Feng; Crozier, Stuart

    2012-01-01

    An optimal electrode position and interventricular (VV) delay in cardiac resynchronization therapy (CRT) improves its success. However, precise quantification of cardiac dyssynchrony and of the magnitude of resynchronization achieved by biventricular (BiV) pacing therapy with mechanical optimization strategies based on computational models remains scant. The maximum circumferential uniformity ratio estimate (CURE) was used here as the mechanical optimization index, which was automatically computed for 6 different electrode positions based on a three-dimensional electromechanical canine model of heart failure (HF) caused by complete left bundle branch block (CLBBB). VV delay timing was adjusted accordingly. The heart excitation propagation was simulated with a monodomain model. The quantification of mechanical intra- and interventricular asynchrony was then investigated with an eight-node isoparametric element method. The results showed that (i) the optimal pacing location, with a maximal CURE of 0.8516, was found at the left ventricle (LV) lateral wall near the equator site with a VV delay of 60 ms, in accordance with current clinical studies, and (ii) compared with the electrical optimization strategy based on ERMS, the mechanical optimization strategy yielded greater improvement in LV synchronous contraction and hemodynamics. Measures of mechanical dyssynchrony therefore improve the sensitivity and specificity of predicting responders. The model remains subject to validation in future clinical studies. PMID:23118802

  7. Modeling and optimization of a multi-product biosynthesis factory for multiple objectives.

    PubMed

    Lee, Fook Choon; Pandu Rangaiah, Gade; Lee, Dong-Yup

    2010-05-01

    Genetic algorithms, and optimization in general, enable us to probe deeper into the metabolic pathway recipe for multi-product biosynthesis. An augmented model for optimizing serine and tryptophan flux ratios simultaneously in Escherichia coli was developed by linking the dynamic tryptophan operon model and the aromatic amino acid-tryptophan biosynthesis pathways to the central carbon metabolism model. Six new kinetic parameters of the augmented model were estimated with consideration of available experimental data and other published work. Major differences between calculated and reference concentrations and fluxes were explained. Sensitivities and the underlying competition among fluxes for carbon sources were consistent with intuitive expectations based on the metabolic network and previous results. Biosynthesis rates of serine and tryptophan were simultaneously maximized using the augmented model via concurrent gene knockout and manipulation. The optimization results were obtained using the elitist non-dominated sorting genetic algorithm (NSGA-II) supported by pattern recognition heuristics. A range of Pareto-optimal enzyme activities regulating the amino acids' biosynthesis was successfully obtained and elucidated wherever possible vis-à-vis fermentation work based on recombinant DNA technology. The predicted potential improvements in various metabolic pathway recipes using the multi-objective optimization strategy were highlighted and discussed in detail. PMID:20051269

  8. Automatic Calibration of a Semi-Distributed Hydrologic Model Using Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Bekele, E. G.; Nicklow, J. W.

    2005-12-01

    Hydrologic simulation models need to be calibrated and validated before being used for operational predictions. Spatially distributed hydrologic models generally have a large number of parameters to capture the various physical characteristics of a hydrologic system. Manual calibration of such models is a tedious and daunting task, and its success depends on the subjective assessment of a particular modeler, which includes knowledge of the basic approaches and interactions in the model. To alleviate these shortcomings, an automatic calibration model, which employs an evolutionary optimization technique known as the Particle Swarm Optimizer (PSO) for parameter estimation, is developed. PSO is a heuristic search algorithm inspired by the social behavior of bird flocking and fish schooling. The newly developed calibration model is integrated with the U.S. Department of Agriculture's Soil and Water Assessment Tool (SWAT). SWAT is a physically based, semi-distributed hydrologic model that was developed to predict the long-term impacts of land management practices on water, sediment and agricultural chemical yields in large complex watersheds with varying soils, land use, and management conditions. SWAT was calibrated for streamflow and sediment concentration. The calibration process involves parameter specification, whereby sensitive model parameters are identified, and parameter estimation. In order to reduce the number of parameters to be calibrated, parameterization was performed. The methodology is applied to a demonstration watershed known as Big Creek, which is located in southern Illinois. Application results show the effectiveness of the approach, and model predictions are significantly improved.
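
    The PSO update underlying such a calibration is compact enough to state directly. A minimal box-bounded sketch (generic PSO, not the authors' SWAT coupling; in practice the objective f would wrap a SWAT run and return an error measure against observed streamflow or sediment):

```python
import numpy as np

def pso_minimize(f, lb, ub, n_particles=30, iters=200,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer for box-bounded calibration
    problems. Each particle tracks its personal best; the swarm
    shares a global best that attracts all particles."""
    rng = np.random.default_rng(seed)
    dim = len(lb)
    x = rng.uniform(lb, ub, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pval = x.copy(), np.array([f(p) for p in x])
    g = pbest[pval.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lb, ub)          # keep particles in bounds
        val = np.array([f(p) for p in x])
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()

# usage: best, err = pso_minimize(lambda p: ((p - 0.3)**2).sum(),
#                                 lb=np.zeros(4), ub=np.ones(4))
```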

  9. Evaluation of the chondral modeling theory using fe-simulation and numeric shape optimization

    PubMed Central

    Plochocki, Jeffrey H; Ward, Carol V; Smith, Douglas E

    2009-01-01

    The chondral modeling theory proposes that hydrostatic pressure within articular cartilage regulates joint size, shape, and congruence through regional variations in rates of tissue proliferation. The purpose of this study is to develop a computational model using a nonlinear two-dimensional finite element analysis in conjunction with numeric shape optimization to evaluate the chondral modeling theory. The model employed in this analysis is generated from an MR image of the medial portion of the tibiofemoral joint in a subadult male. Stress-regulated morphological changes are simulated until skeletal maturity and evaluated against the chondral modeling theory. The computed results are found to support the chondral modeling theory. The shape-optimized model exhibits increased joint congruence, broader stress distributions in articular cartilage, and a relative decrease in joint diameter. The results for the computational model correspond well with experimental data and provide valuable insights into the mechanical determinants of joint growth. The model also provides a crucial first step toward developing a comprehensive model that can be employed to test the influence of mechanical variables on joint conformation. PMID:19438771

  10. Optimal prediction for moment models: crescendo diffusion and reordered equations

    NASA Astrophysics Data System (ADS)

    Seibold, Benjamin; Frank, Martin

    2009-12-01

    A direct numerical solution of the radiative transfer equation or any kinetic equation is typically expensive, since the radiative intensity depends on time, space and direction. An expansion in the direction variables yields an equivalent system of infinitely many moments. A fundamental problem is how to truncate the system. Various closures have been presented in the literature. We want to generally study the moment closure within the framework of optimal prediction, a strategy to approximate the mean solution of a large system by a smaller system, for radiation moment systems. We apply this strategy to radiative transfer and show that several closures can be re-derived within this framework, such as the P_N, diffusion, and diffusion correction closures. In addition, the formalism gives rise to new parabolic systems, the reordered P_N equations, that are similar to the simplified P_N equations. Furthermore, we propose a modification to existing closures. Although simple and with no extra cost, this newly derived crescendo diffusion yields better approximations in numerical tests.

  11. A realistic model under which the genetic code is optimal.

    PubMed

    Buhrman, Harry; van der Gulik, Peter T S; Klau, Gunnar W; Schaffner, Christian; Speijer, Dave; Stougie, Leen

    2013-10-01

    The genetic code has a high level of error robustness. Using values of hydrophobicity scales as a proxy for amino acid character, and the mean square measure as a function quantifying error robustness, a value can be obtained for a genetic code which reflects the error robustness of that code. By comparing this value with a distribution of values belonging to codes generated by random permutations of amino acid assignments, the level of error robustness of a genetic code can be quantified. We present a calculation in which the standard genetic code is shown to be optimal. We obtain this result by (1) using recently updated values of polar requirement as input; (2) fixing seven assignments (Ile, Trp, His, Phe, Tyr, Arg, and Leu) based on aptamer considerations; and (3) using known biosynthetic relations of the 20 amino acids. This last point is reflected in an approach of subdivision (restricting the random reallocation of assignments to amino acid subgroups, the set of 20 being divided in four such subgroups). The three approaches to explain robustness of the code (specific selection for robustness, amino acid-RNA interactions leading to assignments, or a slow growth process of assignment patterns) are reexamined in light of our findings. We offer a comprehensive hypothesis, stressing the importance of biosynthetic relations, with the code evolving from an early stage with just glycine and alanine, via intermediate stages, towards 64 codons carrying today's meaning.
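
    The error-robustness measure described here is straightforward to compute. The sketch below uses a randomly generated toy code and placeholder property values rather than the real code table and updated polar-requirement scale, and it permutes all assignments instead of restricting permutations to biosynthetic subgroups, so it only illustrates the shape of the calculation.

```python
import itertools
import random

BASES = "UCAG"
CODONS = ["".join(c) for c in itertools.product(BASES, repeat=3)]

def ms_error(code, scale):
    """Mean-square change of an amino acid property over all single-
    nucleotide substitutions; lower values mean a more error-robust
    code. Substitutions to or from stop codons ('*') are skipped."""
    total, n = 0.0, 0
    for codon in CODONS:
        for pos in range(3):
            for b in BASES:
                if b == codon[pos]:
                    continue
                mut = codon[:pos] + b + codon[pos + 1:]
                a1, a2 = code[codon], code[mut]
                if a1 == "*" or a2 == "*":
                    continue
                total += (scale[a1] - scale[a2]) ** 2
                n += 1
    return total / n

# Toy demonstration with a random code and placeholder property
# values (NOT the real code table or polar-requirement scale).
random.seed(1)
AAS = list("ACDEFGHIKLMNPQRSTVWY")
scale = {a: random.uniform(4.0, 13.0) for a in AAS}
code = {c: random.choice(AAS + ["*"]) for c in CODONS}
observed = ms_error(code, scale)
null = [ms_error(dict(zip(CODONS,
                          random.sample(list(code.values()), 64))), scale)
        for _ in range(200)]
# Fraction of permuted codes more robust than the 'observed' code;
# for the real code this fraction is what quantifies optimality.
print(sum(v < observed for v in null) / len(null))
```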

  12. Recent developments in DYNSUB: New models, code optimization and parallelization

    SciTech Connect

    Daeubler, M.; Trost, N.; Jimenez, J.; Sanchez, V.

    2013-07-01

    DYNSUB is a high-fidelity coupled code system consisting of the reactor simulator DYN3D and the sub-channel code SUBCHANFLOW. It describes nuclear reactor core behavior with pin-by-pin resolution for both steady-state and transient scenarios. In the course of the coupled code system's active development, super-homogenization (SPH) and generalized equivalence theory (GET) discontinuity factors may be computed with and employed in DYNSUB to compensate pin level homogenization errors. Because of the largely increased numerical problem size for pin-by-pin simulations, DYNSUB has benefitted from HPC techniques to improve its numerical performance. DYNSUB's coupling scheme has been structurally revised. Computational bottlenecks have been identified and parallelized for shared memory systems using OpenMP. Comparing the elapsed time for simulating a PWR core with one-eighth symmetry under hot zero power conditions applying the original and the optimized DYNSUB using 8 cores, overall speed-up factors greater than 10 have been observed. The corresponding reduction in execution time enables a routine application of DYNSUB to study pin level safety parameters for engineering sized cases in a scientific environment. (authors)

  13. Subject-Specific Planning of Femoroplasty: A Combined Evolutionary Optimization and Particle Diffusion Model Approach

    PubMed Central

    Basafa, Ehsan; Armand, Mehran

    2014-01-01

    A potentially effective treatment for prevention of osteoporotic hip fractures is augmentation of the mechanical properties of the femur by injecting it with agents such as PMMA bone cement (femoroplasty). The operation, however, is still in the research stage and can benefit substantially from computer planning and optimization. We report the results of computational planning and optimization of the procedure for biomechanical evaluation. An evolutionary optimization method was used to optimally place the cement in finite element (FE) models of seven osteoporotic bone specimens. The optimization, with some inter-specimen variations, suggested that areas close to the cortex in the superior and inferior of the neck and the supero-lateral aspect of the greater trochanter would benefit from augmentation. We then used a particle-based model for bone cement diffusion simulation to match the optimized pattern, taking into account the limitations of the actual surgery, including a limited volume of injection to prevent thermal necrosis. Simulations showed that the yield load can be significantly increased, by more than 30%, using only 9 ml of bone cement. This increase is comparable to previous literature reports where gross filling of the bone was employed instead, using more than 40 ml of cement. These findings, along with the differences in the optimized plans between specimens, emphasize the need for subject-specific models for effective planning of femoral augmentation. PMID:24856887

  14. Optimal input shaping for Fisher identifiability of control-oriented lithium-ion battery models

    NASA Astrophysics Data System (ADS)

    Rothenberger, Michael J.

    This dissertation examines the fundamental challenge of optimally shaping input trajectories to maximize parameter identifiability of control-oriented lithium-ion battery models. Identifiability is a property from information theory that determines the solvability of parameter estimation for mathematical models using input-output measurements. This dissertation creates a framework that exploits the Fisher information metric to quantify the level of battery parameter identifiability, optimizes this metric through input shaping, and facilitates faster and more accurate estimation. The popularity of lithium-ion batteries is growing significantly in the energy storage domain, especially for stationary and transportation applications. While these cells have excellent power and energy densities, they are plagued with safety and lifespan concerns. These concerns are often resolved in the industry through conservative current and voltage operating limits, which reduce the overall performance and still lack robustness in detecting catastrophic failure modes. New advances in automotive battery management systems mitigate these challenges through the incorporation of model-based control to increase performance, safety, and lifespan. To achieve these goals, model-based control requires accurate parameterization of the battery model. While many groups in the literature study a variety of methods to perform battery parameter estimation, a fundamental issue of poor parameter identifiability remains apparent for lithium-ion battery models. This fundamental challenge of battery identifiability is studied extensively in the literature, and some groups are even approaching the problem of improving the ability to estimate the model parameters. The first approach is to add additional sensors to the battery to gain more information that is used for estimation. The other main approach is to shape the input trajectories to increase the amount of information that can be gained from input
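
    The Fisher information metric at the center of this framework can be assembled from output sensitivities. A minimal sketch for a generic simulator (finite-difference sensitivities and i.i.d. Gaussian measurement noise are assumptions of this sketch; sim and all names are placeholders, not the dissertation's battery model):

```python
import numpy as np

def fisher_information(sim, theta, u, sigma=1e-3, eps=1e-6):
    """Fisher information matrix for parameters theta of a model
    y = sim(theta, u) (a 1-D array of outputs) with i.i.d. Gaussian
    measurement noise of standard deviation sigma. Sensitivities are
    approximated by forward finite differences."""
    theta = np.asarray(theta, dtype=float)
    y0 = np.asarray(sim(theta, u))
    S = np.empty((y0.size, theta.size))
    for j in range(theta.size):
        tp = theta.copy()
        tp[j] += eps * max(1.0, abs(theta[j]))
        S[:, j] = (np.asarray(sim(tp, u)) - y0) / (tp[j] - theta[j])
    return S.T @ S / sigma**2

# Input shaping then reduces to maximizing a scalar identifiability
# criterion, e.g. np.linalg.slogdet(fisher_information(...))[1],
# over a parameterized family of input profiles u.
```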

  15. Stochastic Optimization for an Analytical Model of Saltwater Intrusion in Coastal Aquifers

    PubMed Central

    Stratis, Paris N.; Karatzas, George P.; Papadopoulou, Elena P.; Zakynthinaki, Maria S.; Saridakis, Yiannis G.

    2016-01-01

    The present study implements a stochastic optimization technique to optimally manage freshwater pumping from coastal aquifers. Our simulations utilize the well-known sharp interface model for saltwater intrusion in coastal aquifers together with its known analytical solution. The objective is to maximize the total volume of freshwater pumped by the wells from the aquifer while, at the same time, protecting the aquifer from saltwater intrusion. Toward dealing with this problem in real time, the ALOPEX stochastic optimization method is used to optimize the pumping rates of the wells, coupled with a penalty-based strategy that keeps the saltwater front at a safe distance from the wells. Several numerical optimization results, simulating a known real aquifer case, are presented. The results explore the computational performance of the chosen stochastic optimization method as well as its ability to manage freshwater pumping in real aquifer environments. PMID:27689362
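
    The ALOPEX rule itself is simple: each parameter moves along the correlation between its previous change and the previous change in the objective, plus noise. A minimal sketch (generic ALOPEX with illustrative names; f stands in for the penalized pumping objective):

```python
import numpy as np

def alopex_maximize(f, x0, iters=500, gamma=0.1, noise=0.05, seed=0):
    """Minimal ALOPEX iteration for maximizing f: each step moves
    along the correlation between the previous parameter change and
    the previous objective change, plus noise to escape local optima."""
    rng = np.random.default_rng(seed)
    x_prev = np.asarray(x0, dtype=float)
    x = x_prev + noise * rng.standard_normal(x_prev.size)
    f_prev, f_cur = f(x_prev), f(x)
    for _ in range(iters):
        step = gamma * (x - x_prev) * (f_cur - f_prev)
        x_next = x + step + noise * rng.standard_normal(x.size)
        x_prev, x = x, x_next
        f_prev, f_cur = f_cur, f(x)
    return x, f_cur

# here f would return total pumped freshwater minus a penalty that
# activates when the simulated saltwater front approaches a well
```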

  16. Parameter identification of a distributed runoff model by the optimization software Colleo

    NASA Astrophysics Data System (ADS)

    Matsumoto, Kazuhiro; Miyamoto, Mamoru; Yamakage, Yuzuru; Tsuda, Morimasa; Anai, Hirokazu; Iwami, Yoichi

    2015-04-01

    The introduction of Colleo (Collection of Optimization software) is presented and case studies of parameter identification for a distributed runoff model are illustrated. In order to calculate river discharge accurately, distributed runoff models are widely used to take into account the distributions of land use, soil type and rainfall. A feasibility study of parameter optimization is naturally done in two steps. The first step is to survey which optimization algorithms are suitable for the problems of interest. The second step is to investigate the performance of the specific optimization algorithm. Most previous studies seem to focus on the second step. This study focuses on the first step and complements the previous studies. Many optimization algorithms have been proposed in computational science, and a large number of optimization software packages have been developed and released to the public with practical performance and quality. It is well known that using algorithms suited to the problem is important for obtaining good optimization results efficiently. To make such algorithm comparisons straightforward, optimization software is needed with which the performance of many algorithms can be compared and which can be connected to various simulation software. Colleo was developed to satisfy these needs. Colleo provides a unified user interface to several optimization packages such as pyOpt, NLopt, inspyred and R, and helps investigate the suitability of optimization algorithms. 74 different implementations of optimization algorithms, including Nelder-Mead, Particle Swarm Optimization and Genetic Algorithms, are available with Colleo. The effectiveness of Colleo was demonstrated with flood events of the Gokase River basin in Japan (1,820 km2). From 2002 to 2010, there were 15 flood events in which the discharge exceeded 1,000 m3/s. The discharge was calculated with the PWRI distributed hydrological model developed by ICHARM. The target

  17. Oyster Creek cycle 10 nodal model parameter optimization study using PSMS

    SciTech Connect

    Dougher, J.D.

    1987-01-01

    The power shape monitoring system (PSMS) is an on-line core monitoring system that uses a three-dimensional nodal code (NODE-B) to perform nodal power calculations and compute thermal margins. The PSMS contains a parameter optimization function that improves the ability of NODE-B to accurately monitor core power distributions. This function iterates on the model normalization parameters (albedos and mixing factors) to obtain the best agreement between predicted and measured traversing in-core probe (TIP) readings on a statepoint-by-statepoint basis. Following several statepoint optimization runs, an average set of optimized normalization parameters can be determined and implemented into the current or subsequent cycle core model for on-line core monitoring. A statistical analysis of 19 high-power steady-state statepoints throughout Oyster Creek cycle 10 operation has shown consistently poor virgin model performance. The normalization parameters used in the cycle 10 NODE-B model were based on a cycle 8 study, which evaluated only Exxon fuel types. The introduction of General Electric (GE) fuel into cycle 10 (172 assemblies) was a significant fuel/core design change that could have altered the optimum set of normalization parameters. Based on the need to evaluate a potential change in the model normalization parameters for cycle 11, and in an attempt to account for the poor cycle 10 model performance, a parameter optimization study was performed.

  18. Modified optimal control pilot model for computer-aided design and analysis

    NASA Technical Reports Server (NTRS)

    Davidson, John B.; Schmidt, David K.

    1992-01-01

    This paper presents the theoretical development of a modified optimal control pilot model based upon the optimal control model (OCM) of the human operator developed by Kleinman, Baron, and Levison. This model is input compatible with the OCM and retains other key aspects of the OCM, such as a linear quadratic solution for the pilot gains with inclusion of control rate in the cost function, a Kalman estimator, and the ability to account for attention allocation and perception threshold effects. An algorithm designed for easy implementation in current dynamic systems analysis and design software is presented. Example results based upon the analysis of a tracking task using three basic dynamic systems are compared with measured results and with similar analyses performed with the OCM and two previously proposed simplified optimal pilot models. The pilot frequency responses and error statistics obtained with this modified optimal control model are shown to compare more favorably to the measured experimental results than the other previously proposed simplified models evaluated.
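
    At the core of the OCM and of this modified model is a linear quadratic solution for the pilot gains. A minimal sketch of the LQ gain computation via SciPy follows; the state augmentation noted in the comments is the textbook way to include control rate in the cost and is an assumption of this sketch, not the paper's exact formulation.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_gain(A, B, Q, R):
    """Continuous-time LQR gain K (u = -K x) from the algebraic
    Riccati equation."""
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)

# Including control rate in the cost, as the OCM does, can be handled
# by augmenting the state with u and treating u_dot as the new input:
#   A_aug = [[A, B], [0, 0]],  B_aug = [[0], [I]]
# so the quadratic weight on the new input penalizes control rate.

# toy example: double integrator plant
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
K = lqr_gain(A, B, Q=np.eye(2), R=np.array([[1.0]]))
print(K)
```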

  19. Using Agent Base Models to Optimize Large Scale Network for Large System Inventories

    NASA Technical Reports Server (NTRS)

    Shameldin, Ramez Ahmed; Bowling, Shannon R.

    2010-01-01

    The aim of this paper is to use Agent Base Models (ABM) to optimize large-scale network handling capabilities for large system inventories and to implement strategies for reducing capital expenses. The models in this paper use computational algorithms and procedures implemented in Matlab to simulate agent-based models, executed on computing clusters that provide the high-performance parallel computation needed to run the simulations. In both cases, a model is defined as a compilation of a set of structures and processes assumed to underlie the behavior of a network system.

  20. Model Predictive Optimal Control of a Time-Delay Distributed-Parameter System

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan

    2006-01-01

    This paper presents an optimal control method for a class of distributed-parameter systems governed by first order, quasilinear hyperbolic partial differential equations that arise in many physical systems. Such systems are characterized by time delays since information is transported from one state to another by wave propagation. A general closed-loop hyperbolic transport model is controlled by a boundary control embedded in a periodic boundary condition. The boundary control is subject to a nonlinear differential equation constraint that models actuator dynamics of the system. The hyperbolic equation is thus coupled with the ordinary differential equation via the boundary condition. Optimality of this coupled system is investigated using variational principles to seek an adjoint formulation of the optimal control problem. The results are then applied to implement a model predictive control design for a wind tunnel to eliminate a transport delay effect that causes a poor Mach number regulation.

  1. Inverse hydrograph routing optimization model based on the kinematic wave approach

    NASA Astrophysics Data System (ADS)

    Saghafian, B.; Jannaty, M. H.; Ezami, N.

    2015-08-01

    This article presents and validates an inverse flood hydrograph routing optimization model under the kinematic wave (KW) approximation, in order to produce the upstream (inflow) hydrograph given the downstream (outflow) hydrograph of a river reach. The cost function involves minimization of the error between the observed outflow hydrograph and the corresponding directly routed outflow hydrograph. The decision variables are the inflow hydrograph ordinates. The KW and genetic algorithm (GA) methods are coupled, representing the selected methods of direct routing and optimization, respectively. A local search technique is also employed to achieve better agreement of the routed outflow hydrograph with the observed hydrograph. Computer programs handling the direct flood routing, cost function and local search are linked with the optimization model. The results show that the case study inflow hydrographs obtained by the GA were reconstructed accurately. It was also concluded that the coupled KW-GA model framework can perform inverse hydrograph routing with numerical stability.

  2. Optimal vaccination in a stochastic epidemic model of two non-interacting populations.

    PubMed

    Yuan, Edwin C; Alderson, David L; Stromberg, Sean; Carlson, Jean M

    2015-01-01

    Developing robust, quantitative methods to optimize resource allocations in response to epidemics has the potential to save lives and minimize health care costs. In this paper, we develop and apply a computationally efficient algorithm that enables us to calculate the complete probability distribution for the final epidemic size in a stochastic Susceptible-Infected-Recovered (SIR) model. Based on these results, we determine the optimal allocations of a limited quantity of vaccine between two non-interacting populations. We compare the stochastic solution to results obtained for the traditional, deterministic SIR model. For intermediate quantities of vaccine, the deterministic model is a poor estimate of the optimal strategy for the more realistic, stochastic case.
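
    The paper computes the exact final-size distribution with an efficient algorithm; a Monte Carlo stand-in for the same quantity is easy to sketch and useful as a cross-check (a simplified version with illustrative parameters, not the authors' exact method):

```python
import numpy as np

def final_size_dist(N, I0, R0, n_runs=20000, seed=0):
    """Monte Carlo estimate of the final-size distribution of a
    stochastic SIR epidemic on a population of size N. Each event is
    an infection (with odds proportional to R0*S/N) or a recovery;
    time is not tracked because only the final size matters."""
    rng = np.random.default_rng(seed)
    counts = np.zeros(N + 1)
    for _ in range(n_runs):
        S, I = N - I0, I0
        while I > 0:
            if rng.random() < R0 * S / (R0 * S + N):
                S -= 1; I += 1           # infection event
            else:
                I -= 1                   # recovery event
        counts[N - S] += 1               # total ever infected
    return counts / n_runs

# e.g. probability the outbreak infects fewer than 10 people:
# print(final_size_dist(N=100, I0=1, R0=1.5)[:10].sum())
```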

  3. Optimal regulator or conventional? Setup techniques for a model following simulator control system

    NASA Technical Reports Server (NTRS)

    Deets, D. A.

    1978-01-01

    The optimal regulator technique for determining simulator control system gains was compared with the conventional servo analysis approach. Practical considerations associated with airborne motion simulation using a model-following system provided the basis for comparison. The simulation fidelity specifications selected were important in evaluating the relative advantages of the two methods. Frequency responses for a JetStar aircraft following a roll mode model were calculated digitally to illustrate the various cases. A technique for generating forward loop lead in the optimal regulator model-following problem was developed which increases the flexibility of that approach. It appeared to be the only way in which the optimal regulator method could meet the fidelity specifications.

  4. Three-dimensional magnetic optimization of accelerator magnets using an analytic strip model

    SciTech Connect

    Rochepault, Etienne; Aubert, Guy; Vedrine, Pierre

    2014-07-14

    The end design is a critical step in the design of superconducting accelerator magnets. First, the strain energy of the conductors must be minimized, which can be achieved using differential geometry. The end design also requires an optimization of the magnetic field homogeneity. A mechanical and magnetic model for the conductors, using developable strips, is described in this paper. This model can be applied to superconducting Rutherford cables, and it is particularly suitable for High Temperature Superconducting tapes. The great advantage of this approach is the analytic simplification of the field computation, allowing very fast and accurate computations and saving considerable time during the optimization process. Some 3D designs for dipoles are finally proposed, and it is shown that the harmonic integrals can be easily optimized using this model.

  5. Using Markov Models of Fault Growth Physics and Environmental Stresses to Optimize Control Actions

    NASA Technical Reports Server (NTRS)

    Bole, Brian; Goebel, Kai; Vachtsevanos, George

    2012-01-01

    A generalized Markov chain representation of fault dynamics is presented for the case that available modeling of fault growth physics and future environmental stresses can be represented by two independent stochastic process models. A contrived but representatively challenging example will be presented and analyzed, in which uncertainty in the modeling of fault growth physics is represented by a uniformly distributed dice throwing process, and a discrete random walk is used to represent uncertain modeling of future exogenous loading demands to be placed on the system. A finite horizon dynamic programming algorithm is used to solve for an optimal control policy over a finite time window for the case that stochastic models representing physics of failure and future environmental stresses are known, and the states of both stochastic processes are observable by implemented control routines. The fundamental limitations of optimization performed in the presence of uncertain modeling information are examined by comparing the outcomes obtained from simulations of an optimizing control policy with the outcomes that would be achievable if all modeling uncertainties were removed from the system.
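
    The finite-horizon dynamic programming step is standard backward induction over the joint Markov state, which here would encode both the fault-growth and the loading process. A minimal sketch (generic controlled Markov chain; names are illustrative):

```python
import numpy as np

def backward_induction(P, cost, terminal, horizon):
    """Finite-horizon dynamic programming for a controlled Markov
    chain. P[a] is the transition matrix under action a, cost[a] the
    per-stage cost vector, and terminal the terminal cost vector.
    Returns the optimal action per (time, state) and the value
    function at time zero."""
    n_actions, n_states = len(P), len(terminal)
    V = np.asarray(terminal, dtype=float)
    policy = np.zeros((horizon, n_states), dtype=int)
    for t in reversed(range(horizon)):
        Q = np.array([cost[a] + P[a] @ V for a in range(n_actions)])
        policy[t] = Q.argmin(axis=0)     # best action in each state
        V = Q.min(axis=0)
    return policy, V
```

    In the setting described above, each state would be a pair (fault size, load level), with the product of the two independent stochastic process models supplying the transition matrices.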

  6. Parameters Optimization for Operational Storm Surge/Tide Forecast Model using a Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Lee, W.; You, S.; Ryoo, S.; Global Environment System Research Laboratory

    2010-12-01

    Typhoons generated in the northwestern Pacific Ocean affect the Korean Peninsula every year, and storm surges generated by strong low pressure and sea winds often cause serious damage to property in coastal regions. To predict storm surges, much research has been conducted with numerical models over many years. Numerical models based on the laws of physics rely on various parameters in their physics calculations, but the values of these parameters are not known accurately. Because these parameters affect model performance, their uncertain values can strongly influence model results. Therefore, optimization of the parameters used in a numerical model is essential for accurate storm surge prediction. A genetic algorithm (GA) is used here to estimate optimized values of these parameters. The GA is a stochastic search method modeled on the natural phenomena of genetic inheritance and competition for survival, realizing selection and breeding through genetic operators such as inheritance, mutation, selection and crossover. In this study, we have improved the operational storm surge/tide forecast model (STORM) of NIMR/KMA (National Institute of Meteorological Research/Korea Meteorological Administration), which covers 115E - 150E, 20N - 52N, based on POM (Princeton Ocean Model) with 8 km horizontal resolution, using the GA. Optimized values have been estimated for 4 main parameters within STORM: the bottom drag coefficient, the background horizontal diffusivity coefficient, Smagorinsky's horizontal viscosity coefficient and the sea level pressure scaling coefficient. These optimized parameters were estimated using typhoon MAEMI in 2003 and 9 typhoons which affected the Korean Peninsula from 2005 to 2007. The 4 estimated parameters were also used to compare one-month predictions in February and August 2008. During the 48 h forecast time, the mean and median model accuracies improved by 25 and 51%, respectively.

  7. Reduced order model based on principal component analysis for process simulation and optimization

    SciTech Connect

    Lang, Y.; Malacina, A.; Biegler, L.; Munteanu, S.; Madsen, J.; Zitney, S.

    2009-01-01

    It is well-known that distributed parameter computational fluid dynamics (CFD) models provide more accurate results than conventional, lumped-parameter unit operation models used in process simulation. Consequently, the use of CFD models in process/equipment co-simulation offers the potential to optimize overall plant performance with respect to complex thermal and fluid flow phenomena. Because solving CFD models is time-consuming compared to the overall process simulation, we consider the development of fast reduced order models (ROMs) based on CFD results to closely approximate the high-fidelity equipment models in the co-simulation. By considering process equipment items with complicated geometries and detailed thermodynamic property models, this study proposes a strategy to develop ROMs based on principal component analysis (PCA). Taking advantage of commercial process simulation and CFD software (for example, Aspen Plus and FLUENT), we are able to develop systematic CFD-based ROMs for equipment models in an efficient manner. In particular, we show that the validity of the ROM is more robust within a well-sampled input domain and the CPU time is significantly reduced. Typically, it takes at most several CPU seconds to evaluate the ROM compared to several CPU hours or more to solve the CFD model. Two case studies, involving two power plant equipment examples, are described and demonstrate the benefits of using our proposed ROM methodology for process simulation and optimization.

  8. Electrochemical model parameter identification of a lithium-ion battery using particle swarm optimization method

    NASA Astrophysics Data System (ADS)

    Rahman, Md Ashiqur; Anwar, Sohel; Izadian, Afshin

    2016-03-01

    In this paper, a gradient-free optimization technique, namely particle swarm optimization (PSO) algorithm, is utilized to identify specific parameters of the electrochemical model of a Lithium-Ion battery with LiCoO2 cathode chemistry. Battery electrochemical model parameters are subject to change under severe or abusive operating conditions resulting in, for example, over-discharged battery, over-charged battery, etc. It is important for a battery management system to have these parameter changes fully captured in a bank of battery models that can be used to monitor battery conditions in real time. Here the PSO methodology has been successfully applied to identify four electrochemical model parameters that exhibit significant variations under severe operating conditions: solid phase diffusion coefficient at the positive electrode (cathode), solid phase diffusion coefficient at the negative electrode (anode), intercalation/de-intercalation reaction rate at the cathode, and intercalation/de-intercalation reaction rate at the anode. The identified model parameters were used to generate the respective battery models for both healthy and degraded batteries. These models were then validated by comparing the model output voltage with the experimental output voltage for the stated operating conditions. The identified Li-Ion battery electrochemical model parameters are within reasonable accuracy as evidenced by the experimental validation results.

  9. Optimal control for a tuberculosis model with undetected cases in Cameroon

    NASA Astrophysics Data System (ADS)

    Moualeu, D. P.; Weiser, M.; Ehrig, R.; Deuflhard, P.

    2015-03-01

    This paper considers the optimal control of tuberculosis through education, diagnosis campaigns and chemoprophylaxis of latently infected individuals. A mathematical model which includes important components such as undiagnosed infectious, diagnosed infectious, latently infected and lost-sight infectious individuals is formulated. The model combines a frequency-dependent and a density-dependent force of infection for TB transmission. Through optimal control theory and numerical simulations, a cost-effective balance of two different intervention methods is obtained. Seeking to minimize the amount of money the government spends while tuberculosis remains endemic in the Cameroonian population, Pontryagin's maximum principle is used to characterize the optimal control. The optimality system is derived and solved numerically using the forward-backward sweep method (FBSM). Results provide a framework for designing cost-effective strategies for diseases with multiple intervention methods. It turns out that by combining chemoprophylaxis and education, the burden of TB can be reduced by 80% in 10 years.
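
    The forward-backward sweep method mentioned above alternates a forward pass for the state, a backward pass for the adjoint, and a control update from Pontryagin's characterization. A minimal scalar-state skeleton (a simplified sketch, not the paper's multi-compartment TB system; f, adj and u_opt are user-supplied placeholders):

```python
import numpy as np

def fbsm(f, adj, u_opt, x0, lam_T, T, n=1000, relax=0.5, tol=1e-6):
    """Forward-backward sweep skeleton for a scalar-state, scalar-
    control problem: f(x, u) is the state ODE right-hand side,
    adj(x, lam, u) the adjoint ODE right-hand side, and u_opt(x, lam)
    the control given by Pontryagin's characterization (assumed to
    accept arrays). Explicit Euler is used in both sweeps."""
    dt = T / (n - 1)
    x = np.full(n, float(x0))
    lam = np.full(n, float(lam_T))       # lam[-1] is the terminal value
    u = np.zeros(n)
    for _ in range(200):
        for i in range(n - 1):           # forward sweep for the state
            x[i + 1] = x[i] + dt * f(x[i], u[i])
        for i in range(n - 1, 0, -1):    # backward sweep for the adjoint
            lam[i - 1] = lam[i] - dt * adj(x[i], lam[i], u[i])
        u_new = u_opt(x, lam)
        if np.max(np.abs(u_new - u)) < tol:
            break
        u = relax * u + (1 - relax) * u_new   # relaxed control update
    return x, lam, u
```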

  10. Optimal Experiment Design for Monoexponential Model Fitting: Application to Apparent Diffusion Coefficient Imaging

    PubMed Central

    Alipoor, Mohammad; Maier, Stephan E.; Gu, Irene Yu-Hua; Mehnert, Andrew; Kahl, Fredrik

    2015-01-01

    The monoexponential model is widely used in quantitative biomedical imaging. Notable applications include apparent diffusion coefficient (ADC) imaging and pharmacokinetics. The application of ADC imaging to the detection of malignant tissue has in turn prompted several studies concerning optimal experiment design for monoexponential model fitting. In this paper, we propose a new experiment design method that is based on minimizing the determinant of the covariance matrix of the estimated parameters (D-optimal design). In contrast to previous methods, D-optimal design is independent of the imaged quantities. Applying this method to ADC imaging, we demonstrate its consistent performance across the whole range of input variables (imaged parameters, number of measurements, and range of b-values). Using Monte Carlo simulations we show that the D-optimal design outperforms existing experiment design methods in terms of accuracy and precision of the estimated parameters. PMID:26839880
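
    A hedged sketch of the D-optimal idea for this model: for S(b) = S0*exp(-b*ADC) with i.i.d. Gaussian noise, the covariance of the least-squares estimates is (up to the noise variance) the inverse of J^T J, where J is the sensitivity matrix, so D-optimality amounts to maximizing det(J^T J) over the b-values. The nominal parameter values, b-value range, and solver below are illustrative assumptions, not the authors' exact formulation.

      import numpy as np
      from scipy.optimize import minimize

      def neg_log_det_fisher(b, s0=1.0, adc=1.0e-3):
          # Sensitivities of S(b) = s0*exp(-b*adc) with respect to (s0, adc).
          e = np.exp(-b * adc)
          J = np.column_stack([e, -s0 * b * e])
          sign, logdet = np.linalg.slogdet(J.T @ J)   # Fisher info up to sigma**2
          return -logdet if sign > 0 else np.inf

      # Pick 4 b-values in [0, 3000] s/mm^2 (ADC in mm^2/s) minimizing the
      # determinant of the parameter covariance, i.e. maximizing det(J^T J).
      # Bounded Nelder-Mead requires scipy >= 1.7.
      b0 = np.array([0.0, 500.0, 1000.0, 2000.0])
      res = minimize(neg_log_det_fisher, b0, method="Nelder-Mead",
                     bounds=[(0.0, 3000.0)] * 4)
      print(np.sort(res.x))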

  11. THE APPLICATION OF AN EVOLUTIONARY ALGORITHM TO THE OPTIMIZATION OF A MESOSCALE METEOROLOGICAL MODEL

    SciTech Connect

    Werth, D.; O'Steen, L.

    2008-02-11

    We show that a simple evolutionary algorithm can optimize a set of mesoscale atmospheric model parameters with respect to agreement between the mesoscale simulation and a limited set of synthetic observations. This is illustrated using the Regional Atmospheric Modeling System (RAMS). A set of 23 RAMS parameters is optimized by minimizing a cost function based on the root mean square (rms) error between the RAMS simulation and synthetic data (observations derived from a separate RAMS simulation). We find that the optimization can be efficient with relatively modest computer resources, making operational implementation possible. The optimization efficiency, however, is found to depend strongly on the procedure used to perturb the 'child' parameters relative to their 'parents' within the evolutionary algorithm. In addition, the meteorological variables included in the rms error, and their weighting, are found to be an important factor with respect to finding the global optimum.
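
    A minimal evolutionary-algorithm sketch in the spirit of the study is shown below, with Gaussian perturbation of child parameters about their parents and a shrinking perturbation scale; the rms cost function is a placeholder for running RAMS against synthetic observations, and the population size, generation count, and schedule are illustrative assumptions.

      import numpy as np

      def rms_cost(params):
          # Placeholder for running the mesoscale model with `params` and
          # computing the rms error against synthetic observations.
          target = np.linspace(0.2, 0.8, params.size)   # hypothetical optimum
          return np.sqrt(np.mean((params - target) ** 2))

      def evolve(n_params=23, pop=20, gens=100, sigma0=0.1, seed=0):
          rng = np.random.default_rng(seed)
          parents = rng.uniform(0.0, 1.0, size=(pop, n_params))
          for g in range(gens):
              # Perturb children about their parents; the perturbation scale
              # shrinks over generations (this procedure matters, per the text).
              sigma = sigma0 * (1.0 - g / gens)
              children = np.clip(parents + rng.normal(0.0, sigma, parents.shape),
                                 0.0, 1.0)
              both = np.vstack([parents, children])
              costs = np.array([rms_cost(p) for p in both])
              parents = both[np.argsort(costs)[:pop]]   # elitist selection
          return parents[0]

      print(rms_cost(evolve()))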

  12. Optimal transport in truncated models of Rayleigh-Bénard convection

    NASA Astrophysics Data System (ADS)

    Souza, Andre N.; Doering, Charles R.

    2014-11-01

    We investigate absolute limits on heat transport in a truncated model of Rayleigh-Bénard convection. Two complementary analyses are used to derive upper bounds in an eight-mode model: a background method analysis and an optimal control approach. In the optimal control formulation the flow no longer obeys an equation of motion but is instead a control variable. The background method and the optimal control approach produce the same estimate. However, in contrast to a simpler system (i.e., the Lorenz equations), the optimizing flow field, which is observed to be time independent, does not correspond to an exact solution of the equations of motion. Supported by NSF Mathematical Physics Award PHY-1205219 with an Alliances for Graduate Education and the Professoriate (AGEP) Graduate Research Supplement.
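
    For the simpler Lorenz system mentioned above, the transport quantity of interest is the long-time average of the product XY, which can be estimated by direct integration; the sketch below is only a crude numerical illustration of that average, not the bounding analysis of the paper.

      import numpy as np
      from scipy.integrate import solve_ivp

      # Lorenz equations at standard parameter values; in this truncation the
      # convective heat transport is proportional to X*Y.
      def lorenz(t, s, sigma=10.0, r=28.0, b=8.0 / 3.0):
          x, y, z = s
          return [sigma * (y - x), r * x - y - x * z, x * y - b * z]

      sol = solve_ivp(lorenz, (0.0, 500.0), [1.0, 1.0, 1.0], max_step=0.01)
      x, y = sol.y[0], sol.y[1]
      print(f"crude long-time average of XY: {np.mean(x * y):.2f}")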

  13. Optimal Experiment Design for Monoexponential Model Fitting: Application to Apparent Diffusion Coefficient Imaging.

    PubMed

    Alipoor, Mohammad; Maier, Stephan E; Gu, Irene Yu-Hua; Mehnert, Andrew; Kahl, Fredrik

    2015-01-01

    The monoexponential model is widely used in quantitative biomedical imaging. Notable applications include apparent diffusion coefficient (ADC) imaging and pharmacokinetics. The application of ADC imaging to the detection of malignant tissue has in turn prompted several studies concerning optimal experiment design for monoexponential model fitting. In this paper, we propose a new experiment design method that is based on minimizing the determinant of the covariance matrix of the estimated parameters (D-optimal design). In contrast to previous methods, D-optimal design is independent of the imaged quantities. Applying this method to ADC imaging, we demonstrate its consistent performance across the whole range of input variables (imaged parameters, number of measurements, and range of b-values). Using Monte Carlo simulations we show that the D-optimal design outperforms existing experiment design methods in terms of accuracy and precision of the estimated parameters.

  14. The optimal dynamic immunization under a controlled heterogeneous node-based SIRS model

    NASA Astrophysics Data System (ADS)

    Yang, Lu-Xing; Draief, Moez; Yang, Xiaofan

    2016-05-01

    Dynamic immunizations, under which the state of the propagation network of electronic viruses can be changed by adjusting the control measures, are regarded as an alternative to static immunizations. This paper addresses the optimal dynamic immunization under the widely accepted SIRS assumption. First, based on a controlled heterogeneous node-based SIRS model, an optimal control problem capturing the optimal dynamic immunization is formulated. Second, the existence of an optimal dynamic immunization scheme is shown, and the corresponding optimality system is derived. Next, numerical examples show that an optimal immunization strategy can be worked out by numerically solving the optimality system, and that the network topology has a complex impact on the optimal immunization strategy. Finally, the difference between a payoff and the minimum payoff is estimated in terms of the deviation of the corresponding immunization strategy from the optimal one. The proposed optimal immunization scheme is justified because it achieves a low level of infections at a low cost.
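
    A minimal forward simulation of a controlled node-based SIRS model is sketched below, with a per-node immunization rate u_i(t) moving susceptibles directly to the recovered state; the network, rates, and control are illustrative assumptions, and the paper's optimality system would additionally couple adjoint equations to these dynamics.

      import numpy as np

      # Node-based SIRS with per-node immunization control u (S_i -> R_i).
      def sirs_rhs(S, I, R, u, A, beta=0.3, gamma=0.1, delta=0.05):
          infection = beta * S * (A @ I)           # infection via neighbors
          dS = -infection + delta * R - u * S
          dI = infection - gamma * I
          dR = gamma * I - delta * R + u * S
          return dS, dI, dR

      def simulate(A, u_of_t, T=50.0, n=5000):
          m = A.shape[0]
          S, I, R = np.full(m, 0.99), np.full(m, 0.01), np.zeros(m)
          h = T / n
          for k in range(n):                       # forward Euler integration
              dS, dI, dR = sirs_rhs(S, I, R, u_of_t(k * h), A)
              S, I, R = S + h * dS, I + h * dI, R + h * dR
          return S, I, R

      # Example: ring network with a constant immunization rate at every node.
      m = 10
      A = np.roll(np.eye(m), 1, axis=1) + np.roll(np.eye(m), -1, axis=1)
      S, I, R = simulate(A, lambda t: np.full(m, 0.02))
      print(f"final mean infection level: {I.mean():.4f}")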

  15. Developing a computationally efficient dynamic multilevel hybrid optimization scheme using multifidelity model interactions.

    SciTech Connect

    Hough, Patricia Diane (Sandia National Laboratories, Livermore, CA); Gray, Genetha Anne (Sandia National Laboratories, Livermore, CA); Castro, Joseph Pete Jr.; Giunta, Anthony Andrew

    2006-01-01

    Many engineering application problems use optimization algorithms in conjunction with numerical simulators to search for solutions. The formulation of the relevant objective functions and constraints dictates which optimization algorithms are possible. Often, a gradient-based approach is not possible, since objective functions and constraints can be nonlinear, nonconvex, non-differentiable, or even discontinuous, and the simulations involved can be computationally expensive. Moreover, computational efficiency and accuracy are desirable and also influence the choice of solution method. With the advent and increasing availability of massively parallel computers, computational speed has increased tremendously. Unfortunately, the numerical and model complexities of many problems still demand significant computational resources. Moreover, in optimization these expenses can be a limiting factor, since obtaining solutions often requires the completion of numerous computationally intensive simulations. Therefore, we propose a multifidelity optimization (MFO) algorithm designed to improve the computational efficiency of an optimization method for a wide range of applications. In developing the MFO algorithm, we take advantage of the interactions between multifidelity models to develop a dynamic, computation-time-saving optimization algorithm. First, a direct search method is applied to the high-fidelity model over a reduced design space. In conjunction with this search, a specialized oracle is employed to map the design space of the high-fidelity model to that of a computationally cheaper low-fidelity model using space mapping techniques. Then, in the low-fidelity space, an optimum is obtained using gradient-based or non-gradient-based optimization, and it is mapped back to the high-fidelity space. In this paper, we describe the theory and implementation details of our MFO algorithm. We also demonstrate our MFO method on some example problems and on two applications: earth penetrators and
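
    A highly simplified illustration of the space-mapping idea is sketched below, with placeholder high- and low-fidelity functions and a mapping restricted to an input shift; the actual algorithm's direct search on the high-fidelity model and its specialized oracle are not reproduced here.

      import numpy as np
      from scipy.optimize import minimize

      def f_hi(x):
          # Placeholder expensive high-fidelity model.
          return np.sum((x - 1.0) ** 2) + 0.1 * np.sum(np.sin(5.0 * x))

      def f_lo(x):
          # Placeholder cheap low-fidelity model, systematically offset.
          return np.sum((x - 0.8) ** 2)

      def fit_shift(samples):
          # Oracle role: fit an input shift p so f_lo(x + p) tracks f_hi(x)
          # on a handful of high-fidelity evaluations.
          cost = lambda p: sum((f_lo(x + p) - f_hi(x)) ** 2 for x in samples)
          return minimize(cost, np.zeros(samples[0].size),
                          method="Nelder-Mead").x

      samples = [np.array([0.0, 0.0]), np.array([1.0, 1.0]),
                 np.array([2.0, 0.5])]
      p = fit_shift(samples)

      # Gradient-based optimization of the mapped low-fidelity model; the
      # argument z lives in high-fidelity coordinates, so the optimum maps
      # back directly and is verified with one high-fidelity evaluation.
      res = minimize(lambda z: f_lo(z + p), np.zeros(2), method="BFGS")
      print(res.x, f_hi(res.x))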

  16. Optimization of carbon capture systems using surrogate models of simulated processes.

    SciTech Connect

    Cozad, A.; Chang, Y.; Sahinidis, N.; Miller, D.

    2011-01-01

    With increasing demand placed on power generation plants to reduce carbon dioxide (CO2) emissions, processes to separate and capture CO2 for eventual sequestration are highly sought after. Carbon capture processes impart a parasitic load on the power plants; it is estimated that this would increase the cost of electricity from existing pulverized coal plants by anywhere from 71 to 85 percent [1]. The National Energy Technology Laboratory (NETL) is working to lower this to below a 30 percent increase. To reach this goal, work is being done not only to accurately simulate these processes, but also to leverage those accurate and detailed simulations to design optimal carbon capture processes. The major challenges include the lack of accurate algebraic models of the processes, computationally costly simulations, and insufficiently robust simulations. The first challenge bars the use of derivative-based optimization algorithms with provable convergence guarantees; the latter two can make direct derivative-free optimization difficult or impossible. To overcome these difficulties, we take a more indirect approach: we first generate an accurate set of algebraic surrogate models from the simulation and then use derivative-based solvers to optimize the surrogate models. We developed a method that uses derivative-based and derivative-free optimization alongside machine learning and statistical techniques to generate the set of low-complexity surrogate models from data sampled from detailed simulations. The models are validated and improved through the use of derivative-free solvers to adaptively sample new simulation points. The resulting surrogate models can then be embedded in a superstructure-based process synthesis problem, which is solved using derivative-based methods to optimize carbon capture processes.
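
    The generate-validate-optimize loop described above can be sketched as follows, with a placeholder simulation, a low-complexity quadratic surrogate fitted by least squares, and adaptive resampling at each surrogate optimum; the basis, sample sizes, and solver choices are illustrative assumptions rather than the authors' method.

      import numpy as np
      from scipy.optimize import minimize

      def run_simulation(x):
          # Placeholder for a costly, possibly non-robust process simulation.
          return (x[0] - 0.3) ** 2 + (x[1] - 0.7) ** 2 + 0.05 * np.sin(10.0 * x[0])

      def features(x):
          # Low-complexity algebraic basis: quadratic with a cross term.
          x1, x2 = x
          return np.array([1.0, x1, x2, x1 * x2, x1 ** 2, x2 ** 2])

      rng = np.random.default_rng(0)
      X = rng.uniform(0.0, 1.0, size=(20, 2))           # initial sample
      y = np.array([run_simulation(x) for x in X])

      for _ in range(5):                                # adaptive sampling loop
          Phi = np.array([features(x) for x in X])
          w, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # fit the surrogate
          # Derivative-based optimization of the cheap algebraic surrogate.
          res = minimize(lambda x, w=w: features(x) @ w, np.array([0.5, 0.5]),
                         bounds=[(0.0, 1.0), (0.0, 1.0)])
          # Validate at the surrogate optimum, add the point, and refit.
          X = np.vstack([X, res.x])
          y = np.append(y, run_simulation(res.x))

      print("surrogate optimum:", res.x, "true value:", run_simulation(res.x))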

  17. Particle Swarm Optimization for inverse modeling of solute transp