NASA Technical Reports Server (NTRS)
Cain, A. W.; Paulin, R. E.
1979-01-01
Computerized spares optimization for the Space Shuttle Project comprises an analytical process for developing spares quantification and budget forecasts. The model, which assesses the risk associated with recommended spares quantities, is an economical way to determine the best mix among a large number of spare types.
Modeling using optimization routines
NASA Technical Reports Server (NTRS)
Thomas, Theodore
1995-01-01
Modeling using mathematical optimization is a design tool used in magnetic suspension system development. MATLAB (software) is used to calculate the minimum cost subject to the other desired constraints. The parameters to be measured are programmed into mathematical equations. MATLAB calculates answers for each set of inputs; the inputs cover the boundary limits of the design. A Magnetic Suspension System Using Electromagnets Mounted in a Planar Array is a design system that makes use of optimization modeling.
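The abstract describes sweeping inputs across the boundary limits of the design and computing a cost for each set. A minimal sketch of that pattern in Python; the cost function, parameter names, and bounds below are invented for illustration, not taken from the magnetic suspension study:

```python
import itertools

def cost(current, gap):
    """Hypothetical cost surface: ohmic loss grows with current,
    a penalty grows as the gap departs from a nominal value."""
    return current**2 + 4.0 / current + 10.0 * (gap - 0.5)**2

# Boundary limits of the (made-up) design space.
currents = [0.5 + 0.1 * i for i in range(26)]   # 0.5 .. 3.0
gaps     = [0.1 + 0.05 * j for j in range(19)]  # 0.1 .. 1.0

# Evaluate every combination of inputs and keep the cheapest design.
best = min(itertools.product(currents, gaps), key=lambda p: cost(*p))
print(best, round(cost(*best), 3))
```

On this grid the sweep lands near the analytic optimum (current ≈ 2^(1/3), gap = 0.5) of the toy cost.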
HOMER® Micropower Optimization Model
Lilienthal, P.
2005-01-01
NREL has developed the HOMER micropower optimization model. The model can analyze all of the available small power technologies individually and in hybrid configurations to identify least-cost solutions to energy requirements. This capability is valuable to a diverse set of energy professionals and applications. NREL has actively supported its growing user base and developed training programs around the model. These activities are helping to grow the global market for solar technologies.
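HOMER's core idea, evaluating small power technologies individually and in hybrid combinations to find a least-cost way to meet an energy requirement, can be caricatured by brute-force enumeration. The technology menu, capacities, and costs below are made-up placeholders, not HOMER data:

```python
from itertools import combinations

# Hypothetical technology menu: (name, capacity_kW, annualized_cost)
techs = [("PV", 3, 1200), ("wind", 4, 1500), ("diesel", 6, 2000), ("battery", 2, 800)]
demand_kw = 6

# Enumerate every individual technology and hybrid combination,
# keep those that cover demand, and pick the least-cost mix.
feasible = []
for r in range(1, len(techs) + 1):
    for combo in combinations(techs, r):
        if sum(t[1] for t in combo) >= demand_kw:
            feasible.append((sum(t[2] for t in combo), [t[0] for t in combo]))

cost, mix = min(feasible)
print(cost, mix)
```

A real micropower model adds time-series loads, resource data, and dispatch simulation on top of this enumeration skeleton.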
Optimization in Cardiovascular Modeling
NASA Astrophysics Data System (ADS)
Marsden, Alison L.
2014-01-01
Fluid mechanics plays a key role in the development, progression, and treatment of cardiovascular disease. Advances in imaging methods and patient-specific modeling now reveal increasingly detailed information about blood flow patterns in health and disease. Building on these tools, there is now an opportunity to couple blood flow simulation with optimization algorithms to improve the design of surgeries and devices, incorporating more information about the flow physics in the design process to augment current medical knowledge. In doing so, a major challenge is the need for efficient optimization tools that are appropriate for unsteady fluid mechanics problems, particularly for the optimization of complex patient-specific models in the presence of uncertainty. This article reviews the state of the art in optimization tools for virtual surgery, device design, and model parameter identification in cardiovascular flow and mechanobiology applications. In particular, it reviews trade-offs between traditional gradient-based methods and derivative-free approaches, as well as the need to incorporate uncertainties. Key future challenges are outlined, which extend to the incorporation of biological response and the customization of surgeries and devices for individual patients.
Optimal Repairman Allocation Models
1976-03-01
DIVISION II: Approximations ... results for the model are often difficult to obtain. Division II describes three methods for approximating and bounding optimal system characteristics ... time Markov chain.
Boiler modeling optimizes sootblowing
Piboontum, S.J.; Swift, S.M.; Conrad, R.S.
2005-10-01
Controlling the cleanliness and limiting the fouling and slagging of heat transfer surfaces are absolutely necessary to optimize boiler performance. The traditional way to clean heat-transfer surfaces is by sootblowing using air, steam, or water at regular intervals. But with the advent of fuel-switching strategies, such as switching to PRB coal to reduce a plant's emissions, the control of heating surface cleanliness has become more problematic for many owners of steam generators. Boiler modeling can help solve that problem. The article describes Babcock & Wilcox's Powerclean modeling system which consists of heating surface models that produce real-time cleanliness indexes. The Heat Transfer Manager (HTM) program is the core of the system, which can be used on any make or model of boiler. A case study is described to show how the system was successfully used at the 1,350 MW Unit 2 of the American Electric Power's Rockport Power Plant in Indiana. The unit fires a blend of eastern bituminous and Powder River Basin coal. 5 figs.
NEMO Oceanic Model Optimization
NASA Astrophysics Data System (ADS)
Epicoco, I.; Mocavero, S.; Murli, A.; Aloisio, G.
2012-04-01
NEMO is an oceanic model used by the climate community for stand-alone or coupled experiments. Its parallel implementation, based on MPI, limits the exploitation of the emerging computational infrastructures at peta- and exascale due to the weight of communications. As a case study we considered the MFS configuration developed at INGV, with a resolution of 1/16° tailored to the Mediterranean Basin. The work focuses on the analysis of the code on the MareNostrum cluster and on the optimization of critical routines. The first performance analysis of the model aimed at establishing how much the computational performance is influenced by the GPFS file system or the local disks, and which is the best domain decomposition. The results highlight that the exploitation of local disks can reduce the wall clock time by up to 40% and that the best performance is achieved with a 2D decomposition when the local domain has a square shape. A deeper performance analysis shows that the obc_rad, dyn_spg and tra_adv routines are the most time consuming. The obc_rad routine implements the evaluation of the open boundaries and was the first to be optimized. Its communication pattern has been redesigned: before the optimizations all processes were involved in the communication, but only the processes on the boundaries hold the actual data to be exchanged, and only the data on the boundaries must be exchanged. Moreover, the data along the vertical levels are packed and sent with a single MPI_send invocation. The overall efficiency increases compared with the original version, as does the parallel speed-up; the execution time was reduced by about 33.81%. The second phase of optimization involved the SOR solver routine, implementing the Red-Black Successive Over-Relaxation method, in which the high frequency of data exchange among processes accounts for most of the overall communication time.
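The Red-Black Successive Over-Relaxation scheme mentioned for the SOR solver updates the "red" grid points (i + j even) and then the "black" ones, so each half-sweep reads only values of the opposite color and parallelizes cleanly. A serial toy version for a 2D Poisson problem; the grid size, boundary values, and relaxation factor are illustrative choices, not NEMO's:

```python
def sor_red_black(u, f, h, omega=1.7, sweeps=200):
    """Red-Black SOR for the 2D Poisson equation -lap(u) = -f on a
    uniform grid; u is a list-of-lists with fixed boundary values."""
    n, m = len(u), len(u[0])
    for _ in range(sweeps):
        for color in (0, 1):                      # red points, then black
            for i in range(1, n - 1):
                for j in range(1, m - 1):
                    if (i + j) % 2 != color:
                        continue
                    gs = 0.25 * (u[i-1][j] + u[i+1][j]
                                 + u[i][j-1] + u[i][j+1] - h * h * f[i][j])
                    u[i][j] += omega * (gs - u[i][j])
    return u

n = 17
h = 1.0 / (n - 1)
f = [[0.0] * n for _ in range(n)]
# Laplace problem: u = 1 on one edge, 0 on the others.
u = [[0.0] * n for _ in range(n)]
for j in range(n):
    u[0][j] = 1.0
u = sor_red_black(u, f, h)
print(round(u[n // 2][n // 2], 3))
```

By a symmetry argument the converged center value is exactly 0.25 for this boundary configuration, which makes a convenient correctness check.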
Legal Policy Optimizing Models
ERIC Educational Resources Information Center
Nagel, Stuart; Neef, Marian
1977-01-01
The use of mathematical models originally developed by economists and operations researchers is described for legal process research. Situations involving plea bargaining, arraignment, and civil liberties illustrate the applicability of decision theory, inventory modeling, and linear programming in operations research. (LBH)
Pyomo : Python Optimization Modeling Objects.
Siirola, John; Laird, Carl Damon; Hart, William Eugene; Watson, Jean-Paul
2010-11-01
The Python Optimization Modeling Objects (Pyomo) package [1] is an open source tool for modeling optimization applications within Python. Pyomo provides an object-oriented approach to optimization modeling, and it can be used to define symbolic problems, create concrete problem instances, and solve these instances with standard solvers. While Pyomo provides a capability that is commonly associated with algebraic modeling languages such as AMPL, AIMMS, and GAMS, Pyomo's modeling objects are embedded within a full-featured high-level programming language with a rich set of supporting libraries. Pyomo leverages the capabilities of the Coopr software library [2], which integrates Python packages (including Pyomo) for defining optimizers, modeling optimization applications, and managing computational experiments. A central design principle within Pyomo is extensibility. Pyomo is built upon a flexible component architecture [3] that allows users and developers to readily extend the core Pyomo functionality. Through these interface points, extensions and applications can have direct access to an optimization model's expression objects. This facilitates the rapid development and implementation of new modeling constructs as well as high-level solution strategies (e.g., using decomposition- and reformulation-based techniques). In this presentation, we will give an overview of the Pyomo modeling environment and model syntax, and present several extensions to the core Pyomo environment, including support for Generalized Disjunctive Programming (Coopr GDP), Stochastic Programming (PySP), a generic Progressive Hedging solver [4], and a tailored implementation of Benders Decomposition.
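As a rough illustration of the kind of algebraic model Pyomo expresses (decision variables, linear constraints, an objective), here is a tiny linear program solved in plain Python by enumerating constraint-intersection vertices, so no Pyomo installation or external solver is required; the coefficients are invented, and this enumeration stands in for what a real LP solver does:

```python
from itertools import combinations

# Tiny LP: maximize 3x + 5y  subject to  x + y <= 4,  x + 3y <= 6,  x, y >= 0.
# Each constraint is stored as a*x + b*y <= c (nonnegativity included).
cons = [(1, 1, 4), (1, 3, 6), (-1, 0, 0), (0, -1, 0)]

def feasible(x, y, eps=1e-9):
    return all(a * x + b * y <= c + eps for a, b, c in cons)

# An LP optimum lies at a vertex: intersect every pair of constraint
# lines (Cramer's rule) and keep the feasible intersection points.
vertices = []
for (a1, b1, c1), (a2, b2, c2) in combinations(cons, 2):
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        continue                      # parallel pair, no vertex
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    if feasible(x, y):
        vertices.append((x, y))

obj, (x, y) = max((3 * vx + 5 * vy, (vx, vy)) for vx, vy in vertices)
print(obj, x, y)
```

In Pyomo the same problem would be written declaratively (variables, constraints, objective) and handed to a standard solver instead of being enumerated by hand.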
Risk modelling in portfolio optimization
NASA Astrophysics Data System (ADS)
Lam, W. H.; Jaaman, Saiful Hafizah Hj.; Isa, Zaidi
2013-09-01
Risk management is very important in portfolio optimization. The mean-variance model has been used in portfolio optimization to minimize the investment risk. The objective of the mean-variance model is to minimize the portfolio risk while achieving the target rate of return; variance is used as the risk measure. The purpose of this study is to compare the portfolio composition as well as the performance of the optimal portfolio of the mean-variance model against an equally weighted portfolio, in which the proportions invested in each asset are equal. The results show that the compositions of the mean-variance optimal portfolio and the equally weighted portfolio are different. Moreover, the mean-variance optimal portfolio performs better, giving a higher performance ratio than the equally weighted portfolio.
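For two assets, the mean-variance trade-off described here has a closed form: the minimum-variance weight on asset 1 is w1 = (s2² − cov) / (s1² + s2² − 2·cov). A sketch comparing it with the equally weighted 50/50 portfolio, using made-up return and risk figures:

```python
# Hypothetical annualized figures for two assets.
mu1, mu2 = 0.08, 0.12          # expected returns
s1, s2   = 0.15, 0.25          # standard deviations
rho      = 0.3                 # correlation
cov      = rho * s1 * s2

def port(w1):
    """Mean and standard deviation of a two-asset portfolio."""
    w2 = 1.0 - w1
    mean = w1 * mu1 + w2 * mu2
    var  = w1**2 * s1**2 + w2**2 * s2**2 + 2 * w1 * w2 * cov
    return mean, var ** 0.5

# Closed-form minimum-variance weight for asset 1.
w_mv = (s2**2 - cov) / (s1**2 + s2**2 - 2 * cov)
print(round(w_mv, 3), port(w_mv), port(0.5))
```

With these numbers the minimum-variance portfolio tilts heavily toward the lower-volatility asset and carries less risk than the equally weighted one, mirroring the study's comparison.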
Optimal designs for copula models
Perrone, E.; Müller, W.G.
2016-01-01
Copula modelling has in the past decade become a standard tool in many areas of applied statistics. However, a largely neglected aspect concerns the design of related experiments, particularly whether the estimation of copula parameters can be enhanced by optimizing experimental conditions and how robust the parameter estimates are with respect to the type of copula employed. In this paper an equivalence theorem for (bivariate) copula models is provided that allows formulation of efficient design algorithms and quick checks of whether designs are optimal or at least efficient. Some examples illustrate that in practical situations considerable gains in design efficiency can be achieved. A natural comparison between different copula models with respect to design efficiency is provided as well. PMID:27453616
Adaptive approximation models in optimization
Voronin, A.N.
1995-05-01
The paper proposes a method for optimization of functions of several variables that substantially reduces the number of objective function evaluations compared to traditional methods. The method is based on the property of iterative refinement of approximation models of the optimand function in approximation domains that contract to the extremum point. It does not require subjective specification of the starting point, step length, or other parameters of the search procedure. The method is designed for efficient optimization of unimodal functions of several (not more than 10-15) variables and can be applied to find the global extremum of polymodal functions and also for optimization of scalarized forms of vector objective functions.
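The contracting-domain idea can be caricatured in one dimension: sample a coarse grid, shrink the search interval around the best sample, and repeat until the interval collapses onto the extremum. This toy version uses plain grid sampling rather than the paper's approximation models, and the test function is invented:

```python
def contract_search(f, lo, hi, grid=7, rounds=12):
    """Minimize a unimodal 1-D function by repeatedly sampling a
    coarse grid and contracting the interval around the best sample
    (a toy version of approximation-domain refinement)."""
    evals = 0
    for _ in range(rounds):
        step = (hi - lo) / (grid - 1)
        xs = [lo + k * step for k in range(grid)]
        evals += grid
        best = min(xs, key=f)
        # For a unimodal f the minimizer lies within one grid cell
        # of the best sample, so contract to that neighborhood.
        lo, hi = max(lo, best - step), min(hi, best + step)
    return 0.5 * (lo + hi), evals

xmin, evals = contract_search(lambda x: (x - 2.7)**2 + 1.0, 0.0, 10.0)
print(round(xmin, 4), evals)
```

Each round shrinks the interval by roughly a factor of three while spending only seven evaluations, which is the kind of evaluation economy the paper claims over uniform search.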
Assessing models of optimal diving.
Houston, Alasdair I
2011-06-01
Many birds and mammals forage under water and have to return to the surface to breathe. Models of optimal diving attempt to explain the behaviour of such animals in terms of selection for successful foraging given the constraints imposed by physiology. Several recent papers have questioned the accuracy of both the assumptions and the predictions of these models. Here, I provide a critical review of these papers, arguing that they misrepresent both the models and the data. As a result, they focus on inappropriate tests. I use the debate to suggest various new models and to explore the general relationship between theory and data in behavioural ecology. In particular, I consider the merits of qualitative and quantitative predictions. Copyright © 2011 Elsevier Ltd. All rights reserved.
Branch strategies - Modeling and optimization
NASA Technical Reports Server (NTRS)
Dubey, Pradeep K.; Flynn, Michael J.
1991-01-01
The authors provide a common platform for modeling different schemes for reducing the branch-delay penalty in pipelined processors as well as evaluating the associated increased instruction bandwidth. Their objective is twofold: to develop a model for different approaches to the branch problem and to help select an optimal strategy after taking into account additional i-traffic generated by branch strategies. The model presented provides a flexible tool for comparing different branch strategies in terms of the reduction it offers in average branch delay and also in terms of the associated cost of wasted instruction fetches. This additional criterion turns out to be a valuable consideration in choosing between two strategies that perform almost equally. More importantly, it provides a better insight into the expected overall system performance. Simple compiler-support-based low-implementation-cost strategies can be very effective under certain conditions. An active branch prediction scheme based on loop buffers can be as competitive as a branch-target-buffer based strategy.
How Optimal Is the Optimization Model?
ERIC Educational Resources Information Center
Heine, Bernd
2013-01-01
Pieter Muysken's article on modeling and interpreting language contact phenomena constitutes an important contribution. The approach chosen is a top-down one, building on the author's extensive knowledge of all matters relating to language contact. The paper aims at integrating a wide range of factors and levels of social, cognitive, and…
Combat Identification Modeling Using Robust Optimization Techniques
2008-03-01
Contents excerpt: Monte Carlo Simulation; Mathematical Framework to Optimize CID System; Summary of Previous Mathematical Framework for ATR; Mathematical Framework for CID Simulation; How CID Modeling is Currently Performed in Combat Models.
Optimal Decision Making in Neural Inhibition Models
ERIC Educational Resources Information Center
van Ravenzwaaij, Don; van der Maas, Han L. J.; Wagenmakers, Eric-Jan
2012-01-01
In their influential "Psychological Review" article, Bogacz, Brown, Moehlis, Holmes, and Cohen (2006) discussed optimal decision making as accomplished by the drift diffusion model (DDM). The authors showed that neural inhibition models, such as the leaky competing accumulator model (LCA) and the feedforward inhibition model (FFI), can mimic the…
Optimal Designs for the Rasch Model
ERIC Educational Resources Information Center
Grasshoff, Ulrike; Holling, Heinz; Schwabe, Rainer
2012-01-01
In this paper, optimal designs will be derived for estimating the ability parameters of the Rasch model when difficulty parameters are known. It is well established that a design is locally D-optimal if the ability and difficulty coincide. But locally optimal designs require that the ability parameters to be estimated are known. To attenuate this…
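For the Rasch model, the Fisher information an item with difficulty b carries about ability θ is p(1 − p) with p = 1 / (1 + exp(−(θ − b))), which is largest when b = θ — the coincidence of ability and difficulty the abstract refers to. A small numerical check; the ability value and difficulty grid are arbitrary:

```python
import math

def rasch_info(theta, b):
    """Fisher information of a Rasch item with difficulty b
    for an examinee with ability theta: p * (1 - p)."""
    p = 1.0 / (1.0 + math.exp(-(theta - b)))
    return p * (1.0 - p)

theta = 0.8
difficulties = [-2 + 0.1 * k for k in range(41)]   # -2 .. 2
best_b = max(difficulties, key=lambda b: rasch_info(theta, b))
print(round(best_b, 1), round(rasch_info(theta, best_b), 3))
```

The most informative difficulty matches the ability, with information 0.25 — which is why locally optimal designs need the unknown ability, the circularity the paper sets out to attenuate.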
Portfolio optimization with mean-variance model
NASA Astrophysics Data System (ADS)
Hoe, Lam Weng; Siew, Lam Weng
2016-06-01
Investors wish to achieve the target rate of return at the minimum level of risk in their investment. Portfolio optimization is an investment strategy that can be used to minimize the portfolio risk while achieving the target rate of return. The mean-variance model has been proposed for portfolio optimization; it is an optimization model that aims to minimize the portfolio risk, measured as the portfolio variance. The objective of this study is to construct the optimal portfolio using the mean-variance model. The data of this study consist of weekly returns of 20 component stocks of the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI). The results of this study show that the portfolio composition of the stocks is different. Moreover, investors can obtain the return at the minimum level of risk with the constructed optimal mean-variance portfolio.
A DSN optimal spacecraft scheduling model
NASA Technical Reports Server (NTRS)
Webb, W. A.
1982-01-01
A computer model is described which uses mixed-integer linear programming to provide optimal DSN spacecraft schedules given a mission set and specified scheduling requirements. A solution technique is proposed which uses Benders' method and a heuristic starting algorithm.
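Scheduling a mission set under limited antenna time is, at heart, a 0/1 selection problem of the kind mixed-integer linear programming solves. A brute-force sketch over the binary decision vector; the pass requests, hours, and priority weights are hypothetical, not DSN data:

```python
from itertools import product

# Hypothetical pass requests: (mission, hours, priority_weight)
requests = [("VGR1", 4, 5), ("VGR2", 3, 4), ("PION", 5, 3), ("HELO", 2, 2)]
capacity = 9   # antenna hours available in the scheduling window

best_value, best_sched = -1, None
# Enumerate the 0/1 decision vector explicitly -- a MILP solver
# searches this same space, but implicitly and far more cleverly.
for choice in product((0, 1), repeat=len(requests)):
    hours = sum(c * r[1] for c, r in zip(choice, requests))
    value = sum(c * r[2] for c, r in zip(choice, requests))
    if hours <= capacity and value > best_value:
        best_value = value
        best_sched = [r[0] for c, r in zip(choice, requests) if c]

print(best_value, best_sched)
```

Brute force is fine for four requests but grows as 2^n, which is exactly why the paper turns to mixed-integer programming with Benders' method for realistic mission sets.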
Modelling and Optimizing Mathematics Learning in Children
ERIC Educational Resources Information Center
Käser, Tanja; Busetto, Alberto Giovanni; Solenthaler, Barbara; Baschera, Gian-Marco; Kohn, Juliane; Kucian, Karin; von Aster, Michael; Gross, Markus
2013-01-01
This study introduces a student model and control algorithm, optimizing mathematics learning in children. The adaptive system is integrated into a computer-based training system for enhancing numerical cognition aimed at children with developmental dyscalculia or difficulties in learning mathematics. The student model consists of a dynamic…
Enhanced index tracking modelling in portfolio optimization
NASA Astrophysics Data System (ADS)
Lam, W. S.; Hj. Jaaman, Saiful Hafizah; Ismail, Hamizun bin
2013-09-01
Enhanced index tracking is a popular form of passive fund management in the stock market. It is a dual-objective optimization problem, a trade-off between maximizing the mean return and minimizing the risk. Enhanced index tracking aims to generate excess return over the return achieved by the index, without purchasing all of the stocks that make up the index, by establishing an optimal portfolio. The objective of this study is to determine the optimal portfolio composition and performance by using a weighted model in enhanced index tracking. The weighted model focuses on the trade-off between the excess return and the risk. The results of this study show that the optimal portfolio for the weighted model is able to outperform the Malaysian market index, the Kuala Lumpur Composite Index, because of its higher mean return and lower risk, without purchasing all the stocks in the market index.
Making models match measurements: Model optimization for morphogen patterning networks
Hengenius, JB; Gribskov, MR; Rundell, AE; Umulis, DM
2015-01-01
Mathematical modeling of developmental signaling networks has played an increasingly important role in the identification of regulatory mechanisms by providing a sandbox for hypothesis testing and experiment design. Whether these models consist of an equation with a few parameters or dozens of equations with hundreds of parameters, a prerequisite to model-based discovery is to bring simulated behavior into agreement with observed data via parameter estimation. These parameters provide insight into the system (e.g., enzymatic rate constants describe enzyme properties). Depending on the nature of the model fit desired - from qualitative (relative spatial positions of phosphorylation) to quantitative (exact agreement of spatial position and concentration of gene products) - different measures of data-model mismatch are used to estimate different parameter values, which contain different levels of usable information and/or uncertainty. To facilitate the adoption of modeling as a tool for discovery alongside other tools such as genetics, immunostaining, and biochemistry, careful consideration needs to be given to how well a model fits the available data, what the optimized parameter values mean in a biological context, and how the uncertainty in model parameters and predictions plays into experiment design. The core discussion herein pertains to the quantification of model-to-data agreement, which constitutes the first measure of a model's performance and future utility to the problem at hand. Integration of this experimental data and the appropriate choice of objective measures of data-model agreement will continue to drive modeling forward as a tool that contributes to experimental discovery. The Drosophila melanogaster gap gene system, in which model parameters are optimized against in situ immunofluorescence intensities, demonstrates the importance of error quantification, which is applicable to a wide array of developmental modeling studies. PMID:25016297
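A minimal example of the data-model mismatch measure discussed here: fit the decay length of an exponential morphogen gradient by minimizing a sum of squared residuals against a synthetic intensity profile. The gradient model, the stand-in "noise", and the parameter grid are all fabricated for illustration:

```python
import math

# Synthetic "immunostaining" profile from an exponential gradient
# c(x) = exp(-x / lam) with lam = 0.2, plus small fixed offsets
# standing in for measurement noise (values are made up).
xs = [0.1 * k for k in range(11)]                 # positions 0 .. 1
data = [math.exp(-x / 0.2) + 0.01 * (-1) ** k for k, x in enumerate(xs)]

def sse(lam):
    """Data-model mismatch: sum of squared residuals."""
    return sum((math.exp(-x / lam) - d) ** 2 for x, d in zip(xs, data))

lams = [0.05 + 0.01 * k for k in range(46)]       # candidate decay lengths
best = min(lams, key=sse)
print(round(best, 2))
```

Swapping the squared-error objective for a different mismatch measure (e.g. one scoring only relative spatial positions) would generally recover different parameter values — the qualitative-versus-quantitative fitting choice the passage describes.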
Incorporating routing into reservoir planning optimization models
NASA Astrophysics Data System (ADS)
Zmijewski, Nicholas; Wörman, Anders; Bottacin-Busolin, Andrea
2015-04-01
To achieve the best overall operation result in a reservoir network, optimization models are used. For larger reservoir networks the computational cost increases, making simplification of the hydrodynamic description necessary. Inaccuracy in flow prediction can be related to an incurred sub-optimality in production planning. Flow behavior in a management optimization model is often described using a constant time-lag model. A simplified hydraulic model was used to describe the stream flow in a reservoir network for short-term production planning of a case-study reservoir network (Dalälven River). In this study, the importance of incorporating hydrodynamic wave diffusion for optimized hydropower production planning in a regulated water system was examined by comparing the kinematic-wave model to the constant time-lag. The receding-horizon optimization procedure was applied, emulating the data-assimilation procedure present in modern operations. Power production was shown to deviate from the planned production when considering a single time-lag, as a function of the stream description. The simplification of using a constant time-lag can be considered acceptable for streams characterized by a high Peclet number. Examining the effect of the length of the decision time step demonstrated the importance of high-frequency data assimilation for streams characterized by low Peclet numbers. Further, it was shown that the variability in flow becomes more ordered as a result of management and that the Peclet number contributes to that goal.
Optimal combinations of specialized conceptual hydrological models
NASA Astrophysics Data System (ADS)
Kayastha, Nagendra; Lal Shrestha, Durga; Solomatine, Dimitri
2010-05-01
In hydrological modelling it is usual practice to use a single lumped conceptual model for hydrological simulations at all regimes. However, the simplicity of this modelling paradigm often leads to errors in representing all the complexity of the physical processes in the catchment. A solution could be to model various hydrological processes separately by differently parameterized models, and to combine them. Different hydrological models have varying performance in reproducing catchment response, and generally the response cannot be represented precisely in all segments of the hydrograph: some models perform well in simulating the peak flows, while others do well in capturing the low flows. Better performance can be achieved if a model is applied to the catchment with different parameter sets, each calibrated using criteria favoring high or low flows. In this work we use a modular approach to simulate the hydrology of a catchment, wherein multiple models are applied to replicate the catchment responses and each "specialist" model is calibrated according to a specific objective function, chosen in a way that forces the model to capture certain aspects of the hydrograph; the outputs of the models are combined using a so-called "fuzzy committee". Such a multi-model approach has previously been implemented in the development of data-driven and conceptual models (Fenicia et al., 2007), but its performance was considered only during the calibration period. In this study we tested an application to conceptual models in both calibration and verification periods. In addition, we tested the sensitivity of the result to the use of different weightings in the objective function formulations and different membership functions in the committee. The study was carried out for the Bagmati catchment in Nepal and the Brue catchment in the United Kingdom with a MATLAB-based implementation of the HBV model. A multi-objective evolutionary optimization genetic algorithm (Deb, 2001) was used to…
An overview of the optimization modelling applications
NASA Astrophysics Data System (ADS)
Singh, Ajay
2012-10-01
The optimal use of available resources is of paramount importance against the backdrop of the increasing food, fiber, and other demands of the burgeoning global population and the shrinking resources. The optimal use of these resources can be determined by employing an optimization technique. Comprehensive reviews of the use of various programming techniques for the solution of different optimization problems are provided in this paper. The past reviews are grouped into nine sections based on the solutions of theme-based real-world problems. The sections include: use of optimization modelling for conjunctive use planning, groundwater management, seawater intrusion management, irrigation management, achieving optimal cropping patterns, management of reservoir systems operation, management of resources in arid and semi-arid regions, solid waste management, and miscellaneous uses, which comprise managing problems of hydropower generation and the sugar industry. Conclusions are drawn about where gaps exist and where more research needs to be focused.
An optimization model of communications satellite planning
NASA Astrophysics Data System (ADS)
Dutta, Amitava; Rama, Dasaratha V.
1992-09-01
A mathematical planning model is developed to help make cost effective decisions on key physical and operational parameters, for a satellite intended to provide customer premises services (CPS). The major characteristics of the model are: (1) interactions and tradeoffs among technical variables are formally captured; (2) values for capacity and operational parameters are obtained through optimization, greatly reducing the need for heuristic choices of parameter values; (3) effects of physical and regulatory constraints are included; and (4) the effects of market prices for transmission capacity on planning variables are explicitly captured. The model is solved optimally using geometric programming methods. Sensitivity analysis yields coefficients, analogous to shadow prices, that quantitatively indicate the change in objective function value resulting from variations in input parameter values. This helps in determining the robustness of planning decisions and in coping with some of the uncertainty that exists at the planning stage. The model can therefore be useful in making economically viable planning decisions for communications satellites.
Improving Heliospheric Field Models with Optimized Coronal Models
NASA Astrophysics Data System (ADS)
Jones, S. I.; Davila, J. M.; Uritsky, V. M.
2015-12-01
The Solar Orbiter and Solar Probe Plus missions will travel closer to the sun than any previous mission, collecting unprecedented in situ data. These data can provide insight into coronal structure, energy transport, and evolution in the inner heliosphere. However, in order to take full advantage of the data, researchers need quality models of the inner heliosphere to connect the in situ observations to their coronal and photospheric sources. Developing quality models for this region of space has proved difficult, in part because the only region of the field accessible to routine measurement is the photosphere. The photospheric field measurements, though somewhat problematic, are used as boundary conditions for coronal models, which often neglect or over-simplify chromospheric conditions, and these coronal models are then used as boundary conditions to drive heliospheric models. The result is a great deal of uncertainty about the accuracy and reliability of the heliospheric models. Here we present a technique we are developing for improving global coronal magnetic field models by optimizing the models to conform to the field morphology observed in coronal images. This agreement between the coronal model and the basic morphology of the corona is essential for creating accurate heliospheric models. We will present results of early tests of two implementations of this idea and its first application to real-world data.
Improving Vortex Models via Optimal Control Theory
NASA Astrophysics Data System (ADS)
Hemati, Maziar; Eldredge, Jeff; Speyer, Jason
2012-11-01
Flapping wing kinematics, common in biological flight, can allow for agile flight maneuvers. On the other hand, we currently lack sufficiently accurate low-order models that enable such agility in man-made micro air vehicles. Low-order point vortex models have had reasonable success in predicting the qualitative behavior of the aerodynamic forces resulting from such maneuvers. However, these models tend to over-predict the force response when compared to experiments and high-fidelity simulations, in part because they neglect small excursions of separation from the wing's edges. In the present study, we formulate a constrained minimization problem which allows us to relax the usual edge regularity conditions in favor of empirical determination of vortex strengths. The optimal vortex strengths are determined by minimizing the error with respect to empirical force data, while the vortex positions are constrained to evolve according to the impulse matching model developed in previous work. We consider a flat plate undergoing various canonical maneuvers. The optimized model leads to force predictions remarkably close to the empirical data. Additionally, we compare the optimized and original models in an effort to distill appropriate edge conditions for unsteady maneuvers.
Optimal control, optimization and asymptotic analysis of Purcell's microswimmer model
NASA Astrophysics Data System (ADS)
Wiezel, Oren; Or, Yizhar
2016-11-01
Purcell's swimmer (1977) is a classic model of a three-link microswimmer that moves by performing periodic shape changes. Becker et al. (2003) showed that the swimmer's direction of net motion is reversed upon increasing the stroke amplitude of joint angles. Tam and Hosoi (2007) used numerical optimization in order to find optimal gaits for maximizing either net displacement or Lighthill's energetic efficiency. In our work, we analytically derive leading-order expressions as well as next-order corrections for both net displacement and energetic efficiency of Purcell's microswimmer. Using these expressions enables us to explicitly show the reversal in direction of motion, as well as obtaining an estimate for the optimal stroke amplitude. We also find the optimal swimmer's geometry for maximizing either displacement or energetic efficiency. Additionally, the gait optimization problem is revisited and analytically formulated as an optimal control system with only two state variables, which can be solved using Pontryagin's maximum principle. It can be shown that the optimal solution must follow a "singular arc". Numerical solution of the boundary value problem is obtained, which exactly reproduces Tam and Hosoi's optimal gait.
Modeling optimal mineral nutrition for hazelnut micropropagation
USDA-ARS?s Scientific Manuscript database
Micropropagation of hazelnut (Corylus avellana L.) is typically difficult due to the wide variation in response among cultivars. This study was designed to overcome that difficulty by modeling the optimal mineral nutrients for micropropagation of C. avellana selections using a response surface desig...
Optimal Experimental Design for Model Discrimination
Myung, Jay I.; Pitt, Mark A.
2009-01-01
Models of a psychological process can be difficult to discriminate experimentally because it is not easy to determine the values of the critical design variables (e.g., presentation schedule, stimulus structure) that will be most informative in differentiating them. Recent developments in sampling-based search methods in statistics make it possible to determine these values, and thereby identify an optimal experimental design. After describing the method, it is demonstrated in two content areas in cognitive psychology in which models are highly competitive: retention (i.e., forgetting) and categorization. The optimal design is compared with the quality of designs used in the literature. The findings demonstrate that design optimization has the potential to increase the informativeness of the experimental method. PMID:19618983
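A minimal sketch of the idea behind design optimization for model discrimination: score candidate designs (here, sets of retention intervals) by how far apart two competing forgetting models' predictions lie, then search for the best-scoring design. The model forms and parameter values are illustrative, not the paper's, and the paper uses sampling-based search rather than this exhaustive enumeration.

```python
import math
from itertools import combinations

# Two hypothetical one-parameter retention models (illustrative forms):
# exponential decay vs. power-law forgetting.
def p_exp(t, a=0.2):
    return math.exp(-a * t)

def p_pow(t, b=0.5):
    return (1.0 + t) ** -b

# Score a design (a set of test delays) by how far apart the two models'
# predicted retention curves are at those delays -- a crude stand-in for
# the sampling-based utility used in the paper.
def discriminability(delays):
    return sum((p_exp(t) - p_pow(t)) ** 2 for t in delays)

candidates = range(1, 21)            # possible retention intervals (e.g. days)
best = max(combinations(candidates, 3), key=discriminability)
```

The "optimal" three-interval design is simply the one where the models disagree most; replacing exhaustive enumeration with stochastic search is what makes the approach scale to realistic design spaces.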
Generalized mathematical models in design optimization
NASA Technical Reports Server (NTRS)
Papalambros, Panos Y.; Rao, J. R. Jagannatha
1989-01-01
The theory of optimality conditions of extremal problems can be extended to problems continuously deformed by an input vector. The connection between the sensitivity, well-posedness, stability and approximation of optimization problems is steadily emerging. The authors believe that the important realization here is that the underlying basis of all such work is still the study of point-to-set maps and of small perturbations, yet what has been identified previously as being just related to solution procedures is now being extended to study modeling itself in its own right. Many important studies related to the theoretical issues of parametric programming and large deformation in nonlinear programming have been reported in the last few years, and the challenge now seems to be in devising effective computational tools for solving these generalized design optimization models.
Global Optimization Ensemble Model for Classification Methods
Anwar, Hina; Qamar, Usman; Muzaffar Qureshi, Abdul Wahab
2014-01-01
Supervised learning is the process of data mining for deducing rules from training datasets. A broad array of supervised learning algorithms exists, each with its own advantages and drawbacks. Several basic issues affect the accuracy of a classifier in a supervised learning problem, such as the bias-variance tradeoff, the dimensionality of the input space, and noise in the input data. All of these problems affect classifier accuracy and are the reason there is no globally optimal method for classification, nor any generalized improvement method that can increase the accuracy of an arbitrary classifier while addressing all of the problems stated above. This paper proposes a global optimization ensemble model for classification methods (GMC) that can improve overall accuracy for supervised learning problems. Experimental results on various public datasets showed that the proposed model improved the accuracy of the classification models by 1% to 30%, depending on algorithm complexity. PMID:24883382
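A toy illustration of the basic ensemble idea the paper builds on: combine base classifiers by majority vote. The GMC model goes further, globally optimizing the combination; the labels and classifiers below are made up.

```python
from collections import Counter

# A minimal majority-vote ensemble, illustrating the general idea of
# combining heterogeneous classifiers.
def majority_vote(predictions):
    """predictions: list of label lists, one per base classifier."""
    n_samples = len(predictions[0])
    combined = []
    for i in range(n_samples):
        votes = Counter(clf[i] for clf in predictions)
        combined.append(votes.most_common(1)[0][0])
    return combined

# Three hypothetical base classifiers voting on four samples:
clf_a = ["spam", "ham", "spam", "ham"]
clf_b = ["spam", "spam", "spam", "ham"]
clf_c = ["ham",  "ham",  "spam", "ham"]
print(majority_vote([clf_a, clf_b, clf_c]))  # ['spam', 'ham', 'spam', 'ham']
```

Even this naive combination can outperform its weakest member; optimizing which classifiers participate, and how their votes are weighted, is where the accuracy gains reported in the paper come from.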
Graphical models for optimal power flow
Dvijotham, Krishnamurthy; Chertkov, Michael; Van Hentenryck, Pascal; Vuffray, Marc; Misra, Sidhant
2016-09-13
Optimal power flow (OPF) is the central optimization problem in electric power grids. Although solved routinely in the course of power grid operations, it is known to be strongly NP-hard in general, and weakly NP-hard over tree networks. In this paper, we formulate the optimal power flow problem over tree networks as an inference problem over a tree-structured graphical model where the nodal variables are low-dimensional vectors. We adapt the standard dynamic programming algorithm for inference over a tree-structured graphical model to the OPF problem. Combining this with an interval discretization of the nodal variables, we develop an approximation algorithm for the OPF problem. Further, we use techniques from constraint programming (CP) to perform interval computations and adaptive bound propagation to obtain practically efficient algorithms. Compared to previous algorithms that solve OPF with optimality guarantees using convex relaxations, our approach is able to work for arbitrary tree-structured distribution networks and handle mixed-integer optimization problems. Further, it can be implemented in a distributed message-passing fashion that is scalable and is suitable for “smart grid” applications like control of distributed energy resources. In conclusion, numerical evaluations on several benchmark networks show that practical OPF problems can be solved effectively using this approach.
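The core of the approach, stripped of the power-flow physics, is min-sum dynamic programming over a tree with discretized nodal variables. A hedged sketch with a made-up four-node tree and quadratic node/edge costs:

```python
# Min-sum dynamic programming over a small tree with discretized nodal
# variables -- the skeleton of the paper's approach (the real algorithm
# adds interval constraint propagation and the OPF physics).
# Toy tree: node 0 is the root; children lists give the structure.
children = {0: [1, 2], 1: [3], 2: [], 3: []}
levels = [0.95, 1.0, 1.05]               # discretized nodal values

def node_cost(n, v):
    return (v - 1.0) ** 2                # prefer the nominal value

def edge_cost(vp, vc):
    return 10.0 * (vp - vc) ** 2         # penalize mismatch with the parent

def solve(n, v):
    """Minimum cost of the subtree rooted at n, given node n takes value v."""
    total = node_cost(n, v)
    for c in children[n]:
        total += min(edge_cost(v, w) + solve(c, w) for w in levels)
    return total

best = min(solve(0, v) for v in levels)
```

Because the graph is a tree, each subtree is solved once per parent value, so the cost grows linearly in the number of nodes and quadratically in the number of discretization levels, rather than exponentially in the network size.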
Image-optimized Coronal Magnetic Field Models
NASA Astrophysics Data System (ADS)
Jones, Shaela I.; Uritsky, Vadim; Davila, Joseph M.
2017-08-01
We have reported previously on a new method we are developing for using image-based information to improve global coronal magnetic field models. In that work, we presented early tests of the method, which proved its capability to improve global models based on flawed synoptic magnetograms, given excellent constraints on the field in the model volume. In this follow-up paper, we present the results of similar tests given field constraints of a nature that could realistically be obtained from quality white-light coronagraph images of the lower corona. We pay particular attention to difficulties associated with the line-of-sight projection of features outside of the assumed coronagraph image plane and the effect on the outcome of the optimization of errors in the localization of constraints. We find that substantial improvement in the model field can be achieved with these types of constraints, even when magnetic features in the images are located outside of the image plane.
Probabilistic computer model of optimal runway turnoffs
NASA Technical Reports Server (NTRS)
Schoen, M. L.; Preston, O. W.; Summers, L. G.; Nelson, B. A.; Vanderlinden, L.; Mcreynolds, M. C.
1985-01-01
Landing delays are currently a problem at major air carrier airports, and many forecasters agree that airport congestion will worsen by the end of the century. It is anticipated that some types of delays can be reduced by an efficient, optimal runway exit system that allows the increased approach volumes necessary at congested airports. A computerized Probabilistic Runway Turnoff Model is defined which locates exits and specifies path geometry for a selected maximum occupancy time appropriate for each TERPS aircraft category. The model includes an algorithm for lateral ride comfort limits.
Modeling and Global Optimization of DNA separation
Fahrenkopf, Max A.; Ydstie, B. Erik; Mukherjee, Tamal; Schneider, James W.
2014-01-01
We develop a non-convex non-linear programming problem that determines the minimum run time to resolve different lengths of DNA using a gel-free, micelle end-labeled free-solution electrophoresis separation method. Our optimization framework allows for efficient determination of the utility of different DNA separation platforms and enables the identification of the optimal operating conditions for these DNA separation devices. The non-linear programming problem requires a model for signal spacing and signal width, which is known for many DNA separation methods. As a case study, we show how our approach is used to determine the optimal run conditions for micelle end-labeled free-solution electrophoresis and examine the trade-offs between a single capillary system and a parallel capillary system. Parallel capillaries are shown to be beneficial only for DNA lengths above 230 bases using a polydisperse micelle end-label; otherwise, single capillaries produce faster separations. PMID:24764606
The trapped fluid transducer: modeling and optimization.
Cheng, Lei; Grosh, Karl
2008-06-01
Exact and approximate formulas for calculating the sensitivity and bandwidth of an electroacoustic transducer with an enclosed or trapped fluid volume are developed. The transducer is composed of a fluid-filled rectangular duct with a tapered-width plate on one wall emulating the biological basilar membrane in the cochlea. A three-dimensional coupled fluid-structure model is developed to calculate the transducer sensitivity by using a boundary integral method. The model is used as the basis of an optimization methodology seeking to enhance the transducer performance. Simplified formulas are derived from the model to estimate the transducer sensitivity and the fundamental resonant frequency with good accuracy and much less computational cost. By using the simplified formulas, one can easily design the geometry of the transducer to achieve the optimal performance. As an example design, the transducer achieves a sensitivity of around -200 dB (re 1 V/μPa) in the 10 kHz frequency range with piezoelectric sensing. In analogy to the cochlea, a tapered-width plate design is considered and shown to have a more uniform frequency response than a similar plate with no taper.
Code Differentiation for Hydrodynamic Model Optimization
Henninger, R.J.; Maudlin, P.J.
1999-06-27
Use of a hydrodynamics code for experimental data fitting purposes (an optimization problem) requires information about how a computed result changes when the model parameters change. These so-called sensitivities provide the gradient that determines the search direction for modifying the parameters to find an optimal result. Here, the authors apply code-based automatic differentiation (AD) techniques in the forward and adjoint modes to two problems with 12 parameters to obtain these gradients, and compare the computational efficiency and accuracy of the various methods. They fit the pressure trace from a one-dimensional flyer-plate experiment and examine the accuracy for a two-dimensional jet-formation problem. For the flyer-plate experiment, the adjoint mode requires similar or less computer time than the forward methods. Additional parameters will not change the adjoint mode run time appreciably, which is a distinct advantage for this method. Obtaining "accurate" sensitivities for the jet problem parameters remains problematic.
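Forward-mode AD, the simpler of the two modes discussed, can be illustrated with dual numbers: each value carries its derivative, and arithmetic propagates both. This toy class and function are illustrative only, not the authors' hydrocode machinery:

```python
# Forward-mode automatic differentiation with dual numbers -- the same
# principle the authors apply (via code transformation) to obtain
# sensitivities of hydrocode output with respect to model parameters.
class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # Product rule: (uv)' = u'v + uv'
        return Dual(self.val * o.val,
                    self.val * o.dot + self.dot * o.val)
    __rmul__ = __mul__

def f(p):
    return 3.0 * p * p + 2.0 * p       # toy "model output"

# Seed the derivative direction: d/dp at p = 2 should be 6p + 2 = 14.
out = f(Dual(2.0, 1.0))
print(out.val, out.dot)                # 16.0 14.0
```

Forward mode needs one pass per input parameter, which is why the adjoint (reverse) mode wins when, as here, many parameters feed a few outputs.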
Modeling, Analysis, and Optimization Issues for Large Space Structures
NASA Technical Reports Server (NTRS)
Pinson, L. D. (Compiler); Amos, A. K. (Compiler); Venkayya, V. B. (Compiler)
1983-01-01
Topics concerning the modeling, analysis, and optimization of large space structures are discussed including structure-control interaction, structural and structural dynamics modeling, thermal analysis, testing, and design.
Utilizing computer models for optimizing classroom acoustics
NASA Astrophysics Data System (ADS)
Hinckley, Jennifer M.; Rosenberg, Carl J.
2002-05-01
The acoustical conditions in a classroom play an integral role in establishing an ideal learning environment. Speech intelligibility is dependent on many factors, including speech loudness, room finishes, and background noise levels. The goal of this investigation was to use computer modeling techniques to study the effect of acoustical conditions on speech intelligibility in a classroom. This study focused on a simulated classroom which was generated using the CATT-acoustic computer modeling program. The computer was utilized as an analytical tool in an effort to optimize speech intelligibility in a typical classroom environment. The factors that were focused on were reverberation time, location of absorptive materials, and background noise levels. Speech intelligibility was measured with the Rapid Speech Transmission Index (RASTI) method.
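As a back-of-the-envelope companion to such models, the Sabine formula gives a first-order reverberation-time estimate from room volume and absorption. Ray-tracing tools like CATT-Acoustic go far beyond this, but the formula shows why room volume and absorptive finishes dominate; the room dimensions and absorption coefficients below are hypothetical.

```python
# First-order reverberation-time estimate via the Sabine formula,
# RT60 = 0.161 * V / A (metric units: V in m^3, A in m^2-sabins).
def rt60_sabine(volume_m3, surfaces):
    """surfaces: list of (area_m2, absorption_coefficient) pairs."""
    absorption = sum(area * alpha for area, alpha in surfaces)
    return 0.161 * volume_m3 / absorption

# Hypothetical classroom: 7 m x 9 m x 3 m with mixed finishes.
room = [(63.0, 0.6),   # acoustic ceiling tile
        (63.0, 0.05),  # linoleum floor
        (96.0, 0.1)]   # painted walls
print(round(rt60_sabine(7 * 9 * 3, room), 2))  # ~0.6 s
```

A value near 0.6 s is in the range commonly recommended for classrooms; the computer-model study's contribution is predicting how the *placement* of absorption, not just its total amount, affects intelligibility.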
Modeling groundwater vulnerability to pollution using Optimized DRASTIC model
NASA Astrophysics Data System (ADS)
Mogaji, Kehinde Anthony; San Lim, Hwee; Abdullar, Khiruddin
2014-06-01
The prediction accuracy of the conventional DRASTIC model (CDM) algorithm for groundwater vulnerability assessment is severely limited by the inherent subjectivity and uncertainty in the integration of data obtained from various sources. This study attempts to overcome these problems by exploring the potential of the analytic hierarchy process (AHP) technique as a decision support model to optimize the CDM algorithm. The AHP technique was used to compute normalized weights for the seven parameters of the CDM, yielding an optimized DRASTIC model (ODM) algorithm. The DRASTIC parameters integrated with the ODM algorithm predict which parts of the study area are most likely to become contaminated as a result of activities at or near the land surface. Five vulnerability zones were identified based on the vulnerability index values estimated with the ODM algorithm: non-vulnerable (NV), very low vulnerability (VLV), low vulnerability (LV), moderate vulnerability (MV), and high vulnerability (HV). Spatial analysis of the resulting ODM-based groundwater vulnerability prediction map (GVPM) shows that more than 50% of the area falls in the moderate- and high-vulnerability zones. Validation against groundwater pH and manganese (Mn) concentrations gave correlation factors (CRs) of 90% and 86% for the ODM-based GVPM, compared with 62% and 50% for the CDM-based GVPM. These comparative results indicate that the ODM-based GVPM is more reliable than the CDM-based GVPM in the study area. The study establishes the efficacy of AHP as a spatial decision support technique for enhancing environmental decision making, with particular reference to future groundwater vulnerability assessment.
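The AHP weighting step can be sketched in a few lines: a pairwise-comparison matrix over the criteria is reduced to normalized weights via its principal eigenvector. The 3x3 comparison matrix below is a made-up example; the study weights all seven DRASTIC parameters.

```python
import numpy as np

# AHP weighting sketch: reciprocal pairwise-comparison matrix over three
# hypothetical criteria, reduced to weights by the principal eigenvector.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 3.0],
              [1/5, 1/3, 1.0]])

vals, vecs = np.linalg.eig(A)
k = np.argmax(vals.real)               # Perron (principal) eigenvalue
w = np.abs(vecs[:, k].real)
weights = w / w.sum()                  # normalized AHP weights
print(weights.round(3))                # dominant criterion gets the largest weight
```

In a full AHP workflow one would also compute the consistency ratio from the principal eigenvalue to check that the pairwise judgments are not self-contradictory before using the weights.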
Optimal evolution models for quantum tomography
NASA Astrophysics Data System (ADS)
Czerwiński, Artur
2016-02-01
The research presented in this article concerns the stroboscopic approach to quantum tomography, which is an area of science where quantum physics and linear algebra overlap. In this article we introduce the algebraic structure of the parametric-dependent quantum channels for 2-level and 3-level systems such that the generator of evolution corresponding with the Kraus operators has no degenerate eigenvalues. In such cases the index of cyclicity of the generator is equal to 1, which physically means that there exists one observable the measurement of which performed a sufficient number of times at distinct instants provides enough data to reconstruct the initial density matrix and, consequently, the trajectory of the state. The necessary conditions for the parameters and relations between them are introduced. The results presented in this paper seem to have considerable potential applications in experiments due to the fact that one can perform quantum tomography by conducting only one kind of measurement. Therefore, the analyzed evolution models can be considered optimal in the context of quantum tomography. Finally, we introduce some remarks concerning optimal evolution models in the case of n-dimensional Hilbert space.
Designing Sensor Networks by a Generalized Highly Optimized Tolerance Model
NASA Astrophysics Data System (ADS)
Miyano, Takaya; Yamakoshi, Miyuki; Higashino, Sadanori; Tsutsui, Takako
A variant of the highly optimized tolerance model is applied to a toy problem of bioterrorism to determine the optimal arrangement of hypothetical bio-sensors to avert an epidemic outbreak. A nonlinear loss function is used in searching for the optimal structure of the sensor network. The proposed method successfully averts disastrously large events, which cannot be achieved by the original highly optimized tolerance model.
Combining Simulation and Optimization Models for Hardwood Lumber Production
G.A. Mendoza; R.J. Meimban; W.G. Luppold; Philip A. Araman
1991-01-01
Published literature contains a number of optimization and simulation models dealing with the primary processing of hardwood and softwood logs. Simulation models have been developed primarily as descriptive models for characterizing the general operations and performance of a sawmill. Optimization models, on the other hand, were developed mainly as analytical tools for...
Application of simulation models for the optimization of business processes
NASA Astrophysics Data System (ADS)
Jašek, Roman; Sedláček, Michal; Chramcov, Bronislav; Dvořák, Jiří
2016-06-01
The paper deals with applications of modeling and simulation tools in the optimization of business processes, especially in optimizing signal flow in a security company. Simul8, a discrete-event simulation package, was selected as the modeling tool; it enables the creation of visual models of production and distribution processes.
Differentiating a Finite Element Biodegradation Simulation Model for Optimal Control
NASA Astrophysics Data System (ADS)
Minsker, Barbara S.; Shoemaker, Christine A.
1996-01-01
An optimal control model for improving the design of in situ bioremediation of groundwater has been developed. The model uses a finite element biodegradation simulation model called Bio2D to find optimal pumping strategies. Analytical derivatives of the bioremediation finite element model are derived; these derivatives must be computed for the optimal control algorithm. The derivatives are complex and nonlinear; the bulk of the computational effort in solving the optimal control problem is required to calculate the derivatives. An overview of the optimal control and simulation model formulations is also given.
Printer model inversion by constrained optimization
NASA Astrophysics Data System (ADS)
Cholewo, Tomasz J.
1999-12-01
This paper describes a novel method, based on constrained optimization, for finding the colorant amounts for which a printer will produce a requested color appearance. An error function defines the gamut mapping method and black replacement method. The constraints limit the feasible solution region to the device gamut and prevent exceeding the maximum total area coverage. Colorant values corresponding to in-gamut colors are found with precision limited only by the accuracy of the device model. Out-of-gamut colors are mapped to colors within the boundary of the device gamut. This general approach, used in conjunction with different types of color difference equations, can perform a wide range of out-of-gamut mappings, such as chroma clipping, or find colors on the gamut boundary having specified properties. We present an application of this method to the creation of PostScript color rendering dictionaries and ICC profiles.
Model Identification for Optimal Diesel Emissions Control
Stevens, Andrew J.; Sun, Yannan; Song, Xiaobo; Parker, Gordon
2013-06-20
In this paper we develop a model-based controller for diesel emission reduction using system identification methods. Specifically, our method minimizes the downstream readings from a production NOx sensor while injecting a minimal amount of urea upstream. Based on the linear quadratic estimator, we derive the closed-form solution to a cost function that accounts for the case where some of the system inputs are not controllable. Our cost function can also be tuned to trade off between input usage and output optimization. Our approach performs better than a production controller in simulation: our NOx conversion efficiency was 92.7%, versus 92.4% for the production controller. For NH3 conversion, our efficiency was 98.7%, compared to 88.5% for the production controller.
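A sketch of the linear-quadratic machinery underlying such a controller: iterate the discrete-time Riccati equation to a steady-state feedback gain, with the R matrix playing the role of the tunable input-usage penalty. The system matrices below are invented for illustration, not identified from an engine.

```python
import numpy as np

# Discrete-time LQR gain by iterating the Riccati difference equation --
# the workhorse behind linear-quadratic estimator/regulator designs.
A = np.array([[1.0, 0.1],
              [0.0, 0.9]])
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)          # state (output-error) penalty
R = np.array([[1.0]])  # input-usage penalty (raise it to use less input)

P = Q.copy()
for _ in range(500):   # iterate to the steady-state solution
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)

# u = -K x is the optimal feedback; the closed loop should be stable.
closed_loop = A - B @ K
print(np.max(np.abs(np.linalg.eigvals(closed_loop))) < 1.0)  # True
```

Tuning the Q/R balance is exactly the input-usage versus output-optimization trade-off described in the abstract.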
Optimization Models for Scheduling of Jobs.
Indika, S H Sathish; Shier, Douglas R
2006-01-01
This work is motivated by a particular scheduling problem that is faced by logistics centers that perform aircraft maintenance and modification. Here we concentrate on a single facility (hangar) which is equipped with several work stations (bays). Specifically, a number of jobs have already been scheduled for processing at the facility; the starting times, durations, and work station assignments for these jobs are assumed to be known. We are interested in how best to schedule a number of new jobs that the facility will be processing in the near future. We first develop a mixed integer quadratic programming model (MIQP) for this problem. Since the exact solution of this MIQP formulation is time consuming, we develop a heuristic procedure, based on existing bin packing techniques. This heuristic is further enhanced by application of certain local optimality conditions.
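The bin-packing flavor of the heuristic can be sketched with the classic first-fit-decreasing rule. The paper's procedure is more elaborate and adds local optimality conditions; the job durations and capacity below are made up.

```python
# First-fit-decreasing bin packing -- the classic heuristic family the
# authors adapt for slotting new jobs into work stations.
def first_fit_decreasing(durations, capacity):
    bins = []                        # each bin tracks remaining capacity + jobs
    for d in sorted(durations, reverse=True):
        for b in bins:               # place in the first bin that fits
            if b["free"] >= d:
                b["free"] -= d
                b["jobs"].append(d)
                break
        else:                        # no bin fits: open a new one
            bins.append({"free": capacity - d, "jobs": [d]})
    return bins

jobs = [4, 8, 1, 4, 2, 1]            # hypothetical job durations
packed = first_fit_decreasing(jobs, capacity=10)
print(len(packed))                   # 2 bins suffice for these jobs
```

Sorting the jobs in decreasing order before placement is what gives the heuristic its well-known approximation guarantee relative to the optimal packing.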
Determining Reduced Order Models for Optimal Stochastic Reduced Order Models
Bonney, Matthew S.; Brake, Matthew R.W.
2015-08-01
The use of parameterized reduced order models (PROMs) within the stochastic reduced order model (SROM) framework is a logical progression for both methods. In this report, five different parameterized reduced order models are selected and critiqued against each other and against the truth model for the example of the Brake-Reuss beam. The models are: a Taylor series using finite differences, a proper orthogonal decomposition of the output, a Craig-Bampton representation of the model, a method that uses hyper-dual numbers to determine the sensitivities, and a meta-model method that uses the hyper-dual results and constructs a polynomial curve to better represent the output data. The methods are compared against a parameter sweep and a distribution propagation, where the first four statistical moments are used for comparison. Each method produces very accurate results, with the Craig-Bampton reduction being the least accurate. The models are also compared on the time required to evaluate each model, where the meta-model requires significantly less computation time than the others. Each of the five models provided accurate results in a reasonable time frame; the choice of model depends on the availability of the high-fidelity model and how many evaluations can be performed. The output distribution is examined using a large Monte Carlo simulation along with reduced simulations using Latin hypercube sampling and the stochastic reduced order model sampling technique. Both techniques produced accurate results, with the stochastic reduced order modeling technique producing less error relative to exhaustive sampling for the majority of methods.
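The Latin hypercube scheme mentioned above can be sketched in a few lines: each dimension is split into equal-probability strata, and each stratum receives exactly one shuffled, jittered sample. This is a sketch of the sampler only, not of the stochastic reduced order model itself.

```python
import numpy as np

# Latin hypercube sampling on the unit hypercube: one point per stratum
# per dimension, with stratum order shuffled independently per dimension.
def latin_hypercube(n_samples, n_dims, rng):
    samples = np.empty((n_samples, n_dims))
    for j in range(n_dims):
        # One point in each stratum [i/n, (i+1)/n), in shuffled order.
        strata = rng.permutation(n_samples)
        samples[:, j] = (strata + rng.random(n_samples)) / n_samples
    return samples

rng = np.random.default_rng(0)
pts = latin_hypercube(10, 2, rng)
strata_hit = np.sort(np.floor(pts * 10 + 1e-9).astype(int), axis=0)
# Every one of the 10 strata in each dimension holds exactly one point:
print(np.array_equal(strata_hit, np.tile(np.arange(10)[:, None], (1, 2))))
```

The stratification guarantees each marginal distribution is evenly covered with far fewer samples than plain Monte Carlo, which is why it appears alongside SROM sampling as a reduced-simulation technique.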
Markowitz portfolio optimization model employing fuzzy measure
NASA Astrophysics Data System (ADS)
Ramli, Suhailywati; Jaaman, Saiful Hafizah
2017-04-01
Markowitz in 1952 introduced the mean-variance methodology for portfolio selection problems. His pioneering research has shaped the portfolio risk-return model and become one of the most important research fields in modern finance. This paper extends the classical Markowitz mean-variance portfolio selection model by applying fuzzy measures to determine risk and return. We compare the original mean-variance model as a benchmark, a fuzzy mean-variance model with fuzzy returns, and a model whose returns are modeled by specific types of fuzzy numbers. The fuzzy approach gives better performance than the mean-variance approach. Numerical examples employing Malaysian share market data illustrate these models.
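The crisp benchmark model can be written down explicitly: minimum-variance weights for a target return solve a small equality-constrained quadratic program. The return vector and covariance matrix below are made up for illustration; the paper's fuzzy extension replaces these crisp inputs with fuzzy quantities.

```python
import numpy as np

# Classical mean-variance weights for a target return: solve the
# KKT system of  minimize w' Sigma w  s.t.  mu'w = mu_t,  1'w = 1.
mu = np.array([0.08, 0.12, 0.10])             # expected returns (made up)
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.06]])        # covariance matrix (made up)
mu_t = 0.10                                   # target portfolio return

ones = np.ones(3)
KKT = np.block([[2 * Sigma, mu[:, None], ones[:, None]],
                [mu[None, :], np.zeros((1, 2))],
                [ones[None, :], np.zeros((1, 2))]])
rhs = np.concatenate([np.zeros(3), [mu_t, 1.0]])
w = np.linalg.solve(KKT, rhs)[:3]             # optimal weights
print(w.round(3), float(w @ mu))              # weights sum to 1, return 0.10
```

Sweeping `mu_t` traces out the efficient frontier; the fuzzy variants in the paper change how `mu` and the risk term are measured, not this underlying optimization structure.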
Modeling and Optimizing RF Multipole Ion Traps
NASA Astrophysics Data System (ADS)
Fanghaenel, Sven; Asvany, Oskar; Schlemmer, Stephan
2016-06-01
Radio frequency (rf) ion traps are very well suited for spectroscopy experiments thanks to the long-term storage of the species of interest in a well-defined volume. The electrical potential of the ion trap is determined by the geometry of its electrodes and the applied voltages. In order to understand the behavior of trapped ions in realistic multipole traps, it is necessary to characterize these trapping potentials. Commercial programs like SIMION or COMSOL, employing the finite difference and/or finite element method, are often used to model the electrical fields of the trap in order to design traps for various purposes, e.g. introducing light from a laser into the trap volume. For controlled trapping of ions, e.g. for low-temperature trapping, the time-dependent electrical fields need to be known to high accuracy, especially at the minimum of the effective (mechanical) potential. The commercial programs are not optimized for these applications and suffer from a number of limitations. Therefore, in our approach the boundary element method (BEM) has been employed in home-built programs to generate numerical solutions of real trap geometries, e.g. from CAD drawings. In addition, the resulting fields are described by appropriate multipole expansions. As a consequence, the quality of a trap can be characterized by a small set of multipole parameters which are used to optimize the trap design. In this presentation a few example calculations will be discussed. In particular, the accuracy of the method and the benefits of describing the trapping potentials via multipole expansions will be illustrated. As one important application, heating effects of cold ions arising from non-ideal multipole fields can now be understood as a consequence of imperfect field configurations.
Geomagnetic modeling by optimal recursive filtering
NASA Technical Reports Server (NTRS)
Gibbs, B. P.; Estes, R. H.
1981-01-01
The results of a preliminary study to determine the feasibility of using Kalman filter techniques for geomagnetic field modeling are given. Specifically, five separate field models were computed using observatory annual means, satellite, survey and airborne data for the years 1950 to 1976. Each of the individual field models used approximately five years of data. These five models were combined using a recursive information filter (a Kalman filter written in terms of information matrices rather than covariance matrices). The resulting estimate of the geomagnetic field and its secular variation was propagated four years past the data to the time of the MAGSAT data. The accuracy with which this field model matched the MAGSAT data was evaluated by comparisons with predictions from other pre-MAGSAT field models. The field estimate obtained by recursive estimation was found to be superior to all other models.
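The fusion step of a recursive information filter is compact: information matrices (inverse covariances) add, and the fused state is the information-weighted mean. A two-parameter sketch with invented estimates, standing in for the five field models combined in the study:

```python
import numpy as np

# Information-filter fusion of two independent estimates (x, P):
# information matrices add, and the fused state is their weighted mean.
x1, P1 = np.array([1.0, 2.0]), np.diag([0.5, 1.0])   # estimate 1 (made up)
x2, P2 = np.array([1.2, 1.8]), np.diag([1.0, 0.25])  # estimate 2 (made up)

L1, L2 = np.linalg.inv(P1), np.linalg.inv(P2)   # information matrices
L = L1 + L2                                     # information adds
x = np.linalg.solve(L, L1 @ x1 + L2 @ x2)       # fused estimate
print(x.round(3))   # each component lies between the two inputs,
                    # pulled toward the more certain (lower-variance) one
```

Working with information matrices makes combining many partial models a running sum, which is exactly why the information form suits the sequential multi-epoch combination described in the abstract.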
Response Surface Model Building and Multidisciplinary Optimization Using D-Optimal Designs
NASA Technical Reports Server (NTRS)
Unal, Resit; Lepsch, Roger A.; McMillin, Mark L.
1998-01-01
This paper discusses response surface methods for approximation model building and multidisciplinary design optimization. The response surface methods discussed are central composite designs, Bayesian methods and D-optimal designs. An over-determined D-optimal design is applied to a configuration design and optimization study of a wing-body launch vehicle. Results suggest that over-determined D-optimal designs may provide an efficient approach for approximation model building and for multidisciplinary design optimization.
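For intuition, a D-optimal design for a simple linear model can be found by maximizing det(X^T X) over candidate run sets; the exhaustive search and 1-D candidate grid below are illustrative, not the configuration study's actual method.

```python
import numpy as np
from itertools import combinations

def d_optimal(candidates, n_runs):
    """Exhaustively pick the n_runs-point design maximizing det(X^T X)
    for a linear model with intercept (feasible for tiny candidate sets)."""
    best, best_det = None, -np.inf
    for idx in combinations(range(len(candidates)), n_runs):
        X = np.column_stack([np.ones(n_runs), candidates[list(idx)]])
        d = np.linalg.det(X.T @ X)
        if d > best_det:
            best, best_det = idx, d
    return best, best_det

# 1-D candidate grid; D-optimality pushes runs toward the extremes
cands = np.linspace(-1, 1, 5).reshape(-1, 1)
design, det_val = d_optimal(cands, 3)
```

Real D-optimal software uses exchange algorithms instead of exhaustive search, but the objective is the same determinant criterion.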
Visual prosthesis wireless energy transfer system optimal modeling.
Li, Xueping; Yang, Yuan; Gao, Yong
2014-01-16
A wireless energy transfer system is an effective way to solve the visual prosthesis energy supply problem; theoretical modeling of the system is the prerequisite for optimal energy transfer system design. Starting from the ideal model of the wireless energy transfer system, the system model is optimized for the visual prosthesis application. In the optimized model, planar spiral coils serve as the coupling devices between the energy transmitter and receiver, the parasitic capacitance of the transfer coil is taken into account, and the concept of biological capacitance is introduced to capture the influence of biological tissue on the energy transfer efficiency, making the model more accurate for the actual application. Simulation data from the optimized model are compared with those of the previous ideal model; the results show that, under high-frequency conditions, the parasitic capacitance of the inductance and the biological capacitance included in the optimized model can have a great impact on the wireless energy transfer system. A further comparison with experimental data verifies the validity and accuracy of the proposed model. The optimized model has theoretical guiding significance for further research on wireless energy transfer systems and provides a more precise model reference for solving the power supply problem in clinical applications of visual prostheses.
Quantitative Modeling and Optimization of Magnetic Tweezers
Lipfert, Jan; Hao, Xiaomin; Dekker, Nynke H.
2009-01-01
Magnetic tweezers are a powerful tool to manipulate single DNA or RNA molecules and to study nucleic acid-protein interactions in real time. Here, we have modeled the magnetic fields of permanent magnets in magnetic tweezers and computed the forces exerted on superparamagnetic beads from first principles. For simple, symmetric geometries the magnetic fields can be calculated semianalytically using the Biot-Savart law. For complicated geometries and in the presence of an iron yoke, we employ a finite-element three-dimensional PDE solver to numerically solve the magnetostatic problem. The theoretical predictions are in quantitative agreement with direct Hall-probe measurements of the magnetic field and with measurements of the force exerted on DNA-tethered beads. Using these predictive theories, we systematically explore the effects of magnet alignment, magnet spacing, magnet size, and of adding an iron yoke to the magnets on the forces that can be exerted on tethered particles. We find that the optimal configuration for maximal stretching forces is a vertically aligned pair of magnets, with a minimal gap between the magnets and minimal flow cell thickness. Following these principles, we present a configuration that allows one to apply ≥40 pN stretching forces on ≈1-μm tethered beads. PMID:19527664
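The semianalytic Biot-Savart approach mentioned above can be sketched for the simplest case, a circular current loop, where the numerically integrated on-axis field can be checked against the closed-form result. The loop geometry and current are arbitrary illustrative values, not the tweezers' magnet parameters.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (T*m/A)

def loop_field_on_axis(I, R, z, n=2000):
    """B_z on the axis of a circular loop of radius R carrying current I,
    by summing Biot-Savart elements dB = mu0*I/(4*pi) * dl x r / |r|^3."""
    phi = np.linspace(0, 2 * np.pi, n, endpoint=False)
    dphi = 2 * np.pi / n
    # current elements dl and vectors r from each element to (0, 0, z)
    dl = R * dphi * np.column_stack([-np.sin(phi), np.cos(phi), np.zeros(n)])
    r = np.column_stack([-R * np.cos(phi), -R * np.sin(phi), np.full(n, z)])
    dB = MU0 * I / (4 * np.pi) * np.cross(dl, r) \
         / np.linalg.norm(r, axis=1, keepdims=True) ** 3
    return dB.sum(axis=0)[2]

# check against the analytic result mu0*I*R^2 / (2*(R^2 + z^2)^1.5)
I, R, z = 1.0, 0.01, 0.005
numeric = loop_field_on_axis(I, R, z)
analytic = MU0 * I * R**2 / (2 * (R**2 + z**2) ** 1.5)
```

The same elementwise summation generalizes to the magnet-pair geometries in the paper, which is what makes the semianalytic route attractive for symmetric configurations.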
The Sandpile Model: Optimal Stress and Hormesis
Stark, Martha
2011-01-01
The sandpile model (developed by chaos theorists) is an elegant visual metaphor for the cumulative impact of environmental stressors on complex adaptive systems – an impact that is paradoxical by virtue of the fact that the grains of sand being steadily added to the gradually evolving sandpile are the occasion for both its disruption and its repair. As a result, complex adaptive systems are continuously refashioning themselves at ever-higher levels of complexity and integration – not just in spite of “stressful” input from the outside but by way of it. Stressful input is therefore inherently neither bad (“poison”) nor good (“medication”). Rather, it will be how well the system (be it sandpile or living system) is able to process, integrate, and adapt to the stressful input that will make of it either a growth-disrupting (sandpile-destabilizing) event or a growth-promoting (sandpile-restabilizing) opportunity. Too much stress – “traumatic stress” – will be too overwhelming for the system to manage, triggering instead devastating breakdown. Too little stress will provide too little impetus for transformation and growth, serving instead simply to reinforce the system’s status quo. But just the right amount of stress – “optimal stress” – will provoke recovery by activating the system’s innate capacity to heal itself. PMID:22423229
Integrative systems modeling and multi-objective optimization
This presentation describes a number of algorithms, tools, and methods for utilizing multi-objective optimization within integrated systems modeling frameworks. We first present innovative methods using a genetic algorithm to optimally calibrate the VELMA and SWAT ecohydrological ...
Optimal parametrization of electrodynamical battery model using model selection criteria
NASA Astrophysics Data System (ADS)
Suárez-García, Andrés; Alfonsín, Víctor; Urréjola, Santiago; Sánchez, Ángel
2015-07-01
This paper describes the mathematical parametrization of an electrodynamical battery model using different model selection criteria. A good modeling technique is needed by battery management units in order to increase battery lifetime. The elements of battery models can be mathematically parametrized to enhance their implementation in simulation environments. In this work, the best mathematical parametrizations are selected using three model selection criteria: the coefficient of determination (R2), the Akaike Information Criterion (AIC) and the Bayes Information Criterion (BIC). The R2 criterion only takes into account the error of the mathematical parametrizations, whereas AIC and BIC also consider complexity. A commercial 40 Ah lithium iron phosphate (LiFePO4) battery is modeled and then simulated for comparison. The OpenModelica open-source modeling and simulation environment is used for the battery simulations. The mean percent error of the simulations is 0.0985% for the models parametrized with R2, 0.2300% for the AIC ones, and 0.3756% for the BIC ones. As expected, the R2 criterion selected the most precise, most complex and slowest mathematical parametrizations. The AIC criterion chose parametrizations with similar accuracy, but simpler and faster than the R2 ones.
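As a sketch of how these three criteria trade accuracy against complexity, the snippet below computes R², AIC, and BIC for least-squares polynomial fits under the usual Gaussian-residual assumption; the synthetic data and model orders are illustrative, not the paper's battery parametrizations.

```python
import numpy as np

def gaussian_ic(y, y_hat, k):
    """R^2, AIC and BIC for a least-squares fit with k parameters,
    assuming Gaussian residuals (so the max log-likelihood has a
    closed form in the residual sum of squares)."""
    n = len(y)
    rss = float(np.sum((y - y_hat) ** 2))
    tss = float(np.sum((y - y.mean()) ** 2))
    r2 = 1 - rss / tss
    ll = -0.5 * n * (np.log(2 * np.pi * rss / n) + 1)  # max log-likelihood
    return r2, 2 * k - 2 * ll, k * np.log(n) - 2 * ll

# compare a line vs. a cubic on noisy linear data
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
y = 2 * x + 0.1 * rng.standard_normal(50)
fits = {d: np.polyval(np.polyfit(x, y, d), x) for d in (1, 3)}
r2_1, aic_1, bic_1 = gaussian_ic(y, fits[1], 2)
r2_3, aic_3, bic_3 = gaussian_ic(y, fits[3], 4)
```

R² can only improve as parameters are added, while AIC and BIC charge 2k and k·log(n) penalties respectively, which is the behavior the abstract reports.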
Optimal estimator model for human spatial orientation
NASA Technical Reports Server (NTRS)
Borah, J.; Young, L. R.; Curry, R. E.
1979-01-01
A model is being developed to predict pilot dynamic spatial orientation in response to multisensory stimuli. Motion stimuli are first processed by dynamic models of the visual, vestibular, tactile, and proprioceptive sensors. Central nervous system function is then modeled as a steady-state Kalman filter which blends information from the various sensors to form an estimate of spatial orientation. Where necessary, this linear central estimator has been augmented with nonlinear elements to reflect more accurately some highly nonlinear human response characteristics. Computer implementation of the model has shown agreement with several important qualitative characteristics of human spatial orientation, and it is felt that with further modification and additional experimental data the model can be improved and extended. Possible means are described for extending the model to better represent the active pilot with varying skill and work load levels.
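The steady-state Kalman filter at the heart of such a central estimator can be sketched in scalar form by iterating the discrete Riccati recursion to a fixed point; the random-walk state and noise values below are hypothetical, not the model's sensor parameters.

```python
def steady_state_kalman_gain(a, c, q, r, iters=500):
    """Scalar steady-state Kalman gain for x' = a*x + w, y = c*x + v
    with process noise q and measurement noise r: iterate the discrete
    Riccati recursion on the predicted covariance P to convergence."""
    P = q
    for _ in range(iters):
        P = a * a * P - (a * c * P) ** 2 / (c * c * P + r) + q
    return P * c / (c * c * P + r)

# hypothetical sensor-blending setup: random-walk state, noisy reading
K = steady_state_kalman_gain(a=1.0, c=1.0, q=0.01, r=1.0)
```

Because the gain is constant at steady state, the filter reduces to a fixed linear blend of prediction and measurement, which is what makes the steady-state form attractive for a central-estimator model.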
Improved Propulsion Modeling for Low-Thrust Trajectory Optimization
NASA Technical Reports Server (NTRS)
Knittel, Jeremy M.; Englander, Jacob A.; Ozimek, Martin T.; Atchison, Justin A.; Gould, Julian J.
2017-01-01
Low-thrust trajectory design is tightly coupled with spacecraft systems design. In particular, the propulsion and power characteristics of a low-thrust spacecraft are major drivers in the design of the optimal trajectory. Accurate modeling of the power and propulsion behavior is essential for meaningful low-thrust trajectory optimization. In this work, we discuss new techniques to improve the accuracy of propulsion modeling in low-thrust trajectory optimization while maintaining the smooth derivatives that are necessary for a gradient-based optimizer. The resulting model is significantly more realistic than the industry standard and performs well inside an optimizer. A variety of deep-space trajectory examples are presented.
Structural model optimization using statistical evaluation
NASA Technical Reports Server (NTRS)
Collins, J. D.; Hart, G. C.; Gabler, R. T.; Kennedy, B.
1972-01-01
The results of research in applying statistical methods to the problem of structural dynamic system identification are presented. The study is in three parts: a review of previous approaches by other researchers, a development of various linear estimators which might find application, and the design and development of a computer program which uses a Bayesian estimator. The method is tried on two models and is successful where the predicted stiffness matrix is a proper model, e.g., a bending beam is represented by a bending model. Difficulties are encountered when the model concept varies. There is also evidence that nonlinearity must be handled properly to speed the convergence.
He, L; Huang, G H; Lu, H W
2010-04-15
Solving groundwater remediation optimization problems based on proxy simulators can yield optimal solutions that differ from the "true" ones of the problem. This study presents a new stochastic optimization model under modeling uncertainty and parameter certainty (SOMUM), and the associated solution method, for simultaneously addressing modeling uncertainty associated with simulator residuals and optimizing groundwater remediation processes. Unlike previous modeling efforts, which focused on addressing uncertainty in physical parameters (e.g. soil porosity), this work deals with uncertainty in the mathematical simulator (arising from model residuals). Compared to existing modeling approaches, in which only parameter uncertainty is considered, the model has the advantages of providing mean-variance analysis for contaminant concentrations, mitigating the effects of modeling uncertainties on optimal remediation strategies, offering confidence levels of optimal remediation strategies to system designers, and reducing computational cost in optimization processes.
Problem Formulation for Optimal Array Modeling and Planning
NASA Technical Reports Server (NTRS)
Cheung, Kar-Ming; Lee, Charles H.; Ho, Jeannie
2006-01-01
In this paper we describe an optimal modeling and planning framework for the future large array of DSN antennas. This framework takes into account the array link performance models, reliability models, constraint models, and objective functions, and determines the optimal sub-array cluster configuration that will support the maximum number of concurrent missions based on mission link properties, antenna element reliabilities, mission requests, and array operation constraints.
A MILP-Model for the Optimization of Transports
NASA Astrophysics Data System (ADS)
Björk, Kaj-Mikael
2010-09-01
This paper presents work on developing a mathematical model for the optimization of transports. The decisions to be made are routing decisions, truck assignment and the determination of the pickup order for a set of loads and available trucks. The model presented takes these aspects into account simultaneously. The MILP model is implemented in the Microsoft Excel environment, utilizing the LP-solve freeware as the optimization engine and Visual Basic for Applications as the modeling interface.
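The truck-assignment component of such a model can be sketched by brute force on a toy instance (a real implementation would hand the MILP to a solver such as LP-solve, as the paper does); the costs and capacities below are made up.

```python
from itertools import product

def assign_loads(cost, capacity):
    """Brute-force the truck-assignment subproblem: minimize total cost
    of assigning each load to one truck, respecting a per-truck load
    limit. Equivalent to a tiny MILP with binary assignment variables."""
    n_loads, n_trucks = len(cost), len(cost[0])
    best, best_cost = None, float("inf")
    for assign in product(range(n_trucks), repeat=n_loads):
        if any(assign.count(t) > capacity[t] for t in range(n_trucks)):
            continue  # capacity constraint violated
        total = sum(cost[l][assign[l]] for l in range(n_loads))
        if total < best_cost:
            best, best_cost = assign, total
    return best, best_cost

# hypothetical costs: cost[load][truck]; each truck takes at most 2 loads
cost = [[4, 2], [3, 7], [5, 5]]
assignment, total = assign_loads(cost, capacity=[2, 2])
```

Enumeration is exponential in the number of loads, which is exactly why the paper formulates the joint routing/assignment/pickup-order problem as a MILP for a solver instead.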
Optimal Linking Design for Response Model Parameters
ERIC Educational Resources Information Center
Barrett, Michelle D.; van der Linden, Wim J.
2017-01-01
Linking functions adjust for differences between identifiability restrictions used in different instances of the estimation of item response model parameters. These adjustments are necessary when results from those instances are to be compared. As linking functions are derived from estimated item response model parameters, parameter estimation…
Optimal Scaling of Interaction Effects in Generalized Linear Models
ERIC Educational Resources Information Center
van Rosmalen, Joost; Koning, Alex J.; Groenen, Patrick J. F.
2009-01-01
Multiplicative interaction models, such as Goodman's (1981) RC(M) association models, can be a useful tool for analyzing the content of interaction effects. However, most models for interaction effects are suitable only for data sets with two or three predictor variables. Here, we discuss an optimal scaling model for analyzing the content of…
On Optimal Input Design and Model Selection for Communication Channels
Li, Yanyan; Djouadi, Seddik M; Olama, Mohammed M
2013-01-01
In this paper, the optimal model (structure) selection and input design which minimize the worst case identification error for communication systems are provided. The problem is formulated using metric complexity theory in a Hilbert space setting. It is pointed out that model selection and input design can be handled independently. Kolmogorov n-width is used to characterize the representation error introduced by model selection, while Gelfand and time n-widths are used to represent the inherent error introduced by input design. After the model is selected, an optimal input which minimizes the worst case identification error is shown to exist. In particular, it is proven that the optimal model for reducing the representation error is a Finite Impulse Response (FIR) model, and the optimal input is an impulse at the start of the observation interval. FIR models are widely popular in communication systems, such as in Orthogonal Frequency Division Multiplexing (OFDM) systems.
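The FIR-model-plus-impulse-input conclusion can be illustrated directly: with an impulse at the start of the observation interval, least-squares FIR identification recovers the impulse response exactly. The channel taps below are illustrative, not from the paper.

```python
import numpy as np

def identify_fir(u, y, order):
    """Least-squares FIR identification: y[n] ~ sum_k h[k] * u[n-k],
    solved by stacking delayed copies of the input as regressors."""
    N = len(y)
    U = np.column_stack(
        [np.concatenate([np.zeros(k), u[:N - k]]) for k in range(order)]
    )
    h, *_ = np.linalg.lstsq(U, y, rcond=None)
    return h

# an impulse at the start of the observation interval makes the
# measured output equal to the impulse response itself
h_true = np.array([1.0, 0.5, 0.25])
u = np.zeros(16)
u[0] = 1.0
y = np.convolve(u, h_true)[:16]
h_est = identify_fir(u, y, order=3)
```

With an impulse input the regressor matrix is maximally well conditioned, which is the intuition behind the paper's worst-case optimality result.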
Model and method for optimizing heterogeneous systems
NASA Astrophysics Data System (ADS)
Antamoshkin, O. A.; Antamoshkina, O. A.; Zelenkov, P. V.; Kovalev, I. V.
2016-11-01
A methodology for boosting distributed computing performance by reducing the number of delays is proposed. The concept of an n-dimensional requirements triangle is introduced, and a dynamic mathematical model of resource use in distributed computing systems is described.
Optimizing mouse models for precision cancer prevention
Le Magnen, Clémentine; Dutta, Aditya; Abate-Shen, Cory
2017-01-01
As cancer has become increasingly more prevalent in our society, cancer prevention research has evolved toward placing a greater emphasis on reducing cancer deaths and minimizing the adverse consequences of having cancer. “Precision cancer prevention” takes into account the collaboration of intrinsic and extrinsic factors for influencing cancer incidence and aggressiveness in the context of the individual, as well as the recognition that such knowledge can improve early detection and more accurate discrimination of cancerous lesions. The premise of this review is that analyses of mouse models can greatly augment precision cancer prevention. However, as of now, mouse models, and particularly genetically-engineered mouse (GEM) models, have yet to be fully integrated into prevention research. Herein we discuss opportunities and challenges for “precision mouse modeling”, including their essential criteria of mouse models for prevention research, representative success stories, and opportunities for the more refined analyses in future studies. PMID:26893066
Stochastic Robust Mathematical Programming Model for Power System Optimization
Liu, Cong; Changhyeok, Lee; Haoyong, Chen; Mehrotra, Sanjay
2016-01-01
This paper presents a stochastic robust framework for two-stage power system optimization problems with uncertainty. The model optimizes the probabilistic expectation of different worst-case scenarios with different uncertainty sets. A case study of unit commitment shows the effectiveness of the proposed model and algorithms.
Model Specification Searches Using Ant Colony Optimization Algorithms
ERIC Educational Resources Information Center
Marcoulides, George A.; Drezner, Zvi
2003-01-01
Ant colony optimization is a recently proposed heuristic procedure inspired by the behavior of real ants. This article applies the procedure to model specification searches in structural equation modeling and reports the results. The results demonstrate the capabilities of ant colony optimization algorithms for conducting automated searches.
Improved Modeling of Intelligent Tutoring Systems Using Ant Colony Optimization
ERIC Educational Resources Information Center
Rastegarmoghadam, Mahin; Ziarati, Koorush
2017-01-01
Swarm intelligence approaches, such as ant colony optimization (ACO), are used in adaptive e-learning systems and provide an effective method for finding optimal learning paths based on self-organization. The aim of this paper is to develop an improved modeling of adaptive tutoring systems using ACO. In this model, the learning object is…
Mathematical Model For Engineering Analysis And Optimization
NASA Technical Reports Server (NTRS)
Sobieski, Jaroslaw
1992-01-01
Computational support for the engineering design process reveals the behavior of the designed system in response to external stimuli, and shows how that behavior is modified by changing the physical attributes of the system. System-sensitivity analysis combined with extrapolation forms a model of the design complementary to the model of behavior, capable of directly simulating the effects of changes in design variables. The algorithms developed for this method are applicable to the design of large engineering systems, especially those consisting of several subsystems involving many disciplines.
Stability and optimization in structured population models on graphs.
Colombo, Rinaldo M; Garavello, Mauro
2015-04-01
We prove existence and uniqueness of solutions, continuous dependence on the initial datum and stability with respect to the boundary condition in a class of initial-boundary value problems for systems of balance laws. The particular choice of the boundary condition allows us to encompass models with very different structures. In particular, we consider a juvenile-adult model, the problem of the optimal mating ratio and a model for the optimal management of biological resources. The stability result obtained allows us to tackle various optimal management/control problems, providing sufficient conditions for the existence of optimal choices/controls.
In Search of Optimal Cognitive Diagnostic Model(s) for ESL Grammar Test Data
ERIC Educational Resources Information Center
Yi, Yeon-Sook
2017-01-01
This study compares five cognitive diagnostic models in search of optimal one(s) for English as a Second Language grammar test data. Using a unified modeling framework that can represent specific models with proper constraints, the article first fit the full model (the log-linear cognitive diagnostic model, LCDM) and investigated which model…
On our best behavior: optimality models in human behavioral ecology.
Driscoll, Catherine
2009-06-01
This paper discusses problems associated with the use of optimality models in human behavioral ecology. Optimality models are used in both human and non-human animal behavioral ecology to test hypotheses about the conditions generating and maintaining behavioral strategies in populations via natural selection. The way optimality models are currently used in behavioral ecology faces significant problems, which are exacerbated by employing the so-called 'phenotypic gambit': that is, the bet that the psychological and inheritance mechanisms responsible for behavioral strategies will be straightforward. I argue that each of several different possible ways we might interpret how optimality models are being used for humans face similar and additional problems. I suggest some ways in which human behavioral ecologists might adjust how they employ optimality models; in particular, I urge the abandonment of the phenotypic gambit in the human case.
Hierarchical models and iterative optimization of hybrid systems
Rasina, Irina V.; Baturina, Olga V.; Nasatueva, Soelma N.
2016-06-08
A class of hybrid control systems based on a two-level discrete-continuous model is considered. The concept of this model was proposed and developed in preceding works as a concretization of the general multi-step system with related optimality conditions. A new iterative optimization procedure for such systems is developed, based on localization of the global optimality conditions via contraction of the control set.
Multipurpose optimization models for high level waste vitrification
Hoza, M.
1994-08-01
Optimal Waste Loading (OWL) models have been developed as multipurpose tools for high-level waste studies for the Tank Waste Remediation Program at Hanford. Using nonlinear programming techniques, these models maximize the waste loading of the vitrified waste and optimize the glass-former composition such that the glass produced has the appropriate properties within the melter, and the resultant vitrified waste form meets the requirements for disposal. The OWL models can be used for a single waste stream or for blended streams, and can determine optimal continuous blends or optimal discrete blends of a number of different wastes. The OWL models have been used to identify the most restrictive constraints, to evaluate prospective waste pretreatment methods, to formulate and evaluate blending strategies, and to determine the impacts of variability in the wastes. The OWL models will be used to aid in the design of frits and to maximize the waste loading of the glass for High-Level Waste (HLW) vitrification.
Modeling to optimize terminal stem cell differentiation.
Gallicano, G Ian
2013-01-01
Embryonic stem cells (ESCs), induced pluripotent stem cells (iPSCs), and adult stem cells (ASCs) are among the most promising potential treatments for heart failure, spinal cord injury, neurodegenerative diseases, and diabetes. However, considerable uncertainty in the production of ESC-derived terminally differentiated cell types has limited the efficiency of their development. To address this uncertainty, we and other investigators have begun to employ a comprehensive statistical model of ESC differentiation for determining the role of intracellular pathways (e.g., STAT3) in ESC differentiation and determination of germ layer fate. The approach discussed here applies a Bayesian statistical model to cell/developmental biology, combining traditional flow cytometry methodology and specific morphological observations with advanced statistical and probabilistic modeling and experimental design. The final result of this study is a unique tool and model that enhances the understanding of how and when specific cell fates are determined during differentiation. This model provides a guideline for increasing the production efficiency of therapeutically viable ESC/iPSC/ASC-derived neurons or any other cell type and will eventually lead to advances in stem cell therapy.
Process Model Construction and Optimization Using Statistical Experimental Design,
1988-04-01
Memo No. 88-442, March 1988. Process Model Construction and Optimization Using Statistical Experimental Design, by Emanuel Sachs (Assistant Professor) and George Prueger. Abstract: A methodology is presented for the construction of process models by the combination of physically based mechanistic …
Optimizing glassy p-spin models.
Thomas, Creighton K; Katzgraber, Helmut G
2011-04-01
Computing the ground state of Ising spin-glass models with p-spin interactions is, in general, an NP-hard problem. In this work we show that unlike in the case of the standard Ising spin glass with two-spin interactions, computing ground states with p=3 is an NP-hard problem even in two space dimensions. Furthermore, we present generic exact and heuristic algorithms for finding ground states of p-spin models with high confidence for systems of up to several thousand spins.
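For small systems, the exact-enumeration baseline that such generic algorithms are checked against is easy to sketch: enumerate all 2^n spin configurations of a p=3 model and take the minimum energy. The couplings below are a made-up 4-spin instance, not one of the paper's benchmark systems.

```python
from itertools import product

def ground_state_energy(n, couplings):
    """Exact ground-state energy of a 3-spin model
    H = -sum_{ijk} J_ijk s_i s_j s_k with s_i in {-1, +1},
    by exhaustive enumeration (feasible only for small n)."""
    best = float("inf")
    for spins in product((-1, 1), repeat=n):
        E = -sum(J * spins[i] * spins[j] * spins[k]
                 for (i, j, k), J in couplings.items())
        best = min(best, E)
    return best

# hypothetical 4-spin instance with two 3-spin couplings
J = {(0, 1, 2): 1.0, (1, 2, 3): -0.5}
E0 = ground_state_energy(4, J)
```

The 2^n cost of enumeration is exactly the NP-hardness the abstract refers to, motivating the heuristic algorithms for thousands of spins.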
COBRA-SFS modifications and cask model optimization
Rector, D.R.; Michener, T.E.
1989-01-01
Spent-fuel storage systems are complex systems and developing a computational model for one can be a difficult task. The COBRA-SFS computer code provides many capabilities for modeling the details of these systems, but these capabilities can also allow users to specify a more complex model than necessary. This report provides important guidance to users that dramatically reduces the size of the model while maintaining the accuracy of the calculation. A series of model optimization studies was performed, based on the TN-24P spent-fuel storage cask, to determine the optimal model geometry. Expanded modeling capabilities of the code are also described. These include adding fluid shear stress terms and a detailed plenum model. The mathematical models for each code modification are described, along with the associated verification results. 22 refs., 107 figs., 7 tabs.
A novel fluence map optimization model incorporating leaf sequencing constraints.
Jin, Renchao; Min, Zhifang; Song, Enmin; Liu, Hong; Ye, Yinyu
2010-02-21
A novel fluence map optimization model incorporating leaf sequencing constraints is proposed to overcome the drawbacks of current models that embed smoothing in the objective function. Instead of adding a smoothing term to the objective function, we add the total number of monitor units (TNMU) requirement directly to the constraints, which serves as an important factor balancing the fluence map optimization and leaf sequencing optimization processes at the same time. Consequently, we formulate fluence map optimization models for the trailing (left) leaf synchronized, leading (right) leaf synchronized and the interleaf-motion-constrained non-synchronized leaf sweeping schemes, respectively. In these schemes, the leaves are all swept unidirectionally from left to right. Each of these models is turned into a linearly constrained quadratic programming model which can be solved effectively by the interior point method. The new models are evaluated on two publicly available clinical treatment datasets, a head-and-neck case and a prostate case. As shown by the empirical results, our models perform much better than two recently emerged smoothing models (the total variance smoothing model and the quadratic smoothing model). For all three leaf sweeping schemes, our objective dose deviation functions increase much more slowly than those of the two smoothing models as the TNMU decreases. While keeping plans at a similar conformity level, our new models perform much better at reducing the TNMU.
Optimal Combining Data for Improving Ocean Modeling
2009-01-01
estimating the upper ocean velocity field and mixing characteristics such as relative dispersion and finite-size Lyapunov exponent, (2) constructing ... model with realistic observation characteristics - application of the above method for filling gaps in HF radar measurements - developing fusion methods based on fuzzy logic [2,3] for estimating Lagrangian characteristics such as absolute and relative dispersion - testing the Lagrangian
A Model for Optimal Constrained Adaptive Testing.
ERIC Educational Resources Information Center
van der Linden, Wim J.; Reese, Lynda M.
1998-01-01
Proposes a model for constrained computerized adaptive testing in which the information in the test at the trait level (theta) estimate is maximized subject to a number of possible constraints on the content of the test. Test assembly relies on a linear-programming approach. Illustrates the approach through simulation with items from the Law…
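The selection problem can be made concrete with a toy item pool: maximize summed Fisher information at the current theta estimate subject to content constraints. The paper formulates this as a linear program; the sketch below solves the same kind of toy instance by exhaustive search, and the pool, information values, and content areas are all hypothetical.

```python
from itertools import combinations

# Hypothetical item pool: (information at current theta estimate, content area)
pool = [(0.9, "algebra"), (0.7, "algebra"), (0.8, "geometry"),
        (0.4, "geometry"), (0.6, "logic"), (0.5, "logic")]

def best_item_set(pool, n_items, min_per_area):
    """Pick n_items maximizing summed information subject to minimum counts
    per content area (solved exhaustively here; the paper uses 0-1 LP)."""
    best, best_info = None, -1.0
    for combo in combinations(range(len(pool)), n_items):
        counts = {}
        for i in combo:
            counts[pool[i][1]] = counts.get(pool[i][1], 0) + 1
        if all(counts.get(a, 0) >= k for a, k in min_per_area.items()):
            info = sum(pool[i][0] for i in combo)
            if info > best_info:
                best, best_info = combo, info
    return best, best_info
```

With three items required and at least one per area, the search returns the most informative item from each area.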
Computational Model Optimization for Enzyme Design Applications
2007-11-02
naturally occurring E. coli chorismate mutase (EcCM) enzyme through computational design. Although the stated milestone of creating a novel... chorismate mutase (CM) was not achieved, the enhancement of the underlying computational model through the development of the two-body PB method will facilitate the future design of novel protein catalysts.
Optimization Model for Reducing Emissions of Greenhouse ...
The EPA Vehicle Greenhouse Gas (VGHG) model is used to apply various technologies to a defined set of vehicles in order to meet a specified GHG emission target, and then to calculate the costs and benefits of doing so, facilitating analysis of the costs and benefits of controlling GHG emissions from cars and trucks.
Optimal Experimental Design for Model Discrimination
ERIC Educational Resources Information Center
Myung, Jay I.; Pitt, Mark A.
2009-01-01
Models of a psychological process can be difficult to discriminate experimentally because it is not easy to determine the values of the critical design variables (e.g., presentation schedule, stimulus structure) that will be most informative in differentiating them. Recent developments in sampling-based search methods in statistics make it…
Geomagnetic field modeling by optimal recursive filtering
NASA Technical Reports Server (NTRS)
1980-01-01
Data sets selected for mini-batches and the software modifications required for processing these sets are described. Initial analysis was performed on minibatch field model recovery. Studies are being performed to examine the convergence of the solutions and the maximum expansion order the data will support in the constant and secular terms.
Optimal Combining Data for Improving Ocean Modeling
2012-09-30
regional circulation models for accurately estimating the upper ocean velocity field, subsurface thermohaline structure, and mixing characteristics (2) ... thermohaline patterns and, second, separating space and time variability in glider observations for fast-changing thermohaline structures (e.g. mesoscale fronts) ... and tested three different procedures. The first one included a parameterization of thermohaline patterns following an estimation of parameters
A Modified Particle Swarm Optimization Technique for Finding Optimal Designs for Mixture Models
Wong, Weng Kee; Chen, Ray-Bing; Huang, Chien-Chih; Wang, Weichung
2015-01-01
Particle Swarm Optimization (PSO) is a meta-heuristic algorithm that has been shown to be successful in solving a wide variety of real and complicated optimization problems in engineering and computer science. This paper introduces a projection-based PSO technique, named ProjPSO, to efficiently find different types of optimal designs, or nearly optimal designs, for mixture models with and without constraints on the components, and also for related models, like the log contrast models. We also compare the modified PSO's performance with Fedorov's algorithm, a popular algorithm used to generate optimal designs, the Cocktail algorithm, and the recent algorithm proposed by [1]. PMID:26091237
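The core of a projection-based PSO for mixture models is keeping every particle on the probability simplex (nonnegative component weights summing to one) after each velocity update. The sketch below pairs a standard PSO loop with an exact Euclidean simplex projection; the hyperparameters are conventional defaults and the toy objective (minimized at the uniform mixture) is illustrative, not a real design criterion.

```python
import random

def project_simplex(v):
    """Euclidean projection of v onto {w : w_i >= 0, sum w_i = 1}
    (sort-based algorithm)."""
    u = sorted(v, reverse=True)
    css, tau = 0.0, 0.0
    for i, ui in enumerate(u, 1):
        css += ui
        t = (css - 1.0) / i
        if ui - t > 0:
            tau = t
    return [max(x - tau, 0.0) for x in v]

def proj_pso(f, dim, n_particles=20, iters=200, seed=0):
    rng = random.Random(seed)
    X = [project_simplex([rng.random() for _ in range(dim)])
         for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]
    Pf = [f(x) for x in X]
    g = min(range(n_particles), key=lambda i: Pf[i])
    G, Gf = P[g][:], Pf[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                V[i][d] = (0.7 * V[i][d]
                           + 1.5 * rng.random() * (P[i][d] - X[i][d])
                           + 1.5 * rng.random() * (G[d] - X[i][d]))
                X[i][d] += V[i][d]
            X[i] = project_simplex(X[i])  # keep mixture weights feasible
            fx = f(X[i])
            if fx < Pf[i]:
                P[i], Pf[i] = X[i][:], fx
                if fx < Gf:
                    G, Gf = X[i][:], fx
    return G, Gf
```

On `f(w) = sum(w_i^2)` over a 4-component simplex, the swarm converges toward the uniform mixture with objective value 1/4.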
Pavement maintenance optimization model using Markov Decision Processes
NASA Astrophysics Data System (ADS)
Mandiartha, P.; Duffield, C. F.; Razelan, I. S. b. M.; Ismail, A. b. H.
2017-09-01
This paper presents an optimization model for selecting pavement maintenance interventions using the theory of Markov Decision Processes (MDP). The MDP developed in this paper has several characteristics that distinguish it from other similar studies and optimization models intended for pavement maintenance policy development: the direct inclusion of constraints in the MDP formulation, the use of the average-cost MDP method, and a policy development process based on the dual linear programming solution. The limited information and discussion available on these matters for stochastic optimization models in road network management motivate this study. The paper uses a data set acquired from the road authorities of the state of Victoria, Australia, to test the model, and recommends steps for computing the MDP-based stochastic optimization model, leading to the development of an optimum pavement maintenance policy.
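An average-cost MDP of this kind can be solved by dual linear programming, as in the paper, or equivalently by relative value iteration, which is what the sketch below uses on a hypothetical three-state pavement chain (all costs and transition probabilities are invented for illustration).

```python
def solve_avg_cost_mdp(costs, trans, iters=500):
    """Relative value iteration for an average-cost MDP.
    costs[s][a] = expected one-step cost; trans[s][a] = transition row.
    Returns the optimal average cost (gain) and a greedy policy."""
    n = len(costs)
    h = [0.0] * n
    g = 0.0
    for _ in range(iters):
        w = [min(costs[s][a] + sum(p * h[j] for j, p in enumerate(trans[s][a]))
                 for a in range(len(costs[s])))
             for s in range(n)]
        g = w[0]                       # gain estimate, anchored at state 0
        h = [ws - g for ws in w]       # relative (bias) values
    policy = [min(range(len(costs[s])),
                  key=lambda a: costs[s][a]
                  + sum(p * h[j] for j, p in enumerate(trans[s][a])))
              for s in range(n)]
    return g, policy

# Hypothetical chain: states 0 good, 1 fair, 2 poor.
# Action 0 = do nothing (user cost grows as the pavement deteriorates),
# action 1 = repair (fixed cost, resets the section to good condition).
costs = [[0.0, 3.0], [1.0, 3.0], [4.0, 3.0]]
trans = [[[0.8, 0.2, 0.0], [1.0, 0.0, 0.0]],
         [[0.0, 0.7, 0.3], [1.0, 0.0, 0.0]],
         [[0.0, 0.0, 1.0], [1.0, 0.0, 0.0]]]
g, policy = solve_avg_cost_mdp(costs, trans)
```

For these numbers the optimal policy leaves good pavement alone and repairs both fair and poor sections, with a long-run average cost of 0.5 per period.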
Design Optimization of Coronary Stent Based on Finite Element Models
Qiu, Tianshuang; Zhu, Bao; Wu, Jinying
2013-01-01
This paper presents an effective optimization method that uses a Kriging surrogate model combined with modified rectangular grid sampling to reduce the stent dogboning effect during the expansion process. An infill sampling criterion named expected improvement (EI) is used to balance local and global searches in the optimization iterations. Four commonly used finite element models of stent dilation were used to investigate the stent dogboning rate. Thrombosis models of three typical shapes were built to test the effectiveness of the optimization results. Numerical results show that the two finite element models in which the stent is dilated by pressure applied inside the balloon are suitable: the model that includes the artery and plaque yields an optimal stent with better expansion behavior, while the model without the artery and plaque is more efficient and requires less computation. PMID:24222743
Model Identification and Optimization for Operational Simulation
2004-08-01
location with known latitude and longitude. For the case of Red, one "dummy instance" of each Force type (unique name) has been created to provide ... a UML specification (Booch, et al., 1999) for the domain objects implemented in MATLAB. The domain objects contain all of the problem domain ... model shown below is an image of the UML specification in the software design tool, Enterprise Architect (filename af.eap). A free, read-only
Nonlinear model predictive control based on collective neurodynamic optimization.
Yan, Zheng; Wang, Jun
2015-04-01
In general, nonlinear model predictive control (NMPC) entails solving a sequential global optimization problem with a nonconvex cost function or constraints. This paper presents a novel collective neurodynamic optimization approach to NMPC without linearization. Utilizing a group of recurrent neural networks (RNNs), the proposed collective neurodynamic optimization approach searches for optimal solutions to global optimization problems by emulating brainstorming. Each RNN is guaranteed to converge to a candidate solution by performing constrained local search. By exchanging information and iteratively improving the starting and restarting points of each RNN using the information of local and global best known solutions in a framework of particle swarm optimization, the group of RNNs is able to reach global optimal solutions to global optimization problems. The essence of the proposed collective neurodynamic optimization approach lies in the integration of capabilities of global search and precise local search. The simulation results of many cases are discussed to substantiate the effectiveness and the characteristics of the proposed approach.
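The division of labor described above, precise local search per agent plus swarm-style exchange of starting points, can be sketched with plain gradient descent standing in for each recurrent neural network's constrained local search. The test function, the stand-in dynamics, and all hyperparameters below are illustrative, not from the paper.

```python
import math
import random

def f(x):
    # nonconvex test objective with many local minima (global minimum at x = 0)
    return x * x + 10.0 * (1.0 - math.cos(2.0 * math.pi * x))

def local_search(x, lr=0.002, steps=200):
    # gradient descent stands in for one RNN's convergent local search
    for _ in range(steps):
        grad = 2.0 * x + 20.0 * math.pi * math.sin(2.0 * math.pi * x)
        x -= lr * grad
    return x

def collective_search(n_agents=10, rounds=15, seed=1):
    rng = random.Random(seed)
    starts = [rng.uniform(-5.0, 5.0) for _ in range(n_agents)]
    pbest, pval = starts[:], [float("inf")] * n_agents
    gbest, gval = 0.0, float("inf")
    vel = [0.0] * n_agents
    for _ in range(rounds):
        for i in range(n_agents):
            x = local_search(starts[i])     # each agent settles into a candidate
            if f(x) < pval[i]:
                pbest[i], pval[i] = x, f(x)
            if f(x) < gval:
                gbest, gval = x, f(x)
        for i in range(n_agents):           # PSO-style restart-point update
            vel[i] = (0.7 * vel[i]
                      + 1.5 * rng.random() * (pbest[i] - starts[i])
                      + 1.5 * rng.random() * (gbest - starts[i]))
            starts[i] += vel[i]
    return gbest, gval
```

The swarm update steers later restarts toward the best basins found so far, which is the "brainstorming" ingredient that plain multi-start local search lacks.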
Simple model for predicting microchannel heat sink performance and optimization
NASA Astrophysics Data System (ADS)
Tsai, Tsung-Hsun; Chein, Reiyu
2012-05-01
A simple model was established to predict microchannel heat sink performance based on energy balance. Both hydrodynamically and thermally developed effects were included. Comparisons with the experimental data show that this model provides satisfactory thermal resistance prediction. The model is further extended to carry out geometric optimization of the microchannel heat sink. The results from the simple model are in good agreement with those obtained from three-dimensional simulations.
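A minimal version of such an energy-balance model treats the sink as thermal resistances in series: a convective term that shrinks as extra channels add wetted area, and a caloric term that grows as a fixed pumping capacity is split among channels, giving an interior optimum. All parameter values and the flow-scaling assumption below are invented for illustration.

```python
import math

def thermal_resistance(n, base=0.01, height=5e-4, h=2.0e4,
                       rho=1000.0, cp=4180.0, q_pump=1e-6):
    """Series-resistance model (illustrative values, SI units):
    convective term 1/(h*A) falls as channel count n adds side-wall area,
    while the caloric term 1/(rho*cp*Q) rises under an assumed
    pressure-limited flow scaling Q ~ 1/sqrt(n)."""
    area = n * 2.0 * height * base      # total wetted side-wall area
    q = q_pump / math.sqrt(n)           # assumed per-sink flow-rate scaling
    return 1.0 / (h * area) + 1.0 / (rho * cp * q)

# brute-force geometric optimization over the channel count
best_n = min(range(1, 201), key=thermal_resistance)
```

With these numbers the two terms balance at 12 channels; any real design would replace both terms with correlations for developed flow, as in the paper.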
A generalized flow path model for water distribution optimization
NASA Astrophysics Data System (ADS)
Hsu, N.; Cheng, W.; Yeh, W. W.
2008-12-01
A generalized flow path model is developed for optimizing a water distribution system. The model simultaneously describes a water distribution system in two parts: (1) the water delivery relationships between suppliers and receivers and (2) the physical water delivery system. In the first part, the model considers waters from different suppliers as multiple commodities. This helps the model to clearly describe water deliveries by identifying the relationships between suppliers and receivers. The second part characterizes a physical water distribution network by all possible flow paths. The advantages of the proposed model are that: (1) it is a generalized methodology to optimize water distribution, delivery scheduling, water trade, water transfer, and water exchange under existing reservoir operation rules, contracts, and agreements; (2) it can consider water as multiple commodities if needed; and (3) no simplifications are made for either the physical system or the delivery relationships. The model can be used as a tool for decision making for scheduling optimization. The model optimizes not only the suppliers to each receiver but also their associated flow paths for supplying water. This characteristic leads to the optimum solution that contains the optimal scheduling results and detailed information of water distribution in the physical system. That is, the water right owner, water quantity and its associated flow path of each delivery action are represented explicitly in the results rather than merely an optimized total flow quantity in each arc of a distribution network. The proposed model is first verified by a hypothetical water distribution system. Then, the model is applied to the water distribution system of the Tou-Qian River Basin in northern Taiwan. The results show that the flow path model has the ability to optimize the quantity of each water delivery, the associated flow paths of the delivery, and the strategies of water transfer while considering
Optimality models in the age of experimental evolution and genomics
Bull, J. J.; Wang, I.-N.
2010-01-01
Optimality models have been used to predict evolution of many properties of organisms. They typically neglect genetic details, whether by necessity or design. This omission is a common source of criticism, and although this limitation of optimality is widely acknowledged, it has mostly been defended rather than evaluated for its impact. Experimental adaptation of model organisms provides a new arena for testing optimality models and for simultaneously integrating genetics. First, an experimental context with a well-researched organism allows dissection of the evolutionary process to identify causes of model failure – whether the model is wrong about genetics or selection. Second, optimality models provide a meaningful context for the process and mechanics of evolution, and thus may be used to elicit realistic genetic bases of adaptation – an especially useful augmentation to well-researched genetic systems. A few studies of microbes have begun to pioneer this new direction. Incompatibility between the assumed and actual genetics has been demonstrated to be the cause of model failure in some cases. More interestingly, evolution at the phenotypic level has sometimes matched prediction even though the adaptive mutations defy mechanisms established by decades of classic genetic studies. Integration of experimental evolutionary tests with genetics heralds a new wave for optimality models and their extensions that does not merely emphasize the forces driving evolution. PMID:20646132
First-Order Frameworks for Managing Models in Engineering Optimization
NASA Technical Reports Server (NTRS)
Alexandrov, Natalia M.; Lewis, Robert Michael
2000-01-01
Approximation/model management optimization (AMMO) is a rigorous methodology for attaining solutions of high-fidelity optimization problems with minimal expense in high-fidelity function and derivative evaluation. First-order AMMO frameworks allow for a wide variety of models and underlying optimization algorithms. Recent demonstrations with aerodynamic optimization achieved three-fold savings in terms of high-fidelity function and derivative evaluations in the case of variable-resolution models, and five-fold savings in the case of variable-fidelity physics models. The savings are problem dependent but certain trends are beginning to emerge. We give an overview of the first-order frameworks, current computational results, and an idea of the scope of applicability of the first-order frameworks.
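The first-order consistency at the heart of AMMO can be shown in a few lines: an additive correction forces the low-fidelity model to match the high-fidelity value and gradient at the trust-region center. The sketch below uses a toy 1-D pair of models (not from the paper).

```python
def corrected_model(f_hi, f_lo, g_hi, g_lo, x0):
    """First-order additive correction: the returned surrogate matches the
    high-fidelity value and gradient at the trust-region center x0."""
    a = f_hi(x0) - f_lo(x0)     # value mismatch at the center
    b = g_hi(x0) - g_lo(x0)     # gradient mismatch at the center
    return lambda x: f_lo(x) + a + b * (x - x0)

# toy pair: x^4 as the high-fidelity model, x^2 as the low-fidelity one
m = corrected_model(lambda x: x**4, lambda x: x**2,
                    lambda x: 4 * x**3, lambda x: 2 * x, x0=1.0)
```

The corrected surrogate agrees with x^4 in both value (1) and slope (4) at x0 = 1, which is the condition that lets a trust-region framework guarantee convergence while optimizing the cheap model.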
Modeling, Instrumentation, Automation, and Optimization of Water Resource Recovery Facilities.
Sweeney, Michael W; Kabouris, John C
2016-10-01
A review of the literature published in 2015 on topics relating to water resource recovery facilities (WRRF) in the areas of modeling, automation, measurement and sensors and optimization of wastewater treatment (or water resource reclamation) is presented.
Modeling, Instrumentation, Automation, and Optimization of Water Resource Recovery Facilities.
Sweeney, Michael W; Kabouris, John C
2017-10-01
A review of the literature published in 2016 on topics relating to water resource recovery facilities (WRRF) in the areas of modeling, automation, measurement and sensors and optimization of wastewater treatment (or water resource reclamation) is presented.
Research on web performance optimization principles and models
NASA Astrophysics Data System (ADS)
Wang, Xin
2013-03-01
With the rapid development of the Internet, Web performance problems have become increasingly prominent, making Web performance optimization unavoidable. The first principle of Web performance optimization is to understand the trade-offs: every gain has a cost, and returns diminish; starting the optimization at the highest level of the system yields the largest gains. The technical models for improving Web performance are: sharing costs, caching, profiling, parallel processing, and simplified processing. Based on this study, key Web performance optimization recommendations are given; improving Web performance and accelerating efficient use of the Internet are of significant importance.
Optimizing Tsunami Forecast Model Accuracy
NASA Astrophysics Data System (ADS)
Whitmore, P.; Nyland, D. L.; Huang, P. Y.
2015-12-01
Recent tsunamis provide a means to determine the accuracy that can be expected of real-time tsunami forecast models. Forecast accuracy using two different tsunami forecast models are compared for seven events since 2006 based on both real-time application and optimized, after-the-fact "forecasts". Lessons learned by comparing the forecast accuracy determined during an event to modified applications of the models after-the-fact provide improved methods for real-time forecasting for future events. Variables such as source definition, data assimilation, and model scaling factors are examined to optimize forecast accuracy. Forecast accuracy is also compared for direct forward modeling based on earthquake source parameters versus accuracy obtained by assimilating sea level data into the forecast model. Results show that including assimilated sea level data into the models increases accuracy by approximately 15% for the events examined.
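One of the simplest scaling-factor optimizations mentioned above has a closed form: the amplitude that best fits a modeled sea-level series to an observed record in the least-squares sense. This sketch is generic, not the forecast models' actual assimilation scheme.

```python
def optimal_scale(modeled, observed):
    """Least-squares amplitude scale alpha minimizing
    sum((alpha * modeled_t - observed_t)^2); the closed form follows from
    setting the derivative with respect to alpha to zero."""
    return (sum(m * o for m, o in zip(modeled, observed))
            / sum(m * m for m in modeled))
```

Applied to a forecast that has the right waveform but the wrong amplitude, this single factor is the kind of correction that sea-level assimilation supplies in real time.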
Reducing long-term remedial costs by transport modeling optimization.
Becker, David; Minsker, Barbara; Greenwald, Robert; Zhang, Yan; Harre, Karla; Yager, Kathleen; Zheng, Chunmiao; Peralta, Richard
2006-01-01
The Department of Defense (DoD) Environmental Security Technology Certification Program and the Environmental Protection Agency sponsored a project to evaluate the benefits and utility of contaminant transport simulation-optimization algorithms against traditional (trial and error) modeling approaches. Three pump-and-treat facilities operated by the DoD were selected for inclusion in the project. Three optimization formulations were developed for each facility and solved independently by three modeling teams (two using simulation-optimization algorithms and one applying trial-and-error methods). The results clearly indicate that simulation-optimization methods are able to search a wider range of well locations and flow rates and identify better solutions than current trial-and-error approaches. The solutions found were 5% to 50% better than those obtained using trial-and-error (measured using optimal objective function values), with an average improvement of approximately 20%. This translated into potential savings ranging from 600,000 dollars to 10,000,000 dollars for the three sites. In nearly all cases, the cost savings easily outweighed the costs of the optimization. To reduce computational requirements, in some cases the simulation-optimization groups applied multiple mathematical algorithms, solved a series of modified subproblems, and/or fit "meta-models" such as neural networks or regression models to replace time-consuming simulation models in the optimization algorithm. The optimal solutions did not account for the uncertainties inherent in the modeling process. This project illustrates that transport simulation-optimization techniques are practical for real problems. However, applying the techniques in an efficient manner requires expertise and should involve iterative modification to the formulations based on interim results.
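The meta-model idea, fitting a cheap regression to a handful of expensive simulation runs and optimizing the fit instead, can be sketched in a few lines. The cost curve, sample points, and quadratic form below are all hypothetical stand-ins for a transport simulation and the neural or regression meta-models used in the study.

```python
def fit_quadratic(xs, ys):
    """Least-squares fit of y = c0 + c1*x + c2*x^2 via the normal
    equations, solved by Gaussian elimination with partial pivoting."""
    A = [[sum(x ** (i + j) for x in xs) for j in range(3)] for i in range(3)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(3)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 3):
            m = A[r][col] / A[col][col]
            for c in range(col, 3):
                A[r][c] -= m * A[col][c]
            b[r] -= m * b[col]
    coef = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        coef[r] = (b[r] - sum(A[r][c] * coef[c] for c in range(r + 1, 3))) / A[r][r]
    return coef

def expensive_cost(q):
    # hypothetical pump-and-treat cost versus pumping rate q
    return (q - 7.0) ** 2 + 10.0

xs = [0, 2, 4, 6, 8, 10, 12, 14]          # a few "simulation" runs
ys = [expensive_cost(x) for x in xs]
coef = fit_quadratic(xs, ys)               # cheap surrogate of the cost
q_best = min((i / 100 for i in range(1401)),
             key=lambda q: coef[0] + coef[1] * q + coef[2] * q * q)
```

Optimizing the surrogate on a fine grid recovers the minimizing pumping rate without further calls to the expensive function, which is exactly the saving the project's meta-model teams exploited.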
Optimization of a Thermodynamic Model Using a Dakota Toolbox Interface
NASA Astrophysics Data System (ADS)
Cyrus, J.; Jafarov, E. E.; Schaefer, K. M.; Wang, K.; Clow, G. D.; Piper, M.; Overeem, I.
2016-12-01
Scientific modeling of the Earth's physical processes is an important driver of modern science. The behavior of these scientific models is governed by a set of input parameters. It is crucial to choose accurate input parameters that also preserve the corresponding physics being simulated in the model. In order to effectively simulate real-world processes, the model's outputs must be close to the observed measurements. To achieve this, input parameters are tuned until the objective function, the error between the simulation model outputs and the observed measurements, is minimized. We developed an auxiliary package which serves as a Python interface between the user and DAKOTA. The package makes it easy for the user to conduct parameter space explorations, parameter optimizations, and sensitivity analyses while tracking and storing results in a database. The ability to perform these analyses via a Python library also allows users to combine analysis techniques, for example finding an approximate equilibrium with optimization and then immediately exploring the space around it. We used the interface to calibrate input parameters for the heat-flow model commonly used in permafrost science. We performed optimization on the first three layers of the permafrost model, each with two thermal conductivity coefficients as input parameters. Results of the parameter space explorations indicate that the objective function does not always have a unique minimum. We found that gradient-based optimization works best for objective functions with a single minimum. Otherwise, we employ more advanced Dakota methods such as genetic optimization and mesh-based convergence in order to find the optimal input parameters. We were able to recover six initially unknown thermal conductivity parameters to within 2% of their known values. Our initial tests indicate that the developed interface for the Dakota toolbox could be used to perform
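The calibration loop described above, run the model, measure the misfit against observations, adjust a conductivity parameter, repeat, can be sketched with a one-layer steady-conduction stand-in for the heat-flow model and a derivative-free line search (all physical values below are illustrative).

```python
def temp_at_base(k, t_top=-2.0, flux=0.06, thickness=50.0):
    """Steady 1-D conduction through one layer: T = T_top + q*L/k.
    An illustrative stand-in for the permafrost heat-flow model."""
    return t_top + flux * thickness / k

K_TRUE = 2.0                        # "unknown" thermal conductivity
observed = temp_at_base(K_TRUE)     # synthetic observation

def misfit(k):
    # objective function: squared error between model output and observation
    return (temp_at_base(k) - observed) ** 2

def ternary_min(f, lo, hi, iters=100):
    """Derivative-free search for a unimodal objective; it plays the role a
    gradient-based optimizer does when the misfit has a single minimum."""
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if f(m1) < f(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2
```

Because the model output is monotone in k, the misfit has one minimum and the search recovers the true conductivity; the multi-minimum cases the abstract mentions are what force a switch to genetic methods.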
A Preliminary Ship Design Model for Cargo Throughput Optimization
2014-06-01
...payload, and range that would give the optimal rate of cargo delivery, or throughput, in a given scenario. A physics-based mathematical model is developed to display the inter
Decision Support Model for Optimal Management of Coastal Gate
NASA Astrophysics Data System (ADS)
Ditthakit, Pakorn; Chittaladakorn, Suwatana
2010-05-01
Coastal areas are intensely settled by human beings owing to their fertility of natural resources. At present, however, those areas face water scarcity problems: inadequate water and poor water quality as a result of saltwater intrusion and inappropriate land-use management. To solve these problems, several measures have been exploited. Coastal gate construction is a structural measure widely applied in several countries, and it requires a plan for suitably operating the gates. Coastal gate operation is a complicated task that usually involves managing multiple purposes which generally conflict with one another. This paper delineates the methodology and the theories used to develop a decision support model for coastal gate operation scheduling. The developed model couples a simulation model with an optimization model. A weighting optimization technique based on Differential Evolution (DE) was selected for solving the multiple-objective problem. The hydrodynamic and water quality models were invoked repeatedly while searching for the optimal gate operations. In addition, two forecasting models, an autoregressive (AR) model and a harmonic analysis (HA) model, were applied to forecast water levels and tide levels, respectively. To demonstrate the applicability of the developed model, it was applied to plan the operations for a hypothetical configuration of the Pak Phanang coastal gate system, located in Nakhon Si Thammarat province in southern Thailand. The proposed model was found to satisfactorily assist decision-makers in operating coastal gates under various environmental, ecological and hydraulic conditions.
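A minimal differential evolution loop with a weighted-sum objective is sketched below. The two "gate objective" terms are invented placeholders for conflicting goals such as salinity control and drainage, and the DE hyperparameters are conventional defaults, not the paper's.

```python
import random

def differential_evolution(f, bounds, np_=20, F=0.8, CR=0.9, gens=100, seed=3):
    """DE/rand/1/bin with bound clipping and greedy selection."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(np_)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(np_):
            a, b, c = rng.sample([j for j in range(np_) if j != i], 3)
            jrand = rng.randrange(dim)
            trial = []
            for d in range(dim):
                if rng.random() < CR or d == jrand:
                    v = pop[a][d] + F * (pop[b][d] - pop[c][d])
                else:
                    v = pop[i][d]
                lo, hi = bounds[d]
                trial.append(min(max(v, lo), hi))
            ft = f(trial)
            if ft <= fit[i]:          # greedy one-to-one replacement
                pop[i], fit[i] = trial, ft
    best = min(range(np_), key=lambda i: fit[i])
    return pop[best], fit[best]

def gate_cost(x):
    # weighted sum of two conflicting, hypothetical objectives:
    # a dry-season opening favoring salinity control and a wet-season
    # opening favoring drainage (targets 0.3 and 0.8 are made up)
    opening_dry, opening_wet = x
    return 0.6 * (opening_dry - 0.3) ** 2 + 0.4 * (opening_wet - 0.8) ** 2
```

In the real model each candidate gate schedule would be scored by the hydrodynamic and water quality simulations rather than by a closed-form cost.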
Surrogate-Based Optimization of Biogeochemical Transport Models
NASA Astrophysics Data System (ADS)
Prieß, Malte; Slawig, Thomas
2010-09-01
First approaches towards a surrogate-based optimization method for a one-dimensional marine biogeochemical model of NPZD type are presented. The model, developed by Oschlies and Garcon [1], simulates the distribution of nitrogen, phytoplankton, zooplankton and detritus in a water column and is driven by ocean circulation data. A key issue is to minimize the misfit between the model output and given observational data. Our aim is to reduce the overall optimization cost by avoiding expensive function and derivative evaluations, using a surrogate model in place of the high-fidelity model in focus. This becomes particularly important for more complex three-dimensional models. We analyse a coarsening of the discretization of the model equations as one way to create such a surrogate. Here the numerical stability crucially depends upon the discrete step size in time and space and the biochemical terms. We show that for given model parameters the level of grid coarsening can be chosen accordingly, yielding a stable and satisfactory surrogate. As one example of a surrogate-based optimization method, we present results of the Aggressive Space Mapping technique (developed by John W. Bandler [2, 3]) applied to the optimization of this one-dimensional biogeochemical transport model.
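The grid-coarsening idea can be illustrated without a biogeochemical model: let a fine quadrature play the high-fidelity model and a coarse one the surrogate, and calibrate a parameter against each. The response curve and all values below are illustrative; the coarse calibration lands close to, but slightly off, the fine one, which is exactly the misalignment that space mapping is designed to correct.

```python
import math

def model_output(k, n):
    """Toy 'simulation': trapezoid quadrature of an exponential response
    with n cells. Large n plays the high-fidelity model, small n the
    coarse-grid surrogate."""
    h = 1.0 / n
    ys = [math.exp(-k * i * h) for i in range(n + 1)]
    return h * (sum(ys) - 0.5 * (ys[0] + ys[-1]))

def calibrate(target, n, lo=0.1, hi=5.0, iters=80):
    """Fit k so the n-cell model matches target (output is monotone
    decreasing in k, so bisection suffices)."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        if model_output(mid, n) > target:
            lo = mid      # output too high -> k too small
        else:
            hi = mid
    return (lo + hi) / 2

target = model_output(2.0, 1000)   # "observation" from the fine model, k = 2
k_fine = calibrate(target, 1000)   # calibrating the fine model recovers k
k_coarse = calibrate(target, 10)   # coarse surrogate lands nearby, biased
```

The small bias in `k_coarse` comes purely from discretization error, mirroring how a coarsened NPZD grid remains useful as long as it stays stable and close.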
An Optimality-Based Fully-Distributed Watershed Ecohydrological Model
NASA Astrophysics Data System (ADS)
Chen, L., Jr.
2015-12-01
Watershed ecohydrological models are essential tools for assessing the impact of climate change and human activities on hydrological and ecological processes for watershed management. Existing models can be classified as empirically based, quasi-mechanistic, and mechanistic models. The empirically based and quasi-mechanistic models usually adopt empirical or quasi-empirical equations, which may be incapable of capturing the non-stationary dynamics of target processes. Mechanistic models that are designed to represent process feedbacks may capture vegetation dynamics, but often have more demanding spatial and temporal parameterization requirements to represent vegetation physiological variables. In recent years, optimality-based ecohydrological models have been proposed, which have the advantage of reducing the need for model calibration by assuming critical aspects of system behavior. However, this work to date has been limited to the plot scale, considering only one-dimensional exchange of soil moisture, carbon and nutrients in vegetation parameterization, without lateral hydrological transport. Conceptual isolation of individual ecosystem patches from upslope and downslope flow paths compromises the ability to represent and test the relationships between hydrology and vegetation in mountainous and hilly terrain. This work presents an optimality-based watershed ecohydrological model, which incorporates the influence of lateral hydrological processes on the hydrological flow-path patterns that emerge from the optimality assumption. The model has been tested in the Walnut Gulch watershed and shows good agreement with observed temporal and spatial patterns of evapotranspiration (ET) and gross primary productivity (GPP). The spatial variability of ET and GPP produced by the model matches the spatial distributions of TWI, SCA, and slope well over the area. Compared with the one-dimensional vegetation optimality model (VOM), we find that the distributed VOM (DisVOM) produces more reasonable spatial
Jet Pump Design Optimization by Multi-Surrogate Modeling
NASA Astrophysics Data System (ADS)
Mohan, S.; Samad, A.
2014-09-01
A basic way to reduce design and optimization time via surrogate modeling is to select the right type of surrogate model for a particular problem, one with better accuracy and prediction capability. A multi-surrogate approach can protect a designer from selecting a wrong surrogate that has high uncertainty in the optimal zone of the design space. Numerical analysis and optimization of a jet pump via multi-surrogate modeling are reported in this work. Design variables including the area ratio, mixing tube length-to-diameter ratio, and setback ratio were introduced to increase the hydraulic efficiency of the jet pump. Reynolds-averaged Navier-Stokes equations were solved and responses were computed. Among the different surrogate models, the Shepard-function-based surrogate showed better accuracy in data fitting, while the radial basis neural network produced the highest efficiency enhancement. The efficiency enhancement was due to the reduction of losses in the flow passage.
Jet Pump Design Optimization by Multi-Surrogate Modeling
NASA Astrophysics Data System (ADS)
Mohan, S.; Samad, A.
2015-01-01
A basic approach to reduce the design and optimization time via surrogate modeling is to select a right type of surrogate model for a particular problem, where the model should have better accuracy and prediction capability. A multi-surrogate approach can protect a designer to select a wrong surrogate having high uncertainty in the optimal zone of the design space. Numerical analysis and optimization of a jet pump via multi-surrogate modeling have been reported in this work. Design variables including area ratio, mixing tube length to diameter ratio and setback ratio were introduced to increase the hydraulic efficiency of the jet pump. Reynolds-averaged Navier-Stokes equations were solved and responses were computed. Among different surrogate models, Sheppard function based surrogate shows better accuracy in data fitting while the radial basis neural network produced highest enhanced efficiency. The efficiency enhancement was due to the reduction of losses in the flow passage.
Portfolio optimization for index tracking modelling in Malaysia stock market
NASA Astrophysics Data System (ADS)
Siew, Lam Weng; Jaaman, Saiful Hafizah; Ismail, Hamizun
2016-06-01
Index tracking is an investment strategy in portfolio management which aims to construct an optimal portfolio that generates a mean return similar to the stock market index mean return without purchasing all of the stocks that make up the index. The objective of this paper is to construct an optimal portfolio using an optimization model which adopts a regression approach to tracking the benchmark stock market index return. In this study, the data consist of weekly prices of stocks in the Malaysia market index, the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI), from January 2010 until December 2013. The results show that the optimal portfolio is able to track the FBMKLCI Index with a minimum tracking error of 1.0027% and a 0.0290% excess mean return over the mean return of the FBMKLCI Index. The significance of this study is the construction of an optimal portfolio using an optimization model that adopts a regression approach to track the stock market index without purchasing all index components.
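The regression approach to index tracking amounts to choosing weights that minimize the root-mean-square deviation between portfolio and index returns, with weights nonnegative and summing to one. The sketch below uses synthetic weekly returns and a coarse grid search over a three-stock simplex rather than the paper's data and solver.

```python
import math
import random

rng = random.Random(7)
T = 52  # weeks of returns
# synthetic stock returns; the "index" is a fixed basket plus small noise
stocks = [[rng.gauss(0.002, 0.02) for _ in range(T)] for _ in range(3)]
true_w = (0.5, 0.3, 0.2)
index = [sum(w * s[t] for w, s in zip(true_w, stocks)) + rng.gauss(0, 0.001)
         for t in range(T)]

def tracking_error(w):
    """RMS deviation of portfolio return from index return."""
    diffs = [sum(wi * s[t] for wi, s in zip(w, stocks)) - index[t]
             for t in range(T)]
    return math.sqrt(sum(d * d for d in diffs) / T)

# grid search over the weight simplex in steps of 0.05
best = min(((a / 20, b / 20, 1 - a / 20 - b / 20)
            for a in range(21) for b in range(21 - a)),
           key=tracking_error)
```

The minimizer of the grid is at least as good as the basket that generated the index, because that basket itself lies on the grid; the paper's regression model plays the same role on real FBMKLCI data.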
Review: Optimization methods for groundwater modeling and management
NASA Astrophysics Data System (ADS)
Yeh, William W.-G.
2015-09-01
Optimization methods have been used in groundwater modeling as well as for the planning and management of groundwater systems. This paper reviews and evaluates the various optimization methods that have been used for solving the inverse problem of parameter identification (estimation), experimental design, and groundwater planning and management. Various model selection criteria are discussed, as well as criteria used for model discrimination. The inverse problem of parameter identification concerns the optimal determination of model parameters using water-level observations. In general, the optimal experimental design seeks to find sampling strategies for the purpose of estimating the unknown model parameters. A typical objective of optimal conjunctive-use planning of surface water and groundwater is to minimize the operational costs of meeting water demand. The optimization methods include mathematical programming techniques such as linear programming, quadratic programming, dynamic programming, stochastic programming, nonlinear programming, and the global search algorithms such as genetic algorithms, simulated annealing, and tabu search. Emphasis is placed on groundwater flow problems as opposed to contaminant transport problems. A typical two-dimensional groundwater flow problem is used to explain the basic formulations and algorithms that have been used to solve the formulated optimization problems.
Towards a hierarchical optimization modeling framework for ...
Background: Bilevel optimization has been recognized as a 2-player Stackelberg game where players are represented as leaders and followers and each pursues its own set of objectives. Hierarchical optimization problems, which are a generalization of bilevel problems, are especially difficult because the optimization is nested, meaning that the objectives of one level depend on solutions to the other levels. We introduce a hierarchical optimization framework for spatially targeting multiobjective green infrastructure (GI) incentive policies under uncertainties related to policy budget, compliance, and GI effectiveness. We demonstrate the utility of the framework using a hypothetical urban watershed, where the levels are characterized by multiple levels of policy makers (e.g., local, regional, national) and policy followers (e.g., landowners, communities), and objectives include minimization of policy cost, implementation cost, and risk; reduction of combined sewer overflow (CSO) events; and improvement in environmental benefits such as reduced nutrient run-off and water availability. Conclusions: While computationally expensive, this hierarchical optimization framework explicitly simulates the interaction between multiple levels of policy makers (e.g., local, regional, national) and policy followers (e.g., landowners, communities) and is especially useful for constructing and evaluating environmental and ecological policy. Using the framework with a hypothetical urban watershed.
Optimization models for degrouping population data.
Bermúdez, Silvia; Blanquero, Rafael
2016-07-01
In certain countries population data are available in grouped form only, usually as quinquennial age groups plus a large open-ended range for the elderly. However, official statistics call for data by individual age since many statistical operations, such as the calculation of demographic indicators, require the use of ungrouped population data. In this paper a number of mathematical models are proposed which, starting from population data given in age groups, enable these ranges to be degrouped into age-specific population values without leaving a fractional part. Unlike other existing procedures for disaggregating demographic data, ours makes it possible to process several years' data simultaneously in a coherent way, and provides accurate results longitudinally as well as transversally. This procedure is also shown to be helpful in dealing with degrouped population data affected by noise, such as those affected by the age-heaping phenomenon.
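The integer-valued degrouping requirement can be illustrated with a largest-remainder split. This is a much simpler device than the paper's optimization models (and not their method): given a quinquennial group total and assumed within-group weights, it returns single-age counts that sum exactly to the total with no fractional part:

```python
# Largest-remainder split of one grouped count into single-age counts.
# This only illustrates the integer-valued degrouping requirement; the
# paper's models are mathematical programs that additionally enforce
# longitudinal and transversal consistency across several years of data.

def degroup(total, weights):
    """Return integer single-age counts that sum exactly to `total`,
    allocated proportionally to the assumed within-group `weights`."""
    s = float(sum(weights))
    shares = [total * w / s for w in weights]
    counts = [int(x) for x in shares]
    remainder = total - sum(counts)
    # Hand the leftover units to the ages with the largest fractional parts.
    order = sorted(range(len(weights)),
                   key=lambda i: shares[i] - counts[i], reverse=True)
    for i in order[:remainder]:
        counts[i] += 1
    return counts

ages = degroup(1003, [1.0, 1.0, 1.0, 1.0, 1.0])  # a uniform 5-year group
print(ages, sum(ages))
```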
Optimization of a new mathematical model for bacterial growth
USDA-ARS?s Scientific Manuscript database
The objective of this research is to optimize a new mathematical equation as a primary model to describe the growth of bacteria under constant temperature conditions. An optimization algorithm was used in combination with a numerical (Runge-Kutta) method to solve the differential form of the new gr...
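A minimal sketch of the approach described: couple a Runge-Kutta integrator with a parameter search to fit the growth-rate parameter of a differential growth model. The logistic form, all parameter values, and the synthetic "observations" below are assumptions for illustration; the paper's new primary model is a different equation.

```python
import math

# Fit the growth rate r of an assumed logistic ODE dN/dt = r*N*(1 - N/K)
# by combining a Runge-Kutta integrator with a simple optimization loop.

def simulate(r, K, n0, times, h=0.01):
    """Integrate dN/dt = r*N*(1 - N/K) with classical 4th-order Runge-Kutta."""
    out, t, n = [], 0.0, n0
    f = lambda x: r * x * (1.0 - x / K)
    for target in times:
        while t < target - 1e-9:
            step = min(h, target - t)
            k1 = f(n)
            k2 = f(n + 0.5 * step * k1)
            k3 = f(n + 0.5 * step * k2)
            k4 = f(n + step * k3)
            n += step / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
            t += step
        out.append(n)
    return out

# Synthetic observations from the closed-form logistic curve with r = 0.5.
K, n0 = 1e9, 1e3
times = [0, 5, 10, 15, 20, 25, 30]
obs = [K / (1 + (K / n0 - 1) * math.exp(-0.5 * t)) for t in times]

# A crude parameter sweep stands in for the paper's optimization algorithm.
sse = lambda r: sum((m - o) ** 2 for m, o in zip(simulate(r, K, n0, times), obs))
best_r = min([0.30 + 0.05 * i for i in range(9)], key=sse)
print(best_r)
```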
A new sensitivity model with blank space for layout optimization*
NASA Astrophysics Data System (ADS)
Wang, Junping; Wu, Yao; Liu, Shigang; Xing, Runsen
2017-06-01
As technology scales advance into the nanometer region, yield has become an increasingly important design metric. Layout optimization can play a critical role in reducing the yield loss caused by local defects. In this paper, we propose a new open-sensitivity-based model that accounts for the blank space around the net, and study the corresponding net optimization. The proposed model is highly practical for selecting the nets to be optimized, and it also resolves the increase in short critical area introduced during open optimization: it reduces the open critical area without producing new short critical area, thereby guaranteeing a decrease in total critical area and achieving an integrated optimization. Experimental results show that, compared with available models, our sensitivity model consumes less time with a concise algorithm and can also handle irregular layouts, which are beyond the scope of other models. Finally, the effectiveness of the new model is verified by experiments on five randomly selected metal layers from a synthesized OpenSparc circuit layout.
Robust and fast nonlinear optimization of diffusion MRI microstructure models.
Harms, R L; Fritz, F J; Tobisch, A; Goebel, R; Roebroeck, A
2017-07-15
Advances in biophysical multi-compartment modeling for diffusion MRI (dMRI) have gained popularity because of greater specificity than DTI in relating the dMRI signal to underlying cellular microstructure. A large range of these diffusion microstructure models have been developed, and each of the popular models comes with its own, often different, optimization algorithm, noise model and initialization strategy to estimate its parameter maps. Since data fit, accuracy and precision are hard to verify, this creates additional challenges to comparability and generalization of results from diffusion microstructure models. In addition, non-linear optimization is computationally expensive, leading to very long run times, which can be prohibitive in large group or population studies. In this technical note we investigate the performance of several optimization algorithms and initialization strategies over a few of the most popular diffusion microstructure models, including NODDI and CHARMED. We evaluate whether a single well-performing optimization approach exists that could be applied to many models and would equate both run time and fit aspects. All models, algorithms and strategies were implemented on the Graphics Processing Unit (GPU) to remove run time constraints, with which we achieve whole brain dataset fits in seconds to minutes. We then evaluated fit, accuracy, precision and run time for different models of differing complexity against three common optimization algorithms and three parameter initialization strategies. Variability of the achieved quality of fit in actual data was evaluated on ten subjects of each of two population studies with a different acquisition protocol. We find that optimization algorithms and multi-step optimization approaches have a considerable influence on performance and stability over subjects and over acquisition protocols. The gradient-free Powell conjugate-direction algorithm was found to outperform other common algorithms in terms of ...
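To illustrate the kind of gradient-free optimizer discussed, the sketch below implements a cyclic coordinate search, a stripped-down relative of the Powell conjugate-direction algorithm (the full method also constructs new conjugate directions from the accumulated moves), applied to a toy two-parameter objective standing in for a microstructure model fit:

```python
def coordinate_search(f, x0, step=1.0, tol=1e-8, max_iter=200):
    """Derivative-free minimization by cyclic line searches along the
    coordinate directions. This is a simplified relative of the Powell
    conjugate-direction algorithm named in the abstract, not the real
    method: Powell's algorithm also builds conjugate search directions."""
    x = list(x0)
    for _ in range(max_iter):
        moved = False
        for i in range(len(x)):
            s = step
            while s > tol:                 # shrinking search along axis i
                for d in (s, -s):
                    trial = list(x)
                    trial[i] += d
                    if f(trial) < f(x):
                        x = trial
                        moved = True
                        break
                else:
                    s *= 0.5
        if not moved:
            break
    return x

# Toy "model fit": recover two parameters of an invented quadratic surface.
f = lambda p: (p[0] - 1.5) ** 2 + 10.0 * (p[1] + 0.5) ** 2
x = coordinate_search(f, [0.0, 0.0])
print(x)
```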
Minimax D-Optimal Designs for Item Response Theory Models.
ERIC Educational Resources Information Center
Berger, Martjin P. F.; King, C. Y. Joy; Wong, Weng Kee
2000-01-01
Proposed minimax designs for item response theory (IRT) models to overcome the problem of local optimality. Compared minimax designs to sequentially constructed designs for the two parameter logistic model. Results show that minimax designs can be nearly as efficient as sequentially constructed designs. (Author/SLD)
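For context, the 2PL item information underlying such design criteria is I(θ) = a²P(θ)(1 − P(θ)). The snippet below (with invented item parameters) verifies numerically that a single item is most informative at θ = b; because the best design point depends on the unknown parameters, local optimality is fragile, which motivates the minimax formulation:

```python
import math

# Item information for the two-parameter logistic (2PL) IRT model:
# I(theta) = a^2 * P * (1 - P), with P = 1 / (1 + exp(-a * (theta - b))).
# The item parameters a = 1.7, b = 0.5 are invented for illustration.

def item_info(theta, a, b):
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

thetas = [-3 + 0.1 * i for i in range(61)]          # grid over ability
best_theta = max(thetas, key=lambda t: item_info(t, a=1.7, b=0.5))
print(best_theta)  # information peaks at theta = b
```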
Modeling and optimization of magnetostrictive actuator amplified by compliant mechanism
NASA Astrophysics Data System (ADS)
Niu, Muqing; Yang, Bintang; Yang, Yikun; Meng, Guang
2017-09-01
Magnetostrictive actuators are commonly used in precision engineering with the advantages of high resolution and fast response. Their limited strokes are always amplified by compliant mechanisms without wear and backlash. This paper proposes a hybrid model for the actuation system considering the coupling of the actuator and the amplifier. The magnetostrictive model, based on the Jiles-Atherton model, is related to the input stiffness of the amplifier when quantifying the magneto-mechanical effects, including stress-dependent magnetization, stress-dependent magnetostriction and ΔE effect. The compliant mechanism model aims at constructing the flexibility matrix with the amplification ratio and input stiffness related to the spring factor of the load. The deformation and structural stress of the amplifier are also dependent on the output strain of magnetostrictive material. Experiments under both free load and spring load conditions have been done to verify the effectiveness of the hybrid model. The proposed model is suitable for parameter optimization and the performance indicators can be precisely quantified. Optimization based on hybrid model is more preferred than optimizing the actuator and amplifier independently for maximum output displacement. Furthermore, ‘stiffness match principle’ is no longer applicable when considering ΔE effect, and the optimal external stiffness problem can be numerically solved by the hybrid model for maximum output energy of magnetostrictive material.
Low Complexity Models to improve Incomplete Sensitivities for Shape Optimization
NASA Astrophysics Data System (ADS)
Stanciu, Mugurel; Mohammadi, Bijan; Moreau, Stéphane
2003-01-01
The present global platform for simulation and design of multi-model configurations treats shape optimization problems in aerodynamics. Flow solvers are coupled with optimization algorithms based on CAD-free and CAD-connected frameworks. Newton methods together with incomplete expressions of gradients are used. Such incomplete sensitivities are improved using reduced models based on physical assumptions. The validity and the application of this approach to real-life problems are presented. The numerical examples concern shape optimization for an airfoil, a business jet and a car engine cooling axial fan.
Optimal vaccination and treatment of an epidemic network model
NASA Astrophysics Data System (ADS)
Chen, Lijuan; Sun, Jitao
2014-08-01
In this Letter, we first propose an epidemic network model incorporating two controls, vaccination and treatment. For the constant controls, global stability of the disease-free equilibrium and the endemic equilibrium of the model is investigated by using a Lyapunov function. For the non-constant controls, by using the optimal control strategy, we discuss an optimal strategy to minimize the total number of infected individuals and the cost associated with vaccination and treatment. Table 1 and Figs. 1-5 are presented to show the global stability and the efficiency of this optimal control.
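A single-population sketch of how the two controls enter the dynamics can be simulated in a few lines. The Letter studies an epidemic network model and derives optimal time-varying controls; here the constant vaccination rate u, treatment rate v, and all parameter values are invented for illustration:

```python
# Single-population SIR sketch of vaccination and treatment controls.
# Not the Letter's network model: u and v are held constant and all
# parameter values are made up, only to show how the controls act.

def sir_total_infected(u, v, beta=0.4, gamma=0.1, days=160, dt=0.1):
    s, i = 0.99, 0.01
    total = i                                  # cumulative incidence
    for _ in range(int(days / dt)):            # forward Euler integration
        new_inf = beta * s * i
        s += (-new_inf - u * s) * dt           # vaccination removes susceptibles
        i += (new_inf - (gamma + v) * i) * dt  # treatment speeds recovery
        total += new_inf * dt
    return total

no_control = sir_total_infected(0.0, 0.0)
with_control = sir_total_infected(0.02, 0.05)
print(no_control, with_control)  # controls shrink the epidemic's final size
```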
Optimal Control In Predation Of Models And Mimics
NASA Astrophysics Data System (ADS)
Tsoularis, A.
2007-09-01
This paper examines optimal predation by a predator preying upon two types of prey, models and mimics. Models are unpalatable prey, and mimics are palatable prey that resemble the models so as to derive some protection from predation. This biological phenomenon is known in ecology as Batesian mimicry. An optimal control problem in continuous time is formulated with the sole objective of maximizing the net energetic benefit to the predator from predation in the presence of evolving prey populations. The constrained optimal control is bang-bang, with the scalar control taken as the probability of attacking prey. Conditions for the existence of singular controls are obtained.
Models for the Optimization of Air Refueling Missions
1993-03-01
The GINO (General Interactive Non-linear Optimizer) software package applies the GRG method, and can be used to solve most NLPs [18:142]. It is very easy to use and accepts ... AFIT/GST/93M-11, AD-A262 392. Models for the Optimization of Air Refueling Missions. Thesis by Clayton Hugh ..., presented to the Faculty of the School of Engineering of the Air Force Institute of Technology.
Model updating based on an affine scaling interior optimization algorithm
NASA Astrophysics Data System (ADS)
Zhang, Y. X.; Jia, C. X.; Li, Jian; Spencer, B. F.
2013-11-01
Finite element model updating is usually considered as an optimization process. Affine scaling interior algorithms are powerful optimization algorithms that have been developed over the past few years. A new finite element model updating method based on an affine scaling interior algorithm and a minimization of modal residuals is proposed in this article, and a general finite element model updating program is developed based on the proposed method. The performance of the proposed method is studied through numerical simulation and experimental investigation using the developed program. The results of the numerical simulation verified the validity of the method. Subsequently, the natural frequencies obtained experimentally from a three-dimensional truss model were used to update a finite element model using the developed program. After updating, the natural frequencies of the truss and finite element model matched well.
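The modal-residual idea can be reduced to a one-parameter toy: adjust a stiffness value until the model's natural frequency matches a measured one. Bisection below stands in for the affine scaling interior algorithm of the paper, and the mass and target frequency are invented:

```python
import math

# Toy model updating: tune stiffness k of a single-DOF oscillator,
# f = sqrt(k/m) / (2*pi), to minimize the modal residual against a
# "measured" frequency. The paper does this for full finite element
# models with an affine scaling interior-point algorithm; this bisection
# on the residual's sign change is only a stand-in.

def natural_freq(k, m=2.0):
    return math.sqrt(k / m) / (2.0 * math.pi)

f_measured = 5.0                     # hypothetical test frequency in Hz

def residual(k):
    return natural_freq(k) - f_measured

lo, hi = 1.0, 1e5                    # residual changes sign on this bracket
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if residual(lo) * residual(mid) <= 0:
        hi = mid
    else:
        lo = mid
k_updated = 0.5 * (lo + hi)
print(k_updated, natural_freq(k_updated))
```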
General model for boring tool optimization
NASA Astrophysics Data System (ADS)
Moraru, G. M.; Zerbes, M. V.; Popescu, L. G.
2016-08-01
Optimizing a tool (and therefore a boring tool) consists in improving its performance by maximizing the objective functions chosen by the designer and/or the user. Numerous features and performance requirements demanded by tool users contribute to defining and implementing the proposed objective functions. Incorporating new features makes the cutting tool competitive in the market and able to meet user requirements.
Abstract models for the synthesis of optimization algorithms.
NASA Technical Reports Server (NTRS)
Meyer, G. G. L.; Polak, E.
1971-01-01
Systematic approach to the problem of synthesis of optimization algorithms. Abstract models for algorithms are developed which guide the inventive process toward 'conceptual' algorithms which may consist of operations that are inadmissible in a practical method. Once the abstract models are established, a set of methods for converting 'conceptual' algorithms falling into the class defined by the abstract models into 'implementable' iterative procedures is presented.
An aircraft noise pollution model for trajectory optimization
NASA Technical Reports Server (NTRS)
Barkana, A.; Cook, G.
1976-01-01
A mathematical model describing the generation of aircraft noise is developed with the ultimate purpose of reducing noise (noise-optimizing landing trajectories) in terminal areas. While the model is for a specific aircraft (Boeing 737), the methodology would be applicable to a wide variety of aircraft. The model is used to obtain a footprint on the ground inside of which the noise level is at or above 70 dB.
Optimal ordering policies for continuous review perishable inventory models.
Weiss, H J
1980-01-01
This paper extends the notions of perishable inventory models to the realm of continuous review inventory systems. The traditional perishable inventory costs of ordering, holding, shortage or penalty, disposal and revenue are incorporated into the continuous review framework. The type of policy that is optimal with respect to long run average expected cost is presented for both the backlogging and lost-sales models. In addition, for the lost-sales model the cost function is presented and analyzed.
Optimal schooling formations using a potential flow model
NASA Astrophysics Data System (ADS)
Tchieu, Andrew; Gazzola, Mattia; de Brauer, Alexia; Koumoutsakos, Petros
2012-11-01
A self-propelled, two-dimensional, potential flow model for agent-based swimmers is used to examine how fluid coupling affects schooling formation. The potential flow model accounts for fluid-mediated interactions between swimmers. The model is extended to include individual agent actions by means of modifying the circulation of each swimmer. A reinforcement algorithm is applied to allow the swimmers to learn how to school in specified lattice formations. Lastly, schooling lattice configurations are optimized by combining reinforcement learning and evolutionary optimization to minimize total control effort and energy expenditure.
AN OPTIMAL MAINTENANCE MANAGEMENT MODEL FOR AIRPORT CONCRETE PAVEMENT
NASA Astrophysics Data System (ADS)
Shimomura, Taizo; Fujimori, Yuji; Kaito, Kiyoyuki; Obama, Kengo; Kobayashi, Kiyoshi
In this paper, an optimal management model is formulated for the performance-based rehabilitation/maintenance contract for airport concrete pavement, whereby two types of life cycle cost risks, i.e., ground consolidation risk and concrete depreciation risk, are explicitly considered. A non-homogeneous Markov chain model is formulated to represent the deterioration processes of concrete pavement, which are conditional upon the ground consolidation processes. An optimal non-homogeneous Markov decision model with multiple types of risk is presented to design the optimal rehabilitation/maintenance plans, together with a methodology to revise those plans based upon monitoring data by Bayesian updating rules. The validity of the methodology presented in this paper is examined through case studies carried out for the H airport.
Darius M. Adams; Ralph J. Alig; J.M. Callaway; Bruce A. McCarl; Steven M. Winnett
1996-01-01
The Forest and Agricultural Sector Optimization Model (FASOM) is a dynamic, nonlinear programming model of the forest and agricultural sectors in the United States. The FASOM model initially was developed to evaluate welfare and market impacts of alternative policies for sequestering carbon in trees but also has been applied to a wider range of forest and agricultural...
A flow path model for regional water distribution optimization
NASA Astrophysics Data System (ADS)
Cheng, Wei-Chen; Hsu, Nien-Sheng; Cheng, Wen-Ming; Yeh, William W.-G.
2009-09-01
We develop a flow path model for the optimization of a regional water distribution system. The model simultaneously describes a water distribution system in two parts: (1) the water delivery relationship between suppliers and receivers and (2) the physical water delivery network. In the first part, the model considers waters from different suppliers as multiple commodities. This helps the model clearly describe water deliveries by identifying the relationship between suppliers and receivers. The physical part characterizes a physical water distribution network by all possible flow paths. The flow path model can be used to optimize not only the suppliers to each receiver but also their associated flow paths for supplying water. This characteristic leads to the optimum solution that contains the optimal scheduling results and detailed information concerning water distribution in the physical system. That is, the water rights owner, water quantity, water location, and associated flow path of each delivery action are represented explicitly in the results rather than merely as an optimized total flow quantity in each arc of a distribution network. We first verify the proposed methodology on a hypothetical water distribution system. Then we apply the methodology to the water distribution system associated with the Tou-Qian River basin in northern Taiwan. The results show that the flow path model can be used to optimize the quantity of each water delivery, the associated flow path, and the water trade and transfer strategy.
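The core bookkeeping, identifying not just a supplier for each receiver but the physical path used for delivery, can be sketched with a shortest-path search over a toy network. Node names and unit costs are invented, and the paper's model optimizes all deliveries jointly rather than one path at a time:

```python
import heapq

# Toy version of choosing, for one receiver, both a supplier AND the
# physical flow path used for delivery: Dijkstra's algorithm over a small
# pipe network with per-arc unit costs (all names and numbers invented).

def cheapest_path(graph, source, target):
    """Return (cost, path) of the least-cost delivery route, or None."""
    pq, seen = [(0.0, source, [source])], set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == target:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, c in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(pq, (cost + c, nxt, path + [nxt]))
    return None

graph = {
    "reservoir_A": [("junction", 2.0)],
    "reservoir_B": [("junction", 1.0), ("city", 5.0)],
    "junction":    [("city", 1.5)],
}
best = min(cheapest_path(graph, s, "city")
           for s in ("reservoir_A", "reservoir_B"))
print(best)  # cheapest supplier plus the explicit flow path used
```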
Optimization of murine model for Besnoitia caprae.
Oryan, A; Sadoughifar, R; Namavari, M
2016-09-01
It has been shown that mice, particularly BALB/c mice, are susceptible to infection by some apicomplexan parasites. To compare the susceptibility of inbred BALB/c, outbred BALB/c and C57BL/6 mice to Besnoitia caprae inoculation and to determine the LD50, 30 male inbred BALB/c, 30 outbred BALB/c and 30 C57BL/6 mice were assigned to 18 groups of 5 mice. The groups were inoculated intraperitoneally with 12.5 × 10^3, 25 × 10^3, 5 × 10^4, 1 × 10^5 and 2 × 10^5 tachyzoites, and with a control inoculum of DMEM, respectively. The inbred BALB/c was found to be the most susceptible of the tested strains: the LD50 per inbred BALB/c mouse was calculated as 12.5 × 10^3.6 tachyzoites, while the LD50 for the outbred BALB/c and C57BL/6 was 25 × 10^3.4 and 5 × 10^4 tachyzoites per mouse, respectively. To investigate the impact of different routes of inoculation in the most susceptible strain, another seventy-five male inbred BALB/c mice were inoculated with 2 × 10^5 tachyzoites of B. caprae via various routes: subcutaneous, intramuscular, intraperitoneal, infraorbital and oral. All the mice in the oral and infraorbital groups survived for 60 days, whereas the IM group showed quicker death and more severe pathologic lesions, followed by the SC and IP groups. Therefore, the BALB/c mouse is a proper laboratory model, and IM inoculation is an ideal method for inducing besnoitiosis and a candidate for treatment, prevention and vaccine efficacy testing for besnoitiosis.
Large-scale spherical fixed bed reactors: Modeling and optimization
Hartig, F.; Keil, F.J.
1993-03-01
Iterative dynamic programming (IDP) according to Luus was used for the optimization of the methanol production in a cascade of spherical reactors. The system of three spherical reactors was compared to an externally cooled tubular reactor and a quench reactor. The reactors were modeled by the pseudohomogeneous and heterogeneous approach. The effectiveness factors of the heterogeneous model were calculated by the dusty gas model. The IDP method was compared with sequential quadratic programming (SQP) and the Box complex method. The optimized distributions of catalyst volume with the pseudohomogeneous and heterogeneous model lead to different results. The IDP method finds the global optimum with high probability. A combination of IDP and SQP provides a reliable optimization procedure that needs minimum computing time.
Impulsive optimal control model for the trajectory of horizontal wells
NASA Astrophysics Data System (ADS)
Li, An; Feng, Enmin; Wang, Lei
2009-01-01
This paper presents an impulsive optimal control model for solving the optimal designing problem of the trajectory of horizontal wells. We take fully into account the effect of unknown disturbances in drilling. The optimal control problem can be converted into a nonlinear parametric optimization by integrating the state equation. We discuss here that the locally optimal solution depends in a continuous way on the parameters (disturbances) and utilize this property to propose a revised Hooke-Jeeves algorithm. The uniform design technique is incorporated into the revised Hooke-Jeeves algorithm to handle the multimodal objective function. The numerical simulation is in accordance with theoretical results. The numerical results illustrate the validity of the model and efficiency of the algorithm.
Optimizing Experimental Design for Comparing Models of Brain Function
Daunizeau, Jean; Preuschoff, Kerstin; Friston, Karl; Stephan, Klaas
2011-01-01
This article presents the first attempt to formalize the optimization of experimental design with the aim of comparing models of brain function based on neuroimaging data. We demonstrate our approach in the context of Dynamic Causal Modelling (DCM), which relates experimental manipulations to observed network dynamics (via hidden neuronal states) and provides an inference framework for selecting among candidate models. Here, we show how to optimize the sensitivity of model selection by choosing among experimental designs according to their respective model selection accuracy. Using Bayesian decision theory, we (i) derive the Laplace-Chernoff risk for model selection, (ii) disclose its relationship with classical design optimality criteria and (iii) assess its sensitivity to basic modelling assumptions. We then evaluate the approach when identifying brain networks using DCM. Monte-Carlo simulations and empirical analyses of fMRI data from a simple bimanual motor task in humans serve to demonstrate the relationship between network identification and the optimal experimental design. For example, we show that deciding whether there is a feedback connection requires shorter epoch durations, relative to asking whether there is experimentally induced change in a connection that is known to be present. Finally, we discuss limitations and potential extensions of this work. PMID:22125485
Modeling to Optimize Hospital Evacuation Planning in EMS Systems.
Bish, Douglas R; Tarhini, Hussein; Amara, Roel; Zoraster, Richard; Bosson, Nichole; Gausche-Hill, Marianne
2017-01-01
To develop optimal hospital evacuation plans within a large urban EMS system using a novel evacuation planning model and a realistic hospital evacuation scenario, and to illustrate the ways in which a decision support model may be useful in evacuation planning. An optimization model was used to produce detailed evacuation plans given the number and type of patients in the evacuating hospital, resource levels (teams to move patients, vehicles, and beds at other hospitals), and evacuation rules. Optimal evacuation plans under various resource levels and rules were developed and high-level metrics were calculated, including evacuation duration and the utilization of resources. Using this model we were able to determine the limiting resources and demonstrate how strategically augmenting the resource levels can improve the performance of the evacuation plan. The model allowed the planner to test various evacuation conditions and resource levels to demonstrate the effect on performance of the evacuation plan. We present a hospital evacuation planning analysis for a hospital in a large urban EMS system using an optimization model. This model can be used by EMS administrators and medical directors to guide planning decisions and provide a better understanding of various resource allocation decisions and rules that govern a hospital evacuation.
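A back-of-envelope version of the resource question such a model answers, with all numbers invented: how does evacuation duration respond to adding vehicles when transport capacity is the limiting resource?

```python
import math

# Crude capacity arithmetic, NOT the paper's optimization model: the real
# model handles mixed patient types, teams, receiving-bed availability,
# and evacuation rules. All numbers here are invented.

def evacuation_duration(patients, vehicles, per_trip, round_trip_min):
    trips_needed = math.ceil(patients / per_trip)   # total vehicle trips
    waves = math.ceil(trips_needed / vehicles)      # sequential waves
    return waves * round_trip_min                   # minutes to clear

base = evacuation_duration(patients=120, vehicles=4,
                           per_trip=2, round_trip_min=45)
more_vehicles = evacuation_duration(patients=120, vehicles=6,
                                    per_trip=2, round_trip_min=45)
print(base, more_vehicles)  # augmenting the limiting resource helps
```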
Optimization of Analytical Potentials for Coarse-Grained Biopolymer Models.
Mereghetti, Paolo; Maccari, Giuseppe; Spampinato, Giulia Lia Beatrice; Tozzini, Valentina
2016-08-25
The increasing trend in the recent literature on coarse-grained (CG) models testifies to their impact in the study of complex systems. However, the CG model landscape is variegated: even at a given resolution level, the force fields are very heterogeneous and are optimized with very different parametrization procedures. Along the road toward standardization of CG models for biopolymers, here we describe a strategy to aid the building and optimization of statistics-based analytical force fields, and its implementation in the software package AsParaGS (Assisted Parameterization platform for coarse Grained modelS). Our method is based on analytical potentials optimized by targeting the statistical distributions of internal variables through a combination of different algorithms (i.e., relative-entropy-driven stochastic exploration of the parameter space and iterative Boltzmann inversion). This allows designing a custom model that endows the force field terms with a physically sound meaning. Furthermore, the level of transferability and accuracy can be tuned through the choice of the statistical data set composition. The method, illustrated by means of applications to helical polypeptides, also involves the analysis of two- and three-variable distributions, and allows handling issues related to correlations between force field terms. AsParaGS is interfaced with general-purpose molecular dynamics codes and currently implements the "minimalist" subclass of CG models (i.e., one bead per amino acid, Cα based). Extensions to nucleic acids and different levels of coarse graining are underway.
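One of the two algorithms combined here, iterative Boltzmann inversion, has a simple update rule: V_new(r) = V(r) + kT ln(g_current(r)/g_target(r)). The snippet applies one such update to invented tabulated distributions:

```python
import math

# One step of iterative Boltzmann inversion (IBI), one of the two update
# schemes the abstract mentions. The tabulated g(r) values below are
# synthetic stand-ins for distributions measured from a CG simulation and
# from the atomistic reference.

kT = 2.494  # kJ/mol at roughly 300 K

def ibi_update(V, g_current, g_target):
    """V_new(r) = V(r) + kT * ln(g_current(r) / g_target(r))."""
    return [v + kT * math.log(gc / gt)
            for v, gc, gt in zip(V, g_current, g_target)]

V        = [0.0, -1.0, -0.5]   # tabulated potential, three r bins
g_cur    = [0.8,  1.2,  1.0]   # distribution from the CG model
g_target = [1.0,  1.0,  1.0]   # reference distribution
V_new = ibi_update(V, g_cur, g_target)
print(V_new)
```

Where the CG model over-populates a bin (g_current > g_target), the potential is raised to push probability out of it; where g_current equals the target, the potential is left unchanged.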
Multidisciplinary optimization in aircraft design using analytic technology models
NASA Technical Reports Server (NTRS)
Malone, Brett; Mason, W. H.
1991-01-01
An approach to multidisciplinary optimization is presented which combines the Global Sensitivity Equation method, parametric optimization, and analytic technology models. The result is a powerful yet simple procedure for identifying key design issues. It can be used both to investigate technology integration issues very early in the design cycle, and to establish the information flow framework between disciplines for use in multidisciplinary optimization projects using much more computational intense representations of each technology. To illustrate the approach, an examination of the optimization of a short takeoff heavy transport aircraft is presented for numerous combinations of performance and technology constraints.
Optimization and analysis of a CFJ-airfoil using adaptive meta-model based design optimization
NASA Astrophysics Data System (ADS)
Whitlock, Michael D.
Although strong potential for the Co-Flow Jet (CFJ) flow separation control system has been demonstrated in the existing literature, little effort has been applied towards optimizing the design for a given application. The high-dimensional design space makes any optimization computationally intensive. This work presents the optimization of a CFJ airfoil applied to a low Reynolds number regime using meta-model based design optimization (MBDO). The approach consists of computational fluid dynamics (CFD) analysis coupled with a surrogate model derived using Kriging. A genetic algorithm (GA) is then used to perform optimization on the efficient surrogate model. MBDO was shown to be an effective and efficient approach to solving the CFJ design problem. The final solution set was found to decrease drag by 100% while increasing lift by 42%. When validated, the final solution was found to be within one standard deviation of the CFD model it was representing.
Modeling urban air pollution with optimized hierarchical fuzzy inference system.
Tashayo, Behnam; Alimohammadi, Abbas
2016-10-01
Environmental exposure assessments (EEA) and epidemiological studies require urban air pollution models with appropriate spatial and temporal resolutions. Uncertain available data and inflexible models can limit air pollution modeling techniques, particularly in developing countries. This paper develops a hierarchical fuzzy inference system (HFIS) to model air pollution under different land use, transportation, and meteorological conditions. To improve performance, the system treats the issue as a large-scale, high-dimensional problem and develops the proposed model using a three-step approach. In the first step, a geospatial information system (GIS) and probabilistic methods are used to preprocess the data. In the second step, a hierarchical structure is generated based on the problem. In the third step, the accuracy and complexity of the model are simultaneously optimized with a multiple objective particle swarm optimization (MOPSO) algorithm. We examine the capabilities of the proposed model for predicting daily and annual mean PM2.5 and NO2 and compare the accuracy of the results with representative models from the existing literature. The benefits provided by the model features, including probabilistic preprocessing, multi-objective optimization, and hierarchical structure, are precisely evaluated by comparing five different consecutive models in terms of accuracy and complexity criteria. Fivefold cross validation is used to assess the performance of the generated models. The respective average RMSEs and coefficients of determination (R^2) for the test datasets using the proposed model are as follows: daily PM2.5 = (8.13, 0.78), annual mean PM2.5 = (4.96, 0.80), daily NO2 = (5.63, 0.79), and annual mean NO2 = (2.89, 0.83). The obtained results demonstrate that the developed hierarchical fuzzy inference system can be utilized for modeling air pollution in EEA and epidemiological studies.
Block-oriented modeling of superstructure optimization problems
Friedman, Z; Ingalls, J; Siirola, JD; Watson, JP
2013-10-15
We present a novel software framework for modeling large-scale engineered systems as mathematical optimization problems. A key motivating feature in such systems is their hierarchical, highly structured topology. Existing mathematical optimization modeling environments do not facilitate the natural expression and manipulation of hierarchically structured systems. Rather, the modeler is forced to "flatten" the system description, hiding structure that may be exploited by solvers, and obfuscating the system that the modeling environment is attempting to represent. To correct this deficiency, we propose a Python-based "block-oriented" modeling approach for representing the discrete components within the system. Our approach is an extension of the Pyomo library for specifying mathematical optimization problems. Through the use of a modeling components library, the block-oriented approach facilitates a clean separation of system superstructure from the details of individual components. This approach also naturally lends itself to expressing design and operational decisions as disjunctive expressions over the component blocks. By expressing a mathematical optimization problem in a block-oriented manner, inherent structure (e.g., multiple scenarios) is preserved for potential exploitation by solvers. In particular, we show that block-structured mathematical optimization problems can be straightforwardly manipulated by decomposition-based multi-scenario algorithmic strategies, specifically in the context of the PySP stochastic programming library. We illustrate our block-oriented modeling approach using a case study drawn from the electricity grid operations domain: unit commitment with transmission switching and N - 1 reliability constraints. Finally, we demonstrate that the overhead associated with block-oriented modeling only minimally increases model instantiation times, and need not adversely impact solver behavior.
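The block-oriented idea, components as self-contained blocks whose hierarchy survives until a solver needs a flat view, can be sketched in plain Python. This is a schematic of the concept only, not the actual Pyomo API:

```python
# Schematic of block-oriented model assembly (NOT the Pyomo API): each
# component of the superstructure contributes its own block of variables,
# and a flat solver view is produced only at the end by recursively
# walking the hierarchy, so the structure remains available to
# structure-exploiting (e.g., multi-scenario) solvers before that point.

class ModelBlock:
    def __init__(self, name):
        self.name = name
        self.variables = []
        self.children = []

    def add_block(self, child):
        self.children.append(child)
        return child

    def flatten(self, prefix=""):
        """Yield fully qualified variable names, as a 'flat' solver sees them."""
        scope = prefix + self.name
        for v in self.variables:
            yield scope + "." + v
        for child in self.children:
            yield from child.flatten(scope + ".")

plant = ModelBlock("plant")
for unit in ("boiler", "turbine"):       # two invented component blocks
    b = plant.add_block(ModelBlock(unit))
    b.variables = ["on_off", "load"]

print(sorted(plant.flatten()))
```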
A revised model of fluid transport optimization in Physarum polycephalum.
Bonifaci, Vincenzo
2017-02-01
Optimization of fluid transport in the slime mold Physarum polycephalum has been the subject of several modeling efforts in recent literature. Existing models assume that the tube adaptation mechanism in P. polycephalum's tubular network is controlled by the sheer amount of fluid flow through the tubes. We put forward the hypothesis that the controlling variable may instead be the flow's pressure gradient along the tube. We carry out the stability analysis of such a revised mathematical model for a parallel-edge network, proving that the revised model supports the global flow-optimizing behavior of the slime mold for a substantially wider class of response functions compared to previous models. Simulations also suggest that the same conclusion may be valid for arbitrary network topologies.
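The two competing hypotheses above (flux-controlled versus pressure-gradient-controlled tube adaptation) can be contrasted in a small simulation. The following is a hedged sketch of my own construction, not the paper's analysis: tube conductivities on a parallel-edge network adapt toward a response function of the chosen control signal, with a linear response function and all parameter values assumed for illustration.

```python
# Sketch (illustrative, not the paper's code): discrete-time adaptation of
# tube conductivities D_i on a parallel-edge network with edge lengths L_i.
# A total flow Q0 is driven between the two nodes; the pressure drop dp
# follows from Kirchhoff's law, and each tube adapts toward a response g(.)
# of its controlling signal: the flux |q_i| (classical models) or the
# pressure gradient dp / L_i (the revised hypothesis).

def simulate(L, signal, Q0=1.0, steps=2000, dt=0.01):
    D = [1.0] * len(L)                    # initial conductivities
    g = lambda s: s                       # linear response function (assumed)
    for _ in range(steps):
        dp = Q0 / sum(d / l for d, l in zip(D, L))   # common pressure drop
        q = [d / l * dp for d, l in zip(D, L)]       # flux through each tube
        for i in range(len(D)):
            s = abs(q[i]) if signal == "flux" else dp / L[i]
            D[i] += dt * (g(s) - D[i])    # grow toward response, else decay
    return D

# Under flux control the shorter tube wins the competition for flow:
D = simulate(L=[1.0, 2.0], signal="flux")
```

With the linear response, the shorter tube's conductivity approaches the total flow while the longer tube decays, qualitatively reproducing the flow-optimizing behavior discussed in the abstract.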
Optimization models for flight test scheduling
NASA Astrophysics Data System (ADS)
Holian, Derreck
As threats around the world increase, with nations developing new generations of warfare technology, the United States is keen on maintaining its position at the top of the defense technology curve. This in turn means that the U.S. military/government must research, develop, procure, and sustain new systems in the defense sector to safeguard this position. Currently, the Lockheed Martin F-35 Joint Strike Fighter (JSF) Lightning II is being developed, tested, and deployed to the U.S. military at Low Rate Initial Production (LRIP). The simultaneous act of testing and deployment is due to the contracted procurement process intended to provide a rapid Initial Operating Capability (IOC) release of the 5th Generation fighter. For this reason, many factors go into the determination of what is to be tested, in what order, and at which time, owing to military requirements: a certain system or envelope of the aircraft must be assessed prior to releasing that capability into service. The objective of this praxis is to aid in the determination of what testing can be achieved on an aircraft at a point in time. Furthermore, it will define the optimum allocation of test points to aircraft and determine a prioritization of restrictions to be mitigated so that the test program can be best supported. The system described in this praxis has been deployed across the F-35 test program and testing sites. It has discovered hundreds of available test points for an aircraft to fly when it was thought none existed, thus preventing an aircraft from being grounded. Additionally, it has saved hundreds of labor hours and greatly reduced the occurrence of test point reflight. Due to the proprietary nature of the JSF program, details regarding the actual test points, test plans, and all other program-specific information have not been presented. Generic, representative data is used for example and proof-of-concept purposes. Apart from the data correlation algorithms, the optimization associated
Multiobjective muffler shape optimization with hybrid acoustics modeling.
Airaksinen, Tuomas; Heikkola, Erkki
2011-09-01
This paper considers the combined use of a hybrid numerical method for the modeling of acoustic mufflers and a genetic algorithm for multiobjective optimization. The hybrid numerical method provides accurate modeling of sound propagation in uniform waveguides with non-uniform obstructions. It is based on coupling a wave-based modal solution in the uniform sections of the waveguide to a finite element solution in the non-uniform component. The finite element method provides flexible modeling of complicated geometries, varying material parameters, and boundary conditions, while the wave-based solution leads to accurate treatment of non-reflecting boundaries and straightforward computation of the transmission loss (TL) of the muffler. The goal of optimization is to maximize TL at multiple frequency ranges simultaneously by adjusting chosen shape parameters of the muffler. This task is formulated as a multiobjective optimization problem with the objectives depending on the solution of the simulation model. The NSGA-II genetic algorithm is used for solving the multiobjective optimization problem. Genetic algorithms can be easily combined with different simulation methods, and they are not sensitive to the smoothness properties of the objective functions. Numerical experiments demonstrate the accuracy and feasibility of the model-based optimization method in muffler design.
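The core of NSGA-II is the ranking of candidate designs into Pareto fronts. The following is an illustrative sketch (not the authors' code) of that fast non-dominated sorting step for a minimization problem; front 0 is the non-dominated set, e.g. the muffler shapes whose TL trade-offs across frequency ranges cannot all be improved at once.

```python
# Non-dominated sorting as used in NSGA-II (simplified O(n^2)-per-front form).

def dominates(a, b):
    """a dominates b if it is no worse in every objective and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(points):
    fronts, remaining = [], list(range(len(points)))
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i]) for j in remaining)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

# Five candidate designs, two objectives to minimize:
pts = [(1, 5), (2, 2), (4, 1), (3, 3), (5, 5)]
fronts = non_dominated_sort(pts)   # fronts[0] is the Pareto set [0, 1, 2]
```

In the full algorithm this ranking drives selection, with a crowding-distance measure breaking ties within a front.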
Optimal control of information epidemics modeled as Maki Thompson rumors
NASA Astrophysics Data System (ADS)
Kandhway, Kundan; Kuri, Joy
2014-12-01
We model the spread of information in a homogeneously mixed population using the Maki Thompson rumor model. We formulate an optimal control problem, from the perspective of a single campaigner, to maximize the spread of information when the campaign budget is fixed. Control signals, such as advertising in the mass media, attempt to convert ignorants and stiflers into spreaders. We show the existence of a solution to the optimal control problem when the campaigning incurs non-linear costs under the isoperimetric budget constraint. The solution employs Pontryagin's Minimum Principle and a modified version of the forward-backward sweep technique for numerical computation to accommodate the isoperimetric budget constraint. The techniques developed in this paper are general and can be applied to similar optimal control problems in other areas. We have allowed the spreading rate of the information epidemic to vary over the campaign duration to model practical situations when the interest level of the population in the subject of the campaign changes with time. The shape of the optimal control signal is studied for different model parameters and spreading rate profiles. We have also studied the variation of the optimal campaigning costs with respect to various model parameters. Results indicate that, for some model parameters, significant improvements can be achieved by the optimal strategy compared to the static control strategy. The static strategy respects the same budget constraint as the optimal strategy and has a constant value throughout the campaign horizon. This work finds application in election and social awareness campaigns, product advertising, movie promotion and crowdfunding campaigns.
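To make the dynamics concrete, here is a minimal Euler-integration sketch (my construction, not the paper's code) of controlled Maki-Thompson dynamics: ignorant fraction x, spreader fraction y, stifler fraction z, spreading rate beta, stifling rate gamma, and a campaign control u(t). For simplicity only the ignorant-to-spreader recruitment channel is modeled, and all parameter values are illustrative.

```python
# Controlled rumor dynamics: spreaders convert ignorants at rate beta*x*y;
# a spreader meeting another spreader or a stifler becomes a stifler at
# rate gamma*y*(y+z); the control u(t) recruits ignorants into spreaders.

def simulate(u, beta=0.5, gamma=0.3, T=10.0, dt=0.001):
    x, y, z = 0.99, 0.01, 0.0   # initial fractions of the population
    t = 0.0
    while t < T:
        dx = -beta * x * y - u(t) * x
        dy = beta * x * y - gamma * y * (y + z) + u(t) * x
        dz = gamma * y * (y + z)
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        t += dt
    return x, y, z

# Final reach of the campaign is 1 - x(T); here with a static control:
x_T, y_T, z_T = simulate(lambda t: 0.05)
```

An optimal (time-varying) u(t) would be found by the forward-backward sweep described above; this sketch only evaluates the state equations for a given control.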
Hydro- abrasive jet machining modeling for computer control and optimization
NASA Astrophysics Data System (ADS)
Groppetti, R.; Jovane, F.
1993-06-01
Use of hydro-abrasive jet machining (HAJM) for machining a wide variety of materials (metals, polymers, ceramics, fiber-reinforced composites, metal-matrix composites, and bonded or hybridized materials), primarily for two- and three-dimensional cutting and also for drilling, turning, milling, and deburring, has been reported. However, the potential of this innovative process has not been explored fully. This article discusses process control, integration, and optimization of HAJM to establish a platform for the implementation of real-time adaptive control constraint (ACC), adaptive control optimization (ACO), and CAD/CAM integration. It presents the approach followed and the main results obtained during the development, implementation, automation, and integration of a HAJM cell and its computerized controller. After a critical analysis of the process variables and models reported in the literature, carried out to identify process variables and to define a process model suitable for HAJM real-time control and optimization, to correlate process variables and parameters with machining results, and to avoid expensive and time-consuming experiments for determination of the optimal machining conditions, a process prediction and optimization model was identified and implemented. Then, the configuration of the HAJM cell, architecture, and multiprogramming operation of the controller in terms of monitoring, control, process result prediction, and process condition optimization were analyzed. This prediction and optimization model for selection of optimal machining conditions using multi-objective programming was analyzed. Based on the definition of an economy function and a productivity function, with suitable constraints relevant to required machining quality, required kerfing depth, and available resources, the model was applied to test cases based on experimental results.
Optimization routine for identification of model parameters in soil plasticity
NASA Astrophysics Data System (ADS)
Mattsson, Hans; Klisinski, Marek; Axelsson, Kennet
2001-04-01
The paper presents an optimization routine especially developed for the identification of model parameters in soil plasticity on the basis of different soil tests. Main focus is put on the mathematical aspects and the experience from application of this optimization routine. Mathematically, for the optimization, an objective function and a search strategy are needed. Some alternative expressions for the objective function are formulated. They capture the overall soil behaviour and can be used in a simultaneous optimization against several laboratory tests. Two different search strategies, Rosenbrock's method and the Simplex method, both belonging to the category of direct search methods, are utilized in the routine. Direct search methods have generally proved to be reliable, and their relative simplicity makes them quite easy to program into workable codes. The Rosenbrock and Simplex methods are modified to make the search strategies as efficient and user-friendly as possible for the type of optimization problem addressed here. Since these search strategies are of a heuristic nature, which makes it difficult (or even impossible) to analyse their performance in a theoretical way, representative optimization examples against both simulated experimental results and performed triaxial tests are presented to show the efficiency of the optimization routine. From these examples, it has been concluded that the optimization routine is able to locate a minimum with good accuracy, fast enough to be a very useful tool for identification of model parameters in soil plasticity.
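The flavor of such derivative-free search strategies can be shown in a few lines. The following is a hedged sketch (my construction, far simpler than the paper's modified Rosenbrock/Simplex routines) of a compass-style pattern search: it needs only objective values, expands the step after a successful sweep, and contracts it after a failed one. The quadratic objective stands in for the misfit between simulated and measured soil-test curves.

```python
# Compass/pattern search: probe +/- step along each coordinate, keep
# improvements, expand the step on success and contract it on failure.

def pattern_search(f, x0, step=1.0, tol=1e-8, max_iter=10000):
    x, fx = list(x0), f(x0)
    for _ in range(max_iter):
        improved = False
        for i in range(len(x)):
            for s in (+step, -step):
                trial = list(x)
                trial[i] += s
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
                    break
        if improved:
            step *= 1.5          # reward success (Rosenbrock-style expansion)
        else:
            step *= 0.5          # contract around the current best point
            if step < tol:
                break
    return x, fx

# Stand-in misfit function with minimum at parameters (3, -2):
xmin, fmin = pattern_search(lambda p: (p[0] - 3) ** 2 + (p[1] + 2) ** 2, [0.0, 0.0])
```

In the actual routine the objective would run a constitutive-model simulation of each laboratory test and compare it against the measured response.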
The effect of model uncertainty on some optimal routing problems
NASA Technical Reports Server (NTRS)
Mohanty, Bibhu; Cassandras, Christos G.
1991-01-01
The effect of model uncertainties on optimal routing in a system of parallel queues is examined. The uncertainty arises in modeling the service time distribution for the customers (jobs, packets) to be served. For a Poisson arrival process and Bernoulli routing, the optimal mean system delay generally depends on the variance of this distribution. However, as the input traffic load approaches the system capacity the optimal routing assignment and corresponding mean system delay are shown to converge to a variance-invariant point. The implications of these results are examined in the context of gradient-based routing algorithms. An example of a model-independent algorithm using online gradient estimation is also included.
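The variance dependence mentioned above can be made explicit with the Pollaczek-Khinchine formula. The sketch below (my construction, not the paper's analysis) computes the mean system delay under Bernoulli routing of a Poisson stream to two parallel M/G/1 queues; the second moment of the service time is exactly where the modeling uncertainty enters.

```python
# Mean delay for Bernoulli routing to two parallel M/G/1 queues.
# ES is the mean service time, ES2 its second moment (variance-sensitive).

def mg1_delay(lam, ES, ES2):
    """Mean time in an M/G/1 system: service time + P-K mean waiting time."""
    rho = lam * ES
    assert rho < 1, "queue must be stable"
    return ES + lam * ES2 / (2 * (1 - rho))

def bernoulli_routing_delay(lam, p, ES, ES2):
    """Fraction p of arrivals goes to queue 1, the rest to queue 2."""
    return (p * mg1_delay(lam * p, ES, ES2)
            + (1 - p) * mg1_delay(lam * (1 - p), ES, ES2))

# Same mean service time, different variance => different mean delay:
d_det = bernoulli_routing_delay(lam=1.0, p=0.5, ES=1.0, ES2=1.0)  # deterministic
d_exp = bernoulli_routing_delay(lam=1.0, p=0.5, ES=1.0, ES2=2.0)  # exponential
```

At moderate load the two service-time models give clearly different delays; the paper's result is that this gap, and the optimal routing split, vanish as the load approaches capacity.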
A dynamic optimization model for solid waste recycling.
Anghinolfi, Davide; Paolucci, Massimo; Robba, Michela; Taramasso, Angela Celeste
2013-02-01
Recycling is an important part of waste management (which includes different kinds of issues: environmental, technological, economic, legislative, social, etc.). Unlike many works in the literature, this paper is focused on recycling management and on the dynamic optimization of materials collection. The developed dynamic decision model is characterized by state variables, corresponding to the quantity of waste in each bin on each day, and control variables determining the quantity of material collected in the area each day and the routes of the collecting vehicles. The objective function minimizes the sum of costs minus benefits. The developed decision model is integrated in a GIS-based Decision Support System (DSS). A case study related to the Cogoleto municipality is presented to show the effectiveness of the proposed model. From optimal results, it has been found that the net benefits of the optimized collection are about 2.5 times greater than those of the estimated current policy.
Analysis of Sting Balance Calibration Data Using Optimized Regression Models
NASA Technical Reports Server (NTRS)
Ulbrich, N.; Bader, Jon B.
2010-01-01
Calibration data of a wind tunnel sting balance was processed using a candidate math model search algorithm that recommends an optimized regression model for the data analysis. During the calibration the normal force and the moment at the balance moment center were selected as independent calibration variables. The sting balance itself had two moment gages. Therefore, after analyzing the connection between calibration loads and gage outputs, it was decided to choose the difference and the sum of the gage outputs as the two responses that best describe the behavior of the balance. The math model search algorithm was applied to these two responses. An optimized regression model was obtained for each response. Classical strain gage balance load transformations and the equations of the deflection of a cantilever beam under load are used to show that the search algorithm's two optimized regression models are supported by a theoretical analysis of the relationship between the applied calibration loads and the measured gage outputs. The analysis of the sting balance calibration data set is a rare example of a situation when terms of a regression model of a balance can directly be derived from first principles of physics. In addition, it is interesting to note that the search algorithm recommended the correct regression model term combinations using only a set of statistical quality metrics that were applied to the experimental data during the algorithm's term selection process.
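A candidate math model search of this general kind can be sketched as a greedy term selection driven by a penalized fit metric. The code below is a hedged illustration of my own, not NASA's algorithm: the metric (an AIC-like penalized residual) and the synthetic data are assumptions, and numpy's least squares stands in for the regression engine.

```python
# Greedy candidate-term search: keep a candidate regression term only if it
# improves a penalized fit metric on the calibration data.
import numpy as np

def fit_metric(X, y):
    """AIC-like metric: reward small residuals, penalize extra terms."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ beta) ** 2))
    n = len(y)
    return n * np.log(rss / n) + 2 * X.shape[1]

def search(terms, y):
    chosen = [np.ones_like(y)]             # always keep the intercept
    best = fit_metric(np.column_stack(chosen), y)
    kept = []
    for name, col in terms:
        trial = np.column_stack(chosen + [col])
        m = fit_metric(trial, y)
        if m < best:
            chosen.append(col); kept.append(name); best = m
    return kept

# Synthetic "calibration" data: the response depends on N and the cross
# term N*M, but not on M alone (loads N, M are hypothetical stand-ins).
rng = np.random.default_rng(0)
N, M = rng.normal(size=200), rng.normal(size=200)
y = 3.0 * N + 0.5 * N * M + rng.normal(scale=0.01, size=200)
kept = search([("N", N), ("M", M), ("N*M", N * M)], y)
```

The genuinely informative terms survive the penalty; the actual NASA algorithm uses a richer set of statistical quality metrics and candidate term hierarchies.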
Integer programming model for optimizing bus timetable using genetic algorithm
NASA Astrophysics Data System (ADS)
Wihartiko, F. D.; Buono, A.; Silalahi, B. P.
2017-01-01
A bus timetable gives passengers the information needed to ensure the availability of bus services. The timetable is optimal when the bus trip frequency adapts to passenger demand: in peak time, the number of bus trips is larger than in off-peak time. If trips are more frequent than the optimal condition requires, the operating cost for the bus operator is high; conversely, if there are fewer trips than the optimal condition requires, service quality for passengers is poor. In this paper, the bus timetabling problem is solved by an integer programming model with a modified genetic algorithm. The modifications lie in the chromosome design, the initial-population recovery technique, chromosome reconstruction, and chromosome extermination at specific generations. The model attains the optimal solution with an accuracy of 99.1%.
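A toy version of the idea can be sketched directly. The following is my own construction, far simpler than the paper's model: a genetic algorithm over an integer chromosome of trips per period, where the fitness penalizes trips above demand (operating cost) and service below demand (passenger cost). The demand profile and cost weights are assumed for illustration.

```python
# Toy GA for integer bus-trip frequencies: elitist selection, one-point
# crossover, and +/-1 point mutation on the integer chromosome.
import random

DEMAND = [4, 10, 6, 12, 5]            # required trips per period (assumed)

def cost(chrom):
    # 2 per excess trip (operator cost), 5 per missing trip (passenger cost)
    return sum(2 * max(0, c - d) + 5 * max(0, d - c)
               for c, d in zip(chrom, DEMAND))

def evolve(pop_size=40, gens=200, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 15) for _ in DEMAND] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=cost)
        survivors = pop[: pop_size // 2]            # elitist truncation
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, len(DEMAND))     # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(len(child))           # point mutation
            child[i] = max(0, child[i] + rng.choice((-1, 1)))
            children.append(child)
        pop = survivors + children
    return min(pop, key=cost)

best = evolve()   # best frequencies found by the GA
```

The paper's modifications (chromosome design, population recovery, reconstruction, extermination) replace the plain operators used here.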
Optimization of Operations Resources via Discrete Event Simulation Modeling
NASA Technical Reports Server (NTRS)
Joshi, B.; Morris, D.; White, N.; Unal, R.
1996-01-01
The resource levels required for operation and support of reusable launch vehicles are typically defined through discrete event simulation modeling. Minimizing these resources constitutes an optimization problem involving discrete variables and simulation. Conventional approaches to solve such optimization problems involving integer valued decision variables are the pattern search and statistical methods. However, in a simulation environment that is characterized by search spaces of unknown topology and stochastic measures, these optimization approaches often prove inadequate. In this paper, we have explored the applicability of genetic algorithms to the simulation domain. Genetic algorithms provide a robust search strategy that does not require continuity and differentiability of the problem domain. The genetic algorithm successfully minimized the operation and support activities for a space vehicle, through a discrete event simulation model. The practical issues associated with simulation optimization, such as stochastic variables and constraints, were also taken into consideration.
Optimizing Computing Platforms for Climate-Driven Ecological Forecasting Models
NASA Astrophysics Data System (ADS)
Farley, S. S.; Williams, J. W.
2016-12-01
Species distribution models are widely used, climate-driven ecological forecasting tools that use machine-learning techniques to predict species range shifts and ecological responses to 21st century climate change. As high-resolution modern and fossil biodiversity data becomes increasingly available and statistical learning methods become more computationally intensive, choosing the correct computing configuration on which to run these models becomes more important. With a variety of low-cost cloud and desktop computing options available, users of forecasting models must balance performance gains achieved by provisioning more powerful hardware with the cost of using these resources. We present a framework for estimating the optimal computing solution for a given modeling activity. We argue that this framework is capable of identifying the optimal computing solution - the one that maximizes model accuracy while minimizing resource cost and computing time. Our framework is built on constituent models of algorithm execution time, predictive skill, and computing cost. We demonstrate the results of the framework using four leading species distribution models: multivariate adaptive regression splines, generalized additive models, support vector machines, and boosted regression trees. The constituent models themselves are shown to have high predictive accuracy, and can be used independently to estimate the effects of using larger input datasets, such as those that incorporate data from the fossil record. When used together, our framework shows highly significant predictive ability, and is designed to be used by researchers to inform future computing provisioning strategies.
An uncertain multidisciplinary design optimization method using interval convex models
NASA Astrophysics Data System (ADS)
Li, Fangyi; Luo, Zhen; Sun, Guangyong; Zhang, Nong
2013-06-01
This article proposes an uncertain multi-objective multidisciplinary design optimization methodology, which employs the interval model to represent the uncertainties of uncertain-but-bounded parameters. The interval number programming method is applied to transform each uncertain objective function into two deterministic objective functions, and a satisfaction degree of intervals is used to convert both the uncertain inequality and equality constraints to deterministic inequality constraints. In doing so, an unconstrained deterministic optimization problem will be constructed in association with the penalty function method. The design will be finally formulated as a nested three-loop optimization, a class of highly challenging problems in the area of engineering design optimization. An advanced hierarchical optimization scheme is developed to solve the proposed optimization problem based on the multidisciplinary feasible strategy, which is a well-studied method able to reduce the dimensions of multidisciplinary design optimization problems by using the design variables as independent optimization variables. In the hierarchical optimization system, the non-dominated sorting genetic algorithm II, sequential quadratic programming method and Gauss-Seidel iterative approach are applied to the outer, middle and inner loops of the optimization problem, respectively. Typical numerical examples are used to demonstrate the effectiveness of the proposed methodology.
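The interval-number step described above can be illustrated compactly. The sketch below is my own construction under a simplifying assumption (monotone dependence on each interval parameter, so the interval of an objective is bounded by its corner values): an uncertain objective is replaced by two deterministic ones, its interval midpoint and radius.

```python
# Interval-number programming sketch: evaluate f at the corners of the
# parameter box [p_lo, p_hi] and return the midpoint and radius (half-width)
# of the resulting objective interval. Valid for monotone dependence.
from itertools import product

def interval_objectives(f, x, p_lo, p_hi):
    corners = [f(x, p) for p in product(*zip(p_lo, p_hi))]
    lo, hi = min(corners), max(corners)
    return (lo + hi) / 2.0, (hi - lo) / 2.0   # midpoint, radius

# Example objective, monotone in both uncertain parameters:
f = lambda x, p: p[0] * x ** 2 + p[1] * x
mid, rad = interval_objectives(f, 2.0, p_lo=(1.0, -0.5), p_hi=(1.5, 0.5))
```

In the article's method the midpoint and radius become the two deterministic objectives of the multi-objective problem, with constraint intervals handled analogously through a satisfaction degree.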
Correlations in state space can cause sub-optimal adaptation of optimal feedback control models.
Aprasoff, Jonathan; Donchin, Opher
2012-04-01
Control of our movements is apparently facilitated by an adaptive internal model in the cerebellum. It was long thought that this internal model implemented an adaptive inverse model and generated motor commands, but recently many reject that idea in favor of a forward model hypothesis. In theory, the forward model predicts upcoming state during reaching movements so the motor cortex can generate appropriate motor commands. Recent computational models of this process rely on the optimal feedback control (OFC) framework of control theory. While OFC is a powerful tool for describing motor control, it does not describe adaptation. Some assume that adaptation of the forward model alone could explain motor adaptation, but this is widely understood to be overly simplistic. However, an adaptive optimal controller is difficult to implement. A reasonable alternative is to allow forward model adaptation to 're-tune' the controller. Our simulations show that, as expected, forward model adaptation alone does not produce optimal trajectories during reaching movements perturbed by force fields. However, they also show that re-optimizing the controller from the forward model can be sub-optimal. This is because, in a system with state correlations or redundancies, accurate prediction requires different information than optimal control. We find that adding noise to the movements that matches noise found in human data is enough to overcome this problem. However, since the state space for control of real movements is far more complex than in our simple simulations, the effects of correlations on re-adaptation of the controller from the forward model cannot be overlooked.
Optimal policies for a finite-horizon batching inventory model
NASA Astrophysics Data System (ADS)
Al-Khamis, Talal M.; Benkherouf, Lakdere; Omar, Mohamed
2014-10-01
This paper is concerned with finding an optimal inventory policy for the integrated replenishment-production batching model of Omar and Smith (2002). Here, a company produces a single finished product which requires a single raw material and the objective is to minimise the total inventory costs over a finite planning horizon. Earlier work in the literature considered models with linear demand rate function of the finished product. This work proposes a general methodology for finding an optimal inventory policy for general demand rate functions. The proposed methodology is adapted from the recent work of Benkherouf and Gilding (2009).
The optimal inventory policy for EPQ model under trade credit
NASA Astrophysics Data System (ADS)
Chung, Kun-Jen
2010-09-01
Huang and Huang [(2008), 'Optimal Inventory Replenishment Policy for the EPQ Model Under Trade Credit without Derivatives', International Journal of Systems Science, 39, 539-546] use the algebraic method to determine the optimal inventory replenishment policy for the retailer in the extended model under trade credit. However, the algebraic method has its limit of application, such that the validity of the proofs of Theorems 1-4 in Huang and Huang (2008) is questionable. The main purpose of this article is not only to indicate these shortcomings but also to present accurate proofs for Huang and Huang (2008).
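For orientation, the classical EPQ model that these trade-credit papers extend has a closed-form optimal lot size. The sketch below is a standard textbook computation (not either paper's model): demand rate D, production rate P > D, setup cost K, and holding cost h per unit per year.

```python
# Classical EPQ: Q* = sqrt(2DK / (h(1 - D/P))), balancing setup cost
# against holding cost for a finite production rate.
from math import sqrt

def epq_lot_size(D, P, K, h):
    return sqrt(2 * D * K / (h * (1 - D / P)))

def epq_annual_cost(Q, D, P, K, h):
    return D * K / Q + h * Q * (1 - D / P) / 2   # setup + holding cost

Q_star = epq_lot_size(D=1000, P=4000, K=50, h=2)   # = sqrt(100000 / 1.5)
```

Trade-credit terms modify the holding-cost structure (capital cost during and after the credit period), which is precisely where the algebraic proofs under dispute operate.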
Optimal control design that accounts for model mismatch errors
Kim, T.J.; Hull, D.G.
1995-02-01
A new technique is presented in this paper that reduces the complexity of state differential equations while accounting for modeling assumptions. The mismatch controls are defined as the differences between the model equations and the true state equations. The performance index of the optimal control problem is formulated with a set of tuning parameters that are user-selected to tune the control solution in order to achieve the best results. Computer simulations demonstrate that the tuned control law outperforms the untuned controller and produces results that are comparable to a numerically-determined, piecewise-linear optimal controller.
Spectral optimization and uncertainty quantification in combustion modeling
NASA Astrophysics Data System (ADS)
Sheen, David Allan
Reliable simulations of reacting flow systems require a well-characterized, detailed chemical model as a foundation. Accuracy of such a model can be assured, in principle, by a multi-parameter optimization against a set of experimental data. However, the inherent uncertainties in the rate evaluations and experimental data leave a model still characterized by some finite kinetic rate parameter space. Without a careful analysis of how this uncertainty space propagates into the model's predictions, those predictions can at best be trusted only qualitatively. In this work, the Method of Uncertainty Minimization using Polynomial Chaos Expansions is proposed to quantify these uncertainties. In this method, the uncertainty in the rate parameters of the as-compiled model is quantified. Then, the model is subjected to a rigorous multi-parameter optimization, as well as a consistency-screening process. Lastly, the uncertainty of the optimized model is calculated using an inverse spectral optimization technique, and then propagated into a range of simulation conditions. An as-compiled, detailed H2/CO/C1-C4 kinetic model is combined with a set of ethylene combustion data to serve as an example. The idea that the hydrocarbon oxidation model should be understood and developed in a hierarchical fashion has been a major driving force in kinetics research for decades. How this hierarchical strategy works at a quantitative level, however, has never been addressed. In this work, we use ethylene and propane combustion as examples and explore the question of hierarchical model development quantitatively. The Method of Uncertainty Minimization using Polynomial Chaos Expansions is utilized to quantify the amount of information that a particular combustion experiment, and thereby each data set, contributes to the model. This knowledge is applied to explore the relationships among the combustion chemistry of hydrogen/carbon monoxide, ethylene, and larger alkanes. Frequently, new data will
Optimization methods and silicon solar cell numerical models
NASA Technical Reports Server (NTRS)
Girardini, K.; Jacobsen, S. E.
1986-01-01
An optimization algorithm for use with numerical silicon solar cell models was developed. By coupling an optimization algorithm with a solar cell model, it is possible to simultaneously vary design variables such as impurity concentrations, front junction depth, back junction depth, and cell thickness to maximize the predicted cell efficiency. An optimization algorithm was developed and interfaced with the Solar Cell Analysis Program in 1 Dimension (SCAP1D). SCAP1D uses finite difference methods to solve the differential equations which, along with several relations from the physics of semiconductors, describe mathematically the performance of a solar cell. A major obstacle is that the numerical methods used in SCAP1D require a significant amount of computer time, and during an optimization the model is called iteratively until the design variables converge to the values associated with the maximum efficiency. This problem was alleviated by designing an optimization code specifically for use with numerically intensive simulations, to reduce the number of times the efficiency has to be calculated to achieve convergence to the optimal solution.
Shell model of optimal passive-scalar mixing
NASA Astrophysics Data System (ADS)
Miles, Christopher; Doering, Charles
2015-11-01
Optimal mixing is significant to process engineering within industries such as food, chemical, pharmaceutical, and petrochemical. An important question in this field is "How should one stir to create a homogeneous mixture while being energetically efficient?" To answer this question, we consider an initially unmixed scalar field representing some concentration within a fluid on a periodic domain. This passive-scalar field is advected by the velocity field, our control variable, constrained by a physical quantity such as energy or enstrophy. We consider two objectives: local-in-time (LIT) optimization (what will maximize the mixing rate now?) and global-in-time (GIT) optimization (what will maximize mixing at the end time?). Throughout this work we use the H-1 mix-norm to measure mixing. To gain a better understanding, we provide a simplified mixing model by using a shell model of passive-scalar advection. LIT optimization in this shell model gives perfect mixing in finite time for the energy-constrained case and exponential decay to the perfect-mixed state for the enstrophy-constrained case. Although we only enforce that the time-average energy (or enstrophy) equals a chosen value in GIT optimization, interestingly, the optimal control keeps this value constant over time.
Data visualization optimization via computational modeling of perception.
Pineo, Daniel; Ware, Colin
2012-02-01
We present a method for automatically evaluating and optimizing visualizations using a computational model of human vision. The method relies on a neural network simulation of early perceptual processing in the retina and primary visual cortex. The neural activity resulting from viewing flow visualizations is simulated and evaluated to produce a metric of visualization effectiveness. Visualization optimization is achieved by applying this effectiveness metric as the utility function in a hill-climbing algorithm. We apply this method to the evaluation and optimization of 2D flow visualizations, using two visualization parameterizations: streaklet-based and pixel-based. An emergent property of the streaklet-based optimization is head-to-tail streaklet alignment. It had been previously hypothesized that the effectiveness of head-to-tail alignment results from the perceptual processing of the visual system, but this theory had not been computationally modeled. A second optimization using a pixel-based parameterization resulted in a LIC-like result. The implications in terms of the selection of primitives are discussed. We argue that computational models can be used for optimizing complex visualizations. In addition, we argue that they can provide a means of computationally evaluating perceptual theories of visualization, and as a method for quality control of display methods.
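The optimization loop described above is a plain hill climb over visualization parameters. Here is a schematic sketch of my own: a parameter vector is perturbed, the effectiveness metric is re-evaluated, and the change is kept only if the metric improves. A simple analytic function stands in for the neural-network perceptual metric, which is far too expensive to reproduce here.

```python
# Hill climbing on a utility (effectiveness) function: random coordinate
# perturbations, accepted only when they increase the metric.
import random

def hill_climb(effectiveness, params, step=0.1, iters=5000, seed=0):
    rng = random.Random(seed)
    best = effectiveness(params)
    for _ in range(iters):
        i = rng.randrange(len(params))
        trial = list(params)
        trial[i] += rng.uniform(-step, step)
        e = effectiveness(trial)
        if e > best:                    # keep only improving perturbations
            params, best = trial, e
    return params, best

# Stand-in perceptual metric, peaking at parameter values (0.5, 0.5):
metric = lambda p: -((p[0] - 0.5) ** 2 + (p[1] - 0.5) ** 2)
p, e = hill_climb(metric, [0.0, 0.0])
```

In the paper the parameter vector encodes streaklet or pixel properties, and each metric evaluation runs the retina/V1 simulation on the rendered image.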
Modeling and optimization of a semiregenerative catalytic naphtha reformer
Taskar, U.; Riggs, J.B.
1997-03-01
Modeling and optimization of a semiregenerative catalytic naphtha reformer has been carried out considering most of its key constituent units. A detailed kinetic scheme involving 35 pseudocomponents connected by a network of 36 reactions in the C5-C10 range was modeled using Hougen-Watson Langmuir-Hinshelwood-type reaction-rate expressions. Deactivation of the catalyst was modeled by including the corresponding equations for coking kinetics. The overall kinetic model was parameterized by bench-marking against industrial plant data using a feed-characterization procedure developed to infer the composition of the chemical species in the feed and reformate from their measured ASTM distillation data. For the initial optimization studies, a constant reactor inlet temperature configuration that would lead to optimum operation over the entire catalyst life cycle was identified. The analysis was extended to study the time-optimal control profiles of decision variables over the run length. In addition, the constant octane case was also studied. The improvement in the objective function achieved in each case was determined. Finally, the sensitivity of the optimal results to uncertainty in reactor-model parameters was evaluated.
Serial correlation in optimal design for nonlinear mixed effects models.
Nyberg, Joakim; Höglund, Richard; Bergstrand, Martin; Karlsson, Mats O; Hooker, Andrew C
2012-06-01
In population modeling two sources of variability are commonly included: inter-individual variability and residual variability. Rich sampling optimal design (more samples than model parameters) using these models will often result in a sampling schedule where some measurements are taken at exactly the same time point, thereby maximizing the signal-to-noise ratio. This behavior is a result of not appropriately taking into account error generation mechanisms and is often clinically unappealing; it may be avoided by including intrinsic variability, i.e. serially correlated residual errors. In this paper we extend previous work that investigated optimal designs of population models including serial correlation using stochastic differential equations to optimal design with the more robust, and analytic, AR(1) autocorrelation model. Further, we investigate the importance of correlation strength, design criteria and robust designs. Finally, we explore the optimal design properties when estimating parameters with and without serial correlation. In the investigated examples the designs and estimation performance differ significantly when handling serial correlation.
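The mechanism behind the design change can be seen from the residual covariance structure. The sketch below is my own construction of a serially correlated residual model with exponentially decaying (continuous-time AR(1)) autocorrelation between residuals at sampling times t_i and t_j; the variance and correlation time are illustrative. Replicated time points carry fully correlated errors, which is why optimal designs stop stacking samples at a single time.

```python
# Covariance of serially correlated residuals: sigma^2 * exp(-|ti - tj|/tau).
from math import exp

def ar1_covariance(times, sigma=1.0, tau=2.0):
    return [[sigma ** 2 * exp(-abs(ti - tj) / tau) for tj in times]
            for ti in times]

# Two replicate samples at t = 0, plus samples at t = 1 and t = 4:
C = ar1_covariance([0.0, 0.0, 1.0, 4.0])
```

With uncorrelated residuals C would be diagonal and replicates would add independent information; under serial correlation the replicates at t = 0 are redundant, pushing the optimal design toward spread-out sampling times.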
Optimization Method for Solution Model of Laser Tracker Multilateration Measurement
NASA Astrophysics Data System (ADS)
Chen, Hongfang; Tan, Zhi; Shi, Zhaoyao; Song, Huixu; Yan, Hao
2016-08-01
Multilateration measurement using laser trackers suffers from a cumbersome solution method for high-precision measurements. Errors are induced by the self-calibration routines of the laser tracker software. This paper describes an optimization solution model for laser tracker multilateration measurement, which effectively inhibits the negative effect of this self-calibration, and further analyzes the accuracy of the singular value decomposition for the described solution model. Experimental verification of the solution model based on a laser tracker and a coordinate measuring machine (CMM) was performed. The experimental results show that the described optimization model for laser tracker multilateration measurement has good accuracy control and has potentially broad application in the field of laser tracker spatial localization.
Applied topology optimization of vibro-acoustic hearing instrument models
NASA Astrophysics Data System (ADS)
Søndergaard, Morten Birkmose; Pedersen, Claus B. W.
2014-02-01
Designing hearing instruments remains an acoustic challenge, as users request small designs for comfortable wear and cosmetic appeal while at the same time requiring sufficient amplification from the device. To ensure proper amplification, a critical design challenge in the hearing instrument is to minimize the feedback between the outputs (generated sound and vibrations) from the receiver looping back into the microphones. Traditionally, the feedback signal is minimized using time-consuming trial-and-error design procedures on physical prototypes and on virtual models using finite element analysis. In the present work it is demonstrated that structural topology optimization of vibro-acoustic finite element models can be used both to sufficiently minimize the feedback signal and to reduce the time-consuming trial-and-error design approach. The structural topology optimization of a vibro-acoustic finite element model is shown for an industrial full-scale model hearing instrument.
Turbulence Model Discovery with Data-Driven Learning and Optimization
NASA Astrophysics Data System (ADS)
King, Ryan; Hamlington, Peter
2016-11-01
Data-driven techniques have emerged as a useful tool for model development in applications where first-principles approaches are intractable. In this talk, data-driven multi-task learning techniques are used to discover flow-specific optimal turbulence closure models. We use the recently introduced autonomic closure technique to pose an online supervised learning problem created by test filtering turbulent flows in the self-similar inertial range. The autonomic closure is modified to solve the learning problem for all stress components simultaneously with multi-task learning techniques. The closure is further augmented with a feature extraction step that learns a set of orthogonal modes that are optimal at predicting the turbulent stresses. We demonstrate that these modes can be severely truncated to enable drastic reductions in computational costs without compromising the model accuracy. Furthermore, we discuss the potential universality of the extracted features and implications for reduced order modeling of other turbulent flows.
Time dependent optimal switching controls in online selling models
Bradonjic, Milan; Cohen, Albert
2010-01-01
We present a method to incorporate dishonesty in online selling via a stochastic optimal control problem. In our framework, the seller wishes to maximize her average wealth level W at a fixed time T of her choosing. The corresponding Hamilton-Jacobi-Bellmann (HJB) equation is analyzed for a basic case. For more general models, the admissible control set is restricted to a jump process that switches between extreme values. We propose a new approach, where the optimal control problem is reduced to a multivariable optimization problem.
Optimal experiment design for model selection in biochemical networks
2014-01-01
Background: Mathematical modeling is often used to formalize hypotheses on how a biochemical network operates by discriminating between competing models. Bayesian model selection offers a way to determine the amount of evidence that data provides to support one model over the other while favoring simple models. In practice, the amount of experimental data is often insufficient to make a clear distinction between competing models. Often one would like to perform a new experiment which would discriminate between competing hypotheses.

Results: We developed a novel method to perform Optimal Experiment Design to predict which experiments would most effectively allow model selection. A Bayesian approach is applied to infer model parameter distributions. These distributions are sampled and used to simulate from multivariate predictive densities. The method is based on a k-Nearest Neighbor estimate of the Jensen-Shannon divergence between the multivariate predictive densities of competing models.

Conclusions: We show that the method successfully uses predictive differences to enable model selection by applying it to several test cases. Because the design criterion is based on predictive distributions, which can be computed for a wide range of model quantities, the approach is very flexible. The method reveals specific combinations of experiments which improve discriminability even in cases where data is scarce. The proposed approach can be used in conjunction with existing Bayesian methodologies where (approximate) posteriors have been determined, making use of relations that exist within the inferred posteriors. PMID:24555498
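As a simplified illustration of such a design criterion (a one-dimensional histogram version, not the paper's k-nearest-neighbor multivariate estimator), SciPy's `jensenshannon` can quantify how far apart the predictive densities of two hypothetical competing models are; the larger the divergence, the better the proposed experiment discriminates:

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

rng = np.random.default_rng(1)
# Hypothetical predictive samples for one observable under two competing models
pred_a = rng.normal(0.0, 1.0, 5000)   # model A predictive samples
pred_b = rng.normal(1.5, 1.0, 5000)   # model B predictive samples

# Bin both sample sets on a common grid
bins = np.linspace(-5.0, 7.0, 61)
p, _ = np.histogram(pred_a, bins=bins)
q, _ = np.histogram(pred_b, bins=bins)

# jensenshannon returns the JS *distance* (sqrt of the divergence, base e)
# and normalizes the count vectors internally
js_div = jensenshannon(p, q) ** 2
print(js_div)  # larger divergence -> the experiment discriminates better
```

The JS divergence is bounded above by ln 2, which makes scores comparable across candidate experiments.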
Pumping Optimization Model for Pump and Treat Systems - 15091
Baker, S.; Ivarson, Kristine A.; Karanovic, M.; Miller, Charles W.; Tonkin, M.
2015-01-15
Pump and Treat systems are being utilized to remediate contaminated groundwater in the Hanford 100 Areas adjacent to the Columbia River in Eastern Washington. Design of the systems was supported by a three-dimensional (3D) fate and transport model. This model provided sophisticated simulation capabilities but requires many hours to calculate results for each simulation considered. Many simulations are required to optimize system performance, so a two-dimensional (2D) model was created to reduce run time. The 2D model was developed as an equivalent-property version of the 3D model that derives boundary conditions and aquifer properties from the 3D model. It produces predictions that are very close to the 3D model predictions, allowing it to be used for comparative remedy analyses. Any potential system modifications identified by using the 2D version are verified for use by running the 3D model to confirm performance. The 2D model was incorporated into a comprehensive analysis system (the Pumping Optimization Model, POM) to simplify analysis of multiple simulations. It allows rapid turnaround by utilizing a graphical user interface that (1) allows operators to create hypothetical scenarios for system operation, (2) feeds the input to the 2D fate and transport model, and (3) displays the scenario results to evaluate performance improvement. All of the above is accomplished within the user interface. Complex analyses can be completed within a few hours and multiple simulations can be compared side-by-side. The POM utilizes standard office computing equipment and established groundwater modeling software.
Aeroelastic Optimization Study Based on X-56A Model
NASA Technical Reports Server (NTRS)
Li, Wesley; Pak, Chan-Gi
2014-01-01
A design process which incorporates the object-oriented multidisciplinary design, analysis, and optimization (MDAO) tool and the aeroelastic effects of high-fidelity finite element models to characterize the design space was successfully developed and established. Two multidisciplinary design optimization studies using an object-oriented MDAO tool developed at NASA Armstrong Flight Research Center are presented. The first study demonstrates the use of aeroelastic tailoring concepts to minimize the structural weight while meeting the design requirements including strength, buckling, and flutter. A hybrid and discretization optimization approach was implemented to improve the accuracy and computational efficiency of a global optimization algorithm. The second study presents a flutter mass-balancing optimization study. The results provide guidance to modify the fabricated flexible wing design and move the design flutter speeds back into the flight envelope so that the original objective of the X-56A flight test can be accomplished.
Optimization methods and silicon solar cell numerical models
NASA Technical Reports Server (NTRS)
Girardini, K.
1986-01-01
The goal of this project is the development of an optimization algorithm for use with a solar cell model. It is possible to simultaneously vary design variables such as impurity concentrations, front junction depth, back junction depth, and cell thickness to maximize the predicted cell efficiency. An optimization algorithm has been developed and interfaced with the Solar Cell Analysis Program in 1 Dimension (SCAPID). SCAPID uses finite difference methods to solve the differential equations which, along with several relations from the physics of semiconductors, describe mathematically the operation of a solar cell. A major obstacle is that the numerical methods used in SCAPID require a significant amount of computer time, and during an optimization the model is called iteratively until the design variables converge to the value associated with the maximum efficiency. This problem has been alleviated by designing an optimization code specifically for use with numerically intensive simulations, to reduce the number of times the efficiency has to be calculated to achieve convergence to the optimal solution. Adapting SCAPID so that it could be called iteratively by the optimization code provided another means of reducing the CPU time required to complete an optimization. Instead of calculating the entire I-V curve, as is usually done in SCAPID, only the efficiency is calculated (maximum power voltage and current), and the solution from previous calculations is used to initiate the next solution.
Velocity model optimization for surface microseismic monitoring via amplitude stacking
NASA Astrophysics Data System (ADS)
Jiang, Haiyu; Wang, Zhongren; Zeng, Xiaoxian; Lü, Hao; Zhou, Xiaohua; Chen, Zubin
2016-12-01
A usable velocity model in microseismic projects plays a crucial role in achieving statistically reliable microseismic event locations. Existing methods for velocity model optimization rely mainly on picking arrival times at individual receivers. However, for microseismic monitoring with surface stations, seismograms of perforation shots have such low signal-to-noise ratios (S/N) that they do not yield sufficiently reliable picks. In this study, we develop a framework for constructing a 1-D flat-layered a priori velocity model using a non-linear optimization technique based on amplitude stacking. The energy focusing of the perforation shot is improved thanks to very fast simulated annealing (VFSA), and the accuracies of shot relocations are used to evaluate whether the resultant velocity model can be used for microseismic event location. Our method also includes a conventional migration-based location technique that utilizes successive grid subdivisions to improve computational efficiency and source location accuracy. Because unreasonable a priori velocity model information and interference due to additive noise are the major contributors to inaccuracies in perforation shot locations, we use velocity model optimization as a compensation scheme. Using synthetic tests, we show that accurate locations of perforation shots can be recovered to within 2 m, even with pre-stack S/N ratios as low as 0.1 at individual receivers. By applying the technique to a coal-bed gas reservoir in Western China, we demonstrate that perforation shot location can be recovered to within the tolerance of the well tip location.
Gravitational Lens Modeling with Genetic Algorithms and Particle Swarm Optimizers
NASA Astrophysics Data System (ADS)
Rogers, Adam; Fiege, Jason D.
2011-02-01
Strong gravitational lensing of an extended object is described by a mapping from source to image coordinates that is nonlinear and cannot generally be inverted analytically. Determining the structure of the source intensity distribution also requires a description of the blurring effect due to a point-spread function. This initial study uses an iterative gravitational lens modeling scheme based on the semilinear method to determine the linear parameters (source intensity profile) of a strongly lensed system. Our "matrix-free" approach avoids construction of the lens and blurring operators while retaining the least-squares formulation of the problem. The parameters of an analytical lens model are found through nonlinear optimization by an advanced genetic algorithm (GA) and particle swarm optimizer (PSO). These global optimization routines are designed to explore the parameter space thoroughly, mapping model degeneracies in detail. We develop a novel method that determines the L-curve for each solution automatically, which represents the trade-off between the image χ2 and regularization effects, and allows an estimate of the optimally regularized solution for each lens parameter set. In the final step of the optimization procedure, the lens model with the lowest χ2 is used while the global optimizer solves for the source intensity distribution directly. This allows us to accurately determine the number of degrees of freedom in the problem to facilitate comparison between lens models and enforce positivity on the source profile. In practice, we find that the GA conducts a more thorough search of the parameter space than the PSO.
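The L-curve trade-off described above can be sketched on a toy ill-posed linear problem (the operator and scales here are invented stand-ins for the lens and blurring operators): as the Tikhonov regularization weight lambda grows, the residual norm rises while the solution norm falls, and the L-curve is the resulting trade-off curve.

```python
import numpy as np

rng = np.random.default_rng(2)
# Toy ill-conditioned operator standing in for lensing + blurring (hypothetical)
A = rng.normal(size=(60, 40)) @ np.diag(1.0 / np.arange(1, 41) ** 2)
x_true = rng.normal(size=40)
b = A @ x_true + 0.01 * rng.normal(size=60)

def tikhonov(lam):
    """argmin_x ||A x - b||^2 + lam * ||x||^2 via the normal equations."""
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)

# Trace the L-curve: residual norm vs. solution norm over a sweep of lambda
lams = np.logspace(-8, 2, 21)
residual = [np.linalg.norm(A @ tikhonov(l) - b) for l in lams]
solution = [np.linalg.norm(tikhonov(l)) for l in lams]
# increasing lambda trades data fit (residual up) for regularity (norm down)
```

The "corner" of this curve, where both quantities are moderate, is the usual heuristic choice of the optimally regularized solution.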
Groundwater Pollution Source Identification using Linked ANN-Optimization Model
NASA Astrophysics Data System (ADS)
Ayaz, Md; Srivastava, Rajesh; Jain, Ashu
2014-05-01
Groundwater is the principal source of drinking water in several parts of the world. Contamination of groundwater has become a serious health and environmental problem today. Human activities, including industrial and agricultural activities, are generally responsible for this contamination. Identification of the groundwater pollution source is a major step in groundwater pollution remediation. Complete knowledge of the pollution source in terms of its source characteristics is essential to adopt an effective remediation strategy. A groundwater pollution source is said to be identified completely when the source characteristics - location, strength and release period - are known. Identification of an unknown groundwater pollution source is an ill-posed inverse problem. It becomes more difficult under real field conditions, when the lag time between the first reading at the observation well and the time at which the source becomes active is not known. We developed a linked ANN-Optimization model for complete identification of an unknown groundwater pollution source. The model comprises two parts: an optimization model and an ANN model. Decision variables of the linked ANN-Optimization model contain the source location and release period of the pollution source. An objective function is formulated using the spatial and temporal data of observed and simulated concentrations, and then minimized to identify the pollution source parameters. The formulation of the objective function requires the lag time, which is not known. An ANN model with one hidden layer is trained using the Levenberg-Marquardt algorithm to find the lag time. Different combinations of source locations and release periods are used as inputs, and the lag time is obtained as the output. Performance of the proposed model is evaluated for two- and three-dimensional cases with error-free and erroneous data. Erroneous data were generated by adding uniformly distributed random error (error level 0-10%) to the analytically computed concentrations.
Optimizing the Teaching-Learning Process Through a Linear Programming Model--Stage Increment Model.
ERIC Educational Resources Information Center
Belgard, Maria R.; Min, Leo Yoon-Gee
An operations research method to optimize the teaching-learning process is introduced in this paper. In particular, a linear programming model is proposed which, unlike dynamic or control theory models, allows the computer to react to the responses of a learner in seconds or less. To satisfy the assumptions of linearity, the seemingly complicated…
Geometry Modeling and Grid Generation for Design and Optimization
NASA Technical Reports Server (NTRS)
Samareh, Jamshid A.
1998-01-01
Geometry modeling and grid generation (GMGG) have played and will continue to play an important role in computational aerosciences. During the past two decades, tremendous progress has occurred in GMGG; however, GMGG is still the biggest bottleneck to routine applications for complicated Computational Fluid Dynamics (CFD) and Computational Structures Mechanics (CSM) models for analysis, design, and optimization. We are still far from incorporating GMGG tools in a design and optimization environment for complicated configurations. It is still a challenging task to parameterize an existing model in today's Computer-Aided Design (CAD) systems, and the models created are not always good enough for automatic grid generation tools. Designers may believe their models are complete and accurate, but unseen imperfections (e.g., gaps, unwanted wiggles, free edges, slivers, and transition cracks) often cause problems in gridding for CSM and CFD. Despite many advances in grid generation, the process is still the most labor-intensive and time-consuming part of the computational aerosciences for analysis, design, and optimization. In an ideal design environment, a design engineer would use a parametric model to evaluate alternative designs effortlessly and optimize an existing design for a new set of design objectives and constraints. For this ideal environment to be realized, the GMGG tools must have the following characteristics: (1) be automated, (2) provide consistent geometry across all disciplines, (3) be parametric, and (4) provide sensitivity derivatives. This paper will review the status of GMGG for analysis, design, and optimization processes, and it will focus on some emerging ideas that will advance the GMGG toward the ideal design environment.
An internet graph model based on trade-off optimization
NASA Astrophysics Data System (ADS)
Alvarez-Hamelin, J. I.; Schabanel, N.
2004-03-01
This paper presents a new model for the Internet graph (AS graph) based on the concept of heuristic trade-off optimization, introduced by Fabrikant, Koutsoupias and Papadimitriou in [CITE] to grow a random tree with a heavily tailed degree distribution. We propose here a generalization of this approach to generate a general graph, as a candidate for modeling the Internet. We present the results of our simulations and an analysis of the standard parameters measured in our model, compared with measurements from the physical Internet graph.
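A minimal sketch of the underlying heuristic trade-off tree (the Fabrikant-Koutsoupias-Papadimitriou construction that the paper generalizes; the parameter values here are arbitrary): each arriving node attaches to the existing node that minimizes a weighted sum of Euclidean distance and hop count to the root.

```python
import math
import random

def fkp_tree(n, alpha, seed=0):
    """FKP-style trade-off tree: node u, placed at a random point, attaches
    to the existing node v minimizing alpha * dist(u, v) + hops(v),
    where hops(v) is v's hop count to the root."""
    rnd = random.Random(seed)
    pts = [(rnd.random(), rnd.random()) for _ in range(n)]
    parent, hops = {0: None}, {0: 0}
    for u in range(1, n):
        v = min(range(u),
                key=lambda w: alpha * math.dist(pts[u], pts[w]) + hops[w])
        parent[u], hops[u] = v, hops[v] + 1
    return parent

tree = fkp_tree(200, alpha=4.0)
children = {}
for u, v in tree.items():
    if v is not None:
        children[v] = children.get(v, 0) + 1
# a modest alpha keeps the tree shallow, so a few hub nodes collect many links
print(max(children.values()))
```

Varying alpha moves the construction between a star (distance irrelevant) and a spanning tree of near neighbors, which is what produces the heavy-tailed degree distributions.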
Verifying and Validating Proposed Models for FSW Process Optimization
NASA Technical Reports Server (NTRS)
Schneider, Judith
2008-01-01
This slide presentation reviews Friction Stir Welding (FSW) and the attempts to model the process in order to optimize and improve the process. The studies are ongoing to validate and refine the model of metal flow in the FSW process. There are slides showing the conventional FSW process, a couple of weld tool designs and how the design interacts with the metal flow path. The two basic components of the weld tool are shown, along with geometries of the shoulder design. Modeling of the FSW process is reviewed. Other topics include (1) Microstructure features, (2) Flow Streamlines, (3) Steady-state Nature, and (4) Grain Refinement Mechanisms
HIV dynamics: Modeling, data analysis, and optimal treatment protocols
NASA Astrophysics Data System (ADS)
Adams, B. M.; Banks, H. T.; Davidian, M.; Kwon, Hee-Dae; Tran, H. T.; Wynne, S. N.; Rosenberg, E. S.
2005-12-01
We present an overview of some concepts and methodologies we believe useful in modeling HIV pathogenesis. After a brief discussion of motivation for and previous efforts in the development of mathematical models for progression of HIV infection and treatment, we discuss mathematical and statistical ideas relevant to Structured Treatment Interruptions (STI). Among these are model development and validation procedures including parameter estimation, data reduction and representation, and optimal control relative to STI. Results from initial attempts in each of these areas by an interdisciplinary team of applied mathematicians, statisticians and clinicians are presented.
Cost Optimization Model for Business Applications in Virtualized Grid Environments
NASA Astrophysics Data System (ADS)
Strebel, Jörg
The advent of Grid computing gives enterprises an ever increasing choice of computing options, yet research has so far hardly addressed the problem of mixing the different computing options in a cost-minimal fashion. The following paper presents a comprehensive cost model and a mixed integer optimization model which can be used to minimize the IT expenditures of an enterprise and help in decision-making when to outsource certain business software applications. A sample scenario is analyzed and promising cost savings are demonstrated. Possible applications of the model to future research questions are outlined.
Electrochemical model based charge optimization for lithium-ion batteries
NASA Astrophysics Data System (ADS)
Pramanik, Sourav; Anwar, Sohel
2016-05-01
In this paper, we propose the design of a novel optimal strategy for charging the lithium-ion battery based on an electrochemical battery model that is aimed at improved performance. A performance index that aims at minimizing the charging effort along with a minimum deviation from the rated maximum thresholds for cell temperature and charging current has been defined. The method proposed in this paper aims at achieving a faster charging rate while maintaining safe limits for various battery parameters. Safe operation of the battery is achieved by including the battery bulk temperature as a control component in the performance index, which is of critical importance for electric vehicles. Another important aspect of the performance objective proposed here is the efficiency of the algorithm, which would allow higher charging rates without compromising the internal electrochemical kinetics of the battery; this would prevent abusive conditions, thereby improving long-term durability. A more realistic model, based on battery electrochemistry, has been used for the design of the optimal algorithm as opposed to the conventional equivalent circuit models. To solve the optimization problem, Pontryagin's principle has been used, which is very effective for constrained optimization problems with both state and input constraints. Simulation results show that the proposed optimal charging algorithm is capable of shortening the charging time of a lithium-ion cell while maintaining the temperature constraint when compared with standard constant-current charging. The designed method also maintains the internal states within limits that can avoid abusive operating conditions.
Risk-based Multiobjective Optimization Model for Bridge Maintenance Planning
Yang, I-T.; Hsu, Y.-S.
2010-05-21
Determining the optimal maintenance plan is essential for successful bridge management. The optimization objectives are defined in the forms of minimizing life-cycle cost and maximizing performance indicators. Previous bridge maintenance models assumed the process of bridge deterioration and the estimate of maintenance cost to be deterministic, i.e., known with certainty. This assumption, however, is invalid, especially with estimates over the long time horizon of a bridge's life. In this study, we consider the risks associated with bridge deterioration and maintenance cost in determining the optimal maintenance plan. The decision variables include the strategic choice of essential maintenance (such as silane treatment and cathodic protection) and the intervals between periodic maintenance. An epsilon-constrained Particle Swarm Optimization algorithm is used to approximate the tradeoff between life-cycle cost and performance indicators. During the stochastic search for optimal solutions, Monte Carlo simulation is used to evaluate the impact of risks on the objective values at an acceptable level of reliability. The proposed model can help decision makers select a compromise maintenance plan from a group of alternative choices, each of which leads to a different level of performance and life-cycle cost. A numerical example is used to illustrate the proposed model.
Effective and efficient algorithm for multiobjective optimization of hydrologic models
NASA Astrophysics Data System (ADS)
Vrugt, Jasper A.; Gupta, Hoshin V.; Bastidas, Luis A.; Bouten, Willem; Sorooshian, Soroosh
2003-08-01
Practical experience with the calibration of hydrologic models suggests that any single-objective function, no matter how carefully chosen, is often inadequate to properly measure all of the characteristics of the observed data deemed to be important. One strategy to circumvent this problem is to define several optimization criteria (objective functions) that measure different (complementary) aspects of the system behavior and to use multicriteria optimization to identify the set of nondominated, efficient, or Pareto optimal solutions. In this paper, we present an efficient and effective Markov Chain Monte Carlo sampler, entitled the Multiobjective Shuffled Complex Evolution Metropolis (MOSCEM) algorithm, which is capable of solving the multiobjective optimization problem for hydrologic models. MOSCEM is an improvement over the Shuffled Complex Evolution Metropolis (SCEM-UA) global optimization algorithm, using the concept of Pareto dominance (rather than direct single-objective function evaluation) to evolve the initial population of points toward a set of solutions stemming from a stable distribution (Pareto set). The efficacy of the MOSCEM-UA algorithm is compared with the original MOCOM-UA algorithm for three hydrologic modeling case studies of increasing complexity.
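The Pareto-dominance comparison at the heart of such multiobjective samplers is simple to state; a minimal generic sketch (not the MOSCEM implementation) for minimization problems:

```python
def dominates(a, b):
    """a dominates b (minimization): a is no worse in every objective
    and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the nondominated (Pareto-optimal) points of a population."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Toy population of (objective_1, objective_2) evaluations
pop = [(1.0, 5.0), (2.0, 2.0), (5.0, 1.0), (3.0, 3.0), (2.0, 6.0)]
print(pareto_front(pop))  # -> [(1.0, 5.0), (2.0, 2.0), (5.0, 1.0)]
```

Evolving the population by dominance rather than by a single scalar objective is what lets the sampler approximate the whole Pareto set instead of one compromise point.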
Hyperopt: a Python library for model selection and hyperparameter optimization
NASA Astrophysics Data System (ADS)
Bergstra, James; Komer, Brent; Eliasmith, Chris; Yamins, Dan; Cox, David D.
2015-01-01
Sequential model-based optimization (also known as Bayesian optimization) is one of the most efficient methods (per function evaluation) of function minimization. This efficiency makes it appropriate for optimizing the hyperparameters of machine learning algorithms that are slow to train. The Hyperopt library provides algorithms and parallelization infrastructure for performing hyperparameter optimization (model selection) in Python. This paper presents an introductory tutorial on the usage of the Hyperopt library, including the description of search spaces, minimization (in serial and parallel), and the analysis of the results collected in the course of minimization. This paper also gives an overview of Hyperopt-Sklearn, a software project that provides automatic algorithm configuration of the Scikit-learn machine learning library. Following Auto-Weka, we take the view that the choice of classifier and even the choice of preprocessing module can be taken together to represent a single large hyperparameter optimization problem. We use Hyperopt to define a search space that encompasses many standard components (e.g. SVM, RF, KNN, PCA, TFIDF) and common patterns of composing them together. We demonstrate, using search algorithms in Hyperopt and standard benchmarking data sets (MNIST, 20-newsgroups, convex shapes), that searching this space is practical and effective. In particular, we improve on best-known scores for the model space for both MNIST and convex shapes. The paper closes with some discussion of ongoing and future work.
Metabolic engineering with multi-objective optimization of kinetic models.
Villaverde, Alejandro F; Bongard, Sophia; Mauch, Klaus; Balsa-Canto, Eva; Banga, Julio R
2016-03-20
Kinetic models have a great potential for metabolic engineering applications. They can be used for testing which genetic and regulatory modifications can increase the production of metabolites of interest, while simultaneously monitoring other key functions of the host organism. This work presents a methodology for increasing productivity in biotechnological processes exploiting dynamic models. It uses multi-objective dynamic optimization to identify the combination of targets (enzymatic modifications) and the degree of up- or down-regulation that must be performed in order to optimize a set of pre-defined performance metrics subject to process constraints. The capabilities of the approach are demonstrated on a realistic and computationally challenging application: a large-scale metabolic model of Chinese Hamster Ovary cells (CHO), which are used for antibody production in a fed-batch process. The proposed methodology manages to provide a sustained and robust growth in CHO cells, increasing productivity while simultaneously increasing biomass production, product titer, and keeping the concentrations of lactate and ammonia at low values. The approach presented here can be used for optimizing metabolic models by finding the best combination of targets and their optimal level of up/down-regulation. Furthermore, it can accommodate additional trade-offs and constraints with great flexibility.
To the optimization problem in minority game model
Yanishevsky, Vasyl
2009-12-14
The article presents results for the optimization problem in the minority game model in a Gaussian approximation, using one-step replica symmetry breaking (1RSB). A comparison with the replica-symmetric (RS) approximation and with results from the literature obtained by other methods is presented.
To the optimization problem in minority game model
NASA Astrophysics Data System (ADS)
Yanishevsky, Vasyl
2009-12-01
The article presents results for the optimization problem in the minority game model in a Gaussian approximation, using one-step replica symmetry breaking (1RSB). The results are compared with the replica-symmetric (RS) approximation and with results from the literature obtained by other methods.
Discover for Yourself: An Optimal Control Model in Insect Colonies
ERIC Educational Resources Information Center
Winkel, Brian
2013-01-01
We describe the enlightening path of self-discovery afforded to the teacher of undergraduate mathematics. This is demonstrated as we find and develop background material on an application of optimal control theory to model the evolutionary strategy of an insect colony to produce the maximum number of queen or reproducer insects in the colony at…
Water-resources optimization model for Santa Barbara, California
Nishikawa, T.
1998-01-01
A simulation-optimization model has been developed for the optimal management of the city of Santa Barbara's water resources during a drought. The model, which links groundwater simulation with linear programming, has a planning horizon of 5 years. The objective is to minimize the cost of water supply subject to: water demand constraints, hydraulic head constraints to control seawater intrusion, and water capacity constraints. The decision variables are monthly water deliveries from surface water and groundwater. The state variables are hydraulic heads. The drought of 1947-51 is the city's worst drought on record, and simulated surface-water supplies for this period were used as a basis for testing optimal management of current water resources under drought conditions. The simulation-optimization model was applied using three reservoir operation rules. In addition, the model's sensitivity to demand, carryover [the storage of water in one year for use in a later year or years], head constraints, and capacity constraints was tested.
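The optimization core of such a model is a linear program. The sketch below illustrates just the least-cost idea for a single period: with one demand constraint, filling sources in order of increasing unit cost is optimal. All names and numbers (costs, capacities, demand) are illustrative assumptions, not values from the Santa Barbara study, which solves a multi-period problem with hydraulic-head constraints.

```python
# Least-cost allocation sketch: meet a water demand from several sources,
# cheapest source first. Costs, capacities, and demand are hypothetical.
cost = {"surface": 120.0, "groundwater": 45.0}      # $/acre-ft
capacity = {"surface": 80.0, "groundwater": 60.0}   # acre-ft/month
demand = 100.0                                      # acre-ft/month

delivery = {}
remaining = demand
for source in sorted(cost, key=cost.get):           # cheapest source first
    delivery[source] = min(capacity[source], remaining)
    remaining -= delivery[source]

total_cost = sum(cost[s] * delivery[s] for s in delivery)
# groundwater is pumped to capacity; surface water covers the remainder
```

For a single demand constraint this greedy fill coincides with the LP optimum; the head and multi-period constraints of the real model require a full LP solver.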
Optimal Control of a Dengue Epidemic Model with Vaccination
NASA Astrophysics Data System (ADS)
Rodrigues, Helena Sofia; Monteiro, M. Teresa T.; Torres, Delfim F. M.
2011-09-01
We present a SIR+ASI epidemic model to describe the interaction between human and dengue fever mosquito populations. A control strategy in the form of vaccination, to decrease the number of infected individuals, is used. An optimal control approach is applied in order to find the best way to fight the disease.
USMC Inventory Control Using Optimization Modeling and Discrete Event Simulation
2016-09-01
of the optimization model provides important input information to the DES and vice versa. A DES only "replays" the process in accordance with the... memory permit, and is the approach used in this thesis. 32 3. Concept Example Figure 11 provides a simple example of the overall OMT construct process
Analytical models integrated with satellite images for optimized pest management
USDA-ARS?s Scientific Manuscript database
The global field protection (GFP) was developed to protect and optimize pest management resources integrating satellite images for precise field demarcation with physical models of controlled release devices of pesticides to protect large fields. The GFP was implemented using a graphical user interf...
Optimizing Maintenance: Models with Applications to Marine Industry
1992-09-01
Optimizing Maintenance: No. 8B-I Models with Applications to Marine Industry. Dr. Bahadir Inozu, Associate Member, University of New Orleans, and Dr. Nejat... "Replacement Under Capital Rationing Constraints", Technical Report 91-22, Department of In... 347, 1977. 23. Inözü, Bahadir, "Reliability and Re...
Discover for Yourself: An Optimal Control Model in Insect Colonies
ERIC Educational Resources Information Center
Winkel, Brian
2013-01-01
We describe the enlightening path of self-discovery afforded to the teacher of undergraduate mathematics. This is demonstrated as we find and develop background material on an application of optimal control theory to model the evolutionary strategy of an insect colony to produce the maximum number of queen or reproducer insects in the colony at…
Optimal tree increment models for the Northeastern United States
Don C. Bragg
2003-01-01
I used the potential relative increment (PRI) methodology to develop optimal tree diameter growth models for the Northeastern United States. Thirty species from the Eastwide Forest Inventory Database yielded 69,676 individuals, which were then reduced to fast-growing subsets for PRI analysis. For instance, only 14 individuals from the greater than 6,300-tree eastern...
Optimal Tree Increment Models for the Northeastern United States
Don C. Bragg
2005-01-01
I used the potential relative increment (PRI) methodology to develop optimal tree diameter growth models for the Northeastern United States. Thirty species from the Eastwide Forest Inventory Database yielded 69,676 individuals, which were then reduced to fast-growing subsets for PRI analysis. For instance, only 14 individuals from the greater than 6,300-tree eastern...
Parallel Optimization of 3D Cardiac Electrophysiological Model Using GPU.
Xia, Yong; Wang, Kuanquan; Zhang, Henggui
2015-01-01
Large-scale 3D virtual heart model simulations are highly demanding in computational resources. This poses a major challenge for traditional CPU-based computing resources, which either cannot meet the full computational demand or are not easily available due to high costs. GPU as a parallel computing environment therefore provides an alternative for solving the large-scale computational problems of whole-heart modeling. In this study, using a 3D sheep atrial model as a test bed, we developed a GPU-based simulation algorithm to simulate the conduction of electrical excitation waves in the 3D atria. In the GPU algorithm, a multicellular tissue model was split into two components: one is the single-cell model (ordinary differential equation) and the other is the diffusion term of the monodomain model (partial differential equation). Such a decoupling enabled realization of the GPU parallel algorithm. Furthermore, several optimization strategies were proposed based on the features of the virtual heart model, which enabled a 200-fold speedup as compared to a CPU implementation. In conclusion, an optimized GPU algorithm has been developed that provides an economical and powerful platform for 3D whole-heart simulations.
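The ODE/PDE decoupling described in the abstract can be sketched in a few lines. Here a toy linear decay stands in for the atrial ionic cell model, and a 1D fiber stands in for the 3D tissue; all parameter values are illustrative. The point is that within each sub-step the node updates are independent, which is what makes the scheme map naturally to one GPU thread per node.

```python
# Operator-splitting sketch: advance the per-node "cell" ODE (reaction),
# then the diffusion term of the monodomain equation, per time step.
def step(v, dt=0.01, dx=1.0, D=1.0, k=1.0):
    # 1) reaction: single-cell ODE, independent per node (parallelizable);
    #    a linear decay stands in for the ionic model here
    v = [vi - dt * k * vi for vi in v]
    # 2) diffusion: explicit finite differences, no-flux (reflecting) ends
    padded = [v[0]] + v + [v[-1]]
    return [v[i] + dt * D / dx**2 * (padded[i] - 2.0 * v[i] + padded[i + 2])
            for i in range(len(v))]

v = [1.0] + [0.0] * 9          # stimulate the left end of a 10-node fiber
for _ in range(100):
    v = step(v)
# the stimulus decays (reaction) while spreading down the fiber (diffusion)
```

With reflecting ends the diffusion sub-step conserves the total over the fiber exactly, so after n steps the sum equals the initial sum times (1 - k*dt)**n, a useful correctness check for the splitting.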
Parallel Optimization of 3D Cardiac Electrophysiological Model Using GPU
Xia, Yong; Zhang, Henggui
2015-01-01
Large-scale 3D virtual heart model simulations are highly demanding in computational resources. This poses a major challenge for traditional CPU-based computing resources, which either cannot meet the full computational demand or are not easily available due to high costs. GPU as a parallel computing environment therefore provides an alternative for solving the large-scale computational problems of whole-heart modeling. In this study, using a 3D sheep atrial model as a test bed, we developed a GPU-based simulation algorithm to simulate the conduction of electrical excitation waves in the 3D atria. In the GPU algorithm, a multicellular tissue model was split into two components: one is the single-cell model (ordinary differential equation) and the other is the diffusion term of the monodomain model (partial differential equation). Such a decoupling enabled realization of the GPU parallel algorithm. Furthermore, several optimization strategies were proposed based on the features of the virtual heart model, which enabled a 200-fold speedup as compared to a CPU implementation. In conclusion, an optimized GPU algorithm has been developed that provides an economical and powerful platform for 3D whole-heart simulations. PMID:26581957
Giuseppin, M L; van Riel, N A
2000-01-01
A model is presented to describe the observed behavior of microorganisms that aim at metabolic homeostasis while growing and adapting to their environment in an optimal way. The cellular metabolism is seen as a network with a multiple-controller system with both feedback and feedforward control, i.e., a model based on dynamic optimal metabolic control. The dynamic network consists of aggregated pathways, each having a control setpoint for the metabolic states at a given growth rate. This set of strategies of the cell forms a true cybernetic model with a minimal number of assumptions. The cellular strategies and constraints were derived from metabolic flux analysis using an identified, biochemically relevant stoichiometry matrix derived from experimental data on the cellular composition of continuous cultures of Saccharomyces cerevisiae. Based on these data a cybernetic model was developed to study its dynamic behavior. The growth rate of the cell is determined by the structural compounds and fluxes of compounds related to central metabolism. In contrast to many other cybernetic models, the minimal model does not include any assumed internal kinetic parameters or interactions. This necessitates the use of stepwise integration with an optimization of the fluxes at every time interval. Some examples of the behavior of this model are given with respect to steady states and pulse responses. This model is very suitable for semiquantitatively describing the dynamics of global cellular metabolism and may form a useful framework for including structured and more detailed kinetic models.
Modeling of Biological Intelligence for SCM System Optimization
Chen, Shengyong; Zheng, Yujun; Cattani, Carlo; Wang, Wanliang
2012-01-01
This article summarizes some methods from biological intelligence for modeling and optimization of supply chain management (SCM) systems, including genetic algorithms, evolutionary programming, differential evolution, swarm intelligence, artificial immune, and other biological intelligence related methods. An SCM system is adaptive, dynamic, open self-organizing, which is maintained by flows of information, materials, goods, funds, and energy. Traditional methods for modeling and optimizing complex SCM systems require huge amounts of computing resources, and biological intelligence-based solutions can often provide valuable alternatives for efficiently solving problems. The paper summarizes the recent related methods for the design and optimization of SCM systems, which covers the most widely used genetic algorithms and other evolutionary algorithms. PMID:22162724
Modeling of biological intelligence for SCM system optimization.
Chen, Shengyong; Zheng, Yujun; Cattani, Carlo; Wang, Wanliang
2012-01-01
This article summarizes some methods from biological intelligence for modeling and optimization of supply chain management (SCM) systems, including genetic algorithms, evolutionary programming, differential evolution, swarm intelligence, artificial immune, and other biological intelligence related methods. An SCM system is adaptive, dynamic, open self-organizing, which is maintained by flows of information, materials, goods, funds, and energy. Traditional methods for modeling and optimizing complex SCM systems require huge amounts of computing resources, and biological intelligence-based solutions can often provide valuable alternatives for efficiently solving problems. The paper summarizes the recent related methods for the design and optimization of SCM systems, which covers the most widely used genetic algorithms and other evolutionary algorithms.
Optimization of Ultrafilter Feed Conditions Using Classical Filtration Models
Geeting, John GH; Hallen, Richard T.; Peterson, Reid A.
2005-11-15
Two classical models were evaluated to assess their applicability to test data obtained from filtration of a High Level Waste Sludge sample from the Hanford tank farms. One model was then selected for use in evaluating the optimal feed conditions for maximizing filter throughput for the proposed Waste Treatment Plant at the Hanford site. This analysis indicates that an optimal feed composition does exist, but that this optimal composition differs depending upon the product (permeate or retentate) that is to be maximized. A basic premise of the design for the WTP had been that evaporation of the feed to 5 M Na (or higher if possible) was required to achieve optimum throughput. However, these results indicate that optimum throughput from a filtration perspective is achieved at lower sodium molarities (either 3.22 M for maximum LAW throughput or 4.33 M for maximum HLW throughput).
Utility of coupling nonlinear optimization methods with numerical modeling software
Murphy, M.J.
1996-08-05
The results of using GLO (Global Local Optimizer), a general-purpose nonlinear optimization software package for investigating multi-parameter problems in science and engineering, are discussed. The package consists of the modular optimization control system (GLO), a graphical user interface (GLO-GUI), a pre-processor (GLO-PUT), a post-processor (GLO-GET), and the nonlinear optimization software modules GLOBAL and LOCAL. GLO is designed for controlling, and easy coupling to, any scientific software application. GLO runs the optimization module and the scientific software application in an iterative loop. At each iteration, the optimization module defines new values for the set of parameters being optimized. GLO-PUT inserts the new parameter values into the input file of the scientific application. GLO runs the application with the new parameter values. GLO-GET determines the value of the objective function by extracting the results of the analysis and comparing them to the desired result. GLO continues to run the scientific application until it finds the "best" set of parameters by minimizing (or maximizing) the objective function. An example problem showing the optimization of a material model (Taylor cylinder impact test) is presented.
Optimal control in a model of malaria with differential susceptibility
NASA Astrophysics Data System (ADS)
Hincapié, Doracelly; Ospina, Juan
2014-06-01
A malaria model with differential susceptibility is analyzed using the optimal control technique. In the model the human population is classified as susceptible, infected, and recovered. Susceptibility is assumed dependent on genetic, physiological, or social characteristics that vary between individuals. The model is described by a system of differential equations that relate the human and vector populations, so that the infection is transmitted to humans by vectors and to vectors by humans. The model is analyzed using the optimal control method, where the control consists of the use of insecticide-treated nets and educational campaigns, and the optimality criterion is to minimize the number of infected humans while keeping the cost as low as possible. The first goal is to determine the effects of differential susceptibility on the proposed control mechanism; the second goal is to determine the algebraic form of the basic reproductive number of the model. All computations are performed using computer algebra, specifically Maple. It is claimed that the analytical results obtained are important for the design and implementation of control measures for malaria. Future investigations are suggested, such as the application of the method to other vector-borne diseases such as dengue or yellow fever, and the possible use of free computer algebra software such as Maxima.
Multi-objective global optimization for hydrologic models
NASA Astrophysics Data System (ADS)
Yapo, Patrice Ogou; Gupta, Hoshin Vijai; Sorooshian, Soroosh
1998-01-01
The development of automated (computer-based) calibration methods has focused mainly on the selection of a single-objective measure of the distance between the model-simulated output and the data and the selection of an automatic optimization algorithm to search for the parameter values which minimize that distance. However, practical experience with model calibration suggests that no single-objective function is adequate to measure the ways in which the model fails to match the important characteristics of the observed data. Given that some of the latest hydrologic models simulate several of the watershed output fluxes (e.g. water, energy, chemical constituents, etc.), there is a need for effective and efficient multi-objective calibration procedures capable of exploiting all of the useful information about the physical system contained in the measurement data time series. The MOCOM-UA algorithm, an effective and efficient methodology for solving the multiple-objective global optimization problem, is presented in this paper. The method is an extension of the successful SCE-UA single-objective global optimization algorithm. The features and capabilities of MOCOM-UA are illustrated by means of a simple hydrologic model calibration study.
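The multi-objective idea behind algorithms such as MOCOM-UA rests on Pareto dominance: candidate parameter sets are compared on several error measures at once, and the mutually non-dominated ones form the trade-off set. A minimal sketch, with illustrative objective vectors rather than the hydrologic objectives of the study:

```python
# Pareto-front extraction over candidate parameter sets, each scored by a
# vector of objective values (all objectives minimized).
def dominates(a, b):
    """True if vector a is at least as good as b everywhere and strictly
    better somewhere."""
    return (all(x <= y for x, y in zip(a, b)) and
            any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# e.g. (RMSE on peak flows, RMSE on low flows) for five candidate sets;
# the numbers are made up for illustration
candidates = [(1.0, 3.0), (2.0, 2.0), (3.0, 1.0), (2.5, 2.5), (4.0, 4.0)]
front = pareto_front(candidates)
# the first three candidates are mutually non-dominated; the rest are not
```

MOCOM-UA evolves a population toward this front rather than enumerating it, but the dominance test above is the comparison it is built on.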
Optimization of Regression Models of Experimental Data Using Confirmation Points
NASA Technical Reports Server (NTRS)
Ulbrich, N.
2010-01-01
A new search metric is discussed that may be used to better assess the predictive capability of different math term combinations during the optimization of a regression model of experimental data. The new search metric can be determined for each tested math term combination if the given experimental data set is split into two subsets. The first subset consists of data points that are only used to determine the coefficients of the regression model. The second subset consists of confirmation points that are exclusively used to test the regression model. The new search metric value is assigned after comparing two values that describe the quality of the fit of each subset. The first value is the standard deviation of the PRESS residuals of the data points. The second value is the standard deviation of the response residuals of the confirmation points. The greater of the two values is used as the new search metric value. This choice guarantees that both standard deviations are always less than or equal to the value that is used during the optimization. Experimental data from the calibration of a wind tunnel strain-gage balance is used to illustrate the application of the new search metric. The new search metric ultimately generates an optimized regression model that has already been tested at confirmation points that are independent of the regression model before it is ever used to predict an unknown response from a set of regressors.
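The metric itself is simple to state in code: the larger of two standard deviations, one over the PRESS residuals of the fitting points and one over the response residuals of the confirmation points. The residual values below are illustrative; computing the PRESS residuals themselves is specific to the regression being optimized.

```python
# Search metric = max(std of PRESS residuals, std of confirmation residuals).
import math

def residual_std(residuals):
    n = len(residuals)
    mean = sum(residuals) / n
    return math.sqrt(sum((r - mean) ** 2 for r in residuals) / (n - 1))

def search_metric(press_residuals, confirmation_residuals):
    # taking the larger value guarantees both standard deviations are
    # less than or equal to the value used during the optimization
    return max(residual_std(press_residuals),
               residual_std(confirmation_residuals))

metric = search_metric([0.4, -0.3, 0.1, -0.2], [0.6, -0.5, 0.2])
```

During a search over math term combinations, the combination minimizing this metric is preferred, penalizing models that fit the data points well but predict the confirmation points poorly.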
Optimized volume models of earthquake-triggered landslides
Xu, Chong; Xu, Xiwei; Shen, Lingling; Yao, Qi; Tan, Xibin; Kang, Wenjun; Ma, Siyuan; Wu, Xiyan; Cai, Juntao; Gao, Mingxing; Li, Kang
2016-01-01
In this study, we proposed three optimized models for calculating the total volume of landslides triggered by the 2008 Wenchuan, China Mw 7.9 earthquake. First, we calculated the volume of each deposit of 1,415 landslides triggered by the quake based on pre- and post-quake DEMs at 20 m resolution. The samples were used to fit the conventional landslide “volume-area” power law relationship and the 3 optimized models we proposed, respectively. Two data fitting methods, i.e. log-transformed-based linear and original-data-based nonlinear least squares, were employed for the 4 models. Results show that original-data-based nonlinear least squares combined with an optimized model considering length, width, height, lithology, slope, peak ground acceleration, and slope aspect shows the best performance. This model was subsequently applied to the database of landslides triggered by the quake, except for the two largest ones with known volumes. It indicates that the total volume of the 196,007 landslides is about 1.2 × 10¹⁰ m³ in deposit materials and 1 × 10¹⁰ m³ in source areas, respectively. The result from the relationship between quake magnitude and entire landslide volume for individual earthquakes is much smaller than that from this study, which highlights the need to update the power-law relationship. PMID:27404212
Optimized volume models of earthquake-triggered landslides.
Xu, Chong; Xu, Xiwei; Shen, Lingling; Yao, Qi; Tan, Xibin; Kang, Wenjun; Ma, Siyuan; Wu, Xiyan; Cai, Juntao; Gao, Mingxing; Li, Kang
2016-07-12
In this study, we proposed three optimized models for calculating the total volume of landslides triggered by the 2008 Wenchuan, China Mw 7.9 earthquake. First, we calculated the volume of each deposit of 1,415 landslides triggered by the quake based on pre- and post-quake DEMs at 20 m resolution. The samples were used to fit the conventional landslide "volume-area" power law relationship and the 3 optimized models we proposed, respectively. Two data fitting methods, i.e. log-transformed-based linear and original-data-based nonlinear least squares, were employed for the 4 models. Results show that original-data-based nonlinear least squares combined with an optimized model considering length, width, height, lithology, slope, peak ground acceleration, and slope aspect shows the best performance. This model was subsequently applied to the database of landslides triggered by the quake, except for the two largest ones with known volumes. It indicates that the total volume of the 196,007 landslides is about 1.2 × 10¹⁰ m³ in deposit materials and 1 × 10¹⁰ m³ in source areas, respectively. The result from the relationship between quake magnitude and entire landslide volume for individual earthquakes is much smaller than that from this study, which highlights the need to update the power-law relationship.
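The conventional volume-area power law mentioned above has the form V = a * A**b, and the log-transformed linear fit is one of the two fitting methods the abstract compares (the study found fitting the original data with nonlinear least squares performed better). A sketch of the log-linear fit, on synthetic data generated from a = 0.05 and b = 1.3 rather than the Wenchuan inventory:

```python
# Fit V = a * A**b by ordinary least squares on log V = log a + b * log A.
import math

def fit_power_law(areas, volumes):
    xs = [math.log(x) for x in areas]
    ys = [math.log(y) for y in volumes]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys)) /
         sum((x - mx) ** 2 for x in xs))
    a = math.exp(my - b * mx)
    return a, b

areas = [100.0, 1_000.0, 10_000.0, 100_000.0]   # landslide areas, m^2
volumes = [0.05 * A ** 1.3 for A in areas]       # noise-free synthetic, m^3
a, b = fit_power_law(areas, volumes)
# recovers a ~= 0.05 and b ~= 1.3 on this noise-free sample
```

On noisy data the log transform changes the error weighting, which is why the two fitting methods can give different totals when summed over ~200,000 landslides.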
Aeroelastic Optimization Study Based on the X-56A Model
NASA Technical Reports Server (NTRS)
Li, Wesley W.; Pak, Chan-Gi
2014-01-01
One way to increase aircraft fuel efficiency is to reduce structural weight while maintaining adequate structural airworthiness, both statically and aeroelastically. A design process which incorporates an object-oriented multidisciplinary design, analysis, and optimization (MDAO) tool and the aeroelastic effects of high-fidelity finite element models to characterize the design space was successfully developed and established. This paper presents two multidisciplinary design optimization studies using an object-oriented MDAO tool developed at NASA Armstrong Flight Research Center. The first study demonstrates the use of aeroelastic tailoring concepts to minimize the structural weight while meeting the design requirements, including strength, buckling, and flutter. Such an approach exploits the anisotropic capabilities of the fiber composite materials chosen for this analytical exercise, with ply stacking sequence. A hybrid and discretization optimization approach improves the accuracy and computational efficiency of a global optimization algorithm. The second study presents a flutter mass-balancing optimization study for the fabricated flexible wing of the X-56A model, since a desired flutter speed band is required for the active flutter suppression demonstration during flight testing. The results of the second study provide guidance to modify the wing design and move the design flutter speeds back into the flight envelope so that the original objective of the X-56A flight test can be accomplished successfully. The second case also demonstrates that the object-oriented MDAO tool can handle multiple analytical configurations in a single optimization run.
Simulation/optimization modeling for robust pumping strategy design.
Kalwij, Ineke M; Peralta, Richard C
2006-01-01
A new simulation/optimization modeling approach is presented for addressing uncertain knowledge of aquifer parameters. The Robustness Enhancing Optimizer (REO) couples genetic algorithm and tabu search as optimizers and incorporates aquifer parameter sensitivity analysis to guide multiple-realization optimization. The REO maximizes strategy robustness for a pumping strategy that is optimal for a primary objective function (OF), such as cost. The more robust a strategy, the more likely it is to achieve management goals in the field, even if the physical system differs from the model. The REO is applied to trinitrotoluene and Royal Demolition Explosive plumes at Umatilla Chemical Depot in Oregon to develop robust least-cost strategies. The REO efficiently develops robust pumping strategies while maintaining the optimal value of the primary OF, differing from the common situation in which a primary OF value degrades as strategy reliability increases. The REO is especially valuable where data to develop realistic probability density functions (PDFs) or statistically derived realizations are unavailable. Because they require much less field data, REO-developed strategies might not achieve as high a mathematical reliability as strategies developed using many realizations based upon real aquifer parameter PDFs. REO-developed strategies might or might not yield a better OF value in the field.
A neural network model of reliably optimized spike transmission.
Samura, Toshikazu; Ikegaya, Yuji; Sato, Yasuomi D
2015-06-01
We studied the detailed structure of a neuronal network model in which the spontaneous spike activity is correctly optimized to match the experimental data, and discuss the reliability of the optimized spike transmission. Two stochastic properties of the spontaneous activity were calculated: the spike-count rate and the synchrony size. The synchrony size, expected to be an important factor for optimization of spike transmission in the network, represents a percentage of observed coactive neurons within a time bin, whose probability approximately follows a power law. We systematically investigated how these stochastic properties could be matched to those calculated from the experimental data in terms of the log-normally distributed synaptic weights between excitatory and inhibitory neurons and the synaptic background activity induced by the input current noise in the network model. To ensure reliably optimized spike transmission, the synchrony size and the spike-count rate were optimized simultaneously. This required changeably balanced log-normal distributions of synaptic weights between excitatory and inhibitory neurons and appropriately amplified synaptic background activity. Our results suggested that inhibitory neurons with a hub-like structure driven by intensive feedback from excitatory neurons were a key factor in the simultaneous optimization of the spike-count rate and synchrony size, regardless of the different spiking types of excitatory and inhibitory neurons.
A simple model of optimal population coding for sensory systems.
Doi, Eizaburo; Lewicki, Michael S
2014-08-01
A fundamental task of a sensory system is to infer information about the environment. It has long been suggested that an important goal of the first stage of this process is to encode the raw sensory signal efficiently by reducing its redundancy in the neural representation. Some redundancy, however, would be expected because it can provide robustness to noise inherent in the system. Encoding the raw sensory signal itself is also problematic, because it contains distortion and noise. The optimal solution would be constrained further by limited biological resources. Here, we analyze a simple theoretical model that incorporates these key aspects of sensory coding, and apply it to conditions in the retina. The model specifies the optimal way to incorporate redundancy in a population of noisy neurons, while also optimally compensating for sensory distortion and noise. Importantly, it allows an arbitrary input-to-output cell ratio between sensory units (photoreceptors) and encoding units (retinal ganglion cells), providing predictions of retinal codes at different eccentricities. Compared to earlier models based on redundancy reduction, the proposed model conveys more information about the original signal. Interestingly, redundancy reduction can be near-optimal when the number of encoding units is limited, such as in the peripheral retina. We show that there exist multiple, equally-optimal solutions whose receptive field structure and organization vary significantly. Among these, the one which maximizes the spatial locality of the computation, but not the sparsity of either synaptic weights or neural responses, is consistent with known basic properties of retinal receptive fields. The model further predicts that receptive field structure changes less with light adaptation at higher input-to-output cell ratios, such as in the periphery.
Health benefit modelling and optimization of vehicular pollution control strategies
NASA Astrophysics Data System (ADS)
Sonawane, Nayan V.; Patil, Rashmi S.; Sethi, Virendra
2012-12-01
This study asserts that the evaluation of pollution reduction strategies should be approached on the basis of health benefits. The framework presented could be used for decision making on the basis of cost effectiveness when the strategies are applied concurrently. Several vehicular pollution control strategies have been proposed in the literature for effective management of urban air pollution. The effectiveness of these strategies has mostly been studied one at a time, on the basis of change in pollution concentration. The adequacy and practicality of such an approach is studied in the present work. Also, the assessment of the respective benefits of these strategies has been carried out when they are implemented simultaneously. An integrated model has been developed which can be used as a tool for optimal prioritization of various pollution management strategies. The model estimates health benefits associated with specific control strategies. ISC-AERMOD View has been used to provide the cause-effect relation between control options and change in ambient air quality. BenMAP, developed by U.S. EPA, has been applied for estimation of the health and economic benefits associated with various management strategies. Valuation of health benefits has been done for the impact indicators of premature mortality, hospital admissions, and respiratory syndrome. An optimization model has been developed to maximize overall social benefits with determination of optimized percentage implementations for multiple strategies. The model has been applied to a suburban region of Mumbai city for the vehicular sector. Several control scenarios have been considered, such as revised emission standards and electric, CNG, LPG, and hybrid vehicles. Reduction in concentration and the resultant health benefits for the pollutants CO, NOx, and particulate matter are estimated for the different control scenarios. Finally, an optimization model has been applied to determine optimized percentage implementation of specific
A model for HIV/AIDS pandemic with optimal control
NASA Astrophysics Data System (ADS)
Sule, Amiru; Abdullah, Farah Aini
2015-05-01
Human immunodeficiency virus and acquired immune deficiency syndrome (HIV/AIDS) is pandemic. It has affected nearly 60 million people since the detection of the disease in 1981. In this paper a basic deterministic HIV/AIDS model with a mass-action incidence function is developed and its stability analysis carried out. The disease-free equilibrium of the basic model is found to be locally asymptotically stable whenever the threshold parameter (R0) is less than one, and unstable otherwise. The model is extended by introducing two optimal control strategies, namely CD4 counts and treatment for the infective, using optimal control theory. Numerical simulation is carried out to illustrate the analytic results.
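The threshold behavior described above can be illustrated with the simplest deterministic model with mass-action incidence, dS/dt = -beta*S*I and dI/dt = beta*S*I - gamma*I, integrated here with forward Euler. The parameter values are illustrative, not those of the paper's HIV/AIDS model.

```python
# Mass-action epidemic sketch: when R0 = beta*S0/gamma < 1, the infected
# fraction decays toward the disease-free equilibrium.
def simulate(S0, I0, beta, gamma, dt=0.01, steps=10_000):
    S, I = S0, I0
    for _ in range(steps):
        new_infections = beta * S * I * dt
        S -= new_infections
        I += new_infections - gamma * I * dt
    return S, I

# here R0 = 0.2 * 1.0 / 0.5 = 0.4 < 1, so the infection dies out
S, I = simulate(S0=1.0, I0=0.01, beta=0.2, gamma=0.5)
```

With R0 above one the same scheme produces an outbreak before settling; control terms (e.g., treatment rates) enter as additional time-dependent parameters in the I equation.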
Multiview coding mode decision with hybrid optimal stopping model.
Zhao, Tiesong; Kwong, Sam; Wang, Hanli; Wang, Zhou; Pan, Zhaoqing; Kuo, C-C Jay
2013-04-01
In a generic decision process, optimal stopping theory aims to achieve a good tradeoff between decision performance and time consumed, with the advantages of theoretical decision-making and predictable decision performance. In this paper, optimal stopping theory is employed to develop an effective hybrid model for the mode decision problem, which aims to theoretically achieve a good tradeoff between the two interrelated measurements in mode decision, namely computational complexity reduction and rate-distortion degradation. The proposed hybrid model is implemented and examined with a multiview encoder. To support the model and further promote coding performance, the multiview coding mode characteristics, including predicted mode probability and estimated coding time, are jointly investigated with inter-view correlations. Exhaustive experimental results with a wide range of video resolutions reveal the efficiency and robustness of our method, with high decision accuracy, negligible computational overhead, and almost intact rate-distortion performance compared to the original encoder.
Highly optimized tolerance in epidemic models incorporating local optimization and regrowth.
Robert, C; Carlson, J M; Doyle, J
2001-05-01
In the context of a coupled map model of population dynamics, which includes the rapid spread of fatal epidemics, we investigate the consequences of two new features in highly optimized tolerance (HOT), a mechanism which describes how complexity arises in systems which are optimized for robust performance in the presence of a harsh external environment. Specifically, we (1) contrast global and local optimization criteria and (2) investigate the effects of time dependent regrowth. We find that both local and global optimization lead to HOT states, which may differ in their specific layouts, but share many qualitative features. Time dependent regrowth leads to HOT states which deviate from the optimal configurations in the corresponding static models in order to protect the system from slow (or impossible) regrowth which follows the largest losses and extinctions. While the associated map can exhibit complex, chaotic solutions, HOT states are confined to relatively simple dynamical regimes.
NASA Astrophysics Data System (ADS)
Pham, H. V.; Tsai, F. T. C.
2014-12-01
Groundwater systems are complex and subject to multiple interpretations and conceptualizations due to a lack of sufficient information. As a result, multiple conceptual models are often developed, and their mean predictions are preferably used to avoid the biased predictions of a single conceptual model. Yet considering too many conceptual models may lead to high prediction uncertainty and may defeat the purpose of model development. In order to reduce the number of models, an optimal observation network design is proposed based on maximizing the Kullback-Leibler (KL) information to discriminate between competing models. The KL discrimination function derived by Box and Hill [1967] for one additional observation datum at a time is expanded to account for multiple independent spatiotemporal observations. The Bayesian model averaging (BMA) method is used to incorporate existing data and quantify future observation uncertainty arising from conceptual and parametric uncertainties in the discrimination function. To consider the future observation uncertainty, Monte Carlo realizations of BMA-predicted future observations are used to calculate the mean and variance of the posterior model probabilities of the competing models. The goal of the optimal observation network design is to find the number and location of observation wells and sampling rounds such that the highest posterior model probability of a model is larger than a desired probability criterion (e.g., 95%). The optimal observation network design is applied to a groundwater study in the Baton Rouge area, Louisiana, to collect new groundwater heads from USGS wells. The considered sources of uncertainty that create multiple groundwater models are the geological architecture, the boundary condition, and the fault permeability architecture. All possible design solutions are enumerated using high performance computing systems. Results show that total model variance (the sum of within-model variance and between-model
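The posterior model probabilities that drive the design criterion come from Bayes' rule over the competing models. A minimal sketch with two models and Gaussian predictive densities standing in for the BMA Monte Carlo output (every number below is invented):

```python
import math

# Posterior model probabilities after one new observation (Bayes' rule),
# the quantity the network design pushes past a 95% criterion. Gaussian
# predictive densities with invented means/variances stand in for BMA.
def posterior_probs(priors, means, variances, obs):
    def density(m, v):
        return math.exp(-(obs - m) ** 2 / (2 * v)) / math.sqrt(2 * math.pi * v)
    joint = [p * density(m, v) for p, m, v in zip(priors, means, variances)]
    z = sum(joint)
    return [j / z for j in joint]

# an observed head of 10.2 favours the model predicting 10.0 over 12.0
probs = posterior_probs([0.5, 0.5], means=[10.0, 12.0], variances=[1.0, 1.0], obs=10.2)
```

Collecting observations where the models' predictive densities separate most is what pushes one posterior probability toward the desired criterion.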
Parameter optimization in differential geometry based solvation models
Wang, Bao; Wei, G. W.
2015-01-01
Differential geometry (DG) based solvation models are a new class of variational implicit solvent approaches that are able to avoid unphysical solvent-solute boundary definitions and associated geometric singularities, and to dynamically couple polar and non-polar interactions in a self-consistent framework. Our earlier study indicates that the DG based non-polar solvation model outperforms other methods in non-polar solvation energy predictions. However, the DG based full solvation model has not shown its superiority in solvation analysis, due to the difficulty of its parametrization, which must ensure the stability of the solution of the strongly coupled nonlinear Laplace-Beltrami and Poisson-Boltzmann equations. In this work, we introduce new parameter learning algorithms based on perturbation and convex optimization theories to stabilize the numerical solution and thus achieve an optimal parametrization of the DG based solvation models. An interesting feature of the present DG based solvation model is that it provides accurate solvation free energy predictions for both polar and non-polar molecules in a unified formulation. Extensive numerical experiments demonstrate that the present DG based solvation model delivers some of the most accurate predictions of the solvation free energies for a large number of molecules. PMID:26450304
Linear versus quadratic portfolio optimization model with transaction cost
NASA Astrophysics Data System (ADS)
Razak, Norhidayah Bt Ab; Kamil, Karmila Hanim; Elias, Siti Masitah
2014-06-01
Optimization models have become one of the decision-making tools in investment. Hence, it is always a big challenge for investors to select the model that best fulfills their investment goals with respect to risk and return. In this paper we aim to discuss and compare the portfolio allocations and performance generated by quadratic and linear portfolio optimization models, namely the Markowitz and Maximin models respectively. The application of these models has been proven to be significant and popular among others. However, transaction cost has been debated as one of the important aspects that should be considered in portfolio reallocation, as portfolio return can be significantly reduced when transaction cost is taken into consideration. Therefore, recognizing the importance of considering transaction cost when calculating portfolio return, we formulate this paper using data from Shariah-compliant securities listed on Bursa Malaysia. It is expected that results from this paper will effectively justify the advantage of one model over the other and shed some light in the quest to find the best decision-making tool in investment for individual investors.
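A two-asset corner of the quadratic (Markowitz-style) model has closed-form minimum-variance weights, and a proportional transaction cost can be netted off the return on reallocation. All figures below are invented; the paper's Markowitz and Maximin models are full optimization programs over market data.

```python
# Two-asset minimum-variance weights (Markowitz-style special case) with a
# proportional transaction cost deducted on reallocation. Invented numbers.
def min_variance_weight(var1, var2, cov):
    # weight on asset 1 that minimizes two-asset portfolio variance
    return (var2 - cov) / (var1 + var2 - 2 * cov)

def net_return(w_new, w_old, r1, r2, cost_rate):
    gross = w_new * r1 + (1 - w_new) * r2
    turnover = abs(w_new - w_old)            # equal and opposite in each asset
    return gross - cost_rate * 2 * turnover  # cost charged on both legs

w = min_variance_weight(var1=0.04, var2=0.09, cov=0.01)
r_net = net_return(w, w_old=0.5, r1=0.08, r2=0.12, cost_rate=0.005)
```

The gap between gross and net return is exactly the reallocation cost the abstract argues should not be ignored.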
Optimal model-free prediction from multivariate time series.
Runge, Jakob; Donner, Reik V; Kurths, Jürgen
2015-05-01
Forecasting a time series from multivariate predictors constitutes a challenging problem, especially using model-free approaches. Most techniques, such as nearest-neighbor prediction, quickly suffer from the curse of dimensionality and overfitting for more than a few predictors which has limited their application mostly to the univariate case. Therefore, selection strategies are needed that harness the available information as efficiently as possible. Since often the right combination of predictors matters, ideally all subsets of possible predictors should be tested for their predictive power, but the exponentially growing number of combinations makes such an approach computationally prohibitive. Here a prediction scheme that overcomes this strong limitation is introduced utilizing a causal preselection step which drastically reduces the number of possible predictors to the most predictive set of causal drivers making a globally optimal search scheme tractable. The information-theoretic optimality is derived and practical selection criteria are discussed. As demonstrated for multivariate nonlinear stochastic delay processes, the optimal scheme can even be less computationally expensive than commonly used suboptimal schemes like forward selection. The method suggests a general framework to apply the optimal model-free approach to select variables and subsequently fit a model to further improve a prediction or learn statistical dependencies. The performance of this framework is illustrated on a climatological index of El Niño Southern Oscillation.
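A toy version of the preselect-then-predict idea: rank candidate lagged predictors (here by absolute correlation, a crude stand-in for the paper's information-theoretic causal preselection) and forecast with a nearest neighbour on the selected lag. The series is synthetic with an exact lag-3 dependence.

```python
import random

# Preselection-then-predict sketch: score each candidate lag by absolute
# correlation with the target, keep the best lag, then forecast with a
# nearest neighbour on that single predictor. Synthetic toy data only.
def corr(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

def predict_1nn(history, lag, query):
    # nearest neighbour on the single selected lagged predictor
    pairs = [(history[t - lag], history[t]) for t in range(lag, len(history))]
    return min(pairs, key=lambda p: abs(p[0] - query))[1]

rng = random.Random(0)
base = [rng.random() for _ in range(3)]
series = [base[t % 3] for t in range(60)]     # x_t depends only on x_{t-3}
target = series[3:]
candidates = {lag: series[3 - lag:60 - lag] for lag in (1, 2, 3)}
best_lag = max(candidates, key=lambda lag: abs(corr(candidates[lag], target)))
pred = predict_1nn(series, best_lag, query=series[-best_lag])
```

Screening first keeps the nearest-neighbour step one-dimensional, which is the dimensionality-reduction point the abstract makes.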
Optimization of nonlinear quarter car suspension-seat-driver model.
Nagarkar, Mahesh P; Vikhe Patil, Gahininath J; Zaware Patil, Rahul N
2016-11-01
In this paper a nonlinear quarter car suspension-seat-driver model was implemented for optimum design. A nonlinear quarter car model comprising quadratic tyre stiffness and cubic stiffness in the suspension spring, frame, and seat cushion, with a 4 degrees of freedom (DoF) driver model, was presented for optimization and analysis. The suspension system was optimized against comfort and health criteria comprising the Vibration Dose Value (VDV) at the head, frequency-weighted RMS head acceleration, crest factor, the amplitude ratio of head RMS acceleration to seat RMS acceleration, and the amplitude ratio of upper torso RMS acceleration to seat RMS acceleration, along with stability criteria comprising suspension space deflection and dynamic tyre force. The ISO 2631-1 standard was adopted to assess the ride and health criteria. Suspension spring stiffness and damping and seat cushion stiffness and damping are the design variables. The Non-dominated Sorting Genetic Algorithm (NSGA-II) and the Multi-Objective Particle Swarm Optimization - Crowding Distance (MOPSO-CD) algorithm are implemented for optimization. Simulation results show that the optimum design improves the ride comfort and health criteria over the classical design.
Boundary condition optimal control problem in lava flow modelling
NASA Astrophysics Data System (ADS)
Ismail-Zadeh, Alik; Korotkii, Alexander; Tsepelev, Igor; Kovtunov, Dmitry; Melnik, Oleg
2016-04-01
We study a problem of steady-state fluid flow with known thermal conditions (e.g., measured temperature and heat flux at the surface of a lava flow) at one segment of the model boundary and unknown conditions at another segment. This problem belongs to a class of boundary condition optimal control problems and can be solved by data assimilation from one boundary to another using direct and adjoint models. We derive the adjoint model analytically and test the cost function and its gradient, which minimize the misfit between the known thermal condition and its model counterpart. Using optimization algorithms, we iterate between the direct and adjoint problems and determine the missing boundary condition as well as the thermal and dynamic characteristics of the fluid flow. The efficiency of the optimization algorithms - the Polak-Ribiere conjugate gradient and the limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithms - has been tested with the aim of achieving rapid convergence to the solution of this inverse ill-posed problem. Numerical results show that temperature and velocity can be determined with high accuracy in the case of smooth input data. Noise imposed on the input data results in a less accurate solution, but one still acceptable below some noise level.
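The direct/adjoint iteration can be caricatured on a 1-D steady heat rod: an unknown left boundary temperature is recovered by gradient descent on the misfit at an interior "measurement" point. The grid size, the measurement value, and the hand-derived gradient are all toy assumptions, not the lava-flow equations of the paper.

```python
# Boundary-control caricature: recover an unknown left boundary temperature
# of a 1-D steady heat rod by gradient descent on an interior misfit. For
# this linear model, dT(mid)/db = 1/2 plays the role of the adjoint-derived
# gradient. Toy problem with invented numbers.
N = 21

def solve(b_left, b_right=1.0, sweeps=2000):
    # steady heat equation T'' = 0 solved by Jacobi iteration
    T = [0.0] * N
    T[0], T[-1] = b_left, b_right
    for _ in range(sweeps):
        T = [T[0]] + [(T[i - 1] + T[i + 1]) / 2 for i in range(1, N - 1)] + [T[-1]]
    return T

obs_index, obs_value = N // 2, 0.75      # synthetic mid-rod "measurement"
b = 0.0                                  # initial guess for the boundary
for _ in range(50):
    misfit = solve(b)[obs_index] - obs_value
    b -= 1.5 * (misfit * 0.5)            # gradient step on J = misfit**2 / 2
```

Here the iteration converges to the boundary value 0.5 that reproduces the measurement; the paper replaces the hand-derived gradient with an adjoint model and conjugate-gradient / L-BFGS updates.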
A new adaptive hybrid electromagnetic damper: modelling, optimization, and experiment
NASA Astrophysics Data System (ADS)
Asadi, Ehsan; Ribeiro, Roberto; Behrad Khamesee, Mir; Khajepour, Amir
2015-07-01
This paper presents the development of a new electromagnetic hybrid damper which provides a regenerative, adaptive damping force for various applications. Recently, the introduction of electromagnetic technologies to damping systems has provided researchers with new opportunities for the realization of adaptive semi-active damping systems with the added benefit of energy recovery. In this research, a hybrid electromagnetic damper is proposed. The hybrid damper is configured to operate with viscous and electromagnetic subsystems. The viscous medium provides a bias and fail-safe damping force while the electromagnetic component adds adaptability and the capacity for regeneration to the hybrid design. The electromagnetic component is modeled and analyzed using analytical (lumped equivalent magnetic circuit) and electromagnetic finite element method (FEM) (COMSOL® software package) approaches. By implementing both modeling approaches, an optimization of the geometric aspects of the electromagnetic subsystem is obtained. Based on the proposed electromagnetic hybrid damping concept and the preliminary optimization solution, a prototype is designed and fabricated. A good agreement is observed between the experimental and FEM results for the magnetic field distribution and electromagnetic damping forces. These results validate the accuracy of the modeling approach and the preliminary optimization solution. An analytical model is also presented for the viscous damping force and is compared with experimental results. The results show that the damper is able to produce damping coefficients of 1300 N s m-1 and 0-238 N s m-1 through the viscous and electromagnetic components, respectively.
A Formal Approach to Empirical Dynamic Model Optimization and Validation
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Morelli, Eugene A.; Kenny, Sean P.; Giesy, Daniel P.
2014-01-01
A framework was developed for the optimization and validation of empirical dynamic models subject to an arbitrary set of validation criteria. The validation requirements imposed upon the model, which may involve several sets of input-output data and arbitrary specifications in time and frequency domains, are used to determine if model predictions are within admissible error limits. The parameters of the empirical model are estimated by finding the parameter realization for which the smallest of the margins of requirement compliance is as large as possible. The uncertainty in the value of this estimate is characterized by studying the set of model parameters yielding predictions that comply with all the requirements. Strategies are presented for bounding this set, studying its dependence on admissible prediction error set by the analyst, and evaluating the sensitivity of the model predictions to parameter variations. This information is instrumental in characterizing uncertainty models used for evaluating the dynamic model at operating conditions differing from those used for its identification and validation. A practical example based on the short period dynamics of the F-16 is used for illustration.
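The estimation rule above (pick the parameter whose smallest requirement-compliance margin is largest) reduces, for one scalar parameter and a grid search, to the sketch below. The two tolerance bands are invented, not the F-16 short-period requirements.

```python
# Max-min margin estimation on a grid: choose the scalar model parameter k
# whose smallest requirement-compliance margin is largest. The
# (input, measurement, admissible error) requirements are invented.
requirements = [
    (1.0, 2.1, 0.3),
    (2.0, 3.9, 0.3),
]

def min_margin(k):
    # margin = admissible error minus the actual prediction error |k*u - y|
    return min(tol - abs(k * u - y) for u, y, tol in requirements)

grid = [i / 1000 for i in range(1500, 2500)]
k_best = max(grid, key=min_margin)       # balances the two requirements
```

The optimum sits where the two margins are equal, which is exactly the "smallest margin as large as possible" criterion of the framework.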
Collision-free nonuniform dynamics within continuous optimal velocity models
NASA Astrophysics Data System (ADS)
Tordeux, Antoine; Seyfried, Armin
2014-10-01
Optimal velocity (OV) car-following models reproduce, with few parameters, stable stop-and-go waves propagating as in empirical data. Unfortunately, classical OV models locally oscillate, with vehicles colliding and moving backward. In order to solve this problem, the models have to be completed with additional parameters, which increases their complexity. In this paper, a new OV model with no additional parameters is defined. For any value of the inputs, the model is intrinsically asymmetric and collision-free. This is achieved by using a first-order ordinary model with two predecessors in interaction, instead of the usual inertial delayed first-order or second-order models coupled with the predecessor. The model has stable uniform solutions as well as various stable stop-and-go patterns with bimodal distributions of the speed. As observed in real data, the modal speed values in congested states are not restricted to the free-flow speed and zero; they depend on the form of the OV function. Properties of linear, concave, convex, and sigmoid speed functions are explored with no limitation due to collisions.
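A first-order OV ring simulation in the same spirit can be sketched as follows. For simplicity it couples each car to a single predecessor with an explicit speed cap, rather than the paper's two-predecessor interaction, and the saturating OV function is an arbitrary choice.

```python
# First-order optimal-velocity ring: each of N cars moves with speed
# V(gap), capped at gap/dt so the update can never produce a collision.
# Single-predecessor coupling and the OV function are illustrative
# simplifications of the paper's two-predecessor model.
N, L, dt = 20, 100.0, 0.1
vmax, d = 15.0, 2.0

def V(gap):
    # optimal velocity: zero below the safety distance d, saturating at vmax
    return max(0.0, min(vmax, gap - d))

x = [2.5 * i for i in range(N)]          # cars bunched on half of the ring
for _ in range(2000):
    gaps = [(x[(i + 1) % N] - x[i]) % L for i in range(N)]
    x = [(x[i] + dt * min(V(gaps[i]), gaps[i] / dt)) % L for i in range(N)]
gaps = [(x[(i + 1) % N] - x[i]) % L for i in range(N)]
```

Because each car's speed never exceeds its gap divided by the time step (and vanishes below the safety distance), ordering is preserved and gaps stay positive for all time, the collision-free property the abstract emphasizes.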
Optimal inference with suboptimal models: Addiction and active Bayesian inference
Schwartenbeck, Philipp; FitzGerald, Thomas H.B.; Mathys, Christoph; Dolan, Ray; Wurst, Friedrich; Kronbichler, Martin; Friston, Karl
2015-01-01
When casting behaviour as active (Bayesian) inference, optimal inference is defined with respect to an agent’s beliefs – based on its generative model of the world. This contrasts with normative accounts of choice behaviour, in which optimal actions are considered in relation to the true structure of the environment – as opposed to the agent’s beliefs about worldly states (or the task). This distinction shifts an understanding of suboptimal or pathological behaviour away from aberrant inference as such, to understanding the prior beliefs of a subject that cause them to behave less ‘optimally’ than our prior beliefs suggest they should behave. Put simply, suboptimal or pathological behaviour does not speak against understanding behaviour in terms of (Bayes optimal) inference, but rather calls for a more refined understanding of the subject’s generative model upon which their (optimal) Bayesian inference is based. Here, we discuss this fundamental distinction and its implications for understanding optimality, bounded rationality and pathological (choice) behaviour. We illustrate our argument using addictive choice behaviour in a recently described ‘limited offer’ task. Our simulations of pathological choices and addictive behaviour also generate some clear hypotheses, which we hope to pursue in ongoing empirical work. PMID:25561321
Multidisciplinary optimization of aeroservoelastic systems using reduced-size models
NASA Technical Reports Server (NTRS)
Karpel, Mordechay
1992-01-01
Efficient analytical and computational tools for simultaneous optimal design of the structural and control components of aeroservoelastic systems are presented. The optimization objective is to achieve aircraft performance requirements and sufficient flutter and control stability margins with a minimal weight penalty and without violating the design constraints. Analytical sensitivity derivatives facilitate an efficient optimization process which allows a relatively large number of design variables. Standard finite element and unsteady aerodynamic routines are used to construct a modal data base. Minimum State aerodynamic approximations and dynamic residualization methods are used to construct a high accuracy, low order aeroservoelastic model. Sensitivity derivatives of flutter dynamic pressure, control stability margins and control effectiveness with respect to structural and control design variables are presented. The performance requirements are utilized by equality constraints which affect the sensitivity derivatives. A gradient-based optimization algorithm is used to minimize an overall cost function. A realistic numerical example of a composite wing with four controls is used to demonstrate the modeling technique, the optimization process, and their accuracy and efficiency.
[Multi-mathematical modelings for compatibility optimization of Jiangzhi granules].
Yang, Ming; Zhang, Li; Ge, Yingli; Lu, Yanliu; Ji, Guang
2011-12-01
To investigate a method of "multi-activity-index evaluation and combination optimization of multiple components" for Chinese herbal formulas. Following a scheme of uniform experimental design, efficacy experiments, multi-index evaluation, least absolute shrinkage and selection operator (LASSO) modeling, evolutionary optimization algorithms, and validation experiments, we optimized the combination of Jiangzhi granules based on the activity indexes of blood serum ALT, AST, TG, TC, HDL and LDL, the TG level of liver tissue, and the liver-to-body weight ratio. The analytic hierarchy process (AHP) combined with criteria importance through intercriteria correlation (CRITIC) was more reasonable and objective for multi-activity-index evaluation, as it reflected both the rank information of the activity indexes and the objective sample data. LASSO modeling could accurately reflect the relationship between different combinations of Jiangzhi granules and the comprehensive activity indexes. The optimized combination of Jiangzhi granules showed better comprehensive activity index values than the original formula in the validation experiment. AHP combined with CRITIC can be used for multi-activity-index evaluation, and the LASSO algorithm is suitable for combination optimization of Chinese herbal formulas.
Multiobjective sensitivity analysis and optimization of distributed hydrologic model MOBIDIC
NASA Astrophysics Data System (ADS)
Yang, J.; Castelli, F.; Chen, Y.
2014-10-01
Calibration of distributed hydrologic models usually involves how to deal with the large number of distributed parameters and optimization problems with multiple but often conflicting objectives that arise in a natural fashion. This study presents a multiobjective sensitivity and optimization approach to handle these problems for the MOBIDIC (MOdello di Bilancio Idrologico DIstribuito e Continuo) distributed hydrologic model, which combines two sensitivity analysis techniques (the Morris method and the state-dependent parameter (SDP) method) with multiobjective optimization (MOO) approach ɛ-NSGAII (Non-dominated Sorting Genetic Algorithm-II). This approach was implemented to calibrate MOBIDIC with its application to the Davidson watershed, North Carolina, with three objective functions, i.e., the standardized root mean square error (SRMSE) of logarithmic transformed discharge, the water balance index, and the mean absolute error of the logarithmic transformed flow duration curve, and its results were compared with those of a single objective optimization (SOO) with the traditional Nelder-Mead simplex algorithm used in MOBIDIC by taking the objective function as the Euclidean norm of these three objectives. Results show that (1) the two sensitivity analysis techniques are effective and efficient for determining the sensitive processes and insensitive parameters: surface runoff and evaporation are very sensitive processes to all three objective functions, while groundwater recession and soil hydraulic conductivity are not sensitive and were excluded in the optimization. (2) Both MOO and SOO lead to acceptable simulations; e.g., for MOO, the average Nash-Sutcliffe value is 0.75 in the calibration period and 0.70 in the validation period. (3) Evaporation and surface runoff show similar importance for watershed water balance, while the contribution of baseflow can be ignored. (4) Compared to SOO, which was dependent on the initial starting location, MOO provides more
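The Nash-Sutcliffe efficiency quoted in the results above is a standard calibration skill score; a minimal implementation on made-up observed/simulated discharge values:

```python
# Nash-Sutcliffe efficiency (NSE): 1 is a perfect fit, 0 means the model is
# no better than the observed mean. The observed/simulated series below are
# made up, not MOBIDIC output.
def nse(obs, sim):
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    var = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / var

obs = [1.0, 2.0, 4.0, 3.0, 2.0]
score = nse(obs, sim=[1.1, 1.8, 3.9, 3.2, 2.1])
```

Values of 0.75 (calibration) and 0.70 (validation), as reported in the abstract, indicate a reasonably skilful simulation by this measure.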
Advanced Nuclear Fuel Cycle Transitions: Optimization, Modeling Choices, and Disruptions
NASA Astrophysics Data System (ADS)
Carlsen, Robert W.
Many nuclear fuel cycle simulators have evolved over time to help understand the nuclear industry/ecosystem at a macroscopic level. Cyclus is one of the first fuel cycle simulators to accommodate larger-scale analysis with its liberal open-source licensing and first-class Linux support. Cyclus also has features that uniquely enable investigating the effects of modeling choices on fuel cycle simulators and scenarios. This work is divided into three experiments focusing on optimization, effects of modeling choices, and fuel cycle uncertainty. Effective optimization techniques are developed for automatically determining desirable facility deployment schedules with Cyclus. A novel method for mapping optimization variables to deployment schedules is developed. This allows relationships between reactor types and scenario constraints to be represented implicitly in the variable definitions, enabling the usage of optimizers lacking constraint support. It also prevents wasting computational resources evaluating infeasible deployment schedules. Deployed power capacity over time and deployment of non-reactor facilities are also included as optimization variables. There are many fuel cycle simulators built with different combinations of modeling choices. Comparing results between them is often difficult. Cyclus' flexibility allows comparing effects of many such modeling choices. Reactor refueling cycle synchronization and inter-facility competition, among other effects, are compared in four cases, each using combinations of fleet-based or individually modeled reactors with 1-month or 3-month time steps. There are noticeable differences in results for the different cases. The largest differences occur during periods of constrained reactor fuel availability. This and similar work can help improve the quality of fuel cycle analysis generally. There is significant uncertainty associated with deploying new nuclear technologies, such as time-frames for technology availability and the cost of building advanced reactors.
Stochastic optimal velocity model and its long-lived metastability.
Kanai, Masahiro; Nishinari, Katsuhiro; Tokihiro, Tetsuji
2005-09-01
In this paper, we propose a stochastic cellular automaton model of traffic flow extending two exactly solvable stochastic models, i.e., the asymmetric simple exclusion process and the zero range process. Moreover, it is regarded as a stochastic extension of the optimal velocity model. In the fundamental diagram (flux-density diagram), our model exhibits several regions of density where more than one stable state coexists at the same density in spite of the stochastic nature of its dynamical rule. Moreover, we observe that two long-lived metastable states appear for a transitional period, and that the dynamical phase transition from a metastable state to another metastable/stable state occurs sharply and spontaneously.
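A stochastic cellular automaton of the ASEP type the authors generalize can be sketched in a few lines: cars hop one cell forward with probability p when the next cell is free, and the flux is measured from the hop count. The cell count, hop probability, densities, and the random-sequential update are illustrative assumptions, not the paper's exact dynamics.

```python
import random

# ASEP-style stochastic traffic cellular automaton: a car hops one cell
# forward with probability p when the next cell is free (random-sequential
# update on a ring). Parameters are illustrative; the paper's model
# interpolates between the ASEP and the zero range process.
def flux(density, p=0.7, cells=200, steps=4000, seed=1):
    rng = random.Random(seed)
    n = int(density * cells)
    road = [1] * n + [0] * (cells - n)
    moved = 0
    for _ in range(steps):
        order = [i for i in range(cells) if road[i]]
        rng.shuffle(order)                 # random-sequential car updates
        for i in order:
            j = (i + 1) % cells
            if road[i] and not road[j] and rng.random() < p:
                road[i], road[j] = 0, 1
                moved += 1
    return moved / (steps * cells)         # average hops per cell per step

q_free, q_jam = flux(0.2), flux(0.9)       # low- vs high-density branches
```

Plotting flux against density over many densities yields the fundamental diagram in which the paper locates its coexisting metastable branches.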
CPOPT: optimization for fitting CANDECOMP/PARAFAC models.
Dunlavy, Daniel M.; Kolda, Tamara Gibson; Acar, Evrim
2008-10-01
Tensor decompositions (e.g., higher-order analogues of matrix decompositions) are powerful tools for data analysis. In particular, the CANDECOMP/PARAFAC (CP) model has proved useful in many applications such as chemometrics, signal processing, and web analysis. The problem of computing the CP decomposition is typically solved using an alternating least squares (ALS) approach. We discuss the use of optimization-based algorithms for CP, including how to efficiently compute the derivatives necessary for the optimization methods. Numerical studies highlight the positive features of our CPOPT algorithms, as compared with ALS and Gauss-Newton approaches.
A mathematical model on the optimal timing of offspring desertion.
Seno, Hiromi; Endo, Hiromi
2007-06-07
We consider offspring desertion as the optimal strategy for the deserting parent, analyzing a mathematical model of its expected reproductive success. It is shown that the optimality of offspring desertion significantly depends on the offspring's birth timing in the mating season and on other ecological parameters characterizing the innate nature of the animals considered. In particular, desertion is less likely to occur for offspring born in the later period of the mating season. It is also implied that offspring desertion after partially biparental care would be observable only under a specific condition.
Replica Analysis for Portfolio Optimization with Single-Factor Model
NASA Astrophysics Data System (ADS)
Shinzato, Takashi
2017-06-01
In this paper, we use replica analysis to investigate the influence of correlation among the return rates of assets on the solution of the portfolio optimization problem. We consider the behavior of an optimal solution for the case where the return rate is described with a single-factor model and compare the findings obtained from our proposed methods with correlated return rates with those obtained with independent return rates. We then analytically assess the increase in the investment risk when correlation is included. Furthermore, we also compare our approach with analytical procedures for minimizing the investment risk from operations research.
Dynamic stochastic optimization models for air traffic flow management
NASA Astrophysics Data System (ADS)
Mukherjee, Avijit
This dissertation presents dynamic stochastic optimization models for Air Traffic Flow Management (ATFM) that enable decisions to adapt to new information on evolving capacities of National Airspace System (NAS) resources. Uncertainty is represented by a set of capacity scenarios, each depicting a particular time-varying capacity profile of NAS resources. We use the concept of a scenario tree in which multiple scenarios are possible initially. Scenarios are eliminated as possibilities in a succession of branching points, until the specific scenario that will be realized on a particular day is known. Thus the scenario tree branching provides updated information on evolving scenarios, and allows ATFM decisions to be re-addressed and revised. First, we propose a dynamic stochastic model for a single airport ground holding problem (SAGHP) that can be used for planning Ground Delay Programs (GDPs) when there is uncertainty about future airport arrival capacities. Ground delays of non-departed flights can be revised based on updated information from scenario tree branching. The problem is formulated so that a wide range of objective functions, including non-linear delay cost functions and functions that reflect equity concerns, can be optimized. Furthermore, the model improves on existing practice by ensuring efficient use of available capacity without necessarily exempting long-haul flights. Following this, we present a methodology and optimization models that can be used for decentralized decision making by individual airlines in the GDP planning process, using the solutions from the stochastic dynamic SAGHP. Airlines are allowed to perform cancellations, and re-allocate slots to remaining flights by substitutions. We also present an optimization model that can be used by the FAA, after the airlines perform cancellations and substitutions, to re-utilize vacant arrival slots that are created due to cancellations. Finally, we present three stochastic integer programming
Web malware spread modelling and optimal control strategies.
Liu, Wanping; Zhong, Shouming
2017-02-10
The popularity of the Web improves the growth of web threats. Formulating mathematical models for accurate prediction of malicious propagation over networks is of great importance. The aim of this paper is to understand the propagation mechanisms of web malware and the impact of human intervention on the spread of malicious hyperlinks. Considering the characteristics of web malware, a new differential epidemic model which extends the traditional SIR model by adding another delitescent compartment is proposed to address the spreading behavior of malicious links over networks. The spreading threshold of the model system is calculated, and the dynamics of the model is theoretically analyzed. Moreover, the optimal control theory is employed to study malware immunization strategies, aiming to keep the total economic loss of security investment and infection loss as low as possible. The existence and uniqueness of the results concerning the optimality system are confirmed. Finally, numerical simulations show that the spread of malware links can be controlled effectively with proper control strategy of specific parameter choice.
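An SEIR-type caricature of the extended model (SIR plus a delitescent, i.e. latent, compartment) shows the threshold behaviour the abstract refers to: with the effective beta/gamma ratio below one the outbreak fizzles, above one it takes off. All rates below are invented, not fitted to web data as in the paper.

```python
# SEIR-type caricature of the web-malware model: susceptible S, a
# delitescent (latent) compartment E, infectious I, and removed/immunized
# R, integrated by forward Euler. For this structure the spreading
# threshold is R0 = beta / gamma. All rates are invented.
def outbreak_size(beta, alpha=0.5, gamma=0.2, dt=0.01, steps=40000):
    S, E, I, R = 0.99, 0.0, 0.01, 0.0
    for _ in range(steps):
        dS = -beta * S * I
        dE = beta * S * I - alpha * E      # newly planted but dormant links
        dI = alpha * E - gamma * I         # activation and cleanup
        S, E, I, R = S + dt * dS, E + dt * dE, I + dt * dI, R + dt * gamma * I
    return R                               # final removed fraction

final_sub = outbreak_size(beta=0.1)        # R0 = 0.5: spread dies out
final_super = outbreak_size(beta=0.6)      # R0 = 3.0: large outbreak
```

Control strategies of the kind the paper optimizes act by pushing the effective transmission rate, and hence the threshold quantity, below one.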
Optimization of wind farm performance using low-order models
NASA Astrophysics Data System (ADS)
Dabiri, John; Brownstein, Ian
2015-11-01
A low-order model that captures the dominant flow behaviors in a vertical-axis wind turbine (VAWT) array is used to maximize the power output of wind farms utilizing VAWTs. The leaky Rankine body (LRB) model was shown by Araya et al. (JRSE 2014) to predict the ranking of individual turbine performances in an array to within measurement uncertainty, as compared to field data collected from full-scale VAWTs. Further, this model is able to predict array performance with significantly less computational expense than higher-fidelity numerical simulations of the flow, making it ideal for use in optimization of wind farm performance. This presentation will explore the ability of the LRB model to rank the relative power output of different wind turbine array configurations, as well as individual turbine performance within an array, over a variety of wind directions, using various complex configurations tested in the field and simpler configurations tested in a wind tunnel. Results will be presented in which the model is used to determine array fitness in an evolutionary algorithm seeking optimal array configurations given a number of turbines, the area of available land, and the site wind direction profile. Comparison with field measurements will be presented.
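The optimization loop described above can be sketched with a toy evolutionary algorithm. Here the LRB fitness evaluation is replaced by a stand-in that rewards well-separated turbines inside a unit plot of land; the population size, mutation scheme, and fitness function are all illustrative assumptions, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)
n_turbines, pop_size, n_gen = 5, 30, 40

def fitness(layout):
    # layout: (n_turbines, 2) positions in a unit square.
    # Stand-in for the LRB evaluation: reward the smallest pairwise
    # spacing as a crude surrogate for reduced wake interference.
    d = np.linalg.norm(layout[:, None] - layout[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return d.min()

# Evolutionary loop: score layouts, keep the best half, mutate them.
pop = rng.random((pop_size, n_turbines, 2))
for _ in range(n_gen):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[-pop_size // 2:]]       # elitist selection
    children = parents + rng.normal(0, 0.05, parents.shape)  # Gaussian mutation
    pop = np.clip(np.concatenate([parents, children]), 0.0, 1.0)

best = max(pop, key=fitness)
```

Swapping `fitness` for a call into a fast low-order flow model is exactly what makes this kind of search tractable: the evaluation is cheap enough to run thousands of times per optimization.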
Discrete-Time ARMAv Model-Based Optimal Sensor Placement
Song Wei; Dyke, Shirley J.
2008-07-08
This paper concentrates on the optimal sensor placement problem in ambient-vibration-based structural health monitoring. More specifically, the paper examines the covariance of the parameters estimated during system identification using an auto-regressive and moving average vector (ARMAv) model. By utilizing the discrete-time steady-state Kalman filter, this paper realizes the structure's finite element (FE) model under broad-band white noise excitations as an ARMAv model. Based on the asymptotic distribution of the parameter estimates of the ARMAv model, both a theoretical closed form and a numerical estimate of the covariance of the estimates are obtained. Introducing the information entropy (differential entropy) measure, as well as various matrix norms, this paper attempts to find a reasonable measure of the uncertainties embedded in the ARMAv model estimates. It is thus possible to select the optimal sensor placement that leads to the smallest uncertainties during the ARMAv identification process. Two numerical examples are provided to demonstrate the methodology and to compare the sensor placement results under the various measures.
Roll levelling semi-analytical model for process optimization
NASA Astrophysics Data System (ADS)
Silvestre, E.; Garcia, D.; Galdos, L.; Saenz de Argandoña, E.; Mendiguren, J.
2016-08-01
Roll levelling is a primary manufacturing process used to remove residual stresses and imperfections from metal strips in order to make them suitable for subsequent forming operations. In recent years the importance of this process has been underscored by the advent of Ultra High Strength Steels with strengths above 900 MPa. The optimal setting of the machine, as well as a robust machine design, has become critical for the correct processing of these materials. Finite Element Method (FEM) analysis is the widely used technique for both aspects. However, in this case, the FEM simulation times exceed what is admissible for both machine development and process optimization. In the present work, a semi-analytical model based on a discrete bending theory is presented. This model is able to calculate the critical levelling parameters (force, plastification rate, residual stresses) in a few seconds. First the semi-analytical model is presented. Next, some experimental industrial cases are analyzed with both the semi-analytical model and the conventional FEM model. Finally, the results and computation times of both methods are compared.
High Resolution Beam Modeling and Optimization with IMPACT
NASA Astrophysics Data System (ADS)
Qiang, Ji
2017-01-01
The LCLS-II, a new BES x-ray FEL facility at SLAC, is being designed using the IMPACT simulation code, which includes a full model of the electron beam transport with 3-D space charge effects as well as intrabeam scattering and coherent synchrotron radiation. A 22-parameter optimization is being used to find injector and linac configurations that achieve the design specifications. The detailed physics models in IMPACT are being benchmarked against experiments at LCLS. This work was done in collaboration with the SLAC LCLS-II design team and supported by the DOE under contract No. DE-AC02-05CH11231.
Rapid Modeling, Assembly and Simulation in Design Optimization
NASA Technical Reports Server (NTRS)
Housner, Jerry
1997-01-01
A new capability for design is reviewed. This capability provides for rapid assembly of detailed finite element models early in the design process, where costs are most effectively impacted. This creates an engineering environment that enables comprehensive analysis and design optimization early in the design process. Graphical interactive computing makes it possible for the engineer to interact with the design while performing comprehensive design studies. This rapid assembly capability is enabled by the use of interface technology to couple independently created models, which can be archived and made accessible to the designer. Results are presented to demonstrate the capability.
A Smoothed Eclipse Model for Solar Electric Propulsion Trajectory Optimization
NASA Technical Reports Server (NTRS)
Aziz, Jonathan D.; Scheeres, Daniel J.; Parker, Jeffrey S.; Englander, Jacob A.
2017-01-01
Solar electric propulsion (SEP) is the dominant design option for employing low-thrust propulsion on a space mission. Spacecraft solar arrays power the SEP system but are subject to blackout periods during solar eclipse conditions. Discontinuity in power available to the spacecraft must be accounted for in trajectory optimization, but gradient-based methods require a differentiable power model. This work presents a power model that smooths the eclipse transition from total eclipse to total sunlight with a logistic function. Example trajectories are computed with differential dynamic programming, a second-order gradient-based method.
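The smoothing idea above can be sketched in a few lines. This assumes a scalar shadow coordinate s (negative in eclipse, positive in sunlight) and a sharpness parameter k, both invented for illustration; the paper defines its own transition coordinate:

```python
import numpy as np

def smoothed_power(s, k=50.0, p_max=1.0):
    """Logistic smoothing of the eclipse power discontinuity.

    A hard step like max(0, sign(s)) is non-differentiable at the shadow
    boundary s = 0, which breaks gradient-based trajectory optimization.
    The logistic 1/(1 + exp(-k*s)) is smooth everywhere; larger k gives a
    sharper (more step-like) transition.
    """
    return p_max / (1.0 + np.exp(-k * s))

deep_eclipse = smoothed_power(-0.5)   # ~0: no array power in umbra
full_sun = smoothed_power(0.5)        # ~p_max: full array power
boundary = smoothed_power(0.0)        # exactly p_max/2 at the transition
```

Because the function and all of its derivatives are continuous, a second-order method like the differential dynamic programming mentioned in the abstract can propagate gradients through eclipse entry and exit without special-casing the boundary.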
Optimal model-free prediction from multivariate time series
NASA Astrophysics Data System (ADS)
Runge, Jakob; Donner, Reik V.; Kurths, Jürgen
2015-04-01
Forecasting a complex system's time evolution constitutes a challenging problem, especially if the governing physical equations are unknown or too complex to be simulated with first-principles models. Here a model-free prediction scheme based on the observed multivariate time series is discussed. It efficiently overcomes the curse of dimensionality in finding good predictors from large data sets and yields information-theoretically optimal predictors. The practical performance of the prediction scheme is demonstrated on multivariate nonlinear stochastic delay processes and in an application to an index of the El Niño-Southern Oscillation.
Optimizing the lithography model calibration algorithms for NTD process
NASA Astrophysics Data System (ADS)
Hu, C. M.; Lo, Fred; Yang, Elvis; Yang, T. H.; Chen, K. C.
2016-03-01
As patterns shrink to the resolution limits of up-to-date ArF immersion lithography technology, the negative tone development (NTD) process has been increasingly adopted to obtain superior imaging quality by employing bright-field (BF) masks to print the critical dark-field (DF) metal and contact layers. However, from the fundamental materials and process-interaction perspectives, several key differences exist between the NTD process and the traditional positive tone development (PTD) system, especially the horizontal/vertical resist shrinkage and developer depletion effects; hence the traditional resist parameters developed for the typical PTD process no longer fit NTD process modeling well. To cope with the inherent differences between the PTD and NTD processes and thereby improve NTD modeling accuracy, several NTD models with different combinations of complementary terms were built to account for the NTD-specific resist shrinkage, developer depletion and diffusion, and the wafer CD jump induced by sub-resolution assist feature (SRAF) effects. Each new complementary NTD term has a definite aim: to deal with a specific NTD phenomenon. In this study, the modeling accuracy is compared among the different models for the specific patterning characteristics of various feature types. Multiple complementary NTD terms are finally proposed to address all the NTD-specific behaviors simultaneously and further optimize the NTD modeling accuracy. The new multiple-complementary-term algorithm, tested on our critical dark-field layers, demonstrates consistent model accuracy improvement for both calibration and verification.
Robust model predictive control for optimal continuous drug administration.
Sopasakis, Pantelis; Patrinos, Panagiotis; Sarimveis, Haralambos
2014-10-01
In this paper model predictive control (MPC) technology is used to tackle the optimal drug administration problem. The important advantage of MPC compared to other control technologies is that it explicitly takes into account the constraints of the system. In particular, for drug treatments of living organisms, MPC can guarantee satisfaction of minimum toxic concentration (MTC) constraints. A whole-body physiologically based pharmacokinetic (PBPK) model serves as the dynamic prediction model of the system after it is formulated as a discrete-time state-space model. Only plasma concentrations are assumed to be measured on-line. The rest of the states (drug concentrations in other organs and tissues) are estimated in real time by designing an artificial observer. The complete system (observer and MPC controller) is able to drive the drug concentration to the desired levels at the organs of interest, while satisfying the imposed constraints, even in the presence of modelling errors, disturbances and noise. A case study on a PBPK model with 7 compartments, constraints on 5 tissues and a variable drug concentration set-point illustrates the efficiency of the methodology in drug dosing control applications. The proposed methodology is also tested in an uncertain setting and proves successful in the presence of modelling errors and inaccurate measurements.
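The receding-horizon mechanics can be sketched on a deliberately reduced problem. Below is a one-compartment stand-in for the PBPK model with invented dynamics, target, and MTC bound; the paper's actual controller works on a 7-compartment model with a state observer:

```python
import numpy as np
from scipy.optimize import minimize

# Toy discrete-time one-compartment drug model:
#   x[t+1] = a*x[t] + b*u[t]   (x: plasma concentration, u: dose rate)
a, b = 0.9, 0.5
target, mtc, horizon = 1.0, 1.2, 10

def predict(x0, u):
    # Roll the model forward over the horizon for a candidate dose plan u.
    x, traj = x0, []
    for uk in u:
        x = a * x + b * uk
        traj.append(x)
    return np.array(traj)

def cost(u, x0):
    traj = predict(x0, u)
    tracking = np.sum((traj - target) ** 2)
    violation = np.sum(np.maximum(traj - mtc, 0.0) ** 2)  # soft MTC constraint
    return tracking + 1e3 * violation

# Receding horizon: at each step re-optimize the whole dose plan,
# apply only its first move, then repeat from the new state.
x, history = 0.0, []
for _ in range(30):
    res = minimize(cost, np.zeros(horizon), args=(x,),
                   bounds=[(0.0, 2.0)] * horizon)
    x = a * x + b * res.x[0]
    history.append(x)
```

Re-solving at every step is what gives MPC its robustness: if a disturbance or model error pushes the state off the predicted path, the next optimization starts from the measured (or observer-estimated) state rather than the stale plan.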
Analysis of Sting Balance Calibration Data Using Optimized Regression Models
NASA Technical Reports Server (NTRS)
Ulbrich, Norbert; Bader, Jon B.
2009-01-01
Calibration data of a wind tunnel sting balance were processed using a search algorithm that identifies an optimized regression model for the data analysis. The selected sting balance had two moment gages that were mounted forward and aft of the balance moment center. The difference and the sum of the two gage outputs were fitted in the least-squares sense using the normal force and the pitching moment at the balance moment center as independent variables. The regression model search algorithm predicted that the difference of the gage outputs should be modeled using the intercept and the normal force. The sum of the two gage outputs, on the other hand, should be modeled using the intercept, the pitching moment, and the square of the pitching moment. Equations for the deflection of a cantilever beam are used to show that the search algorithm's two recommended math models can also be obtained after performing a rigorous theoretical analysis of the deflection of the sting balance under load. The analysis of the sting balance calibration data set is a rare example of a situation where regression models of balance calibration data can be derived directly from first principles of physics and engineering. In addition, it is interesting to see that the search algorithm recommended the same regression models for the data analysis using only a set of statistical quality metrics.
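The two recommended regression models can be illustrated with an ordinary least-squares fit on synthetic data; the coefficients and noise level below are invented, not values from the actual balance calibration:

```python
import numpy as np

rng = np.random.default_rng(1)
N = rng.uniform(-100, 100, 200)   # normal force
M = rng.uniform(-50, 50, 200)     # pitching moment

# Synthetic gage responses following the two recommended model forms:
#   difference ~ intercept + N
#   sum        ~ intercept + M + M^2
diff = 2.0 + 0.3 * N + rng.normal(0, 0.01, 200)
summ = -1.0 + 0.8 * M + 0.002 * M**2 + rng.normal(0, 0.01, 200)

X_diff = np.column_stack([np.ones_like(N), N])        # [1, N]
X_sum = np.column_stack([np.ones_like(M), M, M**2])   # [1, M, M^2]

c_diff, *_ = np.linalg.lstsq(X_diff, diff, rcond=None)
c_sum, *_ = np.linalg.lstsq(X_sum, summ, rcond=None)
```

With low measurement noise the least-squares estimates recover the underlying coefficients closely, which is the practical payoff of the search algorithm choosing a model form that matches the balance's actual physics.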
Optimized GPU simulation of continuous-spin glass models
NASA Astrophysics Data System (ADS)
Yavors'kii, T.; Weigel, M.
2012-08-01
We develop a highly optimized code for simulating the Edwards-Anderson Heisenberg model on graphics processing units (GPUs). Using a number of computational tricks such as tiling, data compression and appropriate memory layouts, the simulation code, combining over-relaxation, heat bath and parallel tempering moves, achieves a peak performance of 0.29 ns per spin update on realistic system sizes, corresponding to a more than 150-fold speed-up over a serial CPU reference implementation. The optimized implementation is used to study the spin-glass transition in a random external magnetic field to probe the existence of a de Almeida-Thouless line in the model, for which we give benchmark results.
Optimal symmetric flight with an intermediate vehicle model
NASA Technical Reports Server (NTRS)
Menon, P. K. A.; Kelley, H. J.; Cliff, E. M.
1983-01-01
Optimal flight in the vertical plane with a vehicle model intermediate in complexity between the point-mass and energy models is studied. The flight-path angle takes on the role of a control variable. Range-open problems feature subarcs of vertical flight and singular subarcs. The class of altitude-speed-range-time optimization problems with fuel expenditure unspecified is investigated and some interesting phenomena are uncovered. The maximum-lift-to-drag glide appears as part of the family, final-time-open, with appropriate initial and terminal transients exceeding level-flight drag, some members exhibiting oscillations. Oscillatory paths generally fail the Jacobi test for durations exceeding a period and furnish a minimum only for short-duration problems.
Optimal Culling and Biocontrol in a Predator-Prey Model.
Numfor, Eric; Hilker, Frank M; Lenhart, Suzanne
2017-01-01
Invasive species cause enormous problems in ecosystems around the world. Motivated by introduced feral cats that prey on bird populations and threaten to drive them extinct on remote oceanic islands, we formulate and analyze optimal control problems. Their novelty is that they involve both scalar and time-dependent controls. They represent different forms of control, namely the initial release of infected predators on the one hand, and culling as well as trapping, infecting, and returning predators on the other. Combinations of different control methods have been proposed to complement their respective strengths in reducing predator numbers and thus protecting endangered prey. Here, we formulate and analyze an eco-epidemiological model, provide analytical results on the optimal control problem, and use a forward-backward sweep method for numerical simulations. By taking into account different ecological scenarios, initial conditions, and control durations, our model allows us to gain insight into how the different methods interact and in which cases they could be effective.
Designing optimal number of receiving traces based on simulation model
NASA Astrophysics Data System (ADS)
Zhao, Hu; Wu, Si-Hai; Yang, Jing; Ren, Da; Xu, Wei-Xiu; Liu, Di-Ou; Zhu, Peng-Yu
2017-03-01
Currently, the selection of receiving traces in geometry design is mostly based on the horizontally layered medium hypothesis, which is unable to meet survey requirements in a complex area. This paper estimates the optimal number of receiving traces in field geometry using a numerical simulation based on a field test conducted in previous research (Zhu et al., 2011). A mathematical model is established for total energy and average efficiency energy using fixed trace spacing, and the optimal number of receiving traces is estimated. Seismic data acquired in a complex work area are used to verify the correctness of the proposed method. Model calculations and actual data processing give results that are in agreement, indicating that the proposed method is reasonable and correct, and can be regarded as a novel method for seismic geometry design in complex geological regions.
U.S. Army Delayed Entry Program Optimization Model
2004-08-01
changing policy. Chapter 5 addresses the issue of optimizing the EDEP, including objectives and metrics for a model and alternative solution methods... The DEP allows a personnel surplus to flow into the training bases; Accessions and Recruiting Command use the DEP extensively for smoothing seasonal recruiting... to meet school requirements despite unpredictable events (e.g., Sept. 11)... 4. Equity problem related to differences... 5. Relief from direct...
Modeling Microinverters and DC Power Optimizers in PVWatts
MacAlpine, S.; Deline, C.
2015-02-01
Module-level distributed power electronics including microinverters and DC power optimizers are increasingly popular in residential and commercial PV systems. Consumers are realizing their potential to increase design flexibility, monitor system performance, and improve energy capture. It is becoming increasingly important to accurately model PV systems employing these devices. This document summarizes existing published documents to provide uniform, impartial recommendations for how the performance of distributed power electronics can be reflected in NREL's PVWatts calculator (http://pvwatts.nrel.gov/).
Optimization of sampled imaging system with baseband response squeeze model
NASA Astrophysics Data System (ADS)
Yang, Huaidong; Chen, Kexin; Huang, Xingyue; He, Qingsheng; Jin, Guofan
2008-03-01
When evaluating or designing a sampled imager, a comprehensive analysis is necessary, and a trade-off among the optics, photoelectric detector and display technique is inevitable. A new method for sampled imaging system evaluation and optimization is developed in this paper. By extending the MTF of the sampled imaging system, inseparable parameters of a detector are taken into account and relations among the optics, detector and display are revealed. To measure the artifacts of sampling, the baseband response squeeze model, which imposes a penalty for undersampling, is clarified. Taking the squeezed baseband response and its cutoff frequency as the criterion, the method is suited not only to evaluating but also to optimizing sampled imaging systems oriented either to a single task or to multiple tasks. The method is applied to optimize a typical sampled imaging system: a sensitivity analysis of various detector parameters is performed and the resulting guidelines are given.
Mathematical model of the metal mould surface temperature optimization
Mlynek, Jaroslav; Knobloch, Roman; Srb, Radek
2015-11-30
The article is focused on the problem of generating a uniform temperature field on the inner surface of shell metal moulds. Such moulds are used e.g. in the automotive industry for artificial leather production. To produce artificial leather with a uniform surface structure and colour shade, the temperature on the inner surface of the mould has to be as homogeneous as possible. The heating of the mould is realized by infrared heaters located above the outer mould surface. The conceived mathematical model allows us to optimize the locations of the infrared heaters over the mould so that an approximately uniform heat radiation intensity is generated. A version of the differential evolution algorithm, programmed in the Matlab development environment, was created by the authors for the optimization process. For temperature calculations the ANSYS software system was used. A practical example of the optimization of heater locations and calculation of the mould temperature is included at the end of the article.
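The heater-placement idea can be sketched with SciPy's differential evolution on a toy problem. The 1-D mould surface, inverse-square intensity kernel, heater height, and heater count below are invented simplifications of the real 3-D Matlab/ANSYS setup:

```python
import numpy as np
from scipy.optimize import differential_evolution

surface = np.linspace(0.0, 1.0, 50)   # sample points on the mould surface
height = 0.2                          # fixed heater height above the surface
n_heaters = 4

def nonuniformity(x):
    # x: heater positions along the surface. Each heater contributes an
    # inverse-square intensity; minimize the relative spread of the total.
    r2 = (surface[:, None] - x[None, :]) ** 2 + height ** 2
    intensity = (1.0 / r2).sum(axis=1)
    return intensity.std() / intensity.mean()

# Differential evolution searches the heater positions globally, which
# suits this objective: it is cheap to evaluate but has many local optima.
result = differential_evolution(nonuniformity,
                                bounds=[(0.0, 1.0)] * n_heaters,
                                seed=0, tol=1e-8)
```

Comparing `result.fun` against a naive layout (all heaters clustered at the center) shows the evolutionary search spreading heaters out to flatten the intensity profile, the same qualitative behavior the article reports for the full 3-D model.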
Optimization model of vaccination strategy for dengue transmission
NASA Astrophysics Data System (ADS)
Widayani, H.; Kallista, M.; Nuraini, N.; Sari, M. Y.
2014-02-01
Dengue fever is an emerging tropical and subtropical disease caused by dengue virus infection. Vaccination is a way of preventing an epidemic in a population. The host-vector model is modified to include a vaccination factor that prevents the occurrence of epidemic dengue in a population. An optimal vaccination strategy using a non-linear objective function is proposed. Genetic algorithm programming techniques are combined with the fourth-order Runge-Kutta method to construct the optimal vaccination. In this paper, an appropriate vaccination strategy, obtained by minimizing the cost function so as to reduce the scale of the epidemic, is analyzed. Numerical simulations for some specific cases of the vaccination strategy are shown.
Optimization in generalized linear models: A case study
NASA Astrophysics Data System (ADS)
Silva, Eliana Costa e.; Correia, Aldina; Lopes, Isabel Cristina
2016-06-01
The maximum likelihood method is usually chosen to estimate the regression parameters of Generalized Linear Models (GLM), and is also used for hypothesis testing and goodness-of-fit tests. The classical method for estimating GLM parameters is Fisher scoring. In this work we propose to compute the parameter estimates with two alternative methods: a derivative-based optimization method, namely the BFGS method, which is one of the most popular quasi-Newton algorithms, and the PSwarm derivative-free optimization method, which combines features of a pattern search optimization method with a global particle swarm scheme. As a case study we use a dataset of biological parameters (phytoplankton) and chemical and environmental parameters of the water column of a Portuguese reservoir. The results show that, for this dataset, the BFGS and PSwarm methods provided a better fit than the Fisher scoring method and can be good alternatives for finding the estimates of the parameters of a GLM.
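The BFGS alternative can be sketched for a binomial GLM (logistic regression) on synthetic data; the dataset and coefficients below are invented, not the reservoir data from the paper:

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic logistic-regression data: design matrix with an intercept
# column and two covariates, binary response drawn from the true model.
rng = np.random.default_rng(2)
X = np.column_stack([np.ones(500), rng.normal(size=(500, 2))])
true_beta = np.array([-0.5, 1.0, 2.0])
p = 1.0 / (1.0 + np.exp(-X @ true_beta))
y = rng.binomial(1, p)

def neg_log_lik(beta):
    # Negative log-likelihood of the binomial GLM with logit link:
    # sum over observations of log(1 + exp(eta)) - y*eta.
    eta = X @ beta
    return np.sum(np.log1p(np.exp(eta)) - y * eta)

# Direct minimization with BFGS instead of Fisher scoring.
fit = minimize(neg_log_lik, np.zeros(3), method="BFGS")
```

Because the binomial log-likelihood with a canonical link is concave, any reasonable optimizer should land on the same maximum-likelihood estimate; the interest in the paper is in robustness and fit quality on a real, messier dataset.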
NASA Astrophysics Data System (ADS)
Wöhling, Thomas; Vrugt, Jasper A.
2008-12-01
Most studies in vadose zone hydrology use a single conceptual model for predictive inference and analysis. Focusing on the outcome of a single model is prone to statistical bias and underestimation of uncertainty. In this study, we combine multiobjective optimization and Bayesian model averaging (BMA) to generate forecast ensembles of soil hydraulic models. To illustrate our method, we use observed tensiometric pressure head data at three different depths in a layered vadose zone of volcanic origin in New Zealand. A set of seven different soil hydraulic models is calibrated using a multiobjective formulation with three different objective functions that each measure the mismatch between observed and predicted soil water pressure head at one specific depth. The Pareto solution space corresponding to these three objectives is estimated with AMALGAM and used to generate four different model ensembles. These ensembles are postprocessed with BMA and used for predictive analysis and uncertainty estimation. Our most important conclusions for the vadose zone under consideration are (1) the mean BMA forecast exhibits similar predictive capabilities to the best individual soil hydraulic model, (2) the size of the BMA uncertainty ranges increases with increasing depth and dryness in the soil profile, (3) the best performing ensemble corresponds to the compromise (or balanced) solution of the three-objective Pareto surface, and (4) the combined multiobjective optimization and BMA framework proposed in this paper is very useful for generating forecast ensembles of soil hydraulic models.
A model of optimal dosing of antibiotic treatment in biofilm.
Imran, Mudassar; Smith, Hal L
2014-06-01
Biofilms are heterogeneous, matrix-enclosed micro-colonies of bacteria mostly found on moist surfaces. Biofilm formation is the primary cause of several persistent infections found in humans. We derive a mathematical model of biofilm and surrounding fluid dynamics to investigate the effect of a periodic dose of antibiotic on the elimination of the microbial population from the biofilm. The growth rate of bacteria in the biofilm is taken to be of Monod type in the limiting nutrient. The pharmacodynamic function is taken to depend on both the limiting nutrient and the antibiotic concentration. Assuming that the flow rate of the fluid compartment is large enough, we reduce the six-dimensional model to a three-dimensional model. Mathematically rigorous results are derived providing sufficient conditions for treatment success. Persistence theory is used to derive conditions under which the periodic solution for treatment failure is obtained. We also discuss the phenomenon of bi-stability, where both the infection-free state and the infection state are locally stable when antibiotic dosing is marginal. In addition, we derive optimal antibiotic application protocols for different scenarios using control theory and show that such treatments ensure bacteria elimination for a wide variety of cases. The results show that bacteria are successfully eliminated if the discrete treatment is given at an early stage in the infection or if the optimal protocol is adopted. Finally, we examine factors which, if changed, can turn treatment failures under the non-optimal technique into successes.
Influence of model errors in optimal sensor placement
NASA Astrophysics Data System (ADS)
Vincenzi, Loris; Simonini, Laura
2017-02-01
The paper investigates the role of model errors and parametric uncertainties in optimal or near-optimal sensor placements for structural health monitoring (SHM) and modal testing. The near-optimal set of measurement locations is obtained by information entropy theory; the results of the placement process depend considerably on the so-called covariance matrix of prediction error as well as on the definition of the correlation function. A constant and an exponential correlation function depending on the distance between sensors are first assumed; then a proposal depending on both distance and modal vectors is presented. With reference to a simple case study, the effect of model uncertainties on the results is described, and the reliability and robustness of the proposed correlation function in the case of model errors are tested with reference to 2D and 3D benchmark case studies. A measure of the quality of the obtained sensor configuration is considered through the use of independent assessment criteria. In conclusion, the results obtained by applying the proposed procedure to a real 5-span steel footbridge are described. The proposed method also allows better estimation of higher modes when the number of sensors is greater than the number of modes of interest. In addition, the results show a smaller variation in the sensor positions when uncertainties occur.
Optimizing multi-pinhole SPECT geometries using an analytical model
NASA Astrophysics Data System (ADS)
Rentmeester, M. C. M.; van der Have, F.; Beekman, F. J.
2007-05-01
State-of-the-art multi-pinhole SPECT devices allow for sub-mm resolution imaging of radio-molecule distributions in small laboratory animals. The optimization of multi-pinhole and detector geometries using simulations based on ray-tracing or Monte Carlo algorithms is time-consuming, particularly because many system parameters need to be varied. As an efficient alternative we develop a continuous analytical model of a pinhole SPECT system with a stationary detector set-up, which we apply to focused imaging of a mouse. The model assumes that the multi-pinhole collimator and the detector both have the shape of a spherical layer, and uses analytical expressions for effective pinhole diameters, sensitivity and spatial resolution. For fixed fields-of-view, a pinhole-diameter-adapting feedback loop allows for the comparison of the system resolution of different systems at equal system sensitivity, and vice versa. The model predicts that (i) for optimal resolution or sensitivity the collimator layer with pinholes should be placed as closely as possible around the animal given a fixed detector layer, (ii) with high-resolution detectors a resolution improvement of up to 31% can be achieved compared to optimized systems, (iii) high-resolution detectors can be placed close to the collimator without significant resolution losses, and (iv) interestingly, systems with a physical pinhole diameter of 0 mm can have excellent resolution when high-resolution detectors are used.
Model-based optimization of tapered free-electron lasers
NASA Astrophysics Data System (ADS)
Mak, Alan; Curbis, Francesca; Werin, Sverker
2015-04-01
The energy extraction efficiency is a figure of merit for a free-electron laser (FEL). It can be enhanced by the technique of undulator tapering, which enables the sustained growth of radiation power beyond the initial saturation point. In the development of a single-pass x-ray FEL, it is important to exploit the full potential of this technique and optimize the taper profile a_w(z). Our approach to the optimization is based on the theoretical model by Kroll, Morton, and Rosenbluth, whereby the taper profile a_w(z) is not a predetermined function (such as linear or exponential) but is determined by the physics of a resonant particle. For further enhancement of the energy extraction efficiency, we propose a modification to the model, which involves manipulations of the resonant particle's phase. Using the numerical simulation code GENESIS, we apply our model-based optimization methods to a case of the future FEL at the MAX IV Laboratory (Lund, Sweden), as well as a case of the LCLS-II facility (Stanford, USA).
Optimization and Performance Modeling of Stencil Computations on Modern Microprocessors
Datta, Kaushik; Kamil, Shoaib; Williams, Samuel; Oliker, Leonid; Shalf, John; Yelick, Katherine
2007-06-01
Stencil-based kernels constitute the core of many important scientific applications on block-structured grids. Unfortunately, these codes achieve a low fraction of peak performance, due primarily to the disparity between processor and main memory speeds. In this paper, we explore the impact of trends in memory subsystems on a variety of stencil optimization techniques and develop performance models to analytically guide our optimizations. Our work targets cache reuse methodologies across single and multiple stencil sweeps, examining cache-aware algorithms as well as cache-oblivious techniques on the Intel Itanium2, AMD Opteron, and IBM Power5. Additionally, we consider stencil computations on the heterogeneous multicore design of the Cell processor, a machine with an explicitly managed memory hierarchy. Overall, our work represents one of the most extensive analyses of stencil optimizations and performance modeling to date. Results demonstrate that recent trends in memory system organization have reduced the efficacy of traditional cache-blocking optimizations. We also show that a cache-aware implementation is significantly faster than a cache-oblivious approach, while the explicitly managed memory on Cell enables the highest overall efficiency: Cell attains 88% of algorithmic peak while the best competing cache-based processor achieves only 54% of algorithmic peak performance.
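A cache-blocked stencil sweep of the kind analyzed here can be sketched as follows: a 2-D 5-point Jacobi update traversed in tiles so that a working set of a few tile rows stays resident in cache. This is illustrative only, not one of the paper's benchmark kernels; the tile size is an assumed tuning parameter:

```python
import numpy as np

def jacobi_sweep_blocked(a, b, tile=64):
    # One 5-point Jacobi sweep over the interior of a, writing into b.
    # The interior is traversed in tile x tile blocks (loop blocking /
    # cache blocking) to improve temporal reuse of neighboring rows.
    n, m = a.shape
    for ii in range(1, n - 1, tile):
        for jj in range(1, m - 1, tile):
            for i in range(ii, min(ii + tile, n - 1)):
                for j in range(jj, min(jj + tile, m - 1)):
                    b[i, j] = 0.25 * (a[i - 1, j] + a[i + 1, j]
                                      + a[i, j - 1] + a[i, j + 1])
    return b
```

The blocked traversal produces exactly the same values as a straightforward row-by-row sweep; only the memory access order changes, which is the property the paper's performance models reason about.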
Bayesian image reconstruction - The pixon and optimal image modeling
NASA Technical Reports Server (NTRS)
Pina, R. K.; Puetter, R. C.
1993-01-01
In this paper we describe the optimal image model, maximum residual likelihood method (OptMRL) for image reconstruction. OptMRL is a Bayesian image reconstruction technique for removing point-spread function blurring. OptMRL uses both a goodness-of-fit criterion (GOF) and an 'image prior', i.e., a function which quantifies the a priori probability of the image. Unlike standard maximum entropy methods, which typically reconstruct the image on the data pixel grid, OptMRL varies the image model in order to find the optimal functional basis with which to represent the image. We show how an optimal basis for image representation can be selected and in doing so, develop the concept of the 'pixon' which is a generalized image cell from which this basis is constructed. By allowing both the image and the image representation to be variable, the OptMRL method greatly increases the volume of solution space over which the image is optimized. Hence the likelihood of the final reconstructed image is greatly increased. For the goodness-of-fit criterion, OptMRL uses the maximum residual likelihood probability distribution introduced previously by Pina and Puetter (1992). This GOF probability distribution, which is based on the spatial autocorrelation of the residuals, has the advantage that it ensures spatially uncorrelated image reconstruction residuals.
An optimization model for the US Air-Traffic System
NASA Technical Reports Server (NTRS)
Mulvey, J. M.
1986-01-01
A systematic approach for monitoring U.S. air traffic was developed in the context of system-wide planning and control. Towards this end, a network optimization model with nonlinear objectives was chosen as the central element in the planning/control system. The network representation was selected because: (1) it provides a comprehensive structure for depicting essential aspects of the air traffic system, (2) it can be solved efficiently for large-scale problems, and (3) the design can be easily communicated to non-technical users through computer graphics. Briefly, the network planning models consider the flow of traffic through a graph as the basic structure. Nodes depict locations and time periods for either individual planes or for aggregated groups of airplanes. Arcs define variables as actual airplanes flying through space or as delays across time periods. As such, a special case of the network can be used to model the so-called flow control problem. Due to the large number of interacting variables and the difficulty in subdividing the problem into relatively independent subproblems, an integrated model was designed which depicts the entire high-level (above 29,000 feet) jet route system for the 48 contiguous states in the U.S. As a first step in demonstrating the concept's feasibility, a nonlinear risk/cost model was developed for the Indianapolis Airspace. The nonlinear network program (NLPNETG) was employed in solving the resulting test cases. This optimization program uses the Truncated-Newton method (quadratic approximation) for determining the search direction at each iteration in the nonlinear algorithm. It was shown that aircraft could be re-routed in an optimal fashion whenever traffic congestion increased beyond an acceptable level, as measured by the nonlinear risk function.
Leardini, Alberto; Belvedere, Claudio; Nardini, Fabrizio; Sancisi, Nicola; Conconi, Michele; Parenti-Castelli, Vincenzo
2017-05-22
Kinematic models of lower limb joints have several potential applications in musculoskeletal modelling of the locomotion apparatus, including the reproduction of the natural joint motion. These models have recently revealed their value also for in vivo motion analysis experiments, where the soft-tissue artefact is a critical known problem. This artefact arises at the interface between the skin markers and the underlying bone, and can be reduced by defining multibody kinematic models of the lower limb and by running optimization processes aimed at obtaining estimates of the position and orientation of the relevant bones. With respect to standard methods based on the separate optimization of each single body segment, this technique also makes it possible to respect joint kinematic constraints. Whereas the hip joint is traditionally assumed to be a 3-degrees-of-freedom ball-and-socket articulation, many previous studies have proposed a number of different kinematic models for the knee and ankle joints. Some of these are rigid, while others have compliant elements. Some models have clear anatomical correspondences and include real joint constraints; other models are more kinematically oriented, being mainly aimed at reproducing joint kinematics. This paper provides a critical review of the kinematic models reported in the literature for the major lower limb joints and used for the reduction of soft-tissue artefact. Advantages and disadvantages of these models are discussed, considering their anatomical significance, accuracy of predictions, computational costs, feasibility of personalization, and other features. Their use in the optimization process is also addressed, in both normal and pathological subjects.
Optimal Treatment Strategy for a Tumor Model under Immune Suppression
Kim, Kwang Su; Cho, Giphil; Jung, Il Hyo
2014-01-01
We propose a mathematical model describing tumor-immune interactions under immune suppression. Evidence increasingly indicates that immune suppression related to cancer contributes to its progression. A mathematical model for tumor-immune interactions would provide a new methodology for more sophisticated treatment options for cancer. To do this we have developed a system of 11 ordinary differential equations including the movement, interaction, and activation of NK cells, CD8+ T cells, CD4+ T cells, regulatory T cells, and dendritic cells in the presence of a tumor and cytokines, together with the immune interactions. In addition, we apply two control therapies, immunotherapy and chemotherapy, to the model in order to control the growth of the tumor. Using optimal control theory and numerical simulations, we obtain appropriate treatment strategies according to the ratio of the costs of the two therapies, which suggest an optimal timing of each administration for the two types of models, without and with immunosuppressive effects. These results mean that immune suppression can influence treatment strategies for cancer. PMID:25140193
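The structure of such a controlled tumor-immune system can be illustrated with a deliberately reduced two-equation tumor-effector model under a chemotherapy control u(t). This is a toy sketch, not the paper's 11-equation system; all rate constants below are arbitrary assumptions:

```python
def simulate(u, dt=0.01, T_end=10.0):
    # Toy tumor-effector dynamics under a chemotherapy control u(t) in [0, 1]:
    #   T' = r*T*(1 - T/K) - a*T*E - c*u(t)*T   (logistic growth, immune kill,
    #                                            drug-induced kill)
    #   E' = s - d*E + p*T*E/(g + T)            (source, decay, recruitment)
    # Forward Euler integration; returns the final tumor burden.
    r, K, a, c = 0.3, 1.0, 0.2, 0.5
    s, d, p, g = 0.05, 0.1, 0.1, 0.3
    Tm, E = 0.5, 0.1
    for k in range(int(T_end / dt)):
        t = k * dt
        dT = r * Tm * (1 - Tm / K) - a * Tm * E - c * u(t) * Tm
        dE = s - d * E + p * Tm * E / (g + Tm)
        Tm += dt * dT
        E += dt * dE
    return Tm
```

An optimal control formulation would then search over admissible schedules u(t), weighting the final tumor burden against the integrated dose cost of each therapy.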
Dynamic imaging model and parameter optimization for a star tracker.
Yan, Jinyun; Jiang, Jie; Zhang, Guangjun
2016-03-21
Under dynamic conditions, star spots move across the image plane of a star tracker and form a smeared star image. This smearing effect increases errors in star position estimation and degrades attitude accuracy. First, an analytical energy distribution model of a smeared star spot is established based on a line segment spread function because the dynamic imaging process of a star tracker is equivalent to the static imaging process of linear light sources. The proposed model, which has a clear physical meaning, explicitly reflects the key parameters of the imaging process, including incident flux, exposure time, velocity of a star spot in an image plane, and Gaussian radius. Furthermore, an analytical expression of the centroiding error of the smeared star spot is derived using the proposed model. An accurate and comprehensive evaluation of centroiding accuracy is obtained based on the expression. Moreover, analytical solutions of the optimal parameters are derived to achieve the best performance in centroid estimation. Finally, we perform numerical simulations and a night sky experiment to validate the correctness of the dynamic imaging model, the centroiding error expression, and the optimal parameters.
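The line-segment spread idea can be sketched directly: convolving a Gaussian PSF with a uniform segment of length L gives an erf-difference energy profile along the motion direction, and the centroid of the symmetric smear sits at the segment midpoint. This is a 1-D illustration with assumed parameters, not the paper's full 2-D model or its centroiding-error expression:

```python
import math

def smeared_profile(x, L, sigma):
    # Energy profile along the motion direction for a star spot smeared
    # over the segment [0, L]: a Gaussian PSF of radius sigma convolved
    # with a line-segment (uniform) spread function.
    s = math.sqrt(2.0) * sigma
    return (math.erf(x / s) - math.erf((x - L) / s)) / (2.0 * L)

def centroid(L, sigma, lo=-5.0, hi=10.0, n=3001):
    # Discrete center-of-mass of the smeared profile, mimicking a
    # centroiding step on sampled pixel intensities.
    xs = [lo + (hi - lo) * k / (n - 1) for k in range(n)]
    w = [smeared_profile(x, L, sigma) for x in xs]
    return sum(x * wi for x, wi in zip(xs, w)) / sum(w)
```

In practice noise and truncation break the symmetry, which is where the smear length (exposure time times angular rate) and the Gaussian radius enter the centroiding-error trade-off the paper optimizes.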
Dendritic Immunotherapy Improvement for an Optimal Control Murine Model
Rangel-Reyes, J. C.; Chimal-Eguía, J. C.; Castillo-Montiel, E.
2017-01-01
Therapeutic protocols in immunotherapy are usually proposed following the intuition and experience of the therapist. In order to deduce such protocols, mathematical modeling, optimal control and simulations are used instead of the therapist's experience. The clinical efficacy of dendritic cell (DC) vaccines in cancer treatment is still unclear, since dendritic cells face several obstacles in the host environment, such as immunosuppression and poor transfer to the lymph nodes, which reduce the vaccine's effect. In view of that, we have created a mathematical murine model to measure the effects of dendritic cell injections while accounting for such obstacles. In addition, the model considers a therapy given by bolus injections of short duration as opposed to a continuous dose. Dose timing defines the therapeutic protocols, which in turn are improved to minimize the tumor mass by an optimal control algorithm. We intend to supplement the therapist's experience and intuition in the protocol's implementation. Experimental results on mice infected with melanoma, with and without therapy, agree with the model. It is shown that the percentage of dendritic cells that manage to reach the lymph nodes has a crucial impact on the therapy outcome. This suggests that efforts in finding better methods to deliver DC vaccines should be pursued. PMID:28912828
Parallelism and optimization of numerical ocean forecasting model
NASA Astrophysics Data System (ADS)
Xu, Jianliang; Pang, Renbo; Teng, Junhua; Liang, Hongtao; Yang, Dandan
2016-10-01
According to the characteristics of Chinese marginal seas, the Marginal Sea Model of China (MSMC) has been developed independently in China. Because the model requires a long simulation time, parallelizing MSMC becomes necessary to improve its performance as a routine forecasting model. However, some methods used in MSMC, such as the Successive Over-Relaxation (SOR) algorithm, are not directly suitable for parallelism. In this paper, methods are developed to solve the parallelization problem of the SOR algorithm in the following steps. First, based on a 3D computing grid system, an automatic data partition method is implemented to dynamically divide the computing grid according to the available computing resources. Next, based on the characteristics of the numerical forecasting model, a parallel method is designed to solve the parallelization problem of the SOR algorithm. Lastly, a communication optimization method is provided to hide the cost of communication. In this method, the non-blocking communication of the Message Passing Interface (MPI) is used to parallelize MSMC with its complex physical equations, and communication is overlapped with computation to improve the performance of the parallel MSMC. Experiments show that the parallel MSMC runs 97.2 times faster than the serial MSMC, and the root mean square error between the parallel and serial MSMC is less than 0.01 for a 30-day simulation (172,800 time steps), which meets the timeliness and accuracy requirements of numerical ocean forecasting products.
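A standard way to make SOR parallelizable, possibly along the lines used here, is red-black ordering: grid points of one "color" depend only on points of the other color, so each half-sweep has no loop-carried dependency and can be vectorized or distributed across MPI ranks. An illustrative NumPy sketch for a 2-D Poisson problem (not MSMC's actual discretization):

```python
import numpy as np

def sor_red_black(u, f, h, omega=1.5, sweeps=200):
    # Red-black SOR for -laplace(u) = f on a uniform grid with spacing h.
    # Each colored half-sweep updates an independent set of points, so it
    # is expressed here as vectorized slice arithmetic; in a distributed
    # code the same structure lets halo exchange overlap with computation.
    n, m = u.shape
    for _ in range(sweeps):
        for color in (0, 1):
            for i in range(1, n - 1):
                j0 = 1 + (i + color) % 2  # first interior column of this color
                gs = 0.25 * (u[i - 1, j0:m - 1:2] + u[i + 1, j0:m - 1:2]
                             + u[i, j0 - 1:m - 2:2] + u[i, j0 + 1:m:2]
                             + h * h * f[i, j0:m - 1:2])
                u[i, j0:m - 1:2] = (1 - omega) * u[i, j0:m - 1:2] + omega * gs
    return u
```

With over-relaxation factors between 1 and 2 the iteration converges faster than Gauss-Seidel while keeping each half-sweep fully data-parallel.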
Validation, Optimization and Simulation of a Solar Thermoelectric Generator Model
NASA Astrophysics Data System (ADS)
Madkhali, Hadi Ali; Hamil, Ali; Lee, HoSung
2017-08-01
This study explores thermoelectrics as a viable option for small-scale solar thermal applications. Thermoelectric technology is based on the Seebeck effect, which states that a voltage is induced when a temperature gradient is applied to the junctions of two differing materials. This research proposes to analyze, validate, simulate, and optimize a prototype solar thermoelectric generator (STEG) model in order to increase efficiency. The intent is to further develop STEGs as a viable and productive energy source that limits pollution and reduces the cost of energy production. An empirical study (Kraemer et al. in Nat Mater 10:532, 2011) on the solar thermoelectric generator reported a high efficiency performance of 4.6%. The system had a vacuum glass enclosure, a flat-panel absorber, a thermoelectric generator, and water circulation for the cold side. The theoretical and numerical approach of the current study validated the experimental results from Kraemer's study to a high degree. The numerical simulation process utilizes a two-stage approach in ANSYS, using Fluent and the Thermal-Electric System. The solar load model technique uses solar radiation under AM 1.5G conditions in Fluent. This analytical model applies Dr. Ho Sung Lee's theory of optimal design to improve the performance of the STEG system by using dimensionless parameters. Applying this theory with two cover glasses and radiation shields, the STEG model can achieve a maximum efficiency of 7%.
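The efficiency numbers quoted here can be related to the textbook expression for maximum thermoelectric generator efficiency: the Carnot factor times a material factor set by the dimensionless figure of merit ZT evaluated at the mean temperature. This is the standard formula, not the study's ANSYS model:

```python
import math

def teg_max_efficiency(Th, Tc, ZT):
    # Maximum TEG conversion efficiency for hot/cold side temperatures
    # Th, Tc (kelvin) and figure of merit ZT at the mean temperature:
    #   eta = (1 - Tc/Th) * (sqrt(1 + ZT) - 1) / (sqrt(1 + ZT) + Tc/Th)
    carnot = (Th - Tc) / Th
    m = math.sqrt(1.0 + ZT)
    return carnot * (m - 1.0) / (m + Tc / Th)
```

The formula makes the design trade-off explicit: raising the hot-side temperature (e.g. with cover glasses and radiation shields that cut absorber losses) raises both the Carnot factor and the usable fraction of it.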
Optimization of atmospheric transport models on HPC platforms
NASA Astrophysics Data System (ADS)
de la Cruz, Raúl; Folch, Arnau; Farré, Pau; Cabezas, Javier; Navarro, Nacho; Cela, José María
2016-12-01
The performance and scalability of atmospheric transport models on high performance computing environments is often far from optimal for multiple reasons including, for example, sequential input and output, synchronous communications, work imbalance, memory access latency or lack of task overlapping. We investigate how different software optimizations and porting to non-general-purpose hardware architectures improve code scalability and execution times considering, as an example, the FALL3D volcanic ash transport model. To this purpose, we implement the FALL3D model equations in the WARIS framework, a software framework designed from scratch to solve different geoscience problems in a parallel and efficient way on a wide variety of architectures. In addition, we consider further improvements in WARIS such as hybrid MPI-OpenMP parallelization, spatial blocking, auto-tuning and thread affinity. Considering all these aspects together, the FALL3D execution times for a realistic test case running on general-purpose cluster architectures (Intel Sandy Bridge) decrease by a factor of between 7 and 40 depending on the grid resolution. Finally, we port the application to Intel Xeon Phi (MIC) and NVIDIA GPU (CUDA) accelerator-based architectures and compare performance, cost and power consumption across all the architectures. Implications for time-constrained operational model configurations are discussed.
Optimizing Muscle Parameters in Musculoskeletal Modeling Using Monte Carlo Simulations
NASA Technical Reports Server (NTRS)
Hanson, Andrea; Reed, Erik; Cavanagh, Peter
2011-01-01
Astronauts assigned to long-duration missions experience bone and muscle atrophy in the lower limbs. The use of musculoskeletal simulation software has become a useful tool for modeling joint and muscle forces during human activity in reduced gravity as access to direct experimentation is limited. Knowledge of muscle and joint loads can better inform the design of exercise protocols and exercise countermeasure equipment. In this study, the LifeModeler(TM) (San Clemente, CA) biomechanics simulation software was used to model a squat exercise. The initial model using default parameters yielded physiologically reasonable hip-joint forces. However, no activation was predicted in some large muscles such as rectus femoris, which have been shown to be active in 1-g performance of the activity. Parametric testing was conducted using Monte Carlo methods and combinatorial reduction to find a muscle parameter set that more closely matched physiologically observed activation patterns during the squat exercise. Peak hip joint force using the default parameters was 2.96 times body weight (BW) and increased to 3.21 BW in an optimized, feature-selected test case. The rectus femoris was predicted to peak at 60.1% activation following muscle recruitment optimization, compared to 19.2% activation with default parameters. These results indicate the critical role that muscle parameters play in joint force estimation and the need for exploration of the solution space to achieve physiologically realistic muscle activation.
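A minimal version of Monte Carlo parameter search of the kind described draws random parameter sets within physiological bounds and keeps the best-scoring one. This sketch does not reproduce the paper's combinatorial reduction or its LifeModeler-based scoring, and the parameter names in the usage below are hypothetical:

```python
import random

def monte_carlo_search(score, bounds, n_trials=5000, seed=1):
    # Randomly sample parameter sets within the given bounds and return
    # the set minimizing the score (e.g. mismatch between predicted and
    # experimentally observed muscle activation patterns).
    rng = random.Random(seed)
    best, best_s = None, float("inf")
    for _ in range(n_trials):
        p = {k: rng.uniform(lo, hi) for k, (lo, hi) in bounds.items()}
        s = score(p)
        if s < best_s:
            best, best_s = p, s
    return best, best_s
```

In the study's setting, `score` would run a full squat simulation per sample, so reducing the candidate set (feature selection over parameters) matters far more than it does in this toy usage.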
Differential Evolution Optimization of Diffraction Pattern Models with Clones
NASA Astrophysics Data System (ADS)
Lynch, Vickie; Buergi, Hans-Beat; Hauser, Juerg; Hoffmann, Christina; Michels-Clark, Tara; Miller, Steve
2010-03-01
With the TOPAZ single crystal diffractometer at the Spallation Neutron Source in operation, new computational methods are needed for analyzing the three-dimensional diffraction patterns recorded from disordered crystals. One such method uses a combination of differential evolution and Monte Carlo techniques to model the disorder and analyze the diffuse scattering. Software implementing this method, originally developed at the University of Bern, has been modified to use TeraGrid high performance computers. Since model crystals produced from the same set of disorder parameters differ from one another, the fit errors of such clones differ as well. The performance of the differential evolution optimization in improving the fit of the model is being tested by using a variable number of clones for each individual gene set in a generation of differential evolution. A reference dataset with minimal noise has been generated for this purpose. Results of tests varying the number of clones will be presented.
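The combination described, differential evolution with clone-averaged fitness, can be sketched as a standard DE/rand/1/bin loop in which each candidate's score is the average over several stochastic model "clones". The population size, bounds, and control parameters below are illustrative assumptions, not the Bern software's settings:

```python
import random

def differential_evolution(fitness, dim, n_pop=20, F=0.8, CR=0.9,
                           generations=200, clones=3, seed=0):
    # DE/rand/1/bin minimization. Each parameter vector is scored as the
    # mean of `clones` fitness evaluations, mimicking averaging the fit
    # error over several model crystals built from the same gene set.
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_pop)]

    def f(x):
        return sum(fitness(x) for _ in range(clones)) / clones

    scores = [f(x) for x in pop]
    for _ in range(generations):
        for i in range(n_pop):
            a, b, c = rng.sample([k for k in range(n_pop) if k != i], 3)
            jr = rng.randrange(dim)  # guarantee at least one mutated gene
            trial = [pop[a][j] + F * (pop[b][j] - pop[c][j])
                     if (rng.random() < CR or j == jr) else pop[i][j]
                     for j in range(dim)]
            s = f(trial)
            if s <= scores[i]:
                pop[i], scores[i] = trial, s
    k = min(range(n_pop), key=scores.__getitem__)
    return pop[k], scores[k]
```

Raising `clones` lowers the noise on each score at proportional extra cost, which is exactly the trade-off the clone-count tests probe.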
Optimal Filtering in Mass Transport Modeling From Satellite Gravimetry Data
NASA Astrophysics Data System (ADS)
Ditmar, P.; Hashemi Farahani, H.; Klees, R.
2011-12-01
Monitoring natural mass transport in the Earth's system, which has marked a new era in Earth observation, is largely based on the data collected by the GRACE satellite mission. Unfortunately, this mission is not free from certain limitations, two of which are especially critical. Firstly, its sensitivity is strongly anisotropic: it senses the north-south component of the mass re-distribution gradient much better than the east-west component. Secondly, it suffers from a trade-off between temporal and spatial resolution: a high (e.g., daily) temporal resolution is only possible if the spatial resolution is sacrificed. To make things even worse, the GRACE satellites enter occasionally a phase when their orbit is characterized by a short repeat period, which makes it impossible to reach a high spatial resolution at all. A way to mitigate limitations of GRACE measurements is to design optimal data processing procedures, so that all available information is fully exploited when modeling mass transport. This implies, in particular, that an unconstrained model directly derived from satellite gravimetry data needs to be optimally filtered. In principle, this can be realized with a Wiener filter, which is built on the basis of covariance matrices of noise and signal. In practice, however, a compilation of both matrices (and, therefore, of the filter itself) is not a trivial task. To build the covariance matrix of noise in a mass transport model, it is necessary to start from a realistic model of noise in the level-1B data. Furthermore, a routine satellite gravimetry data processing includes, in particular, the subtraction of nuisance signals (for instance, associated with atmosphere and ocean), for which appropriate background models are used. Such models are not error-free, which has to be taken into account when the noise covariance matrix is constructed. In addition, both signal and noise covariance matrices depend on the type of mass transport processes under
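The Wiener filtering referred to here scales the unconstrained solution by S(S+N)^{-1}, built from the signal and noise covariance matrices. A minimal dense-matrix sketch (actual GRACE processing works with far larger spherical-harmonic covariance matrices, which is precisely the compilation difficulty the abstract discusses):

```python
import numpy as np

def wiener_filter(x_unc, S, N):
    # Optimal linear filtering of an unconstrained model x_unc given the
    # signal covariance S and noise covariance N:
    #   x_hat = S (S + N)^{-1} x_unc
    # Coefficients with high signal-to-noise pass nearly unchanged;
    # noise-dominated coefficients are damped toward zero.
    return S @ np.linalg.solve(S + N, x_unc)
```

With diagonal covariances this reduces to per-coefficient gains S/(S+N); the full matrices additionally encode the anisotropic (north-south vs east-west) error structure of GRACE solutions.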
Optimal control of CPR procedure using hemodynamic circulation model
Lenhart, Suzanne M.; Protopopescu, Vladimir A.; Jung, Eunok
2007-12-25
A method for determining a chest pressure profile for cardiopulmonary resuscitation (CPR) includes the steps of representing a hemodynamic circulation model based on a plurality of difference equations for a patient, applying an optimal control (OC) algorithm to the circulation model, and determining a chest pressure profile. The chest pressure profile defines a timing pattern of externally applied pressure to a chest of the patient to maximize blood flow through the patient. A CPR device includes a chest compressor, a controller communicably connected to the chest compressor, and a computer communicably connected to the controller. The computer determines the chest pressure profile by applying an OC algorithm to a hemodynamic circulation model based on the plurality of difference equations.
A comparison of motor submodels in the optimal control model
NASA Technical Reports Server (NTRS)
Lancraft, R. E.; Kleinman, D. L.
1978-01-01
Properties of several structural variations in the neuromotor interface portion of the optimal control model (OCM) are investigated. For example, it is known that commanding control-rate introduces an open-loop pole at s = 0 and will generate low frequency phase and magnitude characteristics similar to experimental data. However, this gives rise to unusually high sensitivities with respect to motor and sensor noise-ratios, thereby reducing the model's predictive capabilities. Relationships for different motor submodels are discussed to show the sources of these sensitivities. The models investigated include both pseudo motor-noise and actual (system driving) motor-noise characterizations. The effects of explicit proprioceptive feedback in the OCM are also examined. To show graphically the effects of each submodel on system outputs, sensitivity studies are included and compared to data obtained from other tests.
Model reduction for chemical kinetics: An optimization approach
Petzold, L.; Zhu, W.
1999-04-01
The kinetics of a detailed chemically reacting system can potentially be very complex. Although the chemist may be interested in only a few species, the reaction model almost always involves a much larger number of species. Some of those species are radicals, which are very reactive and can be important intermediaries in the reaction scheme. A large number of elementary reactions can occur among the species; some of these reactions are fast and some are slow. The aim of simplified kinetics modeling is to derive the simplest reaction system which retains the essential features of the full system. An optimization-based method for reducing the number of species and reactions in a chemical kinetics model is described. Numerical results for several reaction mechanisms illustrate the potential of this approach.
Efficient SRAM yield optimization with mixture surrogate modeling
NASA Astrophysics Data System (ADS)
Zhongjian, Jiang; Zuochang, Ye; Yan, Wang
2016-12-01
Largely repeated cells such as SRAM cells usually require an extremely low failure rate to ensure a moderate chip yield. Though fast Monte Carlo methods such as importance sampling and its variants can be used for yield estimation, they are still very expensive if one needs to perform optimization based on such estimations. Yield calculation typically requires a large number of SPICE simulations, and these circuit simulations account for the largest share of the time spent in yield calculation. In this paper, a new method is proposed to address this issue. The key idea is to establish an efficient mixture surrogate model based on the design variables and process variables. The model is constructed by running SPICE simulations to obtain a set of sample points, which are then used to train the mixture surrogate model with the lasso algorithm. Experimental results show that the proposed model calculates yield accurately and brings significant speed-ups to the calculation of failure rate. Based on the model, we further developed an accelerated algorithm to enhance the speed of yield calculation. The approach is suitable for high-dimensional process variables and multi-performance applications.
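A common fast Monte Carlo method of the kind mentioned is mean-shift importance sampling: sample near the failure region and reweight each hit by the likelihood ratio back to the nominal process distribution. A 1-D sketch with an assumed Gaussian process parameter; real SRAM analyses are high-dimensional and SPICE-driven, which is what the surrogate model replaces:

```python
import math
import random

def failure_rate_is(indicator, shift, sigma=1.0, n=20000, seed=0):
    # Estimate P[indicator(x)] for x ~ N(0, sigma) by sampling from the
    # shifted proposal N(shift, sigma) and reweighting each failing sample
    # by the likelihood ratio
    #   p(x)/q(x) = exp((shift^2 - 2*shift*x) / (2*sigma^2)).
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n):
        x = rng.gauss(shift, sigma)
        if indicator(x):
            acc += math.exp((shift * shift - 2.0 * shift * x)
                            / (2.0 * sigma * sigma))
    return acc / n
```

Placing the shift near the most likely failure point makes failures frequent under the proposal, so rare tail probabilities become estimable with far fewer samples than crude Monte Carlo.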
Parameter Optimization for the Gaussian Model of Folded Proteins
NASA Astrophysics Data System (ADS)
Erman, Burak; Erkip, Albert
2000-03-01
Recently, we proposed an analytical model of protein folding (B. Erman, K. A. Dill, J. Chem. Phys, 112, 000, 2000) and showed that this model successfully approximates the known minimum energy configurations of two dimensional HP chains. All attractions (covalent and non-covalent) as well as repulsions were treated as if the monomer units interacted with each other through linear spring forces. Since the governing potential of the linear springs is derived from a Gaussian potential, the model is called the 'Gaussian Model'. The predicted conformations from the model for the hexamer and various 9mer sequences all lie on the square lattice, although the model does not contain information about the lattice structure. Results of predictions for chains with 20 or more monomers also agreed well with the corresponding known minimum energy lattice structures. However, these predicted conformations did not lie exactly on the square lattice. In the present work, we treat the specific problem of optimizing the potentials (the strengths of the spring constants) so that the predictions are in better agreement with the known minimum energy structures.
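The quadratic (Gaussian) energy is what makes the minimum-energy configuration analytically tractable: stationarity of E = 1/2 Σ K_ij (x_i - x_j)^2 gives a graph-Laplacian linear system for each Cartesian coordinate independently. A sketch with purely attractive springs and anchored endpoints (the actual model also carries repulsive terms, which this toy omits):

```python
import numpy as np

def gaussian_chain_minimum(K, anchors):
    # Minimize E = 1/2 * sum_ij K[i,j] * (x_i - x_j)^2 for one coordinate.
    # Setting dE/dx = 0 gives L x = 0 with the weighted graph Laplacian
    # L = diag(K @ 1) - K; fixing a few monomer positions (anchors:
    # {index: position}) removes the trivial collapsed solution.
    n = K.shape[0]
    L = np.diag(K.sum(axis=1)) - K
    free = [i for i in range(n) if i not in anchors]
    b = -np.array([sum(L[i, j] * p for j, p in anchors.items()) for i in free])
    x = np.zeros(n)
    for j, p in anchors.items():
        x[j] = p
    x[free] = np.linalg.solve(L[np.ix_(free, free)], b)
    return x
```

Optimizing the spring constants, as the abstract describes, then amounts to tuning the entries of K so the solutions of these linear systems match the known minimum-energy lattice structures.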
Optimization Model for Web Based Multimodal Interactive Simulations.
Halic, Tansel; Ahn, Woojin; De, Suvranu
2015-07-15
This paper presents a technique for optimizing the performance of web based multimodal interactive simulations. For such applications, where visual quality and simulation performance directly influence user experience, overloading of hardware resources may result in an unsatisfactory reduction in the quality of the simulation and in user satisfaction. However, optimizing simulation performance on each individual hardware platform is not practical. Hence, we present a mixed integer programming model to optimize graphical rendering and simulation performance while satisfying application specific constraints. Our approach includes three distinct phases: identification, optimization and update. In the identification phase, the computing and rendering capabilities of the client device are evaluated using an exploratory proxy code. These data are utilized in conjunction with user specified design requirements in the optimization phase to ensure the best possible computational resource allocation. The optimum solution is used for rendering parameters (e.g. texture size, canvas resolution) and simulation parameters (e.g. simulation domain) in the update phase. Test results are presented on multiple hardware platforms with diverse computing and graphics capabilities to demonstrate the effectiveness of our approach.
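Because the decision variables in such a formulation are small discrete grids, the optimization phase can be sketched as exhaustive search over rendering/simulation settings subject to a frame-time budget. The cost and quality models below are toy stand-ins for the proxy-code measurements and user requirements, not the paper's actual mixed integer program.

```python
from itertools import product

# hypothetical client capability (from the identification phase) and option grids
budget_ms = 16.0                      # per-frame time budget (roughly 60 fps)
texture_sizes = [256, 512, 1024, 2048]
canvas_res = [(640, 480), (1280, 720), (1920, 1080)]
sim_nodes = [500, 1000, 2000]

def frame_cost_ms(tex, res, nodes):
    """Toy cost model standing in for the proxy-code measurements."""
    return 1e-6 * tex + 4e-6 * res[0] * res[1] + 5e-3 * nodes

def quality(tex, res, nodes):
    """Toy quality score; the weights are assumptions, not from the paper."""
    return tex / 2048 + (res[0] * res[1]) / (1920 * 1080) + nodes / 2000

# exhaustive search: best quality among all settings that fit the time budget
best = max(
    (opt for opt in product(texture_sizes, canvas_res, sim_nodes)
     if frame_cost_ms(*opt) <= budget_ms),
    key=lambda opt: quality(*opt),
)
```

With these toy numbers, the maximum configuration exceeds the budget, so the search trades simulation nodes for full texture and canvas resolution.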
Automated Finite Element Modeling of Wing Structures for Shape Optimization
NASA Technical Reports Server (NTRS)
Harvey, Michael Stephen
1993-01-01
The displacement formulation of the finite element method is the most general and most widely used technique for structural analysis of airplane configurations. Modern structural synthesis techniques based on the finite element method have reached a certain maturity in recent years, and large airplane structures can now be optimized with respect to sizing-type design variables for many load cases subject to a rich variety of constraints including stress, buckling, frequency, stiffness and aeroelastic constraints (Refs. 1-3). These structural synthesis capabilities use gradient based nonlinear programming techniques to search for improved designs. For these techniques to be practical, a major improvement was required in the computational cost of the finite element analyses needed repeatedly in the optimization process. Thus, associated with the progress in structural optimization, a new perspective of structural analysis has emerged, namely, structural analysis specialized for design optimization applications, or what is known as "design oriented structural analysis" (Ref. 4). This discipline includes approximation concepts and methods for obtaining behavior sensitivity information (Ref. 1), all needed to make the optimization of large structural systems (modeled by thousands of degrees of freedom and thousands of design variables) practical and cost effective.
H2-optimal control with generalized state-space models for use in control-structure optimization
NASA Technical Reports Server (NTRS)
Wette, Matt
1991-01-01
Several advances are presented for solving combined control-structure optimization problems. The author has extended solutions from H2 optimal control theory to the use of generalized state space models. The generalized state space models preserve the sparsity inherent in finite element models and hence show promise for handling very large problems. Also, expressions for the gradient of the optimal control cost are derived which use the generalized state space models.
Optimized diagnostic model combination for improving diagnostic accuracy
NASA Astrophysics Data System (ADS)
Kunche, S.; Chen, C.; Pecht, M. G.
Identifying the most suitable classifier for diagnostics is a challenging task. In addition to using domain expertise, a trial and error method has been widely used to identify the most suitable classifier. Classifier fusion can be used to overcome this challenge, and it is widely known to perform better than a single classifier. Classifier fusion helps in overcoming the error due to the inductive bias of the various classifiers. The combination rule also plays a vital role in classifier fusion, yet it has not been well studied which combination rules provide the best performance. Good combination rules achieve good generalizability while taking advantage of the diversity of the classifiers. In this work, we develop an approach for ensemble learning based on an optimized combination rule. Generalizability implies the ability of a classifier to learn the underlying model from the training data and to predict unseen observations; it is acknowledged to be a challenge when training a diverse set of classifiers, but it can be achieved through an optimal balance between bias and variance errors, which the combination rule in this paper provides. Cross validation has been employed during the performance evaluation of each classifier to obtain an unbiased performance estimate. An objective function is constructed and optimized based on the performance evaluation to achieve the optimal bias-variance balance; it can be solved as a constrained nonlinear optimization problem, for which Sequential Quadratic Programming based optimization with good convergence properties has been employed. We demonstrate the applicability of the algorithm using support vector machines and neural networks as classifiers, but the methodology is broadly applicable for combining other classifier algorithms as well. The method has been applied to the fault diagnosis of analog circuits. The performance of the proposed
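A stripped-down sketch of an optimized combination rule: a convex weight between two classifiers' probability outputs is chosen to maximize validation accuracy. A grid search over the weight stands in for the paper's Sequential Quadratic Programming step, and all numbers below are invented.

```python
# validation-set labels and two classifiers' predicted probabilities for class 1
# (toy numbers; stand-ins for the cross-validated estimates used in the paper)
labels = [1, 1, 1, 1, 0, 0, 0, 0, 1, 0]
p_svm  = [0.9, 0.8, 0.7, 0.4, 0.2, 0.1, 0.6, 0.3, 0.2, 0.4]
p_nn   = [0.6, 0.4, 0.8, 0.7, 0.4, 0.3, 0.2, 0.1, 0.8, 0.6]

def accuracy(w):
    """Accuracy of the convex combination w*p_svm + (1-w)*p_nn at threshold 0.5."""
    preds = [1 if w * a + (1 - w) * b >= 0.5 else 0 for a, b in zip(p_svm, p_nn)]
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

# search the 1-simplex of weights; w=0 and w=1 recover the single classifiers,
# so the fused rule can never do worse than either alone on the validation set
best_w = max((i / 100 for i in range(101)), key=accuracy)
```

Since the endpoints of the weight grid reproduce the individual classifiers, the optimized rule is guaranteed to match or beat both on the data used for the search; generalization to unseen data is what the cross validation in the paper addresses.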
Multi-Objective Optimization of the Tank Model
NASA Astrophysics Data System (ADS)
Tanakamaru, H.
2002-12-01
The Tank Model is a conceptual rainfall-runoff model developed by Sugawara, which has 16 parameters including 4 initial storage depths. In this study, parameter optimization of the Tank Model against multiple objectives is investigated. The root mean square error and the root mean square of the relative error of the simulated daily runoff hydrograph, which show an obvious trade-off relationship, are adopted as objective functions, and these objectives are minimized under the constraint of a permitted water balance error. The classical weighting method is applied to obtain discrete Pareto optimal solutions of the multi-objective problem: the problem is converted into a single-objective problem by the weighting method, and the SCE-UA single-objective global optimization algorithm (Duan et al., 1992) is applied to solve it. Such a classical method is not suited to approximating the continuous Pareto space, because many single-objective optimization runs (i.e., a huge number of function evaluations) are required to obtain many discrete Pareto solutions. To overcome these difficulties, effective and efficient approaches such as the MOCOM-UA method (Yapo et al., 1998) have been developed. Here, a new simple approach based on a random search algorithm is developed to approximate the entire Pareto space. In this approach, a large number of new parameter sets are generated randomly within parameter ranges formed by the original discrete Pareto solutions, and function evaluations of the generated parameter sets are conducted. After removing solutions that do not satisfy the constraints, non-dominated solutions (Pareto ranking 1) are selected from the generated solutions and the original discrete solutions. A calibration study using hydrological data from the Eigenji Dam Basin, Japan shows that the combination of the weighting method and the random search algorithm is effective and efficient in approximating the entire Pareto space of the multi-objective problem.
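The three-step procedure above (weighting method for discrete Pareto solutions, random search in the box they span, non-dominated filtering) can be sketched on a toy two-objective problem; the quadratic objectives stand in for the two RMSE criteria, and the sample counts are arbitrary.

```python
import random

def f1(x, y):          # stand-in for the RMSE of daily runoff
    return (x - 1) ** 2 + y ** 2

def f2(x, y):          # stand-in for the RMSE of relative error
    return x ** 2 + (y - 1) ** 2

random.seed(1)
# Step 1: weighting method; each weight yields one discrete Pareto solution
discrete = []
for k in range(11):
    w = k / 10
    pts = [(random.uniform(-2, 2), random.uniform(-2, 2)) for _ in range(2000)]
    discrete.append(min(pts, key=lambda p: w * f1(*p) + (1 - w) * f2(*p)))

# Step 2: random search in the parameter box spanned by the discrete solutions
xs = [p[0] for p in discrete]; ys = [p[1] for p in discrete]
pool = discrete + [(random.uniform(min(xs), max(xs)), random.uniform(min(ys), max(ys)))
                   for _ in range(500)]

# Step 3: keep only non-dominated (Pareto ranking 1) solutions
vals = [(f1(*p), f2(*p)) for p in pool]
def dominated(i):
    return any(v[0] <= vals[i][0] and v[1] <= vals[i][1] and v != vals[i] for v in vals)
pareto = [p for i, p in enumerate(pool) if not dominated(i)]
```

The random-search pass densifies the front between the eleven weighting-method solutions at the cost of only cheap function evaluations, which is the efficiency argument made in the abstract.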
Best management practices (BMPs) are perceived as being effective in reducing nutrient loads transported from non-point sources (NPS) to receiving water bodies. The objective of this study was to develop a modeling-optimization framework that can be used by watershed management p...
WE-D-BRE-04: Modeling Optimal Concurrent Chemotherapy Schedules
Jeong, J; Deasy, J O
2014-06-15
Purpose: Concurrent chemo-radiation therapy (CCRT) has become a more common cancer treatment option, with a better tumor control rate for several tumor sites, including head and neck and lung cancer. In this work, possible optimal chemotherapy schedules were investigated by implementing chemotherapy cell-kill into a tumor response model of RT. Methods: The chemotherapy effect has been added into a published model (Jeong et al., PMB (2013) 58:4897), in which the tumor response to RT can be simulated with the effects of hypoxia and proliferation. Based on the two-compartment pharmacokinetic model, the temporal concentration of the chemotherapy agent was estimated. Log cell-kill was assumed, and the cell-kill constant was estimated from the observed increase in local control due to concurrent chemotherapy. For a simplified two-cycle CCRT regime, several different starting times and intervals were simulated with a conventional RT regime (2 Gy/fx, 5 fx/wk). The effectiveness of CCRT was evaluated in terms of the reduction in radiation dose required for 50% control to find the optimal chemotherapy schedule. Results: Assuming the typical slope of the dose response curve (γ50=2), the observed 10% increase in local control rate was evaluated to be equivalent to an extra RT dose of about 4 Gy, from which the cell-kill rate of chemotherapy was derived to be about 0.35. The best response was obtained when chemotherapy was started about 3 weeks after RT began. As the interval between the two cycles decreases, the efficacy of chemotherapy increases with a broader range of optimal starting times. Conclusion: The effect of chemotherapy has been implemented into the resource-conservation tumor response model to investigate CCRT. The results suggest that concurrent chemotherapy might be more effective when delayed for about 3 weeks, due to lower tumor burden and a larger fraction of proliferating cells after reoxygenation.
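The dose-schedule idea can be illustrated with a deliberately simplified survival model: a one-compartment exponential concentration profile (simpler than the paper's two-compartment model) combined with the log cell-kill assumption. The elimination rate and time grid below are assumptions; only the ~0.35 cell-kill rate comes from the abstract.

```python
import math

def survival(doses, ke=0.5, kill=0.35, dt=0.1, t_end=120.0):
    """Surviving fraction under log cell-kill: d(log S)/dt = -kill * C(t),
    with C(t) a sum of exponentially decaying boluses (assumed kinetics)."""
    log_sf = 0.0
    t = 0.0
    while t < t_end:
        c = sum(math.exp(-ke * (t - t0)) for t0 in doses if t >= t0)  # concentration
        log_sf -= kill * c * dt
        t += dt
    return math.exp(log_sf)

one_cycle = survival([0.0])          # single chemotherapy cycle at t = 0
two_cycles = survival([0.0, 24.0])   # second cycle 24 h later
```

Under log cell-kill, each added cycle multiplies the surviving fraction by roughly the same factor, which is why cycle timing relative to reoxygenation (rather than total kill alone) drives the optimal schedule in the full model.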
Modeling and optimization of a hybrid solar combined cycle (HYCS)
NASA Astrophysics Data System (ADS)
Eter, Ahmad Adel
2011-12-01
The main objective of this thesis is to investigate the feasibility of integrating concentrated solar power (CSP) technology with conventional combined cycle technology for electric generation in Saudi Arabia. The generated electricity can be used locally to meet the annually increasing demand; specifically, it can be utilized to meet the demand during the hours of 10 am-3 pm and prevent blackout hours in some industrial sectors. The proposed CSP design gives flexibility in system operation, since it works as a conventional combined cycle during night time and switches to hybrid solar combined cycle operation during day time. The first objective of the thesis is to develop a thermo-economical mathematical model that can simulate the performance of a hybrid solar-fossil fuel combined cycle. The second objective is to develop a computer simulation code that can solve the thermo-economical mathematical model using available software such as EES. The developed simulation code is used to analyze the thermo-economic performance of different configurations integrating CSP with the conventional fossil fuel combined cycle in order to identify the optimal integration configuration. This optimal configuration has been investigated further to determine the optimal design of the solar field and the optimal solar share. Thermo-economical performance metrics available in the literature are used to assess the thermo-economic performance of the investigated configurations, and the economic and environmental impacts of integrating CSP with the conventional fossil fuel combined cycle are estimated and discussed. Finally, the optimal integration configuration is found to be solarization of the steam side of the conventional combined cycle with a solar multiple of 0.38, which requires 29 hectares of solar field and gives an LEC for the HYCS of 63.17 $/MWh under Dhahran weather conditions.
Decentralized optimization across independent decision makers with incomplete models
NASA Astrophysics Data System (ADS)
Inalhan, Gokhan
Following the advances in electronics and communications technology in the last three decades, a new paradigm for large-scale dynamic systems emerged. In this paradigm, groups of independent dynamic systems, such as unmanned air vehicles or spacecraft, act as a cooperative unit for a diverse set of applications in remote sensing, exploration, and imaging. These systems have been envisioned to provide highly flexible and reconfigurable structures that use individual autonomy to respond to changing environments and operations. The main aim of this research has been to design methods and algorithms to enable efficient operations for such large-scale dynamic systems when a centralized decision-maker cannot or does not exist. Towards this end, a decentralized optimization method and a coordination algorithm have been developed. The decentralized optimization framework exploits a structure inherent in the problem formulation in which each decision maker has a mathematical model that captures the local dynamics and interconnecting constraints. A globally convergent algorithm based on sequential local optimizations is presented. Under the assumptions of differentiability and the linear independence constraint qualification, we show that the method results in global convergence to feasible Nash solutions that satisfy the Kuhn-Tucker necessary conditions for Pareto-optimality. Analysis of the second order sufficiency conditions provides insight into structures and solutions with strong local convexity or weak interconnections which guarantee local Pareto-optimality. This methodology is applied to decentralized coordination problems from the aerospace and the operations research fields. We demonstrate the algorithm numerically via a multiple unmanned air vehicle system, with kinematic aircraft models, coordinating in a common airspace with separation requirements between the aircraft. In addition, analytic solutions are provided for decentralized inventory control in simple
Image quality optimization using an x-ray spectra model-based optimization method
NASA Astrophysics Data System (ADS)
Gordon, Clarence L., III
2000-04-01
Several x-ray parameters must be optimized to deliver exceptional fluoroscopic and radiographic x-ray Image Quality (IQ) for the large variety of clinical procedures and patient sizes performed on a cardiac/vascular x-ray system. The optimal choice varies as a function of the objective of the medical exam, the patient size, local regulatory requirements, and the operational range of the system. As a result, many distinct combinations are required to successfully operate the x-ray system and meet the clinical imaging requirements. Presented here is a new, configurable, and automatic method to perform x-ray technique and IQ optimization using an x-ray spectral model based simulation of the x-ray generation and detection system. This method incorporates many aspects and requirements of the clinical environment and a complete description of the specific x-ray system. First, the algorithm requires specific inputs: clinically relevant performance objectives, the system hardware configuration, and the system operational range. Second, the optimization is performed for a Primary Optimization Strategy versus patient thickness, e.g. maximum contrast. Finally, where multiple operating points meet the Primary Optimization Strategy, a Secondary Optimization Strategy, e.g. minimizing patient dose, is utilized to determine the final set of optimal x-ray techniques.
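The primary/secondary strategy pattern can be sketched directly; the candidate techniques and their contrast/dose values below are invented stand-ins for what a spectral-model simulation would produce for one patient thickness.

```python
# candidate x-ray techniques: (kVp, mA, contrast_score, relative_dose)
# illustrative numbers, not outputs of any real spectral model
candidates = [
    (60, 200, 0.80, 1.0),
    (70, 160, 0.85, 1.2),
    (80, 125, 0.85, 0.9),
    (90, 100, 0.75, 0.7),
]

# Primary Optimization Strategy: maximize contrast
best_contrast = max(c for (_, _, c, _) in candidates)
tied = [t for t in candidates if t[2] == best_contrast]

# Secondary Optimization Strategy: among tied operating points, minimize dose
chosen = min(tied, key=lambda t: t[3])
```

Here two techniques tie on contrast, and the secondary strategy breaks the tie in favor of the lower-dose operating point, mirroring the two-stage selection described in the abstract.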
Jayachandran, Devaraj; Laínez-Aguirre, José; Rundell, Ann; Vik, Terry; Hannemann, Robert; Reklaitis, Gintaras; Ramkrishna, Doraiswami
2015-01-01
6-Mercaptopurine (6-MP) is one of the key drugs in the treatment of many pediatric cancers, autoimmune diseases and inflammatory bowel disease. 6-MP is a prodrug, converted to an active metabolite 6-thioguanine nucleotide (6-TGN) through an enzymatic reaction involving thiopurine methyltransferase (TPMT). Pharmacogenomic variation observed in the TPMT enzyme produces significant variation in drug response among the patient population. Despite 6-MP's widespread use and the observed variation in treatment response, efforts at quantitative optimization of dose regimens for individual patients are limited. In addition, research efforts devoted to pharmacogenomics to predict clinical responses are proving far from ideal. In this work, we present a Bayesian population modeling approach to develop a pharmacological model for 6-MP metabolism in humans. In the face of the scarcity of data in clinical settings, a global sensitivity analysis based model reduction approach is used to minimize the parameter space. For accurate estimation of the sensitive parameters, robust optimal experimental design based on the D-optimality criterion was exploited. With the patient-specific model, a model predictive control algorithm is used to optimize the dose scheduling with the objective of maintaining the 6-TGN concentration within its therapeutic window. More importantly, for the first time, we show how the incorporation of information from different levels of the biological chain of response (i.e. gene expression, enzyme phenotype, drug phenotype) plays a critical role in determining the uncertainty in predicting the therapeutic target. The model and the control approach can be utilized in the clinical setting to individualize 6-MP dosing based on the patient's ability to metabolize the drug instead of the traditional standard-dose-for-all approach.
Modeling, hybridization, and optimal charging of electrical energy storage systems
NASA Astrophysics Data System (ADS)
Parvini, Yasha
The rising rate of global energy demand alongside dwindling fossil fuel resources has motivated research into alternative and sustainable solutions. Within this area of research, electrical energy storage systems are pivotal in applications including electrified vehicles, renewable power generation, and electronic devices. The approach of this dissertation is to elucidate the bottlenecks of integrating supercapacitors and batteries into energy systems and to propose solutions by means of modeling, control, and experimental techniques. In the first step, the supercapacitor cell is modeled in order to gain a fundamental understanding of its electrical and thermal dynamics. The dependence of the electrical parameters on state of charge (SOC), current direction and magnitude (20-200 A), and temperatures ranging from -40°C to 60°C was embedded in this computationally efficient model. The coupled electro-thermal model was parameterized using specifically designed temporal experiments and then validated by the application of real world duty cycles. Driving range is one of the major challenges of electric vehicles compared to combustion vehicles. In order to shed light on the benefits of hybridizing a lead-acid driven electric vehicle via supercapacitors, a model was parameterized for the lead-acid battery and combined with the model already developed for the supercapacitor to build the hybrid battery-supercapacitor model. A hardware-in-the-loop (HIL) setup consisting of a custom built DC/DC converter, a micro-controller (muC) to implement the power management strategy, a 12V lead-acid battery, and a 16.2V supercapacitor module was built to perform the validation experiments. The goal of charging electrical energy storage systems in an efficient and quick manner motivated an optimal control problem with the objective of maximizing the charging efficiency for supercapacitors, lead-acid, and lithium ion batteries. Pontryagin's minimum principle was used to solve the problems
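One classical result behind optimal charging is easy to verify numerically: for a fixed series resistance and a fixed total charge, a constant-current profile minimizes the I^2*R loss (a consequence of the Cauchy-Schwarz inequality), which is the kind of conclusion the optimal control formulation recovers for the simplest battery model. The resistance, charge target, and profiles below are assumed numbers for illustration.

```python
def resistive_loss(currents, dt=1.0, r=0.05):
    """Total I^2*R loss of a charging current profile over fixed time steps."""
    return sum(r * i * i * dt for i in currents)

target_charge = 100.0          # ampere-seconds to deliver
n = 20                         # number of time steps
# constant-current profile vs an arbitrary varying profile with the same total charge
constant = [target_charge / n] * n
varying = [10.0, 0.0] * (n // 2)           # pulsed profile, also 100 A*s in total
loss_const = resistive_loss(constant)
loss_vary = resistive_loss(varying)
```

For SOC-dependent resistance or thermal coupling, as in the dissertation's models, the optimal profile is no longer constant, which is what makes the Pontryagin formulation necessary.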
Qualitative optimization of image processing systems using random set modeling
NASA Astrophysics Data System (ADS)
Kelly, Patrick A.; Derin, Haluk; Vaidya, Priya G.
2000-08-01
Many decision-making systems involve image processing that converts input sensor data into output images having desirable features. Typically, the system user selects some processing parameters. The processor together with the input image can then be viewed as a system that maps the processing parameters into output features. However, the most significant output features often are not numerical quantities, but instead are subjective measures of image quality. It can be a difficult task for a user to find the processing parameters that give the 'best' output. We wish to automate this qualitative optimization task. The key to this is incorporating linguistic operating rules and qualitative output parameters into a numerical optimization scheme. In this paper, we use the test system of input parameter selection for 2D Wiener filtering to restore noisy and blurred images. Operating rules represented with random sets are used to generate a nominal input-output system model, which is then used to select initial Wiener filter input parameters. When the nominally optimal Wiener filter is applied to an observed image, the operator's assessment of output image quality is used in an adaptive filtering algorithm to adjust the model and select new input parameters. Tests on several images have confirmed that with a few such iterations, a significant improvement in output quality is achieved.
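The adjust-and-reassess loop can be caricatured in one dimension: a Wiener-style shrinkage restoration with a single parameter K, where a numeric error against a known clean signal stands in for the operator's subjective quality assessment (the signal, noise level, and search grid are all invented).

```python
import random

random.seed(2)
clean = [1.0 if 20 <= i < 40 else 0.0 for i in range(60)]       # known test signal
noisy = [x + random.gauss(0, 0.5) for x in clean]               # observed signal
mean = sum(noisy) / len(noisy)
var = sum((y - mean) ** 2 for y in noisy) / len(noisy)

def restore(K):
    """Wiener-style shrinkage toward the mean; K plays the role of the
    noise-to-signal parameter an operator would tune (K = 0 leaves data unchanged)."""
    return [mean + (y - mean) * var / (var + K) for y in noisy]

def mse(est):
    # numeric stand-in for the operator's qualitative quality assessment
    return sum((a - b) ** 2 for a, b in zip(est, clean)) / len(clean)

# iterate over candidate parameters, keeping the one the "assessment" prefers
best_K = min((k * 0.05 for k in range(41)), key=lambda K: mse(restore(K)))
```

In the paper the assessment is genuinely subjective and the parameter update goes through the random-set model rather than a direct grid search, but the feedback structure is the same.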
Optimal aeroassisted coplanar orbital transfer using an energy model
NASA Technical Reports Server (NTRS)
Halyo, Nesim; Taylor, Deborah B.
1989-01-01
The atmospheric portion of the trajectories for aeroassisted coplanar orbit transfer was investigated. The equations of motion for the problem are expressed using a reduced-order model with total vehicle energy, kinetic plus potential, as the independent variable rather than time. The order reduction is achieved analytically, without an approximation of the vehicle dynamics. In this model, the problem of coplanar orbit transfer is seen as one in which a given amount of energy must be transferred from the vehicle to the atmosphere during the trajectory without overheating the vehicle. An optimal control problem is posed in which a linear combination of the integrated square of the heating rate and the vehicle drag is the cost function to be minimized. The necessary conditions for optimality are obtained; these result in a 4th order two-point boundary-value problem. A parametric study of the optimal guidance trajectory is made in which the proportion of the heating rate term versus the drag term is varied. Simulations of the guidance trajectories are presented.
Optimizing medical resources for spaceflight using the integrated medical model.
Minard, Charles G; de Carvalho, Mary Freire; Iyengar, M Sriram
2011-09-01
Efficient allocation of medical resources for spaceflight is important for crew health. The Integrated Medical Model (IMM) was developed to estimate medical event occurrences, mitigation, and resource requirements. An optimization module was created for IMM that uses a systematic process of elimination and preservation to maximize crew health outcomes subject to resource constraints. A maximum medical kit is identified and resources are eliminated according to their relative impact on outcomes of interest. Additional steps allow opportunities for resources to be added back into the medical kit if possible. The effectiveness of the module is demonstrated under six alternative mission profiles by optimizing the medical kit to maximize the expected Crew Health Index (CHI), and comparisons are made with minimum and maximum kits. The optimum and maximum kits had similar expected CHI, but CHI was more variable for the optimum kit. The maximum kit resulted in the best outcomes, but required at least 13.7 times the mass of the optimum kit and 26.6 times the volume. The largest difference in mean CHI between the optimum and maximum kits occurred for four crewmembers on a 180-d mission (91.1% vs. 95.4%). The optimization module may be used as an objective tool to assist with the efficient allocation of medical resources for spaceflight. The module provides a flexible algorithm that may be used in conjunction with the IMM model to assist in medical kit requirements and design.
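The eliminate-then-restore loop described above can be sketched with a toy kit; the item names, masses, and CHI contributions are invented, and a simple benefit-per-kilogram ranking stands in for the module's relative-impact ordering.

```python
# hypothetical resources: (name, mass_kg, expected CHI contribution if included)
resources = [
    ("bandages", 0.5, 1.2),
    ("analgesics", 0.3, 1.0),
    ("defibrillator", 8.0, 2.0),
    ("iv_fluids", 4.0, 1.5),
    ("splint", 1.0, 0.8),
]
mass_budget = 6.0

# start from the maximum kit, eliminate by lowest benefit-per-mass until feasible
kit = sorted(resources, key=lambda r: r[2] / r[1], reverse=True)
while sum(r[1] for r in kit) > mass_budget:
    kit.pop()                                   # drop least impactful item per kg

# add-back pass: try to restore eliminated items that still fit the budget
for r in resources:
    if r not in kit and sum(x[1] for x in kit) + r[1] <= mass_budget:
        kit.append(r)

kit_mass = sum(r[1] for r in kit)
kit_chi = sum(r[2] for r in kit)
```

With these numbers the heavy defibrillator is eliminated and cannot be restored, illustrating why the optimum kit's expected CHI approaches, but does not reach, the maximum kit's.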
Modeling the Drosophila melanogaster circadian oscillator via phase optimization.
Bagheri, Neda; Lawson, Michael J; Stelling, Jörg; Doyle, Francis J
2008-12-01
The circadian clock, which coordinates daily physiological behaviors of most organisms, maintains endogenous (approximately 24 h) cycles and simultaneously synchronizes to the 24-h environment due to its inherent robustness to environmental perturbations coupled with a sensitivity to specific environmental stimuli. In this study, the authors develop a detailed mathematical model that characterizes the Drosophila melanogaster circadian network. This model incorporates the transcriptional regulation of period, timeless, vrille, PAR-domain protein 1, and clock gene and protein counterparts. The interlocked positive and negative feedback loops that arise from these clock components are described primarily through mass-action kinetics (with the exception of regulated gene expression) and without the use of explicit time delays. System parameters are estimated via a genetic algorithm-based optimization of a cost function that relies specifically on circadian phase behavior since amplitude measurements are often noisy and do not account for the unique entrainment features that define circadian oscillations. Resulting simulations of this 29-state ordinary differential equation model comply with fitted wild-type experimental data, demonstrating accurate free-running (23.24-h periodic) and entrained (24-h periodic) circadian dynamics. This model also predicts unfitted mutant phenotype behavior by illustrating short and long periodicity, robust oscillations, and arrhythmicity. This mechanistic model also predicts light-induced circadian phase resetting (as described by the phase-response curve) that is in line with experimental observations.
Martian Radiative Transfer Modeling Using the Optimal Spectral Sampling Method
NASA Technical Reports Server (NTRS)
Eluszkiewicz, J.; Cady-Pereira, K.; Uymin, G.; Moncet, J.-L.
2005-01-01
The large volume of existing and planned infrared observations of Mars has prompted the development of a new martian radiative transfer model that could be used in the retrievals of atmospheric and surface properties. The model is based on the Optimal Spectral Sampling (OSS) method [1]. The method is a fast and accurate monochromatic technique applicable to a wide range of remote sensing platforms (from microwave to UV) and was originally developed for the real-time processing of infrared and microwave data acquired by instruments aboard the satellites forming part of the next-generation global weather satellite system NPOESS (National Polar-orbiting Operational Environmental Satellite System) [2]. As part of our ongoing research related to the radiative properties of the martian polar caps, we have begun the development of a martian OSS model with the goal of using it to perform the self-consistent atmospheric corrections necessary to retrieve cap emissivity from Thermal Emission Spectrometer (TES) spectra. While the caps will provide the initial focus area for applying the new model, it is hoped that the model will be of interest to the wider Mars remote sensing community.
Endovascular magnetically guided robots: navigation modeling and optimization.
Arcese, Laurent; Fruchard, Matthieu; Ferreira, Antoine
2012-04-01
This paper deals with the benefits of using a nonlinear model-based approach for controlling magnetically guided therapeutic microrobots in the cardiovascular system. Such robots, used for minimally invasive interventions, consist of a polymer-bound aggregate of nanosized ferromagnetic particles functionalized by drug-conjugated micelles. The proposed modeling addresses wall effects (blood velocity in minor and major vessel bifurcations, pulsatile blood flow and vessel walls, and the effect of robot-to-vessel diameter ratio), wall interactions (contact, van der Waals, electrostatic, and steric forces), the non-Newtonian behavior of blood, and different driving designs as well. Although nonlinear and detailed, the resulting model can both be exploited to improve the targeting ability and be controlled in closed loop using nonlinear control theory tools. In particular, we infer from the model an optimization of both the designs and the reference trajectory to minimize the control efforts. Efficiency and robustness to noise and model parameter uncertainties are then illustrated through simulation results for a bead-pulled robot of radius 250 μm in a small artery.
Optimizing nanomedicine pharmacokinetics using physiologically based pharmacokinetics modelling.
Moss, Darren Michael; Siccardi, Marco
2014-09-01
The delivery of therapeutic agents is characterized by numerous challenges including poor absorption, low penetration in target tissues and non-specific dissemination in organs, leading to toxicity or poor drug exposure. Several nanomedicine strategies have emerged as an advanced approach to enhance drug delivery and improve the treatment of several diseases. Numerous processes mediate the pharmacokinetics of nanoformulations, with the absorption, distribution, metabolism and elimination (ADME) being poorly understood and often differing substantially from traditional formulations. Understanding how nanoformulation composition and physicochemical properties influence drug distribution in the human body is of central importance when developing future treatment strategies. A helpful pharmacological tool to simulate the distribution of nanoformulations is represented by physiologically based pharmacokinetics (PBPK) modelling, which integrates system data describing a population of interest with drug/nanoparticle in vitro data through a mathematical description of ADME. The application of PBPK models for nanomedicine is in its infancy and characterized by several challenges. The integration of property-distribution relationships in PBPK models may benefit nanomedicine research, giving opportunities for innovative development of nanotechnologies. PBPK modelling has the potential to improve our understanding of the mechanisms underpinning nanoformulation disposition and allow for more rapid and accurate determination of their kinetics. This review provides an overview of the current knowledge of nanomedicine distribution and the use of PBPK modelling in the characterization of nanoformulations with optimal pharmacokinetics. © 2014 The British Pharmacological Society.
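A PBPK model ultimately reduces to a set of coupled mass-balance ODEs over compartments. The sketch below is a deliberately minimal two-compartment version with invented rate constants; a real nanomedicine PBPK model would use organ-level compartments with parameters derived from physiology and in vitro data.

```python
import numpy as np

# Hypothetical first-order rate constants (1/h); a real PBPK model derives
# these from organ volumes, blood flows, and formulation-specific partitioning.
k12, k21, kel = 0.5, 0.3, 0.1
dose = 100.0                      # amount in the central compartment at t = 0
dt, t_end = 0.01, 24.0

c = np.array([dose, 0.0])         # [central, peripheral] amounts
history = [c.copy()]
for _ in range(int(t_end / dt)):  # explicit Euler integration
    dc = np.array([
        -k12 * c[0] + k21 * c[1] - kel * c[0],   # central: exchange + elimination
        k12 * c[0] - k21 * c[1],                 # peripheral: exchange only
    ])
    c = c + dt * dc
    history.append(c.copy())
history = np.array(history)       # concentration-time profiles
```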
Optimal allocation of computational resources in hydrogeological models under uncertainty
NASA Astrophysics Data System (ADS)
Moslehi, Mahsa; Rajagopal, Ram; de Barros, Felipe P. J.
2015-09-01
Flow and transport models in heterogeneous geological formations are usually large-scale with excessive computational complexity and uncertain characteristics. Uncertainty quantification for predicting subsurface flow and transport often entails utilizing a numerical Monte Carlo framework, which repeatedly simulates the model according to a random field parameter representing hydrogeological characteristics of the aquifer. The physical resolution (e.g. spatial grid resolution) for the simulation is customarily chosen based on recommendations in the literature, independent of the number of Monte Carlo realizations. This practice may lead to either excessive computational burden or inaccurate solutions. We develop an optimization-based methodology that considers the trade-off between the following conflicting objectives: time associated with computational costs, statistical convergence of the model prediction and physical errors corresponding to numerical grid resolution. Computational resources are allocated by considering the overall error based on a joint statistical-numerical analysis and optimizing the error model subject to a given computational constraint. The derived expression for the overall error explicitly takes into account the joint dependence between the discretization error of the physical space and the statistical error associated with Monte Carlo realizations. The performance of the framework is tested against computationally extensive simulations of flow and transport in spatially heterogeneous aquifers. Results show that modelers can achieve optimum physical and statistical resolutions while keeping a minimum error for a given computational time. The physical and statistical resolutions obtained through our analysis yield lower computational costs when compared to the results obtained with prevalent recommendations in the literature. Lastly, we highlight the significance of the geometrical characteristics of the contaminant source zone on the
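The trade-off the authors optimize can be illustrated with an assumed error model: a discretization error that grows with grid spacing h, a statistical error that shrinks with the number of Monte Carlo realizations N, and a fixed computational budget linking the two. All coefficients below are illustrative, not the paper's.

```python
import numpy as np

# Assumed overall error: discretization term a*h^p plus Monte Carlo
# (statistical) term b/sqrt(N). Coefficients are made up for illustration.
a, p = 2.0, 2.0                        # spatial error ~ a * h^p
b = 1.0                                # statistical error ~ b / sqrt(N)
cost_per_run = lambda h: 1.0 / h**3    # cost of one realization at spacing h
budget = 1e6                           # total CPU-time budget (arbitrary units)

hs = np.linspace(0.01, 0.5, 200)
# Spend the whole budget: number of realizations affordable at each resolution.
Ns = np.maximum(budget / cost_per_run(hs), 1.0)
total_error = a * hs**p + b / np.sqrt(Ns)

i = np.argmin(total_error)
h_opt, N_opt = hs[i], int(Ns[i])
# Refining h too far starves the Monte Carlo sampling; the optimum is interior.
```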
Traveling waves in an optimal velocity model of freeway traffic
NASA Astrophysics Data System (ADS)
Berg, Peter; Woods, Andrew
2001-03-01
Car-following models provide both a tool to describe traffic flow and algorithms for autonomous cruise control systems. Recently developed optimal velocity models contain a relaxation term that assigns a desirable speed to each headway and a response time over which drivers adjust to optimal velocity conditions. These models predict traffic breakdown phenomena analogous to real traffic instabilities. In order to deepen our understanding of these models, in this paper we examine the transition from a linearly stable stream of cars of one headway into a linearly stable stream of a second headway. Numerical results of the governing equations identify a range of transition phenomena, including monotonic and oscillating traveling waves and a time-dependent dispersive adjustment wave. However, for certain conditions, we find that the adjustment takes the form of a nonlinear traveling wave from the upstream headway to a third, intermediate headway, followed by either another traveling wave or a dispersive wave further downstream matching the downstream headway. This intermediate value of the headway is selected such that the nonlinear traveling wave is the fastest stable traveling wave which is observed to develop in the numerical calculations. The development of these nonlinear waves, connecting linearly stable flows of two different headways, is somewhat reminiscent of stop-start waves in congested flow on freeways. The different types of adjustments are classified in a phase diagram depending on the upstream and downstream headway and the response time of the model. The results have profound consequences for autonomous cruise control systems. For an autocade of both identical and different vehicles, the control system itself may trigger formations of nonlinear, steep wave transitions. Further information is available [Y. Sugiyama, Traffic and Granular Flow (World Scientific, Singapore, 1995), p. 137].
Monte Carlo modeling and optimization of buffer gas positron traps
NASA Astrophysics Data System (ADS)
Marjanović, Srđan; Petrović, Zoran Lj
2017-02-01
Buffer gas positron traps have been used for over two decades as the prime source of slow positrons enabling a wide range of experiments. While their performance has been well understood through empirical studies, no theoretical attempt has been made to quantitatively describe their operation. In this paper we apply standard models, as developed for the physics of low-temperature collision-dominated plasmas or the physics of swarms, to model the basic performance and principles of operation of gas-filled positron traps. The Monte Carlo model is equipped with the best available set of cross sections, which were mostly derived experimentally by using the same type of traps that are being studied. Our model represents, in realistic geometry and fields, the development of the positron ensemble from the initial beam provided by the solid neon moderator through voltage drops between the stages of the trap and through different pressures of the buffer gas. The first two stages employ excitation of N2 with acceleration of the order of 10 eV, so that the trap operates under conditions where excitation of the nitrogen reduces the energy of the initial beam to trap the positrons without giving them a chance to be annihilated following positronium formation. The energy distribution function develops from the assumed distribution leaving the moderator; it is accelerated by the voltage drops and forms beams at several distinct energies. In the final stages, the low energy loss collisions (vibrational excitation of CF4 and rotational excitation of N2) control the approach of the distribution function to a Maxwellian at room temperature, but multiple non-Maxwellian groups persist throughout most of the thermalization. Optimization of the efficiency of the trap may be achieved by changing the pressure and voltage drops, and also by selecting to operate in a two-stage mode. The model allows quantitative comparisons and tests of optimization, as well as the development of other properties.
Optimizing a canine survival model of orthotopic lung transplantation.
Farivar, A S; Yunusov, M Y; Chen, P; Leone, R J; Madtes, D K; Kuhr, C S; Spector, M R; Abrams, K; Hwang, B; Nash, R A; Mulligan, M S
2006-06-01
While acute models of orthotopic lung transplantation have been described in dogs, the technical considerations of developing a survival model in this species have not been elaborated. Herein, we describe the optimization of a canine survival model of orthotopic lung transplantation. Protocols of orthotopic left lung transplantation and single lung ventilation were established in acute experiments (n=9). Four dogs, serving as controls, received autologous, orthotopic lung transplants. Allogeneic transplants were performed in 16 DLA-identical and 16 DLA-mismatched unrelated recipient dogs. Selective right lung ventilation was utilized in all animals. A Malecot tube was left in the pleural space connected to a Heimlich valve for up to 24 hours. To date, animals have been followed up to 24 months by chest radiography, pulmonary function tests, bronchoscopy with lavage, and open biopsies. Long-term survival was achieved in 34/36 animals. Two recipients died intraoperatively secondary to cardiac arrest. All animals were extubated on the operating table, and in all cases the chest tube was removed within 24 hours. Major complications included thrombosis of the pulmonary artery and subcritical stenosis of the bronchial anastomosis. One recipient underwent successful treatment of a small bowel intussusception. We report our experience in developing a survival canine model of orthotopic single lung transplantation. While short-term survival following canine lung transplantation is achievable, we report particular considerations that facilitate animal comfort, early extubation, and lung reexpansion in the immediate postoperative period, further optimizing use of this species for experimental modeling of long-term complications after lung transplantation.
Multisource modeling of flattening filter free (FFF) beam and the optimization of model parameters
Cho, Woong; Kielar, Kayla N.; Mok, Ed; Xing, Lei; Park, Jeong-Hoon; Jung, Won-Gyun; Suh, Tae-Suk
2011-01-01
Purpose: With the introduction of flattening filter free (FFF) linear accelerators to radiation oncology, new analytical source models for an FFF beam applicable to current treatment planning systems are needed. In this work, a multisource model for the FFF beam was designed and the involved model parameters were optimized. Methods: The model is based on a previous three-source model proposed by Yang [“A three-source model for the calculation of head scatter factors,” Med. Phys. 29, 2024–2033 (2002)]. An off-axis ratio (OAR) of photon fluence was introduced to the primary source term to generate cone-shaped profiles. The parameters of the source model were determined from measured head scatter factors using a line search optimization technique. The OAR of the photon fluence was determined from a measured dose profile of a 40×40 cm2 field size with the same optimization technique, but a new method to acquire gradient terms for OARs was developed to enhance the speed of the optimization process. The improved model was validated with measured dose profiles from 3×3 to 40×40 cm2 field sizes at 6 and 10 MV from a TrueBeam™ STx linear accelerator. Furthermore, planar dose distributions for clinically used radiation fields were also calculated and compared to measurements from a 2D array detector using the gamma index method. Results: All dose values for the calculated profiles agreed with the measured dose profiles within 0.5% for the 6 and 10 MV beams, except for some low-dose regions for larger field sizes. A slight overestimation was seen in the lower penumbra region near the field edge for the large field sizes, by 1%–4%. The planar dose calculations showed comparable passing rates (>98%) when the criterion of the gamma index method was selected to be 3%/3 mm. Conclusions: The developed source model showed good agreement between measured and calculated dose distributions. The model is easily applicable to any other linear accelerator using FFF beams as the
Regional optimization model for locating supplemental recycling depots.
Lin, Hung-Yueh; Chen, Guan-Hwa
2009-05-01
In Taiwan, vendors and businesses that sell products belonging to six classes of recyclable materials are required to provide recycling containers at their local retail stores. The integration of these private sector facilities with the recycling depots established by local authorities has the potential to significantly improve residential access to the recycling process. An optimization model is accordingly developed in this work to assist local authorities with the identification of regions that require additional recycling depots for better access and integration with private facilities. Spatial accessibility, population loading and integration efficiency indicators are applied to evaluate whether or not a geographic region is in need of new recycling depots. The program developed here uses a novel algorithm to obtain the optimal solution by a complete enumeration of all cells making up the study area. A case study of a region in Central Taiwan is presented to demonstrate the use of the proposed model and the three indicators. The case study identifies regions without recycling points, prioritizes them based on population density, and considers the option of establishing recycling centers that are able to collect multiple classes of recycling materials. The model is able to generate information suitable for the consideration of decision-makers charged with prioritizing the installation of new recycling facilities.
The role of optimization in structural model refinement
NASA Technical Reports Server (NTRS)
Lehman, L. L.
1984-01-01
To evaluate the role that optimization can play in structural model refinement, it is necessary to examine the existing environment for the structural design/structural modification process. The traditional approach to design, analysis, and modification is illustrated. Typically, a cyclical path is followed in evaluating and refining a structural system, with parallel paths existing between the real system and the analytical model of the system. The major failing of the existing approach is the rather weak link of communication between the cycle for the real system and the cycle for the analytical model. Only at the expense of much human effort can data sharing and comparative evaluation be enhanced for the two parallel cycles. Much of the difficulty can be traced to the lack of a user-friendly, rapidly reconfigurable engineering software environment for facilitating data and information exchange. Until this type of software environment becomes readily available to the majority of the engineering community, the role of optimization will not be able to reach its full potential and engineering productivity will continue to suffer. A key issue in current engineering design, analysis, and test is the definition and development of an integrated engineering software support capability. The data and solution flow for this type of integrated engineering analysis/refinement system is shown.
Stochastic optimization algorithm for inverse modeling of air pollution
NASA Astrophysics Data System (ADS)
Yeo, Kyongmin; Hwang, Youngdeok; Liu, Xiao; Kalagnanam, Jayant
2016-11-01
A stochastic optimization algorithm to estimate a smooth source function from a limited number of observations is proposed in the context of air pollution, where the source-receptor relation is given by an advection-diffusion equation. First, a smooth source function is approximated by a set of Gaussian kernels on a rectangular mesh system. Then, the generalized polynomial chaos (gPC) expansion is used to represent the model uncertainty due to the choice of the mesh system. It is shown that the convolution of the gPC basis and the Gaussian kernel provides hierarchical basis functions for a spectral function estimation. The spectral inverse model is formulated as a stochastic optimization problem. We propose a regularization strategy based on the hierarchical nature of the basis polynomials. It is shown that the spectral inverse model is capable of providing a good estimate of the source function even when the number of unknown parameters (m) is much larger than the number of data (n), m/n > 50.
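The kernel-based source representation can be illustrated in a deterministic setting: approximate a smooth 1-D source by Gaussian kernels on a mesh and recover the kernel weights from a handful of observations via Tikhonov-regularized least squares. This omits the paper's gPC treatment of mesh uncertainty and its stochastic formulation; all values below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

# Kernel centers on a 1-D mesh; the source is a weighted sum of Gaussians.
centers = np.linspace(0.0, 10.0, 25)
width = 0.8

def kernels(x):
    # Design matrix: one Gaussian kernel per mesh node, evaluated at x.
    return np.exp(-((x[:, None] - centers[None, :]) / width) ** 2)

true_w = np.exp(-((centers - 4.0) ** 2))       # smooth "true" source weights
x_obs = rng.uniform(0.0, 10.0, 12)             # only n = 12 observations
y_obs = kernels(x_obs) @ true_w + rng.normal(0.0, 0.01, 12)

# m = 25 unknowns > n = 12 data points, so regularization is what makes
# the inversion well posed (a plain-Tikhonov stand-in for the paper's
# hierarchy-based strategy).
A = kernels(x_obs)
lam = 1e-2
w_hat = np.linalg.solve(A.T @ A + lam * np.eye(25), A.T @ y_obs)

x_grid = np.linspace(0.0, 10.0, 200)
recon = kernels(x_grid) @ w_hat                # reconstructed source
```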
Modelling the Fermilab Collider to determine optimal running
McCrory, E.
1994-12-01
A Monte Carlo-type model of the Fermilab Collider has been constructed, the goal of which is to accurately represent the operation of the Collider, incorporating the aspects of the facility which affect operations in order to determine how to run optimally. In particular, downtime for the various parts of the complex is parameterized and included. Also, transfer efficiencies, emittance growths, changes in the luminosity lifetime, and other effects are included and randomized in a reasonable manner. This Memo is an outgrowth of TM-1878, which presented an entirely analytical model of the Collider. That model produced a framework for developing intuition on the way in which the major components of the collider affect the luminosity, such as the stacking rate and the shot set-up time. However, without accurately including downtime effects, it is not possible to say with certainty that the analytical approach can produce accurate guidelines for optimizing the performance of the Collider. This is the goal of this analysis. We first discuss the way the model is written, describing the object-oriented approach taken in C++. The parameters of the simulation are described. Then the potential criteria for ending stores are described and analyzed. Next, a typical store and a typical week are derived. Then, a final conclusion on the best end-of-store criterion is made. Finally, ideas for future analysis are presented.
Optimal control model of arm configuration in a reaching task
NASA Astrophysics Data System (ADS)
Yamaguchi, Gary T.; Kakavand, Ali
1996-05-01
It was hypothesized that the configuration of the upper limb during a hand static positioning task could be predicted using a dynamic musculoskeletal model and an optimal control routine. Both rhesus monkey and human upper extremity models were formulated, and had seven degrees of freedom (7-DOF) and 39 musculotendon pathways. A variety of configurations were generated about a physiologically measured configuration using the dynamic models and perturbations. The pseudoinverse optimal control method was applied to compute the minimum cost C at each of the generated configurations. Cost function C is described by the Crowninshield-Brand (1981) criterion which relates C (the sum of muscle stresses squared) to the endurance time of a physiological task. The configuration with the minimum cost was compared to the configurations chosen by one monkey (four trials) and by eight human subjects (eight trials each). Results are generally good, but not for all joint angles, suggesting that muscular effort is likely to be one major factor in choosing a preferred static arm posture.
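A sketch of the pseudoinverse optimal control step named above: minimizing the sum of squared muscle stresses subject to joint-torque equality constraints has the closed-form weighted-pseudoinverse solution used below. The moment-arm matrix and PCSA values are invented for illustration, and muscle-force nonnegativity (muscles cannot push) is ignored for brevity.

```python
import numpy as np

# Toy planar joint: 2 torque equations, 5 muscles (a redundant system).
# Moment-arm matrix R (m) and physiological cross-sectional areas (cm^2)
# are illustrative, not measured values.
R = np.array([[0.03, -0.02, 0.04, 0.00, -0.03],
              [0.00,  0.03, 0.01, 0.05, -0.02]])
pcsa = np.array([10.0, 6.0, 8.0, 12.0, 5.0])
tau = np.array([4.0, 2.0])            # required joint torques (N m)

# Minimize sum_i (f_i / pcsa_i)^2 subject to R f = tau. With
# W = diag(pcsa^2), the minimizer is f = W R^T (R W R^T)^{-1} tau.
W = np.diag(pcsa ** 2)
f = W @ R.T @ np.linalg.solve(R @ W @ R.T, tau)
stress_cost = np.sum((f / pcsa) ** 2)  # Crowninshield-Brand-style cost (squared)
```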
NASA Astrophysics Data System (ADS)
Kanai, Masahiro; Isojima, Shin; Nishinari, Katsuhiro; Tokihiro, Tetsuji
2009-05-01
In this paper, we propose the ultradiscrete optimal velocity model, a cellular-automaton model for traffic flow, obtained by applying the ultradiscrete method to the optimal velocity model. The optimal velocity model, defined by a differential equation, is one of the most important traffic models; in particular, it successfully reproduces the instability of high-flux traffic. It is often pointed out that there is a close relation between the optimal velocity model and the modified Korteweg-de Vries (mKdV) equation, a soliton equation. Meanwhile, the ultradiscrete method enables one to reduce soliton equations to cellular automata which inherit the solitonic nature, such as an infinite number of conservation laws and soliton solutions. We find that the theory of soliton equations is available for generic differential equations, and the simulation results reveal that the model obtained reproduces both absolutely unstable and convectively unstable flows as well as the optimal velocity model.
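The exact ultradiscrete OV update rule is not reproduced here; as a stand-in, the sketch below runs Rule 184, the simplest traffic cellular automaton, which shares the cell-and-step structure of such models and exhibits free flow below the critical density 1/2.

```python
import numpy as np

rng = np.random.default_rng(2)

# Rule-184 cellular automaton: a car advances one cell per step iff the
# cell ahead is empty (periodic road). This is NOT the ultradiscrete OV
# model itself, only the simplest member of the same CA traffic family.
n_cells, steps = 100, 200
road = rng.random(n_cells) < 0.3        # ~30% initial car density
density = road.mean()                   # cars are conserved by the update

flow = 0
for _ in range(steps):
    ahead = np.roll(road, -1)
    moving = road & ~ahead              # movers: car with empty cell ahead
    road = (road & ahead) | np.roll(moving, 1)
    flow += moving.sum()

mean_flow = flow / (steps * n_cells)
# Below the critical density 1/2, the steady-state flow equals the density.
```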
Multi-model groundwater-management optimization: reconciling disparate conceptual models
NASA Astrophysics Data System (ADS)
Timani, Bassel; Peralta, Richard
2015-09-01
Disagreement among policymakers often involves policy issues and differences between the decision makers' implicit utility functions. Significant disagreement can also exist concerning conceptual models of the physical system. Disagreement on the validity of a single simulation model delays discussion on policy issues and prevents the adoption of consensus management strategies. For such a contentious situation, the proposed multi-conceptual model optimization (MCMO) can help stakeholders reach a compromise strategy. MCMO computes mathematically optimal strategies that simultaneously satisfy analogous constraints and bounds in multiple numerical models that differ in boundary conditions, hydrogeologic stratigraphy, and discretization. Shadow prices and trade-offs guide the process of refining the first MCMO-developed multi-model strategy into a realistic compromise management strategy. By employing automated cycling, MCMO is practical for linear and nonlinear aquifer systems. In this reconnaissance study, MCMO application to the multilayer Cache Valley (Utah and Idaho, USA) river-aquifer system employs two simulation models with analogous background conditions but different vertical discretization and boundary conditions. The objective is to maximize additional safe pumping (beyond current pumping), subject to constraints on groundwater head and seepage from the aquifer to surface waters. MCMO application reveals that in order to protect the local ecosystem, increased groundwater pumping can satisfy only 40% of the projected water demand increase. To explore the possibility of increasing that pumping while protecting the ecosystem, MCMO clearly identifies localities requiring additional field data. MCMO is applicable to other areas and optimization problems than those used here. Steps to prepare comparable sub-models for MCMO use are area-dependent.
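The MCMO idea of one strategy that is simultaneously feasible in several conceptual models can be written as a small linear program: maximize total pumping subject to drawdown limits that must hold under both response matrices at once. The two response matrices below are hypothetical, standing in for the two Cache Valley simulation models.

```python
import numpy as np
from scipy.optimize import linprog

# Two conceptual models give different drawdown responses (m of head drop
# per unit pumping) at two control locations; all values are illustrative.
resp_model_A = np.array([[0.8, 0.3],
                         [0.2, 0.9]])
resp_model_B = np.array([[1.0, 0.4],
                         [0.3, 0.7]])
max_drawdown = np.array([2.0, 2.0])     # head-constraint limit at each location

# Maximize q1 + q2 subject to the drawdown limits holding in BOTH models
# (linprog minimizes, so the objective is negated).
c = [-1.0, -1.0]
A_ub = np.vstack([resp_model_A, resp_model_B])
b_ub = np.concatenate([max_drawdown, max_drawdown])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
q = res.x                               # compromise pumping strategy
```

The binding constraints at the optimum come from different models, which is exactly the situation where a single-model optimization would overestimate the safe yield.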
Model-based optimal planning of hepatic radiofrequency ablation.
Chen, Qiyong; Müftü, Sinan; Meral, Faik Can; Tuncali, Kemal; Akçakaya, Murat
2016-07-19
This article presents a model-based pre-treatment optimal planning framework for hepatic tumour radiofrequency (RF) ablation. Conventional hepatic RF ablation methods rely on a pre-specified input voltage and treatment length based on the tumour size. Using these experimentally obtained pre-specified treatment parameters in RF ablation is not optimal for achieving the expected level of cell death and usually results in more healthy tissue damage than desired. In this study we present a pre-treatment planning framework that provides tools to control the levels of both healthy tissue preservation and tumour cell death. Over the geometry of the tumour and surrounding tissue, we formulate RF ablation planning as a constrained optimization problem. With specific constraints over the temperature profile (TP) in pre-determined areas of the target geometry, we consider two different cost functions based on the history of the TP and the Arrhenius index (AI) of the target location, respectively. We optimally compute the input voltage variation to minimize the damage to the healthy tissue while ensuring complete cell death in the tumour and the immediate area covering the tumour. As an example, we use a simulation of a 1D symmetric target geometry mimicking the application of a single-electrode RF probe. Results demonstrate that, compared to the conventional methods, both cost functions improve healthy tissue preservation.
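The Arrhenius index underlying the second cost function is a time integral of a temperature-dependent damage rate. The sketch below evaluates it for an assumed heating curve; the kinetic parameters are commonly cited liver-tissue values, but both they and the temperature history should be treated as assumptions here.

```python
import numpy as np

# Arrhenius index AI(t) = integral of A * exp(-dE / (Rgas * T(t))) dt,
# a standard thermal-damage measure. Parameter values are assumptions
# (often-quoted liver kinetics), not taken from the article.
A = 7.39e39          # frequency factor (1/s)
dE = 2.577e5         # activation energy (J/mol)
Rgas = 8.314         # universal gas constant (J/mol/K)

dt = 0.1                                     # time step (s)
t = np.arange(0.0, 600.0, dt)                # 10-minute treatment
T = 310.15 + 30.0 * (1 - np.exp(-t / 60.0))  # assumed heating curve (K)

AI = np.cumsum(A * np.exp(-dE / (Rgas * T)) * dt)
# AI >= 1 at a location is the usual threshold for complete cell death,
# which is what the planning constraints enforce inside the tumour.
```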
Swimming simply: Minimal models and stroke optimization for biological systems
NASA Astrophysics Data System (ADS)
Burton, Lisa; Guasto, Jeffrey S.; Stocker, Roman; Hosoi, A. E.
2012-11-01
In this talk, we examine how to represent the kinematics of swimming biological systems. We present a new method of extracting optimal curvature-space basis modes from high-speed video microscopy images of motile spermatozoa by tracking their flagellar kinematics. Using as few as two basis modes to characterize the swimmer's shape, we apply resistive force theory to build a model and predict the swimming speed and net translational and rotational displacement of a sperm cell over any given stroke. This low-order representation of motility yields a complete visualization of the system dynamics. The visualization tools provide refined initialization and intuition for global stroke optimization and improve motion planning by taking advantage of symmetries in the shape space to design a stroke that produces a desired net motion. Comparing the predicted optimal strokes to those observed experimentally enables us to rationalize biological motion by identifying possible optimization goals of the organism. This approach is applicable to a wide array of systems at both low and high Reynolds numbers. Battelle Memorial Institute and NSF.
Prehension synergies during nonvertical grasping, II: Modeling and optimization.
Pataky, Todd C; Latash, Mark L; Zatsiorsky, Vladimir M
2004-10-01
This study examines various optimization criteria as potential sources of constraints that eliminate (or at least reduce the degree of) mechanical redundancy in prehension. A model of nonvertical grasping mimicking the experimental conditions of Pataky et al. (current issue) was developed and numerically optimized. Several cost functions compared well with experimental data, including energy-like functions, entropy-like functions, and a "motor command" function. A tissue deformation function failed to predict finger forces. In the prehension literature, the "safety margin" (SM) measure has been used to describe grasp quality. We demonstrate here that the SM is an inappropriate measure for nonvertical grasps. We introduce a new measure, the "generalized safety margin" (GSM), which reduces to the SM for vertical and two-digit grasps. It was found that a close-to-constant GSM accounts for many of the finger force patterns that are observed when grasping an object oriented arbitrarily with respect to the gravity field. It was hypothesized that, when determining finger forces, the CNS assumes that a grasped object is more slippery than it actually is. An "operative friction coefficient" of approximately 30% of the actual coefficient accounted for the offset between experimental and optimized data. The data suggest that the CNS utilizes an optimization strategy when coordinating finger forces during grasping.
Using Cotton Model Simulations to Estimate Optimally Profitable Irrigation Strategies
NASA Astrophysics Data System (ADS)
Mauget, S. A.; Leiker, G.; Sapkota, P.; Johnson, J.; Maas, S.
2011-12-01
In recent decades irrigation pumping from the Ogallala Aquifer has led to declines in saturated thickness that have not been compensated for by natural recharge, which has led to questions about the long-term viability of agriculture in the cotton producing areas of west Texas. Adopting irrigation management strategies that optimize profitability while reducing irrigation waste is one way of conserving the aquifer's water resource. Here, a database of modeled cotton yields generated under drip and center pivot irrigated and dryland production scenarios is used in a stochastic dominance analysis that identifies such strategies under varying commodity price and pumping cost conditions. This database and analysis approach will serve as the foundation for a web-based decision support tool that will help producers identify optimal irrigation treatments under specified cotton price, electricity cost, and depth to water table conditions.
Vibroacoustic optimization using a statistical energy analysis model
NASA Astrophysics Data System (ADS)
Culla, Antonio; D'Ambrogio, Walter; Fregolent, Annalisa; Milana, Silvia
2016-08-01
In this paper, an optimization technique for medium-high frequency dynamic problems based on the Statistical Energy Analysis (SEA) method is presented. In a SEA model, the subsystem energies are controlled by internal loss factors (ILFs) and coupling loss factors (CLFs), which in turn depend on the physical parameters of the subsystems. A preliminary sensitivity analysis of subsystem energy to the CLFs is performed to select the CLFs that are most effective on subsystem energies. Since the injected power depends not only on the external loads but also on the physical parameters of the subsystems, it must be taken into account under certain conditions. This is accomplished in the optimization procedure, where approximate relationships between CLFs, injected power, and physical parameters are derived. The approach is applied to a typical aeronautical structure: the cabin of a helicopter.
Design Oriented Structural Modeling for Airplane Conceptual Design Optimization
NASA Technical Reports Server (NTRS)
Livne, Eli
1999-01-01
The main goal of research conducted with the support of this grant was to develop design-oriented structural optimization methods for the conceptual design of airplanes. Traditionally in conceptual design, airframe weight is estimated from statistical equations developed over years of fitting airplane weight data in databases of similar existing airplanes. Utilization of such regression equations for the design of new airplanes can be justified only if the new airplanes use structural technology similar to the technology of the airplanes in those weight databases. If any new structural technology is to be pursued, or any new unconventional configurations designed, the statistical weight equations cannot be used. In such cases any structural weight estimation must be based on rigorous, "physics based" structural analysis and optimization of the airframes under consideration. Work under this grant explored airframe design-oriented structural optimization techniques along two lines of research: methods based on "fast" design-oriented finite element technology, and methods based on equivalent plate / equivalent shell models of airframes, in which the vehicle is modelled as an assembly of plate and shell components, each simulating a lifting surface or nacelle / fuselage pieces. Since responses to changes in geometry are essential in conceptual design of airplanes, as is the capability to optimize the shape itself, research supported by this grant sought to develop efficient techniques for parametrization of airplane shape and sensitivity analysis with respect to shape design variables. Toward the end of the grant period a prototype automated structural analysis code designed to work with the NASA Aircraft Synthesis conceptual design code ACSYNT was delivered to NASA Ames.
Proficient brain for optimal performance: the MAP model perspective
di Fronso, Selenia; Filho, Edson; Conforto, Silvia; Schmid, Maurizio; Bortoli, Laura; Comani, Silvia; Robazza, Claudio
2016-01-01
Background. The main goal of the present study was to explore theta and alpha event-related desynchronization/synchronization (ERD/ERS) activity during shooting performance. We adopted the idiosyncratic framework of the multi-action plan (MAP) model to investigate different processing modes underpinning four types of performance. In particular, we were interested in examining the neural activity associated with optimal-automated (Type 1) and optimal-controlled (Type 2) performances. Methods. Ten elite shooters (6 male and 4 female) with extensive international experience participated in the study. ERD/ERS analysis was used to investigate cortical dynamics during performance. A 4 × 3 (performance types × time) repeated measures analysis of variance was performed to test the differences among the four types of performance during the three seconds preceding the shots for theta, low alpha, and high alpha frequency bands. The dependent variables were the ERD/ERS percentages in each frequency band (i.e., theta, low alpha, high alpha) for each electrode site across the scalp. This analysis was conducted on 120 shots for each participant in three different frequency bands and the individual data were then averaged. Results. We found ERS to be mainly associated with optimal-automatic performance, in agreement with the “neural efficiency hypothesis.” We also observed more ERD as related to optimal-controlled performance in conditions of “neural adaptability” and proficient use of cortical resources. Discussion. These findings are congruent with the MAP conceptualization of four performance states, in which unique psychophysiological states underlie distinct performance-related experiences. From an applied point of view, our findings suggest that the MAP model can be used as a framework to develop performance enhancement strategies based on cognitive and neurofeedback techniques. PMID:27257557
Generalized PSF modeling for optimized quantitation in PET imaging
NASA Astrophysics Data System (ADS)
Ashrafinia, Saeed; Mohy-ud-Din, Hassan; Karakatsanis, Nicolas A.; Jha, Abhinav K.; Casey, Michael E.; Kadrmas, Dan J.; Rahmim, Arman
2017-06-01
modeling does not offer optimized PET quantitation, and that PSF overestimation may provide enhanced SUV quantitation. Furthermore, generalized PSF modeling may provide a valuable approach for quantitative tasks such as treatment-response assessment and prognostication.
Optimizing complex phenotypes through model-guided multiplex genome engineering
Kuznetsov, Gleb; Goodman, Daniel B.; Filsinger, Gabriel T.; ...
2017-05-25
Here, we present a method for identifying genomic modifications that optimize a complex phenotype through multiplex genome engineering and predictive modeling. We apply our method to identify six single nucleotide mutations that recover 59% of the fitness defect exhibited by the 63-codon E. coli strain C321.ΔA. By introducing targeted combinations of changes in multiplex we generate rich genotypic and phenotypic diversity and characterize clones using whole-genome sequencing and doubling time measurements. Regularized multivariate linear regression accurately quantifies individual allelic effects and overcomes bias from hitchhiking mutations and context-dependence of genome editing efficiency that would confound other strategies.
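The regularized multivariate linear regression step described above can be illustrated with a minimal sketch; the allele matrix, effect sizes, noise level, and ridge penalty below are synthetic assumptions, not the study's data:

```python
import numpy as np

# Synthetic sketch: rows of X are clones, columns are candidate alleles
# (1 = mutation present); y is a fitness proxy. Ridge-regularized least
# squares attributes fitness changes to individual alleles even when
# mutations co-occur across clones.
rng = np.random.default_rng(0)
n_clones, n_alleles = 200, 6
true_effect = np.array([0.30, 0.20, 0.10, 0.0, 0.0, -0.05])

X = rng.integers(0, 2, size=(n_clones, n_alleles)).astype(float)
y = X @ true_effect + rng.normal(0.0, 0.01, n_clones)

lam = 0.1  # ridge penalty
# beta = (X^T X + lam * I)^{-1} X^T y
beta = np.linalg.solve(X.T @ X + lam * np.eye(n_alleles), X.T @ y)
```

In the study's setting the response would be measured doubling times, and the regularization additionally guards against hitchhiking mutations inflating apparent per-allele effects.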
Particle Swarm Optimization with Watts-Strogatz Model
NASA Astrophysics Data System (ADS)
Zhu, Zhuanghua
Particle swarm optimization (PSO) is a popular swarm intelligence methodology that simulates animal social behaviors. Recent study shows that this type of social behavior is a complex system; however, in most variants of PSO, all individuals lie in a fixed topology, conflicting with this natural phenomenon. Therefore, in this paper, a new variant of PSO combined with the Watts-Strogatz small-world topology model, called WSPSO, is proposed. In WSPSO, the topology is changed according to Watts-Strogatz rules throughout the whole evolutionary process. Simulation results show the proposed algorithm is effective and efficient.
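A minimal sketch of the WSPSO idea follows. The inertia and acceleration coefficients, ring degree, and rewiring probability are illustrative assumptions, and for brevity the small-world topology is built once at startup rather than rewired during evolution as in the paper:

```python
import random

def watts_strogatz(n, k, p, rng):
    """Ring lattice of n nodes, each linked to its k nearest neighbors per
    side, with each edge rewired to a random node with probability p."""
    nbrs = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(1, k + 1):
            nbrs[i].add((i + j) % n)
            nbrs[(i + j) % n].add(i)
    for i in range(n):
        for j in list(nbrs[i]):
            if rng.random() < p:
                new = rng.randrange(n)
                if new != i and new not in nbrs[i]:
                    nbrs[i].discard(j); nbrs[j].discard(i)
                    nbrs[i].add(new); nbrs[new].add(i)
    return nbrs

def wspso(f, dim, n=30, iters=200, seed=0):
    """PSO where each particle learns from the best particle in its
    small-world neighborhood instead of a single global best."""
    rng = random.Random(seed)
    topo = watts_strogatz(n, k=2, p=0.1, rng=rng)
    x = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    v = [[0.0] * dim for _ in range(n)]
    pbest = [xi[:] for xi in x]
    pval = [f(xi) for xi in x]
    for _ in range(iters):
        for i in range(n):
            # local best over the particle's Watts-Strogatz neighborhood
            lb = min(topo[i] | {i}, key=lambda j: pval[j])
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                v[i][d] = (0.7 * v[i][d]
                           + 1.5 * r1 * (pbest[i][d] - x[i][d])
                           + 1.5 * r2 * (pbest[lb][d] - x[i][d]))
                x[i][d] += v[i][d]
            fx = f(x[i])
            if fx < pval[i]:
                pval[i], pbest[i] = fx, x[i][:]
    return min(pval)

sphere = lambda x: sum(xi * xi for xi in x)
```

Because information spreads through short-cut edges rather than instantly through a global best, such a topology tends to slow premature convergence while keeping propagation fast.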
[Study on optimal model of hypothetical work injury insurance scheme].
Ye, Chi-yu; Dong, Heng-jin; Wu, Yuan; Duan, Sheng-nan; Liu, Xiao-fang; You, Hua; Hu, Hui-mei; Wang, Lin-hao; Zhang, Xing; Wang, Jing
2013-12-01
To explore an optimal model of hypothetical work injury insurance scheme, which is in line with the wishes of workers, based on the problems in the implementation of work injury insurance in China and to provide useful information for relevant policy makers. Multistage cluster sampling was used to select subjects: first, 9 small, medium, and large enterprises were selected from three cities (counties) in Zhejiang Province, China according to the economic development, transportation, and cooperation; then, 31 workshops were randomly selected from the 9 enterprises. Face-to-face interviews were conducted by trained interviewers using a pre-designed questionnaire among all workers in the 31 workshops. After optimization of hypothetical work injury insurance scheme, the willingness to participate in the scheme increased from 73.87% to 80.96%; the average willingness to pay for the scheme increased from 2.21% (51.77 yuan) to 2.38% of monthly wage (54.93 yuan); the median willingness to pay for the scheme increased from 1% to 1.2% of monthly wage, but decreased from 35 yuan to 30 yuan. The optimal model of hypothetical work injury insurance scheme covers all national and provincial statutory occupational diseases and work accidents, as well as consultations about occupational diseases. The scheme is supposed to be implemented worldwide by the National Social Security Department, without regional differences. The premium is borne by the state, enterprises, and individuals, and an independent insurance fund is kept in the lifetime personal account for each of insured individuals. The premium is not refunded in any event. Compensation for occupational diseases or work accidents is unrelated to the enterprises of the insured workers but related to the length of insurance. The insurance becomes effective one year after enrollment, while it is put into effect immediately after the occupational disease or accident occurs. The optimal model of hypothetical work injury insurance
Comment on ``Analysis of optimal velocity model with explicit delay''
NASA Astrophysics Data System (ADS)
Davis, L. C.
2002-09-01
The effect of including an explicit delay time (due to driver reaction) on the optimal velocity model is studied. For a platoon of vehicles to avoid collisions, many-vehicle simulations demonstrate that delay times must be well below the critical delay time determined by a linear analysis for the response of a single vehicle. Safe platoons require rather small delay times, substantially smaller than typical reaction times of drivers. The present results do not support the conclusion of Bando et al. [M. Bando, K. Hasebe, K. Nakanishi, and A. Nakayama, Phys. Rev. E 58, 5429 (1998)] that explicit delay plays no essential role.
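The delay-differential setup can be illustrated with a small explicit-Euler simulation; the optimal velocity function, sensitivity a, and perturbation below are illustrative assumptions, not the parameters used in the cited analysis:

```python
import numpy as np

# Optimal velocity model with explicit driver reaction delay tau:
#   dv_n/dt = a * [ V(h_n(t - tau)) - v_n(t - tau) ]
# where h_n is the headway to the car ahead.

def V(h, v_max=30.0, h_c=25.0):
    """Optimal (desired) velocity as a function of headway h in meters."""
    return v_max * (np.tanh((h - h_c) / 10.0)
                    + np.tanh(h_c / 10.0)) / (1.0 + np.tanh(h_c / 10.0))

def min_headway(n_cars=10, tau=0.1, a=2.5, dt=0.01, t_end=60.0, spacing=30.0):
    """Simulate a platoon whose leader briefly brakes; return the smallest
    headway observed (a collision would drive this toward zero)."""
    steps = int(t_end / dt)
    delay = int(round(tau / dt))
    x = np.zeros((steps, n_cars))
    v = np.zeros((steps, n_cars))
    x[0] = -spacing * np.arange(n_cars)
    v[0] = V(spacing)
    for t in range(steps - 1):
        td = max(t - delay, 0)  # index of the delayed state
        # leader drops to half speed between t = 10 s and 12 s
        lead_v = V(spacing) * (0.5 if 10.0 < t * dt < 12.0 else 1.0)
        v[t + 1, 0] = lead_v
        x[t + 1, 0] = x[t, 0] + lead_v * dt
        for n in range(1, n_cars):
            h = x[td, n - 1] - x[td, n]
            v[t + 1, n] = v[t, n] + a * (V(h) - v[td, n]) * dt
            x[t + 1, n] = x[t, n] + v[t, n] * dt
    return float((x[:, :-1] - x[:, 1:]).min())
```

With a small delay the platoon recovers without the headway collapsing; pushing tau toward the critical value shrinks the minimum headway, which is the many-vehicle behavior the comment examines.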
Numerical Modeling and Optimization of Warm-water Heat Sinks
NASA Astrophysics Data System (ADS)
Hadad, Yaser; Chiarot, Paul
2015-11-01
For cooling in large data-centers and supercomputers, water is increasingly replacing air as the working fluid in heat sinks. Utilizing water provides unique capabilities; for example: higher heat capacity, Prandtl number, and convection heat transfer coefficient. The use of warm, rather than chilled, water has the potential to provide increased energy efficiency. The geometric and operating parameters of the heat sink govern its performance. Numerical modeling is used to examine the influence of geometry and operating conditions on key metrics such as thermal and flow resistance. This model also facilitates studies on cooling of electronic chip hot spots and failure scenarios. We report on the optimal parameters for a warm-water heat sink to achieve maximum cooling performance.
Non-convex Statistical Optimization for Sparse Tensor Graphical Model
Sun, Wei; Wang, Zhaoran; Liu, Han; Cheng, Guang
2016-01-01
We consider the estimation of sparse graphical models that characterize the dependency structure of high-dimensional tensor-valued data. To facilitate the estimation of the precision matrix corresponding to each way of the tensor, we assume the data follow a tensor normal distribution whose covariance has a Kronecker product structure. The penalized maximum likelihood estimation of this model involves minimizing a non-convex objective function. In spite of the non-convexity of this estimation problem, we prove that an alternating minimization algorithm, which iteratively estimates each sparse precision matrix while fixing the others, attains an estimator with the optimal statistical rate of convergence as well as consistent graph recovery. Notably, such an estimator achieves estimation consistency with only one tensor sample, a guarantee not attained in previous work. Our theoretical results are backed by thorough numerical studies.
Mathematical programming models for determining the optimal location of beehives.
Gavina, Maica Krizna A; Rabajante, Jomar F; Cervancia, Cleofas R
2014-05-01
Farmers frequently decide where to locate the colonies of their domesticated eusocial bees, especially given the following mutually exclusive scenarios: (i) there are limited nectar and pollen sources within the vicinity of the apiary that cause competition among foragers; and (ii) there are fewer pollinators compared to the number of inflorescence that may lead to suboptimal pollination of crops. We hypothesize that optimally distributing the beehives in the apiary can help address the two scenarios stated above. In this paper, we develop quantitative models (specifically using linear programming) for addressing the two given scenarios. We formulate models involving the following factors: (i) fuzzy preference of the beekeeper; (ii) number of available colonies; (iii) unknown-but-bounded strength of colonies; (iv) probabilistic carrying capacity of the plant clusters; and (v) spatial orientation of the apiary.
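A toy version of such a linear program can be written with `scipy.optimize.linprog`; the cluster capacities, per-colony pollination values, and colony count below are invented for illustration and do not come from the paper's models:

```python
import numpy as np
from scipy.optimize import linprog

# Distribute a fixed number of colonies across plant clusters: each cluster
# supports only so many colonies (carrying capacity), and clusters differ
# in pollination value per colony. Maximize total value subject to capacity.
capacity = np.array([40.0, 25.0, 15.0])   # max colonies per cluster
benefit = np.array([3.0, 2.0, 1.5])       # value per colony placed there
total_colonies = 60

# maximize benefit @ x  <=>  minimize -benefit @ x
res = linprog(
    c=-benefit,
    A_ub=np.eye(3), b_ub=capacity,          # x_j <= capacity_j
    A_eq=np.ones((1, 3)), b_eq=[total_colonies],  # all colonies placed
    bounds=[(0, None)] * 3,
)
x = res.x
```

Here the optimum fills the highest-value clusters first (40, 20, 0 colonies); the paper's models layer fuzzy beekeeper preferences and probabilistic capacities on top of this basic structure.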
Resonant cavity light-emitting diodes: modeling, design, and optimization
NASA Astrophysics Data System (ADS)
Dumitrescu, Mihail M.; Sipila, Pekko; Vilokkinen, Ville; Toikkanen, L.; Melanen, Petri; Saarinen, Mika J.; Orsila, Seppo; Savolainen, Pekka; Toivonen, Mika; Pessa, Markus
2000-02-01
Monolithic top-emitting resonant cavity light-emitting diodes operating in the 650 and 880 nm ranges have been prepared using solid-source molecular beam epitaxy growth. Transfer-matrix-based modeling together with a self-consistent model has been used to optimize the devices' performance. The design of the layer structure and doping profile was assisted by computer simulations that enabled many device improvements. Among the most significant, intermediate-composition barrier-reduction layers were introduced in the DBR mirrors to improve the I-V characteristics, and the cavity and mirrors were detuned to aim at maximum extraction efficiency. The fabricated devices showed line widths below 15 nm, CW light power output of 8 and 22.5 mW, and external quantum efficiencies of 3 percent and 14.1 percent in the 650 nm and 880 nm ranges, respectively, while the simulations indicate significant possibilities for further performance improvement.
Recent developments in equivalent plate modeling for wing shape optimization
NASA Technical Reports Server (NTRS)
Livne, Eli
1993-01-01
A new technique for structural modeling of airplane wings is presented taking transverse shear effects into account. The kinematic assumptions of first order shear deformation plate theory in combination with numerical analysis based on simple polynomials which define geometry, construction and displacement approximations lead to analytical expressions for elements of the stiffness and mass matrices and load vector. Contributions from the cover skins, spar and rib caps and spar and rib webs are included as well as concentrated springs and concentrated masses. Limitations of current equivalent plate wing modeling techniques based on classical plate theory are discussed, and the improved accuracy of the new equivalent plate technique is demonstrated through comparison to finite element analysis and test results. Analytical derivatives of stiffness, mass and load terms with respect to wing shape lead to analytic sensitivities of displacements, stresses and natural modes with respect to planform shape and depth distribution. This makes the new capability an effective structural tool for wing shape optimization.
Three essays on multi-level optimization models and applications
NASA Astrophysics Data System (ADS)
Rahdar, Mohammad
The general form of a multi-level mathematical programming problem is a set of nested optimization problems, in which each level controls a series of decision variables independently. However, the value of the decision variables may also impact the objective functions of other levels. A two-level model is called a bilevel model and can be considered a Stackelberg game with a leader and a follower: the leader anticipates the response of the follower and optimizes its objective function, and the follower then reacts to the leader's action. The multi-level decision-making model has many real-world applications, such as government decisions, energy policies, market economies, and network design. However, there is a lack of algorithms capable of solving medium- and large-scale problems of these types. The dissertation is devoted to both theoretical research and applications of multi-level mathematical programming models, and consists of three parts, each in a paper format. The first part studies the renewable energy portfolio under two major renewable energy policies. The potential competition for biomass for the growth of the renewable energy portfolio in the United States, and other interactions between the two policies over the next twenty years, are investigated. This problem mainly has two levels of decision makers: the government/policy makers and the biofuel producers/electricity generators/farmers. We focus on the lower-level problem to predict the amount of capacity expansion, fuel production, and power generation. In the second part, we address uncertainty over demand and lead time in a multi-stage mathematical programming problem. We propose a two-stage tri-level optimization model based on a rolling-horizon approach to reduce the dimensionality of the multi-stage problem. In the third part of the dissertation, we introduce a new branch and bound algorithm to solve bilevel linear programming problems. The total time is reduced by solving a smaller relaxation
Computer model for characterizing, screening, and optimizing electrolyte systems
Gering, Kevin L.
2015-06-15
Electrolyte systems in contemporary batteries are tasked with operating under increasing performance requirements. All battery operation is in some way tied to the electrolyte and how it interacts with various regions within the cell environment. Because the electrolyte plays a crucial role in battery performance and longevity, it is imperative that accurate, physics-based models be developed that characterize key electrolyte properties while keeping pace with the increasing complexity of these liquid systems. Advanced models are needed since laboratory measurements require significant resources to carry out for even a modest experimental matrix. The Advanced Electrolyte Model (AEM) developed at the INL is a proven capability designed to explore molecular-to-macroscale level aspects of electrolyte behavior, and can be used to drastically reduce the time required to characterize and optimize electrolytes. Although it is applied most frequently to lithium-ion battery systems, it is general in its theory and can be used toward numerous other targets and intended applications. This capability is unique, powerful, relevant to present and future electrolyte development, and without peer. It redefines electrolyte modeling for highly-complex contemporary systems, wherein significant steps have been taken to capture the reality of electrolyte behavior in the electrochemical cell environment. This capability can have a very positive impact on accelerating domestic battery development to support aggressive vehicle and energy goals in the 21st century.
Optimization model for UV-Riboflavin corneal cross-linking
NASA Astrophysics Data System (ADS)
Schumacher, S.; Wernli, J.; Scherrer, S.; Bueehler, M.; Seiler, T.; Mrochen, M.
2011-03-01
Nowadays UV cross-linking is an established method for the treatment of keratectasia, and a standardized protocol is currently used for the cross-linking treatment. We present a theoretical model that predicts the number of induced crosslinks in the corneal tissue as a function of the riboflavin concentration, the radiation intensity, the pre-treatment time, and the treatment time. The model is developed by merging the diffusion equation, the equation for the light distribution as a function of the absorbers in the tissue, and a rate equation for the polymerization process. A higher concentration of riboflavin solution, as well as a higher irradiation intensity, increases the number of induced crosslinks. However, stress-strain experiments performed to support the model showed that higher riboflavin concentrations (> 0.125%) do not result in a further increase in the stability of the corneal tissue. This is caused by the inhomogeneous distribution of induced crosslinks throughout the cornea due to the uneven absorption of the UV light. The new model offers the possibility to optimize the treatment individually for every patient, depending on corneal thickness, in terms of efficiency, safety, and treatment time.
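The interplay the model captures between riboflavin concentration and UV attenuation can be sketched with a simple Beer-Lambert profile; the coefficients below are illustrative assumptions, not clinical values from the treatment protocol:

```python
import math

def intensity(depth_um, I0=3.0, ribo_conc=0.1, eps=0.02, mu_tissue=0.001):
    """UV irradiance at corneal depth (um): Beer-Lambert attenuation whose
    absorption coefficient grows with riboflavin concentration (in %)."""
    mu = eps * ribo_conc / 0.1 + mu_tissue
    return I0 * math.exp(-mu * depth_um)

def crosslink_rate(depth_um, **kw):
    """First-order photochemistry sketch: local crosslinking rate is
    proportional to local irradiance times riboflavin concentration."""
    return intensity(depth_um, **kw) * kw.get("ribo_conc", 0.1)
```

Raising the concentration raises the rate at the surface but starves the deeper stroma of UV light, which is the inhomogeneity the abstract invokes to explain why concentrations above 0.125% add no measurable stiffness.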
Optimal vibration control of curved beams using distributed parameter models
NASA Astrophysics Data System (ADS)
Liu, Fushou; Jin, Dongping; Wen, Hao
2016-12-01
The design of a linear quadratic optimal controller using the spectral factorization method is studied for vibration suppression of curved beam structures modeled as distributed parameter systems. The equations of motion for active control of the in-plane vibration of a curved beam are first developed, considering shear deformation and rotary inertia, and the state space model of the curved beam is then established directly from the partial differential equations of motion. The functional gains for the distributed parameter model of the curved beam are calculated by extending the spectral factorization method. Moreover, the response of the closed-loop control system is derived explicitly in the frequency domain. Finally, the suppression of vibration at the free end of a cantilevered curved beam by a point control moment is studied through numerical case studies, in which the benefit of the presented method is shown by comparison with a constant-gain velocity feedback control law, and the performance of the presented method in avoiding control spillover is demonstrated.
Read, Mark N; Bailey, Jacqueline; Timmis, Jon; Chtanova, Tatyana
2016-09-01
The advent of two-photon microscopy now reveals unprecedented, detailed spatio-temporal data on cellular motility and interactions in vivo. Understanding cellular motility patterns is key to gaining insight into the development and possible manipulation of the immune response. Computational simulation has become an established technique for understanding immune processes and evaluating hypotheses in the context of experimental data, and there is clear scope to integrate microscopy-informed motility dynamics. However, determining which motility model best reflects in vivo motility is non-trivial: 3D motility is an intricate process requiring several metrics to characterize. This complicates model selection and parameterization, which must be performed against several metrics simultaneously. Here we evaluate Brownian motion, Lévy walk and several correlated random walks (CRWs) against the motility dynamics of neutrophils and lymph node T cells under inflammatory conditions by simultaneously considering cellular translational and turn speeds, and meandering indices. Heterogeneous cells exhibiting a continuum of inherent translational speeds and directionalities comprise both datasets, a feature significantly improving capture of in vivo motility when simulated as a CRW. Furthermore, translational and turn speeds are inversely correlated, and the corresponding CRW simulation again improves capture of our in vivo data, albeit to a lesser extent. In contrast, Brownian motion poorly reflects our data. Lévy walk is competitive in capturing some aspects of neutrophil motility, but T cell directional persistence only, therein highlighting the importance of evaluating models against several motility metrics simultaneously. This we achieve through novel application of multi-objective optimization, wherein each model is independently implemented and then parameterized to identify optimal trade-offs in performance against each metric. The resultant Pareto fronts of optimal
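One of the compared models, the correlated random walk, together with the meandering index used as one of the evaluation metrics, can be sketched as follows; the step length, turn-angle spread, and track duration are illustrative assumptions:

```python
import math
import random

def crw_track(steps, speed=1.0, turn_sd=0.3, seed=0):
    """2D correlated random walk: each step turns by a Gaussian angle, so
    small turn_sd gives persistent, directional motion."""
    rng = random.Random(seed)
    x = y = 0.0
    theta = rng.uniform(-math.pi, math.pi)
    xs, ys = [0.0], [0.0]
    for _ in range(steps):
        theta += rng.gauss(0.0, turn_sd)
        x += speed * math.cos(theta)
        y += speed * math.sin(theta)
        xs.append(x); ys.append(y)
    return xs, ys

def meandering_index(xs, ys):
    """Net displacement divided by total path length (1 = straight line)."""
    net = math.hypot(xs[-1] - xs[0], ys[-1] - ys[0])
    path = sum(math.hypot(xs[i + 1] - xs[i], ys[i + 1] - ys[i])
               for i in range(len(xs) - 1))
    return net / path
```

A persistent walker (small turn_sd) meanders less than a near-Brownian one (large turn_sd); fitting such parameters against several metrics at once is what the multi-objective optimization described above formalizes.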
Augmenting Parametric Optimal Ascent Trajectory Modeling with Graph Theory
NASA Technical Reports Server (NTRS)
Dees, Patrick D.; Zwack, Matthew R.; Edwards, Stephen; Steffens, Michael
2016-01-01
It has been well documented that decisions made in the early stages of Conceptual and Pre-Conceptual design commit up to 80% of total Life-Cycle Cost (LCC) while engineers know the least about the product they are designing [1]. Once within Preliminary and Detailed design, however, changes to the design become far more difficult and expensive to enact in both cost and schedule. Primarily this has been due to a lack of detailed data usually uncovered later, during the Preliminary and Detailed design phases. In our current budget-constrained environment, making decisions within Conceptual and Pre-Conceptual design that minimize LCC while meeting requirements is paramount to a program's success. Within the arena of launch vehicle design, optimizing the ascent trajectory is critical for minimizing costs arising from propellant, aerodynamic, aeroheating, and acceleration loads while meeting requirements such as payload delivered to a desired orbit. In order to optimize the vehicle design, its constraints and requirements must be known; however, as the design cycle proceeds it is all but inevitable that the conditions will change. Upon that change, the previously optimized trajectory may no longer be optimal, or may no longer meet design requirements. The current paradigm for adjusting to these updates is generating point solutions for every change in the design's requirements [2]. This can be a tedious, time-consuming task, as changes in virtually any piece of a launch vehicle's design can have a disproportionately large effect on the ascent trajectory, because the solution space of the trajectory optimization problem is both non-linear and multimodal [3]. In addition, an industry-standard tool, Program to Optimize Simulated Trajectories (POST), requires an expert analyst to produce simulated trajectories that are feasible and optimal [4]. In a previous publication the authors presented a method for combating these challenges [5]. In order to bring more detailed information
20nm CMP model calibration with optimized metrology data and CMP model applications
NASA Astrophysics Data System (ADS)
Katakamsetty, Ushasree; Koli, Dinesh; Yeo, Sky; Hui, Colin; Ghulghazaryan, Ruben; Aytuna, Burak; Wilson, Jeff
2015-03-01
Chemical Mechanical Polishing (CMP) is the essential process for planarization of wafer surfaces in semiconductor manufacturing. The CMP process helps produce smaller ICs with more electronic circuits, improving chip speed and performance. CMP also helps increase throughput and yield, which reduces an IC manufacturer's total production costs. A CMP simulation model helps predict CMP manufacturing hotspots early and minimize CMP and CMP-induced lithography and etch defects [2]. At advanced process nodes, conventional dummy-fill insertion for uniform density cannot address all of the CMP short-range, long-range, and multi-layer stacking effects, or other effects such as pad conditioning and slurry selectivity. In this paper, we present the flow for 20nm CMP modeling using Mentor Graphics CMP modeling tools to build a multilayer Cu-CMP model and study hotspots. We present the inputs required for good CMP model calibration, the challenges faced with metrology collection, and techniques to optimize wafer cost. We showcase the CMP model validation results and the model's application to predicting multilayer topography accumulation effects for hotspot detection. We provide the flow for early detection of CMP hotspots with Calibre CMPAnalyzer to improve Design-for-Manufacturability (DFM) robustness.
Optimal subgrid scheme for shell models of turbulence
NASA Astrophysics Data System (ADS)
Biferale, Luca; Mailybaev, Alexei A.; Parisi, Giorgio
2017-04-01
We discuss a theoretical framework to define an optimal subgrid closure for shell models of turbulence. The closure is based on the ansatz that consecutive shell multipliers are short-range correlated, following the third hypothesis of Kolmogorov formulated for similar quantities in the original three-dimensional Navier-Stokes turbulence. We also propose a series of systematic approximations to the optimal model by assuming different degrees of correlation across scales among the amplitudes and phases of consecutive multipliers. We show numerically that such low-order closures work well, reproducing all known properties of the large-scale dynamics, including anomalous scaling. We find small but systematic discrepancies only for a range of scales close to the subgrid threshold, which do not disappear as the order of the approximation increases. We speculate that the lack of convergence might be due to a structural instability, at least for the evolution of the very fast degrees of freedom at small scales. Connections with similar problems for large eddy simulations of the three-dimensional Navier-Stokes equations are also discussed.
Model-driven optimization of multicomponent self-assembly processes.
Korevaar, Peter A; Grenier, Christophe; Markvoort, Albert J; Schenning, Albertus P H J; de Greef, Tom F A; Meijer, E W
2013-10-22
Here, we report an engineering approach toward multicomponent self-assembly processes by developing a methodology to circumvent spurious, metastable assemblies. The formation of metastable aggregates often hampers self-assembly of molecular building blocks into the desired nanostructures. Strategies are explored to master the pathway complexity and avoid off-pathway aggregates by optimizing the rate of assembly along the correct pathway. We study as a model system the coassembly of two monomers, the R- and S-chiral enantiomers of a π-conjugated oligo(p-phenylene vinylene) derivative. Coassembly kinetics are analyzed by developing a kinetic model, which reveals that the initial assembly of metastable structures buffers free monomers and thereby slows the formation of thermodynamically stable assemblies. These metastable assemblies exert greater influence on the thermodynamically favored self-assembly pathway as the ratio between the two monomers approaches 1:1, in agreement with experimental results. Moreover, competition by metastable assemblies is highly temperature dependent and hampers the assembly of equilibrium nanostructures most effectively at intermediate temperatures. We demonstrate that the rate of the assembly process may be optimized by tuning the cooling rate. Finally, it is shown by simulation that increasing the driving force for assembly stepwise, by changing the solvent composition, may circumvent metastable pathways and thereby force the assembly process directly into the correct pathway.
A new mathematical model in space optimization: A case study
NASA Astrophysics Data System (ADS)
Abdullah, Kamilah; Kamis, Nor Hanimah; Sha'ari, Nor Shahida; Muhammad Halim, Nurul Suhada; Hashim, Syaril Naqiah
2013-04-01
Most higher education institutions provide an area known as a learning centre where students can study or hold group discussions. However, some learning centres are not provided with enough tables and seats to accommodate students sufficiently. This study proposes a new mathematical model for optimizing the number of tables and seats at Laman Najib, Faculty of Computer and Mathematical Sciences, Universiti Teknologi MARA (UiTM) Shah Alam. The space capacity is improved to maximize the number of students who can use Laman Najib at the same time, by considering the types and sizes of tables appropriate for student discussions. Our result is compared with the result of the Simplex method of linear programming to ensure that the new model is valid and consistent with existing approaches. We conclude that round tables with six seats maximize the number of students who can use Laman Najib for discussions or group study. Both methods are also practical as alternative approaches for solving other space optimization problems.
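The seating-layout problem in this abstract reduces to a small integer program: choose counts of each table type to maximize seats subject to a floor-area budget. As a rough sketch (all areas, seat counts, and the `best_table_mix` helper are assumptions for illustration, not the paper's data), an exhaustive search is enough at this scale:

```python
# Illustrative sketch (not the paper's actual model or data): choose a mix of
# table types maximizing seating under a floor-area budget, the kind of
# integer program the abstract describes. All numbers below are assumptions.

def best_table_mix(total_area, table_types):
    """Brute-force the seat-maximizing mix of tables that fits in total_area.

    table_types: dict name -> (area per table in m^2, seats per table)
    Returns (max_seats, counts dict).
    """
    names = list(table_types)

    def search(i, area_left):
        if i == len(names):
            return 0, {}
        name = names[i]
        area, seats = table_types[name]
        best = (0, {})
        for n in range(int(area_left // area) + 1):  # try every feasible count
            sub_seats, sub_mix = search(i + 1, area_left - n * area)
            total = n * seats + sub_seats
            if total > best[0]:
                best = (total, {name: n, **sub_mix})
        return best

    return search(0, total_area)

# Hypothetical data: round tables (4 m^2, 6 seats) vs square tables (3 m^2, 4 seats),
# 60 m^2 of usable floor space.
seats, mix = best_table_mix(60, {"round": (4.0, 6), "square": (3.0, 4)})
print(seats, mix)  # → 90 {'round': 15}
```

With these assumed numbers the round tables win on seats per square metre, which is at least qualitatively consistent with the abstract's conclusion.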
Optimization of the artificial urinary sphincter: modelling and experimental validation
NASA Astrophysics Data System (ADS)
Marti, Florian; Leippold, Thomas; John, Hubert; Blunschi, Nadine; Müller, Bert
2006-03-01
The artificial urinary sphincter should be long enough to prevent strangulation of the urethral tissue and short enough to avoid improper dissection of the surrounding tissue. To optimize the sphincter length, an empirical three-parameter urethra compression model is proposed based on the mechanical properties of the urethra: wall pressure, tissue response rim force and sphincter periphery length. In vitro studies using explanted animal and human urethras and different artificial sphincters demonstrate its applicability. The pressure of the sphincter needed to close the urethra is shown to be a linear function of the bladder pressure, and the force needed to close the urethra depends linearly on the sphincter length. Human urethras display the same dependences as the urethras of pig, dog, sheep and calf; quantitatively, however, sow urethras best resemble the human ones. For the human urethras, the mean wall pressure corresponds to (-12.6 ± 0.9) cmH2O and (-8.7 ± 1.1) cmH2O, the rim length to (3.0 ± 0.3) mm and (5.1 ± 0.3) mm, and the rim force to (60 ± 20) mN and (100 ± 20) mN for urethra opening and closing, respectively. Assuming an intravesical pressure of 40 cmH2O and an external pressure on the urethra of 60 cmH2O, the model leads to an optimized sphincter length of (17.3 ± 3.8) mm.
Optimizing Crawler4j using MapReduce Programming Model
NASA Astrophysics Data System (ADS)
Siddesh, G. M.; Suresh, Kavya; Madhuri, K. Y.; Nijagal, Madhushree; Rakshitha, B. R.; Srinivasa, K. G.
2017-06-01
The World Wide Web is a decentralized system consisting of a repository of information in the form of web pages, which serve as a source of data in today's analytics world. Web crawlers extract useful information from web pages for several purposes. First, they are used in web search engines, where pages are indexed to form a corpus of information that users can query. Second, they are used for web archiving, where pages are stored for later analysis. Third, they can be used for web mining, where pages are monitored for copyright purposes. The amount of information a web crawler can process needs to be increased by exploiting modern parallel processing technologies. To address the parallelism and throughput of crawling, this work proposes optimizing Crawler4j, a web crawler that retrieves useful information about the pages it visits, using the Hadoop MapReduce programming model to parallelize the processing of large input data. Coupling Crawler4j with the data and computational parallelism of Hadoop MapReduce improves the throughput and accuracy of web crawling. The experimental results demonstrate that the proposed solution achieves significant improvements in performance and throughput, carving out a new methodology for optimizing web crawling.
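The map/shuffle/reduce split that Hadoop applies to crawl data can be imitated in a few lines. The sketch below (hypothetical record format and helper names; not the Crawler4j or Hadoop API) counts crawled pages per domain:

```python
# Minimal in-memory imitation of the MapReduce model the abstract describes
# (not the Hadoop or Crawler4j API): mappers emit (key, value) pairs from
# crawled page records, a shuffle groups values by key, and reducers aggregate.
from collections import defaultdict
from urllib.parse import urlparse

def mapper(page):
    """Emit (domain, 1) for each crawled page record {'url': ...}."""
    yield urlparse(page["url"]).netloc, 1

def reducer(domain, counts):
    """Aggregate all values shuffled to one key."""
    return domain, sum(counts)

def run_mapreduce(records):
    shuffled = defaultdict(list)
    for rec in records:                     # map phase (parallel on a cluster)
        for key, value in mapper(rec):
            shuffled[key].append(value)
    return dict(reducer(k, vs) for k, vs in shuffled.items())  # reduce phase

pages = [{"url": "http://a.example/x"}, {"url": "http://a.example/y"},
         {"url": "http://b.example/z"}]
print(run_mapreduce(pages))  # → {'a.example': 2, 'b.example': 1}
```

On a real Hadoop cluster the map and reduce phases run on separate workers over partitioned input, which is where the throughput gain comes from.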
Modeling and multidimensional optimization of a tapered free electron laser
NASA Astrophysics Data System (ADS)
Jiao, Y.; Wu, J.; Cai, Y.; Chao, A. W.; Fawley, W. M.; Frisch, J.; Huang, Z.; Nuhn, H.-D.; Pellegrini, C.; Reiche, S.
2012-05-01
The energy extraction efficiency of a free electron laser (FEL) can be greatly increased using a tapered undulator and self-seeding. However, the extraction rate is limited by various effects that eventually lead to saturation of the peak intensity and power. To better understand these effects, we develop a model extending the Kroll-Morton-Rosenbluth one-dimensional theory to include the physics of diffraction, optical guiding, and radially resolved particle trapping. The predictions of the model agree well with those of GENESIS single-frequency numerical simulations. In particular, we discuss the evolution of the electron-radiation interaction along the tapered undulator and show that the decrease of refractive guiding is the major cause of the efficiency reduction, particle detrapping, and subsequent saturation of the radiation power. With this understanding, we develop a multidimensional optimization scheme based on GENESIS simulations to increase the energy extraction efficiency via an improved taper profile and variation of the electron beam radius. We present optimization results for hard x-ray tapered FELs, and the dependence of the maximum extractable radiation power on various parameters of the initial electron beam, the radiation field, and the undulator system. We also study the effect of sideband growth in a tapered FEL. Such growth induces increased particle detrapping and thus decreased refractive guiding, which together strongly limit the overall energy extraction efficiency.
Modeling marine surface microplastic transport to assess optimal removal locations
NASA Astrophysics Data System (ADS)
Sherman, Peter; van Sebille, Erik
2016-01-01
Marine plastic pollution is an ever-increasing problem that demands immediate mitigation and reduction plans. Here, a model based on satellite-tracked buoy observations and scaled to a large data set of observations on microplastic from surface trawls was used to simulate the transport of plastics floating on the ocean surface from 2015 to 2025, with the goal of assessing the optimal marine microplastic removal locations for two scenarios: removing the most surface microplastic and reducing the impact on ecosystems, using plankton growth as a proxy. The simulations show that the optimal removal locations are primarily located off the coast of China and in the Indonesian Archipelago for both scenarios. Our estimates show that 31% of the modeled microplastic mass can be removed by 2025 using 29 plastic collectors operating at a 45% capture efficiency at these locations, compared with only 17% when the 29 plastic collectors are moored in the North Pacific garbage patch, between Hawaii and California. The overlap of ocean surface microplastics and phytoplankton growth can be reduced by 46% at our proposed locations, while sinks in the North Pacific can reduce the overlap by only 14%. These results indicate that oceanic plastic removal might be more effective, both in removing a greater microplastic mass and in reducing potential harm to marine life, when conducted closer to shore than inside the plastic accumulation zones in the centers of the gyres.
Optimal spatiotemporal reduced order modeling for nonlinear dynamical systems
NASA Astrophysics Data System (ADS)
LaBryer, Allen
Proposed in this dissertation is a novel reduced order modeling (ROM) framework called optimal spatiotemporal reduced order modeling (OPSTROM) for nonlinear dynamical systems. The OPSTROM approach is a data-driven methodology for the synthesis of multiscale reduced order models (ROMs) which can be used to enhance the efficiency and reliability of under-resolved simulations for nonlinear dynamical systems. In the context of nonlinear continuum dynamics, the OPSTROM approach relies on the concept of embedding subgrid-scale models into the governing equations in order to account for the effects due to unresolved spatial and temporal scales. Traditional ROMs neglect these effects, whereas most other multiscale ROMs account for these effects in ways that are inconsistent with the underlying spatiotemporal statistical structure of the nonlinear dynamical system. The OPSTROM framework presented in this dissertation begins with a general system of partial differential equations, which are modified for an under-resolved simulation in space and time with an arbitrary discretization scheme. Basic filtering concepts are used to demonstrate the manner in which residual terms, representing subgrid-scale dynamics, arise with a coarse computational grid. Models for these residual terms are then developed by accounting for the underlying spatiotemporal statistical structure in a consistent manner. These subgrid-scale models are designed to provide closure by accounting for the dynamic interactions between spatiotemporal macroscales and microscales which are otherwise neglected in a ROM. For a given resolution, the predictions obtained with the modified system of equations are optimal (in a mean-square sense) as the subgrid-scale models are based upon principles of mean-square error minimization, conditional expectations and stochastic estimation. Methods are suggested for efficient model construction, appraisal, error measure, and implementation with a couple of well-known time
Use of advanced modeling techniques to optimize thermal packaging designs.
Formato, Richard M; Potami, Raffaele; Ahmed, Iftekhar
2010-01-01
Through a detailed case study the authors demonstrate, for the first time, the capability of using advanced modeling techniques to correctly simulate the transient temperature response of a convective flow-based thermal shipper design. The objective of this case study was to demonstrate that simulation could be utilized to design a 2-inch-wall polyurethane (PUR) shipper to hold its product box temperature between 2 and 8 °C over the prescribed 96-h summer profile (product box is the portion of the shipper that is occupied by the payload). Results obtained from numerical simulation are in excellent agreement with empirical chamber data (within ±1 °C at all times), and geometrical locations of simulation maximum and minimum temperature match well with the corresponding chamber temperature measurements. Furthermore, a control simulation test case was run (results taken from identical product box locations) to compare the coupled conduction-convection model with a conduction-only model, which to date has been the state-of-the-art method. For the conduction-only simulation, all fluid elements were replaced with "solid" elements of identical size and assigned thermal properties of air. While results from the coupled thermal/fluid model closely correlated with the empirical data (±1 °C), the conduction-only model was unable to correctly capture the payload temperature trends, showing a sizeable error compared to empirical values (ΔT > 6 °C). A modeling technique capable of correctly capturing the thermal behavior of passively refrigerated shippers can be used to quickly evaluate and optimize new packaging designs. Such a capability provides a means to reduce the cost and required design time of shippers while simultaneously improving their performance. Another advantage comes from using thermal modeling (assuming a validated model is available) to predict the temperature distribution in a shipper that is exposed to ambient temperatures which were not bracketed
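For contrast with the full conduction-convection CFD model the study validates, the crudest possible thermal sketch of a passive shipper is a single lumped capacitance driven by the ambient profile. The toy below (assumed time constant and ambient profile, purely illustrative and far simpler than the study's simulation) integrates dT/dt = (T_ambient - T)/tau with forward Euler:

```python
# Toy lumped-capacitance sketch of a passively refrigerated shipper (assumed
# parameters; nothing like the coupled conduction-convection CFD model in the
# study): the product box temperature relaxes toward ambient with a single
# time constant, dT/dt = (T_ambient - T) / tau, integrated with forward Euler.
def simulate_box_temp(t_init, tau_hours, ambient_profile, dt=0.1):
    """ambient_profile: list of (duration_hours, ambient_C) segments."""
    temps, t = [t_init], t_init
    for hours, t_amb in ambient_profile:
        for _ in range(round(hours / dt)):
            t += dt * (t_amb - t) / tau_hours
            temps.append(t)
    return temps

# Hypothetical 96-h summer profile and a 30-h time constant for a thick PUR wall.
profile = [(24, 30.0), (48, 35.0), (24, 30.0)]
history = simulate_box_temp(5.0, 30.0, profile)
peak = max(history)
print(round(peak, 1))
```

A model this crude cannot capture payload temperature gradients or convective effects, which is exactly why the study's coupled thermal/fluid simulation matters; it only illustrates the transient relaxation behavior being predicted.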
Optimal Reservoir Operation using Stochastic Model Predictive Control
NASA Astrophysics Data System (ADS)
Sahu, R.; McLaughlin, D.
2016-12-01
Hydropower operations are typically designed to fulfill contracts negotiated with consumers who need reliable energy supplies, despite uncertainties in reservoir inflows. In addition to providing reliable power the reservoir operator needs to take into account environmental factors such as downstream flooding or compliance with minimum flow requirements. From a dynamical systems perspective, the reservoir operating strategy must cope with conflicting objectives in the presence of random disturbances. In order to achieve optimal performance, the reservoir system needs to continually adapt to disturbances in real time. Model Predictive Control (MPC) is a real-time control technique that adapts by deriving the reservoir release at each decision time from the current state of the system. Here an ensemble-based version of MPC (SMPC) is applied to a generic reservoir to determine both the optimal power contract, considering future inflow uncertainty, and a real-time operating strategy that attempts to satisfy the contract. Contract selection and real-time operation are coupled in an optimization framework that also defines a Pareto trade off between the revenue generated from energy production and the environmental damage resulting from uncontrolled reservoir spills. Further insight is provided by a sensitivity analysis of key parameters specified in the SMPC technique. The results demonstrate that SMPC is suitable for multi-objective planning and associated real-time operation of a wide range of hydropower reservoir systems.
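The receding-horizon idea behind ensemble-based MPC can be sketched compactly: at each decision time, evaluate candidate releases against an inflow ensemble, apply the best first move, and repeat. The toy below (made-up capacity, contract target, cost weights, and ensemble; not the paper's formulation) shows one such decision:

```python
# Schematic ensemble MPC decision for a single reservoir: an illustration of
# the idea, not the paper's formulation; every parameter here is made up.
# Cost penalizes shortfall below a contracted release and uncontrolled spills.
import random

CAP, TARGET = 100.0, 20.0          # storage capacity, contracted release
random.seed(0)

def cost(storage, release, inflows):
    """Shortfall + spill penalty over one ensemble member's horizon."""
    c, s = 0.0, storage
    for q in inflows:
        s += q - release
        spill = max(0.0, s - CAP)
        s = min(max(s, 0.0), CAP)
        c += max(0.0, TARGET - release) ** 2 + 10.0 * spill
    return c

def smpc_release(storage, ensemble, candidates):
    """Pick the candidate release with the lowest mean cost over the ensemble."""
    return min(candidates,
               key=lambda u: sum(cost(storage, u, m) for m in ensemble) / len(ensemble))

# Hypothetical 5-member, 4-step inflow ensemble; current storage 80.
ensemble = [[random.uniform(10, 40) for _ in range(4)] for _ in range(5)]
u = smpc_release(80.0, ensemble, candidates=[10, 15, 20, 25, 30])
print(u)
```

In a real SMPC loop only this first release is applied; the ensemble is then refreshed from updated forecasts and the optimization is repeated at the next decision time.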
Constrained Multiobjective Optimization Algorithm Based on Immune System Model.
Qian, Shuqu; Ye, Yongqiang; Jiang, Bin; Wang, Jianhong
2016-09-01
An immune optimization algorithm, based on a model of the biological immune system, is proposed to solve multiobjective optimization problems with multimodal nonlinear constraints. First, the initial population is divided into a feasible nondominated population and an infeasible/dominated population. The feasible nondominated individuals focus on exploring the nondominated front through cloning and hypermutation based on a proposed affinity design approach, while the infeasible/dominated individuals are exploited and improved via simulated binary crossover and polynomial mutation operations. Then, to accelerate convergence, a transformation technique is applied to the combination of the two offspring populations. Finally, a crowded-comparison strategy is used to create the next-generation population. In numerical experiments, a series of benchmark constrained multiobjective optimization problems are considered to evaluate the performance of the proposed algorithm, and it is compared to several state-of-the-art algorithms in terms of the inverted generational distance and hypervolume indicators. The results indicate that the new method achieves competitive performance, and even statistically significantly better results than previous algorithms, on most of the benchmark suite.
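The first two steps the abstract describes, splitting the population into feasible nondominated individuals versus the rest and then cloning and hypermutating the former, can be sketched on a toy constrained bi-objective problem (the problem, mutation scale, and helper names below are illustrative assumptions, not the authors' algorithm):

```python
# Toy sketch of the population split and clonal hypermutation steps described
# in the abstract; a simplified illustration, not the authors' algorithm.
# Made-up problem: minimize (x^2, (x-2)^2) subject to x >= 0.5.
import random
random.seed(1)

def objectives(x):
    return (x * x, (x - 2.0) ** 2)

def feasible(x):
    return x >= 0.5

def dominates(a, b):
    fa, fb = objectives(a), objectives(b)
    return all(u <= v for u, v in zip(fa, fb)) and fa != fb

def split(pop):
    """Feasible nondominated individuals vs. everyone else."""
    elite = [x for x in pop if feasible(x)
             and not any(dominates(y, x) for y in pop if feasible(y))]
    rest = [x for x in pop if x not in elite]
    return elite, rest

def clone_and_hypermutate(elite, clones=3, scale=0.1):
    """Each elite individual spawns perturbed clones; keep the feasible ones."""
    out = list(elite)
    for x in elite:
        for _ in range(clones):
            c = x + random.gauss(0.0, scale)
            if feasible(c):
                out.append(c)
    return out

pop = [0.2, 0.6, 1.0, 1.8, 2.5]
elite, rest = split(pop)
print(sorted(elite), sorted(rest))  # → [0.6, 1.0, 1.8] [0.2, 2.5]
```

The infeasible/dominated `rest` would instead undergo simulated binary crossover and polynomial mutation in the full algorithm, followed by the crowded-comparison selection.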
Optimization of Forward Wave Modeling on Contemporary HPC Architectures
Krueger, Jens; Micikevicius, Paulius; Williams, Samuel
2012-07-20
Reverse Time Migration (RTM) is one of the main approaches in the seismic processing industry for imaging the subsurface structure of the Earth. While RTM provides qualitative advantages over its predecessors, it has a high computational cost warranting implementation on HPC architectures. We focus on three progressively more complex kernels extracted from RTM: for isotropic (ISO), vertical transverse isotropic (VTI), and tilted transverse isotropic (TTI) media. In this work, we examine performance optimization of forward wave modeling, which describes the computational kernels used in RTM, on emerging multi- and manycore processors, and introduce a novel common subexpression elimination optimization for TTI kernels. We compare attained performance and energy efficiency in both single-node and distributed-memory environments in order to satisfy industry's demands for fidelity, performance, and energy efficiency. Moreover, we discuss the interplay between architecture (chip and system) and optimization (both on-node computation and MPI communication), highlighting the importance of NUMA-aware approaches. Ultimately, our results show that we can improve CPU energy efficiency by more than 10× on Magny Cours nodes, while acceleration via multiple GPUs can surpass the energy-efficient Intel Sandy Bridge by as much as 3.6×.
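The common-subexpression-elimination idea is simple to illustrate: in TTI kernels, trigonometric factors of the tilt angles are loop-invariant, so hoisting them out of the per-gridpoint loop removes redundant transcendental evaluations. A schematic sketch (not the paper's actual stencil):

```python
# Illustration of the common-subexpression-elimination idea for a TTI-style
# kernel (schematic; not the paper's actual stencil code): trigonometric
# factors of the tilt angles do not change inside the inner grid loop, so
# hoisting them computes each transcendental once instead of per grid point.
import math

def rotated_coeffs_naive(theta, phi, n):
    out = []
    for _ in range(n):  # inner loop over grid points
        out.append(math.sin(theta) * math.cos(phi)      # recomputed n times
                   + math.cos(theta) * math.sin(phi))
    return out

def rotated_coeffs_cse(theta, phi, n):
    st, ct = math.sin(theta), math.cos(theta)  # hoisted: computed once
    sp, cp = math.sin(phi), math.cos(phi)
    coeff = st * cp + ct * sp
    return [coeff] * n

# Identical results, far fewer transcendental evaluations.
assert rotated_coeffs_naive(0.3, 0.7, 4) == rotated_coeffs_cse(0.3, 0.7, 4)
```

In a production TTI kernel the same transformation applies to the many shared products of stencil coefficients, where the savings multiply across every grid point per time step.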
Optimizing cardiovascular benefits of exercise: a review of rodent models.
Davis, Brittany; Moriguchi, Takeshi; Sumpio, Bauer
2013-03-01
Although research unanimously maintains that exercise can ward off cardiovascular disease (CVD), the optimal type, duration, intensity, and combination of forms are not yet clear. In our review of existing rodent-based studies on exercise and cardiovascular health, we attempt to identify the optimal forms, intensities, and durations of exercise. Using Scopus and Medline, a literature review of English-language comparative journal studies of cardiovascular benefits and exercise was performed. This review examines the existing literature on rodent models of aerobic, anaerobic, and power exercise and compares the benefits of various training forms, intensities, and durations. The rodent studies reviewed in this article correlate with reports on human subjects suggesting that regular aerobic exercise can improve cardiac and vascular structure and function, as well as lipid profiles, and reduce the risk of CVD. The findings demonstrate an abundance of rodent-based aerobic studies but a lack of anaerobic and power forms of exercise, as well as of comparisons of these three components of exercise. Thus, further studies must be conducted to determine a truly optimal regimen for cardiovascular health.
Multi-level systems modeling and optimization for novel aircraft
NASA Astrophysics Data System (ADS)
Subramanian, Shreyas Vathul
This research combines the disciplines of system-of-systems (SoS) modeling, platform-based design, optimization and evolving design spaces to achieve a novel capability for designing solutions to key aeronautical mission challenges. A central innovation in this approach is the confluence of multi-level modeling (from sub-systems to the aircraft system to aeronautical system-of-systems) in a way that coordinates the appropriate problem formulations at each level and enables parametric search in design libraries for solutions that satisfy level-specific objectives. The work here addresses the topic of SoS optimization and discusses problem formulation, solution strategy, the need for new algorithms that address special features of this problem type, and also demonstrates these concepts using two example application problems - a surveillance UAV swarm problem, and the design of noise optimal aircraft and approach procedures. This topic is critical since most new capabilities in aeronautics will be provided not just by a single air vehicle, but by aeronautical Systems of Systems (SoS). At the same time, many new aircraft concepts are pressing the boundaries of cyber-physical complexity through the myriad of dynamic and adaptive sub-systems that are rising up the TRL (Technology Readiness Level) scale. This compositional approach is envisioned to be active at three levels: validated sub-systems are integrated to form conceptual aircraft, which are further connected with others to perform a challenging mission capability at the SoS level. While these multiple levels represent layers of physical abstraction, each discipline is associated with tools of varying fidelity forming strata of 'analysis abstraction'. Further, the design (composition) will be guided by a suitable hierarchical complexity metric formulated for the management of complexity in both the problem (as part of the generative procedure and selection of fidelity level) and the product (i.e., is the mission
Optimization of Glioblastoma Mouse Orthotopic Xenograft Models for Translational Research.
Irtenkauf, Susan M; Sobiechowski, Susan; Hasselbach, Laura A; Nelson, Kevin K; Transou, Andrea D; Carlton, Enoch T; Mikkelsen, Tom; deCarvalho, Ana C
2017-08-01
Glioblastoma is an aggressive primary brain tumor predominantly localized to the cerebral cortex. We developed a panel of patient-derived mouse orthotopic xenografts (PDOX) for preclinical drug studies by implanting cancer stem cells (CSC) cultured from fresh surgical specimens intracranially into 8-wk-old female athymic nude mice. Here we optimize the glioblastoma PDOX model by assessing the effect of implantation location on tumor growth, survival, and histologic characteristics. To trace the distribution of intracranial injections, toluidine blue dye was injected at 4 locations with defined mediolateral, anterioposterior, and dorsoventral coordinates within the cerebral cortex. Glioblastoma CSC from 4 patients and a glioblastoma nonstem-cell line were then implanted by using the same coordinates for evaluation of tumor location, growth rate, and morphologic and histologic features. Dye injections into one of the defined locations resulted in dye dissemination throughout the ventricles, whereas tumor cell implantation at the same location resulted in a much higher percentage of small multifocal ventricular tumors than did the other 3 locations tested. Ventricular tumors were associated with a lower tumor growth rate, as measured by in vivo bioluminescence imaging, and decreased survival in 4 of 5 cell lines. In addition, tissue oxygenation, vasculature, and the expression of astrocytic markers were altered in ventricular tumors compared with nonventricular tumors. Based on this information, we identified an optimal implantation location that avoided the ventricles and favored cortical tumor growth. To assess the effects of stress from oral drug administration, mice that underwent daily gavage were compared with stress-positive and -negative control groups. Oral gavage procedures did not significantly affect the survival of the implanted mice or physiologic measurements of stress. Our findings document the importance of optimization of the implantation site for
Tool Steel Heat Treatment Optimization Using Neural Network Modeling
NASA Astrophysics Data System (ADS)
Podgornik, Bojan; Belič, Igor; Leskovšek, Vojteh; Godec, Matjaz
2016-11-01
Optimization of tool steel properties and the corresponding heat treatment is mainly based on a trial-and-error approach, which requires tremendous experimental work and resources. There is therefore a great need for tools that can predict the mechanical properties of tool steels as a function of composition and heat treatment process variables. The aim of the present work was to explore the potential of artificial neural network-based modeling to select and optimize vacuum heat treatment conditions depending on the hot work tool steel composition and required properties. In the current case, training of the feedforward neural network, with an error-backpropagation training scheme and four layers of neurons (8-20-20-2), was based on experimentally obtained tempering diagrams for ten different hot work tool steel compositions and at least two austenitizing temperatures. Results show that this type of modeling can be successfully used for detailed and multifunctional analysis of different influential parameters as well as to optimize the heat treatment process of hot work tool steels depending on composition. In terms of composition, V was found to be the most beneficial alloying element, increasing both hardness and fracture toughness of hot work tool steel; Si, Mn, and Cr increase hardness but reduce fracture toughness, while Mo has the opposite effect. An optimum concentration providing high KIc/HRC ratios would include 0.75 pct Si, 0.4 pct Mn, 5.1 pct Cr, 1.5 pct Mo, and 0.5 pct V, with the optimum heat treatment performed at lower austenitizing and intermediate tempering temperatures.
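For readers unfamiliar with the 8-20-20-2 notation, it denotes 8 inputs, two hidden layers of 20 neurons each, and 2 outputs. A bare forward pass of such a network is sketched below with random weights and normalized hypothetical inputs (illustrative only; the authors trained theirs by error backpropagation on tempering diagrams, and the input features and output scaling here are assumptions):

```python
# Sketch of the 8-20-20-2 feedforward architecture the abstract mentions, as
# a plain forward pass with random weights (illustration only; the authors
# trained theirs with error backpropagation, and the choice of 8 normalized
# composition/heat-treatment inputs and 2 scaled outputs here is assumed).
import math, random
random.seed(42)

LAYERS = [8, 20, 20, 2]  # 8 inputs -> 20 -> 20 -> 2 outputs

def make_weights(layers):
    """Random weight matrices, one row per neuron, with a trailing bias weight."""
    return [[[random.uniform(-0.5, 0.5) for _ in range(n_in + 1)]
             for _ in range(n_out)]
            for n_in, n_out in zip(layers, layers[1:])]

def forward(x, weights):
    """Propagate input x through all layers with tanh activations."""
    a = list(x)
    for layer in weights:
        a = [math.tanh(sum(w * v for w, v in zip(neuron, a + [1.0])))
             for neuron in layer]
    return a

w = make_weights(LAYERS)
x = [0.75, 0.4, 0.51, 0.15, 0.05, 0.01, 0.52, 0.6]  # hypothetical normalized inputs
out = forward(x, w)
print(len(out))  # two outputs, e.g. scaled hardness and fracture toughness
```

Training would adjust `w` by backpropagating the error against measured tempering-diagram data; only the untrained forward structure is shown here.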
Modeling and optimization of energy storage system for microgrid
NASA Astrophysics Data System (ADS)
Qiu, Xin
The vanadium redox flow battery (VRB) is well suited to microgrid and renewable energy applications. This thesis presents a practical analysis of the battery itself and of its application in microgrid systems. The first paper analyzes VRB use in a microgrid system. Its first part develops a reduced-order circuit model of the VRB and analyzes its experimental performance efficiency during deployment; statistical methods and neural network approximation are used to estimate the system parameters. Its second part addresses implementation issues of the VRB in a photovoltaic-based microgrid system, proposing a new dc-dc converter that provides improved charging performance. The paper was published in IEEE Transactions on Smart Grid, Vol. 5, No. 4, July 2014. The second paper studies VRB use within a microgrid system from a practical perspective. A reduced-order circuit model of the VRB is introduced that includes the losses from the balance of plant, including system and environmental controls. The proposed model includes the circulation pumps and the HVAC system that regulates the environment of the VRB enclosure; the VRB model is thus extended to include the ESS environmental controls and yields a more realistic efficiency profile. The paper was submitted to IEEE Transactions on Sustainable Energy. The third paper discusses the optimal control strategy when a VRB works with another type of battery in a microgrid system. Extending the work of the first paper, a high-level control strategy is developed that coordinates a lead-acid battery and a VRB using reinforcement learning. The paper is to be submitted to IEEE Transactions on Smart Grid.
Optimizing nanomedicine pharmacokinetics using physiologically based pharmacokinetics modelling
Moss, Darren Michael; Siccardi, Marco
2014-01-01
The delivery of therapeutic agents is characterized by numerous challenges including poor absorption, low penetration in target tissues and non-specific dissemination in organs, leading to toxicity or poor drug exposure. Several nanomedicine strategies have emerged as an advanced approach to enhance drug delivery and improve the treatment of several diseases. Numerous processes mediate the pharmacokinetics of nanoformulations, with the absorption, distribution, metabolism and elimination (ADME) being poorly understood and often differing substantially from traditional formulations. Understanding how nanoformulation composition and physicochemical properties influence drug distribution in the human body is of central importance when developing future treatment strategies. A helpful pharmacological tool to simulate the distribution of nanoformulations is represented by physiologically based pharmacokinetics (PBPK) modelling, which integrates system data describing a population of interest with drug/nanoparticle in vitro data through a mathematical description of ADME. The application of PBPK models for nanomedicine is in its infancy and characterized by several challenges. The integration of property–distribution relationships in PBPK models may benefit nanomedicine research, giving opportunities for innovative development of nanotechnologies. PBPK modelling has the potential to improve our understanding of the mechanisms underpinning nanoformulation disposition and allow for more rapid and accurate determination of their kinetics. This review provides an overview of the current knowledge of nanomedicine distribution and the use of PBPK modelling in the characterization of nanoformulations with optimal pharmacokinetics. Linked Articles This article is part of a themed section on Nanomedicine. To view the other articles in this section visit http://dx.doi.org/10.1111/bph.2014.171.issue-17 PMID:24467481
Optimal hemodynamic response model for functional near-infrared spectroscopy
Kamran, Muhammad A.; Jeong, Myung Yung; Mannan, Malik M. N.
2015-01-01
Functional near-infrared spectroscopy (fNIRS) is an emerging non-invasive brain imaging technique that measures brain activity by means of near-infrared light of 650–950 nm wavelengths. The cortical hemodynamic response (HR) differs in its attributes across brain regions and across repetitions of trials, even if the experimental paradigm is kept exactly the same. Therefore, an HR model that can estimate such variations in the response is the objective of this research. The canonical hemodynamic response function (cHRF) is modeled by two Gamma functions with six unknown parameters (four of them modeling the shape, and the other two the scale and baseline, respectively). The measured signal is assumed to be a linear combination of the HRF, a baseline, and physiological noises (the amplitudes and frequencies of the physiological noises are assumed to be unknown). An objective function is formulated as the square of the residuals, with constraints on the 12 free parameters. The formulated problem is solved using an iterative optimization algorithm to estimate the unknown parameters of the model. Inter-subject variations in the HRF and physiological noises have been estimated to obtain better cortical functional maps. The accuracy of the algorithm has been verified using 10 real and 15 simulated data sets. Ten healthy subjects participated in the experiment, and their HRFs for finger-tapping tasks were estimated and analyzed. The statistical significance of the estimated activity-strength parameters has been verified by statistical analysis (i.e., t-value > t-critical and p-value < 0.05). PMID:26136668
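The two-Gamma cHRF described above can be written down directly. The sketch below uses the common SPM-style default parameters as an assumption — the paper instead estimates the shape, scale, and baseline parameters per subject by constrained iterative optimization.

```python
import numpy as np
from math import gamma

def chrf(t, a1=6.0, b1=1.0, a2=16.0, b2=1.0, c=1.0 / 6.0):
    """Canonical HRF as a difference of two Gamma densities.

    a1, b1 shape the positive response; a2, b2 shape the undershoot;
    c scales the undershoot. Defaults follow the common SPM-style
    parameterisation, not necessarily the values fitted in the paper.
    """
    g1 = t ** (a1 - 1) * np.exp(-t / b1) / (b1 ** a1 * gamma(a1))
    g2 = t ** (a2 - 1) * np.exp(-t / b2) / (b2 ** a2 * gamma(a2))
    return g1 - c * g2

t = np.linspace(0.0, 30.0, 301)   # seconds
h = chrf(t)
t_peak = t[np.argmax(h)]          # peak of the hemodynamic response
```

With these defaults the response peaks near 5 s and shows the characteristic post-stimulus undershoot; a fitting routine would wrap `chrf` in a squared-residual objective over the 12 free signal-model parameters.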
A canopy-type similarity model for wind farm optimization
NASA Astrophysics Data System (ADS)
Markfort, Corey D.; Zhang, Wei; Porté-Agel, Fernando
2013-04-01
The atmospheric boundary layer (ABL) flow through and over wind farms has been found to be similar to canopy-type flows, with characteristic flow development and shear penetration length scales (Markfort et al., 2012). Wind farms capture momentum from the ABL both at the leading edge and from above. We examine this further with an analytical canopy-type model. Within the flow development region, momentum is advected into the wind farm and wake turbulence draws excess momentum in from between turbines. This spatial heterogeneity of momentum within the wind farm is characterized by large dispersive momentum fluxes. Once the flow within the farm is developed, the area-averaged velocity profile exhibits a characteristic inflection point near the top of the wind farm, similar to that of canopy-type flows. The inflected velocity profile is associated with the presence of a dominant characteristic turbulence scale, which may be responsible for a significant portion of the vertical momentum flux. Prediction of this scale is useful for determining the amount of available power for harvesting. The new model is tested with results from wind tunnel experiments, which were conducted to characterize the turbulent flow in and above model wind farms in aligned and staggered configurations. The model is useful for representing wind farms in regional scale models, for the optimization of wind farms considering wind turbine spacing and layout configuration, and for assessing the impacts of upwind wind farms on nearby wind resources. Markfort CD, W Zhang and F Porté-Agel. 2012. Turbulent flow and scalar transport through and over aligned and staggered wind farms. Journal of Turbulence. 13(1) N33: 1-36. doi:10.1080/14685248.2012.709635.
Pulsed pumping process optimization using a potential flow model.
Tenney, C M; Lastoskie, C M
2007-08-15
A computational model is applied to the optimization of pulsed pumping systems for efficient in situ remediation of groundwater contaminants. In the pulsed pumping mode of operation, periodic rather than continuous pumping is used. During the pump-off or trapping phase, natural gradient flow transports contaminated groundwater into a treatment zone surrounding a line of injection and extraction wells that transect the contaminant plume. Prior to breakthrough of the contaminated water from the treatment zone, the wells are activated and the pump-on or treatment phase ensues, wherein extracted water is augmented to stimulate pollutant degradation and recirculated for a sufficient period of time to achieve mandated levels of contaminant removal. An important design consideration in pulsed pumping groundwater remediation systems is the pumping schedule adopted to best minimize operational costs for the well grid while still satisfying treatment requirements. Using an analytic two-dimensional potential flow model, optimal pumping frequencies and pumping event durations have been investigated for a set of model aquifer-well systems with different well spacings and well-line lengths, and varying aquifer physical properties. The results for homogeneous systems with greater than five wells and moderate to high pumping rates are reduced to a single, dimensionless correlation. Results for heterogeneous systems are presented graphically in terms of dimensionless parameters to serve as an efficient tool for initial design and selection of the pumping regimen best suited for pulsed pumping operation for a particular well configuration and extraction rate. In the absence of significant retardation or degradation during the pump-off phase, average pumping rates for pulsed operation were found to be greater than the continuous pumping rate required to prevent contaminant breakthrough.
NASA Technical Reports Server (NTRS)
Sherry, Lance; Ferguson, John; Hoffman, Karla; Donohue, George; Beradino, Frank
2012-01-01
This report describes the Airline Fleet, Route, and Schedule Optimization Model (AFRS-OM), which is designed to provide insights into airline decision-making with regard to markets served, the schedule of flights in these markets, the type of aircraft assigned to each scheduled flight, load factors, airfares, and airline profits. The main inputs to the model are hedged fuel prices, airport capacity limits, and candidate markets. Embedded in the model are aircraft performance and associated cost factors, and willingness-to-pay (i.e., demand vs. airfare) curves. Case studies demonstrate the application of the model to analysis of the effects of increased capacity and changes in operating costs (e.g., fuel prices). Although there are differences between airports (due to differences in the magnitude of travel demand and sensitivity to airfare), the system is more sensitive to changes in fuel prices than to capacity. Further, the benefits of modernization in the form of increased capacity could be undermined by increases in hedged fuel prices.
Modeling digital breast tomosynthesis imaging systems for optimization studies
NASA Astrophysics Data System (ADS)
Lau, Beverly Amy
Digital breast tomosynthesis (DBT) is a new imaging modality for breast imaging. In tomosynthesis, multiple images of the compressed breast are acquired at different angles, and the projection view images are reconstructed to yield images of slices through the breast. One of the main problems to be addressed in the development of DBT is determining the optimal parameter settings to obtain images ideal for the detection of cancer. Since it would be unethical to irradiate women multiple times to explore potentially optimal geometries for tomosynthesis, it is preferable to use a computer simulation to generate projection images. Existing tomosynthesis models have modeled scatter and detector effects without accounting for the oblique angles of incidence that tomosynthesis introduces. Moreover, these models frequently use geometry-specific physical factors measured from real systems, which severely limits the robustness of their algorithms for optimization. The goal of this dissertation was to design the framework for a computer simulation of tomosynthesis that would produce images sensitive to changes in acquisition parameters, so that an optimization study would be feasible. A computer physics simulation of the tomosynthesis system was developed. The x-ray source was modeled as a polychromatic spectrum based on published spectral data, and the inverse-square law was applied. Scatter was applied using a convolution method with angle-dependent scatter point spread functions (sPSFs), followed by scaling using an angle-dependent scatter-to-primary ratio (SPR). Monte Carlo simulations were used to generate sPSFs for a 5-cm breast with a 1-cm air gap. Detector effects were included through geometric propagation of the image onto layers of the detector, which were blurred using depth-dependent detector point-response functions (PRFs). Depth-dependent PRFs were calculated every 5 microns through a 200-micron-thick CsI detector using Monte Carlo simulations. Electronic noise was added as Gaussian noise as a
NASA Astrophysics Data System (ADS)
Tsujimoto, Kumiko; Homma, Koki; Koike, Toshio; Ohta, Tetsu
2013-04-01
A coupled model of a distributed hydrological model and a rice growth model was developed in this study. The distributed hydrological model used in this study is the Water and Energy Budget-based Distributed Hydrological Model (WEB-DHM) developed by Wang et al. (2009). This model includes a modified SiB2 (Simple Biosphere Model, Sellers et al., 1996) and the Geomorphology-Based Hydrological Model (GBHM), and thus it can physically calculate both water and energy fluxes. The rice growth model used in this study is the Simulation Model for Rice-Weather relations (SIMRIW)-rainfed developed by Homma et al. (2009). This is an updated version of the original SIMRIW (Horie et al., 1987) and can calculate rice growth while considering the yield reduction due to water stress. The purpose of the coupling is the integration of hydrology and crop science to develop a tool to support decision making 1) for determining the necessary agricultural water resources and 2) for allocating limited water resources to various sectors. Efficient water use and optimal water allocation in the agricultural sector are necessary to balance supply and demand of limited water resources. In addition, variations in available soil moisture are the main reason for variations in rice yield. In our model, soil moisture and the Leaf Area Index (LAI) are calculated inside SIMRIW-rainfed, so that these variables can be simulated dynamically and more precisely for rice than by the more general calculations in the original WEB-DHM. At the same time, by coupling SIMRIW-rainfed with WEB-DHM, the lateral flow of soil water, the increase in soil moisture and reduction of river discharge due to irrigation, and their effects on rice growth can be calculated. Agricultural information such as planting date, rice cultivar, and fertilization amount is given in a fully distributed manner. The coupled model was validated using LAI and soil moisture in a small basin in western Cambodia (Sangker River Basin). This
Optimization modeling to maximize population access to comprehensive stroke centers
Branas, Charles C.; Kasner, Scott E.; Wolff, Catherine; Williams, Justin C.; Albright, Karen C.; Carr, Brendan G.
2015-01-01
Objective: The location of comprehensive stroke centers (CSCs) is critical to ensuring rapid access to acute stroke therapies; we conducted a population-level virtual trial simulating change in access to CSCs using optimization modeling to selectively convert primary stroke centers (PSCs) to CSCs. Methods: Up to 20 certified PSCs per state were selected for conversion to maximize the population with 60-minute CSC access by ground and air. Access was compared across states based on region and the presence of state-level emergency medical service policies preferentially routing patients to stroke centers. Results: In 2010, there were 811 Joint Commission PSCs and 0 CSCs in the United States. Of the US population, 65.8% had 60-minute ground access to PSCs. After adding up to 20 optimally located CSCs per state, 63.1% of the US population had 60-minute ground access and 86.0% had 60-minute ground/air access to a CSC. Across states, median CSC access was 55.7% by ground (interquartile range 35.7%–71.5%) and 85.3% by ground/air (interquartile range 59.8%–92.1%). Ground access was lower in Stroke Belt states compared with non–Stroke Belt states (32.0% vs 58.6%, p = 0.02) and lower in states without emergency medical service routing policies (52.7% vs 68.3%, p = 0.04). Conclusion: Optimal system simulation can be used to develop efficient care systems that maximize accessibility. Under optimal conditions, a large proportion of the US population will be unable to access a CSC within 60 minutes. PMID:25740858
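The CSC-placement task above is an instance of the maximal covering location problem. A minimal greedy sketch illustrates the selection logic; the coverage sets and populations below are invented for illustration, not the study's certified-center data.

```python
def greedy_coverage(cover, pop, k):
    """Greedy maximal covering: pick up to k centers to maximise
    covered population.

    cover: dict center -> set of region ids within (say) 60 minutes
    pop:   dict region id -> population
    The greedy rule gives the classic (1 - 1/e) approximation guarantee.
    """
    chosen, covered = [], set()
    for _ in range(k):
        best, gain = None, 0
        for c, regions in cover.items():
            if c in chosen:
                continue
            g = sum(pop[r] for r in regions - covered)  # marginal gain
            if g > gain:
                best, gain = c, g
        if best is None:          # no remaining center adds coverage
            break
        chosen.append(best)
        covered |= cover[best]
    return chosen, sum(pop[r] for r in covered)

# Toy instance: three candidate centers, four population regions.
cover = {"A": {1, 2}, "B": {2, 3}, "C": {4}}
pop = {1: 50, 2: 30, 3: 20, 4: 40}
chosen, total = greedy_coverage(cover, pop, 2)
```

Here the greedy pass first picks "A" (80 people) and then "C" (40 more), since "B" would add only the 20 people of region 3 once region 2 is already covered.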
Optimal control of suspended sediment distribution model of Talaga lake
NASA Astrophysics Data System (ADS)
Ratianingsih, R.; Resnawati, Azim, Mardlijah, Widodo, B.
2017-08-01
Talaga Lake is one of several lakes in Central Sulawesi that could potentially be managed under a multi-purpose scheme because of its characteristics. The scheme addresses not only lake maintenance with respect to its sediment but also algae farming for biodiesel fuel. This paper presents a suspended sediment distribution model of Talaga Lake. The model is derived from the two-dimensional hydrodynamic shallow water equations, expressing the mass and momentum conservation laws of sediment transport. An order reduction of the model gives a hyperbolic system of six equations in the depth, the two directional velocity components, and the sediment concentration, while the bed elevation and the second-order turbulent diffusion and dispersion terms are neglected. The system is discretized and linearized so that it can be solved numerically by the Keller box method for given initial and boundary conditions. The solutions show that the downstream velocity plays a role in the transversal direction of the stream-function flow. The accumulated downstream sediment indicates that the suspended sediment and its variation should be controlled by optimizing the downstream velocity and the transversal change in suspended sediment, in line with the needs of ideal algae growth.
Software Piracy Detection Model Using Ant Colony Optimization Algorithm
NASA Astrophysics Data System (ADS)
Astiqah Omar, Nor; Zakuan, Zeti Zuryani Mohd; Saian, Rizauddin
2017-06-01
The Internet enables information to be accessible anytime and anywhere. This scenario creates an environment in which information can be easily copied. Easy access to the Internet is one of the factors contributing to piracy in Malaysia as well as the rest of the world. The Compliance Gap BSA Global Software Survey on software piracy, conducted in 2013, found that 43 percent of the software installed on PCs around the world was not properly licensed, and the commercial value of the unlicensed installations worldwide was reported to be $62.7 billion. Piracy can happen anywhere, including universities. Malaysia, like other countries in the world, faces issues of piracy committed by students in universities. Piracy in universities concerns acts of stealing intellectual property. It can take the form of software piracy, music piracy, movie piracy, and piracy of intellectual materials such as books, articles, and journals. This scenario affects the owners of intellectual property, as their property is in jeopardy. This study developed a classification model for detecting software piracy. The model was developed using a swarm intelligence algorithm called the Ant Colony Optimization algorithm. The training data were collected in a study conducted at Universiti Teknologi MARA (Perlis). Experimental results show that the model's detection accuracy rate is better than that of the J48 algorithm.
Modeling the minimum enzymatic requirements for optimal cellulose conversion
NASA Astrophysics Data System (ADS)
den Haan, R.; van Zyl, J. M.; Harms, T. M.; van Zyl, W. H.
2013-06-01
Hydrolysis of cellulose is achieved by the synergistic action of endoglucanases, exoglucanases and β-glucosidases. Most cellulolytic microorganisms produce a varied array of these enzymes and the relative roles of the components are not easily defined or quantified. In this study we have used partially purified cellulases produced heterologously in the yeast Saccharomyces cerevisiae to increase our understanding of the roles of some of these components. CBH1 (Cel7), CBH2 (Cel6) and EG2 (Cel5) were separately produced in recombinant yeast strains, allowing their isolation free of any contaminating cellulolytic activity. Binary and ternary mixtures of the enzymes at loadings ranging between 3 and 100 mg g-1 Avicel allowed us to illustrate the relative roles of the enzymes and their levels of synergy. A mathematical model was created to simulate the interactions of these enzymes on crystalline cellulose, under both isolated and synergistic conditions. Laboratory results from the various mixtures at a range of loadings of recombinant enzymes allowed refinement of the mathematical model. The model can further be used to predict the optimal synergistic mixes of the enzymes. This information can subsequently be applied to help to determine the minimum protein requirement for complete hydrolysis of cellulose. Such knowledge will be greatly informative for the design of better enzymatic cocktails or processing organisms for the conversion of cellulosic biomass to commodity products.
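As a toy illustration of screening binary and ternary enzyme mixtures for synergy, the sketch below grid-searches CBH1/CBH2/EG2 mass fractions over a simplex against an invented quadratic yield surface. The surface coefficients are assumptions made purely for the sketch; the paper instead fits its model to measured Avicel hydrolysis data.

```python
import numpy as np

def yield_model(f):
    """Synthetic hydrolysis-yield surface (illustrative only):
    linear single-enzyme terms plus pairwise synergy boosts."""
    cbh1, cbh2, eg2 = f
    base = 0.4 * cbh1 + 0.3 * cbh2 + 0.2 * eg2
    synergy = 0.8 * cbh1 * eg2 + 0.5 * cbh1 * cbh2
    return base + synergy

# Grid search over mass fractions summing to 1 (step 0.05).
best_f, best_y = None, -1.0
fracs = np.arange(0.0, 1.0 + 1e-9, 0.05)
for f1 in fracs:
    for f2 in fracs:
        f3 = 1.0 - f1 - f2
        if f3 < -1e-9:            # outside the simplex
            continue
        f = (f1, f2, max(f3, 0.0))
        y = yield_model(f)
        if y > best_y:
            best_f, best_y = f, y
```

On this invented surface the screen favours a CBH1/EG2 binary mixture with no CBH2, showing how such a model can predict optimal synergistic mixes before committing enzyme stocks.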
Multiscale modelling of hydrothermal biomass pretreatment for chip size optimization.
Hosseini, Seyed Ali; Shah, Nilay
2009-05-01
The objective of this work is to develop a relationship between biomass chip size and the energy requirement of hydrothermal pretreatment processes using a multiscale modelling approach. The severity factor or modified severity factor is currently used to characterize some hydrothermal pretreatment methods. Although these factors enable an easy comparison of experimental results to facilitate process design and operation, they are not representative of all the factors affecting the efficiency of pretreatment, because processes with the same temperature, residence time, and pH will not have the same effect on biomass chips of different sizes. In our study, a model based on the diffusion of liquid or steam into the biomass, which takes into account the interrelationship between chip size and time, is developed. With the aid of the developed model, a method is proposed to find the optimum chip size that minimizes the combined energy requirement of the grinding and pretreatment processes. We show that with the proposed optimization method, an average saving equivalent to a 5% improvement in the yield of the biomass-to-ethanol conversion process can be achieved.
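The optimization idea can be illustrated with a toy energy trade-off: grinding energy rises as chips get smaller, while diffusion-limited pretreatment energy rises with chip size (diffusion time scales roughly with the square of the characteristic length). The functional forms and coefficients below are assumptions for the sketch, not the paper's model.

```python
import numpy as np

# Illustrative coefficients for the two energy terms.
a, b = 8.0, 1.0

def energy(d):
    """Total energy vs. chip size d: grinding ~ a/d, pretreatment ~ b*d**2."""
    return a / d + b * d ** 2

# Closed-form optimum of a/d + b*d**2:  d* = (a / (2*b)) ** (1/3)
d_star = (a / (2 * b)) ** (1.0 / 3.0)

# Numerical cross-check on a fine grid.
d = np.linspace(0.1, 5.0, 5000)
d_num = d[np.argmin(energy(d))]
```

The closed-form and grid minima agree, which is the basic structure of a chip-size optimum: the total energy is convex in d, so a unique minimizing chip size exists between the two asymptotic regimes.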
Results of Satellite Brightness Modeling Using Kriging Optimized Interpolation
NASA Astrophysics Data System (ADS)
Weeden, C.; Hejduk, M.
At the 2005 AMOS conference, Kriging Optimized Interpolation (KOI) was presented as a tool to model satellite brightness as a function of phase angle and solar declination angle (J.M. Okada and M.D. Hejduk). Since November 2005, this method has been used to support the tasking algorithm for all optical sensors in the Space Surveillance Network (SSN). The satellite brightness maps generated by the KOI program are compared to each sensor's ability to detect an object as a function of the brightness of the background sky and the angular rate of the object. This determines whether the sensor can technically detect an object, based on an explicit calculation of the object's probability of detection. In addition, recent upgrades at Ground-Based Electro-Optical Deep Space Surveillance (GEODSS) sites have increased the amount and quality of brightness data collected and therefore available for analysis. This in turn has provided enough data to study the modeling process in more detail in order to obtain the most accurate brightness prediction of satellites. Analysis of two years of brightness data gathered from optical sensors and modeled via KOI solutions is outlined in this paper. By comparison, geostationary objects (GEO) were tracked less than non-GEO objects but had higher-density tracking in phase angle due to artifices of scheduling. A statistically significant fit to a deterministic model was possible less than half the time in both GEO and non-GEO tracks, showing that a stochastic model must often be used alone to produce brightness results, but such results are nonetheless serviceable. Within the Kriging solution, the exponential variogram model was the most frequently employed in both GEO and non-GEO tracks, indicating that monotonic brightness variation with both phase and solar declination angle is common and testifying to the suitability of applying regionalized variable theory to this particular problem. Finally, the average nugget value, or
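The exponential variogram model mentioned above has a simple closed form. The nugget, sill, and range values below are illustrative assumptions, not fitted values from the SSN data.

```python
import numpy as np

def exp_variogram(h, nugget=0.05, sill=1.0, corr_range=20.0):
    """Exponential variogram: gamma(h) = nugget + sill*(1 - exp(-h/range)).

    The nugget is the discontinuity at the origin (measurement noise and
    micro-scale variation); the sill is the variance plateau approached
    as lag h grows; the range sets how quickly correlation decays.
    """
    h = np.asarray(h, dtype=float)
    return nugget + sill * (1.0 - np.exp(-h / corr_range))

h = np.array([0.0, 10.0, 100.0])   # lags in, e.g., degrees of phase angle
g = exp_variogram(h)
```

Inside a Kriging fit, this function supplies the covariance structure from which the interpolation weights are solved; a monotone variogram like this one is exactly what the abstract's "monotonic brightness variation" finding implies.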
Li, X; Zhang, X; Wang, Y; Wu, Y
2000-12-01
On the basis of analyzing the distribution of water resources and the canal water utilization coefficient of oases on the southern margin of the Taklamakan Desert, observing the wind-prevention efficiency of shelterbelts through simulation experiments in a wind tunnel, and 15 years of research on the comprehensive control of desertified land in Cele Oasis, a series of optimal models for the sustainable management of the oasis ecosystem on the southern margin of the Taklamakan Desert is proposed: the optimal model of the "moderated oasis", the optimal model of wind-break structure, the optimal model of comprehensive control of desertified land, and the optimal model of crop planting structure.
An optimization model for the agroindustrial sector in Antioquia (Colombia, South America)
NASA Astrophysics Data System (ADS)
Fernandez, J.
2015-06-01
This paper develops a proposal for a general optimization model for the flower industry, defined by using discrete simulation and nonlinear optimization; the mathematical models are solved using ProModel simulation tools and the GAMS optimizer. The paper defines the operations that constitute production and marketing in the sector, with statistically validated data taken directly from each operation through field work; from these, the discrete simulation model of the operations and the linear optimization model of the entire industry chain are formulated. The model is solved with the tools described above, and the results are validated in a case study.
All-in-one model for designing optimal water distribution pipe networks
NASA Astrophysics Data System (ADS)
Aklog, Dagnachew; Hosoi, Yoshihiko
2017-05-01
This paper discusses the development of an easy-to-use, all-in-one model for designing optimal water distribution networks. The model combines different optimization techniques into a single package in which a user can easily choose what optimizer to use and compare the results of different optimizers to gain confidence in the performances of the models. At present, three optimization techniques are included in the model: linear programming (LP), genetic algorithm (GA) and a heuristic one-by-one reduction method (OBORM) that was previously developed by the authors. The optimizers were tested on a number of benchmark problems and performed very well in terms of finding optimal or near-optimal solutions with a reasonable computation effort. The results indicate that the model effectively addresses the issues of complexity and limited performance trust associated with previous models and can thus be used for practical purposes.
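For intuition, a least-cost pipe-sizing choice of the kind these optimizers tackle can be brute-forced on a tiny network. The diameters, unit costs, flows, and the simplified Hazen-Williams head-loss formula below are illustrative assumptions, not taken from the paper or its benchmarks.

```python
from itertools import product

# Toy two-pipe series network: choose one commercial diameter per pipe to
# minimise cost while keeping the total head loss under a limit.
diams = [0.10, 0.15, 0.20, 0.25]                 # available diameters, m
cost_per_m = {0.10: 30, 0.15: 55, 0.20: 90, 0.25: 140}   # $/m (invented)
pipes = [(200.0, 0.02), (150.0, 0.02)]           # (length m, flow m^3/s)
h_max = 10.0                                     # allowed head loss, m

def head_loss(L, q, d, C=130.0):
    """Hazen-Williams head loss (SI form) for one pipe."""
    return 10.67 * L * (q / C) ** 1.852 / d ** 4.87

best = None   # (cost, diameters, head loss) of the cheapest feasible design
for combo in product(diams, repeat=len(pipes)):
    h = sum(head_loss(L, q, d) for (L, q), d in zip(pipes, combo))
    if h <= h_max:
        cost = sum(cost_per_m[d] * L for (L, _), d in zip(pipes, combo))
        if best is None or cost < best[0]:
            best = (cost, combo, h)
```

Exhaustive enumeration works only for toy cases (the search space grows exponentially in the number of pipes), which is precisely why the paper turns to LP, GA, and heuristic reduction for realistic networks.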
Yang, Guoxiang; Best, Elly P H
2015-09-15
Best management practices (BMPs) can be used effectively to reduce nutrient loads transported from non-point sources to receiving water bodies. However, methodologies of BMP selection and placement in a cost-effective way are needed to assist watershed management planners and stakeholders. We developed a novel modeling-optimization framework that can be used to find cost-effective solutions of BMP placement to attain nutrient load reduction targets. This was accomplished by integrating a GIS-based BMP siting method, a WQM-TMDL-N modeling approach to estimate total nitrogen (TN) loading, and a multi-objective optimization algorithm. Wetland restoration and buffer strip implementation were the two BMP categories used to explore the performance of this framework, both differing greatly in complexity of spatial analysis for site identification. Minimizing TN load and BMP cost were the two objective functions for the optimization process. The performance of this framework was demonstrated in the Tippecanoe River watershed, Indiana, USA. Optimized scenario-based load reduction indicated that the wetland subset selected by the minimum scenario had the greatest N removal efficiency. Buffer strips were more effective for load removal than wetlands. The optimized solutions provided a range of trade-offs between the two objective functions for both BMPs. This framework can be expanded conveniently to a regional scale because the NHDPlus catchment serves as its spatial computational unit. The present study demonstrated the potential of this framework to find cost-effective solutions to meet a water quality target, such as a 20% TN load reduction, under different conditions.
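The two-objective trade-off (BMP cost vs. TN load removed) can be sketched by exhaustive enumeration on a toy candidate set. All names, costs, and removal values below are invented for illustration, not from the Tippecanoe study.

```python
from itertools import combinations

# Illustrative BMP candidates: name -> (cost in $1000, TN removed in t/yr).
bmps = {"wetland1": (120, 8.0), "wetland2": (90, 5.0),
        "buffer1": (40, 4.5), "buffer2": (35, 3.0)}

# Enumerate every subset of BMPs and record (cost, removal, subset).
solutions = []
names = list(bmps)
for r in range(len(names) + 1):
    for subset in combinations(names, r):
        cost = sum(bmps[n][0] for n in subset)
        removal = sum(bmps[n][1] for n in subset)
        solutions.append((cost, removal, subset))

# Keep the non-dominated solutions: no other plan is at least as cheap
# AND removes at least as much TN, with strict improvement in one.
pareto = [s for s in solutions
          if not any(o[0] <= s[0] and o[1] >= s[1] and
                     (o[0] < s[0] or o[1] > s[1]) for o in solutions)]
pareto.sort()
```

The resulting `pareto` list is the cost-vs-removal trade-off curve a planner would inspect; genuine watershed instances need a genetic or other multi-objective algorithm, since subset enumeration doubles with every candidate site.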
Simulation and optimization models for emergency medical systems planning.
Bettinelli, Andrea; Cordone, Roberto; Ficarelli, Federico; Righini, Giovanni
2014-01-01
The authors address strategic planning problems for emergency medical systems (EMS). In particular, the following three critical decisions are considered: i) how many ambulances to deploy in a given territory at any given point in time to meet the forecasted demand with an appropriate response time; ii) when ambulances should be used for serving non-urgent requests and when they should rather be kept idle for possible incoming urgent requests; iii) how to define an optimal mix of contracts for renting ambulances from private associations to meet the forecasted demand at minimum cost. Analytical models for decision support, based on queuing theory, discrete-event simulation, and integer linear programming, are presented. Computational experiments were performed on real data from the city of Milan, Italy.
Optimized Baxter model of protein solutions: Electrostatics versus adhesion
NASA Astrophysics Data System (ADS)
Prinsen, Peter; Odijk, Theo
2004-10-01
A theory is set up of spherical proteins interacting by screened electrostatics and constant adhesion, in which the effective adhesion parameter is optimized by a variational principle for the free energy. An analytical approach to the second virial coefficient is first outlined by balancing the repulsive electrostatics against part of the bare adhesion. A theory similar in spirit is developed at nonzero concentrations by assuming an appropriate Baxter model as the reference state. The first-order term in a functional expansion of the free energy is set equal to zero which determines the effective adhesion as a function of salt and protein concentrations. The resulting theory is shown to have fairly good predictive power for the ionic-strength dependence of both the second virial coefficient and the osmotic pressure or compressibility of lysozyme up to about 0.2 volume fraction.
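For reference, the Baxter adhesive-hard-sphere model gives a closed form for the reduced second virial coefficient in terms of the stickiness parameter tau. This is the standard Baxter relation, not the paper's variationally optimized effective adhesion.

```python
def b2_reduced(tau):
    """Reduced second virial coefficient of the Baxter model:
    B2 / B2_HS = 1 - 1/(4*tau), where B2_HS is the hard-sphere value
    and tau is the stickiness parameter (smaller tau = stronger adhesion).
    """
    return 1.0 - 1.0 / (4.0 * tau)

# tau -> infinity recovers the pure hard sphere (B2/B2_HS -> 1);
# at tau = 0.25 the adhesion exactly cancels the excluded volume (B2 = 0);
# below that, B2 turns negative, signalling net attraction.
```

In the theory above, tau becomes an effective, salt- and concentration-dependent quantity fixed by the variational condition, which is what lets the model track lysozyme's ionic-strength dependence.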
Finite Element Modeling and Optimization of Mechanical Joining Technology
NASA Astrophysics Data System (ADS)
Chenot, Jean-Loup; Bouchard, Pierre-Olivier; Massoni, Elisabeth; Mocellin, Katia; Lasne, Patrice
2011-05-01
The main scientific ingredients are recalled for developing a general finite element code that accurately models the large plastic deformation of metallic materials during joining processes. Multi-material contact is treated using the classical master-slave approach. Rupture may occur in joining processes, or even be imposed as in self-piercing riveting, and it must be predicted to evaluate the ultimate strength of joints. Damage is introduced with a generalized uncoupled damage criterion, or by utilizing a coupled formulation with a Lemaître law. Several joining processes are briefly analyzed in terms of their specific scientific issues: riveting, self-piercing riveting, clinching, crimping, hemming, and screwing. It is shown that not only can the joining process be successfully simulated and optimized, but the strength of the assembly can also be predicted in tension and in shear.
Test cell modeling and optimization for FPD-II
Haney, S.W.; Fenstermacher, M.E.
1985-04-10
The Fusion Power Demonstration, Configuration II (FPD-II), will be a DT-burning tandem mirror facility with thermal barriers, designed as the next-step engineering test reactor (ETR) to follow the tandem mirror ignition test machines. Current plans call for FPD-II to be a multi-purpose device. For approximately the first half of its lifetime, it will operate as a high-Q ignition machine designed to reach or exceed engineering break-even and to demonstrate the technological feasibility of tandem mirror fusion. The second half of its operation will focus on the evaluation of candidate reactor blanket designs using a neutral-beam-driven test cell inserted at the midplane of the 90 m long cell. This machine, called FPD-II+T, uses an insert configuration similar to that used in the MFTF-α+T study. The modeling and optimization of FPD-II+T are the topic of the present paper.
The spa as a model of an optimal healing environment.
Frost, Gary J
2004-01-01
"Spa" is an acronym for salus per aqua, or health through water. There currently are approximately 10,000 spas of all types in the United States. Most now focus on eating and weight programs with subcategories of sports activities and nutrition most prominent. The main reasons stated by clients for their use are stress reduction, specific medical or other health issues, eating and weight loss, rest and relaxation, fitness and exercise, and pampering and beauty. A detailed description of the Canyon Ranch, a spa facility in Tucson, AZ, is presented as a case study in this paper. It appears that the three most critical factors in creating an optimal healing environment in a spa venue are (1) a dedicated caring staff at all levels, (2) a mission driven organization that will not compromise, and (3) a sound business model and leadership that will ensure permanency.
Approximate Optimal Control as a Model for Motor Learning
ERIC Educational Resources Information Center
Berthier, Neil E.; Rosenstein, Michael T.; Barto, Andrew G.
2005-01-01
Current models of psychological development rely heavily on connectionist models that use supervised learning. These models adapt network weights when the network output does not match the target outputs computed by some agent. The authors present a model of motor learning in which the child uses exploration to discover appropriate ways of…
Source term identification in atmospheric modelling via sparse optimization
NASA Astrophysics Data System (ADS)
Adam, Lukas; Branda, Martin; Hamburger, Thomas
2015-04-01
Inverse modelling plays an important role in identifying the amount of harmful substances released into the atmosphere during major incidents such as power plant accidents or volcano eruptions. Another possible application of inverse modelling lies in monitoring CO2 emission limits, where only observations at certain places are available and the task is to estimate the total releases at given locations. This gives rise to minimizing the discrepancy between the observations and the model predictions. There are two standard ways of solving such problems. In the first one, this discrepancy is regularized by adding additional terms. Such terms may include Tikhonov regularization, a distance from a priori information or a smoothing term. The resulting, usually quadratic, problem is then solved via standard optimization solvers. The second approach assumes that the error term has a (normal) distribution and makes use of Bayesian modelling to identify the source term. Instead of following the above-mentioned approaches, we utilize techniques from the field of compressive sensing. Such techniques look for the sparsest solution (the solution with the smallest number of nonzeros) of a linear system, where a maximal allowed error term may be added to this system. Even though this field is well developed, with many possible solution techniques, most of them do not consider even the simplest constraints which are naturally present in atmospheric modelling. One such example is the nonnegativity of release amounts. We believe that the concept of a sparse solution is natural in both the problem of identifying the source location and that of identifying the time process of the source release. In the first case, it is usually assumed that there are only a few release points and the task is to find them. In the second case, the time window is usually much longer than the duration of the actual release. In both cases, the optimal solution should contain a large amount of zeros, giving rise to the
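The sparsity-plus-nonnegativity formulation described in the abstract can be sketched as a projected iterative soft-thresholding scheme, a standard compressive-sensing solver. The matrices, the regularization weight, and the solver choice below are illustrative assumptions, not the authors' actual setup:

```python
import numpy as np

def nonneg_sparse_release(A, b, lam=0.01, iters=2000):
    """Minimize 0.5*||A x - b||^2 + lam*||x||_1 subject to x >= 0.

    A maps candidate release amounts x to predicted observations;
    b holds the actual observations. Projected ISTA iteration.
    """
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1/Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)                # gradient of the data-misfit term
        # soft-threshold and project onto the nonnegative orthant in one step
        x = np.maximum(x - step * (grad + lam), 0.0)
    return x
```

With the L1 weight chosen large enough, most entries of x are driven exactly to zero, matching the expectation of few release points (or a short release window within a long assimilation period).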
Multi-model Simulation for Optimal Control of Aeroacoustics.
Collis, Samuel Scott; Chen, Guoquan
2005-05-01
Flow-generated noise, especially rotorcraft noise, has been a serious concern for both commercial and military applications. A particularly important noise source for rotorcraft is Blade-Vortex-Interaction (BVI) noise, a high amplitude, impulsive sound that often dominates other rotorcraft noise sources. Usually BVI noise is caused by the unsteady flow changes around various rotor blades due to interactions with vortices previously shed by the blades. A promising approach for reducing the BVI noise is to use on-blade controls, such as suction/blowing, micro-flaps/jets, and smart structures. Because the design and implementation of experiments to evaluate such systems are very expensive, efficient computational tools coupled with optimal control systems are required to explore the relevant physics and evaluate the feasibility of using various micro-fluidic devices before committing to hardware. In this thesis the research is to formulate and implement efficient computational tools for the development and study of optimal control and design strategies for complex flow and acoustic systems, with emphasis on rotorcraft applications, especially the BVI noise control problem. The main purpose of aeroacoustic computations is to determine the sound intensity and directivity far away from the noise source. However, the computational cost of using a high-fidelity flow-physics model across the full domain is usually prohibitive, and it might also be less accurate because of numerical diffusion and other problems. Taking advantage of the multi-physics and multi-scale structure of this aeroacoustic problem, we develop a multi-model, multi-domain (near-field/far-field) method based on a discontinuous Galerkin discretization. In this approach the coupling of multiple domains and multiple models is achieved by weakly enforcing continuity of normal fluxes across a coupling surface. For our aeroacoustics control problem of interest, the adjoint equations that determine the sensitivity of the cost
Optimal Control of Distributed Energy Resources using Model Predictive Control
Mayhorn, Ebony T.; Kalsi, Karanjit; Elizondo, Marcelo A.; Zhang, Wei; Lu, Shuai; Samaan, Nader A.; Butler-Purry, Karen
2012-07-22
In an isolated power system (rural microgrid), Distributed Energy Resources (DERs) such as renewable energy resources (wind, solar), energy storage and demand response can be used to complement fossil-fueled generators. The uncertainty and variability due to high penetration of wind make reliable system operations and controls challenging. In this paper, an optimal control strategy is proposed to coordinate energy storage and diesel generators to maximize wind penetration while maintaining system economics and normal operation. The problem is formulated as a multi-objective optimization problem with the goals of minimizing fuel costs and changes in power output of diesel generators, minimizing costs associated with low battery life of energy storage and maintaining system frequency at the nominal operating value. Two control modes are considered for controlling the energy storage to compensate either net load variability or wind variability. Model predictive control (MPC) is used to solve the aforementioned problem and the performance is compared to an open-loop look-ahead dispatch problem. Simulation studies using high and low wind profiles, as well as different MPC prediction horizons, demonstrate the efficacy of the closed-loop MPC in compensating for uncertainties in wind and demand.
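A single receding-horizon step of such a dispatch problem can be posed as a linear program. The sketch below, with toy costs, bounds, and horizon (not the paper's actual formulation), co-schedules a diesel generator and a battery against net load using SciPy's `linprog`:

```python
import numpy as np
from scipy.optimize import linprog

def mpc_dispatch_step(net_load, e0, e_min=2.0, e_max=10.0,
                      d_max=8.0, b_max=3.0, fuel_cost=1.0):
    """One horizon of diesel/battery dispatch: x = [diesel(0..T-1), battery(0..T-1)].

    Battery power b_t > 0 discharges (supplies load); stored energy after
    step t is e0 - sum(b_0..b_t), which must stay inside [e_min, e_max].
    """
    T = len(net_load)
    c = np.concatenate([fuel_cost * np.ones(T), np.zeros(T)])  # pay only for diesel
    A_eq = np.hstack([np.eye(T), np.eye(T)])                   # d_t + b_t = net_load_t
    b_eq = np.asarray(net_load, dtype=float)
    C = np.tril(np.ones((T, T)))                               # cumulative-sum operator
    A_ub = np.vstack([np.hstack([np.zeros((T, T)), C]),        # cumsum(b) <= e0 - e_min
                      np.hstack([np.zeros((T, T)), -C])])      # -cumsum(b) <= e_max - e0
    b_ub = np.concatenate([(e0 - e_min) * np.ones(T), (e_max - e0) * np.ones(T)])
    bounds = [(0.0, d_max)] * T + [(-b_max, b_max)] * T
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    d, b = res.x[:T], res.x[T:]
    return d, b   # in MPC only d[0], b[0] are applied, then the horizon rolls forward
```

In closed loop, the first move is applied, the battery energy is re-measured, and the problem is re-solved at the next step; this repeated re-solving against fresh measurements is what gives MPC its disturbance-rejection behavior.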
Managing and learning with multiple models: Objectives and optimization algorithms
Probert, William J. M.; Hauser, C.E.; McDonald-Madden, E.; Runge, M.C.; Baxter, P.W.J.; Possingham, H.P.
2011-01-01
The quality of environmental decisions should be gauged according to managers' objectives. Management objectives generally seek to maximize quantifiable measures of system benefit, for instance, population growth rate. Reaching these goals often requires a certain degree of learning about the system. Learning can occur by using management actions in combination with a monitoring system. Furthermore, actions can be chosen strategically to obtain specific kinds of information. Formal decision making tools can choose actions to favor such learning in two ways: implicitly, via the optimization algorithm that is used when there is a management objective (for instance, when using adaptive management), or explicitly, by quantifying knowledge and using it as the fundamental project objective, an approach new to conservation. This paper outlines three conservation project objectives - a pure management objective, a pure learning objective, and an objective that is a weighted mixture of these two. We use eight optimization algorithms to choose actions that meet project objectives and illustrate them in a simulated conservation project. The algorithms provide a taxonomy of decision making tools in conservation management when there is uncertainty surrounding competing models of system function. The algorithms build upon each other such that their differences are highlighted and practitioners may see where their decision making tools can be improved. © 2010 Elsevier Ltd.
Loth, E.; Tryggvason, G.; Tsuji, Y.; Elghobashi, S. E.; Crowe, Clayton T.; Berlemont, A.; Reeks, M.; Simonin, O.; Frank, Th; Onishi, Yasuo; Van Wachem, B.
2005-09-01
Slurry flows occur in many circumstances, including chemical manufacturing processes; pipeline transfer of coal, sand, and minerals; mud flows; and disposal of dredged materials. In this section we discuss slurry flow applications related to radioactive waste management. The Hanford tank waste solids and interstitial liquids will be mixed to form a slurry so the waste can be pumped out for retrieval and treatment. The waste is very complex chemically and physically. The ARIEL code is used to model the chemical interactions and fluid dynamics of the waste.
NASA Technical Reports Server (NTRS)
Zook, H. A.
1985-01-01
A prediction of the future population of satellites, satellite fragments, and assorted spacecraft debris in Earth orbit can be reliably made only after three conditions are satisfied: (1) the size and spatial distributions of these Earth-orbiting objects are established at some present-day time; (2) the processes of orbital evolution, explosions, hypervelocity impact fragmentation, and atmospheric drag are understood; and (3) a reasonable traffic model for the future launch rate of Earth-orbiting objects is assumed. The theoretician will then take these three quantities as input data and will carry through the necessary mathematical and numerical analyses to project the present-day orbital population into the future.
Modeling, design, and optimization of Mindwalker series elastic joint.
Wang, Shiqian; Meijneke, Cor; van der Kooij, Herman
2013-06-01
Weight and power autonomy limit the daily use of wearable exoskeletons. Lightweight, efficient and powerful actuation systems are not easy to achieve. Choosing the right combination of existing technologies, such as batteries, gears and motors, is not a trivial task. In this paper, we propose an optimization framework based on a power-based quasi-static model of the exoskeleton joint drivetrain. The goal is to find the most efficient and lightweight combinations. This framework can be generalized to other similar applications by extending or adapting the model to their needs. We also present the Mindwalker exoskeleton joint, for which a novel series elastic actuator, consisting of a ballscrew-driven linear actuator and a double spiral spring, was developed and tested. This linear actuator is capable of outputting 960 W of power, and the exoskeleton joint can output 100 Nm peak torque continuously. The double spiral spring can sense torque between 0.08 Nm and 100 Nm and exhibits linearity of 99.99%, with no backlash or hysteresis. The series elastic joint can track a chirp torque profile with an amplitude of 100 Nm over 6 Hz (large torque bandwidth), and for small torque (2 Nm peak-to-peak) it has a bandwidth over 38 Hz. The integrated exoskeleton joint, including the ballscrew-driven linear actuator, the series spring, electronics and the metal housing which hosts these components, weighs 2.9 kg.
In silico strain optimization by adding reactions to metabolic models.
Correia, Sara; Rocha, Miguel
2012-07-24
Nowadays, concerns about the environment and the need to increase productivity at low cost demand the search for new ways to produce compounds of industrial interest. Based on the increasing knowledge of biological processes gained through genome sequencing projects and high-throughput experimental techniques, as well as the available computational tools, the use of microorganisms has been considered as an approach to produce desirable compounds. However, this usually requires manipulating these organisms by genetic engineering and/or changing the environmental conditions to make the production of these compounds possible. In many cases, it is necessary to enrich the genetic material of those microbes with heterologous pathways from other species, thereby adding the potential to produce novel compounds. This paper introduces a new plug-in for the OptFlux Metabolic Engineering platform, aimed at finding suitable sets of reactions to add to the genomes of selected microbes (wild-type strain), as well as finding complementary sets of deletions, so that the mutant becomes able to overproduce compounds of industrial interest while preserving its viability. The necessity of adding reactions to the metabolic model arises from existing gaps in the original model, or is motivated by the production of new compounds by the organism. The optimization methods used are metaheuristics such as Evolutionary Algorithms and Simulated Annealing. The usefulness of this plug-in is demonstrated by a case study regarding the production of vanillin by the bacterium E. coli.
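As a hedged illustration of the metaheuristic side, here is a generic simulated-annealing search over subsets of candidate reactions. It is a toy stand-in with a made-up objective function; the OptFlux plug-in evaluates candidate sets against a full metabolic model instead:

```python
import math
import random

def anneal_reaction_set(candidates, objective, iters=2000, t0=1.0, seed=1):
    """Simulated annealing over subsets: each move adds or removes one reaction."""
    rng = random.Random(seed)
    current, val = set(), objective(set())
    best, best_val = set(current), val
    for k in range(iters):
        t = t0 * (1.0 - k / iters) + 1e-9            # linear cooling schedule
        cand = set(current)
        cand.symmetric_difference_update({rng.choice(candidates)})
        cand_val = objective(cand)
        # always accept improvements; accept worse moves with Boltzmann probability
        if cand_val >= val or rng.random() < math.exp((cand_val - val) / t):
            current, val = cand, cand_val
            if val > best_val:
                best, best_val = set(current), val
    return best, best_val
```

The single-bit-flip neighborhood mirrors the add/remove structure of the reaction-selection problem; the cooling schedule lets the search escape local optima early while converging to greedy acceptance late.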
Modelling and Optimization of Copper Electroplating Adhesion Strength
NASA Astrophysics Data System (ADS)
Suryanto; Haider, Farag I.; Hanafi Ani, Mohd; Mahmood, M. H.
2017-05-01
In this paper, response surface methodology (RSM) was utilized to design the experiments at various settings of CuSO4 and H2SO4 concentrations and current densities. It was also used to model and optimize the effect of these parameters on the adhesion strength of copper coatings on an austenitic stainless steel substrate. The adhesion strength was investigated with a Teer ST-30 tester, and the structure of the samples was investigated using scanning electron microscopy (SEM). The modelling approach adopted in the present investigation can be used to predict the adhesion strength of the copper coatings on the stainless steel substrate for electroplating parameters in the ranges of 100 to 200 g/L CuSO4, 100 to 200 g/L H2SO4 and current density 40 to 80 mA/cm2. The results showed that the operating condition should be controlled at 200 g/L CuSO4, 100 g/L H2SO4 and 80 mA/cm2 to obtain the maximum adhesion strength of 10 N.
Optimal control of an asymptotic model of flow separation
NASA Astrophysics Data System (ADS)
Qadri, Ubaid; Schmid, Peter; LFC-UK Team
2015-11-01
In the presence of surface imperfections, the boundary layer developing over an aircraft wing can separate and reattach, leading to a small separation bubble. We are interested in developing a low-order model that can be used to control the onset of separation at high Reynolds numbers typical of aircraft flight. In contrast to previous studies, we use a high Reynolds number asymptotic description of the Navier-Stokes equations to describe the motion of the fluid. We obtain a steady solution to the nonlinear triple-deck equations for the separated flow over a small bump at high Reynolds numbers. We derive for the first time the adjoint of the nonlinear triple-deck equations and use it to study optimal control of the separated flow. We calculate the sensitivity of the properties of the separation bubble to local base flow modifications and steady forcing. We assess the validity of using this simplified asymptotic model by comparing our results with those obtained using the full Navier-Stokes equations.
Calibration Modeling Methodology to Optimize Performance for Low Range Applications
NASA Technical Reports Server (NTRS)
McCollum, Raymond A.; Commo, Sean A.; Parker, Peter A.
2010-01-01
Calibration is a vital process in characterizing the performance of an instrument in an application environment and seeks to obtain acceptable accuracy over the entire design range. Often, project requirements specify a maximum total measurement uncertainty, expressed as a percent of full scale. However, in some applications we seek enhanced performance at the low end of the range; expressing the accuracy as a percent of reading should therefore be considered as a modeling strategy. For example, it is common to want to use a force balance in multiple facilities or regimes, often well below its designed full-scale capacity. This paper presents a general statistical methodology for optimizing calibration mathematical models based on a percent-of-reading accuracy requirement, which has broad application in all types of transducer applications where low range performance is required. A case study illustrates the proposed methodology for the Mars Entry Atmospheric Data System, which employs seven strain-gage-based pressure transducers mounted on the heatshield of the Mars Science Laboratory mission.
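The percent-of-reading idea amounts to weighting each calibration residual by the inverse of its reading, so low-range points count as much as full-scale ones. A minimal weighted-least-squares sketch under that assumption (hypothetical data and polynomial form, not the paper's actual calibration model):

```python
import numpy as np

def fit_percent_of_reading(x, y, degree=2):
    """Polynomial calibration fit minimizing relative (percent-of-reading) residuals.

    Ordinary least squares minimizes sum (y_i - f(x_i))^2, which is dominated by
    full-scale points; weighting each row by 1/y_i minimizes
    sum ((y_i - f(x_i)) / y_i)^2 instead. Assumes y > 0 over the calibration range.
    """
    x = np.asarray(x, dtype=float)
    w = 1.0 / np.asarray(y, dtype=float)          # relative-error weights
    V = np.vander(x, degree + 1)                  # polynomial design matrix
    coef, *_ = np.linalg.lstsq(V * w[:, None], y * w, rcond=None)
    return coef                                   # highest order first, as np.polyval expects
```

For exact data the two weightings give the same fit; the difference appears with noisy data, where the 1/y weighting trades a little full-scale accuracy for much tighter low-range accuracy.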
Culture optimization for the emergent zooplanktonic model organism Oikopleura dioica
Bouquet, Jean-Marie; Spriet, Endy; Troedsson, Christofer; Otterå, Helen; Chourrout, Daniel; Thompson, Eric M.
2009-01-01
The pan-global marine appendicularian, Oikopleura dioica, shows considerable promise as a candidate model organism for cross-disciplinary research ranging from chordate genetics and evolution to molecular ecology. This urochordate has a simplified anatomical organization, remains transparent throughout an exceptionally short life cycle of less than 1 week, and exhibits high fecundity. At 70 Mb, the compact, sequenced genome ranks among the smallest known metazoan genomes, with both gene regulatory and intronic regions highly reduced in size. The organism occupies an important trophic role in marine ecosystems and is a significant contributor to global vertical carbon flux. Among the short list of bona fide biological model organisms, all share the property that they are amenable to long-term maintenance in laboratory cultures. Here, we tested diet regimes, spawn densities, dilutions and seawater treatment, leading to optimization of a detailed culture protocol that permits sustainable long-term maintenance of O. dioica, allowing continuous, uninterrupted production of source material for experimentation. The culture protocol can be quickly adapted in both coastal and inland laboratories and should promote rapid development of the many original research perspectives the animal offers. PMID:19461862
3D modeling and optimization of the ITER ICRH antenna
NASA Astrophysics Data System (ADS)
Louche, F.; Dumortier, P.; Durodié, F.; Messiaen, A.; Maggiora, R.; Milanesio, D.
2011-12-01
The prediction of the coupling properties of the ITER ICRH antenna necessitates the accurate evaluation of the resistance and reactance matrices. The latter are mostly dependent on the geometry of the array, and therefore a model as accurate as possible is needed to precisely compute these matrices. Furthermore, simulations have so far neglected the poloidal and toroidal profile of the plasma, and it is expected that the loading of individual straps will vary significantly due to varying strap-plasma distance. To take this curvature into account, some modifications of the alignment of the straps with respect to the toroidal direction are proposed. It is shown with CST Microwave Studio® [1] that considering two segments in the toroidal direction, i.e. a "V-shaped" toroidal antenna, is sufficient. A new CATIA model including this segmentation has been drawn and imported into both MWS and TOPICA [2] codes. Simulations show a good agreement of the impedance matrices in vacuum. Various modifications of the geometry are proposed in order to further optimize the coupling. In particular we study the effect of the strap box parameters and the recess of the vertical septa.
Using the NOABL flow model and mathematical optimization as a micrositing tool
Wegley, H.L.; Barnard, J.C.
1986-11-01
This report describes the use of an improved mass-consistent model that is intended for diagnosing wind fields in complex terrain. The model was developed by merging an existing mass-consistent model, the NOABL model, with an optimization procedure. The optimization allows objective calculation of important model input parameters that previously had been supplied through guesswork; in this manner, the accuracy of the calculated winds has been greatly increased. The report covers such topics as the software structure of the model, assembling an input file, processing the model's output, and certain cautions about the model's operation. The use of the model is illustrated by a test case.
Essays on Applied Resource Economics Using Bioeconomic Optimization Models
NASA Astrophysics Data System (ADS)
Affuso, Ermanno
With rising demographic growth, there is increasing interest in analytical studies that assess alternative policies to provide an optimal allocation of scarce natural resources while ensuring environmental sustainability. This dissertation consists of three essays in applied resource economics that are interconnected methodologically within the agricultural production sector of economics. The first chapter examines the sustainability of biofuels by simulating and evaluating an agricultural voluntary program that aims to increase the land use efficiency of first-generation biofuel production in the state of Alabama. The results show that participatory decisions may increase the net energy value of biofuels by 208% and reduce emissions by 26%, significantly contributing to the state energy goals. The second chapter tests the hypothesis of overuse of fertilizers and pesticides in U.S. peanut farming with respect to other inputs, and addresses genetic research to reduce the use of the most overused chemical input. The findings suggest that peanut producers overuse fungicide with respect to any other input and that fungus-resistant genetically engineered peanuts may increase producer welfare by up to 36.2%. The third chapter implements a bioeconomic model, consisting of a biophysical model and a stochastic dynamic recursive model, that is used to measure the potential economic and environmental welfare of cotton farmers derived from a rotation scheme that uses peanut as a complementary crop. The results show that the rotation scenario would lower farming costs by 14% due to nitrogen credits from prior peanut land use and reduce non-point source pollution from nitrogen runoff by 6.13% compared to continuous cotton farming.
Clean wing airframe noise modeling for multidisciplinary design and optimization
NASA Astrophysics Data System (ADS)
Hosder, Serhat
A new noise metric has been developed that may be used for optimization problems involving aerodynamic noise from a clean wing. The modeling approach uses a classical trailing edge noise theory as the starting point. The final form of the noise metric includes characteristic velocity and length scales that are obtained from three-dimensional, steady RANS simulations with a two-equation k-ω turbulence model. The noise metric is not the absolute value of the noise intensity, but an accurate relative noise measure, as shown in the validation studies. One of the unique features of the new noise metric is the modeling of the length scale, which is directly related to the turbulent structure of the flow at the trailing edge. The proposed noise metric model has been formulated so that it can capture the effect of different design variables on the clean wing airframe noise, such as the aircraft speed, lift coefficient, and wing geometry. It can also capture three-dimensional effects, which become important at high lift coefficients, since the characteristic velocity and length scales are allowed to vary along the span of the wing. Noise metric validation was performed with seven test cases that were selected from a two-dimensional NACA 0012 experimental database. The agreement between the experiment and the predictions obtained with the new noise metric was very good at various speeds, angles of attack, and Reynolds numbers, which showed that the noise metric is capable of capturing the variations in the trailing edge noise as a relative noise measure when different flow conditions and parameters are changed. Parametric studies were performed to investigate the effect of different design variables on the noise metric. Two-dimensional parametric studies were done using two symmetric NACA four-digit airfoils (NACA 0012 and NACA 0009) and two supercritical (SC(2)-0710 and SC(2)-0714) airfoils. The three-dimensional studies were performed with two versions of a conventional
A model based technique for the design of flight directors. [optimal control models
NASA Technical Reports Server (NTRS)
Levison, W. H.
1973-01-01
A new technique for designing flight directors is discussed. This technique uses the optimal-control pilot/vehicle model to determine the appropriate control strategy. The dynamics of this control strategy are then incorporated into the director control laws, thereby enabling the pilot to operate at a significantly lower workload. A preliminary design of a control director for maintaining a STOL vehicle on the approach path in the presence of random air turbulence is evaluated. By selecting model parameters in terms of allowable path deviations and pilot workload levels, a set of director laws is achieved which allows improved system performance at reduced workload levels. The pilot acts essentially as a proportional controller with regard to the director signals, and control motions are compatible with those appropriate to status-only displays.
Hydro-economic Modeling: Reducing the Gap between Large Scale Simulation and Optimization Models
NASA Astrophysics Data System (ADS)
Forni, L.; Medellin-Azuara, J.; Purkey, D.; Joyce, B. A.; Sieber, J.; Howitt, R.
2012-12-01
The integration of hydrological and socio-economic components into hydro-economic models has become essential for water resources policy and planning analysis. In this study we integrate the economic value of water in irrigated agricultural production using SWAP (a StateWide Agricultural Production model for California) and WEAP (Water Evaluation and Planning System), a climate-driven hydrological model. The integration of the models is performed using a step-function approximation of the water demand curves from SWAP, and by relating the demand tranches to the priority scheme in WEAP. To do so, a modified version of SWAP was developed, called SWEAP, that has the Planning Area delimitations of WEAP, a maximum entropy model to estimate evenly sized steps (tranches) of the derived water demand functions, and the translation of water tranches into crop land. In addition, a modified version of WEAP was created, called ECONWEAP, with minor structural changes for the incorporation of land decisions from SWEAP and a series of iterations run via an external VBA script. This paper shows the validity of this integration by comparing revenues from WEAP versus ECONWEAP, as well as an assessment of the approximation of tranches. Results show a significant increase in the resulting agricultural revenues for our case study in California's Central Valley using ECONWEAP while maintaining the same hydrology and regional water flows. These results highlight the gains from allocating water based on its economic value compared to priority-based water allocation systems. Furthermore, this work shows the potential of integrating optimization and simulation-based hydrologic models like ECONWEAP.
Dynamic Modeling, Model-Based Control, and Optimization of Solid Oxide Fuel Cells
NASA Astrophysics Data System (ADS)
Spivey, Benjamin James
2011-07-01
Solid oxide fuel cells are a promising option for distributed stationary power generation that offers efficiencies ranging from 50% in stand-alone applications to greater than 80% in cogeneration. To advance SOFC technology for widespread market penetration, the SOFC should demonstrate improved cell lifetime and load-following capability. This work seeks to improve lifetime through dynamic analysis of critical lifetime variables and advanced control algorithms that permit load-following while remaining in a safe operating zone based on stress analysis. Control algorithms typically have addressed SOFC lifetime operability objectives using unconstrained, single-input-single-output control algorithms that minimize thermal transients. Existing SOFC controls research has not considered maximum radial thermal gradients or limits on absolute temperatures in the SOFC. In particular, as stress analysis demonstrates, the minimum cell temperature is the primary thermal stress driver in tubular SOFCs. This dissertation presents a dynamic, quasi-two-dimensional model for a high-temperature tubular SOFC combined with ejector and prereformer models. The model captures dynamics of critical thermal stress drivers and is used as the physical plant for closed-loop control simulations. A constrained, MIMO model predictive control algorithm is developed and applied to control the SOFC. Closed-loop control simulation results demonstrate effective load-following, constraint satisfaction for critical lifetime variables, and disturbance rejection. Nonlinear programming is applied to find the optimal SOFC size and steady-state operating conditions to minimize total system costs.
Optimal weighted combinatorial forecasting model of QT dispersion of ECGs in Chinese adults.
Wen, Zhang; Miao, Ge; Xinlei, Liu; Minyi, Cen
2016-07-01
This study aims to provide a scientific basis for unifying the reference value standard of QT dispersion of ECGs in Chinese adults. Three predictive models, a regression model, a principal component model, and an artificial neural network model, are combined to establish the optimal weighted combination model. The optimal weighted combination model and the single models are verified and compared. The optimal weighted combinatorial model can reduce the prediction risk of a single model and improve the prediction precision. The reference value of the geographical distribution of Chinese adults' QT dispersion was precisely mapped using kriging methods. When the geographical factors of a particular area are obtained, the reference value of QT dispersion of Chinese adults in this area can be estimated using the optimal weighted combinatorial model, and the reference value of the QT dispersion of Chinese adults anywhere in China can be obtained using the geographical distribution map.
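One standard way to pick combination weights for several forecasting models, shown here as a sketch (the abstract does not specify the authors' weighting scheme), is the closed-form minimum-error solution subject to the weights summing to one:

```python
import numpy as np

def optimal_combination_weights(forecasts, actual):
    """Weights w (summing to 1) minimizing ||forecasts @ w - actual||^2.

    forecasts: (n, K) array, one column per model (e.g. regression,
    principal component, neural network); actual: (n,) observed values.
    Lagrange-multiplier closed form: w proportional to S^{-1} 1,
    where S is the error cross-product matrix.
    """
    F = np.asarray(forecasts, dtype=float)
    E = F - np.asarray(actual, dtype=float)[:, None]   # per-model error series
    S = E.T @ E                                        # error cross-products
    w = np.linalg.solve(S, np.ones(S.shape[0]))
    return w / w.sum()
```

Because each single model corresponds to a feasible weight vector (all weight on one column), the in-sample error of the combination can never exceed that of the best individual model.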
NASA Astrophysics Data System (ADS)
El Jarbi, M.; Rückelt, J.; Slawig, T.; Oschlies, A.
2013-02-01
This paper presents the application of the Linear Quadratic Optimal Control (LQOC) method to a parameter optimization problem for a one-dimensional marine ecosystem model of NPZD (N for dissolved inorganic nitrogen, P for phytoplankton, Z for zooplankton and D for detritus) type. This ecosystem model, developed by Oschlies and Garcon, simulates the distribution of nitrogen, phytoplankton, zooplankton and detritus in a water column and is driven by ocean circulation data. The LQOC method is used to introduce annually periodic model parameters in a linearized version of the model. We show that the resulting version of the model gives a significant reduction of the model-data misfit, compared to that obtained for the original model with optimized constant parameters. The inner-annual variability found for the optimized parameters provides hints for improving the original model. We also use the obtained optimal periodic parameters in validation and prediction experiments with the original non-linear version of the model. In both cases, the results are significantly better than those obtained with optimized constant parameters.
Heterogeneous Nuclear Reactor Models for Optimal Xenon Control.
NASA Astrophysics Data System (ADS)
Gondal, Ishtiaq Ahmad
Nuclear reactors are generally modeled as homogeneous mixtures of fuel, control, and other materials while in reality they are heterogeneous-homogeneous configurations comprised of fuel and control rods along with other materials. Similarly, for space-time studies of a nuclear reactor, homogeneous, usually one-group diffusion theory, models are used, and the system equations are solved by either nodal or modal expansion approximations. Study of xenon-induced problems has also been carried out using similar models and with the help of dynamic programming or classical calculus of variations or the minimum principle. In this study a thermal nuclear reactor is modeled as a two-dimensional lattice of fuel and control rods placed in an infinite moderator in plane geometry. The two-group diffusion theory approximation is used for neutron transport. Space-time neutron balance equations are written for two groups and reduced to one space-time algebraic equation by using the two-dimensional Fourier transform. This equation is written at all fuel and control rod locations. Iodine-xenon and promethium-samarium dynamic equations are also written at fuel rod locations only. These equations are then linearized about an equilibrium point which is determined from the steady-state form of the original nonlinear system equations. After studying poisonless criticality, with and without control, and the stability of the open-loop system and after checking its controllability, a performance criterion is defined for the xenon-induced spatial flux oscillation problem in the form of a functional to be minimized. Linear-quadratic optimal control theory is then applied to solve the problem. To perform a variety of different additional useful studies, this formulation has potential for various extensions and variations; for example, different geometry of the problem, with possible extension to three dimensions, heterogeneous-homogeneous formulation to include, for example, homogeneously
Optimization of a Parallel Ocean General Circulation Model
NASA Technical Reports Server (NTRS)
Chao, Yi
1997-01-01
Global climate modeling is one of the grand challenges of computational science, and ocean modeling plays an important role both in understanding current climatic conditions and in predicting future climate change.
NASA Astrophysics Data System (ADS)
Siade, A. J.; Prommer, H.; Welter, D.
2014-12-01
Groundwater management and remediation require the implementation of numerical models in order to evaluate potential anthropogenic impacts on aquifer systems. In many situations, the numerical model must simulate not only groundwater flow and transport but also geochemical and biological processes. Each process being simulated carries with it a set of parameters that must be identified, along with differing potential sources of model-structure error. Various data types are often collected in the field and then used to calibrate the numerical model; however, these data types can represent very different processes and can be sensitive to the model parameters in extremely complex ways. Therefore, developing an appropriate weighting strategy to address the contributions of each data type to the overall least-squares objective function is not straightforward. This is further compounded by the presence of potential sources of model-structure error that manifest themselves differently for each observation data type. Finally, reactive transport models are highly nonlinear, which can lead to convergence failure for algorithms operating on the assumption of local linearity. In this study, we propose a variation of the popular particle swarm optimization algorithm to address trade-offs associated with the calibration of one data type over another. This method removes the need to specify weights between observation groups and instead produces a multi-dimensional Pareto front that illustrates the trade-offs between data types. We use the PEST++ run manager, along with the standard PEST input/output structure, to implement parallel programming across multiple desktop computers using TCP/IP communications. This allows for very large swarms of particles without the need for a supercomputing facility. The method was applied to a case study in which modeling was used to gain insight into the mobilization of arsenic at a deepwell injection site
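The trade-off surface such a method produces can be sketched with a brute-force non-dominated filter; the objective pairs below are invented for illustration, not taken from the case study:

```python
def pareto_front(points):
    """Return the non-dominated objective vectors (minimization in every
    coordinate): a point is kept unless some other point is <= it everywhere."""
    front = []
    for p in points:
        dominated = any(q != p and all(q[i] <= p[i] for i in range(len(p)))
                        for q in points)
        if not dominated:
            front.append(p)
    return front

# E.g. (flow-data misfit, chemistry-data misfit) for four candidate parameter sets:
front = pareto_front([(1, 4), (2, 2), (4, 1), (3, 3)])
```

Here (3, 3) is dominated by (2, 2) and is dropped; the remaining points illustrate the calibration trade-off between the two data types without any inter-group weighting.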
BDO-RFQ Program Complex of Modelling and Optimization of Charged Particle Dynamics
NASA Astrophysics Data System (ADS)
Ovsyannikov, D. A.; Ovsyannikov, A. D.; Antropov, I. V.; Kozynchenko, V. A.
2016-09-01
The article is dedicated to the BDO Code program complex used for modelling and optimization of charged particle dynamics, taking particle interaction in RFQ accelerating structures into account. The structure of the program complex and its functionality are described; the mathematical models of charged particle dynamics, the interaction models, and the optimization methods are given.
Model optimization of orthotropic distributed-mode loudspeaker using attached masses.
Lu, Guochao; Shen, Yong
2009-11-01
An orthotropic model of the plate is established, and a genetic simulated annealing algorithm is developed to optimize the mode distribution of the orthotropic plate. The experimental results indicate that the orthotropic model simulates the real plate better, and optimization aimed at an equal distribution of the modes in the orthotropic model improves the corresponding sound pressure responses.
The efficacy of using inventory data to develop optimal diameter increment models
Don C. Bragg
2002-01-01
Most optimal tree diameter growth models have arisen through either the conceptualization of physiological processes or the adaptation of empirical increment models. However, surprisingly little effort has been invested in the melding of these approaches even though it is possible to develop theoretically sound, computationally efficient optimal tree growth models...
2011-01-01
Background: Design of newly engineered microbial strains for biotechnological purposes would greatly benefit from the development of realistic mathematical models for the processes to be optimized. Such models can then be analyzed and, with the development and application of appropriate optimization techniques, one could identify the modifications that need to be made to the organism in order to achieve the desired biotechnological goal. As appropriate models to perform such an analysis are necessarily non-linear and typically non-convex, finding their global optimum is a challenging task. Canonical modeling techniques, such as Generalized Mass Action (GMA) models based on the power-law formalism, offer a possible solution to this problem because they have a mathematical structure that enables the development of specific algorithms for global optimization. Results: Based on the GMA canonical representation, we have developed in previous work a highly efficient optimization algorithm and a set of related strategies for understanding the evolution of adaptive responses in cellular metabolism. Here, we explore the possibility of recasting kinetic non-linear models into an equivalent GMA model, so that global optimization on the recast GMA model can be performed. With this technique, optimization is greatly facilitated and the results are transposable to the original non-linear problem. This procedure is straightforward for a particular class of non-linear models known as Saturable and Cooperative (SC) models that extend the power-law formalism to deal with saturation and cooperativity. Conclusions: Our results show that recasting non-linear kinetic models into GMA models is indeed an appropriate strategy that helps overcome some of the numerical difficulties that arise during the global optimization task. PMID:21867520
Sorribas, Albert; Pozo, Carlos; Vilaprinyo, Ester; Guillén-Gosálbez, Gonzalo; Jiménez, Laureano; Alves, Rui
2010-09-01
Cells are natural factories that can adapt to changes in external conditions. Their adaptive responses to specific stress situations are a result of evolution. In theory, many alternative sets of coordinated changes in the activity of the enzymes of each pathway could allow for an appropriate adaptive readjustment of metabolism in response to stress. However, experimental and theoretical observations show that actual responses to specific changes follow fairly well defined patterns that suggest an evolutionary optimization of that response. Thus, it is important to identify functional effectiveness criteria that may explain why certain patterns of change in cellular components and activities during adaptive response have been preferably maintained over evolutionary time. Those functional effectiveness criteria define sets of physiological requirements that constrain the possible adaptive changes and lead to different operation principles that could explain the observed response. Understanding such operation principles can also facilitate biotechnological and metabolic engineering applications. Thus, developing methods that enable the analysis of cellular responses from the perspective of identifying operation principles may have strong theoretical and practical implications. In this paper we present one such method that was designed based on nonlinear global optimization techniques. Our methodology can be used with a special class of nonlinear kinetic models known as GMA models and it allows for a systematic characterization of the physiological requirements that may underlie the evolution of adaptive strategies.
Decision Models for Determining the Optimal Life Test Sampling Plans
NASA Astrophysics Data System (ADS)
Nechval, Nicholas A.; Nechval, Konstantin N.; Purgailis, Maris; Berzins, Gundars; Strelchonok, Vladimir F.
2010-11-01
Life test sampling plan is a technique which consists of sampling, inspection, and decision making in determining the acceptance or rejection of a batch of products by experiments examining the continuous usage time of the products. In life testing studies, the lifetime is usually assumed to be distributed as either a one-parameter exponential distribution or a two-parameter Weibull distribution with the assumption that the shape parameter is known. Such oversimplified assumptions can facilitate the follow-up analyses but may overlook the fact that the lifetime distribution can significantly affect the estimation of the failure rate of a product. Moreover, sampling costs, inspection costs, warranty costs, and rejection costs are all essential and ought to be considered in choosing an appropriate sampling plan. The choice of an appropriate life test sampling plan is a crucial decision problem because a good plan not only helps producers save testing time and reduce testing cost but can also positively affect the image of the product and thus attract more consumers to buy it. This paper develops frequentist (non-Bayesian) decision models for determining the optimal life test sampling plans with an aim of cost minimization by identifying the appropriate number of product failures in a sample that should be used as a threshold in judging the rejection of a batch. The two-parameter exponential and Weibull distributions with two unknown parameters are assumed to be appropriate for modelling the lifetime of a product. A practical numerical application is employed to demonstrate the proposed approach.
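A stylized version of the cost-minimizing threshold choice can be sketched as follows, using a simple two-state (good/bad batch) binomial setting with hypothetical priors and costs rather than the paper's exponential/Weibull lifetime models:

```python
from math import comb

def binom_cdf(c, n, p):
    """P(X <= c) for X ~ Binomial(n, p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

# Stylized two-state setting (all numbers are illustrative, not from the paper):
# a batch is either "good" (per-item failure prob 0.02) or "bad" (0.2).
n, p_good, p_bad = 20, 0.02, 0.2
prior_bad = 0.3            # prior probability that the batch is bad
cost_accept_bad = 500.0    # cost of accepting a bad batch
cost_reject_good = 100.0   # cost of rejecting a good batch

def expected_cost(c):
    # Decision rule: accept the batch iff the observed number of failures <= c.
    wrongly_accept = prior_bad * binom_cdf(c, n, p_bad) * cost_accept_bad
    wrongly_reject = (1 - prior_bad) * (1 - binom_cdf(c, n, p_good)) * cost_reject_good
    return wrongly_accept + wrongly_reject

best_c = min(range(n + 1), key=expected_cost)  # optimal failure threshold
```

With these numbers the expected cost is minimized at an interior threshold: rejecting on the first failure is too strict for good batches, while tolerating several failures accepts too many bad ones.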
Optimization of GM(1,1) power model
NASA Astrophysics Data System (ADS)
Luo, Dang; Sun, Yu-ling; Song, Bo
2013-10-01
The GM(1,1) power model is an expansion of the traditional GM(1,1) model and the Grey Verhulst model. Compared with the traditional models, the GM(1,1) power model has the following advantage: the power exponent that best matches the actual data can be found by a suitable technique, so the model can reflect nonlinear features of the data and simulate and forecast with high accuracy. Determining the best power exponent is therefore a key step in the modeling process. In this paper, since the whitening equation of the GM(1,1) power model is a Bernoulli equation, a variable substitution turns it into the linear whitening equation form of the GM(1,1) model; the grey differential equation is then properly constructed to establish the GM(1,1) power model, and its parameters are solved with a pattern search method. Finally, we illustrate the effectiveness of the new method with the example of simulating and forecasting the promotion rates from senior secondary schools to higher education in China.
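For orientation, the plain GM(1,1) procedure (accumulated generating operation, background values, least squares on the grey differential equation) can be sketched as below; the power model generalizes this by adding a power exponent on the background value, and the data series here is illustrative:

```python
from math import exp

def gm11(x0, steps=1):
    """Basic GM(1,1) grey forecast (sketch); the power model replaces the
    linear whitening equation with a Bernoulli one."""
    n = len(x0)
    x1 = [sum(x0[:k + 1]) for k in range(n)]              # 1-AGO sequence
    z = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, n)]  # background values
    y = x0[1:]
    m = len(z)
    sz, sy = sum(z), sum(y)
    szz = sum(v * v for v in z)
    szy = sum(zi * yi for zi, yi in zip(z, y))
    # Least squares for the grey equation x0(k) + a*z1(k) = b.
    slope = (m * szy - sz * sy) / (m * szz - sz * sz)     # slope = -a
    a, b = -slope, (sy - slope * sz) / m
    # Time-response function of the whitening equation (assumes a != 0).
    def x1_hat(k):
        return (x0[0] - b / a) * exp(-a * k) + b / a
    return [x1_hat(k) - x1_hat(k - 1) for k in range(n, n + steps)]

# Illustrative growing series (20% growth per period).
forecast = gm11([10.0, 12.0, 14.4, 17.28])[0]
```

For this geometric series the one-step forecast lands near the true continuation of about 20.7, showing how the exponential time-response function captures the trend.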
Optimization model using Markowitz model approach for reducing the number of dengue cases in Bandung
NASA Astrophysics Data System (ADS)
Yong, Benny; Chin, Liem
2017-05-01
Dengue fever is one of the most serious diseases, and it can cause death. Currently, Indonesia is the country with the highest number of dengue cases in Southeast Asia. Bandung is one of the cities in Indonesia that is vulnerable to dengue disease, and its sub-districts have different levels of relative risk of the disease. Dengue is transmitted to people by the bite of an Aedes aegypti mosquito that is infected with a dengue virus. Dengue is prevented by controlling the vector mosquito, which can be done by various methods; one of them is fogging. The fogging efforts made by the Health Department of Bandung are constrained by limited funds, which forces the Health Department to be selective and fog only certain locations. As a result, many sub-districts are not handled properly by the Health Department because of the unequal distribution of activities to prevent the spread of dengue disease. Thus, a proper allocation of funds to each sub-district in Bandung is needed to prevent dengue transmission optimally. In this research, an optimization model using the Markowitz model approach is applied to determine the allocation of funds that should be given to each sub-district in Bandung. Some constraints are added to this model, and the numerical solution is obtained with the generalized reduced gradient method using Solver software. The expected result of this research is that the proportion of funds given to each sub-district in Bandung corresponds to the level of risk of dengue disease in that sub-district, so that the number of dengue cases in the city can be reduced significantly.
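The flavor of risk-weighted budget allocation can be sketched with a simple Lagrangian water-filling rule under a hypothetical diminishing-returns objective; this is not the paper's Markowitz/GRG formulation, and the risk scores and budget below are invented:

```python
from math import log

def allocate(risk, budget, iters=200):
    """Bisection on the Lagrange multiplier: maximize sum_i r_i*(1 - e^(-f_i))
    subject to sum_i f_i = budget and f_i >= 0 (a water-filling solution,
    where stationarity gives f_i = log(r_i / lam) when positive)."""
    lo, hi = 1e-12, max(risk)
    for _ in range(iters):
        lam = 0.5 * (lo + hi)
        f = [max(0.0, log(r / lam)) for r in risk]
        if sum(f) > budget:
            lo = lam      # multiplier too small: allocations too generous
        else:
            hi = lam
    return f

# Illustrative relative-risk scores for three sub-districts and a unit-scale budget.
funds = allocate([1.0, 2.0, 4.0], 3.0)
```

Higher-risk sub-districts receive more funds, but with diminishing returns no sub-district monopolizes the budget, which is the qualitative behavior the paper's allocation aims for.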
Bentley, Jason; Sloan, Charlotte; Kawajiri, Yoshiaki
2013-03-08
This work demonstrates a systematic prediction-correction (PC) method for simultaneously modeling and optimizing nonlinear simulated moving bed (SMB) chromatography. The PC method uses model-based optimization, SMB startup data, isotherm model selection, and parameter estimation to iteratively refine model parameters and find optimal operating conditions in a matter of hours, satisfying high-purity constraints while achieving optimal productivity. The PC algorithm proceeds until the SMB process is optimized, without manual tuning. In case studies, it is shown that a nonlinear isotherm model and parameter values are determined reliably using SMB startup data. In one case study, a nonlinear SMB system is optimized after only two changes of operating conditions following the PC algorithm. The refined isotherm models are validated by frontal analysis and perturbation analysis. Copyright © 2013 Elsevier B.V. All rights reserved.
Estimating the Optimal Spatial Complexity of a Water Quality Model Using Multi-Criteria Methods
NASA Astrophysics Data System (ADS)
Meixner, T.
2002-12-01
Discretizing the landscape into multiple smaller units appears to be a necessary step for improving the performance of water quality models. However, there is a need for adequate falsification methods to discern between discretization that improves model performance and discretization that merely adds to model complexity. Multi-criteria optimization methods promise a way to increase the power of model discrimination and a path to increasing our ability to differentiate between good and bad model discretization methods. This study focuses on the optimal level of spatial discretization of a water quality model, the Alpine Hydrochemical Model of the Emerald Lake watershed in Sequoia National Park, California. The 5 models of the watershed differ in the degree of simplification that they represent from the real watershed. The simplest model is just a lumped model of the entire watershed. The most complex model takes the 5 main soil groups in the watershed and represents each with a modeling subunit, as well as having subunits for rock and talus areas in the watershed. Each of these models was calibrated using stream discharge and three chemical fluxes jointly as optimization criteria using a Pareto optimization routine, MOCOM-UA. After optimization the 5 models were compared for their performance using model criteria not used in calibration, the variability of model parameter estimates, and comparison to the mean of observations as a predictor of stream chemical composition. Based on these comparisons, the results indicate that the model with only 2 terrestrial subunits had the optimal level of model complexity. This result shows that increasing model complexity, even using detailed site-specific data, is not always rewarded with improved model performance. Additionally, this result indicates that the most important geographic element for modeling water quality in alpine watersheds is accurately delineating the boundary between areas of rock and areas containing either
Optimal SCR Control Using Data-Driven Models
Stevens, Andrew J.; Sun, Yannan; Lian, Jianming; Devarakonda, Maruthi N.; Parker, Gordon
2013-04-16
We present an optimal control solution for urea injection in a heavy-duty diesel (HDD) selective catalytic reduction (SCR) system. The approach taken here is useful beyond SCR and could be applied to any system where a control strategy is desired and input-output data are available; for example, the strategy could also be used for the diesel oxidation catalyst (DOC) system. In this paper, we identify and validate a one-step-ahead Kalman state-space estimator for downstream NOx using bench reactor data from an SCR core sample. The test data were acquired using a 2010 Cummins 6.7L ISB production engine with a 2010 Cummins production aftertreatment system. We used a surrogate HDD federal test procedure (FTP), developed at Michigan Technological University (MTU), which simulates the representative transients of the standard FTP cycle but has fewer engine speed/load points. The identified state-space model is then used to develop a tunable cost function that simultaneously minimizes NOx emissions and urea usage. The cost function is quadratic and univariate, so the minimum can be computed analytically. We show the performance of the closed-loop controller using a reduced-order discrete SCR simulator developed at MTU. Our experiments with the surrogate HDD-FTP data show that the strategy developed in this paper can be used to identify performance bounds for urea dose controllers.
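Because the cost is quadratic and univariate, its minimizer is available in closed form; the sketch below uses invented coefficients, not the identified Kalman model:

```python
# Illustrative quantities (not from the paper): predicted downstream NOx
# without dosing, dose effectiveness, and the two cost weights.
w_nox, w_urea, nox_pred, k_u = 1.0, 0.05, 0.8, 0.9

# J(u) = w_nox*(nox_pred - k_u*u)**2 + w_urea*u**2 expands to a*u**2 + b*u + c.
a = w_nox * k_u**2 + w_urea
b = -2.0 * w_nox * k_u * nox_pred

u_star = -b / (2.0 * a)   # analytic minimizer of the quadratic cost
```

Since a > 0 the quadratic is convex and u_star is the global minimum, so no iterative solver is needed at run time, which is what makes the controller cheap to evaluate.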
A fuzzy convolution model for radiobiologically optimized radiotherapy margins.
Mzenda, Bongile; Hosseini-Ashrafi, Mir; Gegov, Alex; Brown, David J
2010-06-07
In this study we investigate the use of a new knowledge-based fuzzy logic technique to derive radiotherapy margins based on radiotherapy uncertainties and their radiobiological effects. The main radiotherapy uncertainties considered and used to build the model were delineation, set-up and organ motion-induced errors. The radiobiological effects of these combined errors, in terms of prostate tumour control probability and rectal normal tissue complication probability, were used to formulate the rule base and membership functions for a Sugeno type fuzzy system linking the error effect to the treatment margin. The defuzzified output was optimized by convolving it with a Gaussian convolution kernel to give a uniformly varying transfer function which was used to calculate the required treatment margins. The margin derived using the fuzzy technique showed good agreement compared to current prostate margins based on the commonly used margin formulation proposed by van Herk et al (2000 Int. J. Radiat. Oncol. Biol. Phys. 47 1121-35), and has nonlinear variation above combined errors of 5 mm standard deviation. The derived margin is on average 0.5 mm bigger than currently used margins in the region of small treatment uncertainties where margin reduction would be applicable. The new margin was applied in an intensity modulated radiotherapy prostate treatment planning example where margin reduction and a dose escalation regime were implemented, and by inducing equivalent treatment uncertainties, the resulting target and organs at risk doses were found to compare well to results obtained using currently recommended margins.
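For context, the margin formulation of van Herk et al (2000) against which the fuzzy-derived margin is compared is commonly quoted as combining the systematic and random error components:

```latex
% Widely quoted van Herk CTV-to-PTV margin recipe:
M_{\mathrm{PTV}} = 2.5\,\Sigma + 0.7\,\sigma
```

where \Sigma and \sigma are the standard deviations of the combined systematic and random errors, respectively; the fuzzy margin's nonlinear behavior above 5 mm combined error departs from this linear recipe.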
Polymer Electrolyte Membrane (PEM) Fuel Cells Modeling and Optimization
NASA Astrophysics Data System (ADS)
Zhang, Zhuqian; Wang, Xia; Shi, Zhongying; Zhang, Xinxin; Yu, Fan
2006-11-01
Performance of polymer electrolyte membrane (PEM) fuel cells depends on operating parameters and design parameters. Operating parameters mainly include temperature, pressure, humidity and the flow rate of the inlet reactants. Design parameters include reactant distributor patterns and dimensions, electrode dimensions, and electrode properties such as porosity and permeability. This work aims to investigate the effects of various design parameters on the performance of PEM fuel cells, and the optimum values will be determined under a given operating condition. A three-dimensional steady-state electrochemical mathematical model was established in which the mass, fluid and thermal transport processes are considered as well as the electrochemical reaction. A Powell multivariable optimization algorithm is applied to find the optimum values of the design parameters. The objective function is defined as the maximum potential of the electrolyte fluid phase at the membrane/cathode interface at a typical value of the cell voltage. The robustness of the optimum design under different cell potentials is investigated using a statistical sensitivity analysis. By comparison with the reference case, the results obtained here provide useful tools for a better design of fuel cells.
Designing the optimal convolution kernel for modeling the motion blur
NASA Astrophysics Data System (ADS)
Jelinek, Jan
2011-06-01
Motion blur acts on an image like a two-dimensional low-pass filter whose spatial frequency characteristic depends both on the trajectory of the relative motion between the scene and the camera and on the velocity vector variation along it. When motion during exposure is permitted, the conventional, static notions of both the image exposure and the scene-to-image mapping become unsuitable and must be revised to accommodate the image formation dynamics. This paper develops an exact image formation model for arbitrary object-camera relative motion with arbitrary velocity profiles. Moreover, for any motion the camera may operate in either continuous or flutter shutter exposure mode. Its result is a convolution kernel, which is optimally designed for both the given motion and sensor array geometry, and hence permits the most accurate computational undoing of the blurring effects for the given camera, as required in forensic and high-security applications. The theory has been implemented and a few examples are shown in the paper.
Optimizing phonon space in the phonon-coupling model
NASA Astrophysics Data System (ADS)
Tselyaev, V.; Lyutorovich, N.; Speth, J.; Reinhard, P.-G.
2017-08-01
We present a new scheme to select the most relevant phonons in the phonon-coupling model, named here the time-blocking approximation (TBA). The new criterion, based on the phonon-nucleon coupling strengths rather than on B(EL) values, is more selective and thus produces much smaller phonon spaces in the TBA. This is beneficial in two respects: first, it curbs the computational cost, and second, it reduces the danger of double counting in the expansion basis of the TBA. We use here the TBA in a form where the coupling strength is regularized to keep the given Hartree-Fock ground state stable. The scheme is implemented in a random-phase approximation and TBA code based on the Skyrme energy functional. We first explore carefully the cutoff dependence of the new criterion and work out a natural (optimal) cutoff parameter. Then we use the freshly developed and tested scheme for a survey of giant resonances and low-lying collective states in six doubly magic nuclei, also examining how the results depend on the Skyrme parametrization.
Statistical significance across multiple optimization models for community partition
NASA Astrophysics Data System (ADS)
Li, Ju; Li, Hui-Jia; Mao, He-Jin; Chen, Junhua
2016-05-01
The study of community structure is an important problem in a wide range of applications, and it can help us understand real network systems deeply. However, due to the existence of random factors and error edges in real networks, how to measure the significance of community structure efficiently is a crucial question. In this paper, we present a novel statistical framework computing the significance of community structure across multiple optimization methods. Different from the universal approaches, we calculate the similarity between a given node and its leader and employ the distribution of link tightness to derive the significance score, instead of a direct comparison to a randomized model. Based on the distribution of community tightness, a new “p-value” form significance measure is proposed for community structure analysis. Specially, the well-known approaches and their corresponding quality functions are unified in a novel general formulation, which facilitates a detailed comparison across them. To determine the position of leaders and their corresponding followers, an efficient algorithm is proposed based on spectral theory. Finally, we apply the significance analysis to some famous benchmark networks, and the good performance verifies the effectiveness and efficiency of our framework.
Forecasting of dissolved oxygen in the Guanting reservoir using an optimized NGBM (1,1) model.
An, Yan; Zou, Zhihong; Zhao, Yanfei
2015-03-01
An optimized nonlinear grey Bernoulli model is proposed in which a particle swarm optimization algorithm solves the parameter optimization problem. In addition, each item in the first-order accumulated generating sequence is set in turn as the initial condition to determine which alternative yields the highest forecasting accuracy. To test the forecasting performance, the optimized models with different initial conditions were then used to simulate dissolved oxygen concentrations at the Guanting reservoir inlet and outlet (China). The empirical results show that the optimized model can remarkably improve forecasting accuracy and that the particle swarm optimization technique is a good tool for solving parameter optimization problems. Moreover, an optimized model whose initial condition performs well in in-sample simulation may not do as well in out-of-sample forecasting. Copyright © 2015. Published by Elsevier B.V.
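A minimal particle swarm optimizer of the kind used for such parameter problems can be sketched as follows; this is a generic 1-D version with illustrative hyperparameters, not the paper's implementation:

```python
import random

def pso(f, lo, hi, n_particles=30, iters=200, seed=1):
    """Minimal 1-D particle swarm optimizer (sketch), e.g. for tuning a single
    model parameter; f is the forecasting-error objective to minimize."""
    rng = random.Random(seed)
    x = [rng.uniform(lo, hi) for _ in range(n_particles)]
    v = [0.0] * n_particles
    pbest, pval = x[:], [f(p) for p in x]        # personal bests
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g], pval[g]              # global best
    w, c1, c2 = 0.7, 1.5, 1.5                    # inertia / acceleration weights
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            v[i] = (w * v[i] + c1 * r1 * (pbest[i] - x[i])
                    + c2 * r2 * (gbest - x[i]))
            x[i] = min(hi, max(lo, x[i] + v[i]))  # clip to the search interval
            fx = f(x[i])
            if fx < pval[i]:
                pbest[i], pval[i] = x[i], fx
                if fx < gval:
                    gbest, gval = x[i], fx
    return gbest, gval
```

Because the swarm needs only objective evaluations, it handles the non-smooth, nonlinear error surfaces that grey-model parameter estimation produces.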
Bottom friction optimization for barotropic tide modelling using the HYbrid Coordinate Ocean Model
NASA Astrophysics Data System (ADS)
Boutet, Martial; Lathuilière, Cyril; Baraille, Rémy; Son Hoang, Hong; Morel, Yves
2014-05-01
Several approaches can improve tide modelling at a regional or coastal scale: a more precise and refined bathymetry, better boundary conditions (both the way they are implemented and the precision of the global tide atlases used) and a better representation of the dissipation linked to bottom friction. The most promising improvement is the bottom friction representation: bathymetric databases, especially in coastal areas, are increasingly precise, and global tide model performance is better than ever (the mean discrepancy between models and tide gauges is about 1 cm for the M2 tide). Bottom friction is often parameterized with a quadratic term and a constant coefficient generally taken between 2.5 x 10^-3 and 3.0 x 10^-3. Consequently, a more physically consistent approach is needed to improve bottom friction in coastal areas. The first improvement is to enable the computation of a time- and space-dependent friction coefficient, obtained by vertical integration of a turbulent horizontal velocity profile. The new parameter to be prescribed for the computation is the bottom roughness, z0, which depends on a large panel of physical properties and processes (sediment properties, existence of ripples and dunes, wave-current interactions, ...). Increasing computer resources and data availability make it possible to use new methods of data assimilation and optimization. The method used for this study is the simultaneous perturbation stochastic approximation (SPSA), which approximates the gradient from a fixed number of cost function measurements, regardless of the dimension of the vector to be estimated; each cost function measurement is obtained by randomly perturbing every component of the parameter vector. An important feature of SPSA is its relative ease of implementation. In particular, the method does not require the development of linear and adjoint versions of the circulation model. The algorithm is
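The SPSA idea described above can be sketched as follows; the quadratic test cost stands in for the (expensive) model-data misfit, and the gain constants are the commonly used defaults, not values from this study:

```python
import random

def spsa(f, theta, iters=500, a=0.1, c=0.1, seed=0):
    """Simultaneous perturbation stochastic approximation (sketch): each step
    approximates the full gradient from just two cost evaluations, whatever
    the dimension of theta."""
    rng = random.Random(seed)
    theta = list(theta)
    for k in range(1, iters + 1):
        ak = a / k**0.602            # standard SPSA gain sequences
        ck = c / k**0.101
        # Simultaneous +/-1 perturbation of every component.
        delta = [rng.choice((-1.0, 1.0)) for _ in theta]
        f_plus = f([t + ck * d for t, d in zip(theta, delta)])
        f_minus = f([t - ck * d for t, d in zip(theta, delta)])
        diff = (f_plus - f_minus) / (2.0 * ck)
        theta = [t - ak * diff / d for t, d in zip(theta, delta)]
    return theta
```

Two cost evaluations per iteration regardless of dimension is what makes the method attractive for estimating spatially distributed roughness fields without an adjoint model.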
A Simplified Model of ARIS for Optimal Controller Design
NASA Technical Reports Server (NTRS)
Beech, Geoffrey S.; Hampton, R. David; Kross, Denny (Technical Monitor)
2001-01-01
Many space-science experiments require active vibration isolation. Boeing's Active Rack Isolation System (ARIS) isolates experiments at the rack (vs. experiment or sub-experiment) level, with multiple experiments per rack. An ARIS-isolated rack typically employs eight actuators and thirteen umbilicals; the umbilicals provide services such as power, data transmission, and cooling. Hampton et al. used "Kane's method" to develop an analytical, nonlinear, rigid-body model of ARIS that includes full actuator dynamics (inertias). This model, less the umbilicals, was first implemented for simulation by Beech and Hampton; they developed and tested their model using two commercial-off-the-shelf (COTS) software packages. Rupert et al. added umbilical-transmitted disturbances to this nonlinear model. Because the nonlinear model, even for the untethered system, is both exceedingly complex and "encapsulated" inside these COTS tools, it is largely inaccessible to ARIS controller designers. This paper shows that ISPR rattle-space constraints and small ARIS actuator masses permit considerable model simplification without significant loss of fidelity. First, for various loading conditions, comparisons are made between the dynamic responses of the nonlinear model (untethered) and a truth model. Then comparisons are made among nonlinear, linearized, and linearized reduced-mass models. It is concluded that these three models all capture the significant system rigid-body dynamics, with the third being preferred due to its relative simplicity.
Parameterization of Model Validating Sets for Uncertainty Bound Optimizations. Revised
NASA Technical Reports Server (NTRS)
Lim, K. B.; Giesy, D. P.
2000-01-01
Given measurement data, a nominal model and a linear fractional transformation uncertainty structure with an allowance on unknown but bounded exogenous disturbances, easily computable tests for the existence of a model validating uncertainty set are given. Under mild conditions, these tests are necessary and sufficient for the case of complex, nonrepeated, block-diagonal structure. For the more general case which includes repeated and/or real scalar uncertainties, the tests are only necessary but become sufficient if a collinearity condition is also satisfied. With the satisfaction of these tests, it is shown that a parameterization of all model validating sets of plant models is possible. The new parameterization is used as a basis for a systematic way to construct or perform uncertainty tradeoff with model validating uncertainty sets which have specific linear fractional transformation structure for use in robust control design and analysis. An illustrative example which includes a comparison of candidate model validating sets is given.
Optimal mutation rates in dynamic environments: The Eigen model
NASA Astrophysics Data System (ADS)
Ancliff, Mark; Park, Jeong-Man
2011-03-01
We consider the Eigen quasispecies model with a dynamic environment. For an environment with sharp-peak fitness in which the most-fit sequence moves by k spin-flips each period T, we find an asymptotic stationary state in which the quasispecies population changes periodically, in step with the environmental change. From this stationary state we estimate the maximum and minimum mutation rates at which a quasispecies can survive under the changing environment, and calculate the optimum mutation rate that maximizes the population growth. Interestingly, we find that the optimum mutation rate in the Eigen model is lower than that in the Crow-Kimura model, and that at their optimum mutation rates the corresponding mean fitness in the Eigen model is lower than that in the Crow-Kimura model, suggesting that a mutation process occurring in parallel with replication, as in the Crow-Kimura model, confers an adaptive advantage under a changing environment.
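For context, the replication-mutation dynamics of the Eigen quasispecies model referred to above are commonly written in the following standard form (a textbook statement, not quoted from the paper; $A_j$ is the fitness of sequence $j$, $Q_{ij}$ the mutation matrix, and the mean fitness $\bar{A}$ keeps the population normalized):

```latex
\dot{x}_i = \sum_{j} A_j Q_{ij}\, x_j - \bar{A}(t)\, x_i,
\qquad
\bar{A}(t) = \sum_{j} A_j x_j .
```

In the Crow-Kimura (parallel) variant, mutation instead enters as an additive term decoupled from replication, which is the distinction the abstract's conclusion turns on.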
Optimization methods for thermal modeling of optomechanical systems
NASA Technical Reports Server (NTRS)
Papalexandris, M.; Milman, M.; Levine-West, M.
2001-01-01
The proposed numerical techniques are briefly described and compared to existing algorithms. Their accuracy and robustness are demonstrated through numerical tests with models from ongoing NASA missions.
NASA Technical Reports Server (NTRS)
Schwaab, Douglas G.
1991-01-01
A mathematical programming model is presented to optimize the selection of Orbital Replacement Unit on-orbit spares for the Space Station. The model maximizes system availability under constraints on logistics resupply-cargo weight and volume allocations.
On application of optimal control to SEIR normalized models: Pros and cons.
de Pinho, Maria do Rosario; Nogueira, Filipa Nunes
2017-02-01
In this work we normalize an SEIR model that incorporates exponential natural birth and death, as well as disease-caused death. We use optimal control to limit, through vaccination, the spread of a generic infectious disease described by the normalized model with an L1 cost. We discuss the pros and cons of normalized SEIR models, compared with classical models, when optimal control with L1 costs is considered. Our discussion highlights the role of the cost. Additionally, we partially validate the numerical solutions of our optimal control problem with normalized models using the Maximum Principle.
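As a rough illustration of the kind of model being controlled, the following sketch integrates a normalized SEIR system with a constant vaccination rate u (forward Euler; the parameter values and the constant control are illustrative assumptions, not the paper's L1-optimal control):

```python
# Forward-Euler SEIR with a constant vaccination rate u (illustrative sketch;
# parameters and the constant control are assumptions, not the paper's model).
def seir_step(s, e, i, r, beta, sigma, gamma, u, dt):
    new_inf = beta * s * i        # transmission (population normalized to 1)
    ds = -new_inf - u * s         # vaccination moves susceptibles to removed
    de = new_inf - sigma * e      # exposed become infectious at rate sigma
    di = sigma * e - gamma * i    # recovery at rate gamma
    dr = gamma * i + u * s
    return s + dt * ds, e + dt * de, i + dt * di, r + dt * dr

def peak_infected(u, days=160, dt=0.1):
    s, e, i, r = 0.99, 0.0, 0.01, 0.0
    peak = i
    for _ in range(int(days / dt)):
        s, e, i, r = seir_step(s, e, i, r, 0.5, 0.2, 0.1, u, dt)
        peak = max(peak, i)
    return peak

peak_no_vax = peak_infected(0.0)
peak_vax = peak_infected(0.02)   # vaccination flattens the epidemic peak
```

Even this crude constant control shows the qualitative effect an optimal vaccination schedule exploits: depleting susceptibles lowers the epidemic peak.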
Decomposition method of complex optimization model based on global sensitivity analysis
NASA Astrophysics Data System (ADS)
Qiu, Qingying; Li, Bing; Feng, Peien; Gao, Yu
2014-07-01
Most existing decomposition methods for complex optimization models are based on disciplines, problems, or components. However, numerous coupling variables appear among the resulting sub-models, which makes the decomposed optimization inefficient and degrades its results. Although collaborative optimization methods have been proposed to handle the coupling variables, there is no upfront strategy for reducing the coupling degree among the sub-models when a complex optimization model is first decomposed. This paper therefore proposes a decomposition method based on global sensitivity information. The complex optimization model is decomposed so as to minimize the sum of sensitivities between design functions and design variables assigned to different sub-models: design functions and design variables that are sensitive to each other are assigned to the same sub-model as far as possible, reducing the impact on other sub-models when a coupling variable in one sub-model changes. Two collaborative optimization models of a gear reducer were built in the multidisciplinary design optimization software iSIGHT; the results show that the proposed decomposition method requires fewer analyses and increases computational efficiency by 29.6%. The method is also applied successfully to the complex optimization of hydraulic excavator working devices, showing that it reduces the mutual coupling degree between sub-models. By making the linkages among sub-models after decomposition as small as possible, the proposed method provides a reference for decomposing complex optimization models and has practical engineering significance.
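The grouping principle can be illustrated on a toy sensitivity matrix: assign design functions and design variables to sub-models so that the total sensitivity crossing sub-model boundaries is minimized (the 4x4 matrix, the two-sub-model split, and the brute-force search below are all illustrative assumptions, not the paper's procedure):

```python
import numpy as np
from itertools import combinations

# Toy global sensitivity matrix: S[i, j] = |d f_i / d x_j| for four design
# functions and four design variables (values invented for illustration).
S = np.array([[9.0, 8.0, 0.5, 0.2],
              [7.0, 9.0, 0.3, 0.4],
              [0.2, 0.1, 8.0, 7.0],
              [0.5, 0.3, 9.0, 8.0]])

def cross_coupling(funcs_a, vars_a):
    """Sensitivity mass that crosses the boundary between two sub-models."""
    funcs_b = [i for i in range(4) if i not in funcs_a]
    vars_b = [j for j in range(4) if j not in vars_a]
    return S[np.ix_(funcs_a, vars_b)].sum() + S[np.ix_(funcs_b, vars_a)].sum()

# Brute-force the best 2+2 split: mutually sensitive function/variable pairs
# end up in the same sub-model, minimizing coupling between sub-models.
best_split = min(
    ((fa, va) for fa in combinations(range(4), 2)
              for va in combinations(range(4), 2)),
    key=lambda p: cross_coupling(list(p[0]), list(p[1])))
```

The block structure of S makes functions 0-1 pair naturally with variables 0-1, which is exactly the split the search recovers.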
Modeling of urban growth using cellular automata (CA) optimized by Particle Swarm Optimization (PSO)
NASA Astrophysics Data System (ADS)
Khalilnia, M. H.; Ghaemirad, T.; Abbaspour, R. A.
2013-09-01
In this paper, two satellite images of Tehran, the capital of Iran, taken by TM and ETM+ in 1988 and 2010, are used as the base information layers to study changes in the urban patterns of this metropolis. The patterns of urban growth for the city of Tehran are extracted over a period of twelve years using cellular automata with logistic regression functions as the transition rules. Furthermore, the weighting coefficients of the parameters affecting urban growth, i.e. distance from urban centers, distance from rural centers, distance from agricultural centers, and neighborhood effects, are selected using PSO. To evaluate the prediction, the percent-correct-match index is calculated. According to the results, combining optimization techniques with the cellular automata model allows urban growth patterns to be predicted with accuracy of up to 75%.
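A minimal particle swarm optimizer of the kind used here to tune the transition-rule weights might look as follows (the sphere objective stands in for the CA prediction-error objective; all hyperparameter values are illustrative assumptions):

```python
import random

# Bare-bones global-best PSO: each particle is pulled toward its own best and
# the swarm's best position; the objective and dimension are placeholders.
def pso(objective, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda k: pbest_val[k])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for k in range(n_particles):
            for d in range(dim):
                vel[k][d] = (w * vel[k][d]
                             + c1 * rng.random() * (pbest[k][d] - pos[k][d])
                             + c2 * rng.random() * (gbest[d] - pos[k][d]))
                pos[k][d] += vel[k][d]
            val = objective(pos[k])
            if val < pbest_val[k]:
                pbest[k], pbest_val[k] = pos[k][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[k][:], val
    return gbest, gbest_val

# minimizing a sphere function stands in for minimizing CA prediction error
best, best_val = pso(lambda x: sum(v * v for v in x), dim=4)
```

In the paper's setting the "particle" would hold the four weighting coefficients, and the objective would be the (negated) percent-correct-match of the resulting CA prediction.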
Optimizing efficiency of height modeling for extensive forest inventories.
T.M. Barrett
2006-01-01
Although critical to monitoring forest ecosystems, inventories are expensive. This paper presents a generalizable method for using an integer programming model to examine tradeoffs between cost and estimation error for alternative measurement strategies in forest inventories. The method is applied to an example problem of choosing alternative height-modeling strategies...
Regression Model Optimization for the Analysis of Experimental Data
NASA Technical Reports Server (NTRS)
Ulbrich, N.
2009-01-01
A candidate math model search algorithm was developed at Ames Research Center that determines a recommended math model for the multivariate regression analysis of experimental data. The search algorithm is applicable to classical regression analysis problems as well as wind tunnel strain gage balance calibration analysis applications. The algorithm compares the predictive capability of different regression models using the standard deviation of the PRESS residuals of the responses as a search metric. This search metric is minimized during the search. Singular value decomposition is used during the search to reject math models that lead to a singular solution of the regression analysis problem. Two threshold dependent constraints are also applied. The first constraint rejects math models with insignificant terms. The second constraint rejects math models with near-linear dependencies between terms. The math term hierarchy rule may also be applied as an optional constraint during or after the candidate math model search. The final term selection of the recommended math model depends on the regressor and response values of the data set, the user's function class combination choice, the user's constraint selections, and the result of the search metric minimization. A frequently used regression analysis example from the literature is used to illustrate the application of the search algorithm to experimental data.
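The search metric is cheap to compute because the leave-one-out (PRESS) residuals of a linear regression follow directly from the ordinary residuals and the hat matrix, e_i / (1 - h_ii). A sketch with made-up calibration data and two hypothetical candidate models (the enumeration of candidates and the SVD/threshold constraints from the abstract are omitted):

```python
import numpy as np

def press_residuals(X, y):
    """Leave-one-out (PRESS) residuals: e_i / (1 - h_ii)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ beta                                # ordinary residuals
    h = np.diag(X @ np.linalg.solve(X.T @ X, X.T))  # hat-matrix diagonal
    return e / (1.0 - h)

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 30)
y = 2.0 + 3.0 * x + rng.normal(0.0, 0.1, 30)        # made-up response data
candidates = {
    "linear": np.column_stack([np.ones_like(x), x]),
    "quadratic": np.column_stack([np.ones_like(x), x, x * x]),
}
# search metric from the abstract: std deviation of the PRESS residuals
metric = {name: float(np.std(press_residuals(X, y)))
          for name, X in candidates.items()}
```

The candidate with the smaller PRESS standard deviation would be preferred, which penalizes over-fitted terms that ordinary residuals reward.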
Heat engines at optimal power: Low-dissipation versus endoreversible model
NASA Astrophysics Data System (ADS)
Johal, Ramandeep S.
2017-07-01
The low-dissipation model and the endoreversible model of heat engines are two of the most commonly studied models of machines in finite-time thermodynamics. In this paper we compare the performance characteristics of these two models under optimal power output. We point out a basic equivalence between them, in the linear response regime.
Oneida Tribe of Indians of Wisconsin Energy Optimization Model
Troge, Michael
2014-12-01
Oneida Nation is located in Northeast Wisconsin. The reservation is approximately 96 square miles (8 miles x 12 miles), or 65,000 acres. The greater Green Bay area is east of and adjacent to the reservation. A county line roughly splits the reservation in half; the west half is in Outagamie County and the east half is in Brown County. Land use is predominantly agricultural on the west 2/3 and suburban on the east 1/3 of the reservation. Nearly 5,000 tribally enrolled members live on the reservation, with a total population of about 21,000. Tribal ownership is scattered across the reservation and totals about 23,000 acres. Currently, the Oneida Tribe of Indians of Wisconsin (OTIW) community members and facilities receive the vast majority of electrical and natural gas services from two of the largest investor-owned utilities in the state, WE Energies and Wisconsin Public Service. All urban and suburban buildings have access to natural gas. About 15% of the population and five Tribal facilities are in rural locations and therefore use propane as a primary heating fuel. Wood and oil are also used as primary or supplemental heat sources by a small percentage of the population. Very few renewable energy systems for generating electricity and heat have been installed on the Oneida Reservation. This project was an effort to develop a reasonable renewable energy portfolio that will help Oneida provide a leadership role in developing a clean energy economy. The Energy Optimization Model (EOM) is an exploration of the energy opportunities available to the Tribe; it is intended to provide a decision framework that allows the Tribe to make the wisest choices in energy investment, with an organizational desire to establish a renewable portfolio standard (RPS).
Optimal mutation rates in dynamic environments: The Eigen model
NASA Astrophysics Data System (ADS)
Ancliff, Mark; Park, Jeong-Man
2010-08-01
We consider the Eigen quasispecies model with a dynamic environment. For an environment with sharp-peak fitness in which the most-fit sequence moves by k spin-flips each period T, we find an asymptotic stationary state in which the quasispecies population changes periodically, in step with the environmental change. From this stationary state we estimate the maximum and minimum mutation rates at which a quasispecies can survive under the changing environment, and calculate the optimum mutation rate that maximizes the population growth. Interestingly, we find that the optimum mutation rate in the Eigen model is lower than that in the Crow-Kimura model, and that at their optimum mutation rates the corresponding mean fitness in the Eigen model is lower than that in the Crow-Kimura model, suggesting that a mutation process occurring in parallel with replication, as in the Crow-Kimura model, confers an adaptive advantage under a changing environment.
A Framework for the Optimization of Discrete-Event Simulation Models
NASA Technical Reports Server (NTRS)
Joshi, B. D.; Unal, R.; White, N. H.; Morris, W. D.
1996-01-01
With the growing use of computer modeling and simulation in all aspects of engineering, the scope of traditional optimization has to be extended to include simulation models. Some unique aspects must be addressed when optimizing via stochastic simulation models: the optimization procedure has to explicitly account for the randomness inherent in the stochastic measures predicted by the model. This paper outlines a general-purpose framework for the optimization of terminating discrete-event simulation models. The methodology combines a chance-constraint approach for problem formulation with standard statistical estimation and analysis techniques. The applicability of the optimization framework is illustrated by minimizing the operation and support resources of a launch vehicle through a simulation model.
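The chance-constraint idea can be sketched as follows: a candidate configuration is accepted only if the estimated probability that a simulated response exceeds its limit stays below a risk level alpha (the toy terminating simulation and all numbers below are illustrative assumptions, not the launch-vehicle model):

```python
import random

# Accept a design only if the estimated exceedance probability is below alpha.
def chance_constraint_ok(simulate, limit, alpha, n_reps=500, seed=1):
    rng = random.Random(seed)
    exceed = sum(simulate(rng) > limit for _ in range(n_reps))
    p_hat = exceed / n_reps           # estimated P(response > limit)
    return p_hat <= alpha, p_hat

def total_processing_time(rng):
    """One terminating replication: ten stochastic task durations."""
    return sum(rng.gauss(1.0, 0.2) for _ in range(10))

feasible, p_hat = chance_constraint_ok(total_processing_time,
                                       limit=12.0, alpha=0.05)
```

An outer optimizer would then search over resource levels, treating `feasible` as the stochastic constraint while minimizing resource cost.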
High-throughput generation, optimization and analysis of genome-scale metabolic models.
Henry, C. S.; DeJongh, M.; Best, A. A.; Frybarger, P. M.; Linsay, B.; Stevens, R. L.
2010-09-01
Genome-scale metabolic models have proven to be valuable for predicting organism phenotypes from genotypes. Yet efforts to develop new models are failing to keep pace with genome sequencing. To address this problem, we introduce the Model SEED, a web-based resource for high-throughput generation, optimization and analysis of genome-scale metabolic models. The Model SEED integrates existing methods and introduces techniques to automate nearly every step of this process, taking approximately 48 h to reconstruct a metabolic model from an assembled genome sequence. We apply this resource to generate 130 genome-scale metabolic models representing a taxonomically diverse set of bacteria. Twenty-two of the models were validated against available gene essentiality and Biolog data, with the average model accuracy determined to be 66% before optimization and 87% after optimization.
Optimal harvesting for a predator-prey agent-based model using difference equations.
Oremland, Matthew; Laubenbacher, Reinhard
2015-03-01
In this paper, a method known as Pareto optimization is applied to the solution of a multi-objective optimization problem. The system in question is an agent-based model (ABM) wherein global dynamics emerge from local interactions. A system of discrete mathematical equations is formulated in order to capture the dynamics of the ABM; while the original model is built up analytically from the rules of the ABM, the paper shows how minor changes to the ABM rule set can have a substantial effect on model dynamics. To address this issue, we introduce parameters into the equation model that track such changes. The equation model is amenable to mathematical theory; we show how stability analysis can be performed and validated using ABM data. We then reduce the equation model to a simpler version and implement changes that allow controls from the ABM to be tested using the equations. Cohen's weighted κ is proposed as a measure of similarity between the equation model and the ABM, particularly with respect to the optimization problem. The reduced equation model is used to solve the multi-objective optimization problem via Pareto optimization, a heuristic evolutionary algorithm. Results show that the equation model is a good fit for ABM data; Pareto optimization provides a suite of solutions to the multi-objective optimization problem that can be implemented directly in the ABM.
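The core of the Pareto step is a non-dominance filter over candidate solutions. A minimal sketch, assuming both objectives are minimized and using made-up objective values in place of the ABM's harvesting objectives:

```python
# Keep only non-dominated points (both objectives minimized).
def pareto_front(points):
    def dominates(a, b):
        # a dominates b: no worse in every objective, strictly better in one
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# made-up objective pairs, e.g. (prey population loss, harvesting effort)
solutions = [(3, 9), (5, 4), (4, 6), (6, 3), (7, 7), (5, 5)]
front = pareto_front(solutions)
```

The surviving points form the trade-off curve a decision-maker chooses from; an evolutionary algorithm repeatedly applies such a filter while generating new candidates.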
Kernel method based human model for enhancing interactive evolutionary optimization.
Pei, Yan; Zhao, Qiangfu; Liu, Yong
2015-01-01
A fitness landscape represents the relationship between an individual and its reproductive success in evolutionary computation (EC). However, a discrete and approximate landscape in the original search space may not supply sufficient or accurate information for EC search, especially in interactive EC (IEC). The fitness landscape of human subjective evaluation in IEC is very difficult, if not impossible, to model, even under a hypothesis of what its definition might be. In this paper, we propose a method to establish a human model in a projected high-dimensional search space by kernel classification, for enhancing IEC search. Because bivalent logic is the simplest perceptual paradigm, the human model is established on this principle. In the feature space, we design a linear classifier as a human model to capture user preference knowledge that cannot be represented linearly in the original discrete search space, thereby predicting the user's potential perceptual knowledge. With this human model, we design an evolution control method to enhance IEC search. Experimental evaluation with a pseudo-IEC user shows that the proposed model and method enhance IEC search significantly.
Modeling Illicit Drug Use Dynamics and Its Optimal Control Analysis
2015-01-01
The global burden of death and disability attributable to illicit drug use remains a significant threat to public health for both developed and developing nations. This paper presents a new mathematical modeling framework to investigate the effects of illicit drug use in the community. In our model the transmission process is captured as a social "contact" process between susceptible individuals and illicit drug users. We conduct both epidemic and endemic analyses, with a focus on the threshold dynamics characterized by the basic reproduction number. Using our model, we present illustrative numerical results with a case study in the Cape Town, Gauteng, Mpumalanga and Durban communities of South Africa. In addition, the basic model is extended to incorporate time-dependent intervention strategies. PMID:26819625
Medical Evacuation and Treatment Capabilities Optimization Model (METCOM)
2005-09-01
Table of contents (excerpt): Health Service Support (HSS) System; Multiperiod/Inter-Temporal Networks; Evacuation; Objectives of the Model; Structure of the General...
Optimal bispectrum constraints on single-field models of inflation
Anderson, Gemma J.; Regan, Donough; Seery, David E-mail: D.Regan@sussex.ac.uk
2014-07-01
We use WMAP 9-year bispectrum data to constrain the free parameters of an 'effective field theory' describing fluctuations in single-field inflation. The Lagrangian of the theory contains a finite number of operators associated with unknown mass scales. Each operator produces a fixed bispectrum shape, which we decompose into partial waves in order to construct a likelihood function. Based on this likelihood we are able to constrain four linearly independent combinations of the mass scales. As an example of our framework we specialize our results to the case of 'Dirac-Born-Infeld' and 'ghost' inflation and obtain the posterior probability for each model, which in Bayesian schemes is a useful tool for model comparison. Our results suggest that DBI-like models with two or more free parameters are disfavoured by the data by comparison with single-parameter models in the same class.
Optimization Model for Irrigation Planning in Heterogeneous Area
NASA Astrophysics Data System (ADS)
Kangrang, Anongrit; Phumphan, Anujit; Chaleeraktrakoon, Chavalit
This study proposes an allocation LP model that takes into account the heterogeneity of a land area. A scenario that divides the project into several sub-areas, based on the suitable soil type for each crop, is used to represent the heterogeneous character in terms of water requirement and crop yield. The proposed model was applied to find the dry-season (January-May) crop pattern of the Nong Wei Irrigation Project, located in the Northeast Region of Thailand. Records of seasonal flow, requested areas, crop water requirements, evaporation, and effective rainfall for the project were used for this illustrative application. Results showed that the proposed LP model gave the optimum crop pattern, with a net seasonal profit corresponding to the seasonal available water and required area. It provided a higher profit than the existing LP model, which considers a homogeneous project. The patterns obtained by considering heterogeneity corresponded to the available land areas of the suitable soil types.
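An allocation LP of this general shape can be posed directly with an off-the-shelf solver. The sketch below uses `scipy.optimize.linprog` on a two-crop toy instance (all coefficients are invented for illustration; the paper's model carries many more constraints per sub-area):

```python
from scipy.optimize import linprog

# Two-crop toy instance: choose hectares of each crop to maximize profit
# subject to land and seasonal-water limits. linprog minimizes, so negate.
profit = [300.0, 500.0]            # profit per hectare of crop 1 and crop 2
A_ub = [[1.0, 1.0],                # land:  x1 + x2     <= 100 ha
        [2.0, 4.0]]                # water: 2*x1 + 4*x2 <= 300 units
b_ub = [100.0, 300.0]
res = linprog([-p for p in profit], A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None)] * 2)
best_profit = -res.fun             # optimum here: 50 ha of each crop
```

Heterogeneity as in the paper would enter by giving each sub-area its own decision variables, yields, and water coefficients rather than one set for the whole project.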
Uncertainty Aware Structural Topology Optimization Via a Stochastic Reduced Order Model Approach
NASA Technical Reports Server (NTRS)
Aguilo, Miguel A.; Warner, James E.
2017-01-01
This work presents a stochastic reduced order modeling strategy for the quantification and propagation of uncertainties in topology optimization. Uncertainty aware optimization problems can be computationally complex due to the substantial number of model evaluations that are necessary to accurately quantify and propagate uncertainties. This computational complexity is greatly magnified if a high-fidelity, physics-based numerical model is used for the topology optimization calculations. Stochastic reduced order model (SROM) methods are applied here to effectively 1) alleviate the prohibitive computational cost associated with an uncertainty aware topology optimization problem; and 2) quantify and propagate the inherent uncertainties due to design imperfections. A generic SROM framework that transforms the uncertainty aware, stochastic topology optimization problem into a deterministic optimization problem that relies only on independent calls to a deterministic numerical model is presented. This approach facilitates the use of existing optimization and modeling tools to accurately solve the uncertainty aware topology optimization problems in a fraction of the computational demand required by Monte Carlo methods. Finally, an example in structural topology optimization is presented to demonstrate the effectiveness of the proposed uncertainty aware structural topology optimization approach.
ERIC Educational Resources Information Center
Wu, Jason H.
2013-01-01
This study was designed to examine the construct of academic optimism and its relationship with collective responsibility in a sample of Taiwan elementary schools. The construct of academic optimism was tested using confirmatory factor analysis, and the whole structural model was tested with a structural equation modeling analysis. The data were…
The analysis of optimal singular controls for SEIR model of tuberculosis
NASA Astrophysics Data System (ADS)
Marpaung, Faridawaty; Rangkuti, Yulita M.; Sinaga, Marlina S.
2014-12-01
The optimality of singular controls for an SEIR model of tuberculosis is analyzed. The controls correspond to the timing of the vaccination and treatment schedules. The optimality of the singular control is obtained by differentiating a switching function of the model. The result shows that the vaccination and treatment controls are singular.
Public Health Analysis Transport Optimization Model v. 1.0
Beyeler, Walt; Finley, Patrick; Walser, Alex; Frazier, Chris; Mitchell, Michael
2016-10-05
PHANTOM models the logistic functions of national public health systems. The system enables public health officials to visualize and coordinate options for public health surveillance, diagnosis, response, and administration in an integrated analytical environment. Users may simulate and analyze system performance, applying scenarios that represent current conditions or future contingencies, and perform what-if analyses of potential systemic improvements. Public health networks are visualized as interactive maps, with graphical displays of relevant system performance metrics as calculated by the simulation modeling components.
Data-Adaptable Modeling and Optimization for Runtime Adaptable Systems
2016-06-08
Adaptability-as-a-service for embedded real-time dataflow systems; captures semantic and programmatic composability by design. Excerpt: data-adaptable embedded systems (DAES) require new methodologies and tools to support both design and synthesis. In this paper, we introduce a modeling tool, called... Reference cited: K. Prasanna and A. Ledeczi, 2001, "MILAN: A Model Based Integrated Simulation Framework for Design of Embedded Systems," in Proc. of the 2001 ACM...
Optimal post-experiment estimation of poorly modeled dynamic systems
NASA Technical Reports Server (NTRS)
Mook, D. Joseph
1988-01-01
Recently, a novel strategy for post-experiment state estimation of discretely-measured dynamic systems has been developed. The method accounts for errors in the system dynamic model equations in a more general and rigorous manner than do filter-smoother algorithms. The dynamic model error terms do not require the usual process noise assumptions of zero-mean, symmetrically distributed random disturbances. Instead, the model error terms require no prior assumptions other than piecewise continuity. The resulting state estimates are more accurate than filters for applications in which the dynamic model error clearly violates the typical process noise assumptions, and the available measurements are sparse and/or noisy. Estimates of the dynamic model error, in addition to the states, are obtained as part of the solution of a two-point boundary value problem, and may be exploited for numerous reasons. In this paper, the basic technique is explained, and several example applications are given. Included among the examples are both state estimation and exploitation of the model error estimates.
2008-01-01
Constrained optimization problems arise in a wide variety of scientific and engineering applications. Because single recurrent neural networks applied to constrained optimization problems in real-time engineering applications have shown some limitations, cooperative recurrent neural network approaches have been developed to overcome the drawbacks of single recurrent neural networks. This paper surveys in detail the work on cooperative recurrent neural networks for solving constrained optimization problems and their engineering applications, and assesses the standing models from the viewpoints of both convergence to the optimal solution and model complexity. We provide examples and comparisons to show the advantages of these models in the given applications. PMID:19003467
Optimal control for a tuberculosis model with reinfection and post-exposure interventions.
Silva, Cristiana J; Torres, Delfim F M
2013-08-01
We apply optimal control theory to a tuberculosis model given by a system of ordinary differential equations. Optimal control strategies are proposed to minimize the cost of interventions, considering reinfection and post-exposure interventions. They depend on the parameters of the model and effectively reduce the number of active infectious and persistent latent individuals. The time that the optimal controls remain at the upper bound increases with the transmission coefficient. A general explicit expression for the basic reproduction number is obtained and its sensitivity with respect to the model parameters is discussed. Numerical results show the usefulness of the optimization strategies.
Zomorrodi, Ali R; Lafontaine Rivera, Jimmy G; Liao, James C; Maranas, Costas D
2013-09-01
The ensemble modeling (EM) approach has shown promise in capturing kinetic and regulatory effects in the modeling of metabolic networks. Efficacy of the EM procedure relies on the identification of model parameterizations that adequately describe all observed metabolic phenotypes upon perturbation. In this study, we propose an optimization-based algorithm for the systematic identification of genetic/enzyme perturbations to maximally reduce the number of models retained in the ensemble after each round of model screening. The key premise here is to design perturbations that will maximally scatter the predicted steady-state fluxes over the ensemble parameterizations. We demonstrate the applicability of this procedure for an Escherichia coli metabolic model of central metabolism by successively identifying single, double, and triple enzyme perturbations that cause the maximum degree of flux separation between models in the ensemble. Results revealed that optimal perturbations are not always located close to reaction(s) whose fluxes are measured, especially when multiple perturbations are considered. In addition, there appears to be a maximum number of simultaneous perturbations beyond which no appreciable increase in the divergence of flux predictions is achieved. Overall, this study provides a systematic way of optimally designing genetic perturbations for populating the ensemble of models with relevant model parameterizations.
A novel medical information management and decision model for uncertain demand optimization.
Bi, Ya
2015-01-01
Accurately planning the procurement volume is an effective measure for controlling medicine inventory cost, but uncertain demand makes accurate decisions on procurement volume difficult. For biomedicines whose demand is sensitive to time and season, fitting the uncertain demand with fuzzy mathematics is clearly better than using generic random distribution functions. The objective is therefore to establish a novel medical information management and decision model for uncertain demand optimization. A novel optimal management and decision model under uncertain demand is presented, based on fuzzy mathematics and a new comprehensively improved particle swarm algorithm. The model can effectively reduce medicine inventory cost. The proposed improved particle swarm optimization is a simple and effective algorithm that improves the fuzzy inference and hence reduces the computational complexity of the management and decision model. The new model can therefore be used for accurate decisions on procurement volume under uncertain demand.
OPTIMIZING MODEL PERFORMANCE: VARIABLE SIZE RESOLUTION IN CLOUD CHEMISTRY MODELING. (R826371C005)
Under many conditions size-resolved aqueous-phase chemistry models predict higher sulfate production rates than comparable bulk aqueous-phase models. However, there are special circumstances under which bulk and size-resolved models offer similar predictions. These special con...
Derivative Free Optimization of Complex Systems with the Use of Statistical Machine Learning Models
2015-09-12
AFRL-AFOSR-VA-TR-2015-0278, Derivative Free Optimization of Complex Systems with the Use of Statistical Machine Learning Models, Katya Scheinberg; Grant FA9550-11-1-0239. Excerpt: ...developed, which has been the focus of our research. Subject terms: optimization, derivative-free optimization, statistical machine learning.
A Hydro System Modeling Hierarchy to Optimize the Operation of the BC Hydroelectric System
NASA Astrophysics Data System (ADS)
Shawwash, Z.
2012-12-01
We present the Hydro System Modeling Hierarchy that we have developed to optimize the operation of the BC Hydro system in British Columbia, Canada. The Hierarchy consists of a number of simulation and optimization models that we have developed over the past twelve years in a research program under the Grant-in-Aid Agreement between BC Hydro and the Department of Civil Engineering at UBC. We first provide an overview of the BC Hydro system, then present our modeling framework and discuss a number of optimization modeling tools that we have developed that are currently in use at BC Hydro, and briefly outline ongoing research and model development work supported by BC Hydro and leveraged by Natural Sciences and Engineering Research Council (NSERC) Collaborative Research and Development (CRD) grants.
Process Cost Modeling for Multi-Disciplinary Design Optimization
NASA Technical Reports Server (NTRS)
Bao, Han P.; Freeman, William (Technical Monitor)
2002-01-01
For early design concepts, the conventional approach to cost is normally some kind of parametric weight-based cost model. There is now ample evidence that this approach can be misleading and inaccurate. By the nature of its development, a parametric cost model requires historical data and is valid only if the new design is analogous to those for which the model was derived. Advanced aerospace vehicles have no historical production data and bear little resemblance to the vehicles of the past. Using an existing weight-based cost model would only lead to errors and distortions of the true production cost. This report outlines the development of a process-based cost model in which the physical elements of the vehicle are costed according to a first-order dynamics model. This theoretical cost model, first advocated by early work at MIT, has been expanded to cover the basic structures of an advanced aerospace vehicle. Elemental costs based on the geometry of the design can be summed to provide an overall estimate of the total production cost for a design configuration. This capability to directly link any design configuration to realistic cost estimation is a key requirement for high-payoff multi-disciplinary design optimization (MDO) problems. Another important consideration in this report is the handling of part or product complexity. Here the concept of cost modulus is introduced to take into account variability due to different materials, sizes, shapes, precision of fabrication, and equipment requirements. The most important implication of the development of the proposed process-based cost model is that different design configurations can now be quickly related to their cost estimates in a seamless calculation process easily implemented on any spreadsheet tool. In successive sections, the report addresses the issues of cost modeling as follows. First, an introduction is presented to provide the background for the research work. Next, a quick review of cost estimation techniques is made with the intention to
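The summation of geometry-driven elemental costs scaled by a cost modulus, as described above, can be sketched as follows. The element names, areas, base rate, and modulus values are purely illustrative, not taken from the report (whose full model also includes first-order process dynamics for fabrication rates):

```python
# Hypothetical process-based cost roll-up: each structural element's cost
# is a geometry measure (here surface area) times a base fabrication rate,
# scaled by a "cost modulus" capturing material, size, shape, precision,
# and equipment effects. All numbers are made up for illustration.

def element_cost(area_m2, rate_per_m2, modulus):
    """Cost of one element: geometry-driven base cost times its modulus."""
    return area_m2 * rate_per_m2 * modulus

elements = [
    # (name, surface area m^2, base rate $/m^2, cost modulus)
    ("wing skin", 42.0, 800.0, 1.6),   # composite, high-precision layup
    ("fuselage",  95.0, 800.0, 1.2),
    ("bulkhead",  12.0, 800.0, 2.1),   # complex shape, tight tolerances
]

total_cost = sum(element_cost(a, r, m) for _, a, r, m in elements)
```

This roll-up structure is exactly what makes the approach spreadsheet-friendly: changing a design configuration only changes the geometry column, and the total recomputes directly.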
Optimizing modelling in iterative image reconstruction for preclinical pinhole PET
NASA Astrophysics Data System (ADS)
Goorden, Marlies C.; van Roosmalen, Jarno; van der Have, Frans; Beekman, Freek J.
2016-05-01
The recently developed versatile emission computed tomography (VECTor) technology enables high-energy SPECT and simultaneous SPECT and PET of small animals at sub-mm resolutions. VECTor uses dedicated clustered pinhole collimators mounted in a scanner with three stationary large-area NaI(Tl) gamma detectors. Here, we develop and validate dedicated image reconstruction methods that compensate for image degradation by incorporating accurate models for the transport of high-energy annihilation gamma photons. Ray tracing software was used to calculate photon transport through the collimator structures and into the gamma detector. Input to this code are several geometric parameters estimated from system calibration with a scanning 99mTc point source. Effects on reconstructed images of (i) modelling variable depth-of-interaction (DOI) in the detector, (ii) incorporating photon paths that go through multiple pinholes (‘multiple-pinhole paths’ (MPP)), and (iii) including various amounts of point spread function (PSF) tail were evaluated. Imaging 18F in resolution and uniformity phantoms showed that including large parts of PSFs is essential to obtain good contrast-noise characteristics and that DOI modelling is highly effective in removing deformations of small structures, together leading to 0.75 mm resolution PET images of a hot-rod Derenzo phantom. Moreover, MPP modelling reduced the level of background noise. These improvements were also clearly visible in mouse images. Performance of VECTor can thus be significantly improved by accurately modelling annihilation gamma photon transport.
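The paper's contribution lies in the physics built into the system matrix (depth-of-interaction, multiple-pinhole paths, PSF tails); the surrounding iterative reconstruction is typically a maximum-likelihood expectation-maximization (MLEM) style update. A generic MLEM sketch on a tiny hand-made system, not the authors' code, assuming a toy 3-bin/2-voxel system matrix:

```python
# Generic MLEM update for emission tomography (illustrative only).
# A[i][j] is the probability that an emission in voxel j is detected in
# bin i; modelling DOI, multiple-pinhole paths and PSF tails amounts to
# computing a more accurate A. Counts below are made-up numbers.

A = [[0.6, 0.1],
     [0.3, 0.3],
     [0.1, 0.6]]
y = [13.0, 12.0, 11.0]               # measured counts per detector bin

def mlem(A, y, n_iter=200):
    n_bins, n_vox = len(A), len(A[0])
    x = [1.0] * n_vox                # uniform non-negative start image
    # sensitivity of each voxel: total detection probability
    sens = [sum(A[i][j] for i in range(n_bins)) for j in range(n_vox)]
    for _ in range(n_iter):
        # forward-project current estimate, then back-project the ratio
        fp = [sum(A[i][j] * x[j] for j in range(n_vox)) for i in range(n_bins)]
        ratio = [y[i] / fp[i] for i in range(n_bins)]
        x = [x[j] / sens[j] * sum(A[i][j] * ratio[i] for i in range(n_bins))
             for j in range(n_vox)]
    return x

x_hat = mlem(A, y)
```

A useful property visible even in this toy: after each update the sensitivity-weighted image sum equals the total measured counts, and the estimate stays non-negative, which is why MLEM is favoured for low-count pinhole data.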
Hartmann, András; Lemos, João M; Vinga, Susana
2015-08-01
The aim of inverse modeling is to capture the system's dynamics through a set of parameterized ordinary differential equations (ODEs). Parameters are often required to fit multiple repeated measurements or different experimental conditions. This typically leads to a multi-objective optimization problem that can be formulated as a non-convex optimization problem. Modeling of glucose utilization of Lactococcus lactis bacteria is considered using in vivo Nuclear Magnetic Resonance (NMR) measurements in perturbation experiments. We propose an ODE model based on a modified time-varying exponential decay that is flexible enough to model several different experimental conditions. The starting point is an over-parameterized non-linear model that is then simplified through an optimization procedure with regularization penalties. For the parameter estimation, a stochastic global optimization method, particle swarm optimization (PSO), is used. A regularization term is introduced into the identification, imposing that parameters be the same across several experiments in order to identify a general model. A function is then fitted to the remaining parameter that varies across the experiments, so that new experiments can be predicted for any initial condition. The method is cross-validated by fitting the model to two experiments and validating on the third. Finally, the proposed model is integrated with existing models of glycolysis in order to reconstruct the remaining metabolites. The method was found useful as a general procedure to reduce the number of parameters of unidentifiable and over-parameterized models, thus supporting feature selection methods for parametric models.
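The key trick above is the regularization term that pulls per-experiment parameters toward a shared value. A minimal sketch with synthetic data (a plain exponential decay and a crude random search standing in for the paper's richer model and its PSO; all numbers are assumptions):

```python
import math, random

# Three repeated "experiments", each y = exp(-r_k * t) plus noise, with
# their own decay rate r_k; the penalty pulls the rates together so a
# single general model can be identified. Data are synthetic.
random.seed(0)
T = [0.0, 1.0, 2.0, 3.0, 4.0]
TRUE_RATES = [0.48, 0.50, 0.52]
DATA = [[math.exp(-r * t) + random.gauss(0, 0.01) for t in T]
        for r in TRUE_RATES]
LAM = 5.0    # regularization weight (assumed)

def objective(rates):
    """Sum of squared residuals plus a penalty on rate disagreement."""
    sse = sum((math.exp(-rates[k] * t) - y) ** 2
              for k, ys in enumerate(DATA) for t, y in zip(T, ys))
    mean_r = sum(rates) / len(rates)
    penalty = LAM * sum((r - mean_r) ** 2 for r in rates)
    return sse + penalty

def random_search(n=4000):
    """Stand-in for the paper's PSO: crude stochastic global search."""
    best, best_f = None, float("inf")
    for _ in range(n):
        cand = [random.uniform(0.1, 1.0) for _ in range(3)]
        f = objective(cand)
        if f < best_f:
            best, best_f = cand, f
    return best

rates_hat = random_search()
```

With the penalty active, the three fitted rates land close together near the common underlying value; setting LAM to zero lets them drift apart to chase per-experiment noise.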
NASA Astrophysics Data System (ADS)
Yi, G. L.; Sui, Y. K.
2015-10-01
The objective and constraint functions related to structural optimization designs are classified into economic and performance indexes in this paper, and the influence of their different roles in model construction for structural topology optimization is discussed. Two structural topology optimization models are presented: optimizing a performance index under the limitation of an economic index, represented by the minimum compliance with a volume constraint (MCVC) model, and optimizing an economic index under the limitation of a performance index, represented by the minimum weight with a displacement constraint (MWDC) model. Based on a comparison of numerical example results, the conclusions can be summarized as follows: (1) under the same external loading and displacement performance conditions, the results of the MWDC model are almost equal to those of the MCVC model; (2) the MWDC model overcomes the difficulties and shortcomings of the MCVC model, which makes it more feasible in model construction; (3) minimizing an economic index under the limitations of performance indexes better meets the needs of practical engineering problems and fully satisfies the safety and economy requirements that have always governed mechanical engineering design.
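In standard density-based topology-optimization notation (a sketch using common symbols, not necessarily the authors' exact formulation), the two models contrasted above can be written as:

```latex
% MCVC: minimum compliance (performance) with a volume (economic) constraint
\min_{\boldsymbol{\rho}} \; C(\boldsymbol{\rho}) = \mathbf{F}^{\mathsf{T}} \mathbf{U}(\boldsymbol{\rho})
\quad \text{s.t.} \quad
\mathbf{K}(\boldsymbol{\rho})\,\mathbf{U} = \mathbf{F}, \quad
\sum_{e} \rho_e v_e \le \bar{V}, \quad
0 < \rho_{\min} \le \rho_e \le 1

% MWDC: minimum weight (economic) with displacement (performance) constraints
\min_{\boldsymbol{\rho}} \; W(\boldsymbol{\rho}) = \sum_{e} \rho_e v_e
\quad \text{s.t.} \quad
\mathbf{K}(\boldsymbol{\rho})\,\mathbf{U} = \mathbf{F}, \quad
u_j(\boldsymbol{\rho}) \le \bar{u}_j \;\; (j = 1,\dots,J), \quad
0 < \rho_{\min} \le \rho_e \le 1
```

Here \(\rho_e\) and \(v_e\) are the density and volume of element \(e\), \(\bar{V}\) the volume budget, and \(\bar{u}_j\) the displacement limits; the paper's point is that swapping which index sits in the objective and which in the constraints changes the model's practical usability more than its optimal designs.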
2006-09-01
Genetic Algorithm Based Optimization of Advanced Solar Cell Designs Modeled in Silvaco ATLAS™, by James D. Utsler, thesis, September 2006. The solar cell designs were modeled using the Silvaco ATLAS™ software; the output of the ATLAS™ simulation runs served as the input to the genetic algorithm.
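The coupling described above, where a device simulator scores candidate designs for a genetic algorithm, can be sketched with a minimal GA loop. Here a made-up analytic function stands in for the Silvaco ATLAS fitness evaluation, and all GA settings are illustrative assumptions:

```python
import random

# Minimal genetic algorithm: truncation selection, one-point crossover,
# Gaussian point mutation. In the thesis the fitness of each genome
# (encoding cell design parameters) came from an ATLAS simulation run;
# the quadratic bowl below is a hypothetical stand-in.
random.seed(42)
N_GENES, POP, GENS = 4, 30, 60
LO, HI = 0.0, 1.0

def fitness(genome):
    """Stand-in for a simulator run: best at genome = (0.7, 0.7, 0.7, 0.7)."""
    return -sum((g - 0.7) ** 2 for g in genome)

def evolve():
    pop = [[random.uniform(LO, HI) for _ in range(N_GENES)] for _ in range(POP)]
    for _ in range(GENS):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: POP // 2]                    # keep the better half
        children = []
        while len(elite) + len(children) < POP:
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, N_GENES)     # one-point crossover
            child = a[:cut] + b[cut:]
            i = random.randrange(N_GENES)          # mutate one gene
            child[i] = min(HI, max(LO, child[i] + random.gauss(0, 0.05)))
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

best_design = evolve()
```

The appeal of this architecture is that the simulator is a black box: only `fitness` touches it, so the same loop works whether the evaluation is an analytic test function or an hour-long device simulation.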
On the model-based optimization of secreting mammalian cell (GS-NS0) cultures.
Kiparissides, A; Pistikopoulos, E N; Mantalaris, A
2015-03-01
The global bio-manufacturing industry requires improved process efficiency to satisfy the increasing demands for biochemicals, biofuels, and biologics. The use of model-based techniques can facilitate the reduction of unnecessary experimentation and reduce labor and operating costs by identifying the most informative experiments and providing strategies to optimize the bioprocess at hand. Herein, we investigate the potential of a research methodology that combines model development, parameter estimation, global sensitivity analysis, and selection of optimal feeding policies via dynamic optimization methods to improve the efficiency of an industrially relevant bioprocess. Data from a set of batch experiments was used to estimate values for the parameters of an unstructured model describing monoclonal antibody (mAb) production in GS-NS0 cell cultures. Global Sensitivity Analysis (GSA) highlighted parameters with a strong effect on the model output and data from a fed-batch experiment were used to refine their estimated values. Model-based optimization was used to identify a feeding regime that maximized final mAb titer. An independent fed-batch experiment was conducted to validate both the results of the optimization and the predictive capabilities of the developed model. The successful integration of wet-lab experimentation and mathematical model development, analysis, and optimization represents a unique, novel, and interdisciplinary approach that addresses the complicated research and industrial problem of model-based optimization of cell based processes.
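The model-based search for a feeding regime that maximizes final titer can be illustrated with a deliberately crude fed-batch sketch. The kinetics, parameter values, and constant-feed parameterization below are all invented for illustration and are not the GS-NS0 model from the paper:

```python
# Toy model-based feed optimization: Euler-integrate a made-up fed-batch
# culture (biomass X, substrate S, product P) and grid-search a constant
# feed rate that maximizes the final product titer.

def simulate(feed_rate, hours=120.0, dt=0.1):
    """Final titer P for a constant substrate feed rate (arbitrary units)."""
    X, S, P = 0.2, 20.0, 0.0
    for _ in range(int(hours / dt)):
        mu = 0.04 * S / (1.0 + S)              # Monod-type specific growth
        growth = mu * X
        X += dt * growth
        S = max(S + dt * (feed_rate - 2.0 * growth), 0.0)  # feed in, uptake out
        if S > 40.0:                           # crude overfeeding penalty
            X *= 0.999
        P += dt * 0.05 * X                     # growth-associated product
    return P

feed_grid = [i * 0.05 for i in range(41)]      # candidate rates 0 .. 2.0 per hour
best_feed = max(feed_grid, key=simulate)
```

In the paper this role is played by dynamic optimization over a mechanistic model with GSA-refined parameters; the sketch only shows the outer structure, namely that the "experiment" being optimized is a model evaluation rather than a wet-lab run, which is what cuts the experimental burden.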