Modeling using optimization routines
NASA Technical Reports Server (NTRS)
Thomas, Theodore
1995-01-01
Modeling using mathematical optimization routines is a design tool used in magnetic suspension system development. MATLAB is used to calculate the minimum cost while satisfying other desired constraints. The parameters to be measured are programmed into mathematical equations. MATLAB calculates answers for each set of inputs; the inputs cover the boundary limits of the design. A Magnetic Suspension System using Electromagnets Mounted in a Planar Array is one design system that makes use of optimization modeling.
HOMER® Micropower Optimization Model
Lilienthal, P.
2005-01-01
NREL has developed the HOMER micropower optimization model. The model can analyze all of the available small power technologies individually and in hybrid configurations to identify least-cost solutions to energy requirements. This capability is valuable to a diverse set of energy professionals and applications. NREL has actively supported its growing user base and developed training programs around the model. These activities are helping to grow the global market for solar technologies.
Optimization in Cardiovascular Modeling
NASA Astrophysics Data System (ADS)
Marsden, Alison L.
2014-01-01
Fluid mechanics plays a key role in the development, progression, and treatment of cardiovascular disease. Advances in imaging methods and patient-specific modeling now reveal increasingly detailed information about blood flow patterns in health and disease. Building on these tools, there is now an opportunity to couple blood flow simulation with optimization algorithms to improve the design of surgeries and devices, incorporating more information about the flow physics in the design process to augment current medical knowledge. In doing so, a major challenge is the need for efficient optimization tools that are appropriate for unsteady fluid mechanics problems, particularly for the optimization of complex patient-specific models in the presence of uncertainty. This article reviews the state of the art in optimization tools for virtual surgery, device design, and model parameter identification in cardiovascular flow and mechanobiology applications. In particular, it reviews trade-offs between traditional gradient-based methods and derivative-free approaches, as well as the need to incorporate uncertainties. Key future challenges are outlined, which extend to the incorporation of biological response and the customization of surgeries and devices for individual patients.
NEMO Oceanic Model Optimization
NASA Astrophysics Data System (ADS)
Epicoco, I.; Mocavero, S.; Murli, A.; Aloisio, G.
2012-04-01
NEMO is an oceanic model used by the climate community for stand-alone or coupled experiments. Its parallel implementation, based on MPI, limits the exploitation of emerging computational infrastructures at peta- and exascale due to the weight of communications. As a case study we considered the MFS configuration developed at INGV, with a resolution of 1/16° tailored to the Mediterranean Basin. The work focuses on the analysis of the code on the MareNostrum cluster and on the optimization of critical routines. The first performance analysis of the model aimed at establishing how much the computational performance is influenced by the GPFS file system or the local disks, and which is the best domain decomposition. The results highlight that the exploitation of local disks can reduce the wall clock time by up to 40% and that the best performance is achieved with a 2D decomposition when the local domain has a square shape. A deeper performance analysis highlights that the obc_rad, dyn_spg and tra_adv routines are the most time consuming. The obc_rad routine implements the evaluation of the open boundaries and was the first to be optimized. Its communication pattern has been redesigned: before the optimizations all processes were involved in the communication, but only the processes on the boundaries hold data that actually must be exchanged. Moreover, the data along the vertical levels are packed and sent with a single MPI_Send invocation. The overall efficiency increases compared with the original version, as does the parallel speed-up; the execution time was reduced by about 33.81%. The second phase of optimization involved the SOR solver routine, which implements the Red-Black Successive-Over-Relaxation method. The high frequency of data exchange among processes represents most of the overall communication time. The number of communication is
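The Red-Black SOR scheme mentioned above can be illustrated with a generic serial sketch for a Poisson model problem. This is an illustration of the method only, not the NEMO routine; the function name and parameters are ours:

```python
import numpy as np

def red_black_sor(f, h, omega=1.5, iters=200):
    """Red-Black SOR for the 2D Poisson problem -lap(u) = f on a square grid
    with zero Dirichlet boundaries. Interior points are split into 'red'
    ((i + j) even) and 'black' ((i + j) odd) sets; each half-sweep updates one
    color only, so all updates within a color are independent -- the property
    that lets a parallel implementation exchange halos just twice per sweep."""
    u = np.zeros_like(f)
    n = f.shape[0]
    for _ in range(iters):
        for color in (0, 1):  # 0 = red half-sweep, 1 = black half-sweep
            for i in range(1, n - 1):
                for j in range(1, n - 1):
                    if (i + j) % 2 == color:
                        # Gauss-Seidel value, then over-relax by omega
                        gs = 0.25 * (u[i-1, j] + u[i+1, j]
                                     + u[i, j-1] + u[i, j+1]
                                     + h * h * f[i, j])
                        u[i, j] = (1 - omega) * u[i, j] + omega * gs
    return u
```

The color split is what makes the communication pattern attractive in MPI codes: within a half-sweep no updated value depends on another update of the same color, so neighbor exchanges can be batched per half-sweep.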
Pyomo: Python Optimization Modeling Objects.
Siirola, John; Laird, Carl Damon; Hart, William Eugene; Watson, Jean-Paul
2010-11-01
The Python Optimization Modeling Objects (Pyomo) package [1] is an open source tool for modeling optimization applications within Python. Pyomo provides an object-oriented approach to optimization modeling, and it can be used to define symbolic problems, create concrete problem instances, and solve these instances with standard solvers. While Pyomo provides a capability that is commonly associated with algebraic modeling languages such as AMPL, AIMMS, and GAMS, Pyomo's modeling objects are embedded within a full-featured high-level programming language with a rich set of supporting libraries. Pyomo leverages the capabilities of the Coopr software library [2], which integrates Python packages (including Pyomo) for defining optimizers, modeling optimization applications, and managing computational experiments. A central design principle within Pyomo is extensibility. Pyomo is built upon a flexible component architecture [3] that allows users and developers to readily extend the core Pyomo functionality. Through these interface points, extensions and applications can have direct access to an optimization model's expression objects. This facilitates the rapid development and implementation of new modeling constructs as well as high-level solution strategies (e.g. using decomposition- and reformulation-based techniques). In this presentation, we will give an overview of the Pyomo modeling environment and model syntax, and present several extensions to the core Pyomo environment, including support for Generalized Disjunctive Programming (Coopr GDP), Stochastic Programming (PySP), a generic Progressive Hedging solver [4], and a tailored implementation of Benders decomposition.
Risk modelling in portfolio optimization
NASA Astrophysics Data System (ADS)
Lam, W. H.; Jaaman, Saiful Hafizah Hj.; Isa, Zaidi
2013-09-01
Risk management is very important in portfolio optimization. The mean-variance model has been used in portfolio optimization to minimize the investment risk; its objective is to minimize the portfolio risk while achieving the target rate of return, with variance used as the risk measure. The purpose of this study is to compare the portfolio composition as well as the performance of the optimal portfolio of the mean-variance model against an equally weighted portfolio, in which the proportions invested in each asset are equal. The results show that the compositions of the mean-variance optimal portfolio and the equally weighted portfolio are different. Moreover, the mean-variance optimal portfolio performs better, giving a higher performance ratio than the equally weighted portfolio.
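The mean-variance idea can be made concrete with a minimal sketch. Assuming short sales are allowed and only the full-investment constraint (weights sum to one) applies, the global minimum-variance portfolio has the closed-form solution w = Σ⁻¹1 / (1ᵀΣ⁻¹1). The function names are ours and this is not the study's own code:

```python
import numpy as np

def min_variance_weights(cov):
    """Global minimum-variance portfolio under the full-investment
    constraint only: w = inv(cov) @ 1 / (1' @ inv(cov) @ 1).
    Solves a linear system instead of forming the inverse explicitly."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)
    return w / w.sum()

def portfolio_variance(w, cov):
    """Portfolio variance w' @ cov @ w for weight vector w."""
    return float(w @ cov @ w)
```

By construction this portfolio has variance no larger than the equally weighted portfolio's, which is the comparison the study draws; adding a target-return constraint extends the same linear-algebra approach with a second Lagrange multiplier.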
Optimal designs for copula models
Perrone, E.; Müller, W.G.
2016-01-01
Copula modelling has in the past decade become a standard tool in many areas of applied statistics. However, a largely neglected aspect concerns the design of the related experiments; particular issues are whether the estimation of copula parameters can be enhanced by optimizing the experimental conditions, and how robust the parameter estimates are with respect to the type of copula employed. In this paper an equivalence theorem for (bivariate) copula models is provided that allows formulation of efficient design algorithms and quick checks of whether designs are optimal or at least efficient. Some examples illustrate that in practical situations considerable gains in design efficiency can be achieved. A natural comparison between different copula models with respect to design efficiency is provided as well. PMID:27453616
Optimizing a tandem disk model
Healey, J.V.
1983-07-01
A very simple physicomathematical model, in which thin straight blades with zero drag skim across a plane rectangular disk, shows that the power coefficient attains the classical maximum of 0.593 over a range of T and at zero or small negative values of alpha_0. This maximum appears independent of sigma, and there are values of T and alpha_0 for which the speed through the disk becomes complex and the model breaks down. Extending this model to a tandem disk system leads to a difficulty in defining the power coefficient. Attempts to optimize the system output based on reference areas A_1, A_2, and A_4 prove futile, and the sum of the coefficients is chosen for this purpose. For thin blades and zero drag the analytic solution is available; it shows that the maximum value of 2 x 0.593 is attained over a narrow range of slightly negative alpha_0 (blade nose in) and medium values of T. The maximum is independent of sigma. As T is increased, the model breaks down either after C_psum becomes large and negative or after backflow through the downwind disk occurs. There appears to be no requirement on load distribution between the disks. By comparison, modeling a machine with NACA 0012 blades at Re = 1.34 x 10^6 shows that the maximum value of C_psum depends on the solidity. For example, at sigma = 0.4, the maximum value of C_psum is 83% of 2 x 0.593. At such high values of sigma, however, the ranges of alpha_0 and T over which solutions are available become very limited.
Branch strategies - Modeling and optimization
NASA Technical Reports Server (NTRS)
Dubey, Pradeep K.; Flynn, Michael J.
1991-01-01
The authors provide a common platform for modeling different schemes for reducing the branch-delay penalty in pipelined processors as well as evaluating the associated increased instruction bandwidth. Their objective is twofold: to develop a model for different approaches to the branch problem and to help select an optimal strategy after taking into account additional i-traffic generated by branch strategies. The model presented provides a flexible tool for comparing different branch strategies in terms of the reduction it offers in average branch delay and also in terms of the associated cost of wasted instruction fetches. This additional criterion turns out to be a valuable consideration in choosing between two strategies that perform almost equally. More importantly, it provides a better insight into the expected overall system performance. Simple compiler-support-based low-implementation-cost strategies can be very effective under certain conditions. An active branch prediction scheme based on loop buffers can be as competitive as a branch-target-buffer based strategy.
Optimal Appearance Model for Visual Tracking
Wang, Yuru; Jiang, Longkui; Liu, Qiaoyuan; Yin, Minghao
2016-01-01
Many studies argue that integrating multiple cues in an adaptive way increases tracking performance. However, what is the definition of adaptiveness and how to realize it remains an open issue. On the premise that the model with optimal discriminative ability is also optimal for tracking the target, this work realizes adaptiveness and robustness through the optimization of multi-cue integration models. Specifically, based on prior knowledge and current observation, a set of discrete samples are generated to approximate the foreground and background distribution. With the goal of optimizing the classification margin, an objective function is defined, and the appearance model is optimized by introducing optimization algorithms. The proposed optimized appearance model framework is embedded into a particle filter for a field test, and it is demonstrated to be robust against various kinds of complex tracking conditions. This model is general and can be easily extended to other parameterized multi-cue models. PMID:26789639
How Optimal Is the Optimization Model?
ERIC Educational Resources Information Center
Heine, Bernd
2013-01-01
Pieter Muysken's article on modeling and interpreting language contact phenomena constitutes an important contribution. The approach chosen is a top-down one, building on the author's extensive knowledge of all matters relating to language contact. The paper aims at integrating a wide range of factors and levels of social, cognitive, and…
Optimization of solver for gas flow modeling
NASA Astrophysics Data System (ADS)
Savichkin, D.; Dodulad, O.; Kloss, Yu
2014-05-01
The main purpose of this work is the optimization of a solver for rarefied gas flow modeling based on the Boltzmann equation. The optimization method is based on SIMD extensions for x86 processors. The computational code is profiled and manually optimized with SSE instructions. Heat flow, shock waves, and a Knudsen pump are modeled with the optimized solver. The dependence of computational time on mesh size and CPU capabilities is reported.
Optimal Decision Making in Neural Inhibition Models
ERIC Educational Resources Information Center
van Ravenzwaaij, Don; van der Maas, Han L. J.; Wagenmakers, Eric-Jan
2012-01-01
In their influential "Psychological Review" article, Bogacz, Brown, Moehlis, Holmes, and Cohen (2006) discussed optimal decision making as accomplished by the drift diffusion model (DDM). The authors showed that neural inhibition models, such as the leaky competing accumulator model (LCA) and the feedforward inhibition model (FFI), can mimic the…
Multiobjective Optimization Of An Extremal Evolution Model
NASA Astrophysics Data System (ADS)
Elettreby, Mohamed Fathey
2005-05-01
We propose a two-dimensional model for a co-evolving ecosystem that generalizes the extremal coupled map lattice model. The model takes into account the concept of multiobjective optimization. We find that the system is self-organized into a critical state. The distribution of avalanche sizes follows a power law.
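The extremal dynamics behind such co-evolution models can be sketched with the classic one-dimensional, single-objective Bak-Sneppen rule; this is a simplification of the paper's two-dimensional multiobjective model, and the names and parameters are ours:

```python
import random

def bak_sneppen(n=64, steps=20000, seed=1):
    """One-dimensional Bak-Sneppen extremal dynamics: at each step the site
    with the lowest 'fitness' and its two neighbours (periodic boundaries)
    receive fresh random values. The fitness distribution self-organizes so
    that almost all sites end up above a critical threshold (~0.667 in 1D),
    with activity below it occurring in power-law-distributed avalanches."""
    rng = random.Random(seed)
    f = [rng.random() for _ in range(n)]
    for _ in range(steps):
        i = min(range(n), key=f.__getitem__)   # extremal site
        for j in (i - 1, i, (i + 1) % n):      # replace it and its neighbours
            f[j] = rng.random()
    return f
```

No parameter tuning drives the system to its critical state; the extremal update rule alone does, which is the self-organized criticality the abstract reports for the generalized model.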
Portfolio optimization with mean-variance model
NASA Astrophysics Data System (ADS)
Hoe, Lam Weng; Siew, Lam Weng
2016-06-01
Investors wish to achieve the target rate of return at the minimum level of risk in their investments. Portfolio optimization is an investment strategy that can be used to minimize portfolio risk while achieving the target rate of return. The mean-variance model has been proposed for portfolio optimization; it aims to minimize the portfolio risk, measured by the portfolio variance. The objective of this study is to construct the optimal portfolio using the mean-variance model. The data consist of weekly returns of 20 component stocks of the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI). The results show that the portfolio composition differs across the stocks. Moreover, investors can obtain the target return at the minimum level of risk with the constructed optimal mean-variance portfolio.
A DSN optimal spacecraft scheduling model
NASA Technical Reports Server (NTRS)
Webb, W. A.
1982-01-01
A computer model is described which uses mixed-integer linear programming to provide optimal DSN spacecraft schedules given a mission set and specified scheduling requirements. A solution technique is proposed which uses Benders' method and a heuristic starting algorithm.
Evaluation of stochastic reservoir operation optimization models
NASA Astrophysics Data System (ADS)
Celeste, Alcigeimes B.; Billib, Max
2009-09-01
This paper investigates the performance of seven stochastic models used to define optimal reservoir operating policies. The models are based on implicit (ISO) and explicit stochastic optimization (ESO) as well as on the parameterization-simulation-optimization (PSO) approach. The ISO models include multiple regression, two-dimensional surface modeling and a neuro-fuzzy strategy. The ESO model is the well-known and widely used stochastic dynamic programming (SDP) technique. The PSO models comprise a variant of the standard operating policy (SOP), reservoir zoning, and a two-dimensional hedging rule. The models are applied to the operation of a single reservoir damming an intermittent river in northeastern Brazil. The standard operating policy is also included in the comparison and operational results provided by deterministic optimization based on perfect forecasts are used as a benchmark. In general, the ISO and PSO models performed better than SDP and the SOP. In addition, the proposed ISO-based surface modeling procedure and the PSO-based two-dimensional hedging rule showed superior overall performance as compared with the neuro-fuzzy approach.
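One of the policies compared above, the standard operating policy (SOP), can be sketched in a few lines; this is a textbook simplification of a single-reservoir time step with hypothetical names, not the paper's implementation:

```python
def sop_release(storage, inflow, demand, capacity):
    """Standard operating policy (SOP) for one time step of a single
    reservoir: release as much of the demand as the available water allows,
    then spill whatever exceeds the reservoir capacity.
    Returns (release, spill, new_storage)."""
    available = storage + inflow
    release = min(demand, available)       # meet demand if water suffices
    new_storage = available - release
    spill = max(0.0, new_storage - capacity)  # uncontrolled overflow
    new_storage -= spill
    return release, spill, new_storage
```

Hedging rules, by contrast, deliberately release less than the demand when storage is low so as to reduce the risk of severe future shortages, which is why they can outperform the SOP on intermittent rivers.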
Optimization-Based Models of Muscle Coordination
Prilutsky, Boris I.; Zatsiorsky, Vladimir M.
2010-01-01
Optimization-based models may provide reasonably accurate estimates of activation and force patterns of individual muscles in selected well-learned tasks with submaximal efforts. Such optimization criteria as minimum energy expenditure, minimum muscle fatigue, and minimum sense of effort seem most promising. PMID:11800497
Modelling and Optimizing Mathematics Learning in Children
ERIC Educational Resources Information Center
Käser, Tanja; Busetto, Alberto Giovanni; Solenthaler, Barbara; Baschera, Gian-Marco; Kohn, Juliane; Kucian, Karin; von Aster, Michael; Gross, Markus
2013-01-01
This study introduces a student model and control algorithm, optimizing mathematics learning in children. The adaptive system is integrated into a computer-based training system for enhancing numerical cognition aimed at children with developmental dyscalculia or difficulties in learning mathematics. The student model consists of a dynamic…
Enhanced index tracking modelling in portfolio optimization
NASA Astrophysics Data System (ADS)
Lam, W. S.; Hj. Jaaman, Saiful Hafizah; Ismail, Hamizun bin
2013-09-01
Enhanced index tracking is a popular form of passive fund management in the stock market. It is a dual-objective optimization problem, a trade-off between maximizing the mean return and minimizing the risk. Enhanced index tracking aims to generate excess return over that achieved by the index, without purchasing all of the stocks that make up the index, by establishing an optimal portfolio. The objective of this study is to determine the optimal portfolio composition and performance using a weighted model in enhanced index tracking. The weighted model focuses on the trade-off between the excess return and the risk. The results show that the optimal portfolio for the weighted model is able to outperform the Malaysian market index, the Kuala Lumpur Composite Index, through a higher mean return and lower risk without purchasing all the stocks in the market index.
Optimal combinations of specialized conceptual hydrological models
NASA Astrophysics Data System (ADS)
Kayastha, Nagendra; Lal Shrestha, Durga; Solomatine, Dimitri
2010-05-01
In hydrological modelling it is usual practice to use a single lumped conceptual model for simulations under all regimes. However, the simplicity of this modelling paradigm often introduces errors, since a single model cannot represent all the complexity of the physical processes in the catchment. A solution is to model the various hydrological processes separately with differently parameterized models and to combine them. Different hydrological models vary in how well they reproduce the catchment response, and generally no single model is precise across all segments of the hydrograph: some models perform well in simulating peak flows, while others do well in capturing low flows. Better performance can be achieved if a model is applied to the catchment with different parameter sets, each calibrated using criteria favoring high or low flows. In this work we use a modular approach to simulate the hydrology of a catchment, wherein multiple models are applied to replicate the catchment response, each "specialist" model is calibrated according to a specific objective function chosen to force the model to capture certain aspects of the hydrograph, and the model outputs are combined using a so-called "fuzzy committee". Such a multi-model approach has previously been implemented in the development of data-driven and conceptual models (Fenicia et al., 2007), but its performance was considered only during the calibration period. In this study we tested the application to conceptual models in both the calibration and verification periods. In addition, we tested the sensitivity of the result to the different weightings used in the objective function formulations and to the membership functions used in the committee. The study was carried out for the Bagmati catchment in Nepal and the Brue catchment in the United Kingdom with a MATLAB-based implementation of the HBV model. A multi-objective evolutionary optimization genetic algorithm (Deb, 2001) was used to
An overview of the optimization modelling applications
NASA Astrophysics Data System (ADS)
Singh, Ajay
2012-10-01
The optimal use of available resources is of paramount importance against the backdrop of the increasing food, fiber, and other demands of the burgeoning global population and the shrinking resource base. The optimal use of these resources can be determined by employing an optimization technique. This paper provides comprehensive reviews of the use of various programming techniques for the solution of different optimization problems. The past reviews are grouped into nine sections based on the theme-based real-world problems they address: conjunctive use planning, groundwater management, seawater intrusion management, irrigation management, achieving optimal cropping patterns, management of reservoir system operation, management of resources in arid and semi-arid regions, solid waste management, and miscellaneous uses, comprising problems of hydropower generation and the sugar industry. Conclusions are drawn on where gaps exist and where more research needs to be focused.
An optimization model of communications satellite planning
NASA Astrophysics Data System (ADS)
Dutta, Amitava; Rama, Dasaratha V.
1992-09-01
A mathematical planning model is developed to help make cost effective decisions on key physical and operational parameters, for a satellite intended to provide customer premises services (CPS). The major characteristics of the model are: (1) interactions and tradeoffs among technical variables are formally captured; (2) values for capacity and operational parameters are obtained through optimization, greatly reducing the need for heuristic choices of parameter values; (3) effects of physical and regulatory constraints are included; and (4) the effects of market prices for transmission capacity on planning variables are explicitly captured. The model is solved optimally using geometric programming methods. Sensitivity analysis yields coefficients, analogous to shadow prices, that quantitatively indicate the change in objective function value resulting from variations in input parameter values. This helps in determining the robustness of planning decisions and in coping with some of the uncertainty that exists at the planning stage. The model can therefore be useful in making economically viable planning decisions for communications satellites.
Model-based optimization of ultrasonic transducers.
Heikkola, Erkki; Laitinen, Mika
2005-01-01
Numerical simulation and automated optimization of Langevin-type ultrasonic transducers are investigated. These kinds of transducers are standard components in various high-power ultrasonics applications such as ultrasonic cleaning and chemical processing. Vibration of the transducer is simulated numerically by the standard finite element method, and the dimensions and shape parameters of a transducer are optimized with respect to different criteria. The novelty of this work is the combination of the simulation model and the optimization problem through efficient automatic differentiation techniques. The capabilities of this approach are demonstrated with practical test cases in which various aspects of the operation of a transducer are improved. PMID:15474952
Model test optimization using the virtual environment for test optimization
Klenke, S.E.; Reese, G.M.; Schoof, L.A.; Shierling, C.
1995-11-01
We present a software environment integrating analysis- and test-based models to support optimal modal test design through a Virtual Environment for Test Optimization (VETO). The VETO assists analysis and test engineers in maximizing the value of each modal test. It is particularly advantageous for structural dynamics model reconciliation applications. The VETO enables an engineer to interact with a finite element model of a test object to optimally place sensors and exciters and to investigate the selection of data acquisition parameters needed to conduct a complete modal survey. Additionally, the user can evaluate the use of different types of instrumentation, such as filters, amplifiers, and transducers, for which models are available in the VETO. The dynamic response of most of the virtual instruments (including the device under test) is modeled in the state-space domain. Design of modal excitation levels and appropriate test instrumentation is facilitated by the VETO's ability to simulate such features as unmeasured external inputs, A/D quantization effects, and electronic noise. Measures of the quality of the experimental design, including the Modal Assurance Criterion and the Normal Mode Indicator Function, are available. The VETO also integrates tools such as Effective Independence and minamac to assist in the selection of optimal sensor locations. The software is designed around three distinct modules: (1) a main controller and GUI written in C++, (2) a visualization module, taken from FEAVR, running under AVS, and (3) a state-space model and time integration module built in SIMULINK. These modules are designed to run as separate processes on interconnected machines.
Improving Vortex Models via Optimal Control Theory
NASA Astrophysics Data System (ADS)
Hemati, Maziar; Eldredge, Jeff; Speyer, Jason
2012-11-01
Flapping wing kinematics, common in biological flight, can allow for agile flight maneuvers. On the other hand, we currently lack sufficiently accurate low-order models that enable such agility in man-made micro air vehicles. Low-order point vortex models have had reasonable success in predicting the qualitative behavior of the aerodynamic forces resulting from such maneuvers. However, these models tend to over-predict the force response when compared to experiments and high-fidelity simulations, in part because they neglect small excursions of separation from the wing's edges. In the present study, we formulate a constrained minimization problem which allows us to relax the usual edge regularity conditions in favor of empirical determination of vortex strengths. The optimal vortex strengths are determined by minimizing the error with respect to empirical force data, while the vortex positions are constrained to evolve according to the impulse matching model developed in previous work. We consider a flat plate undergoing various canonical maneuvers. The optimized model leads to force predictions remarkably close to the empirical data. Additionally, we compare the optimized and original models in an effort to distill appropriate edge conditions for unsteady maneuvers.
Modeling the dynamics of ant colony optimization.
Merkle, Daniel; Middendorf, Martin
2002-01-01
The dynamics of Ant Colony Optimization (ACO) algorithms is studied using a deterministic model that assumes an average expected behavior of the algorithms. The ACO optimization metaheuristic is an iterative approach in which, in every iteration, artificial ants construct solutions randomly but guided by pheromone information stemming from former ants that found good solutions. The behavior of ACO algorithms and of the ACO model is analyzed for certain types of permutation problems. It is shown analytically that the decisions of an ant are influenced in an intriguing way by the use of the pheromone information and the properties of the pheromone matrix. This explains why ACO algorithms can show complex dynamic behavior even when there is only one ant per iteration and no competition occurs. The ACO model is used to describe the algorithm behavior as a combination of situations with different degrees of competition between the ants. This helps to better understand the dynamics of the algorithm when there are several ants per iteration, as is always the case when using ACO algorithms for optimization. Simulations are done to compare the behavior of the ACO model with the ACO algorithm. Results show that the deterministic model describes essential features of the dynamics of ACO algorithms quite accurately, while other aspects of the algorithm's behavior cannot be found in the model. PMID:12227995
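The construct-evaporate-reinforce cycle analyzed above can be sketched with a minimal ACO for an assignment-type permutation problem; this is a generic illustration with our own names and parameters, not the paper's model:

```python
import random

def aco_assignment(cost, ants=10, iters=100, rho=0.1, seed=0):
    """Minimal ACO for a permutation problem where position i is assigned
    item perm[i] and the total cost is sum(cost[i][perm[i]]). Ants sample
    permutations with probabilities proportional to a pheromone matrix tau;
    each iteration the pheromone evaporates (factor 1 - rho) and the
    iteration-best ant reinforces its choices."""
    rng = random.Random(seed)
    n = len(cost)
    tau = [[1.0] * n for _ in range(n)]        # pheromone, uniform at start
    best_perm, best_cost = None, float("inf")
    for _ in range(iters):
        it_best, it_cost = None, float("inf")
        for _ in range(ants):
            free = list(range(n))
            perm = []
            for i in range(n):                 # build permutation position by position
                weights = [tau[i][j] for j in free]
                j = rng.choices(free, weights=weights)[0]
                perm.append(j)
                free.remove(j)
            c = sum(cost[i][perm[i]] for i in range(n))
            if c < it_cost:
                it_best, it_cost = perm, c
        if it_cost < best_cost:
            best_perm, best_cost = it_best, it_cost
        for i in range(n):                     # evaporation, then reinforcement
            for j in range(n):
                tau[i][j] *= (1 - rho)
            tau[i][it_best[i]] += 1.0 / (1.0 + it_cost)
    return best_perm, best_cost
```

Even with a single ant per iteration this update rule drives a nontrivial dynamic in tau, which is the point the deterministic model in the paper makes precise.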
Modeling optimal mineral nutrition for hazelnut micropropagation
Technology Transfer Automated Retrieval System (TEKTRAN)
Micropropagation of hazelnut (Corylus avellana L.) is typically difficult due to the wide variation in response among cultivars. This study was designed to overcome that difficulty by modeling the optimal mineral nutrients for micropropagation of C. avellana selections using a response surface desig...
Generalized mathematical models in design optimization
NASA Technical Reports Server (NTRS)
Papalambros, Panos Y.; Rao, J. R. Jagannatha
1989-01-01
The theory of optimality conditions of extremal problems can be extended to problems continuously deformed by an input vector. The connection between the sensitivity, well-posedness, stability and approximation of optimization problems is steadily emerging. The authors believe that the important realization here is that the underlying basis of all such work is still the study of point-to-set maps and of small perturbations, yet what has been identified previously as being just related to solution procedures is now being extended to study modeling itself in its own right. Many important studies related to the theoretical issues of parametric programming and large deformation in nonlinear programming have been reported in the last few years, and the challenge now seems to be in devising effective computational tools for solving these generalized design optimization models.
Optimal Experimental Design for Model Discrimination
Myung, Jay I.; Pitt, Mark A.
2009-01-01
Models of a psychological process can be difficult to discriminate experimentally because it is not easy to determine the values of the critical design variables (e.g., presentation schedule, stimulus structure) that will be most informative in differentiating them. Recent developments in sampling-based search methods in statistics make it possible to determine these values, and thereby identify an optimal experimental design. After describing the method, it is demonstrated in two content areas in cognitive psychology in which models are highly competitive: retention (i.e., forgetting) and categorization. The optimal design is compared with the quality of designs used in the literature. The findings demonstrate that design optimization has the potential to increase the informativeness of the experimental method. PMID:19618983
Toward "optimal" integration of terrestrial biosphere models
NASA Astrophysics Data System (ADS)
Schwalm, Christopher R.; Huntzinger, Deborah N.; Fisher, Joshua B.; Michalak, Anna M.; Bowman, Kevin; Ciais, Philippe; Cook, Robert; El-Masri, Bassil; Hayes, Daniel; Huang, Maoyi; Ito, Akihiko; Jain, Atul; King, Anthony W.; Lei, Huimin; Liu, Junjie; Lu, Chaoqun; Mao, Jiafu; Peng, Shushi; Poulter, Benjamin; Ricciuto, Daniel; Schaefer, Kevin; Shi, Xiaoying; Tao, Bo; Tian, Hanqin; Wang, Weile; Wei, Yaxing; Yang, Jia; Zeng, Ning
2015-06-01
Multimodel ensembles (MME) are commonplace in Earth system modeling. Here we perform MME integration using a 10-member ensemble of terrestrial biosphere models (TBMs) from the Multiscale synthesis and Terrestrial Model Intercomparison Project (MsTMIP). We contrast optimal (skill based for present-day carbon cycling) versus naïve ("one model-one vote") integration. MsTMIP optimal and naïve mean land sink strength estimates (-1.16 versus -1.15 Pg C per annum respectively) are statistically indistinguishable. This holds also for grid cell values and extends to gross uptake, biomass, and net ecosystem productivity. TBM skill is similarly indistinguishable. The added complexity of skill-based integration does not materially change MME values. This suggests that carbon metabolism has predictability limits and/or that all models and references are misspecified. Resolving this issue requires addressing specific uncertainty types (initial conditions, structure, and references) and a change in model development paradigms currently dominant in the TBM community.
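The contrast between naïve ("one model-one vote") and skill-based integration can be sketched in a few lines; the inverse-squared-error weighting below is an assumed skill measure, not the MsTMIP benchmarking procedure.

```python
def naive_mean(estimates):
    """'One model-one vote' MME integration: unweighted ensemble mean."""
    return sum(estimates) / len(estimates)

def skill_weighted_mean(estimates, errors):
    """Skill-based MME integration: weight each model by its inverse
    squared error against a present-day reference. The error values
    are hypothetical skill scores supplied by the caller."""
    weights = [1.0 / (e * e) for e in errors]
    return sum(w * x for w, x in zip(weights, estimates)) / sum(weights)
```

The paper's finding is that, for MsTMIP land sink estimates, the two integrations give statistically indistinguishable results.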
Optimal Empirical Prognostic Models of Climate Dynamics
NASA Astrophysics Data System (ADS)
Loskutov, E. M.; Mukhin, D.; Gavrilov, A.; Feigin, A. M.
2014-12-01
In this report an empirical methodology for the prediction of climate dynamics is suggested. We construct dynamical models of data patterns connected with climate indices from observed spatially distributed time series. The models are based on artificial neural network (ANN) parameterization and have the form of a discrete stochastic evolution operator mapping a sequence of system states onto the next one [1]. Different approaches to the reconstruction of an empirical basis (phase variables) for the system's phase-space representation, appropriate for forecasting the climate index of interest, are discussed in the report; for this purpose both linear and nonlinear data expansions are considered. The most important point of the methodology is finding the optimal structural parameters of the model, such as the dimension of the variable vector (i.e., the number of principal components used for modeling), the time lag used for prediction, and the number of neurons in the ANN determining the quality of approximation. In effect, we need to solve the model selection problem: we want to obtain a model of optimal complexity in relation to the analyzed time series. We use the MDL approach [2] for this purpose: the model providing the best data compression is chosen. The method is applied to spatially distributed time series of sea surface temperature and sea level pressure taken from IRI datasets [3]; the ability of the proposed models to predict different climate indices (incl. the Multivariate ENSO index, the Pacific Decadal Oscillation index, and the North Atlantic Oscillation index) is investigated. References: 1. Molkov, Ya.I., E.M. Loskutov, D.N. Mukhin, and A.M. Feigin, Random dynamical models from time series. Phys. Rev. E, 85, 036216, 2012. 2. Molkov, Ya.I., D.N. Mukhin, E.M. Loskutov, A.M. Feigin, and G.A. Fidelin, Using the minimum description length principle for global reconstruction of dynamic systems from noisy time series. Phys. Rev. E, 80, 046207, 2009. 3. IRI/LDEO Climate Data Library (http://iridl.ldeo.columbia.edu/)
Optimal hierarchies for fuzzy object models
NASA Astrophysics Data System (ADS)
Matsumoto, Monica M. S.; Udupa, Jayaram K.
2013-03-01
In radiologic clinical practice, the analyses underlying image examinations are qualitative, descriptive, and to some extent subjective. Quantitative radiology (QR) would be valuable in clinical radiology, and computerized automatic anatomy recognition (AAR) is an essential step toward that goal. AAR is a body-wide organ recognition strategy. The AAR framework is based on fuzzy object models (FOMs), wherein the models for the different objects are encoded in a hierarchy. We investigated ways of optimally designing the hierarchy tree while building the models. The hierarchy among the objects is a core concept of AAR. The parent-offspring relationships have two main purposes in this context: (i) to bring into AAR more understanding and knowledge about the form, geography, and relationships among objects, and (ii) to provide guidance for object recognition and object delineation. In this approach, the relationships among objects are represented by a graph, where the vertices are the objects (organs) and the edges connect all pairs of vertices into a complete graph. Each pair of objects is assigned a weight based on the spatial distance between them, their intensity profile differences, and their correlation in size, all estimated over a population. The optimal hierarchy tree is obtained by the shortest-path algorithm as an optimal spanning tree. To evaluate the optimal hierarchies, we performed some preliminary tests involving the subsequent recognition step. The body region used for the initial investigation was the thorax.
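The optimal-spanning-tree construction over a weighted complete graph can be sketched with Prim's algorithm; the weight function here is an arbitrary caller-supplied stand-in for the paper's combination of distance, intensity, and size terms.

```python
def optimal_hierarchy(objects, weight):
    """Build a hierarchy tree as a minimum spanning tree of the complete
    graph over objects, via Prim's algorithm. `weight(a, b)` stands in
    for the population-estimated pairwise weight (sketch only)."""
    root = objects[0]
    in_tree = {root}
    edges = []  # (parent, offspring) pairs
    while len(in_tree) < len(objects):
        best = None
        for a in in_tree:
            for b in objects:
                if b in in_tree:
                    continue
                w = weight(a, b)
                if best is None or w < best[0]:
                    best = (w, a, b)
        _, a, b = best
        in_tree.add(b)
        edges.append((a, b))
    return edges
```

The returned parent-offspring edges define the hierarchy used to guide recognition and delineation.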
Probabilistic computer model of optimal runway turnoffs
NASA Technical Reports Server (NTRS)
Schoen, M. L.; Preston, O. W.; Summers, L. G.; Nelson, B. A.; Vanderlinden, L.; Mcreynolds, M. C.
1985-01-01
Landing delays are currently a problem at major air carrier airports, and many forecasters agree that airport congestion will worsen by the end of the century. It is anticipated that some types of delays can be reduced by an efficient optimal runway exit system allowing the increased approach volumes necessary at congested airports. A computerized Probabilistic Runway Turnoff Model, which locates exits and defines path geometry for a selected maximum occupancy time appropriate for each TERPS aircraft category, is defined. The model includes an algorithm for lateral ride comfort limits.
Global optimization of bilinear engineering design models
Grossmann, I.; Quesada, I.
1994-12-31
Recently Quesada and Grossmann have proposed a global optimization algorithm for solving NLP problems involving linear fractional and bilinear terms. This model has been motivated by a number of applications in process design. The proposed method relies on the derivation of a convex NLP underestimator problem that is used within a spatial branch and bound search. This paper explores the use of alternative bounding approximations for constructing the underestimator problem. These are applied in the global optimization of problems arising in different engineering areas and for which different relaxations are proposed depending on the mathematical structure of the models. These relaxations include linear and nonlinear underestimator problems. Reformulations that generate additional estimator functions are also employed. Examples from process design, structural design, portfolio investment and layout design are presented.
Optimized Null Model for Protein Structure Networks
Lappe, Michael; Pržulj, Nataša
2009-01-01
Much attention has recently been given to the statistical significance of topological features observed in biological networks. Here, we consider residue interaction graphs (RIGs) as network representations of protein structures, with residues as nodes and inter-residue interactions as edges. Degree-preserving randomized models have been widely used for this purpose in biomolecular networks. However, such a single summary statistic of a network may not be detailed enough to capture the complex topological characteristics of protein structures and their network counterparts. Here, we investigate a variety of topological properties of RIGs to find a well-fitting network null model for them. The RIGs are derived from a structurally diverse protein data set at various distance cut-offs and for different groups of interacting atoms. We compare the network structure of RIGs to several random graph models. We show that 3-dimensional geometric random graphs, which model spatial relationships between objects, provide the best fit to RIGs. We investigate the relationship between the strength of the fit and various protein structural features. We show that the fit depends on protein size, structural class, and thermostability, but not on quaternary structure. We apply our model to the identification of significantly over-represented structural building blocks, i.e., network motifs, in protein structure networks. As expected, choosing geometric graphs as a null model results in the most specific identification of motifs. Our geometric random graph model may facilitate further graph-based studies of protein conformation space and have important implications for protein structure comparison and prediction. The choice of a well-fitting null model is crucial for finding structural motifs that play an important role in protein folding, stability and function. To our knowledge, this is the first study that addresses the challenge of finding an optimized null model for RIGs, by
Modeling and Global Optimization of DNA separation
Fahrenkopf, Max A.; Ydstie, B. Erik; Mukherjee, Tamal; Schneider, James W.
2014-01-01
We develop a non-convex non-linear programming problem that determines the minimum run time to resolve different lengths of DNA using a gel-free micelle end-labeled free solution electrophoresis separation method. Our optimization framework allows for efficient determination of the utility of different DNA separation platforms and enables the identification of the optimal operating conditions for these DNA separation devices. The non-linear programming problem requires a model for signal spacing and signal width, which is known for many DNA separation methods. As a case study, we show how our approach is used to determine the optimal run conditions for micelle end-labeled free-solution electrophoresis and examine the trade-offs between a single capillary system and a parallel capillary system. Parallel capillaries are shown to only be beneficial for DNA lengths above 230 bases using a polydisperse micelle end-label; otherwise, single capillaries produce faster separations. PMID:24764606
Optimizing electroslag cladding with finite element modeling
Li, M.V.; Atteridge, D.G.; Meekisho, L.
1996-12-31
Electroslag cladding of nickel alloys onto carbon steel propeller shafts was optimized in terms of interpass temperatures. A two-dimensional finite element model was used in this study to analyze the heat transfer induced by multipass electroslag cladding. Changes of interpass temperatures during a cladding experiment with a uniform initial temperature distribution on a section of shaft were first simulated. It was concluded that a uniform initial temperature distribution would lead to interpass temperatures outside the optimal range if continuous cladding is expected. The difference in cooling conditions between experimental and full-size shafts, and its impact on interpass temperatures during cladding, were discussed. Electroslag cladding onto a much longer shaft, virtually a semi-infinite shaft, was analyzed with specific reference to the practical applications of electroslag cladding. An optimal initial preheating temperature distribution was obtained for continuous cladding on full-size shafts that keeps the interpass temperatures within the required range.
Combined optimization model for sustainable energization strategy
NASA Astrophysics Data System (ADS)
Abtew, Mohammed Seid
Access to energy is a foundation for establishing a positive impact on multiple aspects of human development. Both developed and developing countries share a common concern: achieving a sustainable energy supply to fuel economic growth and improve the quality of life with minimal environmental impacts. The Least Developed Countries (LDCs), however, have different economic, social, and energy systems. Prevalence of power outages, lack of access to electricity, structural dissimilarity between rural and urban regions, and the dominance of traditional fuels for cooking, with the resultant health and environmental hazards, are some of the distinguishing characteristics of these nations. Most energy planning models have been designed for the socio-economic demographics of developed countries and have missed the opportunity to address special features of the poor countries. An improved mixed-integer programming energy-source optimization model is developed to address limitations associated with using current energy optimization models for LDCs, to support development of sustainable energization strategies, and to ensure diversification and risk management provisions in the selected energy mix. The model predicted a shift from a traditional-fuel-reliant and weather-vulnerable energy source mix to a least-cost, reliable portfolio of modern clean energy sources; a climb up the energy ladder; and multifaceted economic, social, and environmental benefits. At the same time, it represented a transition strategy that evolves toward increasingly cleaner energy technologies with growth, as opposed to an expensive solution that leapfrogs immediately to the cleanest possible, overreaching technologies.
ITER central solenoid model coil impregnation optimization
NASA Astrophysics Data System (ADS)
Schutz, J. B.; Munshi, N. A.; Smith, K. B.
The success of the vacuum-pressure impregnation of the International Thermonuclear Experimental Reactor central solenoid is critical to the success of the magnet system. Analysis of fluid flow through a fabric bed is extremely complicated, and complete analytical solutions are not available, but semiempirical methods can be adapted to model these flows. Several such models were evaluated to predict the impregnation characteristics of a liquid resin through a mat of reinforcing glass fabric, and an experiment was performed to validate them. The effects of applied pressure differential, glass fibre volume fraction, resin viscosity, and impregnation time were examined analytically. From the results of this optimization, it is apparent that elevated-processing-temperature resin systems offer significant advantages in large-scale impregnation due to their lower viscosity and longer working life, and they may be essential for large-scale impregnations.
Centerline optimization using vessel quantification model
NASA Astrophysics Data System (ADS)
Cai, Wenli; Dachille, Frank; Meissner, Michael
2005-04-01
An accurate and reproducible centerline is needed in many vascular applications, such as virtual angioscopy, vessel quantification, and surgery planning. This paper presents a progressive optimization algorithm to refine a centerline after it is extracted. A new centerline model definition is proposed that allows a quantifiable minimum cross-sectional area. A centerline is divided into a number of segments, each corresponding to a local generalized cylinder. A reference frame (cross-section) is set up at the center point of each cylinder. The position and orientation of the cross-section are optimized within each cylinder by finding the minimum cross-sectional area. All locally optimized center points are approximated globally by a NURBS curve, and the curve is re-sampled to obtain the refined set of center points. This refinement iteration, local optimization plus global approximation, converges to the optimal centerline, yielding a smooth and accurate central axis curve. The applications discussed in this paper are vessel quantification and virtual angioscopy. However, the algorithm is a general centerline refinement method that can be applied to other applications that need accurate and reproducible centerlines.
Parameter optimization in S-system models
Vilela, Marco; Chou, I-Chun; Vinga, Susana; Vasconcelos, Ana Tereza R; Voit, Eberhard O; Almeida, Jonas S
2008-01-01
Background: The inverse problem of identifying the topology of biological networks from their time series responses is a cornerstone challenge in systems biology. We tackle this challenge here through the parameterization of S-system models. It was previously shown that parameter identification can be performed as an optimization based on the decoupling of the differential S-system equations, which results in a set of algebraic equations. Results: A novel parameterization solution is proposed for the identification of S-system models from time series when no information about the network topology is known. The method is based on eigenvector optimization of a matrix formed from multiple regression equations of the linearized decoupled S-system. Furthermore, the algorithm is extended to the optimization of network topologies with constraints on metabolites and fluxes. These constraints rejoin the system in cases where it had been fragmented by decoupling. We demonstrate with synthetic time series why the algorithm can be expected to converge in most cases. Conclusion: A procedure was developed that facilitates automated reverse engineering tasks for biological networks using S-systems. The proposed method of eigenvector optimization constitutes an advancement over S-system parameter identification from time series using a recent method called alternating regression. The proposed method overcomes convergence issues encountered in alternating regression by identifying nonlinear constraints that restrict the search space to computationally feasible solutions. Because the parameter identification is still performed for each metabolite separately, the modularity and linear time characteristics of the alternating regression method are preserved. Simulation studies illustrate how the proposed algorithm identifies the correct network topology out of a collection of models which all fit the dynamical time series essentially equally well. PMID:18416837
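For reference, the S-system form whose parameters are being identified is dX_i/dt = alpha_i * prod_j X_j^g_ij - beta_i * prod_j X_j^h_ij. A minimal forward-simulation sketch of this form (toy values; this is not the paper's eigenvector-based identification algorithm):

```python
def s_system_step(x, alpha, beta, g, h, dt):
    """One Euler step of an S-system:
    dX_i/dt = alpha_i * prod_j X_j**g[i][j] - beta_i * prod_j X_j**h[i][j].
    All parameter values are illustrative."""
    n = len(x)
    dx = []
    for i in range(n):
        prod_g = 1.0
        prod_h = 1.0
        for j in range(n):
            prod_g *= x[j] ** g[i][j]
            prod_h *= x[j] ** h[i][j]
        dx.append(alpha[i] * prod_g - beta[i] * prod_h)
    return [x[i] + dt * dx[i] for i in range(n)]
```

Decoupling replaces the coupled derivatives by estimated slopes, turning identification into per-metabolite regression, which is what the eigenvector optimization operates on.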
Modeling, Analysis, and Optimization Issues for Large Space Structures
NASA Technical Reports Server (NTRS)
Pinson, L. D. (Compiler); Amos, A. K. (Compiler); Venkayya, V. B. (Compiler)
1983-01-01
Topics concerning the modeling, analysis, and optimization of large space structures are discussed including structure-control interaction, structural and structural dynamics modeling, thermal analysis, testing, and design.
Extremal Optimization for p-Spin Models
NASA Astrophysics Data System (ADS)
Falkner, Stefan; Boettcher, Stefan
2012-02-01
It was shown recently that finding ground states in the 3-spin model on a two-dimensional triangular lattice poses an NP-hard problem [1]. We use the extremal optimization (EO) heuristic [2] to explore ground state energies and finite-size scaling corrections [3]. EO predicts the thermodynamic ground state energy with high accuracy, based on the observation that finite-size corrections appear to decay purely with system size. Just as found in 3-spin models on r-regular graphs, there are no noticeable anomalous corrections to these energies. Interestingly, the results are sufficiently accurate to detect alternating patterns in the energies when the lattice size L is divisible by 6. Although ground states seem very prolific and might seem easy to obtain with simple greedy algorithms, our tests show significant improvement in the data with EO. [1] PRE 83 (2011) 046709; [2] PRL 86 (2001) 5211; [3] S. Boettcher and S. Falkner (in preparation).
Optimal evolution models for quantum tomography
NASA Astrophysics Data System (ADS)
Czerwiński, Artur
2016-02-01
The research presented in this article concerns the stroboscopic approach to quantum tomography, an area of science where quantum physics and linear algebra overlap. In this article we introduce the algebraic structure of parameter-dependent quantum channels for 2-level and 3-level systems such that the generator of evolution corresponding to the Kraus operators has no degenerate eigenvalues. In such cases the index of cyclicity of the generator is equal to 1, which physically means that there exists one observable whose measurement, performed a sufficient number of times at distinct instants, provides enough data to reconstruct the initial density matrix and, consequently, the trajectory of the state. The necessary conditions for the parameters and relations between them are introduced. The results presented in this paper seem to have considerable potential applications in experiments, since one can perform quantum tomography by conducting only one kind of measurement. Therefore, the analyzed evolution models can be considered optimal in the context of quantum tomography. Finally, we introduce some remarks concerning optimal evolution models in the case of an n-dimensional Hilbert space.
Designing Sensor Networks by a Generalized Highly Optimized Tolerance Model
NASA Astrophysics Data System (ADS)
Miyano, Takaya; Yamakoshi, Miyuki; Higashino, Sadanori; Tsutsui, Takako
A variant of the highly optimized tolerance model is applied to a toy problem of bioterrorism to determine the optimal arrangement of hypothetical bio-sensors to avert an epidemic outbreak. A nonlinear loss function is utilized in searching for the optimal structure of the sensor network. The proposed method successfully averts disastrously large events, which cannot be achieved by the original highly optimized tolerance model.
Application of simulation models for the optimization of business processes
NASA Astrophysics Data System (ADS)
Jašek, Roman; Sedláček, Michal; Chramcov, Bronislav; Dvořák, Jiří
2016-06-01
The paper deals with applications of modeling and simulation tools in the optimization of business processes, especially in optimizing signal flow in a security company. Simul8 was selected as the modeling tool; it performs process modeling based on discrete event simulation and enables the creation of visual models of production and distribution processes.
Model Identification for Optimal Diesel Emissions Control
Stevens, Andrew J.; Sun, Yannan; Song, Xiaobo; Parker, Gordon
2013-06-20
In this paper we develop a model-based controller for diesel emission reduction using system identification methods. Specifically, our method minimizes the downstream readings from a production NOx sensor while injecting a minimal amount of urea upstream. Based on the linear quadratic estimator, we derive the closed-form solution to a cost function that accounts for the case where some of the system inputs are not controllable. Our cost function can also be tuned to trade off between input usage and output optimization. Our approach performs better than a production controller in simulation: our NOx conversion efficiency was 92.7%, while the production controller achieved 92.4%. For NH3 conversion, our efficiency was 98.7%, compared to 88.5% for the production controller.
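The input-versus-output trade-off in a linear-quadratic cost can be illustrated with a scalar discrete-time Riccati recursion; this is a generic LQR sketch with made-up plant numbers, not the paper's closed-form estimator.

```python
def scalar_dlqr(a, b, q, r, iters=500):
    """Discrete-time scalar LQR gain by iterating the Riccati recursion
    P <- Q + A P A - A P B (R + B P B)^-1 B P A. A larger r penalizes
    input usage (e.g., urea injection); a smaller r favors output
    (e.g., NOx) regulation. All values here are illustrative."""
    p = q
    for _ in range(iters):
        k = (b * p * a) / (r + b * p * b)
        p = q + a * p * a - a * p * b * k
    return k
```

Raising the input weight r shrinks the feedback gain, i.e., the controller spends less input per unit of output error.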
Optimization approaches to nonlinear model predictive control
Biegler, L.T. (Dept. of Chemical Engineering); Rawlings, J.B. (Dept. of Chemical Engineering)
1991-01-01
With the development of sophisticated methods for nonlinear programming and powerful computer hardware, it now becomes useful and efficient to formulate and solve nonlinear process control problems through on-line optimization methods. This paper explores and reviews control techniques based on repeated solution of nonlinear programming (NLP) problems. Here several advantages present themselves. These include minimization of readily quantifiable objectives, coordinated and accurate handling of process nonlinearities and interactions, and systematic ways of dealing with process constraints. We motivate this NLP-based approach with small nonlinear examples and present a basic algorithm for optimization-based process control. As can be seen, this approach is a straightforward extension of popular model-predictive controllers (MPCs) that are used for linear systems. The statement of the basic algorithm raises a number of questions regarding stability and robustness of the method, efficiency of the control calculations, incorporation of feedback into the controller, and reliable ways of handling process constraints. Each of these will be treated through analysis and/or modification of the basic algorithm. To highlight and support this discussion, several examples are presented and key results are examined and further developed. 74 refs., 11 figs.
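The basic receding-horizon idea, repeatedly optimizing an open-loop input sequence and applying only its first move, can be sketched as follows; exhaustive search over a small discrete input set stands in for a real NLP solver, and the model and weights are illustrative.

```python
from itertools import product

def mpc_control(x0, model, horizon=5, candidates=(-1.0, -0.5, 0.0, 0.5, 1.0)):
    """Basic optimization-based MPC: search a small discrete input set
    over the horizon (a stand-in for an NLP solver) and return the first
    move of the best sequence (receding horizon)."""
    best_u, best_cost = None, float("inf")
    for seq in product(candidates, repeat=horizon):
        x, cost = x0, 0.0
        for u in seq:
            x = model(x, u)          # possibly nonlinear plant model
            cost += x * x + 0.1 * u * u  # output tracking + input penalty
        if cost < best_cost:
            best_cost, best_u = cost, seq[0]
    return best_u
```

At the next sampling instant the optimization is repeated from the newly measured state, which is how feedback enters the scheme.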
Optimized Markov state models for metastable systems
NASA Astrophysics Data System (ADS)
Guarnera, Enrico; Vanden-Eijnden, Eric
2016-07-01
A method is proposed to identify target states that optimize a metastability index amongst a set of trial states, and to use these target states as milestones (or core sets) to build Markov state models (MSMs). If the optimized metastability index is small, this automatically guarantees the accuracy of the MSM, in the sense that the transitions between the target milestones are indeed approximately Markovian. The method is simple to implement and use; it does not require that the dynamics on the trial milestones be Markovian, and it also offers the possibility to partition the system's state space by assigning every trial milestone to the target milestone it is most likely to visit next, and to identify transition-state regions. Here the method is tested on the Gly-Ala-Gly peptide, where it is shown to correctly identify the expected metastable states in the dihedral angle space of the molecule without a priori information about these states. It is also applied to analyze the folding landscape of the Beta3s mini-protein, where it is shown to identify the folded basin as a connecting hub between a helix-rich region, which is entropically stabilized, and a beta-rich region, which is energetically stabilized and acts as a kinetic trap.
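Once target milestones are chosen, the MSM transition matrix is estimated from the sequence of milestones visited along a trajectory. A minimal counting sketch (this is the standard estimator, not the authors' metastability-index optimization):

```python
def msm_from_milestones(traj, n_states):
    """Estimate an MSM transition matrix by counting transitions between
    successively visited milestones (core sets). `traj` is a sequence of
    milestone indices; rows of the result are normalized to sum to 1."""
    counts = [[0] * n_states for _ in range(n_states)]
    for a, b in zip(traj, traj[1:]):
        counts[a][b] += 1
    T = []
    for row in counts:
        s = sum(row)
        T.append([c / s if s else 0.0 for c in row])
    return T
```

The quality guarantee in the paper is that, for well-chosen target milestones, this milestone-to-milestone process is approximately Markovian.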
Determining Reduced Order Models for Optimal Stochastic Reduced Order Models
Bonney, Matthew S.; Brake, Matthew R.W.
2015-08-01
The use of parameterized reduced order models (PROMs) within the stochastic reduced order model (SROM) framework is a logical progression for both methods. In this report, five different parameterized reduced order models are selected and critiqued against one another, along with the truth model, for the example of the Brake-Reuss beam. The models are: a Taylor series using finite differences, a proper orthogonal decomposition of the output, a Craig-Bampton representation of the model, a method that uses hyper-dual numbers to determine the sensitivities, and a meta-model method that uses the hyper-dual results and constructs a polynomial curve to better represent the output data. The methods are compared against a parameter sweep and a distribution propagation where the first four statistical moments are used as a comparison. Each method produces very accurate results, with the Craig-Bampton reduction being the least accurate. The models are also compared based on the time required to evaluate each model, where the meta-model requires the least computation time by a significant margin. Each of the five models provided accurate results in a reasonable time frame. The determination of which model to use depends on the availability of the high-fidelity model and how many evaluations can be performed. The output distribution is examined using a large Monte Carlo simulation along with a reduced simulation using Latin hypercube sampling and the stochastic reduced order model sampling technique. Both techniques produced accurate results. The stochastic reduced order modeling technique produced less error when compared to an exhaustive sampling for the majority of methods.
Response Surface Model Building and Multidisciplinary Optimization Using D-Optimal Designs
NASA Technical Reports Server (NTRS)
Unal, Resit; Lepsch, Roger A.; McMillin, Mark L.
1998-01-01
This paper discusses response surface methods for approximation model building and multidisciplinary design optimization. The response surface methods discussed are central composite designs, Bayesian methods, and D-optimal designs. An over-determined D-optimal design is applied to a configuration design and optimization study of a wing-body launch vehicle. Results suggest that over-determined D-optimal designs may provide an efficient approach for approximation model building and for multidisciplinary design optimization.
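The D-optimality criterion behind such designs selects the experimental runs that maximize det(X^T X), where X is the design matrix. A toy exhaustive search for a two-column design (hypothetical candidate settings, not the launch-vehicle study's variables):

```python
from itertools import combinations

def d_optimality(rows):
    """Determinant of X^T X for a two-column design matrix X,
    computed directly for the 2x2 case."""
    a = sum(r[0] * r[0] for r in rows)
    b = sum(r[0] * r[1] for r in rows)
    d = sum(r[1] * r[1] for r in rows)
    return a * d - b * b

def best_design(candidates, n_runs):
    """Pick the n_runs-point subset of candidate settings maximizing the
    D-optimality criterion (exhaustive search; real tools use exchange
    algorithms for larger problems)."""
    return max(combinations(candidates, n_runs), key=d_optimality)
```

Maximizing det(X^T X) minimizes the volume of the confidence ellipsoid of the fitted response-surface coefficients.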
Geomagnetic modeling by optimal recursive filtering
NASA Technical Reports Server (NTRS)
Gibbs, B. P.; Estes, R. H.
1981-01-01
The results of a preliminary study to determine the feasibility of using Kalman filter techniques for geomagnetic field modeling are given. Specifically, five separate field models were computed using observatory annual means, satellite, survey and airborne data for the years 1950 to 1976. Each of the individual field models used approximately five years of data. These five models were combined using a recursive information filter (a Kalman filter written in terms of information matrices rather than covariance matrices). The resulting estimate of the geomagnetic field and its secular variation was propagated four years past the data to the time of the MAGSAT data. The accuracy with which this field model matched the MAGSAT data was evaluated by comparisons with predictions from other pre-MAGSAT field models. The field estimate obtained by recursive estimation was found to be superior to all other models.
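The core of the information-filter combination is that independent estimates fuse by adding information (inverse variances). A scalar sketch of that principle (the actual filter operates on full information matrices over the field coefficients):

```python
def fuse_information(models):
    """Combine independent estimates by summing information. `models` is
    a list of (estimate, variance) pairs; returns the fused estimate and
    its variance. Scalar analogue of the recursive information filter."""
    info = sum(1.0 / var for _, var in models)
    mean = sum(est / var for est, var in models) / info
    return mean, 1.0 / info
```

Because information adds, the five epoch models can be folded in one at a time with the same result as a batch combination.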
Integrative systems modeling and multi-objective optimization
This presentation presents a number of algorithms, tools, and methods for utilizing multi-objective optimization within integrated systems modeling frameworks. We first present innovative methods using a genetic algorithm to optimally calibrate the VELMA and SWAT ecohydrological ...
Quantitative Modeling and Optimization of Magnetic Tweezers
Lipfert, Jan; Hao, Xiaomin; Dekker, Nynke H.
2009-01-01
Abstract Magnetic tweezers are a powerful tool to manipulate single DNA or RNA molecules and to study nucleic acid-protein interactions in real time. Here, we have modeled the magnetic fields of permanent magnets in magnetic tweezers and computed the forces exerted on superparamagnetic beads from first principles. For simple, symmetric geometries the magnetic fields can be calculated semianalytically using the Biot-Savart law. For complicated geometries and in the presence of an iron yoke, we employ a finite-element three-dimensional PDE solver to numerically solve the magnetostatic problem. The theoretical predictions are in quantitative agreement with direct Hall-probe measurements of the magnetic field and with measurements of the force exerted on DNA-tethered beads. Using these predictive theories, we systematically explore the effects of magnet alignment, magnet spacing, magnet size, and of adding an iron yoke to the magnets on the forces that can be exerted on tethered particles. We find that the optimal configuration for maximal stretching forces is a vertically aligned pair of magnets, with a minimal gap between the magnets and minimal flow cell thickness. Following these principles, we present a configuration that allows one to apply ≥40 pN stretching forces on ≈1-μm tethered beads. PMID:19527664
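On the symmetry axis of a current loop the Biot-Savart integral mentioned in the abstract has a simple closed form, which makes a convenient sanity check for a segment-by-segment numerical sum. The sketch below (assumed geometry and values, not the paper's solver) compares the two; off-axis points or an iron yoke require the full vector sum or a finite-element magnetostatic solver, as the paper describes.

```python
import math

# On the loop axis: B_z = mu0 * I * R^2 / (2 * (R^2 + z^2)**1.5).
# We check a direct Biot-Savart sum over discrete loop segments against it.

MU0 = 4e-7 * math.pi  # vacuum permeability (T*m/A)

def bz_analytic(current, radius, z):
    return MU0 * current * radius**2 / (2.0 * (radius**2 + z**2) ** 1.5)

def bz_numeric(current, radius, z, n=2000):
    """Sum dB_z over n loop segments; on-axis each segment contributes equally."""
    bz = 0.0
    for k in range(n):
        dl = 2.0 * math.pi * radius / n           # segment length
        # For an axial field point, projecting dB onto z gives
        # dB_z = mu0 * I * dl * R / (4 * pi * r^3) with r = sqrt(R^2 + z^2).
        r = math.sqrt(radius**2 + z**2)
        bz += MU0 * current * dl * radius / (4.0 * math.pi * r**3)
    return bz

b_a = bz_analytic(1.0, 0.01, 0.005)   # 1 A loop, 1 cm radius, 5 mm above center
b_n = bz_numeric(1.0, 0.01, 0.005)
```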
Optimal estimator model for human spatial orientation
NASA Technical Reports Server (NTRS)
Borah, J.; Young, L. R.; Curry, R. E.
1979-01-01
A model is being developed to predict pilot dynamic spatial orientation in response to multisensory stimuli. Motion stimuli are first processed by dynamic models of the visual, vestibular, tactile, and proprioceptive sensors. Central nervous system function is then modeled as a steady-state Kalman filter which blends information from the various sensors to form an estimate of spatial orientation. Where necessary, this linear central estimator has been augmented with nonlinear elements to reflect more accurately some highly nonlinear human response characteristics. Computer implementation of the model has shown agreement with several important qualitative characteristics of human spatial orientation, and it is felt that with further modification and additional experimental data the model can be improved and extended. Possible means are described for extending the model to better represent the active pilot with varying skill and work load levels.
A MILP-Model for the Optimization of Transports
NASA Astrophysics Data System (ADS)
Björk, Kaj-Mikael
2010-09-01
This paper presents work on developing a mathematical model for the optimization of transports. The decisions to be made are routing decisions, truck assignment, and the determination of the pickup order for a set of loads and available trucks. The model presented takes these aspects into account simultaneously. The MILP model is implemented in the Microsoft Excel environment, utilizing the LP-solve freeware as the optimization engine and Visual Basic for Applications as the modeling interface.
Optimal Scaling of Interaction Effects in Generalized Linear Models
ERIC Educational Resources Information Center
van Rosmalen, Joost; Koning, Alex J.; Groenen, Patrick J. F.
2009-01-01
Multiplicative interaction models, such as Goodman's (1981) RC(M) association models, can be a useful tool for analyzing the content of interaction effects. However, most models for interaction effects are suitable only for data sets with two or three predictor variables. Here, we discuss an optimal scaling model for analyzing the content of…
On Optimal Input Design and Model Selection for Communication Channels
Li, Yanyan; Djouadi, Seddik M; Olama, Mohammed M
2013-01-01
In this paper, the optimal model (structure) selection and input design which minimize the worst-case identification error for communication systems are provided. The problem is formulated using metric complexity theory in a Hilbert space setting. It is pointed out that model selection and input design can be handled independently. The Kolmogorov n-width is used to characterize the representation error introduced by model selection, while the Gel'fand and time n-widths are used to represent the inherent error introduced by input design. After the model is selected, an optimal input which minimizes the worst-case identification error is shown to exist. In particular, it is proven that the optimal model for reducing the representation error is a Finite Impulse Response (FIR) model, and the optimal input is an impulse at the start of the observation interval. FIR models are widely used in communication systems, such as in Orthogonal Frequency Division Multiplexing (OFDM) systems.
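The intuition behind the impulse-input result can be sketched with a toy FIR channel (the taps below are hypothetical, and this is only the noiseless intuition, not the paper's worst-case analysis): driving an FIR channel with a unit impulse at the start of the observation interval makes the output sequence equal the impulse response, so the taps are read off directly.

```python
# Illustrative sketch: identify an FIR channel from its response to an impulse.

def fir_output(taps, inp):
    """Convolve input with FIR taps (causal, zero initial state)."""
    out = []
    for n in range(len(inp)):
        acc = 0.0
        for k, h in enumerate(taps):
            if n - k >= 0:
                acc += h * inp[n - k]
        out.append(acc)
    return out

true_taps = [0.9, 0.4, -0.2, 0.05]               # hypothetical channel
impulse = [1.0, 0.0, 0.0, 0.0, 0.0, 0.0]         # impulse at the interval start
observed = fir_output(true_taps, impulse)
identified = observed[: len(true_taps)]           # taps read off the output
```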
Model Specification Searches Using Ant Colony Optimization Algorithms
ERIC Educational Resources Information Center
Marcoulides, George A.; Drezner, Zvi
2003-01-01
Ant colony optimization is a recently proposed heuristic procedure inspired by the behavior of real ants. This article applies the procedure to model specification searches in structural equation modeling and reports the results. The results demonstrate the capabilities of ant colony optimization algorithms for conducting automated searches.
Stochastic Robust Mathematical Programming Model for Power System Optimization
Liu, Cong; Changhyeok, Lee; Haoyong, Chen; Mehrotra, Sanjay
2016-01-01
This paper presents a stochastic robust framework for two-stage power system optimization problems with uncertainty. The model optimizes the probabilistic expectation of different worst-case scenarios with different uncertainty sets. A case study of unit commitment shows the effectiveness of the proposed model and algorithms.
Optimal Model Discovery of Periodic Variable Stars
NASA Astrophysics Data System (ADS)
Bellinger, Earl Patrick; Kanbur, Shashi; Wysocki, Daniel
2015-01-01
Precision modeling of periodic variable stars is important for various pursuits such as establishing the extragalactic distance scale and measuring the Hubble constant. Many difficulties exist, however, when attempting to model the light curves of these objects, as photometric observations of variable stars tend to be noisy and sparsely sampled. As a consequence, existing methods commonly fail to produce models that accurately describe their light curves. In this talk, I introduce a new machine learning approach for modeling light curves of periodic variables that is robust to the presence of these effects. I demonstrate this method on fifty thousand Cepheid and RR Lyrae variable stars in the galaxy as well as the Magellanic Clouds and show that it significantly outperforms existing methods.
A new algorithm for L2 optimal model reduction
NASA Technical Reports Server (NTRS)
Spanos, J. T.; Milman, M. H.; Mingori, D. L.
1992-01-01
In this paper the quadratically optimal model reduction problem for single-input, single-output systems is considered. The reduced order model is determined by minimizing the integral of the magnitude-squared of the transfer function error. It is shown that the numerator coefficients of the optimal approximant satisfy a weighted least squares problem and, on this basis, a two-step iterative algorithm is developed combining a least squares solver with a gradient minimizer. Convergence of the proposed algorithm to stationary values of the quadratic cost function is proved. The formulation is extended to handle the frequency-weighted optimal model reduction problem. Three examples demonstrate the optimization algorithm.
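A minimal numerical sketch of the paper's two-step idea follows (toy plant, frequency sampling in place of the exact integral, and a grid search standing in for the gradient step): for a fixed pole a, the numerator c of the first-order approximant c/(s+a) that minimizes the sampled squared transfer-function error has a closed-form least-squares solution, mirroring the weighted least squares step of the algorithm.

```python
# Hypothetical 2nd-order SISO plant reduced to a 1st-order model c/(s+a).

def full_model(w):
    s = 1j * w
    return 1.0 / (s + 1.0) + 0.5 / (s + 3.0)

def l2_error(a, freqs):
    """Sampled L2 error for pole a, with the numerator c solved in closed form."""
    G = [1.0 / (1j * w + a) for w in freqs]       # basis response for pole a
    H = [full_model(w) for w in freqs]
    # least-squares numerator: c = sum Re(conj(G)*H) / sum |G|^2
    c = (sum((g.conjugate() * h).real for g, h in zip(G, H))
         / sum(abs(g) ** 2 for g in G))
    err = sum(abs(h - c * g) ** 2 for g, h in zip(G, H))
    return err, c

freqs = [0.01 * k for k in range(1, 1001)]        # sample 0.01..10 rad/s
best = min((l2_error(a, freqs) + (a,) for a in [0.5 + 0.05 * i for i in range(50)]),
           key=lambda t: t[0])
err_opt, c_opt, a_opt = best
```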
Mathematical Model For Engineering Analysis And Optimization
NASA Technical Reports Server (NTRS)
Sobieski, Jaroslaw
1992-01-01
Computational support for the engineering design process reveals the behavior of a designed system in response to external stimuli, and shows how that behavior is modified by changing the physical attributes of the system. System-sensitivity analysis combined with extrapolation forms a model of the design complementary to the model of behavior, capable of directly simulating the effects of changes in design variables. The algorithms developed for this method are applicable to the design of large engineering systems, especially those consisting of several subsystems involving many disciplines.
An optimization strategy for a biokinetic model of inhaled radionuclides
Shyr, L.J.; Griffith, W.C.; Boecker, B.B.
1991-04-01
Models for material disposition and dosimetry involve predictions of the biokinetics of the material among compartments representing organs and tissues in the body. Because of a lack of human data for most toxicants, many of the basic data are derived by modeling the results obtained from studies using laboratory animals. Such a biomathematical model is usually developed by adjusting the model parameters to make the model predictions match the measured retention and excretion data visually. The fitting process can be very time-consuming for a complicated model, and visual model selections may be subjective and easily biased by the scale or the data used. Due to the development of computerized optimization methods, manual fitting could benefit from an automated process. However, for a complicated model, an automated process without an optimization strategy will not be efficient, and may not produce fruitful results. In this paper, procedures for, and implementation of, an optimization strategy for a complicated mathematical model are demonstrated by optimizing a biokinetic model for 144Ce in fused aluminosilicate particles inhaled by beagle dogs. The optimized results using SimuSolv were compared to manual fitting results obtained previously using the model simulation software GASP. Also, statistical criteria provided by SimuSolv, such as likelihood function values, were used to help or verify visual model selections.
Optimal control of a delayed SLBS computer virus model
NASA Astrophysics Data System (ADS)
Chen, Lijuan; Hattaf, Khalid; Sun, Jitao
2015-06-01
In this paper, a delayed SLBS computer virus model is first proposed. To the best of our knowledge, this is the first discussion of optimal control for the SLBS model. By using the optimal control strategy, we present an optimal strategy to minimize the total number of breaking-out computers and the cost associated with toxication or detoxication. We show that an optimal control solution exists for the control problem. Some examples are presented to show the efficiency of this optimal control.
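A compartment model of this kind can be sketched in a few lines. The rate equations and parameter values below are illustrative assumptions, not the paper's delayed model; the control u is a constant detoxication effort applied to breaking-out machines, and forward-Euler integration stands in for a proper ODE solver.

```python
# Toy SLBS (Susceptible-Latent-Breaking-out) computer-virus dynamics with a
# constant control u. All rates are hypothetical; fractions sum to 1.

def simulate_slbs(beta=0.5, alpha=0.2, gamma=0.1, u=0.3, dt=0.01, steps=5000):
    S, L, B = 0.99, 0.01, 0.0           # fractions of the computer population
    for _ in range(steps):
        infect = beta * S * (L + B)      # susceptible machines becoming latent
        burst = alpha * L                # latent machines breaking out
        cure = (gamma + u) * B           # recovery, boosted by the control u
        S += dt * (cure - infect)
        L += dt * (infect - burst)
        B += dt * (burst - cure)
    return S, L, B

S, L, B = simulate_slbs()
```

Increasing u shrinks the breaking-out compartment at the price of a larger control cost, which is the trade-off the paper's optimal control balances.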
Hierarchical models and iterative optimization of hybrid systems
NASA Astrophysics Data System (ADS)
Rasina, Irina V.; Baturina, Olga V.; Nasatueva, Soelma N.
2016-06-01
A class of hybrid control systems based on a two-level discrete-continuous model is considered. The concept of this model was proposed and developed in preceding works as a concretization of the general multi-step system with related optimality conditions. A new iterative optimization procedure for such systems is developed, based on localization of the global optimality conditions via contraction of the control set.
Multipurpose optimization models for high level waste vitrification
Hoza, M.
1994-08-01
Optimal Waste Loading (OWL) models have been developed as multipurpose tools for high-level waste studies for the Tank Waste Remediation Program at Hanford. Using nonlinear programming techniques, these models maximize the waste loading of the vitrified waste and optimize the glass formers composition such that the glass produced has the appropriate properties within the melter, and the resultant vitrified waste form meets the requirements for disposal. The OWL model can be used for a single waste stream or for blended streams. The models can determine optimal continuous blends or optimal discrete blends of a number of different wastes. The OWL models have been used to identify the most restrictive constraints, to evaluate prospective waste pretreatment methods, to formulate and evaluate blending strategies, and to determine the impacts of variability in the wastes. The OWL models will be used to aid in the design of frits and to maximize the waste loading in the glass for High-Level Waste (HLW) vitrification.
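The "most restrictive constraint" idea can be shown with a much-simplified sketch (hypothetical property models and limits, not the OWL model's actual glass chemistry): if each property is approximated as linear in the waste fraction w, every constraint caps w, and the maximum loading is set by the tightest cap.

```python
# Each glass property is approximated as p(w) = p0 + slope * w <= limit,
# so w <= (limit - p0) / slope for slope > 0. Numbers are made up.

constraints = {
    "viscosity (Pa*s)":         {"p0": 2.0,   "slope": 10.0,  "limit": 6.2},
    "liquidus temp (C)":        {"p0": 900.0, "slope": 500.0, "limit": 1085.0},
    "PCT boron release (g/m2)": {"p0": 0.2,   "slope": 2.0,   "limit": 1.1},
}

caps = {name: (c["limit"] - c["p0"]) / c["slope"] for name, c in constraints.items()}
binding = min(caps, key=caps.get)     # the most restrictive constraint
max_loading = caps[binding]
```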
Optimization modeling for industrial waste reduction planning
Roberge, H.D.; Baetz, B.W. (Dept. of Civil Engineering)
1994-01-01
A model is developed for planning the implementation of industrial waste reduction and waste management strategies. The model is based on minimizing the overall cost of waste reduction and waste management for an industrial facility over a certain time period. The problem is formulated as a general mixed integer linear programming (MILP) problem, where the objective function includes capital and operating costs and is subject to a number of constraints that define the system under consideration. The information required to use the modeling approach includes the capital and operating costs of the various options being considered, discount rates, escalation factors, the capacity limitations on various options for waste treatment, disposal and management, as well as treatment efficiencies and the potential for waste reduction. The general modeling approach is applied to a case study facility. The MILP formulation was solved using a commercially available software package. The model could be used by an environmental engineer or a planner in an industry that is considering implementing waste reduction projects. Ideally, the industry would have generated information on modifications that could reduce their waste generation, as well as information on their current waste management practices. In the event that specific waste reduction projects have not been identified, the economic feasibility of potential future projects could be determined.
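A toy analogue of this MILP can be written down directly (hypothetical projects, costs, and reduction target, and brute-force enumeration standing in for the commercial MILP solver): choose a subset of waste-reduction projects minimizing total cost, including the disposal cost of residual waste, subject to a required reduction.

```python
from itertools import product

projects = [  # (name, capital + operating cost in k$, waste reduction in t/yr)
    ("solvent recovery", 120.0, 30.0),
    ("process water reuse", 80.0, 20.0),
    ("input substitution", 150.0, 45.0),
    ("segregation program", 40.0, 10.0),
]
required_reduction = 50.0
disposal_cost_per_t = 2.0       # cost of residual waste still disposed (k$/t)
baseline_waste = 100.0

best_cost, best_pick = float("inf"), None
for pick in product([0, 1], repeat=len(projects)):   # binary decision variables
    reduction = sum(x * p[2] for x, p in zip(pick, projects))
    if reduction < required_reduction:               # infeasible selection
        continue
    cost = (sum(x * p[1] for x, p in zip(pick, projects))
            + disposal_cost_per_t * max(0.0, baseline_waste - reduction))
    if cost < best_cost:
        best_cost, best_pick = cost, pick
```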
Geomagnetic field modeling by optimal recursive filtering
NASA Technical Reports Server (NTRS)
1980-01-01
Five individual 5 year mini-batch geomagnetic models were generated and two computer programs were developed to process the models. The first program computes statistics (mean sigma, weighted sigma) on the changes in the first derivatives (linear terms) of the spherical harmonic coefficients between mini-batches. The program ran successfully. The statistics are intended for use in computing the state noise matrix required in the information filter. The second program is the information filter. Most subroutines used in the filter were tested, but the coefficient statistics must be analyzed before the filter is run.
Optimizing glassy p-spin models
NASA Astrophysics Data System (ADS)
Thomas, Creighton K.; Katzgraber, Helmut G.
2011-04-01
Computing the ground state of Ising spin-glass models with p-spin interactions is, in general, an NP-hard problem. In this work we show that unlike in the case of the standard Ising spin glass with two-spin interactions, computing ground states with p=3 is an NP-hard problem even in two space dimensions. Furthermore, we present generic exact and heuristic algorithms for finding ground states of p-spin models with high confidence for systems of up to several thousand spins.
COBRA-SFS modifications and cask model optimization
Rector, D.R.; Michener, T.E.
1989-01-01
Spent-fuel storage systems are complex systems and developing a computational model for one can be a difficult task. The COBRA-SFS computer code provides many capabilities for modeling the details of these systems, but these capabilities can also allow users to specify a more complex model than necessary. This report provides important guidance to users that dramatically reduces the size of the model while maintaining the accuracy of the calculation. A series of model optimization studies was performed, based on the TN-24P spent-fuel storage cask, to determine the optimal model geometry. Expanded modeling capabilities of the code are also described. These include adding fluid shear stress terms and a detailed plenum model. The mathematical models for each code modification are described, along with the associated verification results. 22 refs., 107 figs., 7 tabs.
Multi-objective parameter optimization of common land model using adaptive surrogate modelling
NASA Astrophysics Data System (ADS)
Gong, W.; Duan, Q.; Li, J.; Wang, C.; Di, Z.; Dai, Y.; Ye, A.; Miao, C.
2014-06-01
Parameter specification usually has a significant influence on the performance of land surface models (LSMs). However, estimating the parameters properly is a challenging task for the following reasons: (1) LSMs usually have many adjustable parameters (20-100 or even more), leading to the curse of dimensionality in the parameter input space; (2) LSMs usually have many output variables involving water/energy/carbon cycles, so that calibrating LSMs is actually a multi-objective optimization problem; (3) regional LSMs are expensive to run, while conventional multi-objective optimization methods need a huge number of model runs (typically 10^5-10^6), making parameter optimization computationally prohibitive. An uncertainty quantification framework was developed to meet these challenges: (1) use parameter screening to reduce the number of adjustable parameters; (2) use surrogate models to emulate the response of dynamic models to the variation of adjustable parameters; (3) use an adaptive strategy to promote the efficiency of surrogate-model-based optimization; (4) use a weighting function to transform multi-objective optimization into single-objective optimization. In this study, we demonstrate the uncertainty quantification framework on a single-column case study of a land surface model, the Common Land Model (CoLM), and evaluate the effectiveness and efficiency of the proposed framework. The results indicate that this framework can achieve an optimal parameter set using only 411 model runs in total, and is worth extending to other large, complex dynamic models, such as regional land surface models, atmospheric models, and climate models.
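Step (4) of the framework, the weighting function, can be illustrated with a deliberately tiny sketch (the two "objectives" are synthetic stand-ins for, e.g., water-flux and energy-flux errors, and a parameter grid stands in for the surrogate-assisted search): a weighted sum turns the multi-objective calibration into a single-objective one, and sweeping the weight traces out the trade-off.

```python
# Two hypothetical calibration error measures over one parameter p in [0, 1].

def f_water(p):
    return (p - 0.3) ** 2      # minimized at p = 0.3

def f_energy(p):
    return (p - 0.7) ** 2      # minimized at p = 0.7

def weighted_best(w, grid):
    """Minimize the scalarized objective w*f_water + (1-w)*f_energy."""
    return min(grid, key=lambda p: w * f_water(p) + (1 - w) * f_energy(p))

grid = [i / 1000 for i in range(1001)]
p_water = weighted_best(1.0, grid)    # all weight on water flux -> p = 0.3
p_energy = weighted_best(0.0, grid)   # all weight on energy flux -> p = 0.7
p_mid = weighted_best(0.5, grid)      # equal weights -> the compromise p = 0.5
```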
Optimal Experimental Design for Model Discrimination
ERIC Educational Resources Information Center
Myung, Jay I.; Pitt, Mark A.
2009-01-01
Models of a psychological process can be difficult to discriminate experimentally because it is not easy to determine the values of the critical design variables (e.g., presentation schedule, stimulus structure) that will be most informative in differentiating them. Recent developments in sampling-based search methods in statistics make it…
An integrated model for optimizing weld quality
Zacharia, T.; Radhakrishnan, B.; Paul, A.J.; Cheng, C.
1995-06-01
Welding has evolved in the last few decades from almost an empirical art to an activity embodying the most advanced tools of various basic and applied sciences. Significant progress has been made in understanding the welding process and welded materials. The improved knowledge base has been useful in automation and process control. In view of the large number of variables involved, creating an adequately large database to understand and control the welding process is expensive and time consuming, if not impractical. A recourse is to simulate welding processes through a set of mathematical equations representing the essential physical processes of welding. Results obtained from the phenomenological models depend crucially on the quality of the physical relations in the models and the trustworthiness of input data. In this paper, recent advances in the mathematical modeling of fundamental phenomena in welds are summarized. State-of-the-art mathematical models, advances in computational techniques, emerging high-performance computers, and experimental validation techniques have provided significant insight into the fundamental factors that control the development of the weldment. The current status and scientific issues in heat and fluid flow in welds, heat source-metal interaction, and solidification microstructure are assessed. Future research areas of major importance for understanding the fundamental phenomena in weld behavior are identified.
Optimal model reduction and frequency-weighted extension
NASA Technical Reports Server (NTRS)
Spanos, J. T.; Milman, M. H.; Mingori, D. L.
1990-01-01
In this paper the quadratically optimal model reduction problem for single-input, single-output systems is considered. The reduced order model is determined by minimizing the integral of the magnitude-squared of the transfer function error. It is shown that the numerator coefficients of the optimal approximant satisfy a weighted least squares problem and, on this basis, a two-step iterative algorithm is developed combining a least squares solver with a gradient minimizer. The existence of globally optimal stable solutions to the optimization problem is established, and convergence of the algorithm to stationary values of the cost function is proved. The formulation is extended to handle the frequency-weighted optimal model reduction problem. Three examples demonstrate the optimization algorithm.
A Modified Particle Swarm Optimization Technique for Finding Optimal Designs for Mixture Models.
Wong, Weng Kee; Chen, Ray-Bing; Huang, Chien-Chih; Wang, Weichung
2015-01-01
Particle Swarm Optimization (PSO) is a meta-heuristic algorithm that has been shown to be successful in solving a wide variety of real and complicated optimization problems in engineering and computer science. This paper introduces a projection based PSO technique, named ProjPSO, to efficiently find different types of optimal designs, or nearly optimal designs, for mixture models with and without constraints on the components, and also for related models, like the log contrast models. We also compare the modified PSO performance with Fedorov's algorithm, a popular algorithm used to generate optimal designs, Cocktail algorithm, and the recent algorithm proposed by [1]. PMID:26091237
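A bare-bones global-best PSO can be sketched in a few lines. This is a stripped illustration of the algorithm family on a 2-D quadratic test function, not the paper's ProjPSO method (which additionally projects particles onto the simplex of mixture-design weights); all parameter values are conventional defaults, not the paper's settings.

```python
import random

def pso(f, dim=2, n=20, iters=200, seed=0, w=0.7, c1=1.5, c2=1.5):
    """Minimal global-best particle swarm minimization of f over [-5, 5]^dim."""
    rng = random.Random(seed)
    xs = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vs = [[0.0] * dim for _ in range(n)]
    pbest = [x[:] for x in xs]                      # personal best positions
    pval = [f(x) for x in xs]
    g = min(range(n), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]              # global best
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vs[i][d] = (w * vs[i][d]
                            + c1 * rng.random() * (pbest[i][d] - xs[i][d])
                            + c2 * rng.random() * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            val = f(xs[i])
            if val < pval[i]:
                pval[i], pbest[i] = val, xs[i][:]
                if val < gval:
                    gval, gbest = val, xs[i][:]
    return gbest, gval

sphere = lambda x: sum(t * t for t in x)            # simple convex test function
best_x, best_val = pso(sphere)
```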
Optimization Research of Generation Investment Based on Linear Programming Model
NASA Astrophysics Data System (ADS)
Wu, Juan; Ge, Xueqian
Linear programming is an important branch of operational research and a mathematical method to assist people in carrying out scientific management. GAMS is an advanced simulation and optimization modeling language that combines large, complex mathematical programming models, such as linear programming (LP), nonlinear programming (NLP), and mixed-integer programming (MIP), with system simulation. In this paper, based on a linear programming model, the optimized investment decision-making of generation is simulated and analyzed. Finally, the optimal installed capacity of power plants and the final total cost are obtained, which provides a rational basis for optimized investment decisions.
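A toy version of such a generation LP can be solved without GAMS (all plants and numbers below are invented): minimize the cost of meeting demand subject to each plant's capacity. For this single-period structure the LP optimum coincides with greedy merit-order dispatch, which the sketch exploits.

```python
# Minimize sum(cost_i * x_i) s.t. sum(x_i) = demand, 0 <= x_i <= cap_i.
# With one demand constraint, filling the cheapest plants first is optimal.

plants = [  # (name, marginal cost $/MWh, capacity MW) -- hypothetical fleet
    ("hydro", 5.0, 300.0),
    ("coal", 25.0, 500.0),
    ("gas peaker", 60.0, 400.0),
]
demand = 650.0

dispatch, total_cost, remaining = {}, 0.0, demand
for name, cost, cap in sorted(plants, key=lambda p: p[1]):  # cheapest first
    q = min(cap, remaining)
    dispatch[name] = q
    total_cost += cost * q
    remaining -= q
assert remaining == 0.0, "demand exceeds total capacity"
```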
Nonlinear model predictive control based on collective neurodynamic optimization.
Yan, Zheng; Wang, Jun
2015-04-01
In general, nonlinear model predictive control (NMPC) entails solving a sequential global optimization problem with a nonconvex cost function or constraints. This paper presents a novel collective neurodynamic optimization approach to NMPC without linearization. Utilizing a group of recurrent neural networks (RNNs), the proposed collective neurodynamic optimization approach searches for optimal solutions to global optimization problems by emulating brainstorming. Each RNN is guaranteed to converge to a candidate solution by performing constrained local search. By exchanging information and iteratively improving the starting and restarting points of each RNN using the information of local and global best known solutions in a framework of particle swarm optimization, the group of RNNs is able to reach global optimal solutions to global optimization problems. The essence of the proposed collective neurodynamic optimization approach lies in the integration of capabilities of global search and precise local search. The simulation results of many cases are discussed to substantiate the effectiveness and the characteristics of the proposed approach. PMID:25608315
Optimal Complexity of Nonlinear Rainfall-Runoff Models
NASA Astrophysics Data System (ADS)
Schoups, G.; Vrugt, J.; van de Giesen, N.; Fenicia, F.
2008-12-01
Identification of an appropriate level of model complexity to accurately translate rainfall into runoff remains an unresolved issue. The model has to be complex enough to generate accurate predictions, but not too complex such that its parameters cannot be reliably estimated from the data. Earlier work with linear models (Jakeman and Hornberger, 1993) concluded that a model with 4 to 5 parameters is sufficient. However, more recent results with a nonlinear model (Vrugt et al., 2006) suggest that 10 or more parameters may be identified from daily rainfall-runoff time-series. The goal here is to systematically investigate optimal complexity of nonlinear rainfall-runoff models, yielding accurate models with identifiable parameters. Our methodology consists of four steps: (i) a priori specification of a family of model structures from which to pick an optimal one, (ii) parameter optimization of each model structure to estimate empirical or calibration error, (iii) estimation of parameter uncertainty of each calibrated model structure, and (iv) estimation of prediction error of each calibrated model structure. For the first step we formulate a flexible model structure that allows us to systematically vary the complexity with which physical processes are simulated. The second and third steps are achieved using a recently developed Markov chain Monte Carlo algorithm (DREAM), which minimizes calibration error yielding optimal parameter values and their underlying posterior probability density function. Finally, we compare several methods for estimating prediction error of each model structure, including statistical methods based on information criteria and split-sample calibration-validation. Estimates of parameter uncertainty and prediction error are then used to identify optimal complexity for rainfall-runoff modeling, using data from dry and wet MOPEX catchments as case studies.
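Step (iv), trading fit against complexity, can be illustrated with an information criterion on synthetic data (the data and models below are illustrative, not the MOPEX analysis): compare a 1-parameter model y = a*x against a 2-parameter model y = a*x + b using AIC = n*ln(RSS/n) + 2k. Since the data are generated from the simpler model, AIC should prefer it despite the richer model's smaller residual.

```python
import math

x = [float(i) for i in range(1, 9)]
# Data from y = 2x plus a small alternating perturbation standing in for noise:
y = [2.0 * xi + (0.05 if i % 2 == 0 else -0.05) for i, xi in enumerate(x)]
n = len(x)

# Model A: y = a*x (closed-form least squares, 1 parameter)
a1 = sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)
rss_a = sum((yi - a1 * xi) ** 2 for xi, yi in zip(x, y))

# Model B: y = a*x + b (ordinary least squares, 2 parameters)
mx, my = sum(x) / n, sum(y) / n
sxx = sum((xi - mx) ** 2 for xi in x)
a2 = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
b2 = my - a2 * mx
rss_b = sum((yi - (a2 * xi + b2)) ** 2 for xi, yi in zip(x, y))

aic_a = n * math.log(rss_a / n) + 2 * 1
aic_b = n * math.log(rss_b / n) + 2 * 2
preferred = "A" if aic_a < aic_b else "B"
```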
Life cycle optimization of automobile replacement: model and application.
Kim, Hyung Chul; Keoleian, Gregory A; Grande, Darby E; Bean, James C
2003-12-01
Although recent progress in automotive technology has reduced exhaust emissions per mile for new cars, the continuing use of inefficient, higher-polluting old cars as well as increasing vehicle miles driven are undermining the benefits of this progress. As a way to address the "inefficient old vehicle" contribution to this problem, a novel life cycle optimization (LCO) model is introduced and applied to the automobile replacement policy question. The LCO model determines optimal vehicle lifetimes, accounting for technology improvements of new models while considering deteriorating efficiencies of existing models. Life cycle inventories for different vehicle models that represent materials production, manufacturing, use, maintenance, and end-of-life environmental burdens are required as inputs to the LCO model. As a demonstration, the LCO model was applied to mid-sized passenger car models between 1985 and 2020. An optimization was conducted to minimize cumulative carbon monoxide (CO), non-methane hydrocarbon (NMHC), oxides of nitrogen (NOx), carbon dioxide (CO2), and energy use over the time horizon (1985-2020). For CO, NMHC, and NOx pollutants with 12000 mi of annual mileage, automobile lifetimes ranging from 3 to 6 yr are optimal for the 1980s and early 1990s model years while the optimal lifetimes are expected to be 7-14 yr for model year 2000s and beyond. On the other hand, a lifetime of 18 yr minimizes cumulative energy and CO2 based on driving 12000 miles annually. Optimal lifetimes are inversely correlated to annual vehicle mileage, especially for CO, NMHC, and NOx emissions. On the basis of the optimization results, policies improving durability of emission controls, retiring high-emitting vehicles, and improving fuel economies are discussed. PMID:14700326
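The LCO idea can be sketched as a small dynamic program (all numbers are invented, not the paper's inventories): each year we either keep the current car, whose per-year use-phase emissions grow with age, or replace it, paying a fixed production-phase burden for the new car; the DP minimizes cumulative emissions over the horizon, implicitly choosing the optimal lifetime.

```python
from functools import lru_cache

HORIZON = 20
PRODUCTION = 8.0                       # assumed embodied emissions of a new car

def annual_use(age):
    """Assumed use-phase emissions per year, growing as the car ages."""
    return 3.0 + 0.4 * age

@lru_cache(maxsize=None)
def best_from(year, age):
    """Minimal cumulative emissions from `year` onward, holding a car of `age`."""
    if year == HORIZON:
        return 0.0
    keep = annual_use(age) + best_from(year + 1, age + 1)
    replace = PRODUCTION + annual_use(0) + best_from(year + 1, 1)
    return min(keep, replace)

best = best_from(0, 0)
never_replace = sum(annual_use(a) for a in range(HORIZON))
```

With these made-up numbers the steady-state trade-off favors replacing roughly every sqrt(2 * PRODUCTION / 0.4) ≈ 6 years, echoing the paper's finding that optimal lifetimes depend on how fast efficiency deteriorates relative to the production burden.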
Modeling, Instrumentation, Automation, and Optimization of Water Resource Recovery Facilities.
Sweeney, Michael W; Kabouris, John C
2016-10-01
A review of the literature published in 2015 on topics relating to water resource recovery facilities (WRRF) in the areas of modeling, automation, measurement and sensors and optimization of wastewater treatment (or water resource reclamation) is presented. PMID:27620091
First-Order Frameworks for Managing Models in Engineering Optimization
NASA Technical Reports Server (NTRS)
Alexandrov, Natalia M.; Lewis, Robert Michael
2000-01-01
Approximation/model management optimization (AMMO) is a rigorous methodology for attaining solutions of high-fidelity optimization problems with minimal expense in high-fidelity function and derivative evaluation. First-order AMMO frameworks allow for a wide variety of models and underlying optimization algorithms. Recent demonstrations with aerodynamic optimization achieved three-fold savings in terms of high-fidelity function and derivative evaluation in the case of variable-resolution models and five-fold savings in the case of variable-fidelity physics models. The savings are problem dependent but certain trends are beginning to emerge. We give an overview of the first-order frameworks, current computational results, and an idea of the scope of the first-order framework applicability.
Research on web performance optimization principles and models
NASA Astrophysics Data System (ADS)
Wang, Xin
2013-03-01
The high-speed development of the Internet has made Web performance problems more and more prominent, so Web performance optimization has become inevitable. The first principle of Web performance optimization is understanding: every gain has a cost, and returns diminish. Applied carelessly, optimization can even degrade Web performance, so one should optimize from the highest level down, where the largest gains are obtained. Technical models for improving Web performance include cost sharing, high-speed caching, profiles, parallel processing, and simplified processing. Based on this study, crucial Web performance optimization recommendations are given; they improve the efficiency of Web usage and are of significance for accelerating the effective use of the Internet.
An optimization model of a New Zealand dairy farm.
Doole, Graeme J; Romera, Alvaro J; Adler, Alfredo A
2013-04-01
Optimization models are a key tool for the analysis of emerging policies, prices, and technologies within grazing systems. A detailed, nonlinear optimization model of a New Zealand dairy farming system is described. This framework is notable for its inclusion of pasture residual mass, pasture utilization, and intake regulation as key management decisions. Validation of the model shows that the detailed representation of key biophysical relationships in the model provides an enhanced capacity to provide reasonable predictions outside of calibrated scenarios. Moreover, the flexibility of management plans in the model enhances its stability when faced with significant perturbations. In contrast, the inherent rigidity present in a less-detailed linear programming model is shown to limit its capacity to provide reasonable predictions away from the calibrated baseline. A sample application also demonstrates how the model can be used to identify pragmatic strategies to reduce greenhouse gas emissions. PMID:23415534
Optimizing Tsunami Forecast Model Accuracy
NASA Astrophysics Data System (ADS)
Whitmore, P.; Nyland, D. L.; Huang, P. Y.
2015-12-01
Recent tsunamis provide a means to determine the accuracy that can be expected of real-time tsunami forecast models. Forecast accuracy using two different tsunami forecast models is compared for seven events since 2006, based on both real-time application and optimized, after-the-fact "forecasts". Lessons learned by comparing the forecast accuracy determined during an event to modified applications of the models after the fact provide improved methods for real-time forecasting of future events. Variables such as source definition, data assimilation, and model scaling factors are examined to optimize forecast accuracy. Forecast accuracy is also compared for direct forward modeling based on earthquake source parameters versus accuracy obtained by assimilating sea level data into the forecast model. Results show that assimilating sea level data into the models increases accuracy by approximately 15% for the events examined.
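As a toy illustration of the scaling-factor idea above (not taken from the study), a single amplitude scaling factor for a forecast can be fit to assimilated sea-level observations in closed form by least squares; all data and names below are synthetic:

```python
# Hypothetical illustration: fit one amplitude scaling factor for a forecast
# model by least squares against assimilated sea-level observations.
# All data and names here are synthetic, not from the study.

def fit_scale(forecast, observed):
    """Closed-form least-squares scale s minimizing sum((s*f - o)^2)."""
    num = sum(f * o for f, o in zip(forecast, observed))
    den = sum(f * f for f in forecast)
    return num / den

forecast = [0.10, 0.25, 0.40, 0.30, 0.15]   # raw model amplitudes (m)
observed = [0.12, 0.29, 0.47, 0.34, 0.18]   # gauge-assimilated levels (m)

s = fit_scale(forecast, observed)
scaled = [s * f for f in forecast]          # rescaled forecast time series
```

In practice the real models adjust several variables (source definition, assimilation weights) rather than one global factor, but the same fit-to-observations principle applies.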
An Optimality-Based Fully-Distributed Watershed Ecohydrological Model
NASA Astrophysics Data System (ADS)
Chen, L., Jr.
2015-12-01
Watershed ecohydrological models are essential tools to assess the impact of climate change and human activities on hydrological and ecological processes for watershed management. Existing models can be classified as empirically based, quasi-mechanistic, or mechanistic models. The empirically based and quasi-mechanistic models usually adopt empirical or quasi-empirical equations, which may be incapable of capturing the non-stationary dynamics of target processes. Mechanistic models that are designed to represent process feedbacks may capture vegetation dynamics, but often have more demanding spatial and temporal parameterization requirements to represent vegetation physiological variables. In recent years, optimality-based ecohydrological models have been proposed, which have the advantage of reducing the need for model calibration by assuming critical aspects of system behavior. However, this work to date has been limited to the plot scale, considering only one-dimensional exchange of soil moisture, carbon, and nutrients in vegetation parameterization, without lateral hydrological transport. Conceptual isolation of individual ecosystem patches from upslope and downslope flow paths compromises the ability to represent and test the relationships between hydrology and vegetation in mountainous and hilly terrain. This work presents an optimality-based watershed ecohydrological model, which incorporates the influence of lateral hydrological processes on the hydrological flow-path patterns that emerge from the optimality assumption. The model has been tested in the Walnut Gulch watershed and shows good agreement with observed temporal and spatial patterns of evapotranspiration (ET) and gross primary productivity (GPP). The spatial variability of ET and GPP produced by the model matches the spatial distributions of TWI, SCA, and slope well over the area. Compared with the one-dimensional vegetation optimality model (VOM), we find that the distributed VOM (DisVOM) produces more reasonable spatial …
Jet Pump Design Optimization by Multi-Surrogate Modeling
NASA Astrophysics Data System (ADS)
Mohan, S.; Samad, A.
2015-01-01
A basic approach to reducing design and optimization time via surrogate modeling is to select the right type of surrogate model for a particular problem, one with good accuracy and prediction capability. A multi-surrogate approach can protect a designer from selecting a wrong surrogate that has high uncertainty in the optimal zone of the design space. Numerical analysis and optimization of a jet pump via multi-surrogate modeling are reported in this work. Design variables including area ratio, mixing tube length-to-diameter ratio, and setback ratio were introduced to increase the hydraulic efficiency of the jet pump. Reynolds-averaged Navier-Stokes equations were solved and responses were computed. Among the different surrogate models, the Sheppard-function-based surrogate showed better accuracy in data fitting, while the radial basis neural network produced the greatest efficiency enhancement. The efficiency enhancement was due to the reduction of losses in the flow passage.
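A minimal sketch of the multi-surrogate idea, under the assumption that model selection is done by leave-one-out error: fit several cheap surrogates to the same samples and keep the one that predicts held-out samples best. The inverse-distance (Shepard-type) predictor loosely mirrors the Sheppard-function surrogate mentioned above; the 1-D data and the nearest-neighbour alternative are purely illustrative.

```python
# Sketch of multi-surrogate selection by leave-one-out (LOO) cross-validation.
# The surrogates and sample data are illustrative stand-ins, not the paper's.

def shepard(xs, ys, x, p=2):
    """Inverse-distance-weighted (Shepard-type) prediction at x."""
    num = den = 0.0
    for xi, yi in zip(xs, ys):
        if xi == x:
            return yi
        w = 1.0 / abs(x - xi) ** p
        num += w * yi
        den += w
    return num / den

def nearest(xs, ys, x):
    """Nearest-neighbour prediction at x."""
    return min(zip(xs, ys), key=lambda t: abs(t[0] - x))[1]

def loo_error(model, xs, ys):
    """Mean squared leave-one-out prediction error of a surrogate."""
    err = 0.0
    for i in range(len(xs)):
        xs_i, ys_i = xs[:i] + xs[i+1:], ys[:i] + ys[i+1:]
        err += (model(xs_i, ys_i, xs[i]) - ys[i]) ** 2
    return err / len(xs)

xs = [0.0, 0.25, 0.5, 0.75, 1.0]
ys = [x * (1 - x) for x in xs]          # smooth response to emulate

scores = {m.__name__: loo_error(m, xs, ys) for m in (shepard, nearest)}
best = min(scores, key=scores.get)       # surrogate with lowest LOO error
```

A real workflow would use Kriging and radial-basis networks as candidates and validate them at CFD sample points, but the selection logic is the same.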
Portfolio optimization for index tracking modelling in Malaysia stock market
NASA Astrophysics Data System (ADS)
Siew, Lam Weng; Jaaman, Saiful Hafizah; Ismail, Hamizun
2016-06-01
Index tracking is an investment strategy in portfolio management which aims to construct an optimal portfolio that generates a mean return similar to that of the stock market index without purchasing all of the stocks that make up the index. The objective of this paper is to construct an optimal portfolio using an optimization model which adopts a regression approach to tracking the benchmark stock market index return. In this study, the data consist of weekly prices of stocks in the Malaysia market index, the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI), from January 2010 until December 2013. The results of this study show that the optimal portfolio is able to track the FBMKLCI Index at a minimum tracking error of 1.0027% with a 0.0290% excess mean return over the mean return of the FBMKLCI Index. The significance of this study is the construction of an optimal portfolio, using an optimization model which adopts a regression approach, that tracks the stock market index without purchasing all index components.
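The regression approach can be sketched, under simplifying assumptions (two candidate stocks, full investment, no short-sale or turnover constraints), as a one-parameter least-squares fit of a portfolio weight to the index return series; the weekly returns below are invented:

```python
# Hedged sketch of regression-based index tracking with two candidate stocks:
# choose the weight w on stock 1 (1 - w on stock 2, fully invested) that
# minimizes squared tracking error against the index returns. Synthetic data.

def track_weight(r1, r2, idx):
    """Least-squares w for portfolio w*r1 + (1-w)*r2 tracking idx."""
    a = [x - y for x, y in zip(r1, r2)]       # r1 - r2
    b = [x - y for x, y in zip(idx, r2)]      # idx - r2
    return sum(ai * bi for ai, bi in zip(a, b)) / sum(ai * ai for ai in a)

r1  = [0.010, -0.004, 0.012, 0.003, -0.006]   # stock 1 weekly returns
r2  = [0.002,  0.001, 0.005, 0.000,  0.001]   # stock 2 weekly returns
idx = [0.006, -0.001, 0.008, 0.002, -0.002]   # index weekly returns

w = track_weight(r1, r2, idx)
tracked = [w * x + (1 - w) * y for x, y in zip(r1, r2)]
# Root-mean-square tracking error of the fitted portfolio:
te = (sum((t - i) ** 2 for t, i in zip(tracked, idx)) / len(idx)) ** 0.5
```

With many stocks this becomes a constrained multiple regression (weights summing to one, long-only), typically solved with a quadratic programming routine.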
Review: Optimization methods for groundwater modeling and management
NASA Astrophysics Data System (ADS)
Yeh, William W.-G.
2015-09-01
Optimization methods have been used in groundwater modeling as well as for the planning and management of groundwater systems. This paper reviews and evaluates the various optimization methods that have been used for solving the inverse problem of parameter identification (estimation), experimental design, and groundwater planning and management. Various model selection criteria are discussed, as well as criteria used for model discrimination. The inverse problem of parameter identification concerns the optimal determination of model parameters using water-level observations. In general, the optimal experimental design seeks to find sampling strategies for the purpose of estimating the unknown model parameters. A typical objective of optimal conjunctive-use planning of surface water and groundwater is to minimize the operational costs of meeting water demand. The optimization methods include mathematical programming techniques such as linear programming, quadratic programming, dynamic programming, stochastic programming, nonlinear programming, and the global search algorithms such as genetic algorithms, simulated annealing, and tabu search. Emphasis is placed on groundwater flow problems as opposed to contaminant transport problems. A typical two-dimensional groundwater flow problem is used to explain the basic formulations and algorithms that have been used to solve the formulated optimization problems.
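As a toy instance (not from the review) of the inverse problem of parameter identification, a single transmissivity can be estimated from water-level observations by searching candidate values for the smallest misfit:

```python
# Toy illustration of the groundwater inverse problem: estimate one
# transmissivity T for a 1-D steady confined aquifer from water-level
# observations by brute-force search. Geometry and data are invented.

def heads(T, q, h0, xs):
    """Steady 1-D heads under uniform flux q: h(x) = h0 - q*x/T (Darcy)."""
    return [h0 - q * x / T for x in xs]

xs = [100.0, 200.0, 300.0]      # observation wells (m)
obs = [9.0, 8.0, 7.0]           # observed heads (m), consistent with T = 50
q, h0 = 0.5, 10.0               # flux per unit width, boundary head

def misfit(T):
    """Sum of squared differences between simulated and observed heads."""
    return sum((h - o) ** 2 for h, o in zip(heads(T, q, h0, xs), obs))

candidates = [10 + i for i in range(91)]     # T in [10, 100]
T_best = min(candidates, key=misfit)         # least-squares estimate
```

Real inverse problems involve many distributed parameters and use the gradient-based or global methods (GA, simulated annealing, tabu search) the review surveys, but the objective, minimizing the head misfit, has this same shape.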
Optimization of a new mathematical model for bacterial growth
Technology Transfer Automated Retrieval System (TEKTRAN)
The objective of this research is to optimize a new mathematical equation as a primary model to describe the growth of bacteria under constant temperature conditions. An optimization algorithm was used in combination with a numerical (Runge-Kutta) method to solve the differential form of the new gr...
Groundwater modeling and remedial optimization design using graphical user interfaces
Deschaine, L.M.
1997-05-01
The ability to accurately predict the behavior of chemicals in groundwater systems under natural flow circumstances or remedial screening and design conditions is the cornerstone of the environmental industry. The ability to do this efficiently, and to communicate the information effectively to the client and regulators, is what differentiates effective consultants from ineffective consultants. Recent advances in groundwater modeling graphical user interfaces (GUIs) are doing for numerical modeling what Windows™ did for DOS™. A GUI facilitates both the modeling process and the information exchange. This Test Drive evaluates the performance of two GUIs--Groundwater Vistas and ModIME--on an actual groundwater model calibration and remedial design optimization project. In the early days of numerical modeling, data input consisted of large arrays of numbers that required intensive labor to input and troubleshoot. Model calibration was also manual, as was interpreting the reams of computer output for each of the tens or hundreds of simulations required to calibrate the model and perform optimal groundwater remedial design. During this period, the majority of the modeler's effort (and budget) was spent just getting the model running, as opposed to solving the environmental challenge at hand. GUIs take the majority of the grunt work out of the modeling process, thereby allowing the modeler to focus on designing optimal solutions.
Optimizing Classroom Acoustics Using Computer Model Studies.
ERIC Educational Resources Information Center
Reich, Rebecca; Bradley, John
1998-01-01
Investigates conditions relating to the maximum useful-to-detrimental sound ratios present in classrooms and determining the optimum conditions for speech intelligibility. Reveals that speech intelligibility is more strongly influenced by ambient noise levels and that the optimal location for sound absorbing material is on a classroom's upper…
Contingency contractor optimization. Phase 3, model description and formulation.
Gearhart, Jared Lee; Adair, Kristin Lynn; Jones, Katherine A.; Bandlow, Alisa; Durfee, Justin D.; Jones, Dean A.; Martin, Nathaniel; Detry, Richard Joseph; Nanco, Alan Stewart; Nozick, Linda Karen
2013-10-01
The goal of Phase 3 of the OSD ATL Contingency Contractor Optimization (CCO) project is to create an engineering prototype of a tool for the contingency contractor element of total force planning during Support for Strategic Analysis (SSA). An optimization model was developed to determine the optimal mix of military, Department of Defense (DoD) civilians, and contractors that accomplishes a set of user-defined mission requirements at the lowest possible cost while honoring resource limitations and manpower use rules. An additional feature allows the model to capture the variability of the Total Force Mix when there is uncertainty in mission requirements.
Contingency contractor optimization. Phase 3, model description and formulation.
Gearhart, Jared Lee; Adair, Kristin Lynn; Jones, Katherine A.; Bandlow, Alisa; Detry, Richard Joseph; Durfee, Justin D.; Jones, Dean A.; Martin, Nathaniel; Nanco, Alan Stewart; Nozick, Linda Karen
2013-06-01
The goal of Phase 3 of the OSD ATL Contingency Contractor Optimization (CCO) project is to create an engineering prototype of a tool for the contingency contractor element of total force planning during Support for Strategic Analysis (SSA). An optimization model was developed to determine the optimal mix of military, Department of Defense (DoD) civilians, and contractors that accomplishes a set of user-defined mission requirements at the lowest possible cost while honoring resource limitations and manpower use rules. An additional feature allows the model to capture the variability of the Total Force Mix when there is uncertainty in mission requirements.
Optimal vaccination and treatment of an epidemic network model
NASA Astrophysics Data System (ADS)
Chen, Lijuan; Sun, Jitao
2014-08-01
In this Letter, we first propose an epidemic network model incorporating two controls, vaccination and treatment. For constant controls, global stability of the disease-free equilibrium and the endemic equilibrium of the model is investigated by using a Lyapunov function. For non-constant controls, we use optimal control theory to discuss an optimal strategy that minimizes the total number of infected individuals and the cost associated with vaccination and treatment. Table 1 and Figs. 1-5 are presented to show the global stability and the efficiency of this optimal control.
Low Complexity Models to improve Incomplete Sensitivities for Shape Optimization
NASA Astrophysics Data System (ADS)
Stanciu, Mugurel; Mohammadi, Bijan; Moreau, Stéphane
2003-01-01
The present global platform for simulation and design of multi-model configurations treats shape optimization problems in aerodynamics. Flow solvers are coupled with optimization algorithms based on CAD-free and CAD-connected frameworks. Newton methods are used together with incomplete expressions of gradients. Such incomplete sensitivities are improved using reduced models based on physical assumptions. The validity and application of this approach to real-life problems are presented. The numerical examples concern shape optimization for an airfoil, a business jet, and a car engine cooling axial fan.
Fuzzy multiobjective models for optimal operation of a hydropower system
NASA Astrophysics Data System (ADS)
Teegavarapu, Ramesh S. V.; Ferreira, André R.; Simonovic, Slobodan P.
2013-06-01
Optimal operation models for a hydropower system, based on new fuzzy multiobjective mathematical programming formulations, are developed and evaluated in this study. The models (i) use mixed integer nonlinear programming (MINLP) with binary variables and (ii) integrate a new turbine unit commitment formulation along with water quality constraints used for evaluation of reservoir downstream impairment. The Reardon method, used in the solution of genetic algorithm optimization problems, forms the basis for the development of a new fuzzy multiobjective hydropower system optimization model with Reardon-type fuzzy membership functions. The models are applied to a real-life hydropower reservoir system in Brazil. Genetic algorithms (GAs) are used to (i) solve the optimization formulations to avoid the computational intractability and combinatorial problems associated with binary variables in unit commitment, (ii) efficiently address the Reardon method formulations, and (iii) avoid the local optimal solutions obtained from traditional gradient-based solvers. Decision makers' preferences are incorporated within the fuzzy mathematical programming formulations to obtain compromise operating rules for a multiobjective reservoir operation problem dominated by the conflicting goals of energy production, water quality, and conservation releases. Results provide insight into the compromise operation rules obtained using the new Reardon fuzzy multiobjective optimization framework and confirm its applicability to a variety of multiobjective water resources problems.
Abstract models for the synthesis of optimization algorithms.
NASA Technical Reports Server (NTRS)
Meyer, G. G. L.; Polak, E.
1971-01-01
A systematic approach to the problem of synthesis of optimization algorithms. Abstract models for algorithms are developed which guide the inventive process toward 'conceptual' algorithms, which may consist of operations that are inadmissible in a practical method. Once the abstract models are established, a set of methods is presented for converting 'conceptual' algorithms falling into the class defined by the abstract models into 'implementable' iterative procedures.
Assessment of optimized Markov models in protein fold classification.
Lampros, Christos; Simos, Thomas; Exarchos, Themis P; Exarchos, Konstantinos P; Papaloukas, Costas; Fotiadis, Dimitrios I
2014-08-01
Protein fold classification is a challenging task strongly associated with the determination of proteins' structure. In this work, we tested an optimization strategy on a Markov chain and a recently introduced Hidden Markov Model (HMM) with reduced state-space topology. The proteins with unknown structure were scored against both these models. Then the derived scores were optimized following a local optimization method. The Protein Data Bank (PDB) and the annotation of the Structural Classification of Proteins (SCOP) database were used for the evaluation of the proposed methodology. The results demonstrated that the fold classification accuracy of the optimized HMM was substantially higher compared to that of the Markov chain or the reduced state-space HMM approaches. The proposed methodology achieved an accuracy of 41.4% on fold classification, while Sequence Alignment and Modeling (SAM), which was used for comparison, reached an accuracy of 38%. PMID:25152041
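The scoring step can be sketched as follows, assuming a plain first-order Markov chain per fold and an invented two-letter alphabet; a query sequence is assigned to the fold whose chain gives it the higher log-likelihood:

```python
# Minimal sketch of scoring a sequence against first-order Markov chains,
# as in fold classification. The two-state hydrophobic(H)/polar(P) alphabet
# and all probabilities are invented for illustration.

import math

def log_score(seq, start, trans):
    """Log-likelihood of seq under a Markov chain (start, transition probs)."""
    s = math.log(start[seq[0]])
    for a, b in zip(seq, seq[1:]):
        s += math.log(trans[a][b])
    return s

# Two toy "fold" models: fold_a favours long same-letter runs, fold_b favours
# alternation.
fold_a = ({'H': 0.5, 'P': 0.5}, {'H': {'H': 0.8, 'P': 0.2},
                                 'P': {'H': 0.2, 'P': 0.8}})
fold_b = ({'H': 0.5, 'P': 0.5}, {'H': {'H': 0.3, 'P': 0.7},
                                 'P': {'H': 0.7, 'P': 0.3}})

seq = "HHHHPPPP"     # long runs, so fold_a's sticky chain should win
scores = {name: log_score(seq, *m)
          for name, m in {"fold_a": fold_a, "fold_b": fold_b}.items()}
best = max(scores, key=scores.get)
```

The paper's optimization step then adjusts these raw scores with a local search before classification; real models use the 20-letter amino acid alphabet and, for the HMM variant, hidden states.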
AN OPTIMAL MAINTENANCE MANAGEMENT MODEL FOR AIRPORT CONCRETE PAVEMENT
NASA Astrophysics Data System (ADS)
Shimomura, Taizo; Fujimori, Yuji; Kaito, Kiyoyuki; Obama, Kengo; Kobayashi, Kiyoshi
In this paper, an optimal management model is formulated for the performance-based rehabilitation/maintenance contract for airport concrete pavement, whereby two types of life cycle cost risks, i.e., ground consolidation risk and concrete depreciation risk, are explicitly considered. A non-homogeneous Markov chain model is formulated to represent the deterioration processes of concrete pavement, which are conditional upon the ground consolidation processes. An optimal non-homogeneous Markov decision model with multiple types of risk is presented to design the optimal rehabilitation/maintenance plans. A methodology is also presented to revise the optimal rehabilitation/maintenance plans based upon monitoring data via Bayesian updating rules. The validity of the methodology presented in this paper is examined based upon case studies carried out for the H airport.
A flow path model for regional water distribution optimization
NASA Astrophysics Data System (ADS)
Cheng, Wei-Chen; Hsu, Nien-Sheng; Cheng, Wen-Ming; Yeh, William W.-G.
2009-09-01
We develop a flow path model for the optimization of a regional water distribution system. The model simultaneously describes a water distribution system in two parts: (1) the water delivery relationship between suppliers and receivers and (2) the physical water delivery network. In the first part, the model considers waters from different suppliers as multiple commodities. This helps the model clearly describe water deliveries by identifying the relationship between suppliers and receivers. The physical part characterizes a physical water distribution network by all possible flow paths. The flow path model can be used to optimize not only the suppliers to each receiver but also their associated flow paths for supplying water. This characteristic leads to the optimum solution that contains the optimal scheduling results and detailed information concerning water distribution in the physical system. That is, the water rights owner, water quantity, water location, and associated flow path of each delivery action are represented explicitly in the results rather than merely as an optimized total flow quantity in each arc of a distribution network. We first verify the proposed methodology on a hypothetical water distribution system. Then we apply the methodology to the water distribution system associated with the Tou-Qian River basin in northern Taiwan. The results show that the flow path model can be used to optimize the quantity of each water delivery, the associated flow path, and the water trade and transfer strategy.
Model Assessment and Optimization Using a Flow Time Transformation
NASA Astrophysics Data System (ADS)
Smith, T. J.; Marshall, L. A.; McGlynn, B. L.
2012-12-01
Hydrologic modeling is a particularly complex problem that is commonly confronted with complications due to multiple dominant streamflow states, temporal switching of streamflow generation mechanisms, and dynamic responses to model inputs based on antecedent conditions. These complexities can inhibit the development of model structures and their fitting to observed data. As a result of these complexities and the heterogeneity that can exist within a catchment, optimization techniques are typically employed to obtain reasonable estimates of model parameters. However, when calibrating a model, the cost function itself plays a large role in determining the "optimal" model parameters. In this study, we introduce a transformation that allows for the estimation of model parameters in the "flow time" domain. The flow time transformation dynamically weights streamflows in the time domain, effectively stretching time during high streamflows and compressing time during low streamflows. Given the impact of cost functions on model optimization, such transformations focus on the hydrologic fluxes themselves rather than on equal time weighting common to traditional approaches. The utility of such a transform is of particular note to applications concerned with total hydrologic flux (water resources management, nutrient loading, etc.). The flow time approach can improve the predictive consistency of total fluxes in hydrologic models and provide insights into model performance by highlighting model strengths and deficiencies in an alternate modeling domain. Flow time transformations can also better remove positive skew from the streamflow time series, resulting in improved model fits, satisfaction of the normality assumption of model residuals, and enhanced uncertainty quantification. We illustrate the value of this transformation for two distinct sets of catchment conditions (snow-dominated and subtropical).
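A minimal sketch of the flow-time weighting idea, with synthetic data: each time step's squared error is weighted by that step's share of total flow, so periods of high flux dominate the objective instead of every step counting equally.

```python
# Sketch, under stated assumptions, of a flow-weighted calibration objective:
# weight each step's squared error by its share of total observed flow.
# The streamflow series and the two candidate simulations are synthetic.

def sse_equal_time(obs, sim):
    """Ordinary sum of squared errors: every time step weighted equally."""
    return sum((o - s) ** 2 for o, s in zip(obs, sim))

def sse_flow_time(obs, sim):
    """Flow-weighted SSE: steps carrying more flux get more weight."""
    total = sum(obs)
    return sum((o / total) * (o - s) ** 2 for o, s in zip(obs, sim))

obs  = [1.0, 2.0, 10.0, 3.0, 1.0]    # streamflow with one event peak
simA = [1.0, 2.0,  7.0, 3.0, 1.0]    # misses the peak, nails low flows
simB = [3.0, 4.0, 10.0, 5.0, 3.0]    # nails the peak, misses low flows

eq = (sse_equal_time(obs, simA), sse_equal_time(obs, simB))
ft = (sse_flow_time(obs, simA), sse_flow_time(obs, simB))
# Equal-time weighting prefers simA; flow-weighting prefers simB, the run
# that captures the event carrying most of the total flux.
```

The paper's transformation stretches and compresses the time axis itself rather than reweighting a cost function, but this reweighted objective conveys the same shift in emphasis toward total hydrologic flux.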
Optimization of murine model for Besnoitia caprae.
Oryan, A; Sadoughifar, R; Namavari, M
2016-09-01
It has been shown that mice, particularly BALB/c mice, are susceptible to infection by some apicomplexan parasites. To compare the susceptibility of inbred BALB/c, outbred BALB/c, and C57BL/6 mice to Besnoitia caprae inoculation and to determine the LD50, 30 male inbred BALB/c, 30 outbred BALB/c, and 30 C57BL/6 mice were assigned to 18 groups of 5 mice. Each group was inoculated intraperitoneally with 12.5 × 10^3, 25 × 10^3, 5 × 10^4, 1 × 10^5, or 2 × 10^5 tachyzoites, or a control inoculum of DMEM, respectively. The inbred BALB/c was found to be the most susceptible of the examined mouse strains; the LD50 per inbred BALB/c mouse was calculated as 12.5 × 10^3.6 tachyzoites, while the LD50 for the outbred BALB/c and C57BL/6 was 25 × 10^3.4 and 5 × 10^4 tachyzoites per mouse, respectively. To investigate the impact of different routes of inoculation in the most susceptible mouse strain, another seventy-five male inbred BALB/c mice were inoculated with 2 × 10^5 tachyzoites of B. caprae via various routes: subcutaneous (SC), intramuscular (IM), intraperitoneal (IP), infraorbital, and oral. All the mice in the oral and infraorbital groups survived for 60 days, whereas the IM group showed quicker death and more severe pathologic lesions, followed by the SC and IP groups. Therefore, the BALB/c mouse is a proper laboratory model, and IM inoculation is an ideal method for inducing besnoitiosis and a candidate for treatment, prevention, and vaccine efficacy testing for besnoitiosis. PMID:27605770
Optimal calibration method for water distribution water quality model.
Wu, Zheng Yi
2006-01-01
A water quality model predicts water quality transport and fate throughout a water distribution system. The model is not only a promising alternative for analyzing disinfectant residuals in a cost-effective manner, but also a means of providing enormous engineering insight into the characteristics of water quality variation and constituent reactions. However, a water quality model is a reliable tool only if it predicts how a real system behaves. This paper presents a methodology that enables a modeler to efficiently calibrate a water quality model so that field-observed water quality values match the model-simulated values. The method is formulated to adjust the global water quality parameters as well as the element-dependent water quality reaction rates for pipelines and storage tanks. A genetic algorithm is applied to optimize the model parameters by minimizing the difference between the model-predicted values and the field-observed values. It is seamlessly integrated with a well-developed hydraulic and water quality modeling system. The approach provides a generic tool and methodology for engineers to construct a sound water quality model in an expedient manner. The method is applied to a real water system and demonstrates that a water quality model can be optimized for managing adequate water supply to public communities. PMID:16854809
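A conceptual sketch (not the paper's software) of GA-style calibration for a single parameter: here a bulk decay coefficient k in first-order disinfectant decay, C(t) = C0 * exp(-k*t), is fit to synthetic field samples by an elitist GA with averaging crossover and Gaussian mutation.

```python
# Conceptual GA calibration sketch: fit the bulk decay coefficient k of
# first-order chlorine decay C(t) = C0 * exp(-k*t) to field samples by
# minimizing squared error. Data are synthetic with a known true k = 0.5.

import math, random

C0 = 1.0
times = [0.0, 1.0, 2.0, 4.0]
field = [C0 * math.exp(-0.5 * t) for t in times]     # synthetic observations

def fitness(k):
    """Negative squared misfit, so higher is better."""
    return -sum((C0 * math.exp(-k * t) - c) ** 2 for t, c in zip(times, field))

random.seed(1)
pop = [random.uniform(0.0, 2.0) for _ in range(20)]  # initial candidate k's
for _ in range(40):                                  # generations
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                               # elitist selection
    children = [0.5 * (random.choice(parents) + random.choice(parents))
                + random.gauss(0.0, 0.05)            # crossover + mutation
                for _ in range(10)]
    pop = parents + children

k_best = max(pop, key=fitness)                       # should approach 0.5
```

The real calibration adjusts many element-dependent reaction rates at once against a full hydraulic/quality simulation, but each GA generation follows this same select-recombine-mutate pattern.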
Optimizing experimental design for comparing models of brain function.
Daunizeau, Jean; Preuschoff, Kerstin; Friston, Karl; Stephan, Klaas
2011-11-01
This article presents the first attempt to formalize the optimization of experimental design with the aim of comparing models of brain function based on neuroimaging data. We demonstrate our approach in the context of Dynamic Causal Modelling (DCM), which relates experimental manipulations to observed network dynamics (via hidden neuronal states) and provides an inference framework for selecting among candidate models. Here, we show how to optimize the sensitivity of model selection by choosing among experimental designs according to their respective model selection accuracy. Using Bayesian decision theory, we (i) derive the Laplace-Chernoff risk for model selection, (ii) disclose its relationship with classical design optimality criteria and (iii) assess its sensitivity to basic modelling assumptions. We then evaluate the approach when identifying brain networks using DCM. Monte-Carlo simulations and empirical analyses of fMRI data from a simple bimanual motor task in humans serve to demonstrate the relationship between network identification and the optimal experimental design. For example, we show that deciding whether there is a feedback connection requires shorter epoch durations, relative to asking whether there is experimentally induced change in a connection that is known to be present. Finally, we discuss limitations and potential extensions of this work. PMID:22125485
Optimization of Analytical Potentials for Coarse-Grained Biopolymer Models.
Mereghetti, Paolo; Maccari, Giuseppe; Spampinato, Giulia Lia Beatrice; Tozzini, Valentina
2016-08-25
The increasing trend in the recent literature on coarse-grained (CG) models testifies to their impact in the study of complex systems. However, the CG model landscape is variegated: even at a given resolution level, the force fields are very heterogeneous and are optimized with very different parametrization procedures. Along the road toward standardization of CG models for biopolymers, here we describe a strategy to aid the building and optimization of statistics-based analytical force fields, and its implementation in the software package AsParaGS (Assisted Parameterization platform for coarse Grained modelS). Our method is based on the use of analytical potentials, optimized by targeting the statistical distributions of internal variables by means of a combination of different algorithms (i.e., relative-entropy-driven stochastic exploration of the parameter space and iterative Boltzmann inversion). This allows designing a custom model that endows the force field terms with a physically sound meaning. Furthermore, the level of transferability and accuracy can be tuned through the choice of the statistical data set composition. The method, illustrated by means of applications to helical polypeptides, also involves the analysis of two- and three-variable distributions, and allows handling issues related to correlations among force field terms. AsParaGS is interfaced with general-purpose molecular dynamics codes and currently implements the "minimalist" subclass of CG models (i.e., one bead per amino acid, Cα based). Extensions to nucleic acids and different levels of coarse graining are in progress. PMID:27150459
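One iterative Boltzmann inversion (IBI) step, one of the two algorithms named above, can be written down directly. The update V_{n+1}(r) = V_n(r) + kT * ln(P_n(r) / P_target(r)) raises the potential where the model oversamples and lowers it where it undersamples; the tabulated distributions below are invented for illustration.

```python
# Sketch of a single iterative Boltzmann inversion (IBI) update for a
# tabulated bond potential. Bins, distributions, and units are synthetic.

import math

kT = 1.0                               # energy unit
bins = [0.9, 1.0, 1.1]                 # bond-length bins (nm)
p_target = [0.2, 0.6, 0.2]             # target (reference) distribution
p_model  = [0.3, 0.4, 0.3]             # distribution the current V produces
V = [0.0, 0.0, 0.0]                    # current tabulated potential (kT units)

# V_{n+1} = V_n + kT * ln(P_model / P_target), applied bin by bin:
V_new = [v + kT * math.log(pm / pt)
         for v, pm, pt in zip(V, p_model, p_target)]
# Where the model oversamples (p_model > p_target) the potential rises,
# discouraging those configurations; where it undersamples, it drops.
```

In a full workflow each update is followed by a new simulation to regenerate p_model, iterating until the distributions converge; AsParaGS combines this with relative-entropy-driven parameter search rather than using raw tabulated updates alone.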
Optimization and analysis of a CFJ-airfoil using adaptive meta-model based design optimization
NASA Astrophysics Data System (ADS)
Whitlock, Michael D.
Although strong potential for the Co-Flow Jet (CFJ) flow separation control system has been demonstrated in the existing literature, little effort has been applied toward optimizing the design for a given application, and the high-dimensional design space makes any optimization computationally intensive. This work presents the optimization of a CFJ airfoil, as applied to a low Reynolds number regime, using meta-model based design optimization (MBDO). The approach consists of computational fluid dynamics (CFD) analysis coupled with a surrogate model derived using Kriging. A genetic algorithm (GA) is then used to perform optimization on the efficient surrogate model. MBDO was shown to be an effective and efficient approach to solving the CFJ design problem. The final solution set was found to decrease drag by 100% while increasing lift by 42%. When validated, the final solution was found to be within one standard deviation of the CFD model it was representing.
Block-oriented modeling of superstructure optimization problems
Friedman, Z; Ingalls, J; Siirola, JD; Watson, JP
2013-10-15
We present a novel software framework for modeling large-scale engineered systems as mathematical optimization problems. A key motivating feature in such systems is their hierarchical, highly structured topology. Existing mathematical optimization modeling environments do not facilitate the natural expression and manipulation of hierarchically structured systems. Rather, the modeler is forced to "flatten" the system description, hiding structure that may be exploited by solvers, and obfuscating the system that the modeling environment is attempting to represent. To correct this deficiency, we propose a Python-based "block-oriented" modeling approach for representing the discrete components within the system. Our approach is an extension of the Pyomo library for specifying mathematical optimization problems. Through the use of a modeling components library, the block-oriented approach facilitates a clean separation of system superstructure from the details of individual components. This approach also naturally lends itself to expressing design and operational decisions as disjunctive expressions over the component blocks. By expressing a mathematical optimization problem in a block-oriented manner, inherent structure (e.g., multiple scenarios) is preserved for potential exploitation by solvers. In particular, we show that block-structured mathematical optimization problems can be straightforwardly manipulated by decomposition-based multi-scenario algorithmic strategies, specifically in the context of the PySP stochastic programming library. We illustrate our block-oriented modeling approach using a case study drawn from the electricity grid operations domain: unit commitment with transmission switching and N - 1 reliability constraints. Finally, we demonstrate that the overhead associated with block-oriented modeling only minimally increases model instantiation times, and need not adversely impact solver behavior.
A dynamic, optimal disease control model for foot-and-mouth disease: I. Model description.
Kobayashi, Mimako; Carpenter, Tim E; Dickey, Bradley F; Howitt, Richard E
2007-05-16
A dynamic optimization model was developed and used to evaluate alternative foot-and-mouth disease (FMD) control strategies. The model chose daily control strategies of depopulation and vaccination that minimized total regional cost for the entire epidemic duration, given disease dynamics and resource constraints. The disease dynamics and the impacts of control strategies on these dynamics were characterized in a set of difference equations; effects of movement restrictions on the disease dynamics were also considered. The model was applied to a three-county region in the Central Valley of California; the epidemic relationships were parameterized and validated using the information obtained from an FMD simulation model developed for the same region. The optimization model enables more efficient searches for desirable control strategies by considering all strategies simultaneously, providing the simulation model with optimization results to direct it in generating detailed predictions of potential FMD outbreaks. PMID:17280729
Optimal control of information epidemics modeled as Maki Thompson rumors
NASA Astrophysics Data System (ADS)
Kandhway, Kundan; Kuri, Joy
2014-12-01
We model the spread of information in a homogeneously mixed population using the Maki-Thompson rumor model. We formulate an optimal control problem, from the perspective of a single campaigner, to maximize the spread of information when the campaign budget is fixed. Control signals, such as advertising in the mass media, attempt to convert ignorants and stiflers into spreaders. We show the existence of a solution to the optimal control problem when the campaigning incurs non-linear costs under the isoperimetric budget constraint. The solution employs Pontryagin's Minimum Principle and a modified version of the forward-backward sweep technique for numerical computation to accommodate the isoperimetric budget constraint. The techniques developed in this paper are general and can be applied to similar optimal control problems in other areas. We have allowed the spreading rate of the information epidemic to vary over the campaign duration to model practical situations in which the interest level of the population in the subject of the campaign changes with time. The shape of the optimal control signal is studied for different model parameters and spreading rate profiles. We have also studied the variation of the optimal campaigning costs with respect to various model parameters. Results indicate that, for some model parameters, significant improvements can be achieved by the optimal strategy compared to the static control strategy. The static strategy respects the same budget constraint as the optimal strategy and has a constant value throughout the campaign horizon. This work finds application in election and social awareness campaigns, product advertising, movie promotion and crowdfunding campaigns.
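A forward-backward sweep of the kind the abstract mentions can be sketched on a much-simplified stand-in problem: logistic information spread with an additive recruitment control and a quadratic campaigning cost. The dynamics, parameters, terminal condition, and iteration counts below are all illustrative, not the paper's Maki-Thompson system or its isoperimetric constraint.

```python
# Forward-backward sweep sketch for a toy optimal-control problem:
#   maximize x(T) - c * integral(u^2 dt)
#   subject to x' = b*x*(1-x) + u*(1-x),  0 <= u <= umax
# A simplified stand-in for the Maki-Thompson campaign model.

N, T = 200, 10.0
b, c, umax = 0.3, 0.5, 1.0
dt = T / N

def sweep(iters=50):
    u = [0.0] * (N + 1)
    for _ in range(iters):
        # Forward pass: state x under the current control.
        x = [0.01] + [0.0] * N
        for k in range(N):
            x[k + 1] = x[k] + dt * (b * x[k] * (1 - x[k]) + u[k] * (1 - x[k]))
        # Backward pass: adjoint lam, terminal condition lam(T) = 1.
        lam = [0.0] * N + [1.0]
        for k in range(N, 0, -1):
            lam[k - 1] = lam[k] + dt * lam[k] * (b * (1 - 2 * x[k]) - u[k])
        # Optimality: dH/du = lam*(1-x) - 2*c*u = 0, clipped to [0, umax],
        # with a damped update for numerical stability.
        for k in range(N + 1):
            u_star = min(umax, max(0.0, lam[k] * (1 - x[k]) / (2 * c)))
            u[k] = 0.5 * (u[k] + u_star)
    return x[-1], u

x_final, u_opt = sweep()
```

The damped averaging of the control between sweeps is a standard trick to keep the forward-backward iteration from oscillating.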
Hydro- abrasive jet machining modeling for computer control and optimization
NASA Astrophysics Data System (ADS)
Groppetti, R.; Jovane, F.
1993-06-01
Use of hydro-abrasive jet machining (HAJM) for machining a wide variety of materials (metals, polymers, ceramics, fiber-reinforced composites, metal-matrix composites, and bonded or hybridized materials), primarily for two- and three-dimensional cutting and also for drilling, turning, milling, and deburring, has been reported. However, the potential of this innovative process has not been explored fully. This article discusses process control, integration, and optimization of HAJM to establish a platform for the implementation of real-time adaptive control constraint (ACC), adaptive control optimization (ACO), and CAD/CAM integration. It presents the approach followed and the main results obtained during the development, implementation, automation, and integration of a HAJM cell and its computerized controller. The process variables and models reported in the literature were critically analyzed in order to identify the relevant process variables, to define a process model suitable for HAJM real-time control and optimization, to correlate process variables and parameters with machining results, and to avoid expensive and time-consuming experiments in determining optimal machining conditions; on this basis, a process prediction and optimization model was identified and implemented. Then, the configuration of the HAJM cell and the architecture and multiprogramming operation of the controller in terms of monitoring, control, process result prediction, and process condition optimization were analyzed. This prediction and optimization model for selecting optimal machining conditions using multi-objective programming was analyzed. Based on the definition of an economy function and a productivity function, with suitable constraints on required machining quality, required kerfing depth, and available resources, the model was applied to test cases based on experimental results.
Optimization models for flight test scheduling
NASA Astrophysics Data System (ADS)
Holian, Derreck
As threats around the world increase with nations developing new generations of warfare technology, the United States is keen on maintaining its position on top of the defense technology curve. This in turn means that the U.S. military/government must research, develop, procure, and sustain new systems in the defense sector to safeguard this position. Currently, the Lockheed Martin F-35 Joint Strike Fighter (JSF) Lightning II is being developed, tested, and deployed to the U.S. military at Low Rate Initial Production (LRIP). The simultaneous act of testing and deployment is due to the contracted procurement process intended to provide a rapid Initial Operating Capability (IOC) release of the 5th Generation fighter. For this reason, many factors go into the determination of what is to be tested, in what order, and at which time due to the military requirements. A certain system or envelope of the aircraft must be assessed prior to releasing that capability into service. The objective of this praxis is to aid in the determination of what testing can be achieved on an aircraft at a point in time. Furthermore, it will define the optimum allocation of test points to aircraft and determine a prioritization of restrictions to be mitigated so that the test program can be best supported. The system described in this praxis has been deployed across the F-35 test program and testing sites. It has discovered hundreds of available test points for an aircraft to fly when it was thought none existed, thus preventing an aircraft from being grounded. Additionally, it has saved hundreds of labor hours and greatly reduced the occurrence of test point reflight. Due to the proprietary nature of the JSF program, details regarding the actual test points, test plans, and all other program-specific information have not been presented. Generic, representative data is used for example and proof-of-concept purposes. Apart from the data correlation algorithms, the optimization associated
Computational modeling and optimization of proton exchange membrane fuel cells
NASA Astrophysics Data System (ADS)
Secanell Gallart, Marc
Improvements in performance, reliability, and durability, as well as reductions in production costs, remain critical prerequisites for the commercialization of proton exchange membrane fuel cells. In this thesis, a computational framework for fuel cell analysis and optimization is presented as an innovative alternative to the time-consuming trial-and-error process currently used for fuel cell design. The framework is based on a two-dimensional through-the-channel isothermal, isobaric and single phase membrane electrode assembly (MEA) model. The model input parameters are the manufacturing parameters used to build the MEA: platinum loading, platinum to carbon ratio, electrolyte content and gas diffusion layer porosity. The governing equations of the fuel cell model are solved using Newton's algorithm and an adaptive finite element method in order to achieve quadratic convergence and a mesh-independent solution, respectively. The analysis module is used to solve two optimization problems: (i) maximize performance; and (ii) maximize performance while minimizing the production cost of the MEA. To solve these problems, a gradient-based optimization algorithm is used in conjunction with analytical sensitivities. The presented computational framework is the first attempt in the literature to combine highly efficient analysis and optimization methods in order to tackle large-scale problems. The framework is capable of solving a complete MEA optimization problem with state-of-the-art electrode models in approximately 30 minutes. The optimization results show that it is possible to achieve a Pt-specific power density of 0.422 gPt/kW for the optimized MEAs. This value is extremely close to the target of 0.4 gPt/kW for large-scale implementation and demonstrates the potential of using numerical optimization for fuel cell design.
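The core loop of gradient-based design with an analytical sensitivity can be sketched as follows. The one-variable performance model and every coefficient are invented stand-ins, not the thesis's MEA model; the point is only that an analytical derivative replaces extra model evaluations.

```python
# Gradient ascent driven by an analytical sensitivity (derivative),
# as a stand-in for sensitivity-based MEA design. The performance
# model below is invented: gains from platinum loading saturate,
# while cost grows linearly.

def performance(pt_loading):
    return pt_loading / (0.1 + pt_loading) - 0.8 * pt_loading

def sensitivity(pt_loading):
    # Analytical derivative of the toy performance model; avoids the
    # extra model calls a finite-difference gradient would require.
    return 0.1 / (0.1 + pt_loading) ** 2 - 0.8

x = 0.05                        # initial design (arbitrary units)
for _ in range(200):
    x += 0.05 * sensitivity(x)  # fixed-step gradient ascent
```

With an expensive simulation in place of `performance`, each avoided finite-difference evaluation is a full model solve saved, which is why analytical sensitivities matter for run time.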
Modelling complex terrain effects for wind farm layout optimization
NASA Astrophysics Data System (ADS)
Schmidt, Jonas; Stoevesandt, Bernhard
2014-06-01
The flow over four analytical hill geometries was calculated by CFD RANS simulations. For each hill, the results were converted into numerical models that transform arbitrary undisturbed inflow profiles by rescaling the effect of the obstacle. The predictions of such models are compared to full CFD results, first for atmospheric boundary layer flow, and then for a single turbine wake in the presence of an isolated hill. The implementation of the models into the wind farm modelling software flapFOAM is reported, advancing their inclusion into a fully modular wind farm layout optimization routine.
The effect of model uncertainty on some optimal routing problems
NASA Technical Reports Server (NTRS)
Mohanty, Bibhu; Cassandras, Christos G.
1991-01-01
The effect of model uncertainties on optimal routing in a system of parallel queues is examined. The uncertainty arises in modeling the service time distribution for the customers (jobs, packets) to be served. For a Poisson arrival process and Bernoulli routing, the optimal mean system delay generally depends on the variance of this distribution. However, as the input traffic load approaches the system capacity the optimal routing assignment and corresponding mean system delay are shown to converge to a variance-invariant point. The implications of these results are examined in the context of gradient-based routing algorithms. An example of a model-independent algorithm using online gradient estimation is also included.
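The variance dependence can be illustrated with the Pollaczek-Khinchine mean-wait formula for two parallel M/G/1 queues fed by Bernoulli routing. The arrival rate and service times below are made up; the squared coefficient of variation `cv2` stands in for the uncertain service-time distribution.

```python
# Bernoulli routing of a Poisson stream (rate LAM) to two M/G/1 servers.
# The Pollaczek-Khinchine mean wait depends on service-time variance
# through E[S^2]; rates here are invented for illustration.

LAM = 1.0
ES1, ES2 = 0.4, 0.6              # mean service times at the two queues

def mean_delay(p, cv2):
    """System mean delay when fraction p of arrivals goes to queue 1.

    cv2 is the squared coefficient of variation of service times
    (0 = deterministic, 1 = exponential)."""
    total = 0.0
    for frac, es in ((p, ES1), (1.0 - p, ES2)):
        lam_i = LAM * frac
        rho = lam_i * es
        if rho >= 1.0:
            return float("inf")              # unstable queue
        es2 = (1.0 + cv2) * es * es          # E[S^2] from the cv^2
        wait = lam_i * es2 / (2.0 * (1.0 - rho))
        total += frac * (wait + es)
    return total

def best_split(cv2):
    grid = [i / 1000 for i in range(1001)]
    return min(grid, key=lambda p: mean_delay(p, cv2))

p_det, p_exp = best_split(0.0), best_split(1.0)  # optimal splits differ
```

At this moderate load the optimal split shifts with the variance; the paper's result is that as load approaches capacity, the optimal assignment converges to a variance-invariant point.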
A dynamic optimization model for solid waste recycling.
Anghinolfi, Davide; Paolucci, Massimo; Robba, Michela; Taramasso, Angela Celeste
2013-02-01
Recycling is an important part of waste management, which involves environmental, technological, economic, legislative, and social issues. Unlike many works in the literature, this paper focuses on recycling management and on the dynamic optimization of materials collection. The developed dynamic decision model is characterized by state variables, corresponding to the quantity of waste in each bin each day, and control variables determining the quantity of material collected in the area each day and the routes for collecting vehicles. The objective function minimizes the sum of costs minus benefits. The developed decision model is integrated in a GIS-based Decision Support System (DSS). A case study related to the Cogoleto municipality is presented to show the effectiveness of the proposed model. The optimal results show that the net benefits of the optimized collection are about 2.5 times greater than those of the estimated current policy. PMID:23158873
Optimization of Operations Resources via Discrete Event Simulation Modeling
NASA Technical Reports Server (NTRS)
Joshi, B.; Morris, D.; White, N.; Unal, R.
1996-01-01
The resource levels required for operation and support of reusable launch vehicles are typically defined through discrete event simulation modeling. Minimizing these resources constitutes an optimization problem involving discrete variables and simulation. Conventional approaches to solving such optimization problems with integer-valued decision variables are pattern search and statistical methods. However, in a simulation environment that is characterized by search spaces of unknown topology and stochastic measures, these optimization approaches often prove inadequate. In this paper, we have explored the applicability of genetic algorithms to the simulation domain. Genetic algorithms provide a robust search strategy that does not require continuity and differentiability of the problem domain. The genetic algorithm successfully minimized the operation and support activities for a space vehicle, through a discrete event simulation model. The practical issues associated with simulation optimization, such as stochastic variables and constraints, were also taken into consideration.
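A genetic algorithm over integer resource levels can be sketched as below. The cost function is a made-up stand-in for a discrete event simulation objective (and is deterministic for simplicity), and the population sizes, rates, and `TARGET` vector are all illustrative.

```python
import random

# Toy genetic algorithm over integer "resource level" vectors, a
# stand-in for optimizing a discrete event simulation. The cost
# function and all GA settings are invented for illustration.

random.seed(0)
TARGET = [3, 1, 4, 1, 5]                      # unknown "ideal" levels

def cost(x):
    return sum((a - b) ** 2 for a, b in zip(x, TARGET))

def evolve(pop_size=30, gens=60, lo=0, hi=9):
    pop = [[random.randint(lo, hi) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=cost)
        next_pop = pop[:2]                    # elitism: keep the best two
        while len(next_pop) < pop_size:
            p1, p2 = random.sample(pop[:10], 2)       # fit parents
            cut = random.randrange(1, len(TARGET))    # one-point crossover
            child = p1[:cut] + p2[cut:]
            if random.random() < 0.3:                 # mutation
                child[random.randrange(len(child))] = random.randint(lo, hi)
            next_pop.append(child)
        pop = next_pop
    return min(pop, key=cost)

best = evolve()
```

Note that no gradient or continuity of `cost` is used anywhere, which is exactly why the approach suits simulation-defined objectives; for a stochastic simulation one would average several replications per fitness evaluation.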
Model-Based Optimization for Flapping Foil Actuation
NASA Astrophysics Data System (ADS)
Izraelevitz, Jacob; Triantafyllou, Michael
2014-11-01
Flapping foil actuation in nature, such as wings and flippers, often consists of highly complex joint kinematics which present an impossibly large parameter space for designing bioinspired mechanisms. Designers therefore often build a simplified model to limit the parameter space so that an optimum motion trajectory can be found experimentally, or attempt to replicate exactly the joint geometry and kinematics of a suitable organism whose behavior is assumed to be optimal. We present a compromise: using a simple local fluids model to guide the design of optimized trajectories through a succession of experimental trials, even when the parameter space is too large to search effectively. As an example, we illustrate an optimization routine capable of designing asymmetric flapping trajectories for a large aspect-ratio pitching and heaving foil, with the added degree of freedom of allowing the foil to move parallel to the flow. We then present PIV flow visualizations of the optimized trajectories.
Analysis of Sting Balance Calibration Data Using Optimized Regression Models
NASA Technical Reports Server (NTRS)
Ulbrich, N.; Bader, Jon B.
2010-01-01
Calibration data of a wind tunnel sting balance was processed using a candidate math model search algorithm that recommends an optimized regression model for the data analysis. During the calibration, the normal force and the moment at the balance moment center were selected as independent calibration variables. The sting balance itself had two moment gages. Therefore, after analyzing the connection between calibration loads and gage outputs, it was decided to choose the difference and the sum of the gage outputs as the two responses that best describe the behavior of the balance. The math model search algorithm was applied to these two responses. An optimized regression model was obtained for each response. Classical strain gage balance load transformations and the equations of the deflection of a cantilever beam under load are used to show that the search algorithm's two optimized regression models are supported by a theoretical analysis of the relationship between the applied calibration loads and the measured gage outputs. The analysis of the sting balance calibration data set is a rare example of a situation in which the terms of a balance's regression model can be derived directly from first principles of physics. In addition, it is interesting to note that the search algorithm recommended the correct regression model term combinations using only a set of statistical quality metrics that were applied to the experimental data during the algorithm's term selection process.
Strategies for Model Reduction: Comparing Different Optimal Bases.
NASA Astrophysics Data System (ADS)
Crommelin, D. T.; Majda, A. J.
2004-09-01
Several different ways of constructing optimal bases for efficient dynamical modeling are compared: empirical orthogonal functions (EOFs), optimal persistence patterns (OPPs), and principal interaction patterns (PIPs). Past studies on fluid-dynamical topics have pointed out that EOF-based models can have difficulties reproducing behavior dominated by irregular transitions between different dynamical states. This issue is addressed in a geophysical context, by assessing the ability of these strategies for efficient dynamical modeling to reproduce the chaotic regime transitions in a simple atmosphere model. The atmosphere model is the well-known Charney-DeVore model, a six-dimensional truncation of the equations describing barotropic flow over topography in a β-plane channel geometry. This model is able to generate regime transitions for well-chosen parameter settings. The models based on PIPs are found to be superior to the EOF- and OPP-based models, in spite of some undesirable sensitivities inherent to the PIP method.
Optimal control of vaccine distribution in a rabies metapopulation model.
Asano, Erika; Gross, Louis J; Lenhart, Suzanne; Real, Leslie A
2008-04-01
We consider an SIR metapopulation model for the spread of rabies in raccoons. This system of ordinary differential equations considers subpopulations connected by movement. Vaccine for raccoons is distributed through food baits. We apply optimal control theory to find the best timing for distribution of vaccine in each of the linked subpopulations across the landscape. This strategy is chosen to limit the disease optimally by making the number of infections as small as possible while accounting for the cost of vaccination. PMID:18613731
The optimal inventory policy for EPQ model under trade credit
NASA Astrophysics Data System (ADS)
Chung, Kun-Jen
2010-09-01
Huang and Huang [(2008), 'Optimal Inventory Replenishment Policy for the EPQ Model Under Trade Credit without Derivatives', International Journal of Systems Science, 39, 539-546] use the algebraic method to determine the optimal inventory replenishment policy for the retailer in the extended model under trade credit. However, the algebraic method has limits of application, such that the validity of the proofs of Theorems 1-4 in Huang and Huang (2008) is questionable. The main purpose of this article is not only to indicate these shortcomings but also to present accurate proofs for Huang and Huang (2008).
Multi-objective parameter optimization of common land model using adaptive surrogate modeling
NASA Astrophysics Data System (ADS)
Gong, W.; Duan, Q.; Li, J.; Wang, C.; Di, Z.; Dai, Y.; Ye, A.; Miao, C.
2015-05-01
Parameter specification usually has a significant influence on the performance of land surface models (LSMs). However, estimating the parameters properly is a challenging task for the following reasons: (1) LSMs usually have too many adjustable parameters (20 to 100 or even more), leading to the curse of dimensionality in the parameter input space; (2) LSMs usually have many output variables involving water/energy/carbon cycles, so that calibrating LSMs is actually a multi-objective optimization problem; (3) regional LSMs are expensive to run, while conventional multi-objective optimization methods need a large number of model runs (typically ~10^5-10^6), making parameter optimization computationally prohibitive. An uncertainty quantification framework was developed to meet these challenges, which includes the following steps: (1) using parameter screening to reduce the number of adjustable parameters; (2) using surrogate models to emulate the responses of dynamic models to the variation of adjustable parameters; (3) using an adaptive strategy to improve the efficiency of surrogate modeling-based optimization; and (4) using a weighting function to transfer the multi-objective optimization into a single-objective optimization. In this study, we demonstrate the uncertainty quantification framework on a single-column application of an LSM - the Common Land Model (CoLM) - and evaluate the effectiveness and efficiency of the proposed framework. The results indicate that this framework can efficiently achieve optimal parameters. Moreover, this result implies the possibility of calibrating other large complex dynamic models, such as regional-scale LSMs, atmospheric models and climate models.
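The weighting-function step, collapsing several calibration errors into one scalar objective, can be sketched as follows. The output names, weights, and the stand-in error model are all hypothetical; a real framework would also normalize each term before weighting.

```python
# Weighted-sum scalarization: combine per-output calibration errors
# (e.g., water/energy/carbon variables) into one objective. Weights
# and the two-parameter "model" are invented for illustration.

WEIGHTS = {"runoff": 0.5, "latent_heat": 0.3, "carbon_flux": 0.2}

def output_errors(params):
    """Stand-in for running the LSM and scoring each output variable."""
    a, b = params
    return {"runoff": (a - 0.3) ** 2,
            "latent_heat": (b - 0.7) ** 2,
            "carbon_flux": (a + b - 1.0) ** 2}

def single_objective(params):
    errs = output_errors(params)
    return sum(WEIGHTS[k] * errs[k] for k in WEIGHTS)

# Exhaustive grid search stands in for the surrogate-assisted optimizer.
grid = [(i / 50, j / 50) for i in range(51) for j in range(51)]
best = min(grid, key=single_objective)
```

In the actual framework the expensive `output_errors` call is what the adaptive surrogate emulates, so that the scalarized objective can be minimized with only a modest number of true model runs.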
Optimization methods and silicon solar cell numerical models
NASA Technical Reports Server (NTRS)
Girardini, K.; Jacobsen, S. E.
1986-01-01
An optimization algorithm for use with numerical silicon solar cell models was developed. By coupling an optimization algorithm with a solar cell model, it is possible to simultaneously vary design variables such as impurity concentrations, front junction depth, back junction depth, and cell thickness to maximize the predicted cell efficiency. An optimization algorithm was developed and interfaced with the Solar Cell Analysis Program in 1 Dimension (SCAP1D). SCAP1D uses finite difference methods to solve the differential equations which, along with several relations from the physics of semiconductors, describe mathematically the performance of a solar cell. A major obstacle is that the numerical methods used in SCAP1D require a significant amount of computer time, and during an optimization the model is called iteratively until the design variables converge to the values associated with the maximum efficiency. This problem was alleviated by designing an optimization code specifically for use with numerically intensive simulations, to reduce the number of times the efficiency has to be calculated to achieve convergence to the optimal solution.
Shell model of optimal passive-scalar mixing
NASA Astrophysics Data System (ADS)
Miles, Christopher; Doering, Charles
2015-11-01
Optimal mixing is significant to process engineering within industries such as food, chemical, pharmaceutical, and petrochemical. An important question in this field is "How should one stir to create a homogeneous mixture while being energetically efficient?" To answer this question, we consider an initially unmixed scalar field representing some concentration within a fluid on a periodic domain. This passive-scalar field is advected by the velocity field, our control variable, which is constrained by a physical quantity such as energy or enstrophy. We consider two objectives: local-in-time (LIT) optimization (what will maximize the mixing rate now?) and global-in-time (GIT) optimization (what will maximize mixing at the end time?). Throughout this work we use the H^-1 mix-norm to measure mixing. To gain a better understanding, we provide a simplified mixing model by using a shell model of passive-scalar advection. LIT optimization in this shell model gives perfect mixing in finite time for the energy-constrained case and exponential decay to the perfectly mixed state for the enstrophy-constrained case. Although we only enforce that the time-averaged energy (or enstrophy) equals a chosen value in GIT optimization, interestingly, the optimal control keeps this value constant over time.
Data visualization optimization via computational modeling of perception.
Pineo, Daniel; Ware, Colin
2012-02-01
We present a method for automatically evaluating and optimizing visualizations using a computational model of human vision. The method relies on a neural network simulation of early perceptual processing in the retina and primary visual cortex. The neural activity resulting from viewing flow visualizations is simulated and evaluated to produce a metric of visualization effectiveness. Visualization optimization is achieved by applying this effectiveness metric as the utility function in a hill-climbing algorithm. We apply this method to the evaluation and optimization of 2D flow visualizations, using two visualization parameterizations: streaklet-based and pixel-based. An emergent property of the streaklet-based optimization is head-to-tail streaklet alignment. It had previously been hypothesized that the effectiveness of head-to-tail alignment results from the perceptual processing of the visual system, but this theory had not been computationally modeled. A second optimization using a pixel-based parameterization resulted in a LIC-like result. The implications in terms of the selection of primitives are discussed. We argue that computational models can be used for optimizing complex visualizations. In addition, we argue that they can provide a means of computationally evaluating perceptual theories of visualization, and a method for quality control of display methods. PMID:21383402
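A hill-climbing skeleton of the kind described can be sketched in a few lines. The scalar `effectiveness` function below is a trivial stand-in; in the paper it would be the output of the neural simulation of vision, and the step size and iteration count are arbitrary.

```python
import random

# Hill-climbing skeleton: a candidate parameter vector is perturbed at
# random and kept only if the "effectiveness metric" improves. The
# quadratic metric here is a stand-in for the perceptual model's score.

random.seed(1)

def effectiveness(params):
    x, y = params
    return -(x - 2.0) ** 2 - (y + 1.0) ** 2    # peak at (2, -1), value 0

def hill_climb(start, step=0.5, iters=500):
    best, best_val = list(start), effectiveness(start)
    for _ in range(iters):
        cand = [p + random.uniform(-step, step) for p in best]
        val = effectiveness(cand)
        if val > best_val:                      # accept only improvements
            best, best_val = cand, val
    return best, best_val

opt, val = hill_climb([0.0, 0.0])
```

Because each step needs only a scalar score, the same loop works unchanged whether the utility comes from a closed-form metric or from an expensive perceptual simulation.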
Plasma jet accelerator optimization with supple membrane model
NASA Astrophysics Data System (ADS)
Galkin, S. A.; Bogatu, I. N.; Kim, J. S.
2006-10-01
High-density (>= 3x10^17 cm^-3) and high Mach number (M > 10) plasma jets have important applications such as plasma rotation, refueling, and disruption mitigation in tokamaks. The most deleterious blow-by instability occurs in coaxial plasma accelerators; hence electrode shape optimization is required to accelerate plasmas to ~200 km/s [1]. A full 3D particle simulation takes a huge computational time. We have developed a membrane model to provide a good starting point and further physical insight for a full 3D optimization. Our model approximates the axisymmetric plasma by a thin supple conducting membrane with a distributed mass, located between the electrodes and connecting them, in order to model the dynamics of the blow-by instability and to conduct the optimization. The supple membrane is allowed to slip along the conductors, freely or with some friction, as driven by the Lorentz force generated by the magnetic field inside the chamber and the current on the membrane. The total mass and the density distribution represent the initial plasma. The density is redistributed adiabatically during the acceleration. An external electrical circuit with capacitance, inductance and resistivity is part of the model. The membrane model simulation results will be compared to the 2D fluid MACH2 results and then will be used to guide a full 3D optimization by the LSP code. 1. http://hyperv.com/projects/pic/
Applied topology optimization of vibro-acoustic hearing instrument models
NASA Astrophysics Data System (ADS)
Søndergaard, Morten Birkmose; Pedersen, Claus B. W.
2014-02-01
Designing hearing instruments remains an acoustic challenge, as users request small designs for comfortable wear and cosmetic appeal while at the same time requiring sufficient amplification from the device. First, to ensure proper amplification, a critical design challenge in the hearing instrument is to minimize the feedback between the outputs (generated sound and vibrations) from the receiver looping back into the microphones. Second, the feedback signal is currently minimized using time-consuming trial-and-error design procedures for physical prototypes and virtual models using finite element analysis. In the present work it is demonstrated that structural topology optimization of vibro-acoustic finite element models can be used both to minimize the feedback signal sufficiently and to reduce the time-consuming trial-and-error design approach. The structural topology optimization of a vibro-acoustic finite element model is shown for an industrial full-scale model hearing instrument.
Time dependent optimal switching controls in online selling models
Bradonjic, Milan; Cohen, Albert
2010-01-01
We present a method to incorporate dishonesty in online selling via a stochastic optimal control problem. In our framework, the seller wishes to maximize her average wealth level W at a fixed time T of her choosing. The corresponding Hamilton-Jacobi-Bellman (HJB) equation is analyzed for a basic case. For more general models, the admissible control set is restricted to a jump process that switches between extreme values. We propose a new approach, where the optimal control problem is reduced to a multivariable optimization problem.
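The reduction to a finite-dimensional optimization can be illustrated on a toy version: restrict the control to a single switch from a high to a low effort level, so the whole problem collapses to optimizing over the switch time. The dynamics, rates, and cost terms below are invented for illustration, not the paper's selling model.

```python
import math

# Toy switching-control problem: effort u boosts wealth growth, but its
# effectiveness decays in time while its quadratic cost does not, so
# switching the effort off at the right moment maximizes final wealth.
# All dynamics and coefficients are invented.

N, T = 200, 5.0
dt = T / N

def terminal_wealth(switch_step, u_hi=1.0, u_lo=0.0):
    """Euler-simulated wealth when effort drops at step switch_step."""
    w = 1.0
    for k in range(N):
        u = u_hi if k < switch_step else u_lo
        t = k * dt
        w += dt * (0.8 * math.exp(-t) * u * w - 0.25 * u * u)
    return w

# The control problem is now a 1-D optimization over the switch time.
best_k = max(range(N + 1), key=terminal_wealth)
```

With more switches allowed, the same idea yields a small multivariable optimization over the switching times, which is the spirit of the reduction described in the abstract.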
Aeroelastic Optimization Study Based on X-56A Model
NASA Technical Reports Server (NTRS)
Li, Wesley; Pak, Chan-Gi
2014-01-01
A design process which incorporates the object-oriented multidisciplinary design, analysis, and optimization (MDAO) tool and the aeroelastic effects of high-fidelity finite element models to characterize the design space was successfully developed and established. Two multidisciplinary design optimization studies using an object-oriented MDAO tool developed at NASA Armstrong Flight Research Center were presented. The first study demonstrates the use of aeroelastic tailoring concepts to minimize the structural weight while meeting the design requirements, including strength, buckling, and flutter. A hybrid and discretization optimization approach was implemented to improve the accuracy and computational efficiency of a global optimization algorithm. The second study presents a flutter mass balancing optimization study. The results provide guidance to modify the fabricated flexible wing design and move the design flutter speeds back into the flight envelope so that the original objective of the X-56A flight test can be accomplished.
Pumping Optimization Model for Pump and Treat Systems - 15091
Baker, S.; Ivarson, Kristine A.; Karanovic, M.; Miller, Charles W.; Tonkin, M.
2015-01-15
Pump and Treat systems are being utilized to remediate contaminated groundwater in the Hanford 100 Areas adjacent to the Columbia River in Eastern Washington. Design of the systems was supported by a three-dimensional (3D) fate and transport model. This model provided sophisticated simulation capabilities but requires many hours to calculate results for each simulation considered. Many simulations are required to optimize system performance, so a two-dimensional (2D) model was created to reduce run time. The 2D model was developed as an equivalent-property version of the 3D model that derives boundary conditions and aquifer properties from the 3D model. It produces predictions that are very close to the 3D model predictions, allowing it to be used for comparative remedy analyses. Any potential system modifications identified by using the 2D version are verified for use by running the 3D model to confirm performance. The 2D model was incorporated into a comprehensive analysis system (the Pumping Optimization Model, POM) to simplify analysis of multiple simulations. It allows rapid turnaround by utilizing a graphical user interface that (1) allows operators to create hypothetical scenarios for system operation, (2) feeds the input to the 2D fate and transport model, and (3) displays the scenario results to evaluate performance improvement. All of the above is accomplished within the user interface. Complex analyses can be completed within a few hours, and multiple simulations can be compared side by side. The POM utilizes standard office computing equipment and established groundwater modeling software.
Optimization methods and silicon solar cell numerical models
NASA Technical Reports Server (NTRS)
Girardini, K.
1986-01-01
The goal of this project is the development of an optimization algorithm for use with a solar cell model. It is possible to simultaneously vary design variables such as impurity concentrations, front junction depth, back junction depth, and cell thickness to maximize the predicted cell efficiency. An optimization algorithm has been developed and interfaced with the Solar Cell Analysis Program in 1 Dimension (SCAP1D). SCAP1D uses finite difference methods to solve the differential equations which, along with several relations from the physics of semiconductors, describe mathematically the operation of a solar cell. A major obstacle is that the numerical methods used in SCAP1D require a significant amount of computer time, and during an optimization the model is called iteratively until the design variables converge to the values associated with the maximum efficiency. This problem has been alleviated by designing an optimization code specifically for use with numerically intensive simulations, to reduce the number of times the efficiency has to be calculated to achieve convergence to the optimal solution. Adapting SCAP1D so that it could be called iteratively by the optimization code provided another means of reducing the CPU time required to complete an optimization. Instead of calculating the entire I-V curve, as is usually done in SCAP1D, only the efficiency is calculated (maximum power voltage and current), and the solution from previous calculations is used to initiate the next solution.
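The strategy of cutting down the number of expensive efficiency evaluations can be sketched with a cached coordinate search. The quadratic "model" below is a cheap stand-in for a numerically intensive simulation, and the design variables and response surface are invented; the point is that memoization means no design is ever simulated twice.

```python
from functools import lru_cache

# Coordinate search over two integer-indexed design variables, with
# memoization so the expensive "simulation" is never repeated for a
# design already visited. The response surface is a made-up stand-in.

calls = 0

@lru_cache(maxsize=None)               # never recompute a visited design
def efficiency(depth_idx, thick_idx):
    """Stand-in for an expensive numerical solar-cell simulation."""
    global calls
    calls += 1
    d, t = depth_idx * 0.1, thick_idx * 0.1
    return 20.0 - (d - 0.5) ** 2 - (t - 1.2) ** 2

def coordinate_search(d0, t0, steps=50):
    d, t = d0, t0
    for _ in range(steps):
        # Vary one design variable at a time, keeping the best neighbor.
        d = max((d - 1, d, d + 1), key=lambda v: efficiency(v, t))
        t = max((t - 1, t, t + 1), key=lambda v: efficiency(d, v))
    return d, t

best = coordinate_search(0, 0)
```

After convergence, every later iteration hits the cache, so the call count stays far below the naive six evaluations per step; warm-starting each solve from the previous solution, as the abstract describes, plays a similar cost-cutting role.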
GRAVITATIONAL LENS MODELING WITH GENETIC ALGORITHMS AND PARTICLE SWARM OPTIMIZERS
Rogers, Adam; Fiege, Jason D.
2011-02-01
Strong gravitational lensing of an extended object is described by a mapping from source to image coordinates that is nonlinear and cannot generally be inverted analytically. Determining the structure of the source intensity distribution also requires a description of the blurring effect due to a point-spread function. This initial study uses an iterative gravitational lens modeling scheme based on the semilinear method to determine the linear parameters (source intensity profile) of a strongly lensed system. Our 'matrix-free' approach avoids construction of the lens and blurring operators while retaining the least-squares formulation of the problem. The parameters of an analytical lens model are found through nonlinear optimization by an advanced genetic algorithm (GA) and particle swarm optimizer (PSO). These global optimization routines are designed to explore the parameter space thoroughly, mapping model degeneracies in detail. We develop a novel method that determines the L-curve for each solution automatically, which represents the trade-off between the image chi^2 and regularization effects, and allows an estimate of the optimally regularized solution for each lens parameter set. In the final step of the optimization procedure, the lens model with the lowest chi^2 is used while the global optimizer solves for the source intensity distribution directly. This allows us to accurately determine the number of degrees of freedom in the problem to facilitate comparison between lens models and enforce positivity on the source profile. In practice, we find that the GA conducts a more thorough search of the parameter space than the PSO.
Groundwater Pollution Source Identification using Linked ANN-Optimization Model
NASA Astrophysics Data System (ADS)
Ayaz, Md; Srivastava, Rajesh; Jain, Ashu
2014-05-01
Groundwater is the principal source of drinking water in several parts of the world. Contamination of groundwater has become a serious health and environmental problem today. Human activities, including industrial and agricultural activities, are generally responsible for this contamination. Identification of the groundwater pollution source is a major step in groundwater pollution remediation. Complete knowledge of the pollution source in terms of its source characteristics is essential to adopt an effective remediation strategy. A groundwater pollution source is said to be identified completely when the source characteristics - location, strength and release period - are known. Identification of an unknown groundwater pollution source is an ill-posed inverse problem. It becomes more difficult for real field conditions, when the lag time between the first reading at the observation well and the time at which the source becomes active is not known. We developed a linked ANN-Optimization model for complete identification of an unknown groundwater pollution source. The model comprises two parts: an optimization model and an ANN model. Decision variables of the linked ANN-Optimization model contain the source location and release period of the pollution source. An objective function is formulated using the spatial and temporal data of observed and simulated concentrations, and then minimized to identify the pollution source parameters. In the formulation of the objective function, we require the lag time, which is not known. An ANN model with one hidden layer is trained using the Levenberg-Marquardt algorithm to find the lag time. Different combinations of source locations and release periods are used as inputs and lag time is obtained as the output. Performance of the proposed model is evaluated for two- and three-dimensional cases with error-free and erroneous data. Erroneous data were generated by adding uniformly distributed random error (error level 0-10%) to the analytically computed concentrations.
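The simulate-and-search core of such an identification model can be illustrated with a toy one-dimensional version: an analytic transport solution plays the role of the simulator, and a grid search plays the role of the optimizer. All functions and parameter values below are hypothetical, and the ANN-supplied lag-time component is omitted for brevity.

```python
import math

# Toy 1-D sketch of source identification: minimize the misfit between
# "observed" concentrations and those simulated for candidate sources.
V = 1.0   # advection velocity
D = 0.5   # dispersion coefficient

def conc(x, t, x0, m):
    """Analytic concentration at (x, t) from an instantaneous source at x0."""
    if t <= 0:
        return 0.0
    s = 4.0 * D * t
    return (m / math.sqrt(math.pi * s)) * math.exp(-((x - x0 - V * t) ** 2) / s)

# "Observed" data generated from a hidden true source (x0 = 2.0, m = 5.0).
obs = [(x, t, conc(x, t, 2.0, 5.0)) for x in (6.0, 8.0) for t in (2.0, 4.0, 6.0)]

def misfit(x0, m):
    """Sum of squared differences between observed and simulated values."""
    return sum((conc(x, t, x0, m) - c) ** 2 for x, t, c in obs)

# Exhaustive grid search over candidate locations and strengths.
best = min(((misfit(x0 / 10.0, m / 10.0), x0 / 10.0, m / 10.0)
            for x0 in range(0, 41) for m in range(10, 101)),
           key=lambda r: r[0])
```

In the paper the optimizer is more sophisticated and the lag time enters the objective via a trained ANN, but the structure of the inverse problem is the same.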
Optimizing the Teaching-Learning Process Through a Linear Programming Model--Stage Increment Model.
ERIC Educational Resources Information Center
Belgard, Maria R.; Min, Leo Yoon-Gee
An operations research method to optimize the teaching-learning process is introduced in this paper. In particular, a linear programming model is proposed which, unlike dynamic or control theory models, allows the computer to react to the responses of a learner in seconds or less. To satisfy the assumptions of linearity, the seemingly complicated…
Geometry Modeling and Grid Generation for Design and Optimization
NASA Technical Reports Server (NTRS)
Samareh, Jamshid A.
1998-01-01
Geometry modeling and grid generation (GMGG) have played and will continue to play an important role in computational aerosciences. During the past two decades, tremendous progress has occurred in GMGG; however, GMGG is still the biggest bottleneck to routine applications for complicated Computational Fluid Dynamics (CFD) and Computational Structural Mechanics (CSM) models for analysis, design, and optimization. We are still far from incorporating GMGG tools in a design and optimization environment for complicated configurations. It is still a challenging task to parameterize an existing model in today's Computer-Aided Design (CAD) systems, and the models created are not always good enough for automatic grid generation tools. Designers may believe their models are complete and accurate, but unseen imperfections (e.g., gaps, unwanted wiggles, free edges, slivers, and transition cracks) often cause problems in gridding for CSM and CFD. Despite many advances in grid generation, the process is still the most labor-intensive and time-consuming part of the computational aerosciences for analysis, design, and optimization. In an ideal design environment, a design engineer would use a parametric model to evaluate alternative designs effortlessly and optimize an existing design for a new set of design objectives and constraints. For this ideal environment to be realized, the GMGG tools must have the following characteristics: (1) be automated, (2) provide consistent geometry across all disciplines, (3) be parametric, and (4) provide sensitivity derivatives. This paper will review the status of GMGG for analysis, design, and optimization processes, and it will focus on some emerging ideas that will advance the GMGG toward the ideal design environment.
Verifying and Validating Proposed Models for FSW Process Optimization
NASA Technical Reports Server (NTRS)
Schneider, Judith
2008-01-01
This slide presentation reviews Friction Stir Welding (FSW) and the attempts to model the process in order to optimize and improve it. The studies are ongoing to validate and refine the model of metal flow in the FSW process. There are slides showing the conventional FSW process, a couple of weld tool designs, and how the design interacts with the metal flow path. The two basic components of the weld tool are shown, along with geometries of the shoulder design. Modeling of the FSW process is reviewed. Other topics include (1) Microstructure features, (2) Flow Streamlines, (3) Steady-state Nature, and (4) Grain Refinement Mechanisms.
Hyperopt: a Python library for model selection and hyperparameter optimization
NASA Astrophysics Data System (ADS)
Bergstra, James; Komer, Brent; Eliasmith, Chris; Yamins, Dan; Cox, David D.
2015-01-01
Sequential model-based optimization (also known as Bayesian optimization) is one of the most efficient methods (per function evaluation) of function minimization. This efficiency makes it appropriate for optimizing the hyperparameters of machine learning algorithms that are slow to train. The Hyperopt library provides algorithms and parallelization infrastructure for performing hyperparameter optimization (model selection) in Python. This paper presents an introductory tutorial on the usage of the Hyperopt library, including the description of search spaces, minimization (in serial and parallel), and the analysis of the results collected in the course of minimization. This paper also gives an overview of Hyperopt-Sklearn, a software project that provides automatic algorithm configuration of the Scikit-learn machine learning library. Following Auto-Weka, we take the view that the choice of classifier and even the choice of preprocessing module can be taken together to represent a single large hyperparameter optimization problem. We use Hyperopt to define a search space that encompasses many standard components (e.g. SVM, RF, KNN, PCA, TFIDF) and common patterns of composing them together. We demonstrate, using search algorithms in Hyperopt and standard benchmarking data sets (MNIST, 20-newsgroups, convex shapes), that searching this space is practical and effective. In particular, we improve on best-known scores for the model space for both MNIST and convex shapes. The paper closes with some discussion of ongoing and future work.
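The shape of the Hyperopt workflow is a single minimization call over a declarative search space. The miniature below mimics that pattern in pure Python; it is a stand-in for illustration, not the Hyperopt API, and random search takes the place of the Tree-structured Parzen Estimator.

```python
import random

# Pure-Python miniature of the fmin(objective, space, max_evals) pattern.
def uniform(lo, hi):
    """Sampler for a continuous hyperparameter."""
    return lambda rng: rng.uniform(lo, hi)

def choice(options):
    """Sampler for a categorical hyperparameter."""
    return lambda rng: rng.choice(options)

def fmin(objective, space, max_evals=200, seed=0):
    """Minimize objective over a dict of samplers; return best config and loss."""
    rng = random.Random(seed)
    best_cfg, best_val = None, float("inf")
    for _ in range(max_evals):
        cfg = {name: sample(rng) for name, sample in space.items()}
        val = objective(cfg)
        if val < best_val:
            best_cfg, best_val = cfg, val
    return best_cfg, best_val

# Hypothetical "validation loss" standing in for training a slow model.
space = {"c": uniform(-5.0, 5.0), "kernel": choice(["linear", "rbf"])}
objective = lambda cfg: (cfg["c"] - 2.0) ** 2 + (0.0 if cfg["kernel"] == "rbf" else 1.0)
best_cfg, best_val = fmin(objective, space, max_evals=500)
```

Hyperopt adds to this skeleton the pieces the abstract describes: richer search-space expressions, smarter suggestion algorithms than random sampling, and parallel evaluation infrastructure.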
Electrochemical model based charge optimization for lithium-ion batteries
NASA Astrophysics Data System (ADS)
Pramanik, Sourav; Anwar, Sohel
2016-05-01
In this paper, we propose a novel optimal strategy for charging a lithium-ion battery, based on an electrochemical battery model, that is aimed at improved performance. A performance index is defined that minimizes the charging effort along with the deviation from the rated maximum thresholds for cell temperature and charging current. The method proposed in this paper seeks a faster charging rate while maintaining safe limits for various battery parameters. Safe operation of the battery is achieved by including the battery bulk temperature, which is of critical importance for electric vehicles, as a control component in the performance index. Another important aspect of the proposed performance objective is that the algorithm allows higher charging rates without compromising the internal electrochemical kinetics of the battery, preventing abusive conditions and thereby improving long-term durability. A more realistic model, based on battery electrochemistry, has been used for the design of the optimal algorithm, as opposed to the conventional equivalent circuit models. To solve the optimization problem, Pontryagin's principle has been used, which is very effective for constrained optimization problems with both state and input constraints. Simulation results show that the proposed optimal charging algorithm is capable of shortening the charging time of a lithium-ion cell while maintaining the temperature constraint, when compared with standard constant-current charging. The designed method also maintains the internal states within limits that avoid abusive operating conditions.
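The underlying trade-off (faster charging versus the temperature constraint) can be shown with a crude lumped model: higher current fills the cell faster but heats it more. All coefficients below are hypothetical, and scanning a few constant currents replaces the paper's Pontryagin-based solution, which shapes the current over time instead.

```python
# Toy constant-current charging study under a bulk-temperature cap.
T_MAX = 40.0       # temperature cap, deg C (illustrative)
DT = 1.0           # time step, s
CAPACITY = 3600.0  # charge needed, A*s (1 Ah)

def charge_time(current):
    """Return time to full charge, or None if the temperature cap is hit."""
    soc, temp, t = 0.0, 25.0, 0.0
    while soc < CAPACITY:
        # Joule heating ~ I^2 minus convective cooling toward 25 C ambient.
        temp += DT * (0.002 * current ** 2 - 0.01 * (temp - 25.0))
        if temp > T_MAX:
            return None
        soc += current * DT
        t += DT
    return t

results = {i: charge_time(float(i)) for i in range(1, 11)}
feasible = {i: t for i, t in results.items() if t is not None}
best_current = max(feasible)   # fastest current that respects the cap
```

An optimal-control solution would do better than the best constant current, typically charging hard early and backing off as the temperature approaches its limit.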
Analytical models integrated with satellite images for optimized pest management
Technology Transfer Automated Retrieval System (TEKTRAN)
The global field protection (GFP) was developed to protect and optimize pest management resources integrating satellite images for precise field demarcation with physical models of controlled release devices of pesticides to protect large fields. The GFP was implemented using a graphical user interf...
Optimal Control of a Dengue Epidemic Model with Vaccination
NASA Astrophysics Data System (ADS)
Rodrigues, Helena Sofia; Monteiro, M. Teresa T.; Torres, Delfim F. M.
2011-09-01
We present a SIR+ASI epidemic model to describe the interaction between human and dengue fever mosquito populations. A control strategy in the form of vaccination, to decrease the number of infected individuals, is used. An optimal control approach is applied in order to find the best way to fight the disease.
Metabolic engineering with multi-objective optimization of kinetic models.
Villaverde, Alejandro F; Bongard, Sophia; Mauch, Klaus; Balsa-Canto, Eva; Banga, Julio R
2016-03-20
Kinetic models have a great potential for metabolic engineering applications. They can be used for testing which genetic and regulatory modifications can increase the production of metabolites of interest, while simultaneously monitoring other key functions of the host organism. This work presents a methodology for increasing productivity in biotechnological processes by exploiting dynamic models. It uses multi-objective dynamic optimization to identify the combination of targets (enzymatic modifications) and the degree of up- or down-regulation that must be performed in order to optimize a set of pre-defined performance metrics subject to process constraints. The capabilities of the approach are demonstrated on a realistic and computationally challenging application: a large-scale metabolic model of Chinese Hamster Ovary (CHO) cells, which are used for antibody production in a fed-batch process. The proposed methodology provides sustained and robust growth in CHO cells, increasing productivity while simultaneously increasing biomass production and product titer, and keeping the concentrations of lactate and ammonia at low values. The approach presented here can be used for optimizing metabolic models by finding the best combination of targets and their optimal level of up/down-regulation. Furthermore, it can accommodate additional trade-offs and constraints with great flexibility. PMID:26826510
Discover for Yourself: An Optimal Control Model in Insect Colonies
ERIC Educational Resources Information Center
Winkel, Brian
2013-01-01
We describe the enlightening path of self-discovery afforded to the teacher of undergraduate mathematics. This is demonstrated as we find and develop background material on an application of optimal control theory to model the evolutionary strategy of an insect colony to produce the maximum number of queen or reproducer insects in the colony at…
Review of Optimization Methods in Groundwater Modeling and Management
NASA Astrophysics Data System (ADS)
Yeh, W. W.
2001-12-01
This paper surveys nonlinear optimization methods developed for groundwater modeling and management. The first part reviews algorithms used for model calibration, that is, the inverse problem of parameter estimation. In recent years, groundwater models have been combined with optimization models to identify the best management alternatives. Once the objectives and constraints are specified, most problems lend themselves to solution techniques developed in operations research, optimal control, and combinatorial optimization. The second part reviews methods developed for groundwater management. Algorithms and methods reviewed include quadratic programming, differential dynamic programming, nonlinear programming, mixed integer programming, stochastic programming, and non-gradient-based search algorithms. Advantages and drawbacks associated with each approach are discussed. A recent tendency has been toward combining gradient-based algorithms with non-gradient-based search algorithms, in which a non-gradient-based search algorithm identifies a near-optimum solution and a gradient-based algorithm then uses that solution as its initial estimate for rapid convergence.
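The hybrid strategy noted in the final sentence can be sketched directly: a non-gradient global stage (here, plain random sampling) locates a near-optimal point, which then seeds a gradient-based local stage (here, finite-difference gradient descent). The two-basin objective is a hypothetical stand-in for a management model.

```python
import random

def f(x):
    """Two basins: global minimum 0 at x = 3, shallower local basin at x = -2."""
    return min((x - 3.0) ** 2, (x + 2.0) ** 2 + 1.0)

def global_stage(n=200, seed=1):
    """Non-gradient stage: best of n random samples over the feasible range."""
    rng = random.Random(seed)
    return min((rng.uniform(-10.0, 10.0) for _ in range(n)), key=f)

def local_stage(x, lr=0.1, steps=200, h=1e-6):
    """Gradient stage: finite-difference gradient descent from the seed."""
    for _ in range(steps):
        grad = (f(x + h) - f(x - h)) / (2.0 * h)
        x -= lr * grad
    return x

x0 = global_stage()     # lands in the basin of the global minimum
x_opt = local_stage(x0) # converges rapidly once near the optimum
```

The global stage keeps the local method from being trapped in the wrong basin; the local method supplies the fast final convergence a random search lacks.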
Water-resources optimization model for Santa Barbara, California
Nishikawa, T.
1998-01-01
A simulation-optimization model has been developed for the optimal management of the city of Santa Barbara's water resources during a drought. The model, which links groundwater simulation with linear programming, has a planning horizon of 5 years. The objective is to minimize the cost of water supply subject to: water demand constraints, hydraulic head constraints to control seawater intrusion, and water capacity constraints. The decision variables are monthly water deliveries from surface water and groundwater. The state variables are hydraulic heads. The drought of 1947-51 is the city's worst drought on record, and simulated surface-water supplies for this period were used as a basis for testing optimal management of current water resources under drought conditions. The simulation-optimization model was applied using three reservoir operation rules. In addition, the model's sensitivity to demand, carry over [the storage of water in one year for use in later years], head constraints, and capacity constraints was tested.
To the optimization problem in minority game model
Yanishevsky, Vasyl
2009-12-14
The article presents results for the optimization problem in the minority game model in a Gaussian approximation using one-step replica symmetry breaking (1RSB). A comparison is made with the replica-symmetric (RS) approximation and with results in the literature obtained by other methods.
Parallel Optimization of 3D Cardiac Electrophysiological Model Using GPU
Xia, Yong; Wang, Kuanquan; Zhang, Henggui
2015-01-01
Large-scale 3D virtual heart model simulations are highly demanding in computational resources. This poses a major challenge for traditional CPU-based computing environments, which either cannot meet the computational demands of whole-heart modeling or are prohibitively expensive. GPU-based parallel computing therefore provides an alternative for solving the large-scale computational problems of whole-heart modeling. In this study, using a 3D sheep atrial model as a test bed, we developed a GPU-based simulation algorithm to simulate the conduction of electrical excitation waves in the 3D atria. In the GPU algorithm, a multicellular tissue model was split into two components: one is the single cell model (ordinary differential equation) and the other is the diffusion term of the monodomain model (partial differential equation). Such a decoupling enabled realization of the GPU parallel algorithm. Furthermore, several optimization strategies were proposed based on the features of the virtual heart model, which enabled a 200-fold speedup as compared to a CPU implementation. In conclusion, an optimized GPU algorithm has been developed that provides an economic and powerful platform for 3D whole heart simulations. PMID:26581957
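The ODE/PDE decoupling that makes the tissue model parallelizable can be shown serially in one dimension: each step first advances every cell's reaction term independently (the part a GPU can run one thread per cell), then applies the diffusion operator. The cubic reaction term below is a generic excitable-media stand-in, not the sheep atrial cell model used in the paper.

```python
# Serial 1-D sketch of operator splitting for an excitable medium.
N, DT, DX, DIFF = 100, 0.05, 1.0, 0.5
u = [0.0] * N
u[0] = u[1] = 1.0  # stimulate the left end

def reaction_step(u):
    """Independent per-cell ODE update: du/dt = u(1-u)(u-0.1)."""
    return [ui + DT * ui * (1.0 - ui) * (ui - 0.1) for ui in u]

def diffusion_step(u):
    """Explicit finite-difference Laplacian with no-flux boundaries."""
    out = []
    for i in range(len(u)):
        left = u[i - 1] if i > 0 else u[i]
        right = u[i + 1] if i < len(u) - 1 else u[i]
        out.append(u[i] + DT * DIFF * (left - 2.0 * u[i] + right) / DX ** 2)
    return out

for _ in range(2000):
    u = diffusion_step(reaction_step(u))
# A depolarization front propagates rightward from the stimulated end.
```

The reaction step has no cross-cell dependencies, which is exactly why the single-cell ODE work maps cleanly onto GPU threads, while the diffusion step needs only nearest-neighbor communication.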
NASA Astrophysics Data System (ADS)
Gong, Wei; Duan, Qingyun; Li, Jianduo; Wang, Chen; Di, Zhenhua; Ye, Aizhong; Miao, Chiyuan; Dai, Yongjiu
2016-03-01
Parameter specification is an important source of uncertainty in large, complex geophysical models. These models generally have multiple model outputs that require multiobjective optimization algorithms. Although such algorithms have long been available, they usually require a large number of model runs and are therefore computationally expensive for large, complex dynamic models. In this paper, a multiobjective adaptive surrogate modeling-based optimization (MO-ASMO) algorithm is introduced that aims to reduce computational cost while maintaining optimization effectiveness. Geophysical dynamic models usually have a prior parameterization scheme derived from the physical processes involved, and our goal is to improve all of the objectives by parameter calibration. In this study, we developed a method for directing the search processes toward the region that can improve all of the objectives simultaneously. We tested the MO-ASMO algorithm against NSGA-II and SUMO with 13 test functions and a land surface model, the Common Land Model (CoLM). The results demonstrated the effectiveness and efficiency of MO-ASMO.
Optimal thermalization in a shell model of homogeneous turbulence
NASA Astrophysics Data System (ADS)
Thalabard, Simon; Turkington, Bruce
2016-04-01
We investigate the turbulence-induced dissipation of the large scales in a statistically homogeneous flow using an ‘optimal closure,’ which one of us (BT) has recently proposed in the context of Hamiltonian dynamics. This statistical closure employs a Gaussian model for the turbulent scales, with corresponding vanishing third cumulant, and yet it captures an intrinsic damping. The key to this apparent paradox lies in a clear distinction between true ensemble averages and their proxies, most easily grasped when one works directly with the Liouville equation rather than the cumulant hierarchy. We focus on a simple problem for which the optimal closure can be fully and exactly worked out: the relaxation arbitrarily far from equilibrium of a single energy shell towards Gibbs equilibrium in an inviscid shell model of 3D turbulence. The predictions of the optimal closure are validated against direct numerical simulation (DNS) and contrasted with those derived from the EDQNM closure.
Modeling of biological intelligence for SCM system optimization.
Chen, Shengyong; Zheng, Yujun; Cattani, Carlo; Wang, Wanliang
2012-01-01
This article summarizes some methods from biological intelligence for modeling and optimization of supply chain management (SCM) systems, including genetic algorithms, evolutionary programming, differential evolution, swarm intelligence, artificial immune, and other biological intelligence related methods. An SCM system is adaptive, dynamic, open self-organizing, which is maintained by flows of information, materials, goods, funds, and energy. Traditional methods for modeling and optimizing complex SCM systems require huge amounts of computing resources, and biological intelligence-based solutions can often provide valuable alternatives for efficiently solving problems. The paper summarizes the recent related methods for the design and optimization of SCM systems, which covers the most widely used genetic algorithms and other evolutionary algorithms. PMID:22162724
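A minimal genetic algorithm of the kind surveyed above can be sketched on a toy binary "supplier selection" problem: each bit switches a hypothetical facility on or off, and fitness counts agreement with a known best layout. Real SCM objectives (cost, flow, capacity) would replace this stand-in.

```python
import random

rng = random.Random(42)
TARGET = [rng.randint(0, 1) for _ in range(30)]  # hidden optimal layout

def fitness(ind):
    """Number of decisions matching the hidden optimum."""
    return sum(1 for a, b in zip(ind, TARGET) if a == b)

def evolve(pop_size=40, generations=120, p_mut=0.02):
    """Elitist GA: truncation selection, one-point crossover, bit mutation."""
    pop = [[rng.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]              # keep the better half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, len(TARGET))     # one-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if rng.random() < p_mut else g for g in child]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

Evolutionary programming, differential evolution, and swarm methods share this generate-evaluate-select loop; they differ mainly in how candidate solutions are represented and varied.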
Asymmetric optimal-velocity car-following model
NASA Astrophysics Data System (ADS)
Xu, Xihua; Pang, John; Monterola, Christopher
2015-10-01
Taking into account the asymmetric effect of the velocity differences between vehicles, we present an asymmetric optimal-velocity model for car-following theory. The asymmetry between acceleration and deceleration is represented by an exponential function with an asymmetry factor, which agrees with published experiments. This model avoids the disadvantage of the unrealistically high accelerations appearing in previous models when the velocity difference becomes large. The model is simple and has only two independent parameters. The linear stability condition is derived, and the phase transition of the traffic flow appears beyond the critical density. The strength of interaction between clusters is shown to increase with the asymmetry factor in our model.
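A generic car-following simulation in the spirit of this model can be sketched as follows: each vehicle relaxes toward an optimal velocity V(h) of its headway h, with the sensitivity scaled by an exponential asymmetry factor so that the response is stronger when closing in than when falling behind. The functional forms and parameter values here are illustrative, not the paper's.

```python
import math

A, LAM, DT, VMAX = 1.0, 0.5, 0.05, 2.0  # sensitivity, asymmetry, step, max speed

def V(h):
    """Optimal velocity: near 0 at small headway, saturating at VMAX."""
    return VMAX * (math.tanh(h - 2.0) + math.tanh(2.0)) / (1.0 + math.tanh(2.0))

def simulate(n_cars=5, steps=4000):
    """Platoon behind a leader moving at constant speed 1.0."""
    xs = [10.0 * (n_cars - i) for i in range(n_cars)]  # car 0 leads
    vs = [1.0] + [0.0] * (n_cars - 1)
    for _ in range(steps):
        new_vs = [vs[0]]                               # leader holds its speed
        for i in range(1, n_cars):
            h = xs[i - 1] - xs[i]                      # headway to car ahead
            dv = vs[i - 1] - vs[i]                     # relative velocity
            sens = A * math.exp(-LAM * dv)             # stronger when closing (dv < 0)
            new_vs.append(vs[i] + DT * sens * (V(h) - vs[i]))
        vs = new_vs
        xs = [x + DT * v for x, v in zip(xs, vs)]
    return xs, vs

xs, vs = simulate()
# Followers settle to the leader's speed at the headway where V(h) = 1.0.
```

With LAM = 0, the asymmetry disappears and the update reduces to a symmetric optimal-velocity model.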
Optimized volume models of earthquake-triggered landslides
Xu, Chong; Xu, Xiwei; Shen, Lingling; Yao, Qi; Tan, Xibin; Kang, Wenjun; Ma, Siyuan; Wu, Xiyan; Cai, Juntao; Gao, Mingxing; Li, Kang
2016-01-01
In this study, we proposed three optimized models for calculating the total volume of landslides triggered by the 2008 Wenchuan, China Mw 7.9 earthquake. First, we calculated the volume of each deposit of 1,415 landslides triggered by the quake based on pre- and post-quake DEMs in 20 m resolution. The samples were used to fit the conventional landslide “volume-area” power law relationship and the 3 optimized models we proposed, respectively. Two data fitting methods, i.e. log-transformed-based linear and original data-based nonlinear least square, were employed for the 4 models. Results show that original data-based nonlinear least square, combined with an optimized model considering length, width, height, lithology, slope, peak ground acceleration, and slope aspect, shows the best performance. This model was subsequently applied to the database of landslides triggered by the quake, except for the two largest ones with known volumes. It indicates that the total volume of the 196,007 landslides is about 1.2 × 10¹⁰ m³ in deposit materials and 1 × 10¹⁰ m³ in source areas, respectively. The total volume predicted by the existing relationship between earthquake magnitude and total landslide volume is much less than that from this study, which suggests the need to update the power-law relationship. PMID:27404212
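The log-transformed linear fit used as the baseline above is worth spelling out: taking logs of V = c·A^d gives log V = log c + d·log A, an ordinary least-squares problem with a closed-form solution. Synthetic data stand in for the landslide inventory here; the paper's preferred alternative fits the untransformed data by nonlinear least squares instead, which weights large landslides differently.

```python
import math
import random

# Synthetic "inventory": areas (m^2) and volumes following V = c * A^d
# with multiplicative lognormal noise. true_c and true_d are made up.
rng = random.Random(0)
true_c, true_d = 0.05, 1.3
areas = [10 ** rng.uniform(2.0, 5.0) for _ in range(500)]
vols = [true_c * a ** true_d * math.exp(rng.gauss(0.0, 0.1)) for a in areas]

# Closed-form simple linear regression in log space.
lx = [math.log(a) for a in areas]
ly = [math.log(v) for v in vols]
n = len(lx)
mx, my = sum(lx) / n, sum(ly) / n
d_hat = (sum((x - mx) * (y - my) for x, y in zip(lx, ly))
         / sum((x - mx) ** 2 for x in lx))     # exponent d
c_hat = math.exp(my - d_hat * mx)              # prefactor c
```

The log transform makes the fit trivial but implicitly assumes multiplicative errors; when errors are additive in the original units, the nonlinear fit the paper favors is the statistically appropriate choice.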
Optimal control in a model of malaria with differential susceptibility
NASA Astrophysics Data System (ADS)
Hincapié, Doracelly; Ospina, Juan
2014-06-01
A malaria model with differential susceptibility is analyzed using the optimal control technique. In the model the human population is classified as susceptible, infected and recovered. Susceptibility is assumed dependent on genetic, physiological, or social characteristics that vary between individuals. The model is described by a system of differential equations that relate the human and vector populations, so that the infection is transmitted to humans by vectors, and the infection is transmitted to vectors by humans. The model considered is analyzed using the optimal control method when the control consists in using of insecticide-treated nets and educational campaigns; and the optimality criterion is to minimize the number of infected humans, while keeping the cost as low as is possible. One first goal is to determine the effects of differential susceptibility in the proposed control mechanism; and the second goal is to determine the algebraic form of the basic reproductive number of the model. All computations are performed using computer algebra, specifically Maple. It is claimed that the analytical results obtained are important for the design and implementation of control measures for malaria. It is suggested some future investigations such as the application of the method to other vector-borne diseases such as dengue or yellow fever; and also it is suggested the possible application of free software of computer algebra like Maxima.
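The effect of a transmission-reducing control can be illustrated with a minimal SIR toy: a control u (standing in for insecticide-treated net coverage) scales down the transmission rate, and the infected trajectory is compared with and without it. Parameter values are illustrative; the paper couples human and vector populations and derives u from optimal control theory rather than fixing it.

```python
# Minimal SIR model with a constant transmission-reducing control u.
BETA, GAMMA, DT = 0.5, 0.1, 0.1  # transmission, recovery, Euler step

def simulate(u, steps=2000):
    """Forward-Euler SIR run; returns (epidemic peak, final recovered fraction)."""
    s, i, r = 0.99, 0.01, 0.0
    peak = i
    for _ in range(steps):
        new_inf = (1.0 - u) * BETA * s * i   # control scales transmission
        s -= DT * new_inf
        i += DT * (new_inf - GAMMA * i)
        r += DT * GAMMA * i
        peak = max(peak, i)
    return peak, r

peak_none, final_none = simulate(u=0.0)   # no intervention
peak_ctrl, final_ctrl = simulate(u=0.6)   # 60% transmission reduction
```

An optimal control formulation would let u(t) vary over time, trading the cost of the intervention against the number of infections along the whole trajectory.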
Optimization of Regression Models of Experimental Data Using Confirmation Points
NASA Technical Reports Server (NTRS)
Ulbrich, N.
2010-01-01
A new search metric is discussed that may be used to better assess the predictive capability of different math term combinations during the optimization of a regression model of experimental data. The new search metric can be determined for each tested math term combination if the given experimental data set is split into two subsets. The first subset consists of data points that are only used to determine the coefficients of the regression model. The second subset consists of confirmation points that are exclusively used to test the regression model. The new search metric value is assigned after comparing two values that describe the quality of the fit of each subset. The first value is the standard deviation of the PRESS residuals of the data points. The second value is the standard deviation of the response residuals of the confirmation points. The greater of the two values is used as the new search metric value. This choice guarantees that both standard deviations are always less than or equal to the value that is used during the optimization. Experimental data from the calibration of a wind tunnel strain-gage balance is used to illustrate the application of the new search metric. The new search metric ultimately generates an optimized regression model that was already tested at regression-model-independent confirmation points before it is ever used to predict an unknown response from a set of regressors.
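For a simple straight-line model, the metric described above can be computed directly: the data points determine the fit, PRESS residuals come from leave-one-out refits, confirmation points are held out entirely, and the metric is the larger of the two standard deviations. The synthetic data below are a stand-in for the balance calibration data.

```python
import math
import random

rng = random.Random(3)
pts = [(x, 2.0 + 0.5 * x + rng.gauss(0.0, 0.2)) for x in range(20)]
data, confirm = pts[::2], pts[1::2]   # fit set and confirmation set

def fit(points):
    """Closed-form simple linear regression; returns (intercept, slope)."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    b = (sum((x - mx) * (y - my) for x, y in points)
         / sum((x - mx) ** 2 for x, _ in points))
    return my - b * mx, b

def sd(vals):
    """Sample standard deviation."""
    m = sum(vals) / len(vals)
    return math.sqrt(sum((v - m) ** 2 for v in vals) / (len(vals) - 1))

# PRESS residuals: each data point predicted by a fit that excludes it.
press = []
for i, (x, y) in enumerate(data):
    a, b = fit(data[:i] + data[i + 1:])
    press.append(y - (a + b * x))

# Confirmation residuals: held-out points against the full-data fit.
a, b = fit(data)
conf_resid = [y - (a + b * x) for x, y in confirm]

metric = max(sd(press), sd(conf_resid))  # the new search metric
```

During model-term search, the combination with the smallest such metric is preferred, which penalizes term sets that fit the data points well but generalize poorly to the confirmation points.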
Simulation/optimization modeling for robust pumping strategy design.
Kalwij, Ineke M; Peralta, Richard C
2006-01-01
A new simulation/optimization modeling approach is presented for addressing uncertain knowledge of aquifer parameters. The Robustness Enhancing Optimizer (REO) couples genetic algorithm and tabu search as optimizers and incorporates aquifer parameter sensitivity analysis to guide multiple-realization optimization. The REO maximizes strategy robustness for a pumping strategy that is optimal for a primary objective function (OF), such as cost. The more robust a strategy, the more likely it is to achieve management goals in the field, even if the physical system differs from the model. The REO is applied to trinitrotoluene and Royal Demolition Explosive plumes at Umatilla Chemical Depot in Oregon to develop robust least-cost strategies. The REO efficiently develops robust pumping strategies while maintaining the optimal value of the primary OF, differing from the common situation in which a primary OF value degrades as strategy reliability increases. The REO is especially valuable where data to develop realistic probability density functions (PDFs) or statistically derived realizations are unavailable. Because they require much less field data, REO-developed strategies might not achieve as high a mathematical reliability as strategies developed using many realizations based upon real aquifer parameter PDFs. REO-developed strategies might or might not yield a better OF value in the field. PMID:16857035
Aeroelastic Optimization Study Based on the X-56A Model
NASA Technical Reports Server (NTRS)
Li, Wesley W.; Pak, Chan-Gi
2014-01-01
One way to increase aircraft fuel efficiency is to reduce structural weight while maintaining adequate structural airworthiness, both statically and aeroelastically. A design process which incorporates the object-oriented multidisciplinary design, analysis, and optimization (MDAO) tool and the aeroelastic effects of high fidelity finite element models to characterize the design space was successfully developed and established. This paper presents two multidisciplinary design optimization studies using an object-oriented MDAO tool developed at NASA Armstrong Flight Research Center. The first study demonstrates the use of aeroelastic tailoring concepts to minimize the structural weight while meeting the design requirements including strength, buckling, and flutter. Such an approach exploits the anisotropic capabilities of the fiber composite materials chosen for this analytical exercise through the ply stacking sequence. A hybrid and discretization optimization approach improves the accuracy and computational efficiency of a global optimization algorithm. The second study presents a flutter mass balancing optimization study for the fabricated flexible wing of the X-56A model, since a desired flutter speed band is required for the active flutter suppression demonstration during flight testing. The results of the second study provide guidance to modify the wing design and move the design flutter speeds back into the flight envelope so that the original objective of the X-56A flight test can be accomplished successfully. The second case also demonstrates that the object-oriented MDAO tool can handle multiple analytical configurations in a single optimization run.
Optimal uncertainty quantification with model uncertainty and legacy data
NASA Astrophysics Data System (ADS)
Kamga, P.-H. T.; Li, B.; McKerns, M.; Nguyen, L. H.; Ortiz, M.; Owhadi, H.; Sullivan, T. J.
2014-12-01
We present an optimal uncertainty quantification (OUQ) protocol for systems that are characterized by an existing physics-based model and for which only legacy data is available, i.e., no additional experimental testing of the system is possible. Specifically, the OUQ strategy developed in this work consists of using the legacy data to establish, in a probabilistic sense, the level of error of the model, or modeling error, and to subsequently use the validated model as a basis for the determination of probabilities of outcomes. The quantification of modeling uncertainty specifically establishes, to a specified confidence, the probability that the actual response of the system lies within a certain distance of the model. Once the extent of model uncertainty has been established in this manner, the model can be conveniently used to stand in for the actual or empirical response of the system in order to compute probabilities of outcomes. To this end, we resort to the OUQ reduction theorem of Owhadi et al. (2013) in order to reduce the computation of optimal upper and lower bounds on probabilities of outcomes to a finite-dimensional optimization problem. We illustrate the resulting UQ protocol by means of an application concerned with the response to hypervelocity impact of 6061-T6 aluminum plates by Nylon 6/6 impactors at impact velocities in the range of 5-7 km/s. As this application demonstrates, the legacy OUQ protocol is remarkable in its ability to process diverse information on the system and to supply rigorous bounds on system performance under realistic, less-than-ideal scenarios.
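The reduction to a finite-dimensional optimization can be illustrated with a minimal sketch (not the protocol of the paper): bounding P(X >= a) over all distributions on [0, 1] with a known mean by searching only two-point measures, which recovers the tight Markov bound m/a. All numbers are illustrative.

```python
# Upper-bound P(X >= a) over all distributions on [0, 1] with E[X] = m.
# In the spirit of OUQ reduction, it suffices to search two-point measures;
# the optimum here recovers the (tight) Markov bound m/a.

def best_two_point_bound(m, a, grid=200):
    best = 0.0
    for i in range(grid + 1):
        x1 = i / grid            # support point below the threshold a
        for j in range(grid + 1):
            x2 = j / grid        # support point at or above a
            if x1 >= a or x2 < a or x2 == x1:
                continue
            p = (m - x1) / (x2 - x1)   # weight on x2 from the mean constraint
            if 0.0 <= p <= 1.0:
                best = max(best, p)
    return best

bound = best_two_point_bound(m=0.2, a=0.5)
print(bound)   # recovers the Markov bound 0.2 / 0.5 = 0.4
```

The full protocol optimizes over much richer admissible sets (model-error constraints from legacy data), but the finite-dimensional structure of the search is the same.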
A Simple Model of Optimal Population Coding for Sensory Systems
Doi, Eizaburo; Lewicki, Michael S.
2014-01-01
A fundamental task of a sensory system is to infer information about the environment. It has long been suggested that an important goal of the first stage of this process is to encode the raw sensory signal efficiently by reducing its redundancy in the neural representation. Some redundancy, however, would be expected because it can provide robustness to noise inherent in the system. Encoding the raw sensory signal itself is also problematic, because it contains distortion and noise. The optimal solution would be constrained further by limited biological resources. Here, we analyze a simple theoretical model that incorporates these key aspects of sensory coding, and apply it to conditions in the retina. The model specifies the optimal way to incorporate redundancy in a population of noisy neurons, while also optimally compensating for sensory distortion and noise. Importantly, it allows an arbitrary input-to-output cell ratio between sensory units (photoreceptors) and encoding units (retinal ganglion cells), providing predictions of retinal codes at different eccentricities. Compared to earlier models based on redundancy reduction, the proposed model conveys more information about the original signal. Interestingly, redundancy reduction can be near-optimal when the number of encoding units is limited, such as in the peripheral retina. We show that there exist multiple, equally-optimal solutions whose receptive field structure and organization vary significantly. Among these, the one which maximizes the spatial locality of the computation, but not the sparsity of either synaptic weights or neural responses, is consistent with known basic properties of retinal receptive fields. The model further predicts that receptive field structure changes less with light adaptation at higher input-to-output cell ratios, such as in the periphery. PMID:25121492
Health benefit modelling and optimization of vehicular pollution control strategies
NASA Astrophysics Data System (ADS)
Sonawane, Nayan V.; Patil, Rashmi S.; Sethi, Virendra
2012-12-01
This study asserts that the evaluation of pollution reduction strategies should be approached on the basis of health benefits. The framework presented could be used for decision making on the basis of cost effectiveness when strategies are applied concurrently. Several vehicular pollution control strategies have been proposed in the literature for effective management of urban air pollution. The effectiveness of these strategies has mostly been studied one at a time, on the basis of change in pollution concentration. The adequacy and practicality of such an approach is studied in the present work, and the respective benefits of these strategies are assessed when they are implemented simultaneously. An integrated model has been developed which can be used as a tool for optimal prioritization of various pollution management strategies. The model estimates health benefits associated with specific control strategies. ISC-AERMOD View has been used to provide the cause-effect relation between control options and change in ambient air quality. BenMAP, developed by the U.S. EPA, has been applied for estimation of the health and economic benefits associated with various management strategies. Valuation of health benefits has been done for the impact indicators of premature mortality, hospital admissions and respiratory syndrome. An optimization model has been developed to maximize overall social benefits by determining optimized percentage implementations for multiple strategies. The model has been applied to the vehicular sector in a suburban region of Mumbai. Several control scenarios have been considered, such as revised emission standards and electric, CNG, LPG and hybrid vehicles. Reductions in concentration and the resultant health benefits for the pollutants CO, NOx and particulate matter are estimated for different control scenarios. Finally, an optimization model has been applied to determine optimized percentage implementation of specific
A model for HIV/AIDS pandemic with optimal control
NASA Astrophysics Data System (ADS)
Sule, Amiru; Abdullah, Farah Aini
2015-05-01
Human immunodeficiency virus and acquired immune deficiency syndrome (HIV/AIDS) is pandemic. It has affected nearly 60 million people since the disease was first detected in 1981. In this paper, a basic deterministic HIV/AIDS model with a mass-action incidence function is developed and its stability analysis is carried out. The disease-free equilibrium of the basic model is found to be locally asymptotically stable whenever the threshold parameter (R0) is less than one, and unstable otherwise. The model is extended by introducing two optimal control strategies, namely CD4 counts and treatment for the infective, using optimal control theory. Numerical simulation is carried out in order to illustrate the analytic results.
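The threshold behavior described above can be sketched with a generic SI-type model with mass-action incidence and hypothetical parameter values (not those of the paper): the infection dies out when R0 < 1 and settles at an endemic level when R0 > 1.

```python
# Hypothetical parameters; forward-Euler integration of a generic SI model.
def simulate(beta, Lambda=1.0, mu=0.1, delta=0.2, S0=9.0, I0=1.0,
             dt=0.01, steps=100000):
    S, I = S0, I0
    for _ in range(steps):
        dS = Lambda - beta * S * I - mu * S          # recruitment, infection, death
        dI = beta * S * I - (mu + delta) * I         # infection, removal
        S += dt * dS
        I += dt * dI
    return S, I

def R0(beta, Lambda=1.0, mu=0.1, delta=0.2):
    # Basic reproduction number at the disease-free equilibrium S* = Lambda/mu.
    return beta * (Lambda / mu) / (mu + delta)

_, I_low = simulate(beta=0.02)    # R0 < 1: infection clears
_, I_high = simulate(beta=0.06)   # R0 = 2: endemic level persists
print(R0(0.02), R0(0.06), I_low, I_high)
```

With beta = 0.06 the trajectory settles at the endemic equilibrium I* = (Lambda - mu(mu+delta)/beta) / (mu + delta), illustrating the stability dichotomy the abstract describes.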
NASA Astrophysics Data System (ADS)
Pham, H. V.; Tsai, F. T. C.
2014-12-01
Groundwater systems are complex and subject to multiple interpretations and conceptualizations due to a lack of sufficient information. As a result, multiple conceptual models are often developed and their mean predictions are preferably used to avoid biased predictions from using a single conceptual model. Yet considering too many conceptual models may lead to high prediction uncertainty and may defeat the purpose of model development. In order to reduce the number of models, an optimal observation network design is proposed based on maximizing the Kullback-Leibler (KL) information to discriminate competing models. The KL discrimination function derived by Box and Hill [1967] for one additional observation datum at a time is expanded to account for multiple independent spatiotemporal observations. The Bayesian model averaging (BMA) method is used to incorporate existing data and quantify future observation uncertainty arising from conceptual and parametric uncertainties in the discrimination function. To consider the future observation uncertainty, the Monte Carlo realizations of BMA predicted future observations are used to calculate the mean and variance of posterior model probabilities of the competing models. The goal of the optimal observation network design is to find the number and location of observation wells and sampling rounds such that the highest posterior model probability of a model is larger than a desired probability criterion (e.g., 95%). The optimal observation network design is applied to a groundwater study in the Baton Rouge area, Louisiana, to collect new groundwater heads from USGS wells. The considered sources of uncertainty that create multiple groundwater models are the geological architecture, the boundary condition, and the fault permeability architecture. All possible design solutions are enumerated using high performance computing systems. Results show that total model variance (the sum of within-model variance and between-model
Linear versus quadratic portfolio optimization model with transaction cost
NASA Astrophysics Data System (ADS)
Razak, Norhidayah Bt Ab; Kamil, Karmila Hanim; Elias, Siti Masitah
2014-06-01
Optimization models have become one of the decision-making tools in investment, and it is always a challenge for investors to select the model that best fulfills their investment goals with respect to risk and return. In this paper we aim to discuss and compare the portfolio allocations and performance generated by quadratic and linear portfolio optimization models, namely the Markowitz and Maximin models respectively. The application of these models has proven significant and popular. However, transaction cost is an important aspect that should be considered in portfolio reallocation, as portfolio return can be significantly reduced when transaction cost is taken into consideration. Therefore, recognizing the importance of transaction costs when calculating portfolio return, we formulate this paper using data from Shariah-compliant securities listed on Bursa Malaysia. It is expected that the results will justify the advantage of one model over the other and shed some light on the quest to find the best decision-making tool in investment for individual investors.
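The contrast between the two model classes, with a proportional transaction cost charged on reallocation, can be sketched on a toy three-asset problem solved by grid search (real formulations would use QP/LP solvers; all data below are hypothetical).

```python
# Toy three-asset example; returns, covariances, scenarios, and the cost
# rate are all hypothetical.
mu = [0.10, 0.07, 0.04]                       # expected returns
cov = [[0.09, 0.01, 0.00],
       [0.01, 0.04, 0.00],
       [0.00, 0.00, 0.01]]                    # return covariance matrix
scenarios = [[0.15, 0.05, 0.04],              # scenario returns for Maximin
             [0.02, 0.09, 0.04],
             [0.10, 0.06, 0.05]]
w_old = [1 / 3, 1 / 3, 1 / 3]                 # current holdings
c = 0.01                                      # proportional transaction cost

def cost(w):
    return c * sum(abs(wi - oi) for wi, oi in zip(w, w_old))

def net_return(w):
    return sum(wi * mi for wi, mi in zip(w, mu)) - cost(w)

def variance(w):
    return sum(w[i] * cov[i][j] * w[j] for i in range(3) for j in range(3))

def worst_case(w):
    # Maximin criterion: worst scenario return, net of transaction cost.
    return min(sum(wi * r for wi, r in zip(w, s)) for s in scenarios) - cost(w)

# Enumerate long-only weights on a coarse grid (stand-in for a QP/LP solver).
grid = [g / 20 for g in range(21)]
feasible = [(a, b, 1 - a - b) for a in grid for b in grid if a + b <= 1]

# Quadratic (Markowitz-style): trade net return against variance.
w_quad = max(feasible, key=lambda w: net_return(w) - 2.0 * variance(w))
# Linear (Maximin): maximize the worst-case net scenario return.
w_lin = max(feasible, key=worst_case)
print(w_quad, w_lin)
```

Deducting the transaction cost from the objective is what ties the reallocation decision to the current holdings, the aspect the abstract emphasizes.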
Parameter optimization in differential geometry based solvation models.
Wang, Bao; Wei, G W
2015-10-01
Differential geometry (DG) based solvation models are a new class of variational implicit solvent approaches that are able to avoid unphysical solvent-solute boundary definitions and associated geometric singularities, and dynamically couple polar and non-polar interactions in a self-consistent framework. Our earlier study indicates that the DG based non-polar solvation model outperforms other methods in non-polar solvation energy predictions. However, the DG based full solvation model has not shown its superiority in solvation analysis, due to its difficulty in parametrization, which must ensure the stability of the solution of strongly coupled nonlinear Laplace-Beltrami and Poisson-Boltzmann equations. In this work, we introduce new parameter learning algorithms based on perturbation and convex optimization theories to stabilize the numerical solution and thus achieve an optimal parametrization of the DG based solvation models. An interesting feature of the present DG based solvation model is that it provides accurate solvation free energy predictions for both polar and non-polar molecules in a unified formulation. Extensive numerical experiments demonstrate that the present DG based solvation model delivers some of the most accurate predictions of the solvation free energies for a large number of molecules. PMID:26450304
The PDB_REDO server for macromolecular structure model optimization
Joosten, Robbie P.; Long, Fei; Murshudov, Garib N.; Perrakis, Anastassis
2014-01-01
The refinement and validation of a crystallographic structure model is the last step before the coordinates and the associated data are submitted to the Protein Data Bank (PDB). The success of the refinement procedure is typically assessed by validating the models against geometrical criteria and the diffraction data, and is an important step in ensuring the quality of the PDB public archive [Read et al. (2011), Structure, 19, 1395–1412]. The PDB_REDO procedure aims for ‘constructive validation’, aspiring to consistent and optimal refinement parameterization and pro-active model rebuilding, not only correcting errors but striving for optimal interpretation of the electron density. A web server for PDB_REDO has been implemented, allowing thorough, consistent and fully automated optimization of the refinement procedure in REFMAC and partial model rebuilding. The goal of the web server is to help practicing crystallographers to improve their model prior to submission to the PDB. For this, additional steps were implemented in the PDB_REDO pipeline, both in the refinement procedure, e.g. testing of resolution limits and k-fold cross-validation for small test sets, and as new validation criteria, e.g. the density-fit metrics implemented in EDSTATS and ligand validation as implemented in YASARA. Innovative ways to present the refinement and validation results to the user are also described, which together with auto-generated Coot scripts can guide users to subsequent model inspection and improvement. It is demonstrated that using the server can lead to substantial improvement of structure models before they are submitted to the PDB. PMID:25075342
Optimal model-free prediction from multivariate time series.
Runge, Jakob; Donner, Reik V; Kurths, Jürgen
2015-05-01
Forecasting a time series from multivariate predictors constitutes a challenging problem, especially using model-free approaches. Most techniques, such as nearest-neighbor prediction, quickly suffer from the curse of dimensionality and overfitting for more than a few predictors, which has limited their application mostly to the univariate case. Therefore, selection strategies are needed that harness the available information as efficiently as possible. Since often the right combination of predictors matters, ideally all subsets of possible predictors should be tested for their predictive power, but the exponentially growing number of combinations makes such an approach computationally prohibitive. Here, a prediction scheme that overcomes this strong limitation is introduced, utilizing a causal preselection step which drastically reduces the number of possible predictors to the most predictive set of causal drivers, making a globally optimal search scheme tractable. The information-theoretic optimality is derived and practical selection criteria are discussed. As demonstrated for multivariate nonlinear stochastic delay processes, the optimal scheme can even be less computationally expensive than commonly used suboptimal schemes like forward selection. The method suggests a general framework to apply the optimal model-free approach to select variables and subsequently fit a model to further improve a prediction or learn statistical dependencies. The performance of this framework is illustrated on a climatological index of the El Niño Southern Oscillation. PMID:26066231
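The two-stage idea, preselect likely drivers and then run a model-free predictor on the reduced set, can be sketched as follows. The sketch substitutes a simple lagged-correlation screen for the paper's causal preselection step and uses plain nearest-neighbor prediction on synthetic data.

```python
import math
import random

random.seed(1)

# Synthetic data: y depends on the lagged driver x1; x2 is a distractor.
n = 500
x1 = [random.gauss(0, 1) for _ in range(n)]
x2 = [random.gauss(0, 1) for _ in range(n)]
y = [0.0] * n
for t in range(1, n):
    y[t] = 0.8 * x1[t - 1] + 0.1 * random.gauss(0, 1)

def abs_corr(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    da = math.sqrt(sum((ai - ma) ** 2 for ai in a))
    db = math.sqrt(sum((bi - mb) ** 2 for bi in b))
    return abs(num / (da * db))

# Preselection: keep predictors whose lagged association with the target
# is strong (a crude stand-in for the causal-driver test of the paper).
candidates = {'x1': x1, 'x2': x2}
scores = {name: abs_corr(series[:-1], y[1:])
          for name, series in candidates.items()}
selected = [name for name, s in scores.items() if s > 0.5]

# Nearest-neighbor prediction using only the selected driver.
driver = candidates[selected[0]]
train = list(range(1, 400))

def predict(t):
    nearest = min(train, key=lambda s: abs(driver[s - 1] - driver[t - 1]))
    return y[nearest]

err = sum((predict(t) - y[t]) ** 2 for t in range(400, n)) / (n - 400)
print(selected, round(err, 4))
```

The screen cuts the predictor set from all lagged variables to the single true driver, after which even a naive nearest-neighbor scheme predicts well.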
Modeling and Optimizing Space Networks for Improved Communication Capacity
NASA Astrophysics Data System (ADS)
Spangelo, Sara C.
There are a growing number of individual and constellation small satellite missions seeking to download large quantities of science, observation, and surveillance data. The existing ground station infrastructure to support these missions constrains the potential data throughput because the stations are low-cost, are not always available because they are independently owned and operated, and their ability to collect data is often inefficient. The constraints of the small satellite form factor (e.g. mass, size, power) coupled with the ground network limitations lead to significant operational and communication scheduling challenges. Faced with these challenges, our goal is to maximize capacity, defined as the amount of data that is successfully downloaded from space to ground communication nodes. In this thesis, we develop models, tools, and optimization algorithms for spacecraft and ground network operations. First, we develop an analytical modeling framework and a high-fidelity simulation environment that capture the interaction of on-board satellite energy and data dynamics, ground stations, and the external space environment. Second, we perform capacity-based assessments to identify excess and deficient resources for comparison to mission-specific requirements. Third, we formulate and solve communication scheduling problems that maximize communication capacity for a satellite downloading to a network of globally and functionally heterogeneous ground stations. Numeric examples demonstrate the applicability of the models and tools to assess and optimize real-world existing and upcoming small satellite mission scenarios that communicate to global ground station networks as well as generic communication scheduling problem instances. We study properties of optimal satellite communication schedules and sensitivity of communication capacity to various deterministic and stochastic satellite vehicle and network parameters. The models, tools, and optimization techniques we
A new adaptive hybrid electromagnetic damper: modelling, optimization, and experiment
NASA Astrophysics Data System (ADS)
Asadi, Ehsan; Ribeiro, Roberto; Behrad Khamesee, Mir; Khajepour, Amir
2015-07-01
This paper presents the development of a new electromagnetic hybrid damper which provides regenerative adaptive damping force for various applications. Recently, the introduction of electromagnetic technologies to damping systems has provided researchers with new opportunities for the realization of adaptive semi-active damping systems with the added benefit of energy recovery. In this research, a hybrid electromagnetic damper is proposed. The hybrid damper is configured to operate with viscous and electromagnetic subsystems. The viscous medium provides a bias and fail-safe damping force while the electromagnetic component adds adaptability and the capacity for regeneration to the hybrid design. The electromagnetic component is modeled and analyzed using analytical (lumped equivalent magnetic circuit) and electromagnetic finite element method (FEM) (COMSOL® software package) approaches. By implementing both modeling approaches, an optimization for the geometric aspects of the electromagnetic subsystem is obtained. Based on the proposed electromagnetic hybrid damping concept and the preliminary optimization solution, a prototype is designed and fabricated. A good agreement is observed between the experimental and FEM results for the magnetic field distribution and electromagnetic damping forces. These results validate the accuracy of the modeling approach and the preliminary optimization solution. An analytical model is also presented for the viscous damping force and is compared with experimental results. The results show that the damper is able to produce damping coefficients of 1300 and 0-238 N s m-1 through the viscous and electromagnetic components, respectively.
Multidisciplinary optimization of aeroservoelastic systems using reduced-size models
NASA Technical Reports Server (NTRS)
Karpel, Mordechay
1992-01-01
Efficient analytical and computational tools for simultaneous optimal design of the structural and control components of aeroservoelastic systems are presented. The optimization objective is to achieve aircraft performance requirements and sufficient flutter and control stability margins with a minimal weight penalty and without violating the design constraints. Analytical sensitivity derivatives facilitate an efficient optimization process which allows a relatively large number of design variables. Standard finite element and unsteady aerodynamic routines are used to construct a modal data base. Minimum State aerodynamic approximations and dynamic residualization methods are used to construct a high accuracy, low order aeroservoelastic model. Sensitivity derivatives of flutter dynamic pressure, control stability margins and control effectiveness with respect to structural and control design variables are presented. The performance requirements are enforced through equality constraints which affect the sensitivity derivatives. A gradient-based optimization algorithm is used to minimize an overall cost function. A realistic numerical example of a composite wing with four controls is used to demonstrate the modeling technique, the optimization process, and their accuracy and efficiency.
Optimal inference with suboptimal models: Addiction and active Bayesian inference
Schwartenbeck, Philipp; FitzGerald, Thomas H.B.; Mathys, Christoph; Dolan, Ray; Wurst, Friedrich; Kronbichler, Martin; Friston, Karl
2015-01-01
When casting behaviour as active (Bayesian) inference, optimal inference is defined with respect to an agent’s beliefs – based on its generative model of the world. This contrasts with normative accounts of choice behaviour, in which optimal actions are considered in relation to the true structure of the environment – as opposed to the agent’s beliefs about worldly states (or the task). This distinction shifts an understanding of suboptimal or pathological behaviour away from aberrant inference as such, to understanding the prior beliefs of a subject that cause them to behave less ‘optimally’ than our prior beliefs suggest they should behave. Put simply, suboptimal or pathological behaviour does not speak against understanding behaviour in terms of (Bayes optimal) inference, but rather calls for a more refined understanding of the subject’s generative model upon which their (optimal) Bayesian inference is based. Here, we discuss this fundamental distinction and its implications for understanding optimality, bounded rationality and pathological (choice) behaviour. We illustrate our argument using addictive choice behaviour in a recently described ‘limited offer’ task. Our simulations of pathological choices and addictive behaviour also generate some clear hypotheses, which we hope to pursue in ongoing empirical work. PMID:25561321
A Formal Approach to Empirical Dynamic Model Optimization and Validation
NASA Technical Reports Server (NTRS)
Crespo, Luis G; Morelli, Eugene A.; Kenny, Sean P.; Giesy, Daniel P.
2014-01-01
A framework was developed for the optimization and validation of empirical dynamic models subject to an arbitrary set of validation criteria. The validation requirements imposed upon the model, which may involve several sets of input-output data and arbitrary specifications in time and frequency domains, are used to determine if model predictions are within admissible error limits. The parameters of the empirical model are estimated by finding the parameter realization for which the smallest of the margins of requirement compliance is as large as possible. The uncertainty in the value of this estimate is characterized by studying the set of model parameters yielding predictions that comply with all the requirements. Strategies are presented for bounding this set, studying its dependence on the admissible prediction error set by the analyst, and evaluating the sensitivity of the model predictions to parameter variations. This information is instrumental in characterizing uncertainty models used for evaluating the dynamic model at operating conditions differing from those used for its identification and validation. A practical example based on the short-period dynamics of the F-16 is used for illustration.
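The max-min estimation idea, choosing the parameter whose smallest requirement-compliance margin is largest and characterizing uncertainty by the set of fully compliant parameters, can be sketched on a toy one-parameter model (all data, limits, and the model form below are hypothetical):

```python
# Requirements: on each dataset, the worst prediction error of the linear
# model y = a * x must stay below that dataset's error limit.
datasets = [
    ([1.0, 2.0, 3.0], [2.1, 3.9, 6.2], 0.5),   # (inputs, outputs, error limit)
    ([0.5, 1.5],      [1.1, 2.9],      0.4),
]

def margin(a, xs, ys, limit):
    # Positive margin: the requirement is met with room to spare.
    worst = max(abs(a * x - y) for x, y in zip(xs, ys))
    return limit - worst

def smallest_margin(a):
    return min(margin(a, *d) for d in datasets)

# Estimate: the parameter whose smallest compliance margin is largest.
grid = [1.5 + i * 0.001 for i in range(1001)]
a_hat = max(grid, key=smallest_margin)

# Uncertainty: the set of parameters complying with all requirements.
feasible = [a for a in grid if smallest_margin(a) >= 0]
print(round(a_hat, 3), round(min(feasible), 3), round(max(feasible), 3))
```

The interval spanned by `feasible` plays the role of the compliant parameter set whose bounds the framework studies; here grid search stands in for a proper optimizer.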
Multiobjective sensitivity analysis and optimization of distributed hydrologic model MOBIDIC
NASA Astrophysics Data System (ADS)
Yang, J.; Castelli, F.; Chen, Y.
2014-10-01
Calibration of distributed hydrologic models usually involves dealing with a large number of distributed parameters and optimization problems with multiple, often conflicting objectives that arise in a natural fashion. This study presents a multiobjective sensitivity and optimization approach to handle these problems for the MOBIDIC (MOdello di Bilancio Idrologico DIstribuito e Continuo) distributed hydrologic model, which combines two sensitivity analysis techniques (the Morris method and the state-dependent parameter (SDP) method) with the multiobjective optimization (MOO) approach ɛ-NSGAII (Non-dominated Sorting Genetic Algorithm-II). This approach was implemented to calibrate MOBIDIC in an application to the Davidson watershed, North Carolina, with three objective functions: the standardized root mean square error (SRMSE) of the logarithmic transformed discharge, the water balance index, and the mean absolute error of the logarithmic transformed flow duration curve. Its results were compared with those of a single-objective optimization (SOO) with the traditional Nelder-Mead simplex algorithm used in MOBIDIC, taking the objective function as the Euclidean norm of these three objectives. Results show that (1) the two sensitivity analysis techniques are effective and efficient for determining the sensitive processes and insensitive parameters: surface runoff and evaporation are very sensitive processes for all three objective functions, while groundwater recession and soil hydraulic conductivity are not sensitive and were excluded from the optimization. (2) Both MOO and SOO lead to acceptable simulations; e.g., for MOO, the average Nash-Sutcliffe value is 0.75 in the calibration period and 0.70 in the validation period. (3) Evaporation and surface runoff show similar importance for the watershed water balance, while the contribution of baseflow can be ignored. (4) Compared to SOO, which was dependent on the initial starting location, MOO provides more
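The Morris screening step can be sketched as follows: mean absolute elementary effects separate an influential parameter from a near-inactive one, which is how insensitive parameters get excluded before optimization. The toy model and parameter roles below are hypothetical, not MOBIDIC's.

```python
import random

random.seed(2)

# Toy model with one influential and one near-inactive parameter
# (standing in for sensitive vs. insensitive hydrologic parameters).
def model(p):
    return 3.0 * p[0] ** 2 + 0.01 * p[1]

def morris_mu_star(model, dim, trajectories=50, delta=0.1):
    """Mean absolute elementary effect per parameter (Morris screening)."""
    effects = [[] for _ in range(dim)]
    for _ in range(trajectories):
        base = [random.random() for _ in range(dim)]
        for i in range(dim):
            stepped = list(base)
            stepped[i] = base[i] + delta
            ee = (model(stepped) - model(base)) / delta
            effects[i].append(abs(ee))
    return [sum(e) / len(e) for e in effects]

mu_star = morris_mu_star(model, dim=2)
print([round(m, 3) for m in mu_star])  # the first parameter dominates
```

Parameters with small mu* would be fixed at nominal values, shrinking the search space handed to the multiobjective optimizer.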
Advanced Nuclear Fuel Cycle Transitions: Optimization, Modeling Choices, and Disruptions
NASA Astrophysics Data System (ADS)
Carlsen, Robert W.
Many nuclear fuel cycle simulators have evolved over time to help understand the nuclear industry/ecosystem at a macroscopic level. Cyclus is one of the first fuel cycle simulators to accommodate larger-scale analysis with its liberal open-source licensing and first-class Linux support. Cyclus also has features that uniquely enable investigating the effects of modeling choices on fuel cycle simulators and scenarios. This work is divided into three experiments focusing on optimization, effects of modeling choices, and fuel cycle uncertainty. Effective optimization techniques are developed for automatically determining desirable facility deployment schedules with Cyclus. A novel method for mapping optimization variables to deployment schedules is developed. This allows relationships between reactor types and scenario constraints to be represented implicitly in the variable definitions, enabling the use of optimizers lacking constraint support. It also prevents wasting computational resources evaluating infeasible deployment schedules. Deployed power capacity over time and deployment of non-reactor facilities are also included as optimization variables. There are many fuel cycle simulators built with different combinations of modeling choices, and comparing results between them is often difficult. Cyclus' flexibility allows comparing the effects of many such modeling choices. Reactor refueling cycle synchronization and inter-facility competition, among other effects, are compared in four cases, each using combinations of fleet-based or individually modeled reactors with 1-month or 3-month time steps. There are noticeable differences in results for the different cases. The largest differences occur during periods of constrained reactor fuel availability. This and similar work can help improve the quality of fuel cycle analysis generally. There is significant uncertainty associated with deploying new nuclear technologies, such as time-frames for technology availability and the cost of building advanced reactors
An optimization model for long-range transmission expansion planning
Santos, A. Jr.; Franca, P.M.; Said, A.
1989-02-01
This paper presents a static network synthesis method applied to transmission expansion planning. The static synthesis problem is formulated as a mixed-integer network flow model that is solved by an implicit enumeration algorithm. The objective function captures the most productive trade-off, resulting in low investment costs and good electrical performance. The load and generation nodal equations are included in the constraints of the model, and the power transmission law of the DC load flow is implicit in the optimization model. Computational tests show the advantage of this method compared with a heuristic procedure. The case studies compare the computational times and the costs of the solutions obtained for the Brazilian North-Northeast transmission system.
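Enumeration for network synthesis can be sketched on a deliberately tiny instance: enumerate subsets of candidate lines and keep the cheapest one whose capacity covers each load. The real formulation additionally carries DC load flow constraints and prunes the enumeration implicitly; all data here are hypothetical.

```python
import itertools

# Tiny illustration: one generation bus serving two load buses; choose the
# cheapest subset of candidate lines whose capacity covers each load.
loads = {'B1': 60, 'B2': 40}                     # MW
candidates = [                                    # (to_bus, capacity MW, cost)
    ('B1', 40, 10), ('B1', 40, 10), ('B1', 80, 25),
    ('B2', 40, 12), ('B2', 50, 14),
]

best_cost, best_plan = None, None
for mask in itertools.product([0, 1], repeat=len(candidates)):
    built = [c for c, m in zip(candidates, mask) if m]
    cap = {b: sum(c for to, c, _ in built if to == b) for b in loads}
    if all(cap[b] >= loads[b] for b in loads):   # capacity meets demand
        cost = sum(k for _, _, k in built)
        if best_cost is None or cost < best_cost:
            best_cost, best_plan = cost, built
print(best_cost, best_plan)
```

An implicit enumeration scheme would discard whole branches of this subset tree (e.g. any superset of an already-too-expensive plan) rather than visiting all 2^n masks.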
A mathematical model on the optimal timing of offspring desertion.
Seno, Hiromi; Endo, Hiromi
2007-06-01
We consider offspring desertion as an optimal strategy for the deserting parent, analyzing a mathematical model of its expected reproductive success. It is shown that the optimality of offspring desertion depends significantly on the offspring's birth timing within the mating season and on the other ecological parameters characterizing the innate nature of the animals considered. In particular, desertion is less likely to occur for offspring born in the later period of the mating season. It is also implied that offspring desertion after a period of partially biparental care would be observable only under specific conditions. PMID:17328918
CPOPT : optimization for fitting CANDECOMP/PARAFAC models.
Dunlavy, Daniel M.; Kolda, Tamara Gibson; Acar, Evrim
2008-10-01
Tensor decompositions (e.g., higher-order analogues of matrix decompositions) are powerful tools for data analysis. In particular, the CANDECOMP/PARAFAC (CP) model has proved useful in many applications such as chemometrics, signal processing, and web analysis. The problem of computing the CP decomposition is typically solved using an alternating least squares (ALS) approach. We discuss the use of optimization-based algorithms for CP, including how to efficiently compute the derivatives necessary for the optimization methods. Numerical studies highlight the positive features of our CPOPT algorithms, as compared with ALS and Gauss-Newton approaches.
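A minimal working example of the ALS baseline that CPOPT is compared against, fitting a rank-1 CP model to a noiseless third-order tensor (dimensions and data are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

# Build an exact rank-1 tensor from known factors, then recover a rank-1
# CP model by alternating least squares from a random start.
I, J, K, R = 6, 5, 4, 1
A_true = rng.standard_normal((I, R))
B_true = rng.standard_normal((J, R))
C_true = rng.standard_normal((K, R))
X = np.einsum('ir,jr,kr->ijk', A_true, B_true, C_true)

A = rng.standard_normal((I, R))
B = rng.standard_normal((J, R))
C = rng.standard_normal((K, R))

for _ in range(25):
    # Each update solves a linear least-squares problem for one factor,
    # holding the other two fixed (the normal equations of CP-ALS).
    A = np.einsum('ijk,jr,kr->ir', X, B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
    B = np.einsum('ijk,ir,kr->jr', X, A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
    C = np.einsum('ijk,ir,jr->kr', X, A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))

X_hat = np.einsum('ir,jr,kr->ijk', A, B, C)
rel_err = np.linalg.norm(X - X_hat) / np.linalg.norm(X)
print(rel_err)
```

An all-at-once optimization approach in the spirit of CPOPT would instead treat all factor entries as one variable vector and supply the gradient of the least-squares objective to a generic optimizer.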
Dynamic stochastic optimization models for air traffic flow management
NASA Astrophysics Data System (ADS)
Mukherjee, Avijit
This dissertation presents dynamic stochastic optimization models for Air Traffic Flow Management (ATFM) that enable decisions to adapt to new information on evolving capacities of National Airspace System (NAS) resources. Uncertainty is represented by a set of capacity scenarios, each depicting a particular time-varying capacity profile of NAS resources. We use the concept of a scenario tree in which multiple scenarios are possible initially. Scenarios are eliminated as possibilities in a succession of branching points, until the specific scenario that will be realized on a particular day is known. Thus the scenario tree branching provides updated information on evolving scenarios, and allows ATFM decisions to be re-addressed and revised. First, we propose a dynamic stochastic model for the single airport ground holding problem (SAGHP) that can be used for planning Ground Delay Programs (GDPs) when there is uncertainty about future airport arrival capacities. Ground delays of non-departed flights can be revised based on updated information from scenario tree branching. The problem is formulated so that a wide range of objective functions, including non-linear delay cost functions and functions that reflect equity concerns, can be optimized. Furthermore, the model improves on existing practice by ensuring efficient use of available capacity without necessarily exempting long-haul flights. Following this, we present a methodology and optimization models that can be used for decentralized decision making by individual airlines in the GDP planning process, using the solutions from the stochastic dynamic SAGHP. Airlines are allowed to perform cancellations, and re-allocate slots to remaining flights by substitutions. We also present an optimization model that can be used by the FAA, after the airlines perform cancellations and substitutions, to re-utilize vacant arrival slots that are created due to cancellations. Finally, we present three stochastic integer programming
Fabrication, modeling and optimization of an ionic polymer gel actuator
NASA Astrophysics Data System (ADS)
Jo, Choonghee; Naguib, Hani E.; Kwon, Roy H.
2011-04-01
The modeling of the electro-active behavior of ionic polymer gel is studied and the optimum conditions that maximize the deflection of the gel are investigated. The bending deformation of polymer gel under an electric field is formulated by using chemo-electro-mechanical parameters. In the modeling, swelling and shrinking phenomena due to the differences in ion concentration at the boundary between the gel and solution are considered prior to the application of an electric field, and then bending actuation is applied. As the driving force of swelling, shrinking and bending deformation, differential osmotic pressure at the boundary of the gel and solution is considered. From this behavior, the strain or deflection of the gel is calculated. To find the optimum design parameter settings (electric voltage, thickness of gel, concentration of polyion in the gel, ion concentration in the solution, and degree of cross-linking in the gel) for bending deformation, a nonlinear constrained optimization model is formulated. In the optimization model, a bending deflection equation of the gel is used as an objective function, and a range of decision variables and their relationships are used as constraint equations. Also, actuation experiments are conducted using poly(2-acrylamido-2-methylpropane sulfonic acid) (PAMPS) gel and the optimum conditions predicted by the proposed model have been verified by the experiments.
Optimization of wind farm performance using low-order models
NASA Astrophysics Data System (ADS)
Dabiri, John; Brownstein, Ian
2015-11-01
A low-order model that captures the dominant flow behaviors in a vertical-axis wind turbine (VAWT) array is used to maximize the power output of wind farms utilizing VAWTs. The leaky Rankine body (LRB) model was shown by Araya et al. (JRSE 2014) to predict the ranking of individual turbine performances in an array to within measurement uncertainty as compared to field data collected from full-scale VAWTs. Further, this model is able to predict array performance with significantly less computational expense than higher-fidelity numerical simulations of the flow, making it ideal for use in optimization of wind farm performance. This presentation will explore the ability of the LRB model to rank the relative power output of different wind turbine array configurations as well as the ranking of individual array performance over a variety of wind directions, using various complex configurations tested in the field and simpler configurations tested in a wind tunnel. Results will be presented in which the model is used to determine array fitness in an evolutionary algorithm seeking optimal array configurations given a number of turbines, an area of available land, and a site wind direction profile. Comparison with field measurements will be presented.
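As a rough illustration of the evolutionary-search idea, the sketch below evolves turbine positions against a toy wake-interaction fitness function. The fitness model is an invented stand-in, not the LRB model, and all dimensions are hypothetical:

```python
import math
import random

random.seed(1)
N_TURBINES, SIDE = 4, 100.0   # turbines in a hypothetical 100 m x 100 m plot

def fitness(layout, wind=(1.0, 0.0)):
    """Toy stand-in for the LRB ranking: a turbine loses output when it sits
    in the along-wind shadow of an upwind neighbor."""
    total = 0.0
    for i, (xi, yi) in enumerate(layout):
        deficit = 0.0
        for j, (xj, yj) in enumerate(layout):
            if i == j:
                continue
            dx, dy = xi - xj, yi - yj
            down = dx * wind[0] + dy * wind[1]          # distance downwind of j
            cross = abs(dy * wind[0] - dx * wind[1])    # lateral offset
            if down > 0:
                deficit += math.exp(-down / 30.0) * math.exp(-cross / 10.0)
        total += max(0.0, 1.0 - deficit)
    return total

def evolve(pop_size=20, gens=50):
    """Elitist evolutionary loop: keep the best half, mutate it to refill."""
    pop = [[(random.uniform(0, SIDE), random.uniform(0, SIDE))
            for _ in range(N_TURBINES)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]
        children = [[(min(SIDE, max(0.0, x + random.gauss(0, 5))),
                      min(SIDE, max(0.0, y + random.gauss(0, 5))))
                     for x, y in p] for p in parents]
        pop = parents + children
    return max(pop, key=fitness)
```

In the real workflow, `fitness` would be replaced by a call to the LRB flow model, which is cheap enough to evaluate thousands of candidate layouts.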
Discrete-Time ARMAv Model-Based Optimal Sensor Placement
Song Wei; Dyke, Shirley J.
2008-07-08
This paper concentrates on the optimal sensor placement problem in ambient-vibration-based structural health monitoring. More specifically, the paper examines the covariance of estimated parameters during system identification using an auto-regressive moving average vector (ARMAv) model. By utilizing the discrete-time steady-state Kalman filter, this paper realizes the structure's finite element (FE) model under broad-band white noise excitations using an ARMAv model. Based on the asymptotic distribution of the parameter estimates of the ARMAv model, both a theoretical closed form and a numerical estimate of the covariance of the estimates are obtained. Introducing the information entropy (differential entropy) measure, as well as various matrix norms, this paper attempts to find a reasonable measure of the uncertainties embedded in the ARMAv model estimates. Thus, it is possible to select the optimal sensor placement that leads to the smallest uncertainties during the ARMAv identification process. Two numerical examples are provided to demonstrate the methodology and compare the sensor placement results under various measures.
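The entropy-based selection step can be illustrated directly: for a Gaussian parameter-estimate distribution the differential entropy is 0.5·ln((2πe)^n·det Σ), so the placement with the smallest covariance determinant wins. The candidate covariance matrices and placement names below are invented for illustration:

```python
import math

def gaussian_entropy(cov):
    """Differential entropy of an n-D Gaussian: 0.5*ln((2*pi*e)^n * det(cov))."""
    n = len(cov)
    det = cov[0][0] * cov[1][1] - cov[0][1] * cov[1][0]   # 2x2 determinant
    return 0.5 * math.log((2 * math.pi * math.e) ** n * det)

# invented parameter-estimate covariances for three candidate sensor layouts
CANDIDATES = {
    "roof+midspan": [[0.04, 0.01], [0.01, 0.02]],
    "roof+base":    [[0.09, 0.02], [0.02, 0.05]],
    "midspan+base": [[0.03, 0.00], [0.00, 0.06]],
}

def best_placement():
    """Smallest entropy = least uncertainty in the identified parameters."""
    return min(CANDIDATES, key=lambda k: gaussian_entropy(CANDIDATES[k]))
```

In the paper's setting, each candidate covariance would come from the asymptotic distribution of the ARMAv parameter estimates for that sensor layout.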
Modeling and optimization of defense high level waste removal sequencing
NASA Astrophysics Data System (ADS)
Paul, Pran Krishna
A novel methodology has been developed which makes possible a very fast running computational tool, capable of performing 30 to 50 years of simulation of the entire Savannah River Site (SRS) high level waste complex in less than 2 minutes on a workstation. The methodology has been implemented in the Production Planning Model (ProdMod) simulation code, which uses Aspen Technology's dynamic simulation software development package SPEEDUP. ProdMod is a pseudo-dynamic simulation code based solely on algebraic equations, using no differential equations. The dynamic nature of the plant process is captured using linear constructs in which the time dependence is implicit. Another innovative approach implemented in ProdMod development is the mapping of event-space onto time-space and vice versa, which accelerates the computation without sacrificing the necessary details in the event-space. ProdMod uses this approach in coupling the time-space continuous simulation with the event-space batch simulation, avoiding the discontinuities inherent in dynamic simulation of batch processing. In addition, a general purpose optimization scheme has been devised based on the pseudo-dynamic constructs and the event- and time-space algorithms of ProdMod. The optimization scheme couples a FORTRAN-based stand-alone optimization driver with the SPEEDUP-based ProdMod simulator to perform dynamic optimization. The scheme is capable of generating single or multiple optimal input conditions for different types of objective functions over single or multiple years of operations, depending on the nature of the objective function and operating constraints. The resultant optimal inputs are then interfaced with ProdMod to simulate the dynamic behavior of the waste processing operations. At the conclusion of an optimized advancement step, the simulation parameters are passed to the optimization driver to generate the next set of optimized parameters. An optimization algorithm using linear programming
Rapid Modeling, Assembly and Simulation in Design Optimization
NASA Technical Reports Server (NTRS)
Housner, Jerry
1997-01-01
A new capability for design is reviewed. This capability provides for rapid assembly of detailed finite element models early in the design process, where costs are most effectively impacted. This creates an engineering environment which enables comprehensive analysis and design optimization early in the design process. Graphical interactive computing makes it possible for the engineer to interact with the design while performing comprehensive design studies. This rapid assembly capability is enabled by the use of Interface Technology to couple independently created models, which can be archived and made accessible to the designer. Results are presented to demonstrate the capability.
Ozonation optimization and modeling for treating diesel-contaminated water.
Ziabari, Seyedeh-Somayeh Haghighat; Khezri, Seyed-Mostafa; Kalantary, Roshanak Rezaei
2016-03-15
The effect of ozonation on treatment of diesel-contaminated water was investigated on a laboratory scale. Factorial design and response surface methodology (RSM) were used to evaluate and optimize the effects of pH, ozone flow rate, and contact time on the treatment process. A Box-Behnken design was successfully applied for modeling and optimizing the removal of total petroleum hydrocarbons (TPHs). The results showed that ozonation is an efficient technique for removing diesel from aqueous solution. The determination coefficient (R²) was found to be 0.9437, indicating that the proposed model was capable of predicting the removal of TPHs by ozonation. The optimum values of initial pH, ozone flow rate, and reaction time were 7.0, 1.5, and 35 min, respectively, which yielded approximately 60% TPH removal. This result is in good agreement with the predicted value of 57.28%. PMID:26846995
Robust model predictive control for optimal continuous drug administration.
Sopasakis, Pantelis; Patrinos, Panagiotis; Sarimveis, Haralambos
2014-10-01
In this paper, model predictive control (MPC) technology is used to tackle the optimal drug administration problem. An important advantage of MPC compared to other control technologies is that it explicitly takes into account the constraints of the system. In particular, for drug treatments of living organisms, MPC can guarantee satisfaction of minimum toxic concentration (MTC) constraints. A whole-body physiologically-based pharmacokinetic (PBPK) model serves as the dynamic prediction model of the system after it is formulated as a discrete-time state-space model. Only plasma concentrations are assumed to be measured on-line. The rest of the states (drug concentrations in other organs and tissues) are estimated in real time by designing an artificial observer. The complete system (observer and MPC controller) is able to drive the drug concentration to the desired levels at the organs of interest, while satisfying the imposed constraints, even in the presence of modelling errors, disturbances and noise. A case study on a PBPK model with 7 compartments, constraints on 5 tissues and a variable drug concentration set-point illustrates the efficiency of the methodology in drug dosing control applications. The proposed methodology is also tested in an uncertain setting and proves successful in the presence of modelling errors and inaccurate measurements. PMID:24986530
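A heavily simplified sketch of constrained MPC for dosing follows, assuming a hypothetical one-compartment discrete-time model in place of the paper's whole-body PBPK model, and exhaustive search over a small dose set in place of a proper QP solver. All constants are invented:

```python
import itertools

A, B = 0.85, 0.4          # one-compartment decay and dose gain (illustrative)
SETPOINT, MTC = 4.0, 6.0  # target concentration and minimum toxic concentration
DOSES = [0.0, 2.0, 4.0]   # admissible dose levels
HORIZON = 3

def mpc_step(x):
    """Apply the first dose of the admissible sequence that minimizes tracking
    error over the horizon, rejecting sequences that violate the MTC bound."""
    best = None
    for seq in itertools.product(DOSES, repeat=HORIZON):
        xi, cost, ok = x, 0.0, True
        for u in seq:
            xi = A * xi + B * u           # predicted concentration
            if xi > MTC:                  # hard constraint: never exceed MTC
                ok = False
                break
            cost += (xi - SETPOINT) ** 2
        if ok and (best is None or cost < best[0]):
            best = (cost, seq[0])
    return best[1]

def simulate(steps=20):
    x, traj = 0.0, []
    for _ in range(steps):
        u = mpc_step(x)       # receding horizon: re-optimize at every step
        x = A * x + B * u
        traj.append(x)
    return traj
```

Because the all-zero dose sequence is always feasible (the concentration only decays), the controller never gets stuck, and with a perfect model the applied trajectory inherits the MTC guarantee checked during prediction.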
An automatic and effective parameter optimization method for model tuning
NASA Astrophysics Data System (ADS)
Zhang, T.; Li, L.; Lin, Y.; Xue, W.; Xie, F.; Xu, H.; Huang, X.
2015-11-01
Physical parameterizations in general circulation models (GCMs), having various uncertain parameters, greatly impact model performance and model climate sensitivity. Traditional manual and empirical tuning of these parameters is time-consuming and ineffective. In this study, a "three-step" methodology is proposed to automatically and effectively obtain the optimum combination of some key parameters in cloud and convective parameterizations according to a comprehensive objective evaluation metric. Different from traditional optimization methods, two extra steps, one determining the model's sensitivity to the parameters and the other choosing the optimum initial values for those sensitive parameters, are introduced before the downhill simplex method. This new method reduces the number of parameters to be tuned and accelerates the convergence of the downhill simplex method. Atmospheric GCM simulation results show that the optimum combination of these parameters determined using this method is able to improve the model's overall performance by 9%. The proposed methodology and software framework can be easily applied to other GCMs to speed up the model development process, especially regarding the unavoidable comprehensive parameter tuning during the model development stage.
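The first two steps of the "three-step" methodology — screening out insensitive parameters, then choosing a good starting point for the simplex search — can be sketched as follows. The skill function, parameter names, and ranges are invented stand-ins for a real GCM evaluation metric:

```python
import itertools

def skill(params):
    """Toy stand-in for a model-evaluation metric (lower is better)."""
    a, b, c = params["a"], params["b"], params["c"]
    return (a - 0.3) ** 2 + 2.0 * (b - 1.5) ** 2 + 1e-4 * c  # 'c' is nearly inert

DEFAULTS = {"a": 1.0, "b": 1.0, "c": 1.0}
RANGES = {"a": (0.0, 2.0), "b": (0.5, 2.5), "c": (0.0, 5.0)}

def step1_sensitive(threshold=0.05):
    """Step 1: one-at-a-time perturbation to find the sensitive parameters."""
    base = skill(DEFAULTS)
    keep = []
    for name, (lo, hi) in RANGES.items():
        effect = max(abs(skill({**DEFAULTS, name: v}) - base) for v in (lo, hi))
        if effect > threshold:
            keep.append(name)
    return keep

def step2_initial(names, points=5):
    """Step 2: coarse grid search over the sensitive parameters only."""
    grids = [[RANGES[n][0] + i * (RANGES[n][1] - RANGES[n][0]) / (points - 1)
              for i in range(points)] for n in names]
    best = None
    for combo in itertools.product(*grids):
        p = {**DEFAULTS, **dict(zip(names, combo))}
        s = skill(p)
        if best is None or s < best[0]:
            best = (s, p)
    return best[1]
```

Step 3 would hand `step2_initial(...)` to a downhill simplex (Nelder-Mead) routine over the surviving parameters; pruning the inert parameter first is what keeps that final search cheap.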
Analysis of Sting Balance Calibration Data Using Optimized Regression Models
NASA Technical Reports Server (NTRS)
Ulbrich, Norbert; Bader, Jon B.
2009-01-01
Calibration data of a wind tunnel sting balance were processed using a search algorithm that identifies an optimized regression model for the data analysis. The selected sting balance had two moment gages that were mounted forward and aft of the balance moment center. The difference and the sum of the two gage outputs were fitted in the least squares sense using the normal force and the pitching moment at the balance moment center as independent variables. The regression model search algorithm predicted that the difference of the gage outputs should be modeled using the intercept and the normal force. The sum of the two gage outputs, on the other hand, should be modeled using the intercept, the pitching moment, and the square of the pitching moment. Equations of the deflection of a cantilever beam are used to show that the search algorithm's two recommended math models can also be obtained after performing a rigorous theoretical analysis of the deflection of the sting balance under load. The analysis of the sting balance calibration data set is a rare example of a situation in which regression models of balance calibration data can be derived directly from first principles of physics and engineering. In addition, it is interesting to see that the search algorithm recommended the same regression models for the data analysis using only a set of statistical quality metrics.
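The two recommended regression models can be fitted by ordinary least squares. The sketch below recovers known coefficients from synthetic gage data — the loads and coefficients are invented, not the NASA calibration data:

```python
def lstsq(X, y):
    """Solve the normal equations (X^T X) c = X^T y by Gauss-Jordan elimination."""
    n = len(X[0])
    A = [[sum(X[k][i] * X[k][j] for k in range(len(X))) for j in range(n)] +
         [sum(X[k][i] * y[k] for k in range(len(X)))] for i in range(n)]
    for i in range(n):
        piv = A[i][i]
        A[i] = [v / piv for v in A[i]]
        for r in range(n):
            if r != i:
                f = A[r][i]
                A[r] = [a - f * b for a, b in zip(A[r], A[i])]
    return [row[n] for row in A]

# synthetic calibration points: normal force N and pitching moment M
loads = [(-2.0, -1.0), (-1.0, 0.5), (0.0, 1.0), (1.0, -0.5), (2.0, 1.5), (1.5, 2.0)]
diff = [0.2 + 1.3 * N for N, M in loads]               # gage difference: intercept + N
summ = [0.1 + 0.8 * M + 0.05 * M * M for N, M in loads]  # gage sum: intercept + M + M^2

c = lstsq([[1.0, N] for N, M in loads], diff)          # recovers [0.2, 1.3]
d = lstsq([[1.0, M, M * M] for N, M in loads], summ)   # recovers [0.1, 0.8, 0.05]
```

Since the synthetic responses are generated exactly from the assumed model forms, the fit reproduces the coefficients to machine precision, mirroring how the search algorithm's models match the beam-theory derivation.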
[Optimized models of logging-tending system in cutting areas].
Guo, J; Jing, Y; Zhang, R; Xiong, W; Su, J
2000-12-01
The comprehensive advantages of different logging-tending systems in Pinus massoniana forest cutting area were evaluated by set-pair analysis, based on the comparison of their economic and ecological benefits. The results showed that the optimized model for P. massoniana forests in Northern Fujian comprised 40% selective cutting, manual skidding, clear-cutting in ribbon, and natural regeneration with artificial aids, which could also be used in the nearby forests with conditions similar to the experimental area. PMID:11767550
Modeling Microinverters and DC Power Optimizers in PVWatts
MacAlpine, S.; Deline, C.
2015-02-01
Module-level distributed power electronics including microinverters and DC power optimizers are increasingly popular in residential and commercial PV systems. Consumers are realizing their potential to increase design flexibility, monitor system performance, and improve energy capture. It is becoming increasingly important to accurately model PV systems employing these devices. This document summarizes existing published documents to provide uniform, impartial recommendations for how the performance of distributed power electronics can be reflected in NREL's PVWatts calculator (http://pvwatts.nrel.gov/).
Mathematical model of the metal mould surface temperature optimization
Mlynek, Jaroslav; Knobloch, Roman; Srb, Radek
2015-11-30
The article is focused on the problem of generating a uniform temperature field on the inner surface of shell metal moulds. Such moulds are used e.g. in the automotive industry for artificial leather production. To produce artificial leather with uniform surface structure and colour shade, the temperature on the inner surface of the mould has to be as homogeneous as possible. The heating of the mould is realized by infrared heaters located above the outer mould surface. The conceived mathematical model allows us to optimize the locations of infrared heaters over the mould, so that approximately uniform heat radiation intensity is generated. A version of the differential evolution algorithm, programmed in the Matlab development environment, was created by the authors for the optimization process. For temperature calculations, the ANSYS software system was used. A practical example of optimization of heater locations and calculation of the temperature of the mould is included at the end of the article.
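A compact differential evolution loop in the spirit of the authors' Matlab implementation might look like the following, here optimizing heater positions along a 1-D mould surface against an invented Gaussian radiation model (the real problem is 2-D and uses ANSYS-computed temperatures):

```python
import math
import random

random.seed(0)
SURF = [i / 20.0 for i in range(21)]  # sample points on a unit-length mould surface
N_HEATERS, W = 3, 0.05                # heater count and radiation spread (illustrative)

def intensity(x, heaters):
    """Invented radiation model: each heater contributes a Gaussian bump."""
    return sum(math.exp(-((x - h) ** 2) / W) for h in heaters)

def nonuniformity(heaters):
    """Variance of the surface intensity; zero would mean perfectly uniform heating."""
    vals = [intensity(x, heaters) for x in SURF]
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

def diff_evolution(np_=15, gens=60, F=0.6, CR=0.9):
    """DE/rand/1/bin-style loop with greedy selection."""
    pop = [[random.random() for _ in range(N_HEATERS)] for _ in range(np_)]
    for _ in range(gens):
        for i in range(np_):
            a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
            trial = [min(1.0, max(0.0, a[k] + F * (b[k] - c[k])))
                     if random.random() < CR else pop[i][k]
                     for k in range(N_HEATERS)]
            if nonuniformity(trial) < nonuniformity(pop[i]):
                pop[i] = trial
    return min(pop, key=nonuniformity)
```

The evolved heater positions spread out along the surface, flattening the intensity profile relative to any clustered arrangement.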
Optimization in generalized linear models: A case study
NASA Astrophysics Data System (ADS)
Silva, Eliana Costa e.; Correia, Aldina; Lopes, Isabel Cristina
2016-06-01
The maximum likelihood method is usually chosen to estimate the regression parameters of Generalized Linear Models (GLM) and also for hypothesis testing and goodness of fit tests. The classical method for estimating GLM parameters is Fisher scoring. In this work we propose to compute the estimates of the parameters with two alternative methods: a derivative-based optimization method, namely the BFGS method, which is one of the most popular quasi-Newton algorithms, and the PSwarm derivative-free optimization method, which combines features of a pattern search optimization method with a global Particle Swarm scheme. As a case study we use a dataset of biological parameters (phytoplankton) and chemical and environmental parameters of the water column of a Portuguese reservoir. The results show that, for this dataset, the BFGS and PSwarm methods provided a better fit than the Fisher scoring method and can be good alternatives for finding the estimates of the parameters of a GLM.
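Fisher scoring can be shown compactly for a Poisson regression with the canonical log link, where the expected information equals the observed information. The two-parameter example below uses invented count data:

```python
import math

def fisher_scoring(x, y, iters=25):
    """Poisson GLM with log link, mu_i = exp(b0 + b1*x_i): Newton steps using
    the expected (Fisher) information, which here is X^T diag(mu) X."""
    b0, b1 = 0.0, 0.0
    for _ in range(iters):
        mu = [math.exp(b0 + b1 * xi) for xi in x]
        u0 = sum(yi - mi for yi, mi in zip(y, mu))                   # score vector
        u1 = sum(xi * (yi - mi) for xi, yi, mi in zip(x, y, mu))
        i00 = sum(mu)                                                # information matrix
        i01 = sum(xi * mi for xi, mi in zip(x, mu))
        i11 = sum(xi * xi * mi for xi, mi in zip(x, mu))
        det = i00 * i11 - i01 * i01
        b0 += (i11 * u0 - i01 * u1) / det                            # beta += I^{-1} U
        b1 += (i00 * u1 - i01 * u0) / det
    return b0, b1

# invented count data, roughly following mu = exp(0.5 + 0.3x)
X_DATA = [0.0, 1.0, 2.0, 3.0, 4.0]
Y_DATA = [2, 2, 3, 5, 6]
```

At convergence the score vanishes, so the fitted means reproduce the observed totals — the property BFGS and PSwarm must match by maximizing the same log-likelihood.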
Optimization model of vaccination strategy for dengue transmission
NASA Astrophysics Data System (ADS)
Widayani, H.; Kallista, M.; Nuraini, N.; Sari, M. Y.
2014-02-01
Dengue fever is an emerging tropical and subtropical disease caused by dengue virus infection. Vaccination should be undertaken to prevent epidemics in a population. The host-vector model is modified to include a vaccination factor aimed at preventing dengue epidemics in a population. An optimal vaccination strategy using a non-linear objective function is proposed. Genetic algorithm programming techniques are combined with the fourth-order Runge-Kutta method to construct the optimal vaccination. In this paper, the appropriate vaccination strategy, obtained by minimizing the cost function so as to reduce the scale of the epidemic, is analyzed. Numerical simulations for some specific vaccination strategies are shown.
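As a much-simplified stand-in for the host-vector model, the sketch below integrates an SIR model with a constant vaccination rate v by classical fourth-order Runge-Kutta, and searches a small grid of rates against an invented cost that weighs infection burden against vaccination effort (a genetic algorithm would replace the grid search in the paper's setup):

```python
def deriv(state, v, beta=0.4, gamma=0.1):
    """SIR with vaccination: susceptibles move straight to removed at rate v."""
    s, i, r = state
    return (-beta * s * i - v * s, beta * s * i - gamma * i, gamma * i + v * s)

def infected_days(v, days=120, dt=0.5):
    """Integrate by classical RK4; return the cumulative infection burden."""
    state, burden, t = (0.99, 0.01, 0.0), 0.0, 0.0
    while t < days:
        k1 = deriv(state, v)
        k2 = deriv(tuple(s + dt / 2 * k for s, k in zip(state, k1)), v)
        k3 = deriv(tuple(s + dt / 2 * k for s, k in zip(state, k2)), v)
        k4 = deriv(tuple(s + dt * k for s, k in zip(state, k3)), v)
        state = tuple(s + dt / 6 * (a + 2 * b + 2 * c + d)
                      for s, a, b, c, d in zip(state, k1, k2, k3, k4))
        burden += state[1] * dt
        t += dt
    return burden

def best_rate(candidates, w_inf=10.0, w_vac=50.0):
    """Coarse stand-in for the genetic algorithm: search a grid of rates."""
    return min(candidates, key=lambda v: w_inf * infected_days(v) + w_vac * v)
```

Raising v always shrinks the epidemic in this model; the optimization question is only whether the averted infection burden outweighs the (invented) vaccination cost.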
Combining multiobjective optimization and Bayesian model averaging to calibrate forecast ensembles of soil hydraulic models
NASA Astrophysics Data System (ADS)
WöHling, Thomas; Vrugt, Jasper A.
2008-12-01
Most studies in vadose zone hydrology use a single conceptual model for predictive inference and analysis. Focusing on the outcome of a single model is prone to statistical bias and underestimation of uncertainty. In this study, we combine multiobjective optimization and Bayesian model averaging (BMA) to generate forecast ensembles of soil hydraulic models. To illustrate our method, we use observed tensiometric pressure head data at three different depths in a layered vadose zone of volcanic origin in New Zealand. A set of seven different soil hydraulic models is calibrated using a multiobjective formulation with three different objective functions that each measure the mismatch between observed and predicted soil water pressure head at one specific depth. The Pareto solution space corresponding to these three objectives is estimated with AMALGAM and used to generate four different model ensembles. These ensembles are postprocessed with BMA and used for predictive analysis and uncertainty estimation. Our most important conclusions for the vadose zone under consideration are (1) the mean BMA forecast exhibits similar predictive capabilities as the best individual performing soil hydraulic model, (2) the size of the BMA uncertainty ranges increases with increasing depth and dryness in the soil profile, (3) the best performing ensemble corresponds to the compromise (or balanced) solution of the three-objective Pareto surface, and (4) the combined multiobjective optimization and BMA framework proposed in this paper is very useful for generating forecast ensembles of soil hydraulic models.
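The BMA combination step itself is simple: the ensemble mean is the weight-averaged forecast, and the ensemble variance adds the between-model spread to the within-model variances. The forecasts, variances, and weights below are invented, not the New Zealand data:

```python
def bma_forecast(forecasts, variances, weights):
    """BMA predictive mean and variance: the variance combines each model's own
    variance with the spread of the model means (between-model variance)."""
    assert abs(sum(weights) - 1.0) < 1e-9
    mean = sum(w * f for w, f in zip(weights, forecasts))
    var = sum(w * (s2 + (f - mean) ** 2)
              for w, f, s2 in zip(weights, forecasts, variances))
    return mean, var

# invented pressure-head forecasts (cm) from three soil hydraulic models
MEAN, VAR = bma_forecast([-120.0, -150.0, -135.0], [25.0, 16.0, 36.0],
                         [0.5, 0.3, 0.2])
```

The between-model term is why BMA uncertainty ranges can widen (conclusion 2) even when each individual model reports modest variance: disagreement among the models inflates the combined spread.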
Optimization and Performance Modeling of Stencil Computations on Modern Microprocessors
Datta, Kaushik; Kamil, Shoaib; Williams, Samuel; Oliker, Leonid; Shalf, John; Yelick, Katherine
2007-06-01
Stencil-based kernels constitute the core of many important scientific applications on block-structured grids. Unfortunately, these codes achieve a low fraction of peak performance, due primarily to the disparity between processor and main memory speeds. In this paper, we explore the impact of trends in memory subsystems on a variety of stencil optimization techniques and develop performance models to analytically guide our optimizations. Our work targets cache reuse methodologies across single and multiple stencil sweeps, examining cache-aware algorithms as well as cache-oblivious techniques on the Intel Itanium2, AMD Opteron, and IBM Power5. Additionally, we consider stencil computations on the heterogeneous multicore design of the Cell processor, a machine with an explicitly managed memory hierarchy. Overall our work represents one of the most extensive analyses of stencil optimizations and performance modeling to date. Results demonstrate that recent trends in memory system organization have reduced the efficacy of traditional cache-blocking optimizations. We also show that a cache-aware implementation is significantly faster than a cache-oblivious approach, while the explicitly managed memory on Cell enables the highest overall efficiency: Cell attains 88% of algorithmic peak while the best competing cache-based processor achieves only 54% of algorithmic peak performance.
An optimization model for the US Air-Traffic System
NASA Technical Reports Server (NTRS)
Mulvey, J. M.
1986-01-01
A systematic approach for monitoring U.S. air traffic was developed in the context of system-wide planning and control. Toward this end, a network optimization model with nonlinear objectives was chosen as the central element in the planning/control system. The network representation was selected because: (1) it provides a comprehensive structure for depicting essential aspects of the air traffic system, (2) it can be solved efficiently for large-scale problems, and (3) the design can be easily communicated to non-technical users through computer graphics. Briefly, the network planning models consider the flow of traffic through a graph as the basic structure. Nodes depict locations and time periods for either individual planes or for aggregated groups of airplanes. Arcs define variables as actual airplanes flying through space or as delays across time periods. As such, a special case of the network can be used to model the so-called flow control problem. Due to the large number of interacting variables and the difficulty in subdividing the problem into relatively independent subproblems, an integrated model was designed which depicts the entire high-level (above 29,000 feet) jet route system for the 48 contiguous states in the U.S. As a first step in demonstrating the concept's feasibility, a nonlinear risk/cost model was developed for the Indianapolis Airspace. The nonlinear network program NLPNETG was employed in solving the resulting test cases. This optimization program uses the Truncated-Newton method (quadratic approximation) for determining the search direction at each iteration in the nonlinear algorithm. It was shown that aircraft could be re-routed in an optimal fashion whenever traffic congestion increased beyond an acceptable level, as measured by the nonlinear risk function.
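The flavor of congestion-aware rerouting on a network can be sketched with a shortest-path search over arcs whose costs grow nonlinearly with congestion. Dijkstra here stands in for the Truncated-Newton nonlinear program, and the airports, times, and congestion levels are invented:

```python
import heapq

def dijkstra(graph, src, dst):
    """graph[u] = list of (v, cost); returns (total cost, node path)."""
    pq, seen = [(0.0, src, [src])], set()
    while pq:
        cost, u, path = heapq.heappop(pq)
        if u == dst:
            return cost, path
        if u in seen:
            continue
        seen.add(u)
        for v, c in graph.get(u, []):
            if v not in seen:
                heapq.heappush(pq, (cost + c, v, path + [v]))
    return float("inf"), []

def arc_cost(base_time, congestion, alpha=2.0):
    """Nonlinear penalty: effective cost grows quadratically with congestion."""
    return base_time * (1.0 + alpha * congestion ** 2)

# hypothetical jet-route segments: (to, base_time, congestion in [0, 1])
segments = {
    "ORD": [("IND", 1.0, 0.9), ("STL", 1.4, 0.2)],
    "IND": [("ATL", 1.2, 0.1)],
    "STL": [("ATL", 1.3, 0.1)],
}
graph = {u: [(v, arc_cost(t, g)) for v, t, g in arcs] for u, arcs in segments.items()}
```

With these numbers the nominally shorter ORD-IND-ATL route becomes costlier than the detour through STL once the quadratic congestion penalty is applied, so the search reroutes the flight.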
Dynamic imaging model and parameter optimization for a star tracker.
Yan, Jinyun; Jiang, Jie; Zhang, Guangjun
2016-03-21
Under dynamic conditions, star spots move across the image plane of a star tracker and form a smeared star image. This smearing effect increases errors in star position estimation and degrades attitude accuracy. First, an analytical energy distribution model of a smeared star spot is established based on a line segment spread function because the dynamic imaging process of a star tracker is equivalent to the static imaging process of linear light sources. The proposed model, which has a clear physical meaning, explicitly reflects the key parameters of the imaging process, including incident flux, exposure time, velocity of a star spot in an image plane, and Gaussian radius. Furthermore, an analytical expression of the centroiding error of the smeared star spot is derived using the proposed model. An accurate and comprehensive evaluation of centroiding accuracy is obtained based on the expression. Moreover, analytical solutions of the optimal parameters are derived to achieve the best performance in centroid estimation. Finally, we perform numerical simulations and a night sky experiment to validate the correctness of the dynamic imaging model, the centroiding error expression, and the optimal parameters. PMID:27136791
Optimizing Muscle Parameters in Musculoskeletal Modeling Using Monte Carlo Simulations
NASA Technical Reports Server (NTRS)
Hanson, Andrea; Reed, Erik; Cavanagh, Peter
2011-01-01
Astronauts assigned to long-duration missions experience bone and muscle atrophy in the lower limbs. The use of musculoskeletal simulation software has become a useful tool for modeling joint and muscle forces during human activity in reduced gravity as access to direct experimentation is limited. Knowledge of muscle and joint loads can better inform the design of exercise protocols and exercise countermeasure equipment. In this study, the LifeModeler(TM) (San Clemente, CA) biomechanics simulation software was used to model a squat exercise. The initial model using default parameters yielded physiologically reasonable hip-joint forces. However, no activation was predicted in some large muscles such as rectus femoris, which have been shown to be active in 1-g performance of the activity. Parametric testing was conducted using Monte Carlo methods and combinatorial reduction to find a muscle parameter set that more closely matched physiologically observed activation patterns during the squat exercise. Peak hip joint force using the default parameters was 2.96 times body weight (BW) and increased to 3.21 BW in an optimized, feature-selected test case. The rectus femoris was predicted to peak at 60.1% activation following muscle recruitment optimization, compared to 19.2% activation with default parameters. These results indicate the critical role that muscle parameters play in joint force estimation and the need for exploration of the solution space to achieve physiologically realistic muscle activation.
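The Monte Carlo parameter search can be sketched as random sampling of a parameter space against a target activation pattern. The two-parameter "model" below is an invented stand-in for the LifeModeler simulation, and the target activations are illustrative:

```python
import random

random.seed(42)
TARGET = {"rectus_femoris": 0.60, "gluteus_maximus": 0.45}  # target peak activations

def predicted_activation(strength_scale, optimal_fiber_len):
    """Invented stand-in for the musculoskeletal simulation: maps two muscle
    parameters to predicted peak activations (not LifeModeler's actual model)."""
    rf = min(1.0, 0.9 / strength_scale * optimal_fiber_len)
    gm = min(1.0, 0.7 / strength_scale * (1.2 - 0.4 * optimal_fiber_len))
    return {"rectus_femoris": rf, "gluteus_maximus": gm}

def mismatch(pred):
    """Squared error between predicted and target activations."""
    return sum((pred[m] - TARGET[m]) ** 2 for m in TARGET)

def monte_carlo(trials=2000):
    """Random search over the parameter ranges; keep the best-matching set."""
    best = None
    for _ in range(trials):
        s = random.uniform(0.5, 2.0)   # muscle strength scaling
        l = random.uniform(0.5, 1.5)   # normalized optimal fiber length
        err = mismatch(predicted_activation(s, l))
        if best is None or err < best[0]:
            best = (err, s, l)
    return best
```

In the study, each sample would require a full squat simulation, which is why combinatorial reduction of the parameter space matters before sampling.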
Model reduction for chemical kinetics: An optimization approach
Petzold, L.; Zhu, W.
1999-04-01
The kinetics of a detailed chemically reacting system can potentially be very complex. Although the chemist may be interested in only a few species, the reaction model almost always involves a much larger number of species. Some of those species are radicals, which are very reactive and can be important intermediaries in the reaction scheme. A large number of elementary reactions can occur among the species; some of these reactions are fast and some are slow. The aim of simplified kinetics modeling is to derive the simplest reaction system which retains the essential features of the full system. An optimization-based method for reducing the number of species and reactions in chemical kinetics models is described. Numerical results for several reaction mechanisms illustrate the potential of this approach.
Optimal control of CPR procedure using hemodynamic circulation model
Lenhart, Suzanne M.; Protopopescu, Vladimir A.; Jung, Eunok
2007-12-25
A method for determining a chest pressure profile for cardiopulmonary resuscitation (CPR) includes the steps of representing a hemodynamic circulation model based on a plurality of difference equations for a patient, applying an optimal control (OC) algorithm to the circulation model, and determining a chest pressure profile. The chest pressure profile defines a timing pattern of externally applied pressure to a chest of the patient to maximize blood flow through the patient. A CPR device includes a chest compressor, a controller communicably connected to the chest compressor, and a computer communicably connected to the controller. The computer determines the chest pressure profile by applying an OC algorithm to a hemodynamic circulation model based on the plurality of difference equations.
Finite state aeroelastic model for use in rotor design optimization
NASA Technical Reports Server (NTRS)
He, Chengjian; Peters, David A.
1993-01-01
In this article, a rotor aeroelastic model based on a newly developed finite state dynamic wake, coupled with blade finite element analysis, is described. The analysis is intended for application in rotor blade design optimization. A coupled simultaneous system of differential equations combining blade structural dynamics and aerodynamics is established in a formulation well-suited for design sensitivity computation. Each blade is assumed to be an elastic beam undergoing flap bending, lead-lag bending, elastic twist, and axial deflections. Aerodynamic loads are computed from unsteady blade element theory where the rotor three-dimensional unsteady wake is described by a generalized dynamic wake model. Correlation of results obtained from the analysis with flight test data is provided to assess model accuracy.
A comparison of motor submodels in the optimal control model
NASA Technical Reports Server (NTRS)
Lancraft, R. E.; Kleinman, D. L.
1978-01-01
Properties of several structural variations in the neuromotor interface portion of the optimal control model (OCM) are investigated. For example, it is known that commanding control-rate introduces an open-loop pole at s = 0 and will generate low-frequency phase and magnitude characteristics similar to experimental data. However, this gives rise to unusually high sensitivities with respect to motor and sensor noise ratios, thereby reducing the model's predictive capabilities. Relationships for different motor submodels are discussed to show the sources of these sensitivities. The models investigated include both pseudo motor-noise and actual (system-driving) motor-noise characterizations. The effect of explicit proprioceptive feedback in the OCM is also examined. To show graphically the effects of each submodel on system outputs, sensitivity studies are included and compared to data obtained from other tests.
A control model for dependable hydropower capacity optimization
NASA Astrophysics Data System (ADS)
Georgakakos, Aris P.; Yao, Huaming; Yu, Yongqing
In this article a control model that can be used to determine the dependable power capacity of a hydropower system is presented and tested. The model structure consists of a turbine load allocation module and a reservoir control module and allows for a detailed representation of hydroelectric facilities and various aspects of water management. Although this scheme is developed for planning purposes, it can also be used operationally with minor modifications. The model is applied to the Lanier-Allatoona-Carters reservoir system on the Chattahoochee and Coosa River Basins, in the southeastern United States. The case studies demonstrate that the more traditional simulation-based approaches often underestimate dependable power capacity. Firm energy optimization with or without dependable capacity constraints is taken up in a companion article [Georgakakos et al., this issue].
Parameter Optimization for the Gaussian Model of Folded Proteins
NASA Astrophysics Data System (ADS)
Erman, Burak; Erkip, Albert
2000-03-01
Recently, we proposed an analytical model of protein folding (B. Erman, K. A. Dill, J. Chem. Phys, 112, 000, 2000) and showed that this model successfully approximates the known minimum energy configurations of two-dimensional HP chains. All attractions (covalent and non-covalent) as well as repulsions were treated as if the monomer units interacted with each other through linear spring forces. Since the governing potential of the linear springs is derived from a Gaussian potential, the model is called the "Gaussian Model". The predicted conformations from the model for the hexamer and various 9-mer sequences all lie on the square lattice, although the model does not contain information about the lattice structure. Results of predictions for chains with 20 or more monomers also agreed well with corresponding known minimum-energy lattice structures. However, these predicted conformations did not lie exactly on the square lattice. In the present work, we treat the specific problem of optimizing the potentials (the strengths of the spring constants) so that the predictions are in better agreement with the known minimum energy structures.
Multiobjective optimization for model selection in kernel methods in regression.
You, Di; Benitez-Quiroz, Carlos Fabian; Martinez, Aleix M
2014-10-01
Regression plays a major role in many scientific and engineering problems. The goal of regression is to learn the unknown underlying function from a set of sample vectors with known outcomes. In recent years, kernel methods in regression have facilitated the estimation of nonlinear functions. However, two major (interconnected) problems remain open. The first problem is given by the bias-versus-variance tradeoff. If the model used to estimate the underlying function is too flexible (i.e., high model complexity), the variance will be very large. If the model is fixed (i.e., low complexity), the bias will be large. The second problem is to define an approach for selecting the appropriate parameters of the kernel function. To address these two problems, this paper derives a new smoothing kernel criterion, which measures the roughness of the estimated function as a measure of model complexity. Then, we use multiobjective optimization to derive a criterion for selecting the parameters of that kernel. The goal of this criterion is to find a tradeoff between the bias and the variance of the learned function. That is, the goal is to increase the model fit while keeping the model complexity in check. We provide extensive experimental evaluations using a variety of problems in machine learning, pattern recognition, and computer vision. The results demonstrate that the proposed approach yields smaller estimation errors as compared with methods in the state of the art. PMID:25291740
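The bias/complexity tension the abstract describes can be made concrete with a small kernel-regression example. This toy scalarizes the two objectives (fit error plus a roughness proxy for model complexity) into a single score rather than using the paper's multiobjective criterion; the data, kernel, and weighting are illustrative assumptions.

```python
# Bandwidth selection for Nadaraya-Watson kernel regression: small h fits the
# noise (low bias, high roughness), large h oversmooths (high bias).
import math

def nw_predict(x, xs, ys, h):
    """Nadaraya-Watson estimate with a Gaussian kernel of bandwidth h."""
    w = [math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in xs]
    return sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)

def roughness(xs, ys, h, grid):
    """Roughness proxy: integrated squared second difference of the fit."""
    f = [nw_predict(g, xs, ys, h) for g in grid]
    dg = grid[1] - grid[0]
    return sum(((f[i + 1] - 2 * f[i] + f[i - 1]) / dg ** 2) ** 2
               for i in range(1, len(f) - 1)) * dg

xs = [0.1 * i for i in range(60)]
ys = [math.sin(x) + 0.1 * math.sin(37.0 * x) for x in xs]   # signal + "noise"
grid = [0.05 + 0.1 * i for i in range(58)]

def score(h, lam=1e-4):
    fit = sum((nw_predict(x, xs, ys, h) - y) ** 2 for x, y in zip(xs, ys))
    return fit / len(xs) + lam * roughness(xs, ys, h, grid)

best_h = min([0.05 * k for k in range(1, 21)], key=score)
```

Sweeping the penalty weight `lam` instead of fixing it traces out the bias/variance trade-off curve, which is the one-dimensional analogue of the Pareto front the multiobjective formulation explores.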
An automatic and effective parameter optimization method for model tuning
NASA Astrophysics Data System (ADS)
Zhang, T.; Li, L.; Lin, Y.; Xue, W.; Xie, F.; Xu, H.; Huang, X.
2015-05-01
Physical parameterizations in General Circulation Models (GCMs) contain various uncertain parameters that greatly impact model performance and model climate sensitivity. Traditional manual and empirical tuning of these parameters is time consuming and ineffective. In this study, a "three-step" methodology is proposed to automatically and effectively obtain the optimum combination of some key parameters in cloud and convective parameterizations according to a comprehensive objective evaluation metric. Unlike traditional optimization methods, two extra steps (one determining parameter sensitivity and the other choosing the optimum initial values of the sensitive parameters) are introduced before the downhill simplex method to reduce the computational cost and improve the tuning performance. Atmospheric GCM simulation results show that the optimum combination of these parameters determined using this method improves the model's overall performance by 9%. The proposed methodology and software framework can be easily applied to other GCMs to speed up the model development process, especially for the unavoidable comprehensive parameter tuning during the model development stage.
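The three-step workflow can be sketched on a synthetic objective. This is a toy: a simple pattern search stands in for the paper's downhill simplex, and the "skill" function below is an invented stand-in for running the GCM and scoring it.

```python
# Toy "three-step" tuning: (1) screen sensitivity one-at-a-time,
# (2) coarse-search initial values for the sensitive parameters,
# (3) refine locally. The objective is a synthetic stand-in; p[2] is inert.

def skill(params):
    """Synthetic objective: lower is better."""
    p = params
    return (p[0] - 0.3) ** 2 + 4.0 * (p[1] - 0.7) ** 2 + 0.0 * p[2]

bounds = [(0.0, 1.0)] * 3
base = [0.5, 0.5, 0.5]

# Step 1: sensitivity = objective range as each parameter sweeps its bounds.
sens = []
for i, (lo, hi) in enumerate(bounds):
    vals = [skill(base[:i] + [lo + (hi - lo) * t / 4] + base[i + 1:])
            for t in range(5)]
    sens.append(max(vals) - min(vals))
sensitive = [i for i, s in enumerate(sens) if s > 1e-6]

# Step 2: coarse grid search over the sensitive parameters only.
best = list(base)
for i in sensitive:
    lo, hi = bounds[i]
    cands = [lo + (hi - lo) * t / 4 for t in range(5)]
    best[i] = min(cands, key=lambda v: skill(best[:i] + [v] + best[i + 1:]))

# Step 3: local refinement (pattern search stands in for downhill simplex).
step = 0.1
while step > 1e-4:
    improved = False
    for i in sensitive:
        for d in (-step, step):
            trial = list(best)
            trial[i] += d
            if skill(trial) < skill(best):
                best, improved = trial, True
    if not improved:
        step *= 0.5
```

The screening step correctly discards the inert third parameter, so the expensive refinement runs in a lower-dimensional space, which is exactly the cost saving the methodology targets.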
Optimization Model for Web Based Multimodal Interactive Simulations
Halic, Tansel; Ahn, Woojin; De, Suvranu
2015-01-01
This paper presents a technique for optimizing the performance of web based multimodal interactive simulations. For such applications, where visual quality and simulation performance directly influence user experience, overloading of hardware resources may degrade simulation quality and user satisfaction. However, optimizing simulation performance on each individual hardware platform is not practical. Hence, we present a mixed integer programming model that optimizes graphical rendering and simulation performance while satisfying application specific constraints. Our approach includes three distinct phases: identification, optimization and update. In the identification phase, the computing and rendering capabilities of the client device are evaluated using an exploratory proxy code. This data is utilized in conjunction with user specified design requirements in the optimization phase to ensure best possible computational resource allocation. The optimum solution is used to set rendering parameters (e.g. texture size, canvas resolution) and simulation parameters (e.g. simulation domain) in the update phase. Test results are presented on multiple hardware platforms with diverse computing and graphics capabilities to demonstrate the effectiveness of our approach. PMID:26085713
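The optimization phase amounts to choosing discrete quality settings subject to a measured performance budget. A brute-force enumeration over a tiny design space illustrates the structure of such a mixed-integer problem; the cost and quality functions below are invented placeholders for the calibration data the identification phase would produce.

```python
# Toy stand-in for the mixed-integer program: pick texture size and canvas
# resolution to maximize a quality score under a frame-time budget.
import itertools

texture_sizes = [256, 512, 1024]                    # pixels, illustrative
resolutions = [(640, 480), (1280, 720), (1920, 1080)]
FRAME_BUDGET_MS = 16.0                              # ~60 fps target

def frame_cost_ms(tex, res):
    """Pretend per-frame cost calibrated during the identification phase."""
    return 2.0 + tex / 256.0 + (res[0] * res[1]) / 200_000.0

def quality(tex, res):
    """Pretend user-experience score: bigger textures and canvas are better."""
    return tex / 1024.0 + (res[0] * res[1]) / (1920 * 1080)

feasible = [(t, r) for t, r in itertools.product(texture_sizes, resolutions)
            if frame_cost_ms(t, r) <= FRAME_BUDGET_MS]
best_tex, best_res = max(feasible, key=lambda tr: quality(*tr))
```

With these invented numbers the full 1080p canvas fits the budget only at the medium texture size, so the enumeration trades texture detail for resolution; a real MIP solver does the same search efficiently over a much larger design space.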
Automated Finite Element Modeling of Wing Structures for Shape Optimization
NASA Technical Reports Server (NTRS)
Harvey, Michael Stephen
1993-01-01
The displacement formulation of the finite element method is the most general and most widely used technique for structural analysis of airplane configurations. Modern structural synthesis techniques based on the finite element method have reached a certain maturity in recent years, and large airplane structures can now be optimized with respect to sizing type design variables for many load cases subject to a rich variety of constraints including stress, buckling, frequency, stiffness and aeroelastic constraints (Refs. 1-3). These structural synthesis capabilities use gradient based nonlinear programming techniques to search for improved designs. For these techniques to be practical a major improvement was required in the computational cost of finite element analyses (needed repeatedly in the optimization process). Thus, associated with the progress in structural optimization, a new perspective of structural analysis has emerged, namely, structural analysis specialized for design optimization application, or what is known as "design oriented structural analysis" (Ref. 4). This discipline includes approximation concepts and methods for obtaining behavior sensitivity information (Ref. 1), all needed to make the optimization of large structural systems (modeled by thousands of degrees of freedom and thousands of design variables) practical and cost effective.
H2-optimal control with generalized state-space models for use in control-structure optimization
NASA Technical Reports Server (NTRS)
Wette, Matt
1991-01-01
Several advances are provided for solving combined control-structure optimization problems. The author has extended solutions from H2 optimal control theory to the use of generalized state-space models. The generalized state-space models preserve the sparsity inherent in finite element models and hence show promise for handling very large problems. Also, expressions for the gradient of the optimal control cost are derived which use the generalized state-space models.
Optimized diagnostic model combination for improving diagnostic accuracy
NASA Astrophysics Data System (ADS)
Kunche, S.; Chen, C.; Pecht, M. G.
Identifying the most suitable classifier for diagnostics is a challenging task. In addition to using domain expertise, a trial and error method has been widely used to identify the most suitable classifier. Classifier fusion can be used to overcome this challenge, and it is widely known to perform better than a single classifier. Classifier fusion helps in overcoming the error due to the inductive bias of the various classifiers. The combination rule also plays a vital role in classifier fusion, and it has not been well studied which combination rules provide the best performance during fusion. Good combination rules achieve good generalizability while taking advantage of the diversity of the classifiers. In this work, we develop an approach for ensemble learning consisting of an optimized combination rule. Generalizability, the ability of a classifier to learn the underlying model from the training data and to predict unseen observations, has been acknowledged to be a challenge when training a diverse set of classifiers; in this paper it is achieved through an optimal balance between bias and variance errors using the combination rule. Cross validation has been employed during performance evaluation of each classifier to get an unbiased performance estimate. An objective function is constructed and optimized based on the performance evaluation to achieve the optimal bias-variance balance. This function can be solved as a constrained nonlinear optimization problem. Sequential Quadratic Programming based optimization, with good convergence properties, has been employed. We demonstrate the applicability of the algorithm using support vector machines and neural networks as classifiers, but the methodology is broadly applicable to combining other classifier algorithms as well. The method has been applied to the fault diagnosis of analog circuits. The performance of the proposed
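The essence of an optimized combination rule can be shown with two score-level classifiers fused by a single weight. This is a toy: a one-dimensional grid search stands in for the paper's SQP on a bias/variance objective, and the scores and labels are invented.

```python
# Fuse two classifiers' probability scores with weight w, chosen to maximize
# accuracy on a labeled set. Illustrative data; not the paper's circuits.

labels = [0, 0, 0, 0, 1, 1, 1, 1]
scores_a = [0.1, 0.4, 0.35, 0.6, 0.55, 0.8, 0.7, 0.9]   # classifier A, P(y=1)
scores_b = [0.2, 0.1, 0.6, 0.3, 0.4, 0.6, 0.9, 0.8]     # classifier B, P(y=1)

def accuracy(w):
    """Accuracy of the convex combination w*A + (1-w)*B at threshold 0.5."""
    fused = [w * a + (1 - w) * b for a, b in zip(scores_a, scores_b)]
    preds = [1 if f >= 0.5 else 0 for f in fused]
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

weights = [k / 20 for k in range(21)]
best_w = max(weights, key=accuracy)
```

Here neither classifier alone is best: A alone (w=1) and B alone (w=0) each make avoidable errors, while an intermediate weight exploits their diversity, which is the motivation for optimizing the combination rule rather than picking one classifier.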
Best management practices (BMPs) are perceived as being effective in reducing nutrient loads transported from non-point sources (NPS) to receiving water bodies. The objective of this study was to develop a modeling-optimization framework that can be used by watershed management p...
WE-D-BRE-04: Modeling Optimal Concurrent Chemotherapy Schedules
Jeong, J; Deasy, J O
2014-06-15
Purpose: Concurrent chemo-radiation therapy (CCRT) has become a more common cancer treatment option, with better tumor control rates for several tumor sites, including head and neck and lung cancer. In this work, possible optimal chemotherapy schedules were investigated by implementing chemotherapy cell-kill into a tumor response model of RT. Methods: The chemotherapy effect has been added into a published model (Jeong et al., PMB (2013) 58:4897), in which the tumor response to RT can be simulated with the effects of hypoxia and proliferation. Based on the two-compartment pharmacokinetic model, the temporal concentration of the chemotherapy agent was estimated. Log cell-kill was assumed, and the cell-kill constant was estimated from the observed increase in local control due to concurrent chemotherapy. For a simplified two-cycle CCRT regimen, several different starting times and intervals were simulated with a conventional RT regimen (2 Gy/fx, 5 fx/wk). The effectiveness of CCRT was evaluated in terms of the reduction in radiation dose required for 50% control, to find the optimal chemotherapy schedule. Results: Assuming the typical slope of the dose response curve (γ50=2), the observed 10% increase in local control rate was evaluated to be equivalent to an extra RT dose of about 4 Gy, from which the cell-kill rate of chemotherapy was derived to be about 0.35. The best response was obtained when chemotherapy was started about 3 weeks after RT began. As the interval between the two cycles decreases, the efficacy of chemotherapy increases, with a broader range of optimal starting times. Conclusion: The effect of chemotherapy has been implemented into the resource-conservation tumor response model to investigate CCRT. The results suggest that concurrent chemotherapy might be more effective when delayed for about 3 weeks, due to the lower tumor burden and the larger fraction of proliferating cells after reoxygenation.
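Why chemotherapy timing matters at all can be illustrated with a deliberately simplified simulation: if the drug radiosensitizes RT fractions for a fixed window after it starts, and its potency scales with a proliferating fraction that rises as the tumor reoxygenates, an interior optimum emerges. All numbers below are invented for illustration, not the paper's fitted values, and this toy does not reproduce the paper's specific 3-week result.

```python
# Simplified concurrent chemo-radiation timing: a 7-week RT course
# (weekday fractions), chemotherapy sensitizes fractions for 14 days after
# it starts, and its potency grows with the proliferating fraction.
import math

DAYS, SF_RT, SENS = 49, 0.8, 1.3      # course length, per-fraction survival, sensitization

def prolif_fraction(day):
    """Proliferating fraction rises as hypoxic cells reoxygenate."""
    return 1.0 - 0.7 * math.exp(-day / 14.0)

def log10_survival(chemo_start):
    """Log10 surviving fraction after the course; more negative = more kill."""
    total = 0.0
    for d in range(DAYS):
        if d % 7 >= 5:                 # weekend: no RT fraction
            continue
        kill = math.log10(SF_RT)       # negative (cell kill per fraction)
        if chemo_start <= d < chemo_start + 14:
            kill *= 1.0 + (SENS - 1.0) * prolif_fraction(chemo_start)
        total += kill
    return total

best_start = min(range(0, DAYS, 7), key=log10_survival)
```

In this toy the best start day balances two effects: starting later catches a larger proliferating fraction, but starting too late truncates the sensitization window against the end of the course.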
Modeling and optimization of a hybrid solar combined cycle (HYCS)
NASA Astrophysics Data System (ADS)
Eter, Ahmad Adel
2011-12-01
The main objective of this thesis is to investigate the feasibility of integrating concentrated solar power (CSP) technology with conventional combined cycle technology for electric generation in Saudi Arabia. The generated electricity can be used locally to meet the annually increasing demand. Specifically, it can be utilized to meet the demand during the hours 10 am-3 pm and prevent blackout hours in some industrial sectors. The proposed CSP design gives flexibility in system operation, since it works as a conventional combined cycle during night time and switches to work as a hybrid solar combined cycle during day time. The first objective of the thesis is to develop a thermo-economical mathematical model that can simulate the performance of a hybrid solar-fossil fuel combined cycle. The second objective is to develop a computer simulation code that can solve the thermo-economical mathematical model using available software such as EES. The developed simulation code is used to analyze the thermo-economic performance of different configurations of integrating CSP with the conventional fossil fuel combined cycle, to identify the optimal integration configuration. This optimal integration configuration has been investigated further to achieve the optimal design of the solar field that gives the optimal solar share. Thermo-economic performance metrics available in the literature have been used in the present work to assess the thermo-economic performance of the investigated configurations. The economic and environmental impacts of integrating CSP with the conventional fossil fuel combined cycle are estimated and discussed. Finally, the optimal integration configuration is found to be solarization of the steam side of the conventional combined cycle with a solar multiple of 0.38, which requires 29 hectares and gives a HYCS LEC of 63.17 $/MWh under Dhahran weather conditions.
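The levelized electricity cost (LEC) metric used to score configurations is a simple annualized-cost ratio. The sketch below shows the standard capital-recovery-factor form; all inputs are illustrative placeholders, not the study's Dhahran figures.

```python
# Back-of-envelope levelized electricity cost (LEC) in $/MWh.

def crf(rate, years):
    """Capital recovery factor: annualizes capital at discount rate `rate`."""
    return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

def lec(capital, om_per_year, fuel_per_year, mwh_per_year,
        rate=0.08, years=25):
    """LEC = (annualized capital + annual O&M + annual fuel) / annual energy."""
    annualized = capital * crf(rate, years) + om_per_year + fuel_per_year
    return annualized / mwh_per_year

# Hypothetical plant: $150M capital, $3M/yr O&M, $12M/yr fuel, 400 GWh/yr.
cost = lec(capital=150e6, om_per_year=3e6, fuel_per_year=12e6,
           mwh_per_year=400_000)
```

Comparing such LEC values across candidate solar-field sizes is how a figure like the study's 63.17 $/MWh at solar multiple 0.38 would be selected.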
Jayachandran, Devaraj; Laínez-Aguirre, José; Rundell, Ann; Vik, Terry; Hannemann, Robert; Reklaitis, Gintaras; Ramkrishna, Doraiswami
2015-01-01
6-Mercaptopurine (6-MP) is one of the key drugs in the treatment of many pediatric cancers, autoimmune diseases and inflammatory bowel disease. 6-MP is a prodrug, converted to an active metabolite, 6-thioguanine nucleotide (6-TGN), through an enzymatic reaction involving thiopurine methyltransferase (TPMT). Pharmacogenomic variation observed in the TPMT enzyme produces a significant variation in drug response among the patient population. Despite 6-MP's widespread use and the observed variation in treatment response, efforts at quantitative optimization of dose regimens for individual patients are limited. In addition, research efforts devoted to pharmacogenomics to predict clinical responses are proving far from ideal. In this work, we present a Bayesian population modeling approach to develop a pharmacological model for 6-MP metabolism in humans. In the face of scarce data in clinical settings, a model reduction approach based on global sensitivity analysis is used to minimize the parameter space. For accurate estimation of sensitive parameters, robust optimal experimental design based on D-optimality criteria was exploited. With the patient-specific model, a model predictive control algorithm is used to optimize the dose scheduling with the objective of maintaining the 6-TGN concentration within its therapeutic window. More importantly, for the first time, we show how the incorporation of information from different levels of the biological chain of response (i.e., gene expression-enzyme phenotype-drug phenotype) plays a critical role in determining the uncertainty in predicting the therapeutic target. The model and the control approach can be utilized in the clinical setting to individualize 6-MP dosing based on the patient's ability to metabolize the drug, instead of the traditional standard-dose-for-all approach. PMID:26226448
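The dose-individualization idea can be sketched with a minimal two-compartment pharmacokinetic model: daily 6-MP doses feed a plasma compartment that is converted into the active 6-TGN compartment, and the dose is chosen so the steady-state 6-TGN level sits in a therapeutic window. The rate constants, doses, and window below are illustrative assumptions, not fitted patient parameters.

```python
# Two-compartment sketch: 6-MP -> 6-TGN with first-order conversion and
# elimination, integrated with hourly Euler steps.

K_ABS_TO_TGN = 0.05    # 6-MP -> 6-TGN conversion rate (1/h), illustrative
K_ELIM_MP = 0.30       # 6-MP elimination (1/h), illustrative
K_ELIM_TGN = 0.01      # 6-TGN elimination (1/h), illustrative

def simulate(daily_dose, days=30):
    """Euler integration with 1-hour steps; returns hourly 6-TGN levels."""
    mp, tgn, levels = 0.0, 0.0, []
    for h in range(days * 24):
        if h % 24 == 0:
            mp += daily_dose                       # once-daily oral dose
        mp, tgn = (mp - (K_ELIM_MP + K_ABS_TO_TGN) * mp,
                   tgn + K_ABS_TO_TGN * mp - K_ELIM_TGN * tgn)
        levels.append(tgn)
    return levels

# Pick the smallest daily dose whose end-of-course 6-TGN sits in the window.
WINDOW = (20.0, 60.0)                              # illustrative units
chosen = next(d for d in [25, 50, 75, 100]
              if WINDOW[0] <= simulate(d)[-1] <= WINDOW[1])
```

A patient with different TPMT activity would have a different conversion rate, shifting which dose lands in the window; that per-patient re-solution is the point of the model predictive control approach.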
Verification of immune response optimality through cybernetic modeling.
Batt, B C; Kompala, D S
1990-02-01
An immune response cascade that is T cell independent begins with the stimulation of virgin lymphocytes by antigen to differentiate into large lymphocytes. These immune cells can either replicate themselves or differentiate into plasma cells or memory cells. Plasma cells produce antibody at a specific rate up to two orders of magnitude greater than large lymphocytes. However, plasma cells have short life-spans and cannot replicate. Memory cells produce only surface antibody, but in the event of a subsequent infection by the same antigen, memory cells revert rapidly to large lymphocytes. Immunologic memory is maintained throughout the organism's lifetime. Many immunologists believe that the optimal response strategy calls for large lymphocytes to replicate first, then differentiate into plasma cells and when the antigen has been nearly eliminated, they form memory cells. A mathematical model incorporating the concept of cybernetics has been developed to study the optimality of the immune response. Derived from the matching law of microeconomics, cybernetic variables control the allocation of large lymphocytes to maximize the instantaneous antibody production rate at any time during the response in order to most efficiently inactivate the antigen. A mouse is selected as the model organism and bacteria as the replicating antigen. In addition to verifying the optimal switching strategy, results showing how the immune response is affected by antigen growth rate, initial antigen concentration, and the number of antibodies required to eliminate an antigen are included. PMID:2338827
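The switching question (replicate first, differentiate later) can be posed as a tiny discrete-time simulation: large lymphocytes either replicate, growing the pool, or differentiate into short-lived plasma cells that secrete antibody at a much higher rate. All rates below are invented for illustration, and the all-at-once switch is a simplification of the cybernetic allocation in the paper.

```python
# Compare total antibody produced over a fixed response window for different
# days of switching from replication to plasma-cell differentiation.

GROWTH = 0.9      # large-lymphocyte replication rate per day, illustrative
AB_L = 1.0        # antibody rate of large lymphocytes
AB_P = 100.0      # antibody rate of plasma cells (two orders larger)
P_LIFE = 4        # plasma-cell lifespan in days

def total_antibody(switch_day, horizon=10):
    large, antibody = 1.0, 0.0
    plasma = [0.0] * P_LIFE                     # age-structured plasma cells
    for day in range(horizon):
        antibody += AB_L * large + AB_P * sum(plasma)
        plasma = [large if day >= switch_day else 0.0] + plasma[:-1]
        if day >= switch_day:
            large = 0.0                         # all differentiate at switch
        else:
            large *= 1.0 + GROWTH
    return antibody

best_switch = max(range(10), key=total_antibody)
```

Even this crude version reproduces the qualitative optimum: switching neither immediately (too few cells) nor at the end (no time to secrete), but late enough that the expanded pool still has most of a plasma-cell lifespan left.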
Optimal aeroassisted coplanar orbital transfer using an energy model
NASA Technical Reports Server (NTRS)
Halyo, Nesim; Taylor, Deborah B.
1989-01-01
The atmospheric portion of the trajectories for aeroassisted coplanar orbit transfer was investigated. The equations of motion for the problem are expressed using a reduced-order model, with the total vehicle energy (kinetic plus potential) as the independent variable rather than time. The order reduction is achieved analytically without approximating the vehicle dynamics. In this model, the problem of coplanar orbit transfer is seen as one in which a given amount of energy must be transferred from the vehicle to the atmosphere during the trajectory without overheating the vehicle. An optimal control problem is posed in which a linear combination of the integrated square of the heating rate and the vehicle drag is the cost function to be minimized. The necessary conditions for optimality are obtained; these result in a fourth-order two-point boundary-value problem. A parametric study of the optimal guidance trajectory is made in which the relative weighting of the heating-rate term versus the drag term is varied. Simulations of the guidance trajectories are presented.
Martian Radiative Transfer Modeling Using the Optimal Spectral Sampling Method
NASA Technical Reports Server (NTRS)
Eluszkiewicz, J.; Cady-Pereira, K.; Uymin, G.; Moncet, J.-L.
2005-01-01
The large volume of existing and planned infrared observations of Mars has prompted the development of a new martian radiative transfer model that could be used in the retrievals of atmospheric and surface properties. The model is based on the Optimal Spectral Sampling (OSS) method [1]. The method is a fast and accurate monochromatic technique applicable to a wide range of remote sensing platforms (from microwave to UV) and was originally developed for the real-time processing of infrared and microwave data acquired by instruments aboard the satellites forming part of the next-generation global weather satellite system NPOESS (National Polar-orbiting Operational Environmental Satellite System) [2]. As part of our on-going research related to the radiative properties of the martian polar caps, we have begun the development of a martian OSS model with the goal of using it to perform the self-consistent atmospheric corrections necessary to retrieve cap emissivity from Thermal Emission Spectrometer (TES) spectra. While the caps will provide the initial focus area for applying the new model, it is hoped that the model will be of interest to the wider Mars remote sensing community.
Traveling waves in an optimal velocity model of freeway traffic.
Berg, P; Woods, A
2001-03-01
Car-following models provide both a tool to describe traffic flow and algorithms for autonomous cruise control systems. Recently developed optimal velocity models contain a relaxation term that assigns a desirable speed to each headway and a response time over which drivers adjust to optimal velocity conditions. These models predict traffic breakdown phenomena analogous to real traffic instabilities. In order to deepen our understanding of these models, in this paper, we examine the transition from a linear stable stream of cars of one headway into a linear stable stream of a second headway. Numerical results of the governing equations identify a range of transition phenomena, including monotonic and oscillating traveling waves and a time-dependent dispersive adjustment wave. However, for certain conditions, we find that the adjustment takes the form of a nonlinear traveling wave from the upstream headway to a third, intermediate headway, followed by either another traveling wave or a dispersive wave further downstream matching the downstream headway. This intermediate value of the headway is selected such that the nonlinear traveling wave is the fastest stable traveling wave which is observed to develop in the numerical calculations. The development of these nonlinear waves, connecting linear stable flows of two different headways, is somewhat reminiscent of stop-start waves in congested flow on freeways. The different types of adjustments are classified in a phase diagram depending on the upstream and downstream headway and the response time of the model. The results have profound consequences for autonomous cruise control systems. For an autocade of both identical and different vehicles, the control system itself may trigger formations of nonlinear, steep wave transitions. Further information is available [Y. Sugiyama, Traffic and Granular Flow (World Scientific, Singapore, 1995), p. 137]. PMID:11308709
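The basic optimal velocity dynamics (each driver relaxes toward a headway-dependent target speed with sensitivity a = 1/response time) can be simulated directly. The tanh-shaped velocity function and parameters below are the standard illustrative choices for this class of model, not values from the paper; the sensitivity is chosen in the stable regime so the platoon settles to uniform flow.

```python
# Optimal velocity car-following: dv_i/dt = a * (V(headway_i) - v_i),
# integrated with forward Euler for a platoon behind a constant-speed leader.
import math

def V(h):
    """Optimal velocity as a function of headway (Bando-style tanh profile)."""
    return math.tanh(h - 2.0) + math.tanh(2.0)

def simulate(n_cars=10, a=2.0, h0=2.5, dt=0.05, steps=4000):
    x = [-i * h0 for i in range(n_cars)]        # car 0 leads
    v = [V(h0)] + [0.0] * (n_cars - 1)          # followers start at rest
    for _ in range(steps):
        acc = [0.0] + [a * (V(x[i - 1] - x[i]) - v[i])
                       for i in range(1, n_cars)]
        x = [xi + vi * dt for xi, vi in zip(x, v)]
        v = [vi + ai * dt for vi, ai in zip(v, acc)]
    return x, v

x, v = simulate()
```

Lowering `a` (a longer response time) below the stability threshold is what produces the traffic-breakdown and traveling-wave phenomena analyzed in the paper; with the stable value used here, all headways relax back to the equilibrium spacing.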
Considerations for parameter optimization and sensitivity in climate models.
Neelin, J David; Bracco, Annalisa; Luo, Hao; McWilliams, James C; Meyerson, Joyce E
2010-12-14
Climate models exhibit high sensitivity in some respects, such as for differences in predicted precipitation changes under global warming. Despite successful large-scale simulations, regional climatology features prove difficult to constrain toward observations, with challenges including high dimensionality, computationally expensive simulations, and ambiguity in the choice of objective function. In an atmospheric General Circulation Model forced by observed sea surface temperature or coupled to a mixed-layer ocean, many climatic variables yield rms-error objective functions that vary smoothly through the feasible parameter range. This smoothness occurs despite nonlinearity strong enough to reverse the curvature of the objective function in some parameters, and to imply limitations on multimodel ensemble means as an estimator of global warming precipitation changes. Low-order polynomial fits to the model output spatial fields as a function of parameter (quadratic in model field, fourth-order in objective function) yield surprisingly successful metamodels for many quantities and facilitate a multiobjective optimization approach. Tradeoffs arise as optima for different variables occur at different parameter values, but with agreement in certain directions. Optima often occur at the limit of the feasible parameter range, identifying key parameterization aspects warranting attention: here, the interaction of convection with free-tropospheric water vapor. Analytic results for spatial fields of leading contributions to the optimization help to visualize tradeoffs at a regional level, e.g., how mismatches between sensitivity and error spatial fields yield regional error under minimization of global objective functions. The approach is sufficiently simple to guide parameter choices and to aid intercomparison of sensitivity properties among climate models. PMID:21115841
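The metamodel idea reduces to fitting a low-order polynomial through a few expensive model evaluations and optimizing the cheap surrogate. A one-parameter sketch, with an invented stand-in for the rms-error objective (a real application would run the GCM at each parameter value):

```python
# Fit the quadratic through three objective evaluations (Newton's divided
# differences) and take its vertex as the estimated optimal parameter.

def objective(p):
    """Stand-in for an expensive model run scored by an rms-error metric."""
    return (p - 0.4) ** 2 + 0.2

def quadratic_vertex(p0, p1, p2):
    y0, y1, y2 = objective(p0), objective(p1), objective(p2)
    d1 = (y1 - y0) / (p1 - p0)
    d2 = ((y2 - y1) / (p2 - p1) - d1) / (p2 - p0)
    # Interpolant: y = y0 + d1*(p-p0) + d2*(p-p0)*(p-p1); set derivative = 0.
    return (p0 + p1) / 2 - d1 / (2 * d2)

p_star = quadratic_vertex(0.0, 0.5, 1.0)   # three "model runs"
```

With several objectives, one such surrogate per variable makes the multiobjective trade-off analysis cheap: the competing optima and their directions of agreement can be read off the fitted polynomials instead of new model runs.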
Mutation Size Optimizes Speciation in an Evolutionary Model
Dees, Nathan D.; Bahar, Sonya
2010-01-01
The role of mutation rate in optimizing key features of evolutionary dynamics has recently been investigated in various computational models. Here, we address the related question of how maximum mutation size affects the formation of species in a simple computational evolutionary model. We find that the number of species is maximized for intermediate values of a mutation size parameter μ; the result is observed for evolving organisms on a randomly changing landscape as well as in a version of the model where negative feedback exists between the local population size and the fitness provided by the landscape. The same result is observed for various distributions of mutation values within the limits set by μ. When organisms with various values of μ compete against each other, those with intermediate μ values are found to survive. The surviving values of μ from these competition simulations, however, do not necessarily coincide with the values that maximize the number of species. These results suggest that various complex factors are involved in determining optimal mutation parameters for any population, and may also suggest approaches for building a computational bridge between the (micro) dynamics of mutations at the level of individual organisms and (macro) evolutionary dynamics at the species level. PMID:20689827
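A stripped-down version of the setup can be simulated directly: organisms diffuse in a one-dimensional phenotype space by bounded mutations, and "species" are clusters separated by gaps larger than an isolation threshold. The dynamics, thresholds, and parameter values are illustrative assumptions, not the paper's model; sweeping μ in such a toy is how one would look for the intermediate-μ peak in species count.

```python
# Toy speciation count: fixed-size population, offspring resample parents and
# mutate by uniform(-mu, mu); clusters separated by > gap count as species.
import random

def count_species(positions, gap=0.5):
    """Number of clusters separated by a phenotype gap larger than `gap`."""
    ps = sorted(positions)
    return 1 + sum(1 for a, b in zip(ps, ps[1:]) if b - a > gap)

def evolve(mu, n=100, generations=150, seed=1):
    rng = random.Random(seed)
    pop = [0.0] * n
    for _ in range(generations):
        # each offspring picks a random parent; population size stays fixed
        pop = [p + rng.uniform(-mu, mu) for p in rng.choices(pop, k=n)]
    return pop

species = {mu: count_species(evolve(mu)) for mu in (0.001, 0.2, 2.0)}
```

With μ = 0.001 the whole population can spread at most 2·μ·generations = 0.3 < 0.5, so it is necessarily a single species, while large μ fragments the population; the paper's interesting regime is the intermediate one.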
Biomechanical modeling and optimal control of human posture.
Menegaldo, Luciano Luporini; Fleury, Agenor de Toledo; Weber, Hans Ingo
2003-11-01
The present work describes the biomechanical modeling of human postural mechanics in the sagittal plane and the use of optimal control to generate open-loop raising-up movements from a squatting position. The biomechanical model comprises 10 equivalent musculotendon actuators, based on a 40-muscle model, and three links (shank, thigh and HAT: Head, Arms and Trunk). Optimal control solutions are achieved through algorithms based on the Consistent Approximations Theory (Schwartz and Polak, 1996), where the continuous non-linear dynamics is represented in a discrete space by means of a Runge-Kutta integration and the control signals in a spline-coefficient functional space. This leads to non-linear programming problems solved by a sequential quadratic programming (SQP) method. Due to the highly non-linear and unstable nature of the posture dynamics, numerical convergence is difficult, and specific strategies must be implemented in order to allow convergence. Results for controls (muscular excitations) and angular trajectories are shown for two final simulation times, and specific control strategies are discussed. PMID:14522212
Optimal control model of arm configuration in a reaching task
NASA Astrophysics Data System (ADS)
Yamaguchi, Gary T.; Kakavand, Ali
1996-05-01
It was hypothesized that the configuration of the upper limb during a static hand-positioning task could be predicted using a dynamic musculoskeletal model and an optimal control routine. Both rhesus monkey and human upper extremity models were formulated; each had seven degrees of freedom (7-DOF) and 39 musculotendon pathways. A variety of configurations were generated about a physiologically measured configuration using the dynamic models and perturbations. The pseudoinverse optimal control method was applied to compute the minimum cost C at each of the generated configurations. The cost function C is the Crowninshield-Brand (1981) criterion, which relates C (the sum of muscle stresses squared) to the endurance time of a physiological task. The configuration with the minimum cost was compared to the configurations chosen by one monkey (four trials) and by eight human subjects (eight trials each). Predictions agree well for most, though not all, joint angles, suggesting that muscular effort is likely to be one major factor in the choice of a preferred static arm posture.
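For a quadratic cost of the Crowninshield-Brand type subject to a linear torque-equality constraint, the minimum has a weighted-pseudoinverse closed form, which is presumably why the abstract refers to a pseudoinverse optimal control method. The sketch below distributes a single joint torque among three hypothetical muscles; the moment arms, PCSA values and torque are made-up numbers, and muscle force bounds are ignored.

```python
import numpy as np

# Minimize sum_i (F_i / PCSA_i)^2  subject to  A @ F = tau,
# where A holds the muscle moment arms. Writing the cost as F^T W F with
# W = diag(1/PCSA^2), the optimum is the weighted pseudoinverse solution
# F = W^{-1} A^T (A W^{-1} A^T)^{-1} tau.
A = np.array([[0.04, 0.03, 0.025]])      # moment arms (m), one joint, 3 muscles
pcsa = np.array([12.0, 8.0, 5.0])        # cross-sectional areas (cm^2), assumed
tau = np.array([6.0])                    # required joint torque (N m), assumed

W_inv = np.diag(pcsa**2)                 # inverse of the cost weighting matrix
F = W_inv @ A.T @ np.linalg.solve(A @ W_inv @ A.T, tau)
stress_cost = np.sum((F / pcsa)**2)      # the minimized endurance-related cost
```

Among all force vectors producing the required torque, this solution has the smallest sum of squared muscle stresses; in particular it beats the plain (unweighted) pseudoinverse distribution under this metric.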
Multi-model groundwater-management optimization: reconciling disparate conceptual models
NASA Astrophysics Data System (ADS)
Timani, Bassel; Peralta, Richard
2015-09-01
Disagreement among policymakers often involves policy issues and differences between the decision makers' implicit utility functions. Significant disagreement can also exist concerning conceptual models of the physical system. Disagreement on the validity of a single simulation model delays discussion on policy issues and prevents the adoption of consensus management strategies. For such a contentious situation, the proposed multi-conceptual model optimization (MCMO) can help stakeholders reach a compromise strategy. MCMO computes mathematically optimal strategies that simultaneously satisfy analogous constraints and bounds in multiple numerical models that differ in boundary conditions, hydrogeologic stratigraphy, and discretization. Shadow prices and trade-offs guide the process of refining the first MCMO-developed multi-model strategy into a realistic compromise management strategy. By employing automated cycling, MCMO is practical for linear and nonlinear aquifer systems. In this reconnaissance study, MCMO application to the multilayer Cache Valley (Utah and Idaho, USA) river-aquifer system employs two simulation models with analogous background conditions but different vertical discretization and boundary conditions. The objective is to maximize additional safe pumping (beyond current pumping), subject to constraints on groundwater head and seepage from the aquifer to surface waters. MCMO application reveals that, in order to protect the local ecosystem, increased groundwater pumping can satisfy only 40% of the projected increase in water demand. To explore the possibility of increasing that pumping while protecting the ecosystem, MCMO clearly identifies localities requiring additional field data. MCMO is applicable to areas and optimization problems other than those used here; the steps to prepare comparable sub-models for MCMO use are area-dependent.
Optimal boson energy for superconductivity in the Holstein model
NASA Astrophysics Data System (ADS)
Lin, Chungwei; Wang, Bingnan; Teo, Koon Hoo
2016-06-01
We examine the superconducting solution of the Holstein model, in which the conduction electrons couple to dispersionless boson fields, using Migdal-Eliashberg theory and dynamical mean field theory. Although they differ in numerical values, both methods imply the existence of an optimal boson energy for superconductivity at a given electron-boson coupling. This nonmonotonic behavior can be understood as an interplay between polaron and superconducting physics: the electron-boson coupling is the origin of superconductivity, but at the same time it traps the conduction electrons, making the system more insulating. Our calculation provides a simple explanation of the recent experiment on sulfur hydride, in which an optimal pressure for superconductivity was observed. The validity of each method is discussed.
Optimal dividends in the Brownian motion risk model with interest
NASA Astrophysics Data System (ADS)
Fang, Ying; Wu, Rong
2009-07-01
In this paper, we consider a Brownian motion risk model in which, in addition, the surplus earns investment income at a constant force of interest. The objective is to find a dividend policy that maximizes the expected discounted value of dividend payments. It is well known that, for an unrestricted dividend rate, optimality is achieved by a barrier strategy. However, ultimate ruin of the company is certain if a barrier strategy is applied, which in many circumstances is not desirable. This consideration leads us to impose a restriction on the dividend stream: we assume that dividends are paid to the shareholders according to admissible strategies whose dividend rate is bounded by a constant. Under this additional constraint, we show that the optimal dividend strategy is a threshold strategy.
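A quick way to build intuition for the threshold strategy is Monte Carlo simulation: pay dividends at the capped rate a only while the surplus exceeds a threshold b, discount them at rate delta, and stop paying at ruin. The sketch below does exactly that for a Brownian surplus with interest; all parameter values are illustrative, not from the paper.

```python
import numpy as np

# Surplus dynamics: dX = (mu + r*X - c(X)) dt + sigma dW, where the dividend
# rate c(X) equals the cap a whenever X exceeds the threshold b, else 0.
rng = np.random.default_rng(0)
mu, sigma, r = 1.0, 1.5, 0.02     # drift, volatility, force of interest
a, b, delta = 0.8, 4.0, 0.05      # dividend cap, threshold, discount rate
dt, T, n_paths = 0.01, 50.0, 2000

def discounted_dividends(x0):
    steps = int(T / dt)
    x = np.full(n_paths, x0)
    alive = np.ones(n_paths, bool)     # paths that have not yet been ruined
    value = np.zeros(n_paths)
    for k in range(steps):
        rate = np.where(x > b, a, 0.0)             # threshold strategy
        value += np.exp(-delta * k * dt) * rate * dt * alive
        dw = rng.normal(0.0, np.sqrt(dt), n_paths)
        x = x + (mu + r * x - rate) * dt + sigma * dw
        alive &= (x > 0.0)                         # ruin stops dividends
    return value.mean()

v = discounted_dividends(x0=5.0)
```

The estimated value is necessarily below a/delta, the value of a perpetual dividend stream at the capped rate; comparing estimates across thresholds b gives a crude numerical check of the optimal threshold.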
Vibroacoustic optimization using a statistical energy analysis model
NASA Astrophysics Data System (ADS)
Culla, Antonio; D'Ambrogio, Walter; Fregolent, Annalisa; Milana, Silvia
2016-08-01
In this paper, an optimization technique for medium-to-high frequency dynamic problems based on the Statistical Energy Analysis (SEA) method is presented. In a SEA model, the subsystem energies are controlled by the internal loss factors (ILFs) and coupling loss factors (CLFs), which in turn depend on the physical parameters of the subsystems. A preliminary sensitivity analysis of the subsystem energies to the CLFs is performed to select the CLFs that most affect the subsystem energies. Since the injected power depends not only on the external loads but also on the physical parameters of the subsystems, it must be taken into account under certain conditions. This is accomplished in the optimization procedure, where approximate relationships between the CLFs, the injected power and the physical parameters are derived. The approach is applied to a typical aeronautical structure: the cabin of a helicopter.
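At the core of any SEA-based optimization is the steady-state power balance, a linear system relating the injected powers to the subsystem energies through the ILFs and CLFs. A minimal two-subsystem version (with made-up loss factors) can be sketched as:

```python
import numpy as np

# SEA power balance for subsystem i:
#   omega * (eta_i * E_i + sum_{j != i} (eta_ij * E_i - eta_ji * E_j)) = P_i,
# solved for the subsystem energies E. All loss factors here are illustrative.
omega = 2 * np.pi * 1000.0           # band centre frequency (rad/s)
ilf = np.array([0.01, 0.02])         # internal loss factors eta_1, eta_2
clf = np.array([[0.0, 0.003],        # coupling loss factors eta_12, eta_21
                [0.002, 0.0]])
P = np.array([1.0, 0.0])             # injected power (W); only subsystem 1 driven

n = len(ilf)
L = np.zeros((n, n))
for i in range(n):
    L[i, i] = ilf[i] + clf[i].sum()  # energy lost internally and to neighbours
    for j in range(n):
        if j != i:
            L[i, j] = -clf[j, i]     # energy received from subsystem j
E = np.linalg.solve(omega * L, P)
```

An optimization loop of the kind described in the abstract would repeatedly rebuild L from candidate physical parameters (via the CLF relationships) and re-solve this system to evaluate the subsystem energies.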
Timber harvest planning: a combined optimization/simulation model
Arthur, J.L.; Dykstra, D.P.
1980-11-01
A special cascading fixed charge model can be used to characterize a forest management planning problem in which the objectives are to identify the optimal shape of forest harvest cutting units and simultaneously to assign facilities for logging those units. A four-part methodology was developed to assist forest managers in analyzing areas proposed for harvesting. This methodology: analyzes harvesting feasibility; computes the optimal solution to the cascading fixed charge problem; undertakes a GASP IV simulation to provide additional information about the proposed harvesting operation; and permits the forest manager to perform a time-cost analysis that may lead to a more realistic, and thus improved, solution. (5 diagrams, 16 references, 3 tables)
Using Cotton Model Simulations to Estimate Optimally Profitable Irrigation Strategies
NASA Astrophysics Data System (ADS)
Mauget, S. A.; Leiker, G.; Sapkota, P.; Johnson, J.; Maas, S.
2011-12-01
In recent decades, irrigation pumping from the Ogallala Aquifer has led to declines in saturated thickness that natural recharge has not offset, raising questions about the long-term viability of agriculture in the cotton-producing areas of west Texas. Adopting irrigation management strategies that optimize profitability while reducing irrigation waste is one way of conserving the aquifer's water resource. Here, a database of modeled cotton yields generated under drip-irrigated, center-pivot-irrigated, and dryland production scenarios is used in a stochastic dominance analysis that identifies such strategies under varying commodity price and pumping cost conditions. This database and analysis approach will serve as the foundation for a web-based decision support tool that will help producers identify optimal irrigation treatments under specified cotton price, electricity cost, and depth-to-water-table conditions.
Logit Model based Performance Analysis of an Optimization Algorithm
NASA Astrophysics Data System (ADS)
Hernández, J. A.; Ospina, J. D.; Villada, D.
2011-09-01
In this paper, the performance of the Multi Dynamics Algorithm for Global Optimization (MAGO) is studied through simulation on five standard test functions. To assess how reliably the algorithm converges to a global optimum, a set of experiments searching for the best combination of the only two MAGO parameters, the number of iterations and the number of potential solutions, is considered. These parameters are varied sequentially while the dimension of the test functions is increased, and performance curves are obtained. MAGO was originally designed to perform well with small populations; the self-adaptation task with small populations therefore becomes more challenging as the problem dimension grows. The results show that the convergence probability to an optimal solution increases with both the number of iterations and the number of potential solutions, but that the success rates decline as the dimension of the problem escalates. A logit model is used to determine the mutual effects of the two algorithm parameters.
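The logit analysis can be sketched as follows: encode each run as (iterations, population size, converged?) and fit a logistic model for the convergence probability. The sketch below uses synthetic success data, not the actual MAGO benchmark runs, and fits the model by Newton-Raphson.

```python
import numpy as np

# Synthetic runs: success probability rises with iterations and population size.
rng = np.random.default_rng(1)
iters = rng.uniform(50, 500, 400)
pop = rng.uniform(10, 100, 400)
X = np.column_stack([np.ones(400), iters, pop])
true_beta = np.array([-4.0, 0.008, 0.03])        # assumed data-generating model
y = rng.random(400) < 1 / (1 + np.exp(-X @ true_beta))  # simulated convergences

# Newton-Raphson maximum likelihood for the logit model
beta = np.zeros(3)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))              # fitted success probabilities
    W = p * (1 - p)                              # IRLS weights
    grad = X.T @ (y - p)
    H = X.T @ (X * W[:, None])                   # observed information
    beta += np.linalg.solve(H, grad)
```

Positive fitted slopes on both parameters reproduce the paper's qualitative finding that convergence probability grows with both the number of iterations and the number of potential solutions.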
In vitro placental model optimization for nanoparticle transport studies
Cartwright, Laura; Poulsen, Marie Sønnegaard; Nielsen, Hanne Mørck; Pojana, Giulio; Knudsen, Lisbeth E; Saunders, Margaret; Rytting, Erik
2012-01-01
Background Advances in biomedical nanotechnology raise hopes in patient populations but may also raise questions regarding biodistribution and biocompatibility, especially during pregnancy. Special consideration must be given to the placenta as a biological barrier because a pregnant woman’s exposure to nanoparticles could have significant effects on the fetus developing in the womb. Therefore, the purpose of this study is to optimize an in vitro model for characterizing the transport of nanoparticles across human placental trophoblast cells. Methods The growth of BeWo (clone b30) human placental choriocarcinoma cells for nanoparticle transport studies was characterized in terms of optimized Transwell® insert type and pore size, the investigation of barrier properties by transmission electron microscopy, tight junction staining, transepithelial electrical resistance, and fluorescein sodium transport. Following the determination of nontoxic concentrations of fluorescent polystyrene nanoparticles, the cellular uptake and transport of 50 nm and 100 nm diameter particles was measured using the in vitro BeWo cell model. Results Particle size measurements, fluorescence readings, and confocal microscopy indicated both cellular uptake of the fluorescent polystyrene nanoparticles and the transcellular transport of these particles from the apical (maternal) to the basolateral (fetal) compartment. Over the course of 24 hours, the apparent permeability across BeWo cells grown on polycarbonate membranes (3.0 μm pore size) was four times higher for the 50 nm particles compared with the 100 nm particles. Conclusion The BeWo cell line has been optimized and shown to be a valid in vitro model for studying the transplacental transport of nanoparticles. Fluorescent polystyrene nanoparticle transport was size-dependent, as smaller particles reached the basal (fetal) compartment at a higher rate. PMID:22334780
Design Oriented Structural Modeling for Airplane Conceptual Design Optimization
NASA Technical Reports Server (NTRS)
Livne, Eli
1999-01-01
The main goal of the research conducted with the support of this grant was to develop design-oriented structural optimization methods for the conceptual design of airplanes. Traditionally, in conceptual design, airframe weight is estimated from statistical equations developed over years of fitting airplane weight data for similar existing airplanes. Utilization of such regression equations for the design of new airplanes can be justified only if the new airplanes use structural technology similar to that of the airplanes in those weight databases. If any new structural technology is to be pursued, or any new unconventional configurations designed, the statistical weight equations cannot be used; in such cases, structural weight estimation must be based on rigorous "physics based" structural analysis and optimization of the airframes under consideration. Work under this grant explored airframe design-oriented structural optimization techniques along two lines of research: methods based on "fast" design-oriented finite element technology, and methods based on equivalent plate / equivalent shell models of airframes, in which the vehicle is modeled as an assembly of plate and shell components, each simulating a lifting surface or nacelle / fuselage pieces. Since response to changes in geometry is essential in the conceptual design of airplanes, as is the capability to optimize the shape itself, research supported by this grant sought to develop efficient techniques for parametrization of airplane shape and sensitivity analysis with respect to shape design variables. Towards the end of the grant period, a prototype automated structural analysis code designed to work with the NASA Aircraft Synthesis conceptual design code (ACSYNT) was delivered to NASA Ames.
Proficient brain for optimal performance: the MAP model perspective
Bertollo, Maurizio; di Fronso, Selenia; Filho, Edson; Conforto, Silvia; Schmid, Maurizio; Bortoli, Laura; Comani, Silvia; Robazza, Claudio
2016-01-01
Background. The main goal of the present study was to explore theta and alpha event-related desynchronization/synchronization (ERD/ERS) activity during shooting performance. We adopted the idiosyncratic framework of the multi-action plan (MAP) model to investigate different processing modes underpinning four types of performance. In particular, we were interested in examining the neural activity associated with optimal-automated (Type 1) and optimal-controlled (Type 2) performances. Methods. Ten elite shooters (6 male and 4 female) with extensive international experience participated in the study. ERD/ERS analysis was used to investigate cortical dynamics during performance. A 4 × 3 (performance types × time) repeated measures analysis of variance was performed to test the differences among the four types of performance during the three seconds preceding the shots for theta, low alpha, and high alpha frequency bands. The dependent variables were the ERD/ERS percentages in each frequency band (i.e., theta, low alpha, high alpha) for each electrode site across the scalp. This analysis was conducted on 120 shots for each participant in three different frequency bands and the individual data were then averaged. Results. We found ERS to be mainly associated with optimal-automatic performance, in agreement with the “neural efficiency hypothesis.” We also observed more ERD as related to optimal-controlled performance in conditions of “neural adaptability” and proficient use of cortical resources. Discussion. These findings are congruent with the MAP conceptualization of four performance states, in which unique psychophysiological states underlie distinct performance-related experiences. From an applied point of view, our findings suggest that the MAP model can be used as a framework to develop performance enhancement strategies based on cognitive and neurofeedback techniques. PMID:27257557
The reproductive value in distributed optimal control models.
Wrzaczek, Stefan; Kuhn, Michael; Prskawetz, Alexia; Feichtinger, Gustav
2010-05-01
We show that in a large class of distributed optimal control models (DOCM), where the population is described by a McKendrick-type equation with an endogenous number of newborns, Fisher's reproductive value shows up as part of the shadow price of the population. Depending on the objective function, the reproductive value may be negative. Moreover, we show how the reproductive value behaves under changing vital rates. To motivate and demonstrate the general framework, we provide examples from health economics, epidemiology, and population biology. PMID:20096297
Numerical Modeling and Optimization of Warm-water Heat Sinks
NASA Astrophysics Data System (ADS)
Hadad, Yaser; Chiarot, Paul
2015-11-01
For cooling in large data centers and supercomputers, water is increasingly replacing air as the working fluid in heat sinks. Water offers unique capabilities, for example a higher heat capacity, Prandtl number, and convection heat transfer coefficient. The use of warm, rather than chilled, water has the potential to provide increased energy efficiency. The geometric and operating parameters of the heat sink govern its performance. Numerical modeling is used to examine the influence of geometry and operating conditions on key metrics such as thermal and flow resistance. The model also facilitates studies of the cooling of electronic chip hot spots and of failure scenarios. We report on the optimal parameters for a warm-water heat sink to achieve maximum cooling performance.
Read, Mark N; Bailey, Jacqueline; Timmis, Jon; Chtanova, Tatyana
2016-09-01
The advent of two-photon microscopy now reveals unprecedented, detailed spatio-temporal data on cellular motility and interactions in vivo. Understanding cellular motility patterns is key to gaining insight into the development and possible manipulation of the immune response. Computational simulation has become an established technique for understanding immune processes and evaluating hypotheses in the context of experimental data, and there is clear scope to integrate microscopy-informed motility dynamics. However, determining which motility model best reflects in vivo motility is non-trivial: 3D motility is an intricate process requiring several metrics to characterize. This complicates model selection and parameterization, which must be performed against several metrics simultaneously. Here we evaluate Brownian motion, Lévy walk and several correlated random walks (CRWs) against the motility dynamics of neutrophils and lymph node T cells under inflammatory conditions by simultaneously considering cellular translational and turn speeds, and meandering indices. Heterogeneous cells exhibiting a continuum of inherent translational speeds and directionalities comprise both datasets, a feature significantly improving capture of in vivo motility when simulated as a CRW. Furthermore, translational and turn speeds are inversely correlated, and the corresponding CRW simulation again improves capture of our in vivo data, albeit to a lesser extent. In contrast, Brownian motion poorly reflects our data. Lévy walk is competitive in capturing some aspects of neutrophil motility, but T cell directional persistence only, therein highlighting the importance of evaluating models against several motility metrics simultaneously. This we achieve through novel application of multi-objective optimization, wherein each model is independently implemented and then parameterized to identify optimal trade-offs in performance against each metric. The resultant Pareto fronts of optimal
Optimization model for UV-Riboflavin corneal cross-linking
NASA Astrophysics Data System (ADS)
Schumacher, S.; Wernli, J.; Scherrer, S.; Bueehler, M.; Seiler, T.; Mrochen, M.
2011-03-01
Nowadays, UV cross-linking is an established method for the treatment of keratectasia, and a standardized protocol is currently used for the cross-linking treatment. We present a theoretical model that predicts the number of induced crosslinks in the corneal tissue as a function of the riboflavin concentration, the radiation intensity, the pre-treatment time and the treatment time. The model is developed by merging the diffusion equation, the equation for the light distribution as a function of the absorbers in the tissue, and a rate equation for the polymerization process. A higher concentration of riboflavin solution, as well as a higher irradiation intensity, increases the number of induced crosslinks. However, stress-strain experiments performed to support the model showed that higher riboflavin concentrations (> 0.125%) do not further increase the stability of the corneal tissue. This is caused by the inhomogeneous distribution of induced crosslinks throughout the cornea, due to the uneven absorption of the UV light. The new model offers the possibility of optimizing the treatment individually for every patient, depending on corneal thickness, in terms of efficiency, safety and treatment time.
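The structure of such a model (light attenuation coupled to a polymerization rate equation) can be sketched in a few lines. The code below is a simplified depth-resolved caricature, not the authors' calibrated model: riboflavin absorbs UV following Beer-Lambert, crosslinks form at a rate proportional to the local intensity and riboflavin concentration, and riboflavin is slowly consumed; every constant is an assumed placeholder.

```python
import numpy as np

depth = np.linspace(0, 500e-6, 200)      # corneal depth grid (m)
dz = depth[1] - depth[0]
eps = 2.0e5                              # absorptivity (1/(M m)), assumed
I0 = 3.0                                 # surface irradiance (mW/cm^2), assumed
k = 1.0e-2                               # polymerization rate constant, assumed
R = np.full_like(depth, 1.0e-3)          # riboflavin concentration (M)
C = np.zeros_like(depth)                 # crosslink density (arbitrary units)

dt, steps = 1.0, 1800                    # 30 minutes of irradiation
for _ in range(steps):
    # Beer-Lambert attenuation by the riboflavin above each depth point
    I = I0 * np.exp(-eps * np.cumsum(R) * dz)
    rate = k * I * R                     # local crosslink formation rate
    C += rate * dt
    R = np.maximum(R - rate * dt * 1e-3, 0.0)   # slow riboflavin consumption
```

Because the UV light is absorbed before reaching the deep stroma, the crosslink density decays with depth, which is the inhomogeneity the abstract identifies as the reason higher riboflavin concentrations stop adding stability.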
Computer model for characterizing, screening, and optimizing electrolyte systems
Gering, Kevin L.
2015-06-15
Electrolyte systems in contemporary batteries are tasked with operating under increasing performance requirements. All battery operation is in some way tied to the electrolyte and how it interacts with various regions within the cell environment. Because the electrolyte plays a crucial role in battery performance and longevity, it is imperative that accurate, physics-based models be developed to characterize key electrolyte properties while keeping pace with the increasing complexity of these liquid systems. Advanced models are needed because laboratory measurements require significant resources to carry out for even a modest experimental matrix. The Advanced Electrolyte Model (AEM) developed at the INL is a proven capability designed to explore molecular-to-macroscale aspects of electrolyte behavior, and it can be used to drastically reduce the time required to characterize and optimize electrolytes. Although it is applied most frequently to lithium-ion battery systems, it is general in its theory and can be used toward numerous other targets and intended applications. This capability is unique, powerful, relevant to present and future electrolyte development, and without peer. It redefines electrolyte modeling for highly complex contemporary systems, wherein significant steps have been taken to capture the reality of electrolyte behavior in the electrochemical cell environment. This capability can have a very positive impact on accelerating domestic battery development to support aggressive vehicle and energy goals in the 21st century.
Performance Optimization of NEMO Oceanic Model at High Resolution
NASA Astrophysics Data System (ADS)
Epicoco, Italo; Mocavero, Silvia; Aloisio, Giovanni
2014-05-01
The NEMO oceanic model is based on the Navier-Stokes equations along with a nonlinear equation of state, which couples the two active tracers (temperature and salinity) to the fluid velocity. The code is written in Fortran 90 and parallelized using MPI. The resolution of the global ocean models used today for climate change studies limits prediction accuracy. To overcome this limit, a new high-resolution global model based on NEMO, simulating at 1/16° with 100 vertical levels, has been developed at CMCC. The model is computationally and memory intensive, so it requires many resources to run, and an optimization activity is needed. The strategy requires a preliminary analysis to highlight scalability bottlenecks; this has been performed on a SandyBridge architecture at CMCC, where an efficiency of 48% on 7K cores (the maximum available) has been achieved. The analysis has also been carried out at the routine level, so that improvement actions can be designed either for the entire code or for a single kernel. For example, the analysis highlighted a loss of performance due to the routine implementing the north fold algorithm (i.e. handling the points at the north pole of the three-pole grid), whose implementation needs to be optimized. The folding considers only the last 4 rows at the top of the global domain and applies a rotation pivoting on the point in the middle: during the folding, the point at the top left is updated with the value of the point at the bottom right, and so on. The current version of the parallel algorithm is based on domain decomposition; each MPI process takes care of a block of points and can update its points using values belonging to the symmetric process. In the current implementation, each received message is placed in a buffer with a number of elements equal to the total dimension of the global domain. Each process sweeps the entire buffer, but only a part of that computation is really useful for the
20nm CMP model calibration with optimized metrology data and CMP model applications
NASA Astrophysics Data System (ADS)
Katakamsetty, Ushasree; Koli, Dinesh; Yeo, Sky; Hui, Colin; Ghulghazaryan, Ruben; Aytuna, Burak; Wilson, Jeff
2015-03-01
Chemical Mechanical Polishing (CMP) is an essential process for planarization of the wafer surface in semiconductor manufacturing. The CMP process helps to produce smaller ICs with more electronic circuits, improving chip speed and performance. CMP also helps to increase throughput and yield, which reduces an IC manufacturer's total production costs. A CMP simulation model helps to predict CMP manufacturing hotspots early and to minimize CMP and CMP-induced lithography and etch defects [2]. In the advanced process nodes, conventional dummy fill insertion for uniform density cannot address all the CMP short-range, long-range, multi-layer stacking and other effects such as pad conditioning and slurry selectivity. In this paper, we present the flow for 20nm CMP modeling using Mentor Graphics CMP modeling tools to build a multilayer Cu-CMP model and study hotspots. We present the inputs required for good CMP model calibration, the challenges faced with metrology collection, and techniques to optimize the wafer cost. We showcase the CMP model validation results and the model's application to predicting multilayer topography accumulation effects for hotspot detection. We provide the flow for early detection of CMP hotspots with Calibre CMPAnalyzer to improve Design-for-Manufacturability (DFM) robustness.
Modeling marine surface microplastic transport to assess optimal removal locations
NASA Astrophysics Data System (ADS)
Sherman, Peter; van Sebille, Erik
2016-01-01
Marine plastic pollution is an ever-increasing problem that demands immediate mitigation and reduction plans. Here, a model based on satellite-tracked buoy observations, scaled to a large set of observations of microplastic from surface trawls, was used to simulate the transport of plastics floating on the ocean surface from 2015 to 2025, with the goal of assessing the optimal marine microplastic removal locations for two scenarios: removing the most surface microplastic, and reducing the impact on ecosystems, using plankton growth as a proxy. The simulations show that the optimal removal locations are primarily located off the coast of China and in the Indonesian Archipelago for both scenarios. Our estimates show that 31% of the modeled microplastic mass can be removed by 2025 using 29 plastic collectors operating at a 45% capture efficiency at these locations, compared with only 17% when the 29 plastic collectors are moored in the North Pacific garbage patch, between Hawaii and California. The overlap of ocean surface microplastics and phytoplankton growth can be reduced by 46% at our proposed locations, whereas sinks in the North Pacific can reduce the overlap by only 14%. These results are an indication that oceanic plastic removal might be more effective, both in removing a greater microplastic mass and in reducing potential harm to marine life, when performed closer to shore than inside the plastic accumulation zones in the centers of the gyres.
Optimization of the artificial urinary sphincter: modelling and experimental validation
NASA Astrophysics Data System (ADS)
Marti, Florian; Leippold, Thomas; John, Hubert; Blunschi, Nadine; Müller, Bert
2006-03-01
The artificial urinary sphincter should be long enough to prevent strangulation of the urethral tissue and short enough to avoid improper dissection of the surrounding tissue. To optimize the sphincter length, an empirical three-parameter urethra compression model is proposed, based on the mechanical properties of the urethra: wall pressure, tissue-response rim force and sphincter periphery length. In vitro studies using explanted animal or human urethras and different artificial sphincters demonstrate its applicability. The pressure required of the sphincter to close the urethra is shown to be a linear function of the bladder pressure, and the force to close the urethra depends linearly on the sphincter length. Human urethras display the same dependences as the urethras of pig, dog, sheep and calf; quantitatively, however, sow urethras resemble the human ones best. For the human urethras, the mean wall pressure corresponds to (-12.6 ± 0.9) cmH2O and (-8.7 ± 1.1) cmH2O, the rim length to (3.0 ± 0.3) mm and (5.1 ± 0.3) mm, and the rim force to (60 ± 20) mN and (100 ± 20) mN for urethra opening and closing, respectively. Assuming an intravesical pressure of 40 cmH2O and an external pressure on the urethra of 60 cmH2O, the model leads to an optimized sphincter length of (17.3 ± 3.8) mm.
NASA Astrophysics Data System (ADS)
Sreekanth, J.; Datta, Bithin
2011-04-01
Approximation surrogates are used to substitute for the numerical simulation model within optimization algorithms in order to reduce the computational burden of the coupled simulation-optimization methodology. The practical utility of surrogate-based simulation-optimization has been limited, mainly because of the uncertainty in surrogate model simulations. We develop a surrogate-based coupled simulation-optimization methodology for deriving optimal extraction strategies for coastal aquifer management that considers the predictive uncertainty of the surrogate model. Optimization models considering two conflicting objectives are solved using a multiobjective genetic algorithm: maximizing the pumping from production wells and minimizing the barrier-well pumping required for hydraulic control of saltwater intrusion. The density-dependent flow and transport simulation model FEMWATER is used to generate input-output patterns of groundwater extraction rates and resulting salinity levels. The nonparametric bootstrap method is used to generate different realizations of this data set, which are used to train different surrogate models, via genetic programming, for predicting salinity intrusion in coastal aquifers. The predictive uncertainty of these surrogate models is quantified, and an ensemble of surrogate models is used in a multiple-realization optimization model to derive the optimal extraction strategies; the multiple realizations refer to the salinity predictions of the different surrogate models in the ensemble. Optimal solutions are obtained for different reliability levels of the surrogate models and compared against the solutions obtained using a chance-constrained optimization formulation and a single-surrogate-based model. The ensemble-based approach is found to provide reliable solutions for coastal aquifer management while retaining the advantage of surrogate models in reducing computational burden.
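The ensemble idea can be sketched with a toy problem: bootstrap-resample the simulation data, fit one surrogate per resample (a linear fit here, standing in for genetic programming), and accept a pumping rate only if the salinity constraint holds in at least the desired fraction of surrogates. All data below are synthetic, not FEMWATER output.

```python
import numpy as np

rng = np.random.default_rng(2)
pump = rng.uniform(0, 10, 60)                         # pumping rates (assumed units)
salinity = 0.5 + 0.3 * pump + rng.normal(0, 0.2, 60)  # stand-in "simulation" data

def fit_surrogate(q, s):
    # simple linear surrogate (the paper uses genetic programming instead)
    a, b = np.polyfit(q, s, 1)
    return lambda x: a * x + b

# bootstrap realizations of the data set, one surrogate per realization
ensemble = [fit_surrogate(*(lambda i: (pump[i], salinity[i]))(rng.integers(0, 60, 60)))
            for _ in range(200)]

limit, reliability = 2.5, 0.95       # salinity limit and required reliability level

def feasible(x):
    # constraint must hold in at least `reliability` of the surrogate realizations
    preds = np.array([m(x) for m in ensemble])
    return np.mean(preds <= limit) >= reliability

candidates = np.linspace(0, 10, 1001)
best = max(c for c in candidates if feasible(c))      # maximum reliable pumping
```

Raising the reliability level shrinks the admissible pumping, which mirrors the trade-off the paper explores between extraction and confidence in the surrogate predictions.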
Optimal spatiotemporal reduced order modeling for nonlinear dynamical systems
NASA Astrophysics Data System (ADS)
LaBryer, Allen
Proposed in this dissertation is a novel reduced order modeling (ROM) framework called optimal spatiotemporal reduced order modeling (OPSTROM) for nonlinear dynamical systems. The OPSTROM approach is a data-driven methodology for the synthesis of multiscale reduced order models (ROMs) which can be used to enhance the efficiency and reliability of under-resolved simulations for nonlinear dynamical systems. In the context of nonlinear continuum dynamics, the OPSTROM approach relies on the concept of embedding subgrid-scale models into the governing equations in order to account for the effects due to unresolved spatial and temporal scales. Traditional ROMs neglect these effects, whereas most other multiscale ROMs account for these effects in ways that are inconsistent with the underlying spatiotemporal statistical structure of the nonlinear dynamical system. The OPSTROM framework presented in this dissertation begins with a general system of partial differential equations, which are modified for an under-resolved simulation in space and time with an arbitrary discretization scheme. Basic filtering concepts are used to demonstrate the manner in which residual terms, representing subgrid-scale dynamics, arise with a coarse computational grid. Models for these residual terms are then developed by accounting for the underlying spatiotemporal statistical structure in a consistent manner. These subgrid-scale models are designed to provide closure by accounting for the dynamic interactions between spatiotemporal macroscales and microscales which are otherwise neglected in a ROM. For a given resolution, the predictions obtained with the modified system of equations are optimal (in a mean-square sense) as the subgrid-scale models are based upon principles of mean-square error minimization, conditional expectations and stochastic estimation. Methods are suggested for efficient model construction, appraisal, error measure, and implementation with a couple of well-known time
Constrained Multiobjective Optimization Algorithm Based on Immune System Model.
Qian, Shuqu; Ye, Yongqiang; Jiang, Bin; Wang, Jianhong
2016-09-01
An immune optimization algorithm, based on a model of the biological immune system, is proposed to solve multiobjective optimization problems with multimodal nonlinear constraints. First, the initial population is divided into a feasible nondominated population and an infeasible/dominated population. The feasible nondominated individuals focus on exploring the nondominated front through cloning and hypermutation based on a proposed affinity design approach, while the infeasible/dominated individuals are exploited and improved via simulated binary crossover and polynomial mutation operations. Then, to accelerate the convergence of the proposed algorithm, a transformation technique is applied to the combined population of the two offspring populations. Finally, a crowded-comparison strategy is used to create the next-generation population. In numerical experiments, a series of benchmark constrained multiobjective optimization problems is used to evaluate the performance of the proposed algorithm, which is also compared to several state-of-the-art algorithms in terms of the inverted generational distance and hypervolume indicators. The results indicate that the new method achieves competitive performance, and even statistically significantly better results than previous algorithms, on most of the benchmark suite. PMID:26285230
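The first step of the algorithm, splitting the population into a feasible nondominated set and the infeasible/dominated remainder, can be sketched as follows; the toy bi-objective problem and the minimization convention are illustrative assumptions, not taken from the paper.

```python
def dominates(a, b):
    # a dominates b: no worse in every objective, strictly better in at least
    # one (all objectives minimized).
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def partition(population, objectives, feasible):
    # Split the population into the feasible nondominated set and the
    # infeasible/dominated remainder, as in the algorithm's first step.
    front, rest = [], []
    for i, ind in enumerate(population):
        dominated = any(
            feasible(population[j]) and dominates(objectives(population[j]), objectives(ind))
            for j in range(len(population)) if j != i
        )
        (front if feasible(ind) and not dominated else rest).append(ind)
    return front, rest

# Toy bi-objective problem: minimize (x, 1 - x) subject to x <= 0.8.
pop = [0.1, 0.5, 0.9, 0.3]
front, rest = partition(pop, lambda x: (x, 1 - x), lambda x: x <= 0.8)
print(front, rest)
```

The two resulting subpopulations would then be evolved by different operators (clone/hypermutation vs. crossover/mutation), as the abstract describes.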
Optimization of Forward Wave Modeling on Contemporary HPC Architectures
Krueger, Jens; Micikevicius, Paulius; Williams, Samuel
2012-07-20
Reverse Time Migration (RTM) is one of the main approaches in the seismic processing industry for imaging the subsurface structure of the Earth. While RTM provides qualitative advantages over its predecessors, it has a high computational cost warranting implementation on HPC architectures. We focus on three progressively more complex kernels extracted from RTM: for isotropic (ISO), vertical transverse isotropic (VTI), and tilted transverse isotropic (TTI) media. In this work, we examine performance optimization of forward wave modeling, which describes the computational kernels used in RTM, on emerging multi- and manycore processors and introduce a novel common subexpression elimination optimization for TTI kernels. We compare attained performance and energy efficiency in both single-node and distributed-memory environments in order to satisfy industry's demands for fidelity, performance, and energy efficiency. Moreover, we discuss the interplay between architecture (chip and system) and optimizations (both on-node computation and MPI communication), highlighting the importance of NUMA-aware approaches. Ultimately, our results show we can improve CPU energy efficiency by more than 10× on Magny-Cours nodes, while acceleration via multiple GPUs can surpass the energy-efficient Intel Sandy Bridge by as much as 3.6×.
Multi-level systems modeling and optimization for novel aircraft
NASA Astrophysics Data System (ADS)
Subramanian, Shreyas Vathul
This research combines the disciplines of system-of-systems (SoS) modeling, platform-based design, optimization and evolving design spaces to achieve a novel capability for designing solutions to key aeronautical mission challenges. A central innovation in this approach is the confluence of multi-level modeling (from sub-systems to the aircraft system to aeronautical system-of-systems) in a way that coordinates the appropriate problem formulations at each level and enables parametric search in design libraries for solutions that satisfy level-specific objectives. The work here addresses the topic of SoS optimization and discusses problem formulation, solution strategy, the need for new algorithms that address special features of this problem type, and also demonstrates these concepts using two example application problems - a surveillance UAV swarm problem, and the design of noise optimal aircraft and approach procedures. This topic is critical since most new capabilities in aeronautics will be provided not just by a single air vehicle, but by aeronautical Systems of Systems (SoS). At the same time, many new aircraft concepts are pressing the boundaries of cyber-physical complexity through the myriad of dynamic and adaptive sub-systems that are rising up the TRL (Technology Readiness Level) scale. This compositional approach is envisioned to be active at three levels: validated sub-systems are integrated to form conceptual aircraft, which are further connected with others to perform a challenging mission capability at the SoS level. While these multiple levels represent layers of physical abstraction, each discipline is associated with tools of varying fidelity forming strata of 'analysis abstraction'. Further, the design (composition) will be guided by a suitable hierarchical complexity metric formulated for the management of complexity in both the problem (as part of the generative procedure and selection of fidelity level) and the product (i.e., is the mission
Thermal modeling and optimization of a thermally matched energy harvester
NASA Astrophysics Data System (ADS)
Boughaleb, J.; Arnaud, A.; Cottinet, P. J.; Monfray, S.; Gelenne, P.; Kermel, P.; Quenard, S.; Boeuf, F.; Guyomar, D.; Skotnicki, T.
2015-08-01
The interest in energy harvesting devices has grown with the development of wireless sensors requiring small amounts of energy to function. The present article addresses the thermal investigation of a coupled piezoelectric and bimetal-based heat engine. The thermal energy harvester in question converts low-grade heat flows into electrical charges through a two-step conversion mechanism whose key point is the ability to maintain a significant thermal gradient without any heat sink. Many studies have previously focused on the electrical properties of this innovative device, but until now no thermal model has been able to describe the device's specific behavior or improve its thermal performance. The research reported in this paper focuses on modeling the harvester using an equivalent electrical circuit approach. It is shown that knowledge of the thermal properties inside the device and a good understanding of its heat exchange with the surroundings play a key role in the optimization procedure. To validate the thermal model, finite element analyses as well as experimental measurements on a hot plate were carried out and compared. The proposed model provides a practical guideline for improving the generator design to obtain a thermally matched energy harvester that can function over a wide range of hot-source temperatures with the same bimetal. A direct application of this study has been implemented on scaled structures to maintain a significant temperature difference between the cold surface and the hot reservoir. Using the equations of the thermal model, predictions of the thermal properties were evaluated as a function of the scaling factor, and solutions for future thermal improvements are presented.
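The equivalent-electrical-circuit idea can be illustrated with a minimal steady-state series network, where thermal resistances play the role of electrical resistances and heat flow the role of current. The resistance values below are hypothetical, not the ones identified for the actual harvester.

```python
def node_temperatures(t_hot, t_ambient, resistances):
    # Series thermal network: heat flow Q = ΔT / ΣR (circuit analogy), and
    # the temperature drop across each element is Q * R_i.
    q = (t_hot - t_ambient) / sum(resistances)
    temps = [t_hot]
    for r in resistances:
        temps.append(temps[-1] - q * r)
    return q, temps

# Hypothetical resistances (K/W): hot-plate contact, bimetal cavity, top-side
# convection to ambient; none of these are the paper's identified values.
q, temps = node_temperatures(t_hot=70.0, t_ambient=25.0, resistances=[2.0, 5.0, 8.0])
print(round(q, 2), [round(t, 1) for t in temps])  # heat flow and node temperatures
```

Thermal matching amounts to sizing these resistances so that a large share of the total temperature drop falls across the bimetal cavity rather than the interfaces.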
Optimizing nanomedicine pharmacokinetics using physiologically based pharmacokinetics modelling
Moss, Darren Michael; Siccardi, Marco
2014-01-01
The delivery of therapeutic agents is characterized by numerous challenges including poor absorption, low penetration in target tissues and non-specific dissemination in organs, leading to toxicity or poor drug exposure. Several nanomedicine strategies have emerged as an advanced approach to enhance drug delivery and improve the treatment of several diseases. Numerous processes mediate the pharmacokinetics of nanoformulations, with the absorption, distribution, metabolism and elimination (ADME) being poorly understood and often differing substantially from traditional formulations. Understanding how nanoformulation composition and physicochemical properties influence drug distribution in the human body is of central importance when developing future treatment strategies. A helpful pharmacological tool to simulate the distribution of nanoformulations is represented by physiologically based pharmacokinetics (PBPK) modelling, which integrates system data describing a population of interest with drug/nanoparticle in vitro data through a mathematical description of ADME. The application of PBPK models for nanomedicine is in its infancy and characterized by several challenges. The integration of property–distribution relationships in PBPK models may benefit nanomedicine research, giving opportunities for innovative development of nanotechnologies. PBPK modelling has the potential to improve our understanding of the mechanisms underpinning nanoformulation disposition and allow for more rapid and accurate determination of their kinetics. This review provides an overview of the current knowledge of nanomedicine distribution and the use of PBPK modelling in the characterization of nanoformulations with optimal pharmacokinetics. Linked Articles This article is part of a themed section on Nanomedicine. To view the other articles in this section visit http://dx.doi.org/10.1111/bph.2014.171.issue-17 PMID:24467481
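A minimal PBPK-style sketch, reduced to two compartments (plasma and a lumped tissue) with first-order transfer and elimination; real PBPK models for nanoformulations involve many organ compartments and system-specific parameters, and all rate constants below are illustrative.

```python
def simulate_pbpk(dose, k12, k21, k10, dt=0.01, t_end=24.0):
    # Two-compartment sketch: central (plasma) and peripheral (tissue) drug
    # amounts with first-order transfer (k12, k21) and elimination (k10),
    # integrated with explicit Euler.
    central, peripheral = dose, 0.0
    history = []
    for _ in range(round(t_end / dt)):
        dc = -(k12 + k10) * central + k21 * peripheral
        dp = k12 * central - k21 * peripheral
        central += dc * dt
        peripheral += dp * dt
        history.append((central, peripheral))
    return history

# Illustrative rate constants (1/h) for a 100-unit intravenous dose.
hist = simulate_pbpk(dose=100.0, k12=0.3, k21=0.1, k10=0.2)
print(round(hist[-1][0] + hist[-1][1], 3))  # drug remaining after 24 h
```

Property-distribution relationships of the kind discussed above would enter such a model by making the transfer constants functions of nanoparticle size, charge, or coating.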
Optimal hemodynamic response model for functional near-infrared spectroscopy
Kamran, Muhammad A.; Jeong, Myung Yung; Mannan, Malik M. N.
2015-01-01
Functional near-infrared spectroscopy (fNIRS) is an emerging non-invasive brain imaging technique that measures brain activity by means of near-infrared light of 650–950 nm wavelengths. The cortical hemodynamic response (HR) differs in its attributes across brain regions and on repetition of trials, even if the experimental paradigm is kept exactly the same. Therefore, an HR model that can estimate such variations in the response is the objective of this research. The canonical hemodynamic response function (cHRF) is modeled by two Gamma functions with six unknown parameters (four to model the shape and the other two for scaling and baseline, respectively). The measured signal is modeled as a linear combination of the HRF, a baseline, and physiological noises (whose amplitudes and frequencies are also unknown). An objective function is formulated as the square of the residuals with constraints on the 12 free parameters. The resulting problem is solved with an iterative optimization algorithm that estimates the unknown parameters in the model. Inter-subject variations in the HRF and physiological noises have been estimated to produce better cortical functional maps. The accuracy of the algorithm has been verified using 10 real and 15 simulated data sets. Ten healthy subjects participated in the experiment, and their HRFs for finger-tapping tasks were estimated and analyzed. The statistical significance of the estimated activity-strength parameters was verified by statistical analysis (i.e., t-value > t-critical and p-value < 0.05). PMID:26136668
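The double-gamma cHRF described above can be sketched as follows; the shape parameters and undershoot ratio use common default values, not the subject-specific values estimated by the iterative optimization in the study.

```python
import math

def gamma_pdf(t, shape, rate):
    # Gamma density used as the basis function of the hemodynamic response.
    if t <= 0:
        return 0.0
    return (rate ** shape) * t ** (shape - 1) * math.exp(-rate * t) / math.gamma(shape)

def chrf(t, a1=6.0, b1=1.0, a2=16.0, b2=1.0, c=1 / 6):
    # Canonical HRF: a peak Gamma minus a scaled undershoot Gamma. The four
    # shape parameters (a1, b1, a2, b2) and the ratio c are common defaults,
    # not the study's estimated values.
    return gamma_pdf(t, a1, b1) - c * gamma_pdf(t, a2, b2)

# Locate the response peak on a 0.1 s grid over 0-30 s.
peak_t = max(range(301), key=lambda i: chrf(i / 10.0)) / 10.0
print(peak_t)
```

Fitting subject-specific responses, as in the paper, amounts to letting an optimizer adjust these shape, scale, and baseline parameters (plus noise terms) to minimize the squared residuals against the measured signal.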
Modeling and optimization of energy storage system for microgrid
NASA Astrophysics Data System (ADS)
Qiu, Xin
The vanadium redox flow battery (VRB) is well suited for microgrid and renewable energy applications. This thesis provides a practical analysis of the battery itself and its application in microgrid systems. The first paper analyzes VRB use in a microgrid system. The first part of the paper develops a reduced-order circuit model of the VRB and analyzes its experimental performance efficiency during deployment. Statistical methods and neural network approximation are used to estimate the system parameters. The second part of the paper addresses the implementation issues of the VRB application in a photovoltaic-based microgrid system. A new dc-dc converter is proposed to provide improved charging performance. The paper was published in IEEE Transactions on Smart Grid, Vol. 5, No. 4, July 2014. The second paper studies VRB use within a microgrid system from a practical perspective. A reduced-order circuit model of the VRB is introduced that includes the losses from the balance of plant, including system and environmental controls. The proposed model includes the circulation pumps and the HVAC system that regulates the environment of the VRB enclosure. In this paper, the VRB model is extended to include the ESS environmental controls, yielding a more realistic efficiency profile. The paper was submitted to IEEE Transactions on Sustainable Energy. The third paper discusses the optimal control strategy when the VRB works with another type of battery in a microgrid system, extending the work of the first paper. A high-level control strategy is developed to coordinate a lead-acid battery and a VRB with reinforcement learning. The paper is to be submitted to IEEE Transactions on Smart Grid.
A canopy-type similarity model for wind farm optimization
NASA Astrophysics Data System (ADS)
Markfort, Corey D.; Zhang, Wei; Porté-Agel, Fernando
2013-04-01
The atmospheric boundary layer (ABL) flow through and over wind farms has been found to be similar to canopy-type flows, with characteristic flow development and shear penetration length scales (Markfort et al., 2012). Wind farms capture momentum from the ABL both at the leading edge and from above. We examine this further with an analytical canopy-type model. Within the flow development region, momentum is advected into the wind farm and wake turbulence draws excess momentum in from between turbines. This spatial heterogeneity of momentum within the wind farm is characterized by large dispersive momentum fluxes. Once the flow within the farm is developed, the area-averaged velocity profile exhibits a characteristic inflection point near the top of the wind farm, similar to that of canopy-type flows. The inflected velocity profile is associated with the presence of a dominant characteristic turbulence scale, which may be responsible for a significant portion of the vertical momentum flux. Prediction of this scale is useful for determining the amount of available power for harvesting. The new model is tested with results from wind tunnel experiments, which were conducted to characterize the turbulent flow in and above model wind farms in aligned and staggered configurations. The model is useful for representing wind farms in regional scale models, for the optimization of wind farms considering wind turbine spacing and layout configuration, and for assessing the impacts of upwind wind farms on nearby wind resources. Markfort CD, W Zhang and F Porté-Agel. 2012. Turbulent flow and scalar transport through and over aligned and staggered wind farms. Journal of Turbulence. 13(1) N33: 1-36. doi:10.1080/14685248.2012.709635.
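A minimal sketch of a canopy-type velocity profile: exponential decay inside the wind-farm layer and a displacement-height log law above, matched at the farm top where the inflection point sits. All parameter values are illustrative, not the wind-tunnel-fitted ones.

```python
import math

def canopy_profile(z, h=0.1, u_h=2.5, a=1.5, u_star=0.3, d=0.07, k=0.4):
    # Exponential profile inside the wind-farm "canopy" (z <= h) and a
    # displaced log law above, matched at z = h where the inflection sits.
    if z <= h:
        return u_h * math.exp(a * (z / h - 1.0))
    return u_h + (u_star / k) * math.log((z - d) / (h - d))

# Velocity deep in the array, at the farm top, and above (model-scale meters).
print(round(canopy_profile(0.05), 2), canopy_profile(0.1), round(canopy_profile(0.2), 2))
```

The inflection at z = h is what supports the dominant shear-layer turbulence scale mentioned above, and the attenuation coefficient `a` would encode turbine spacing and layout in a wind-farm application.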
Pulsed pumping process optimization using a potential flow model.
Tenney, C M; Lastoskie, C M
2007-08-15
A computational model is applied to the optimization of pulsed pumping systems for efficient in situ remediation of groundwater contaminants. In the pulsed pumping mode of operation, periodic rather than continuous pumping is used. During the pump-off or trapping phase, natural gradient flow transports contaminated groundwater into a treatment zone surrounding a line of injection and extraction wells that transect the contaminant plume. Prior to breakthrough of the contaminated water from the treatment zone, the wells are activated and the pump-on or treatment phase ensues, wherein extracted water is augmented to stimulate pollutant degradation and recirculated for a sufficient period of time to achieve mandated levels of contaminant removal. An important design consideration in pulsed pumping groundwater remediation systems is the pumping schedule adopted to best minimize operational costs for the well grid while still satisfying treatment requirements. Using an analytic two-dimensional potential flow model, optimal pumping frequencies and pumping event durations have been investigated for a set of model aquifer-well systems with different well spacings and well-line lengths, and varying aquifer physical properties. The results for homogeneous systems with greater than five wells and moderate to high pumping rates are reduced to a single, dimensionless correlation. Results for heterogeneous systems are presented graphically in terms of dimensionless parameters to serve as an efficient tool for initial design and selection of the pumping regimen best suited for pulsed pumping operation for a particular well configuration and extraction rate. In the absence of significant retardation or degradation during the pump-off phase, average pumping rates for pulsed operation were found to be greater than the continuous pumping rate required to prevent contaminant breakthrough. PMID:17350717
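The analytic potential-flow idea can be sketched by superposing a uniform natural-gradient flow with line sinks/sources for the extraction and injection wells; the well positions and strengths below are arbitrary illustrations, not the paper's dimensionless design correlation.

```python
import cmath

def complex_velocity(z, u_inf, wells):
    # Complex potential W(z) = u_inf*z + sum Q_k/(2*pi) * ln(z - z_k);
    # dW/dz is the conjugate velocity, so conjugate to recover (u, v).
    w = complex(u_inf, 0.0)
    for zk, qk in wells:
        w += qk / (2 * cmath.pi * (z - zk))
    return w.conjugate()

# An injection/extraction pair transecting the plume (Q > 0: injection source,
# Q < 0: extraction sink); positions and strengths are dimensionless examples.
wells = [(complex(0, 1), -1.0), (complex(0, -1), 1.0)]
v = complex_velocity(complex(2, 0), u_inf=1.0, wells=wells)
print(round(v.real, 3), round(v.imag, 3))
```

Tracing streamlines of this field through the pump-off phase is what determines when contaminated water would break through the treatment zone, and hence the pumping schedule.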
NASA Technical Reports Server (NTRS)
Sherry, Lance; Ferguson, John; Hoffman, Karla; Donohue, George; Beradino, Frank
2012-01-01
This report describes the Airline Fleet, Route, and Schedule Optimization Model (AFRS-OM), which is designed to provide insights into airline decision-making with regard to markets served, the schedule of flights in these markets, the type of aircraft assigned to each scheduled flight, load factors, airfares, and airline profits. The main inputs to the model are hedged fuel prices, airport capacity limits, and candidate markets. Embedded in the model are aircraft performance and associated cost factors, and willingness-to-pay (i.e., demand vs. airfare) curves. Case studies demonstrate the application of the model to analysis of the effects of increased capacity and changes in operating costs (e.g., fuel prices). Although there are differences between airports (due to differences in the magnitude of travel demand and sensitivity to airfare), the system is more sensitive to changes in fuel prices than to capacity. Further, the benefits of modernization in the form of increased capacity could be undermined by increases in hedged fuel prices.
Optimization modeling to maximize population access to comprehensive stroke centers
Branas, Charles C.; Kasner, Scott E.; Wolff, Catherine; Williams, Justin C.; Albright, Karen C.; Carr, Brendan G.
2015-01-01
Objective: The location of comprehensive stroke centers (CSCs) is critical to ensuring rapid access to acute stroke therapies; we conducted a population-level virtual trial simulating change in access to CSCs using optimization modeling to selectively convert primary stroke centers (PSCs) to CSCs. Methods: Up to 20 certified PSCs per state were selected for conversion to maximize the population with 60-minute CSC access by ground and air. Access was compared across states based on region and the presence of state-level emergency medical service policies preferentially routing patients to stroke centers. Results: In 2010, there were 811 Joint Commission PSCs and 0 CSCs in the United States. Of the US population, 65.8% had 60-minute ground access to PSCs. After adding up to 20 optimally located CSCs per state, 63.1% of the US population had 60-minute ground access and 86.0% had 60-minute ground/air access to a CSC. Across states, median CSC access was 55.7% by ground (interquartile range 35.7%–71.5%) and 85.3% by ground/air (interquartile range 59.8%–92.1%). Ground access was lower in Stroke Belt states compared with non–Stroke Belt states (32.0% vs 58.6%, p = 0.02) and lower in states without emergency medical service routing policies (52.7% vs 68.3%, p = 0.04). Conclusion: Optimal system simulation can be used to develop efficient care systems that maximize accessibility. Under optimal conditions, a large proportion of the US population will be unable to access a CSC within 60 minutes. PMID:25740858
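A simple way to approximate this kind of siting problem is the greedy heuristic for maximal covering, shown below on toy data; the study itself used formal optimization modeling, so this is only an illustrative stand-in.

```python
def greedy_coverage(candidates, budget):
    # Greedy approximation to the maximal covering location problem: at each
    # step convert the PSC whose 60-minute service area adds the most
    # still-uncovered population units. Note: consumes entries of `candidates`.
    chosen, covered = [], set()
    for _ in range(budget):
        best = max(candidates, key=lambda s: len(candidates[s] - covered), default=None)
        if best is None or not candidates[best] - covered:
            break
        covered |= candidates.pop(best)
        chosen.append(best)
    return chosen, covered

# Toy example: three candidate PSCs, each covering a set of census tracts.
sites = {"A": {1, 2, 3}, "B": {3, 4}, "C": {4, 5, 6}}
chosen, covered = greedy_coverage(sites, budget=2)
print(chosen, sorted(covered))
```

In a realistic version, each set would hold population-weighted geographic units reachable within 60 minutes by ground or air, and the budget would be the per-state cap on conversions (up to 20 in the study).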
Computer model for characterizing, screening, and optimizing electrolyte systems
Energy Science and Technology Software Center (ESTSC)
2015-06-15
Electrolyte systems in contemporary batteries are tasked with operating under increasing performance requirements. All battery operation is in some way tied to the electrolyte and how it interacts with various regions within the cell environment. Because the electrolyte plays a crucial role in battery performance and longevity, it is imperative that accurate, physics-based models be developed to characterize key electrolyte properties while keeping pace with the increasing complexity of these liquid systems. Advanced models are needed because laboratory measurements require significant resources to carry out for even a modest experimental matrix. The Advanced Electrolyte Model (AEM) developed at the INL is a proven capability designed to explore molecular-to-macroscale aspects of electrolyte behavior, and can be used to drastically reduce the time required to characterize and optimize electrolytes. Although it is applied most frequently to lithium-ion battery systems, it is general in its theory and can be used toward numerous other targets and intended applications. This capability is unique, powerful, relevant to present and future electrolyte development, and without peer. It redefines electrolyte modeling for highly complex contemporary systems, wherein significant steps have been taken to capture the reality of electrolyte behavior in the electrochemical cell environment. This capability can have a very positive impact on accelerating domestic battery development to support aggressive vehicle and energy goals in the 21st century.
Modeling the minimum enzymatic requirements for optimal cellulose conversion
NASA Astrophysics Data System (ADS)
den Haan, R.; van Zyl, J. M.; Harms, T. M.; van Zyl, W. H.
2013-06-01
Hydrolysis of cellulose is achieved by the synergistic action of endoglucanases, exoglucanases and β-glucosidases. Most cellulolytic microorganisms produce a varied array of these enzymes and the relative roles of the components are not easily defined or quantified. In this study we have used partially purified cellulases produced heterologously in the yeast Saccharomyces cerevisiae to increase our understanding of the roles of some of these components. CBH1 (Cel7), CBH2 (Cel6) and EG2 (Cel5) were separately produced in recombinant yeast strains, allowing their isolation free of any contaminating cellulolytic activity. Binary and ternary mixtures of the enzymes at loadings ranging between 3 and 100 mg g^-1 Avicel allowed us to illustrate the relative roles of the enzymes and their levels of synergy. A mathematical model was created to simulate the interactions of these enzymes on crystalline cellulose, under both isolated and synergistic conditions. Laboratory results from the various mixtures at a range of loadings of recombinant enzymes allowed refinement of the mathematical model. The model can further be used to predict the optimal synergistic mixes of the enzymes. This information can subsequently be applied to help to determine the minimum protein requirement for complete hydrolysis of cellulose. Such knowledge will be greatly informative for the design of better enzymatic cocktails or processing organisms for the conversion of cellulosic biomass to commodity products.
Optimization of precipitation inputs for SWAT modeling in mountainous catchment
NASA Astrophysics Data System (ADS)
Tuo, Ye; Chiogna, Gabriele; Disse, Markus
2016-04-01
Precipitation is often the most important input data in hydrological models when simulating streamflow in mountainous catchments. The Soil and Water Assessment Tool (SWAT), a widely used hydrological model, only makes use of data from the one precipitation gauging station nearest to the centroid of each subcatchment, eventually corrected using the band elevation method. This leads in general to inaccurate representation of subcatchment precipitation, which results in unreliable simulation results in mountainous catchments. To investigate the impact of the precipitation inputs and account for the high spatial and temporal variability of precipitation, we first interpolated 21 years (1990-2010) of daily measured data using the Inverse Distance Weighting (IDW) method. Averaged IDW daily values were calculated at the subcatchment scale to be supplied as optimized precipitation inputs for SWAT. Both datasets (measured data and IDW data) were applied as precipitation inputs to three Alpine subcatchments of the Adige catchment (north-eastern Italy, 12,100 km2). Based on the calibration and validation results, model performance is evaluated according to the Nash-Sutcliffe Efficiency (NSE) and the coefficient of determination (R2). For all three subcatchments, the simulation results with IDW inputs are better than those of the original method, which uses measured inputs from the nearest station. This suggests that the IDW method can improve model performance in Alpine catchments to some extent. By taking into account and weighting the distances between precipitation records, IDW supplies more accurate precipitation inputs for each individual Alpine subcatchment, which as a whole leads to an improved description of the hydrological behavior of the entire Adige catchment.
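IDW itself is compact enough to sketch: each gauge contributes with weight proportional to an inverse power of its distance to the target point. The gauge coordinates and daily values below are made up for illustration.

```python
def idw(station_xy, station_p, target_xy, power=2.0):
    # Inverse Distance Weighting: the estimate at the target point is the
    # gauge-value mean weighted by 1/distance**power.
    num, den = 0.0, 0.0
    for (x, y), p in zip(station_xy, station_p):
        d2 = (x - target_xy[0]) ** 2 + (y - target_xy[1]) ** 2
        if d2 == 0.0:
            return p  # target coincides with a gauge
        w = d2 ** (-power / 2.0)
        num += w * p
        den += w
    return num / den

# Daily precipitation (mm) at three hypothetical gauges, estimated at a
# subcatchment centroid.
gauges = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
print(round(idw(gauges, [12.0, 4.0, 8.0], (2.0, 2.0)), 2))
```

Averaging such estimates over all grid points inside a subcatchment yields the optimized areal precipitation input supplied to SWAT in the study.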
An optimization model for the agroindustrial sector in Antioquia (Colombia, South America)
NASA Astrophysics Data System (ADS)
Fernandez, J.
2015-06-01
This paper proposes a general optimization model for the flower industry, defined using discrete simulation and nonlinear optimization; the mathematical models are solved with ProModel simulation and GAMS optimization tools. The paper defines the operations that constitute production and marketing in the sector, presents data taken directly from each operation through field work and statistically validated, and formulates the discrete simulation model of the operations and the optimization model of the entire industry chain. The model is solved with the tools described above, and the results are validated in a case study.
Yang, Guoxiang; Best, Elly P H
2015-09-15
Best management practices (BMPs) can be used effectively to reduce nutrient loads transported from non-point sources to receiving water bodies. However, methodologies of BMP selection and placement in a cost-effective way are needed to assist watershed management planners and stakeholders. We developed a novel modeling-optimization framework that can be used to find cost-effective solutions of BMP placement to attain nutrient load reduction targets. This was accomplished by integrating a GIS-based BMP siting method, a WQM-TMDL-N modeling approach to estimate total nitrogen (TN) loading, and a multi-objective optimization algorithm. Wetland restoration and buffer strip implementation were the two BMP categories used to explore the performance of this framework, both differing greatly in complexity of spatial analysis for site identification. Minimizing TN load and BMP cost were the two objective functions for the optimization process. The performance of this framework was demonstrated in the Tippecanoe River watershed, Indiana, USA. Optimized scenario-based load reduction indicated that the wetland subset selected by the minimum scenario had the greatest N removal efficiency. Buffer strips were more effective for load removal than wetlands. The optimized solutions provided a range of trade-offs between the two objective functions for both BMPs. This framework can be expanded conveniently to a regional scale because the NHDPlus catchment serves as its spatial computational unit. The present study demonstrated the potential of this framework to find cost-effective solutions to meet a water quality target, such as a 20% TN load reduction, under different conditions. PMID:26188990
Test cell modeling and optimization for FPD-II
Haney, S.W.; Fenstermacher, M.E.
1985-04-10
The Fusion Power Demonstration, Configuration II (FPD-II), will be a DT-burning tandem mirror facility with thermal barriers, designed as the next-step engineering test reactor (ETR) to follow the tandem mirror ignition test machines. Current plans call for FPD-II to be a multi-purpose device. For approximately the first half of its lifetime, it will operate as a high-Q ignition machine designed to reach or exceed engineering break-even and to demonstrate the technological feasibility of tandem mirror fusion. The second half of its operation will focus on the evaluation of candidate reactor blanket designs using a neutral-beam-driven test cell inserted at the midplane of the 90 m long central cell. This machine, called FPD-II+T, uses an insert configuration similar to that used in the MFTF-α+T study. The modeling and optimization of FPD-II+T are the topic of the present paper.
Simulation and optimization models for emergency medical systems planning.
Bettinelli, Andrea; Cordone, Roberto; Ficarelli, Federico; Righini, Giovanni
2014-01-01
The authors address strategic planning problems for emergency medical systems (EMS). In particular, the three following critical decisions are considered: i) how many ambulances to deploy in a given territory at any given point in time to meet the forecasted demand with an appropriate response time; ii) when ambulances should be used for serving non-urgent requests and when they should instead be kept idle for possible incoming urgent requests; iii) how to define an optimal mix of contracts for renting ambulances from private associations to meet the forecasted demand at minimum cost. Analytical models for decision support, based on queuing theory, discrete-event simulation, and integer linear programming, are presented. Computational experiments were carried out on real data from the city of Milan, Italy. PMID:25069023
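Decision i) can be illustrated with a standard queuing-theory building block: the Erlang C formula for an M/M/c system gives the probability that a call must wait, and hence the smallest fleet meeting a wait-probability target. The arrival and service rates below are illustrative, and this sketch ignores the spatial aspects a real EMS plan must handle.

```python
import math

def erlang_c(servers, offered_load):
    # Probability that an arriving call must wait in an M/M/c queue.
    if offered_load >= servers:
        return 1.0  # unstable regime: every caller eventually waits
    inv_sum = sum(offered_load ** k / math.factorial(k) for k in range(servers))
    tail = (offered_load ** servers / math.factorial(servers)) * servers / (servers - offered_load)
    return tail / (inv_sum + tail)

def min_ambulances(calls_per_hour, mean_service_hours, max_wait_prob):
    # Smallest fleet keeping the probability of any wait below the target.
    load = calls_per_hour * mean_service_hours  # offered load in erlangs
    c = max(1, math.ceil(load))
    while erlang_c(c, load) > max_wait_prob:
        c += 1
    return c

# Illustrative: 6 urgent calls/hour, 1 h average service time, <10% may wait.
print(min_ambulances(calls_per_hour=6.0, mean_service_hours=1.0, max_wait_prob=0.1))
```

Decisions ii) and iii) then layer dispatching rules and contract costs on top of this sizing, which is where the simulation and integer-programming models come in.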
Approximate Optimal Control as a Model for Motor Learning
ERIC Educational Resources Information Center
Berthier, Neil E.; Rosenstein, Michael T.; Barto, Andrew G.
2005-01-01
Current models of psychological development rely heavily on connectionist models that use supervised learning. These models adapt network weights when the network output does not match the target outputs computed by some agent. The authors present a model of motor learning in which the child uses exploration to discover appropriate ways of…
Inverse modeling of FIB milling by dose profile optimization
NASA Astrophysics Data System (ADS)
Lindsey, S.; Waid, S.; Hobler, G.; Wanzenböck, H. D.; Bertagnolli, E.
2014-12-01
FIB technologies possess a unique ability to form topographies that are difficult or impossible to generate with binary etching through typical photolithography. The ability to arbitrarily vary the spatial dose distribution, and therefore the amount of milling, opens possibilities for the production of a wide range of functional structures with applications in biology, chemistry, and optics. In practice, however, the realization of these goals is made difficult by the angular dependence of the sputtering yield and by redeposition effects that vary as the topography evolves. An inverse modeling algorithm that optimizes dose profiles, defined as the superposition of time-invariant pixel dose profiles (determined from the beam parameters and pixel dwell times), is presented. The response of the target to a set of pixel dwell times is modeled by numerical continuum simulations utilizing 1st and 2nd order sputtering and redeposition; the resulting surfaces are evaluated with respect to a target topography in an error minimization routine. Two algorithms for the parameterization of pixel dwell times are presented: a direct pixel dwell time method, and an abstracted method that uses a refinable piecewise linear cage function to generate pixel dwell times from a minimal number of parameters. The cage function method demonstrates great flexibility and efficiency, with performance gains exceeding ∼10× over direct fitting for medium to large simulation sets. Furthermore, the refinable nature of the cage function enables solutions to adapt to the desired target function. The optimization algorithm, although working with stationary dose profiles, is demonstrated to be applicable also outside the quasi-static approximation. Experimental data confirm the viability of the solutions for 5 × 7 μm deep lens-like structures defined by 90 pixel dwell times.
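A minimal sketch of the cage-function idea described above, assuming a 1-D scan line: a few control points are linearly interpolated to per-pixel dwell times, and refinement inserts midpoints so the optimizer gains local degrees of freedom. The function names and the use of NumPy interpolation are illustrative, not taken from the paper.

```python
import numpy as np

def cage_to_dwell(ctrl_x, ctrl_t, n_pixels):
    """Expand a piecewise-linear 'cage' of control points into per-pixel dwell times."""
    x = np.linspace(ctrl_x[0], ctrl_x[-1], n_pixels)
    return np.interp(x, ctrl_x, ctrl_t)

def refine(ctrl_x, ctrl_t):
    """Insert a midpoint into every segment; the new cage reproduces the old
    profile exactly but gives the optimizer twice as many adjustable points."""
    ctrl_x, ctrl_t = np.asarray(ctrl_x, float), np.asarray(ctrl_t, float)
    mid_x = 0.5 * (ctrl_x[:-1] + ctrl_x[1:])
    xs = np.sort(np.concatenate([ctrl_x, mid_x]))
    ts = np.interp(xs, ctrl_x, ctrl_t)
    return xs, ts
```

An error-minimization loop would then adjust the few cage ordinates (rather than all pixel dwell times) and call `cage_to_dwell` before each forward simulation.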
Source term identification in atmospheric modelling via sparse optimization
NASA Astrophysics Data System (ADS)
Adam, Lukas; Branda, Martin; Hamburger, Thomas
2015-04-01
Inverse modelling plays an important role in identifying the amount of harmful substances released into the atmosphere during major incidents such as power plant accidents or volcano eruptions. Another possible application of inverse modelling lies in monitoring CO2 emission limits, where only observations at certain places are available and the task is to estimate the total releases at given locations. This gives rise to minimizing the discrepancy between the observations and the model predictions. There are two standard ways of solving such problems. In the first, the discrepancy is regularized by adding additional terms. Such terms may include Tikhonov regularization, distance from a priori information, or a smoothing term. The resulting, usually quadratic, problem is then solved via standard optimization solvers. The second approach assumes that the error term has a (normal) distribution and makes use of Bayesian modelling to identify the source term. Instead of following the above-mentioned approaches, we utilize techniques from the field of compressive sensing. Such techniques look for the sparsest solution (the solution with the smallest number of nonzeros) of a linear system, where a maximal allowed error term may be added to this system. Even though this field is well developed, with many possible solution techniques, most of them do not consider even the simplest constraints that are naturally present in atmospheric modelling. One such example is the nonnegativity of release amounts. We believe that the concept of a sparse solution is natural both in the problem of identifying the source location and in that of identifying the time process of the source release. In the first case, it is usually assumed that there are only a few release points and the task is to find them. In the second case, the time window is usually much longer than the duration of the actual release. In both cases, the optimal solution should contain a large amount of zeros, giving rise to the
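A hedged sketch of sparse, nonnegative source recovery of the kind described: with a linear source-receptor matrix A and observations b, a projected-gradient (ISTA-style) iteration minimizes the residual plus an l1 penalty while enforcing nonnegative release amounts. The solver choice and all names are ours, not the authors'.

```python
import numpy as np

def nonneg_sparse_solve(A, b, lam=0.1, iters=5000):
    """Projected gradient for  min 0.5*||Ax - b||^2 + lam*sum(x)  s.t. x >= 0.
    For x >= 0 the l1 penalty reduces to a linear term, so each iteration is a
    shifted gradient step followed by clipping at zero (the projection)."""
    A, b = np.asarray(A, float), np.asarray(b, float)
    step = 1.0 / np.linalg.norm(A.T @ A, 2)   # 1/L, L = Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)
        x = np.maximum(0.0, x - step * (grad + lam))
    return x
```

Entries whose signal falls below the penalty level are driven exactly to zero, which matches the expectation of few release points and short release windows.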
Multi-model Simulation for Optimal Control of Aeroacoustics.
Collis, Samuel Scott; Chen, Guoquan
2005-05-01
Flow-generated noise, especially rotorcraft noise, has been a serious concern for both commercial and military applications. A particularly important noise source for rotorcraft is Blade-Vortex-Interaction (BVI) noise, a high-amplitude, impulsive sound that often dominates other rotorcraft noise sources. BVI noise is usually caused by unsteady flow changes around the rotor blades due to interactions with vortices previously shed by the blades. A promising approach for reducing BVI noise is to use on-blade controls, such as suction/blowing, micro-flaps/jets, and smart structures. Because the design and implementation of experiments to evaluate such systems are very expensive, efficient computational tools coupled with optimal control systems are required to explore the relevant physics and evaluate the feasibility of using various micro-fluidic devices before committing to hardware. The research in this thesis is to formulate and implement efficient computational tools for the development and study of optimal control and design strategies for complex flow and acoustic systems, with emphasis on rotorcraft applications, especially the BVI noise control problem. The main purpose of aeroacoustic computations is to determine the sound intensity and directivity far away from the noise source. However, the computational cost of using a high-fidelity flow-physics model across the full domain is usually prohibitive, and it might also be less accurate because of numerical diffusion and other problems. Taking advantage of the multi-physics and multi-scale structure of this aeroacoustic problem, we develop a multi-model, multi-domain (near-field/far-field) method based on a discontinuous Galerkin discretization. In this approach the coupling of multiple domains and models is achieved by weakly enforcing continuity of normal fluxes across a coupling surface. For the aeroacoustic control problem of interest, the adjoint equations that determine the sensitivity of the cost
Optimal Control of Distributed Energy Resources using Model Predictive Control
Mayhorn, Ebony T.; Kalsi, Karanjit; Elizondo, Marcelo A.; Zhang, Wei; Lu, Shuai; Samaan, Nader A.; Butler-Purry, Karen
2012-07-22
In an isolated power system (rural microgrid), Distributed Energy Resources (DERs) such as renewable energy resources (wind, solar), energy storage and demand response can be used to complement fossil fueled generators. The uncertainty and variability due to high penetration of wind makes reliable system operations and controls challenging. In this paper, an optimal control strategy is proposed to coordinate energy storage and diesel generators to maximize wind penetration while maintaining system economics and normal operation. The problem is formulated as a multi-objective optimization problem with the goals of minimizing fuel costs and changes in power output of diesel generators, minimizing costs associated with low battery life of energy storage and maintaining system frequency at the nominal operating value. Two control modes are considered for controlling the energy storage to compensate either net load variability or wind variability. Model predictive control (MPC) is used to solve the aforementioned problem and the performance is compared to an open-loop look-ahead dispatch problem. Simulation studies using high and low wind profiles, as well as different MPC prediction horizons, demonstrate the efficacy of the closed-loop MPC in compensating for uncertainties in wind and demand.
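A toy illustration of the receding-horizon idea (not the paper's formulation): at each step a short battery schedule is enumerated, diesel covers the remaining net load, and only the first action is applied before the horizon rolls forward. The cost terms, the discretized battery actions, and all names are simplifying assumptions.

```python
import itertools

def mpc_step(net_load, soc, d_prev, horizon=3,
             batt_acts=(-1.0, 0.0, 1.0), soc_max=3.0,
             fuel=1.0, ramp=0.5):
    """One receding-horizon step: enumerate battery power schedules over a
    short horizon (positive = discharge), score the fuel plus ramping cost of
    the diesel that covers the rest, and return only the first battery action
    together with the best plan's cost (closed-loop MPC)."""
    best, best_plan = float("inf"), None
    for plan in itertools.product(batt_acts, repeat=horizon):
        s, d_last, cost, feasible = soc, d_prev, 0.0, True
        for t in range(horizon):
            s -= plan[t]                          # discharge lowers state of charge
            if not (0.0 <= s <= soc_max):
                feasible = False
                break
            d = max(0.0, net_load[t] - plan[t])   # diesel covers the remainder
            cost += fuel * d + ramp * (d - d_last) ** 2
            d_last = d
        if feasible and cost < best:
            best, best_plan = cost, plan
    return best_plan[0], best
```

Even this brute-force version shows the characteristic MPC behavior: the battery is held back early so that a cheap discharge later avoids a costly diesel ramp.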
Managing and learning with multiple models: Objectives and optimization algorithms
Probert, William J. M.; Hauser, C.E.; McDonald-Madden, E.; Runge, M.C.; Baxter, P.W.J.; Possingham, H.P.
2011-01-01
The quality of environmental decisions should be gauged according to managers' objectives. Management objectives generally seek to maximize quantifiable measures of system benefit, for instance, population growth rate. Reaching these goals often requires a certain degree of learning about the system. Learning can occur by using management action in combination with a monitoring system. Furthermore, actions can be chosen strategically to obtain specific kinds of information. Formal decision making tools can choose actions to favor such learning in two ways: implicitly via the optimization algorithm that is used when there is a management objective (for instance, when using adaptive management), or explicitly by quantifying knowledge and using it as the fundamental project objective, an approach new to conservation. This paper outlines three conservation project objectives - a pure management objective, a pure learning objective, and an objective that is a weighted mixture of these two. We use eight optimization algorithms to choose actions that meet project objectives and illustrate them in a simulated conservation project. The algorithms provide a taxonomy of decision making tools in conservation management when there is uncertainty surrounding competing models of system function. The algorithms build upon each other such that their differences are highlighted and practitioners may see where their decision making tools can be improved. © 2010 Elsevier Ltd.
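The weighted mixture of management and learning objectives can be sketched as follows: the management value of an action is its expected benefit under the current model weights, and the learning value is the expected reduction in entropy of those weights after a Bayesian update on the observed outcome. This is a schematic two-model, binary-outcome example of our own construction, not one of the paper's eight algorithms.

```python
import math

def entropy(p):
    """Shannon entropy (nats) of a discrete belief."""
    return -sum(q * math.log(q) for q in p if q > 0)

def expected_posterior_entropy(belief, success_prob, action):
    """Average entropy of the Bayesian-updated belief over the two possible
    outcomes (success/failure) of taking `action`.
    success_prob[m][a] = model m's predicted success chance of action a."""
    h = 0.0
    for outcome in (1, 0):
        likes = [sp[action] if outcome else 1 - sp[action] for sp in success_prob]
        p_out = sum(b * l for b, l in zip(belief, likes))
        if p_out == 0:
            continue
        post = [b * l / p_out for b, l in zip(belief, likes)]
        h += p_out * entropy(post)
    return h

def best_action(belief, success_prob, weight):
    """Maximize weight*management value + (1-weight)*expected information gain."""
    def score(a):
        value = sum(b * sp[a] for b, sp in zip(belief, success_prob))
        gain = entropy(belief) - expected_posterior_entropy(belief, success_prob, a)
        return weight * value + (1 - weight) * gain
    return max(range(len(success_prob[0])), key=score)
```

With two rival models that disagree only about action 0, a pure management objective (weight 1) picks the higher-value action, while a pure learning objective (weight 0) picks the discriminating one.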
Loth, E.; Tryggvason, G.; Tsuji, Y.; Elghobashi, S. E.; Crowe, Clayton T.; Berlemont, A.; Reeks, M.; Simonin, O.; Frank, Th; Onishi, Yasuo; Van Wachem, B.
2005-09-01
Slurry flows occur in many circumstances, including chemical manufacturing processes; pipeline transfer of coal, sand, and minerals; mud flows; and disposal of dredged materials. In this section we discuss slurry flow applications related to radioactive waste management. The Hanford tank waste solids and interstitial liquids will be mixed to form a slurry so it can be pumped out for retrieval and treatment. The waste is very complex chemically and physically. The ARIEL code is used to model the chemical interactions and fluid dynamics of the waste.
3D modeling and optimization of the ITER ICRH antenna
NASA Astrophysics Data System (ADS)
Louche, F.; Dumortier, P.; Durodié, F.; Messiaen, A.; Maggiora, R.; Milanesio, D.
2011-12-01
The prediction of the coupling properties of the ITER ICRH antenna necessitates the accurate evaluation of the resistance and reactance matrices. The latter are mostly dependent on the geometry of the array, and therefore a model as accurate as possible is needed to precisely compute these matrices. Furthermore, simulations have so far neglected the poloidal and toroidal profile of the plasma, and it is expected that the loading by individual straps will vary significantly due to varying strap-plasma distance. To take this curvature into account, some modifications of the alignment of the straps with respect to the toroidal direction are proposed. It is shown with CST Microwave Studio® [1] that considering two segments in the toroidal direction, i.e. a "V-shaped" toroidal antenna, is sufficient. A new CATIA model including this segmentation has been drawn and imported into both MWS and TOPICA [2] codes. Simulations show a good agreement of the impedance matrices in vacuum. Various modifications of the geometry are proposed in order to further optimize the coupling. In particular we study the effect of the strap box parameters and the recess of the vertical septa.
Sandfish numerical model reveals optimal swimming in sand
NASA Astrophysics Data System (ADS)
Maladen, Ryan; Ding, Yang; Kamor, Adam; Slatton, Andrew; Goldman, Daniel
2009-11-01
Motivated by experiment and theory examining the undulatory swimming of the sandfish lizard within granular media [Maladen et al., Science 325, 314 (2009)], we study a numerical model of the sandfish as it swims within a validated soft-sphere molecular dynamics granular media simulation. We hypothesize that features of its morphology and undulatory kinematics, and the granular media contribute to effective sand swimming. Our results agree with a resistive force model of the sandfish and show that speed and transport cost are optimized at a ratio of wave amplitude to wavelength of 0.2, irrespective of media properties and preparation. At this ratio, the entry of the animal into the media is fastest at an angle of 20°, close to the angle of repose. We also find that the sandfish cross-sectional body shape reduces motion induced buoyancy within the granular media and that wave efficiency is sensitive to body-particle friction but independent of particle-particle friction.
Modelling Optimal Control of Cholera in Communities Linked by Migration.
Njagarah, J B H; Nyabadza, F
2015-01-01
A mathematical model for the dynamics of cholera transmission with permissible controls between two connected communities is developed and analysed. The dynamics of the disease in the adjacent communities are assumed to be similar, with the main differences only reflected in the transmission and disease related parameters. This assumption is based on the fact that adjacent communities often have different living conditions and movement is inclined toward the community with better living conditions. Community specific reproduction numbers are given assuming movement of those susceptible, infected, and recovered, between communities. We carry out sensitivity analysis of the model parameters using the Latin Hypercube Sampling scheme to ascertain the degree of effect the parameters and controls have on progression of the infection. Using principles from optimal control theory, a temporal relationship between the distribution of controls and severity of the infection is ascertained. Our results indicate that implementation of controls such as proper hygiene, sanitation, and vaccination across both affected communities is likely to annihilate the infection within half the time it would take through self-limitation. In addition, although an infection may still break out in the presence of controls, it may be up to 8 times less devastating when compared with the case when no controls are in place. PMID:26246850
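A minimal numerical sketch of a controlled two-community epidemic of the kind discussed (an SIR caricature of our own, not the authors' cholera model, which also tracks pathogen concentration in water): controls scale down each community's transmission rate, and migration couples the compartments.

```python
def simulate(beta=(0.5, 0.3), control=(0.0, 0.0), move=0.05,
             gamma=0.2, days=200, dt=0.1):
    """Euler integration of a two-community SIR model with migration.
    control[c] in [0, 1] scales down community c's transmission (0 = none).
    Returns the final recovered fractions, i.e. the attack size per community."""
    S, I, R = [0.99, 0.99], [0.01, 0.01], [0.0, 0.0]
    for _ in range(int(days / dt)):
        dS, dI, dR = [0.0, 0.0], [0.0, 0.0], [0.0, 0.0]
        for c in (0, 1):
            o = 1 - c  # the other community
            inf = (1 - control[c]) * beta[c] * S[c] * I[c]
            dS[c] = -inf + move * (S[o] - S[c])
            dI[c] = inf - gamma * I[c] + move * (I[o] - I[c])
            dR[c] = gamma * I[c] + move * (R[o] - R[c])
        for c in (0, 1):
            S[c] += dS[c] * dt
            I[c] += dI[c] * dt
            R[c] += dR[c] * dt
    return R
```

Applying a strong control in both communities pushes the effective reproduction number below one and sharply reduces the final attack size, mirroring the paper's qualitative finding that controls across both communities are most effective.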
Calibration Modeling Methodology to Optimize Performance for Low Range Applications
NASA Technical Reports Server (NTRS)
McCollum, Raymond A.; Commo, Sean A.; Parker, Peter A.
2010-01-01
Calibration is a vital process in characterizing the performance of an instrument in an application environment and seeks to obtain acceptable accuracy over the entire design range. Often, project requirements specify a maximum total measurement uncertainty, expressed as a percent of full scale. However, in some applications we seek enhanced performance at the low range, and therefore expressing the accuracy as a percent of reading should be considered as a modeling strategy. For example, it is common to desire to use a force balance in multiple facilities or regimes, often well below its designed full-scale capacity. This paper presents a general statistical methodology for optimizing calibration mathematical models based on a percent of reading accuracy requirement, which has broad application in all types of transducer applications where low range performance is required. A case study illustrates the proposed methodology for the Mars Entry Atmospheric Data System that employs seven strain-gage based pressure transducers mounted on the heatshield of the Mars Science Laboratory mission.
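The percent-of-reading idea can be sketched with a weighted polynomial fit: weighting each residual by the inverse of the reading makes the fit minimize relative rather than absolute error, which improves low-range accuracy at the expense of full-scale accuracy. This uses NumPy's `polyfit` weights and is an illustration of the general idea, not the paper's methodology.

```python
import numpy as np

def fit_percent_of_reading(x, y, degree=2):
    """Polynomial calibration fit weighted by 1/|y|.  numpy.polyfit applies
    the weights to the residuals before squaring, so w = 1/|y| minimizes the
    sum of squared *relative* errors (constant percent-of-reading)."""
    w = 1.0 / np.abs(np.asarray(y, float))
    return np.polyfit(x, y, degree, w=w)
```

On noisy data spanning several decades, the weighted fit trades a slightly larger absolute error at full scale for a much smaller relative error near zero.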
Using the NOABL flow model and mathematical optimization as a micrositing tool
Wegley, H.L.; Barnard, J.C.
1986-11-01
This report describes the use of an improved mass-consistent model that is intended for diagnosing wind fields in complex terrain. The model was developed by merging an existing mass-consistent model, the NOABL model, with an optimization procedure. The optimization allows objective calculation of important model input parameters that previously had been supplied through guesswork; in this manner, the accuracy of the calculated winds has been greatly increased. The report covers such topics as the software structure of the model, assembling an input file, processing the model's output, and certain cautions about the model's operation. The use of the model is illustrated by a test case.
An optimized TOPS+ comparison method for enhanced TOPS models
2010-01-01
Background Although methods based on highly abstract descriptions of protein structures, such as VAST and TOPS, can perform very fast protein structure comparison, the results can lack a high degree of biological significance. Previously we have discussed the basic mechanisms of our novel method for structure comparison based on our TOPS+ model (Topological descriptions of Protein Structures Enhanced with Ligand Information). In this paper we show how these results can be significantly improved using parameter optimization, and we call the resulting optimised method the advanced TOPS+ comparison method (advTOPS+). Results We have developed a TOPS+ string model as an improvement to the TOPS [1-3] graph model by considering loops as secondary structure elements (SSEs) in addition to helices and strands, representing ligands as first-class objects, and describing interactions between SSEs, and between SSEs and ligands, by incoming and outgoing arcs, annotating SSEs with the interaction direction and type. Benchmarking results of an all-against-all pairwise comparison using a large dataset of 2,620 non-redundant structures from the PDB40 dataset [4] demonstrate the biological significance, in terms of SCOP classification at the superfamily level, of our TOPS+ comparison method. Conclusions Our advanced TOPS+ comparison shows better performance on the PDB40 dataset [4] compared to our basic TOPS+ method, giving 90% accuracy for SCOP alpha+beta; a 6% increase in accuracy compared to the TOPS and basic TOPS+ methods. It also outperforms the TOPS, basic TOPS+ and SSAP comparison methods on the Chew-Kedem dataset [5], achieving 98% accuracy. Software Availability The TOPS+ comparison server is available at http://balabio.dcs.gla.ac.uk/mallika/WebTOPS/. PMID:20236520
Essays on Applied Resource Economics Using Bioeconomic Optimization Models
NASA Astrophysics Data System (ADS)
Affuso, Ermanno
With rising demographic growth, there is increasing interest in analytical studies that assess alternative policies to provide an optimal allocation of scarce natural resources while ensuring environmental sustainability. This dissertation consists of three essays in applied resource economics that are interconnected methodologically within the agricultural production sector of economics. The first chapter examines the sustainability of biofuels by simulating and evaluating an agricultural voluntary program that aims to increase the land use efficiency in the production of first-generation biofuels in the state of Alabama. The results show that participatory decisions may increase the net energy value of biofuels by 208% and reduce emissions by 26%, significantly contributing to the state energy goals. The second chapter tests the hypothesis of overuse of fertilizers and pesticides in U.S. peanut farming with respect to other inputs and addresses genetic research to reduce the use of the most overused chemical input. The findings suggest that peanut producers overuse fungicide with respect to any other input and that fungi-resistant genetically engineered peanuts may increase producer welfare by up to 36.2%. The third chapter implements a bioeconomic model, which consists of a biophysical model and a stochastic dynamic recursive model that is used to measure the potential economic and environmental welfare of cotton farmers derived from a rotation scheme that uses peanut as a complementary crop. The results show that the rotation scenario would lower farming costs by 14% due to nitrogen credits from prior peanut land use and reduce non-point source pollution from nitrogen runoff by 6.13% compared to continuous cotton farming.
Clean wing airframe noise modeling for multidisciplinary design and optimization
NASA Astrophysics Data System (ADS)
Hosder, Serhat
A new noise metric has been developed that may be used for optimization problems involving aerodynamic noise from a clean wing. The modeling approach uses a classical trailing edge noise theory as the starting point. The final form of the noise metric includes characteristic velocity and length scales that are obtained from three-dimensional, steady, RANS simulations with a two-equation k-ω turbulence model. The noise metric is not the absolute value of the noise intensity, but an accurate relative noise measure as shown in the validation studies. One of the unique features of the new noise metric is the modeling of the length scale, which is directly related to the turbulent structure of the flow at the trailing edge. The proposed noise metric model has been formulated so that it can capture the effect of different design variables on the clean wing airframe noise such as the aircraft speed, lift coefficient, and wing geometry. It can also capture three-dimensional effects which become important at high lift coefficients, since the characteristic velocity and length scales are allowed to vary along the span of the wing. Noise metric validation was performed with seven test cases that were selected from a two-dimensional NACA 0012 experimental database. The agreement between the experiment and the predictions obtained with the new noise metric was very good at various speeds, angles of attack, and Reynolds numbers, which showed that the noise metric is capable of capturing the variations in the trailing edge noise as a relative noise measure when different flow conditions and parameters are changed. Parametric studies were performed to investigate the effect of different design variables on the noise metric. Two-dimensional parametric studies were done using two symmetric NACA four-digit airfoils (NACA 0012 and NACA 0009) and two supercritical (SC(2)-0710 and SC(2)-0714) airfoils. The three-dimensional studies were performed with two versions of a conventional
A model based technique for the design of flight directors. [optimal control models
NASA Technical Reports Server (NTRS)
Levison, W. H.
1973-01-01
A new technique for designing flight directors is discussed. This technique uses the optimal-control pilot/vehicle model to determine the appropriate control strategy. The dynamics of this control strategy are then incorporated into the director control laws, thereby enabling the pilot to operate at a significantly lower workload. A preliminary design of a control director for maintaining a STOL vehicle on the approach path in the presence of random air turbulence is evaluated. By selecting model parameters in terms of allowable path deviations and pilot workload levels, a set of director laws is achieved which allows improved system performance at reduced workload levels. The pilot acts essentially as a proportional controller with regard to the director signals, and control motions are compatible with those appropriate to status-only displays.
Optimal weighted combinatorial forecasting model of QT dispersion of ECGs in Chinese adults
NASA Astrophysics Data System (ADS)
Wen, Zhang; Miao, Ge; Xinlei, Liu; Minyi, Cen
2015-11-01
This study aims to provide a scientific basis for unifying the reference value standard of QT dispersion of ECGs in Chinese adults. Three predictive models, a regression model, a principal component model, and an artificial neural network model, are combined to establish the optimal weighted combination model. The optimal weighted combination model and the single models are verified and compared. The optimal weighted combinatorial model can reduce the prediction risk of a single model and improve prediction precision. The geographical distribution of the reference value of QT dispersion in Chinese adults was precisely mapped using kriging methods. When the geographical factors of a particular area are obtained, the reference value of QT dispersion of Chinese adults in that area can be estimated using the optimal weighted combinatorial model, and the reference value of QT dispersion of Chinese adults anywhere in China can be obtained from the geographical distribution map.
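One common way to obtain combination weights of this kind is constrained least squares: choose weights summing to one that minimize the squared error of the combined forecast on historical data. This is a sketch under our own assumptions, not necessarily the authors' estimator; the sum-to-one constraint is eliminated by substitution before an ordinary least-squares solve.

```python
import numpy as np

def combination_weights(preds, y):
    """Least-squares weights (summing to 1) for combining model predictions.
    preds: (n_samples, n_models) array of individual model forecasts.
    Substituting w_last = 1 - sum(w_others) turns the constrained problem
    into an unconstrained lstsq on prediction differences."""
    preds, y = np.asarray(preds, float), np.asarray(y, float)
    D = preds[:, :-1] - preds[:, -1:]      # differences vs. the last model
    r = y - preds[:, -1]
    w_head, *_ = np.linalg.lstsq(D, r, rcond=None)
    return np.append(w_head, 1.0 - w_head.sum())
```

With two models whose errors are equal and opposite, the method recovers the intuitive 0.5/0.5 mix, which cancels the bias exactly.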
Optimal weighted combinatorial forecasting model of QT dispersion of ECGs in Chinese adults
NASA Astrophysics Data System (ADS)
Wen, Zhang; Miao, Ge; Xinlei, Liu; Minyi, Cen
2016-07-01
This study aims to provide a scientific basis for unifying the reference value standard of QT dispersion of ECGs in Chinese adults. Three predictive models, a regression model, a principal component model, and an artificial neural network model, are combined to establish the optimal weighted combination model. The optimal weighted combination model and the single models are verified and compared. The optimal weighted combinatorial model can reduce the prediction risk of a single model and improve prediction precision. The geographical distribution of the reference value of QT dispersion in Chinese adults was precisely mapped using kriging methods. When the geographical factors of a particular area are obtained, the reference value of QT dispersion of Chinese adults in that area can be estimated using the optimal weighted combinatorial model, and the reference value of QT dispersion of Chinese adults anywhere in China can be obtained from the geographical distribution map.
Hydro-economic Modeling: Reducing the Gap between Large Scale Simulation and Optimization Models
NASA Astrophysics Data System (ADS)
Forni, L.; Medellin-Azuara, J.; Purkey, D.; Joyce, B. A.; Sieber, J.; Howitt, R.
2012-12-01
The integration of hydrological and socio-economic components into hydro-economic models has become essential for water resources policy and planning analysis. In this study we integrate the economic value of water in irrigated agricultural production using SWAP (a StateWide Agricultural Production model for California) and WEAP (Water Evaluation and Planning system), a climate-driven hydrological model. The integration of the models is performed using a step function approximation of water demand curves from SWAP, and by relating the demand tranches to the priority scheme in WEAP. To do so, a modified version of SWAP was developed, called SWEAP, that has the Planning Area delimitations of WEAP, a maximum entropy model to estimate evenly sized steps (tranches) of water derived demand functions, and the translation of water tranches into crop land. In addition, a modified version of WEAP was created, called ECONWEAP, with minor structural changes for the incorporation of land decisions from SWEAP and a series of iterations run via an external VBA script. This paper shows the validity of this integration by comparing revenues from WEAP versus ECONWEAP, as well as an assessment of the approximation of tranches. Results show a significant increase in the resulting agricultural revenues for our case study in California's Central Valley using ECONWEAP while maintaining the same hydrology and regional water flows. These results highlight the gains from allocating water based on its economic value compared to priority-based water allocation systems. Furthermore, this work shows the potential of integrating optimization and simulation-based hydrologic models like ECONWEAP.
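The step-function approximation of a demand curve can be sketched as follows: slice the quantity axis into equal tranches and price each at the marginal value at its midpoint, yielding the tiers that can be mapped to a priority scheme like WEAP's. The function names and the equal-width choice are illustrative assumptions, not the SWEAP maximum entropy procedure.

```python
import numpy as np

def demand_tranches(price_fn, q_max, n_tranches):
    """Approximate a smooth derived-demand curve with equal-width quantity
    tranches, pricing each at the marginal value at its midpoint.  Returns
    a list of (q_lo, q_hi, price) tuples, ordered from highest-value water
    (allocated first) to lowest."""
    edges = np.linspace(0.0, q_max, n_tranches + 1)
    mids = 0.5 * (edges[:-1] + edges[1:])
    return [(edges[i], edges[i + 1], float(price_fn(mids[i])))
            for i in range(n_tranches)]
```

For a downward-sloping inverse demand curve, the resulting step prices decrease across tranches, so higher-priority tiers naturally correspond to higher-value water uses.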
Dynamic Modeling, Model-Based Control, and Optimization of Solid Oxide Fuel Cells
NASA Astrophysics Data System (ADS)
Spivey, Benjamin James
2011-07-01
Solid oxide fuel cells are a promising option for distributed stationary power generation that offers efficiencies ranging from 50% in stand-alone applications to greater than 80% in cogeneration. To advance SOFC technology for widespread market penetration, the SOFC should demonstrate improved cell lifetime and load-following capability. This work seeks to improve lifetime through dynamic analysis of critical lifetime variables and advanced control algorithms that permit load-following while remaining in a safe operating zone based on stress analysis. Control algorithms typically have addressed SOFC lifetime operability objectives using unconstrained, single-input-single-output control algorithms that minimize thermal transients. Existing SOFC controls research has not considered maximum radial thermal gradients or limits on absolute temperatures in the SOFC. In particular, as stress analysis demonstrates, the minimum cell temperature is the primary thermal stress driver in tubular SOFCs. This dissertation presents a dynamic, quasi-two-dimensional model for a high-temperature tubular SOFC combined with ejector and prereformer models. The model captures dynamics of critical thermal stress drivers and is used as the physical plant for closed-loop control simulations. A constrained, MIMO model predictive control algorithm is developed and applied to control the SOFC. Closed-loop control simulation results demonstrate effective load-following, constraint satisfaction for critical lifetime variables, and disturbance rejection. Nonlinear programming is applied to find the optimal SOFC size and steady-state operating conditions to minimize total system costs.
Parameter optimization method for the water quality dynamic model based on data-driven theory.
Liang, Shuxiu; Han, Songlin; Sun, Zhaochen
2015-09-15
Parameter optimization is important for developing a water quality dynamic model. In this study, we applied a data-driven method to select and optimize parameters for a complex three-dimensional water quality model. First, a data-driven model was developed to train the response relationship between phytoplankton and environmental factors based on the measured data. Second, an eight-variable water quality dynamic model was established and coupled to a physical model. Parameter sensitivity was investigated by changing parameter values individually within an assigned range. The above results served as guidelines for control parameter selection and verification of the simulated results. Finally, using the data-driven model to approximate the computational water quality model, we employed the Particle Swarm Optimization (PSO) algorithm to optimize the control parameters. The optimization routines and results were analyzed and discussed based on the establishment of the water quality model in Xiangshan Bay (XSB). PMID:26277602
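A minimal PSO implementation, to make the algorithm referenced above concrete. This is a generic textbook variant, not the authors' configuration; the inertia and acceleration coefficients are common defaults, and all names are ours.

```python
import random

def pso(f, bounds, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer for a box-bounded objective
    f(position_list) -> float.  Returns (best_position, best_value)."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]              # each particle's best position
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

In the surrogate-assisted setting described in the abstract, `f` would evaluate the cheap data-driven approximation of the water quality model rather than the full 3-D simulation.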
NASA Astrophysics Data System (ADS)
Siade, A. J.; Prommer, H.; Welter, D.
2014-12-01
Groundwater management and remediation require the implementation of numerical models in order to evaluate potential anthropogenic impacts on aquifer systems. In many situations, the numerical model must simulate not only groundwater flow and transport but also geochemical and biological processes. Each process being simulated carries with it a set of parameters that must be identified, along with differing potential sources of model-structure error. Various data types are often collected in the field and then used to calibrate the numerical model; however, these data types can represent very different processes and can subsequently be sensitive to the model parameters in extremely complex ways. Therefore, developing an appropriate weighting strategy to address the contributions of each data type to the overall least-squares objective function is not straightforward. This is further compounded by the presence of potential sources of model-structure error that manifest themselves differently for each observation data type. Finally, reactive transport models are highly nonlinear, which can lead to convergence failure for algorithms operating on the assumption of local linearity. In this study, we propose a variation of the popular particle swarm optimization algorithm to address the trade-offs associated with the calibration of one data type over another. This method removes the need to specify weights between observation groups and instead produces a multi-dimensional Pareto front that illustrates the trade-offs between data types. We use the PEST++ run manager, along with the standard PEST input/output structure, to implement parallel programming across multiple desktop computers using TCP/IP communications. This allows for very large swarms of particles without the need for a supercomputing facility. The method was applied to a case study in which modeling was used to gain insight into the mobilization of arsenic at a deepwell injection site
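The multi-dimensional Pareto front over per-data-type objectives can be illustrated with a simple non-dominated filter. The objective values below are made up, and this shows only the front-extraction step, not the PEST++/PSO machinery itself.

```python
def pareto_front(points):
    """Return the non-dominated points, assuming minimization in every objective."""
    front = []
    for i, p in enumerate(points):
        dominated = any(
            all(q[k] <= p[k] for k in range(len(p))) and
            any(q[k] < p[k] for k in range(len(p)))
            for j, q in enumerate(points) if j != i
        )
        if not dominated:
            front.append(p)
    return front

# Hypothetical (flow misfit, geochemistry misfit) pairs for four calibrations.
fits = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0)]
front = pareto_front(fits)   # (3.0, 4.0) is dominated by (2.0, 3.0)
```

Presenting the front instead of a single weighted optimum lets the modeller see the trade-off between fitting one data type at the expense of another.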
Biolayer modeling and optimization for the SPARROW biosensor
NASA Astrophysics Data System (ADS)
Feng, Ke
2007-12-01
Biosensor direct detection of molecular binding events is of significant interest in applications from molecular screening for cancer drug design to bioagent detection for homeland security and defense. The Stacked Planar Affinity Regulated Resonant Optical Waveguide (SPARROW) structure based on coupled waveguides was recently developed to achieve increased sensitivity within a fieldable biosensor device configuration. Under ideal operating conditions, modification of the effective propagation constant of the structure's sensing waveguide through selective attachment of specific targets to probes on the waveguide surface results in a change in the coupling characteristics of the guide over a specifically designed interaction length with the analyte. Monitoring the relative power in each waveguide after interaction enables 'recognition' of those targets which have selectively bound to the surface. However, fabrication tolerances, waveguide interface roughness, biolayer surface roughness and biolayer partial coverage have an effect on biosensor behavior and achievable limit of detection (LOD). In addition to these influences which play a role in device optimization, the influence of the spatially random surface loading of molecular binding events has to be considered, especially for low surface coverage. In this dissertation an analytic model is established for the SPARROW biosensor which accounts for these nonidealities with which the design of the biosensor can be guided and optimized. For the idealized case of uniform waveguide transducer layers and biolayer, both theoretical simulation (analytical expression) and computer simulation (numerical calculation) are completed. For the nonideal case of an inhomogeneous transducer with nonideal waveguide and biolayer surfaces, device output power is affected by such physical influences as surface scattering, coupling length, absorption, and percent coverage of binding events. Using grating and perturbation techniques we
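The coupling behaviour described above can be sketched with standard two-waveguide coupled-mode theory: a binding-induced change in the sensing guide's propagation constant detunes the coupler and reduces the power transferred over the interaction length. The coupling coefficient and detuning values below are hypothetical, and this is the generic textbook result, not the SPARROW-specific model.

```python
import math

def cross_power(kappa, delta, L):
    """Fraction of power in the coupled guide after length L for a two-guide
    directional coupler with coupling kappa and detuning delta (both 1/m)."""
    g = math.sqrt(kappa**2 + delta**2)        # effective coupling rate
    return (kappa / g) ** 2 * math.sin(g * L) ** 2

kappa = 1.0e3                     # hypothetical coupling coefficient, 1/m
L = math.pi / (2 * kappa)         # one full transfer length when undetuned
p_unloaded = cross_power(kappa, 0.0, L)     # synchronous: all power crosses
p_loaded = cross_power(kappa, 0.5e3, L)     # binding-induced detuning reduces transfer
```

Monitoring the drop from `p_unloaded` to `p_loaded` is the essence of the readout: selective binding changes the detuning, which changes the power split between the guides.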
Decision Models for Determining the Optimal Life Test Sampling Plans
NASA Astrophysics Data System (ADS)
Nechval, Nicholas A.; Nechval, Konstantin N.; Purgailis, Maris; Berzins, Gundars; Strelchonok, Vladimir F.
2010-11-01
A life test sampling plan is a technique that consists of sampling, inspection, and decision making in determining the acceptance or rejection of a batch of products by experiments examining the continuous usage time of the products. In life testing studies, the lifetime is usually assumed to be distributed as either a one-parameter exponential distribution or a two-parameter Weibull distribution with the assumption that the shape parameter is known. Such oversimplified assumptions can facilitate the follow-up analyses, but may overlook the fact that the lifetime distribution can significantly affect the estimation of the failure rate of a product. Moreover, sampling costs, inspection costs, warranty costs, and rejection costs are all essential, and ought to be considered in choosing an appropriate sampling plan. The choice of an appropriate life test sampling plan is a crucial decision problem because a good plan not only helps producers save testing time and reduce testing cost, but also positively affects the image of the product and thus attracts more consumers to buy it. This paper develops frequentist (non-Bayesian) decision models for determining optimal life test sampling plans with an aim of cost minimization, by identifying the appropriate number of product failures in a sample that should be used as a threshold in judging the rejection of a batch. The two-parameter exponential and Weibull distributions with two unknown parameters are assumed to be appropriate for modelling the lifetime of a product. A practical numerical application is employed to demonstrate the proposed approach.
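The cost-threshold idea can be illustrated with a deliberately simplified expected-cost model: test n items, reject the batch if more than c fail, and weigh testing, rejection, and warranty costs. All cost figures and the failure probability below are made-up inputs, and the binomial acceptance model is far simpler than the paper's lifetime-distribution-based decision models.

```python
from math import comb

def batch_cost(c, n, p_fail, c_test, c_reject, c_warranty, batch):
    """Expected cost of testing n items and rejecting the batch if more
    than c fail. All parameters are hypothetical illustration values."""
    # P(accept) = P(at most c failures) under a binomial failure model
    p_accept = sum(comb(n, k) * p_fail**k * (1 - p_fail)**(n - k)
                   for k in range(c + 1))
    # testing cost + rejection cost if rejected + warranty exposure if accepted
    return (n * c_test
            + (1 - p_accept) * c_reject
            + p_accept * batch * p_fail * c_warranty)

costs = {c: batch_cost(c, n=20, p_fail=0.05, c_test=2.0,
                       c_reject=500.0, c_warranty=40.0, batch=1000)
         for c in range(6)}
c_star = min(costs, key=costs.get)
```

With these made-up costs the warranty exposure dominates, so the cheapest plan rejects on any failure (`c_star == 0`); because this toy cost is linear in the acceptance probability, the optimum always sits at a boundary, whereas the paper's richer models produce genuinely interior thresholds.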
Optimization of GM(1,1) power model
NASA Astrophysics Data System (ADS)
Luo, Dang; Sun, Yu-ling; Song, Bo
2013-10-01
The GM(1,1) power model is an extension of the traditional GM(1,1) model and the Grey Verhulst model. Compared with the traditional models, the GM(1,1) power model has the following advantage: the power exponent that best matches the actual data values can be found systematically. The GM(1,1) power model can therefore reflect nonlinear features of the data, and simulate and forecast with high accuracy. Determining the best power exponent is thus a key step in the modeling process. In this paper, noting that the whitening equation of the GM(1,1) power model is a Bernoulli equation, we transform it by variable substitution into the linear whitening-equation form of the GM(1,1) model; we then construct the grey differential equation appropriately, establish the GM(1,1) power model, and solve for its parameters with a pattern search method. Finally, we illustrate the effectiveness of the new method with the example of simulating and forecasting the promotion rates from senior secondary schools to higher education in China.
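For reference, the plain GM(1,1) model that the power model extends can be fitted in a few lines: accumulate the series, fit the grey parameters by least squares, and forecast from the whitening-equation solution. This is the classical model only, not the power variant or the pattern-search step; the toy data are a made-up 10%-growth series.

```python
import numpy as np

def gm11(x0, n_forecast=2):
    """Classical GM(1,1) grey model fit and forecast (not the power variant)."""
    x0 = np.asarray(x0, float)
    x1 = np.cumsum(x0)                                # accumulated (AGO) series
    z1 = 0.5 * (x1[1:] + x1[:-1])                     # background values
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]  # grey parameters a, b
    k = np.arange(len(x0) + n_forecast)
    # solution of the whitening equation dx1/dt + a*x1 = b
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    return np.concatenate([[x0[0]], np.diff(x1_hat)]) # back to original series

data = [100.0, 110.0, 121.0, 133.1]    # toy geometric series, 10% growth
fit = gm11(data, n_forecast=1)         # fitted values plus one forecast step
```

On near-exponential data like this the model is almost exact; the power model adds an exponent to capture data that deviate from pure exponential growth.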
Bentley, Jason; Sloan, Charlotte; Kawajiri, Yoshiaki
2013-03-01
This work demonstrates a systematic prediction-correction (PC) method for simultaneously modeling and optimizing nonlinear simulated moving bed (SMB) chromatography. The PC method uses model-based optimization, SMB startup data, isotherm model selection, and parameter estimation to iteratively refine model parameters and find optimal operating conditions in a matter of hours, satisfying high-purity constraints and achieving optimal productivity. The PC algorithm proceeds until the SMB process is optimized, without manual tuning. In case studies, it is shown that a nonlinear isotherm model and parameter values are determined reliably using SMB startup data. In one case study, a nonlinear SMB system is optimized after only two changes of operating conditions following the PC algorithm. The refined isotherm models are validated by frontal analysis and perturbation analysis. PMID:23380364
Statistical significance across multiple optimization models for community partition
NASA Astrophysics Data System (ADS)
Li, Ju; Li, Hui-Jia; Mao, He-Jin; Chen, Junhua
2016-05-01
The study of community structure is an important problem in a wide range of applications, as it can help us understand real network systems in depth. However, due to the existence of random factors and error edges in real networks, how to measure the significance of community structure efficiently is a crucial question. In this paper, we present a novel statistical framework for computing the significance of community structure across multiple optimization methods. Different from the usual approaches, we calculate the similarity between a given node and its leader and employ the distribution of link tightness to derive the significance score, instead of a direct comparison to a randomized model. Based on the distribution of community tightness, a new “p-value”-type significance measure is proposed for community structure analysis. Specifically, the well-known approaches and their corresponding quality functions are unified into a novel general formulation, which facilitates a detailed comparison across them. To determine the positions of leaders and their corresponding followers, an efficient algorithm is proposed based on spectral theory. Finally, we apply the significance analysis to some famous benchmark networks, and the good performance verifies the effectiveness and efficiency of our framework.
The Optimal Licensing Contract in a Differentiated Stackelberg Model
Hong, Xianpei; Yang, Lijun; Zhang, Huaige; Zhao, Dan
2014-01-01
This paper extends the work of Wang (2002) by considering a differentiated Stackelberg model, when the leader firm is an inside innovator and licenses its new technology by three options, that is, fixed-fee licensing, royalty licensing, and two-part tariff licensing. The main contributions and conclusions of this paper are threefold. First of all, this paper derives a very different result from Wang (2002). We show that, with a nondrastic innovation, royalty licensing is always better than fixed-fee licensing for the innovator; with a drastic innovation, royalty licensing is superior to fixed-fee licensing for small values of substitution coefficient d; however when d becomes closer to 1, neither fee nor royalty licensing will occur. Secondly, this paper shows that the innovator is always better off in case of two-part tariff licensing than fixed-fee licensing no matter what the innovation size is. Thirdly, the innovator always prefers to license its nondrastic innovation by means of a two-part tariff instead of licensing by means of a royalty; however, with a drastic innovation, the optimal licensing strategy can be either a two-part tariff or a royalty, depending upon the differentiation of the goods. PMID:24683342
Optimal SCR Control Using Data-Driven Models
Stevens, Andrew J.; Sun, Yannan; Lian, Jianming; Devarakonda, Maruthi N.; Parker, Gordon
2013-04-16
We present an optimal control solution for urea injection in a heavy-duty diesel (HDD) selective catalytic reduction (SCR) system. The approach taken here is useful beyond SCR and could be applied to any system where a control strategy is desired and input-output data are available. For example, the strategy could also be used for the diesel oxidation catalyst (DOC) system. In this paper, we identify and validate a one-step-ahead Kalman state-space estimator for downstream NOx using bench reactor data from an SCR core sample. The test data were acquired using a 2010 Cummins 6.7L ISB production engine with a 2010 Cummins production aftertreatment system. We used a surrogate HDD federal test procedure (FTP), developed at Michigan Technological University (MTU), which simulates the representative transients of the standard FTP cycle but has fewer engine speed/load points. The identified state-space model is then used to develop a tunable cost function that simultaneously minimizes NOx emissions and urea usage. The cost function is quadratic and univariate, so the minimum can be computed analytically. We show the performance of the closed-loop controller using a reduced-order discrete SCR simulator developed at MTU. Our experiments with the surrogate HDD-FTP data show that the strategy developed in this paper can be used to identify performance bounds for urea dose controllers.
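The "quadratic and univariate, so the minimum can be computed analytically" point can be made concrete with a toy trade-off cost. The function form, gain, and weights below are hypothetical stand-ins, not the paper's identified model or tuning.

```python
def optimal_dose(alpha, beta, nox_pred, gain):
    """Analytic minimizer of the hypothetical trade-off cost
        J(u) = alpha * (nox_pred - gain * u)**2 + beta * u**2,
    where u is the urea dose, nox_pred the predicted downstream NOx,
    and gain a (made-up) linear NOx-reduction gain.
    Setting dJ/du = -2*alpha*gain*(nox_pred - gain*u) + 2*beta*u = 0
    gives the closed-form optimum below."""
    return alpha * gain * nox_pred / (alpha * gain**2 + beta)

# Weighting NOx error vs. urea usage; larger beta penalizes urea consumption.
u = optimal_dose(alpha=1.0, beta=0.1, nox_pred=200.0, gain=0.8)
```

Because the cost is a single-variable quadratic, no iterative optimizer is needed at run time; the controller just evaluates this expression each step, which is the practical appeal of the formulation described in the abstract.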
A Simplified Model of ARIS for Optimal Controller Design
NASA Technical Reports Server (NTRS)
Beech, Geoffrey S.; Hampton, R. David; Kross, Denny (Technical Monitor)
2001-01-01
Many space-science experiments require active vibration isolation. Boeing's Active Rack Isolation System (ARIS) isolates experiments at the rack (vs. experiment or sub-experiment) level, with multiple experiments per rack. An ARIS-isolated rack typically employs eight actuators and thirteen umbilicals; the umbilicals provide services such as power, data transmission, and cooling. Hampton, et al., used "Kane's method" to develop an analytical, nonlinear, rigid-body model of ARIS that includes full actuator dynamics (inertias). This model, less the umbilicals, was first implemented for simulation by Beech and Hampton; they developed and tested their model using two commercial-off-the-shelf (COTS) software packages. Rupert, et al., added umbilical-transmitted disturbances to this nonlinear model. Because the nonlinear model, even for the untethered system, is both exceedingly complex and "encapsulated" inside these COTS tools, it is largely inaccessible to ARIS controller designers. This paper shows that ISPR rattle-space constraints and small ARIS actuator masses permit considerable model simplification, without significant loss of fidelity. First, for various loading conditions, comparisons are made between the dynamic responses of the nonlinear model (untethered) and a truth model. Then comparisons are made among nonlinear, linearized, and linearized reduced-mass models. It is concluded that these three models all capture the significant system rigid-body dynamics, with the third being preferred due to its relative simplicity.
Li, Tianwei; Yan, Gang; Wang, Yeyao; Ma, Xiaofan; Nie, Yongfeng
2003-05-01
According to the basic characteristics of municipal solid waste generated in small and medium-sized cities of China, optimal management principles and an optimization management model suited to such cities were put forward. Applying the model in a case study, the optimal scenarios for municipal solid waste disposal under the planning system in 1999, 2005 and 2010 were calculated, which validated the advantages of the optimization model through a comparison of costs between the optimized scenarios and the former scenarios. PMID:12916219
Parameterization of Model Validating Sets for Uncertainty Bound Optimizations. Revised
NASA Technical Reports Server (NTRS)
Lim, K. B.; Giesy, D. P.
2000-01-01
Given measurement data, a nominal model and a linear fractional transformation uncertainty structure with an allowance on unknown but bounded exogenous disturbances, easily computable tests for the existence of a model validating uncertainty set are given. Under mild conditions, these tests are necessary and sufficient for the case of complex, nonrepeated, block-diagonal structure. For the more general case which includes repeated and/or real scalar uncertainties, the tests are only necessary but become sufficient if a collinearity condition is also satisfied. With the satisfaction of these tests, it is shown that a parameterization of all model validating sets of plant models is possible. The new parameterization is used as a basis for a systematic way to construct or perform uncertainty tradeoff with model validating uncertainty sets which have specific linear fractional transformation structure for use in robust control design and analysis. An illustrative example which includes a comparison of candidate model validating sets is given.
Computationally efficient calibration of WATCLASS Hydrologic models using surrogate optimization
NASA Astrophysics Data System (ADS)
Kamali, M.; Ponnambalam, K.; Soulis, E. D.
2007-07-01
In this approach, exploration of the cost-function space was performed with an inexpensive surrogate function rather than the expensive original function. The Design and Analysis of Computer Experiments (DACE) surrogate, an approximate model that uses a correlation function to represent the error, was employed. Results for Monte Carlo sampling, Latin hypercube sampling and the DACE approximate model were compared. The results show that the DACE model has good potential for predicting the trend of simulation results. The case study was the calibration of the WATCLASS hydrologic model on the Smokey-River watershed.
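The DACE idea can be sketched with a zero-mean kriging interpolator using a Gaussian correlation function: fit correlation weights to a handful of expensive model runs, then query the cheap surrogate instead of the model. This omits the regression trend and hyperparameter fitting of the real DACE toolbox, and the correlation parameter `theta` below is an arbitrary assumption.

```python
import numpy as np

def krige(X, y, Xq, theta=10.0):
    """Minimal DACE-style surrogate: zero-mean kriging with a Gaussian
    correlation function exp(-theta * ||x - x'||^2). Sketch only."""
    def corr(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-theta * d2)
    R = corr(X, X) + 1e-10 * np.eye(len(X))   # tiny nugget for conditioning
    w = np.linalg.solve(R, y)                  # correlation weights
    return corr(Xq, X) @ w

# Pretend each y value is one expensive hydrologic-model run.
X = np.linspace(0.0, 1.0, 8)[:, None]
y = np.sin(2 * np.pi * X[:, 0])
pred = krige(X, y, X)     # the surrogate interpolates its training runs
```

Once fitted, the surrogate can be evaluated thousands of times during calibration at negligible cost, which is the point of the approach in the abstract.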
Models and optimization of solar-control automotive glasses
NASA Astrophysics Data System (ADS)
Blume, Russell Dale
Efforts to develop automotive glasses with enhanced solar control characteristics have been motivated by the desire for increased consumer comfort, reduced air-conditioning loads, and improved fuel economy associated with a reduction in the total solar energy transmitted into the automotive interior. In the current investigation, the base soda-lime-silicate glass (72.7 wt.% SiO2, 14.2% Na2O, 10.0% CaO, 2.5% MgO, 0.6% Al2O3, with 0.3 Na2SO4 added to the batch as a fining agent) was modified with Fe2O3 (0.0 to 0.8%), NiO (0.0 to 0.15%), CoO (0.0 to 0.15%), V2O5 (0.0 to 0.225%), TiO2 (0.0 to 1.5%), SnO (0.0 to 3.0%), ZnS (0.0 to 0.09%), ZnO (0.0 to 2.0%), CaF2 (0.0 to 2.0%), and P2O5 (0.0 to 2.0%) to exploit reported non-linear mechanistic interactions among the dopants by which the solar-control characteristics of the base glass can be modified. Due to the large number of experimental variables under consideration, a D-optimal experimental design methodology was utilized to model the solar-optical properties as a function of batch composition. The independent variables were defined as the calculated batch concentrations of the primary (Fe2O3, NiO, CoO, V2O5) and interactive (CaF2, P2O5, SnO, ZnS, ZnO, TiO2) dopants in the glass. The dependent variable was defined as the apparent optical density over a wavelength range of 300--2700 nm at 10 nm intervals. The model form relating the batch composition to the apparent optical density was a modified Lambert-Beer absorption law which, in addition to the linear terms, contained quadratic terms of the primary dopants, and a series of binary and ternary non-linear interactions amongst the primary and interactive dopants. Utilizing the developed model, exceptional fit in terms of both the discrete response (the transmission curves) and the integrated response (visible and solar transmittance) was realized. Glasses utilizing Fe2O3, CoO, NiO, V2O5, ZnO and P2O5 have generated innovative glasses with substantially improved
The Role of Mathematical Models in Optimizing Instruction.
ERIC Educational Resources Information Center
Calfee, Robert C.
Consideration of computer-assisted instruction in the classroom has led to an analysis of the educational process including the need for developing more adequate models of the learning processes and increased attention to the function of the teacher (human or computer) as a decision maker. Given a descriptive model of the learning process, it is…
Regression Model Optimization for the Analysis of Experimental Data
NASA Technical Reports Server (NTRS)
Ulbrich, N.
2009-01-01
A candidate math model search algorithm was developed at Ames Research Center that determines a recommended math model for the multivariate regression analysis of experimental data. The search algorithm is applicable to classical regression analysis problems as well as wind tunnel strain gage balance calibration analysis applications. The algorithm compares the predictive capability of different regression models using the standard deviation of the PRESS residuals of the responses as a search metric. This search metric is minimized during the search. Singular value decomposition is used during the search to reject math models that lead to a singular solution of the regression analysis problem. Two threshold-dependent constraints are also applied. The first constraint rejects math models with insignificant terms. The second constraint rejects math models with near-linear dependencies between terms. The math term hierarchy rule may also be applied as an optional constraint during or after the candidate math model search. The final term selection of the recommended math model depends on the regressor and response values of the data set, the user's function class combination choice, the user's constraint selections, and the result of the search metric minimization. A frequently used regression analysis example from the literature is used to illustrate the application of the search algorithm to experimental data.
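The PRESS-based search metric mentioned above can be computed without refitting the model n times: for a linear model, the leave-one-out residual is the ordinary residual divided by one minus the corresponding hat-matrix diagonal. The snippet below illustrates the metric only; it is not the Ames search algorithm, and the toy data set is a made-up example.

```python
import numpy as np

def press_std(X, y):
    """Standard deviation of the PRESS (leave-one-out) residuals of the
    linear regression y ~ X, computed via the hat matrix."""
    H = X @ np.linalg.pinv(X.T @ X) @ X.T    # hat (projection) matrix
    resid = y - H @ y                        # ordinary residuals
    press = resid / (1.0 - np.diag(H))       # leave-one-out residuals
    return press.std(ddof=1)

rng = np.random.default_rng(1)
x = rng.uniform(-1.0, 1.0, 40)
y = 2.0 + 3.0 * x + rng.normal(0.0, 0.1, 40)     # noisy linear response
X_lin = np.column_stack([np.ones_like(x), x])    # candidate model: 1, x
metric = press_std(X_lin, y)                     # search metric to minimize
```

A model search would evaluate this metric for each candidate term combination and recommend the model with the smallest value.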
A Framework for the Optimization of Discrete-Event Simulation Models
NASA Technical Reports Server (NTRS)
Joshi, B. D.; Unal, R.; White, N. H.; Morris, W. D.
1996-01-01
With the growing use of computer modeling and simulation in all aspects of engineering, the scope of traditional optimization has to be extended to include simulation models. Some unique aspects have to be addressed while optimizing via stochastic simulation models. The optimization procedure has to explicitly account for the randomness inherent in the stochastic measures predicted by the model. This paper outlines a general-purpose framework for optimization of terminating discrete-event simulation models. The methodology combines a chance-constraint approach for problem formulation with standard statistical estimation and analysis techniques. The applicability of the optimization framework is illustrated by minimizing the operation and support resources of a launch vehicle, through a simulation model.
Cyclone optimization based on a new empirical model for pressure drop
Ramachandran, G.; Leith, D.; Dirgo, J.; Feldman, H.
1991-01-01
An empirical model for predicting pressure drop across a cyclone, developed by Dirgo, is presented. The model was developed through a statistical analysis of pressure drop data for 98 cyclone designs. The model is shown to perform better than the pressure drop models of Shepherd and Lapple, Alexander, First, Stairmand, and Barth. This model is used with the efficiency model of Iozia and Leith to develop an optimization curve which predicts the minimum pressure drop and the dimension ratios of the optimized cyclone for a given aerodynamic cut diameter, d50. The effect of variation in cyclone height, cyclone diameter, and flow on the optimization is determined. The optimization results are used to develop a design procedure for optimized cyclones.
Oneida Tribe of Indians of Wisconsin Energy Optimization Model
Troge, Michael
2014-12-01
Oneida Nation is located in Northeast Wisconsin. The reservation is approximately 96 square miles (8 miles x 12 miles), or 65,000 acres. The greater Green Bay area is east and adjacent to the reservation. A county line roughly splits the reservation in half; the west half is in Outagamie County and the east half is in Brown County. Land use is predominantly agriculture on the west 2/3 and suburban on the east 1/3 of the reservation. Nearly 5,000 tribally enrolled members live on the reservation, with a total population of about 21,000. Tribal ownership is scattered across the reservation and is about 23,000 acres. Currently, the Oneida Tribe of Indians of Wisconsin (OTIW) community members and facilities receive the vast majority of electrical and natural gas services from two of the largest investor-owned utilities in the state, WE Energies and Wisconsin Public Service. All urban and suburban buildings have access to natural gas. About 15% of the population and five Tribal facilities are in rural locations and therefore use propane as a primary heating fuel. Wood and oil are also used as primary or supplemental heat sources for a small percent of the population. Very few renewable energy systems, used to generate electricity and heat, have been installed on the Oneida Reservation. This project was an effort to develop a reasonable renewable energy portfolio that will help Oneida to provide a leadership role in developing a clean energy economy. The Energy Optimization Model (EOM) is an exploration of energy opportunities available to the Tribe and it is intended to provide a decision framework to allow the Tribe to make the wisest choices in energy investment with an organizational desire to establish a renewable portfolio standard (RPS).
Optimized dynamical decoupling in a model quantum memory.
Biercuk, Michael J; Uys, Hermann; VanDevender, Aaron P; Shiga, Nobuyasu; Itano, Wayne M; Bollinger, John J
2009-04-23
Any quantum system, such as those used in quantum information or magnetic resonance, is subject to random phase errors that can dramatically affect the fidelity of a desired quantum operation or measurement. In the context of quantum information, quantum error correction techniques have been developed to correct these errors, but resource requirements are extraordinary. The realization of a physically tractable quantum information system will therefore be facilitated if qubit (quantum bit) error rates are far below the so-called fault-tolerance error threshold, predicted to be of the order of 10^-3 to 10^-6. The need to realize such low error rates motivates a search for alternative strategies to suppress dephasing in quantum systems. Here we experimentally demonstrate massive suppression of qubit error rates by the application of optimized dynamical decoupling pulse sequences, using a model quantum system capable of simulating a variety of qubit technologies. We demonstrate an analytically derived pulse sequence, UDD, and find novel sequences through active, real-time experimental feedback. The latter sequences are tailored to maximize error suppression without the need for a priori knowledge of the ambient noise environment, and are capable of suppressing errors by orders of magnitude compared to other existing sequences (including the benchmark multi-pulse spin echo). Our work includes the extension of a treatment to predict qubit decoherence under realistic conditions, yielding strong agreement between experimental data and theory for arbitrary pulse sequences incorporating nonidealized control pulses. These results demonstrate the robustness of qubit memory error suppression through dynamical decoupling techniques across a variety of qubit technologies. PMID:19396139
High-throughput generation, optimization and analysis of genome-scale metabolic models.
Henry, C. S.; DeJongh, M.; Best, A. A.; Frybarger, P. M.; Linsay, B.; Stevens, R. L.
2010-09-01
Genome-scale metabolic models have proven to be valuable for predicting organism phenotypes from genotypes. Yet efforts to develop new models are failing to keep pace with genome sequencing. To address this problem, we introduce the Model SEED, a web-based resource for high-throughput generation, optimization and analysis of genome-scale metabolic models. The Model SEED integrates existing methods and introduces techniques to automate nearly every step of this process, taking ~48 h to reconstruct a metabolic model from an assembled genome sequence. We apply this resource to generate 130 genome-scale metabolic models representing a taxonomically diverse set of bacteria. Twenty-two of the models were validated against available gene essentiality and Biolog data, with the average model accuracy determined to be 66% before optimization and 87% after optimization.
NASA Astrophysics Data System (ADS)
Schöniger, A.; Nowak, W.; Wöhling, T.
2013-12-01
Bayesian model averaging (BMA) combines the predictive capabilities of alternative conceptual models into a robust best estimate and allows the quantification of conceptual uncertainty. The individual models are weighted with their posterior probability according to Bayes' theorem. Despite this rigorous procedure, we see four obstacles to robust model ranking: (1) The weights inherit uncertainty related to measurement noise in the calibration data set, which may compromise the reliability of model ranking. (2) Posterior weights rank the models only relative to each other, but do not contain information about the absolute model performance. (3) There is a lack of objective methods to assess whether the suggested models are practically distinguishable or very similar to each other, i.e., whether the individual models explore different regions of the model space. (4) No theory for optimal design (OD) of experiments exists that explicitly aims at maximum-confidence model discrimination. The goal of our study is to overcome these four shortcomings. We determine the robustness of weights against measurement noise (1) by repeatedly perturbing the observed data with random measurement errors and analyzing the variability in the obtained weights. Realizing that model weights have a probability distribution of their own, we introduce an additional term into the overall prediction uncertainty analysis scheme which we call 'weighting uncertainty'. We further assess an 'absolute distance' in performance of the model set from the truth (2) as seen through the eyes of the data by interpreting statistics of Bayesian model evidence. This analysis is of great value for modellers to decide, if the modelling task can be satisfactorily carried out with the model(s) at hand, or if more effort should be invested in extending the set with better performing models. As a further prerequisite for robust model selection, we scrutinize the ability of BMA to distinguish between the models in
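The two core operations described above — deriving posterior model weights from evidences and checking their robustness under data perturbation — can be sketched directly. The two-model likelihood, noise level, and data below are toy assumptions, not the study's hydrological models.

```python
import numpy as np

def bma_weights(log_evidences):
    """Posterior model weights from log model evidences, assuming equal
    prior model probabilities (numerically stabilized softmax)."""
    le = np.asarray(log_evidences, float)
    w = np.exp(le - le.max())
    return w / w.sum()

def weight_variability(log_lik_fn, data, noise_sd, n_reps=200, seed=0):
    """Re-derive weights under repeated random perturbation of the
    calibration data, mimicking the robustness check in the abstract."""
    rng = np.random.default_rng(seed)
    samples = []
    for _ in range(n_reps):
        noisy = data + rng.normal(0.0, noise_sd, size=len(data))
        samples.append(bma_weights([log_lik_fn(m, noisy) for m in (0, 1)]))
    return np.array(samples).std(axis=0)   # 'weighting uncertainty' per model

w = bma_weights([-10.0, -12.0])   # model 1 favoured by a Bayes factor of e^2
# Toy likelihood: two models that predict constant means 0 and 1.
sd = weight_variability(lambda m, d: -0.5 * np.sum((d - m) ** 2),
                        np.zeros(10), noise_sd=0.3)
```

The spread `sd` is the extra term the authors fold into the overall prediction uncertainty: if the weights swing widely under plausible measurement noise, the model ranking should not be trusted.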
Kernel Method Based Human Model for Enhancing Interactive Evolutionary Optimization
Zhao, Qiangfu; Liu, Yong
2015-01-01
A fitness landscape represents the relationship between an individual and its reproductive success in evolutionary computation (EC). However, a discrete and approximate landscape in the original search space may not supply sufficient or accurate information for EC search, especially in interactive EC (IEC). The fitness landscape of human subjective evaluation in IEC is very difficult, if not impossible, to model, even with a hypothesis of what its definition might be. In this paper, we propose a method to establish a human model in a projected high-dimensional search space by kernel classification for enhancing IEC search. Because bivalent logic is the simplest perceptual paradigm, the human model is established following this paradigm. In the feature space, we design a linear classifier as a human model to obtain user preference knowledge, which cannot be obtained linearly in the original discrete search space. The human model established by this method predicts the potential perceptual preferences of the human user. With the human model, we design an evolution control method to enhance IEC search. Experimental evaluation results with a pseudo-IEC user show that our proposed model and method can enhance IEC search significantly. PMID:25879050
ERIC Educational Resources Information Center
Wu, Jason H.
2013-01-01
This study was designed to examine the construct of academic optimism and its relationship with collective responsibility in a sample of Taiwan elementary schools. The construct of academic optimism was tested using confirmatory factor analysis, and the whole structural model was tested with a structural equation modeling analysis. The data were…
The analysis of optimal singular controls for SEIR model of tuberculosis
NASA Astrophysics Data System (ADS)
Marpaung, Faridawaty; Rangkuti, Yulita M.; Sinaga, Marlina S.
2014-12-01
The optimality of singular controls for an SEIR model of tuberculosis is analyzed. The controls correspond to the timing of the vaccination and treatment schedules. Optimality of the singular control is obtained by differentiating the switching function of the model. The results show that both the vaccination and treatment controls are singular.
Optimal bispectrum constraints on single-field models of inflation
Anderson, Gemma J.; Regan, Donough; Seery, David
2014-07-01
We use WMAP 9-year bispectrum data to constrain the free parameters of an 'effective field theory' describing fluctuations in single-field inflation. The Lagrangian of the theory contains a finite number of operators associated with unknown mass scales. Each operator produces a fixed bispectrum shape, which we decompose into partial waves in order to construct a likelihood function. Based on this likelihood we are able to constrain four linearly independent combinations of the mass scales. As an example of our framework we specialize our results to the case of 'Dirac-Born-Infeld' and 'ghost' inflation and obtain the posterior probability for each model, which in Bayesian schemes is a useful tool for model comparison. Our results suggest that DBI-like models with two or more free parameters are disfavoured by the data by comparison with single-parameter models in the same class.
Modeling Illicit Drug Use Dynamics and Its Optimal Control Analysis
2015-01-01
The global burden of death and disability attributable to illicit drug use remains a significant threat to public health for both developed and developing nations. This paper presents a new mathematical modeling framework to investigate the effects of illicit drug use in the community. In our model the transmission process is captured as a social “contact” process between the susceptible individuals and illicit drug users. We conduct both epidemic and endemic analysis, with a focus on the threshold dynamics characterized by the basic reproduction number. Using our model, we present illustrative numerical results with a case study in the Cape Town, Gauteng, Mpumalanga, and Durban communities of South Africa. In addition, the basic model is extended to incorporate time-dependent intervention strategies. PMID:26819625
Kobayashi, Mimako; Carpenter, Tim E; Dickey, Bradley F; Howitt, Richard E
2007-05-16
A dynamic optimization model was used to search for optimal strategies to control foot-and-mouth disease (FMD) in the three-county region in the Central Valley of California. The model minimized total regional epidemic cost by choosing the levels of depopulation of diagnosed herds, preemptive depopulation, and vaccination. Impacts of limited carcass disposal capacity and vaccination were also examined, and the shadow value, the implicit value of each capacity, was estimated. The model found that to control FMD in the region, (1) preemptive depopulation was not optimal, (2) vaccination, if allowed, was optimal, reducing total cost by 3-7%, (3) increased vaccination capacity reduced total cost up to US$119 per dose, (4) increased carcass disposal capacity reduced total cost by US$9000-59,400 per head with and without vaccination, respectively, and (5) dairy operations should be given preferential attention in allocating limited control resources. PMID:17280730
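The paper's dynamic optimization model is far richer than a single linear program, but its core idea, minimizing control cost subject to an epidemic-control requirement and capacity limits, and reading a shadow value off a capacity, can be sketched statically. All costs, effectiveness coefficients, and capacities below are invented for illustration; the decision variables stand in for diagnosed-herd depopulation, preemptive depopulation, and vaccine doses.

```python
import numpy as np
from scipy.optimize import linprog

c = np.array([50.0, 80.0, 4.0])          # cost per herd / per dose (illustrative)
effect = np.array([1.0, 0.5, 0.1])       # 'effective units' of control per unit
required = 100.0                         # effective units needed to contain the epidemic

def total_cost(vax_capacity):
    """Least-cost control mix subject to capacities; vaccination capacity varies."""
    bounds = [(0, 120), (0, 60), (0, vax_capacity)]
    res = linprog(c, A_ub=[-effect], b_ub=[-required], bounds=bounds)
    return res.fun

base = total_cost(500)
# Shadow value of vaccination capacity: marginal saving from one extra dose.
shadow = base - total_cost(501)
print(base, shadow)
```

With these toy numbers, vaccination is used to capacity and each additional dose displaces more expensive depopulation, so the capacity carries a positive shadow value.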
OPTIMIZING MODEL PERFORMANCE: VARIABLE SIZE RESOLUTION IN CLOUD CHEMISTRY MODELING. (R826371C005)
Under many conditions size-resolved aqueous-phase chemistry models predict higher sulfate production rates than comparable bulk aqueous-phase models. However, there are special circumstances under which bulk and size-resolved models offer similar predictions. These special con...
Process Cost Modeling for Multi-Disciplinary Design Optimization
NASA Technical Reports Server (NTRS)
Bao, Han P.; Freeman, William (Technical Monitor)
2002-01-01
For early design concepts, the conventional approach to cost is normally some kind of parametric weight-based cost model. There is now ample evidence that this approach can be misleading and inaccurate. By the nature of its development, a parametric cost model requires historical data and is valid only if the new design is analogous to those for which the model was derived. Advanced aerospace vehicles have no historical production data and are nowhere near the vehicles of the past. Using an existing weight-based cost model would only lead to errors and distortions of the true production cost. This report outlines the development of a process-based cost model in which the physical elements of the vehicle are costed according to a first-order dynamics model. This theoretical cost model, first advocated by early work at MIT, has been expanded to cover the basic structures of an advanced aerospace vehicle. Elemental costs based on the geometry of the design can be summed up to provide an overall estimation of the total production cost for a design configuration. This capability to directly link any design configuration to realistic cost estimation is a key requirement for high payoff MDO problems. Another important consideration in this report is the handling of part or product complexity. Here the concept of cost modulus is introduced to take into account variability due to different materials, sizes, shapes, precision of fabrication, and equipment requirements. The most important implication of the development of the proposed process-based cost model is that different design configurations can now be quickly related to their cost estimates in a seamless calculation process easily implemented on any spreadsheet tool. In successive sections, the report addresses the issues of cost modeling as follows. First, an introduction is presented to provide the background for the research work. Next, a quick review of cost estimation techniques is made with the intention to
NASA Astrophysics Data System (ADS)
Yi, G. L.; Sui, Y. K.
2015-10-01
The objective and constraint functions related to structural optimization designs are classified into economic and performance indexes in this paper. The influences of their different roles in model construction of structural topology optimization are also discussed. Furthermore, two structural topology optimization models, optimizing a performance index under the limitation of an economic index, represented by the minimum compliance with a volume constraint (MCVC) model, and optimizing an economic index under the limitation of a performance index, represented by the minimum weight with a displacement constraint (MWDC) model, are presented. Based on a comparison of numerical example results, the conclusions can be summarized as follows: (1) under the same external loading and displacement performance conditions, the results of the MWDC model are almost equal to those of the MCVC model; (2) the MWDC model overcomes the difficulties and shortcomings of the MCVC model; this makes the MWDC model more feasible in model construction; (3) constructing a model of minimizing an economic index under the limitations of performance indexes is better at meeting the needs of practical engineering problems and completely satisfies safety and economic requirements in mechanical engineering, which have remained unchanged since the early days of mechanical engineering.
Hartmann, András; Lemos, João M; Vinga, Susana
2015-08-01
The aim of inverse modeling is to capture the system's dynamics through a set of parameterized Ordinary Differential Equations (ODEs). Parameters are often required to fit multiple repeated measurements or different experimental conditions, which typically leads to a multi-objective optimization problem that can be formulated as a non-convex optimization problem. Modeling of glucose utilization by Lactococcus lactis bacteria is considered using in vivo Nuclear Magnetic Resonance (NMR) measurements in perturbation experiments. We propose an ODE model based on a modified time-varying exponential decay that is flexible enough to model several different experimental conditions. The starting point is an over-parameterized non-linear model that is subsequently simplified through an optimization procedure with regularization penalties. For parameter estimation, a stochastic global optimization method, particle swarm optimization (PSO), is used. A regularization term is introduced into the identification, imposing that parameters be the same across several experiments in order to identify a general model. A function is then fitted to the remaining parameter that varies across experiments, so that new experiments can be predicted for any initial condition. The method is cross-validated by fitting the model to two experiments and validating on the third. Finally, the proposed model is integrated with existing models of glycolysis in order to reconstruct the remaining metabolites. The method was found useful as a general procedure to reduce the number of parameters of unidentifiable and over-parameterized models, thus supporting feature selection methods for parametric models. PMID:25248561
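A minimal sketch of the estimation idea: a global-best PSO fits a simple exponential-decay model to two synthetic "experiments", with a quadratic penalty tying the decay rate across them, a stand-in for the paper's regularization that shares parameters across experiments. The model form, data, penalty weight, and PSO tuning are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 5, 30)
# Two hypothetical perturbation experiments sharing the decay rate k = 0.8.
y1 = 4.0 * np.exp(-0.8 * t) + rng.normal(0, 0.05, t.size)
y2 = 2.5 * np.exp(-0.8 * t) + rng.normal(0, 0.05, t.size)

def cost(p, lam=10.0):
    a1, k1, a2, k2 = p
    sse = (np.sum((y1 - a1 * np.exp(-k1 * t)) ** 2)
           + np.sum((y2 - a2 * np.exp(-k2 * t)) ** 2))
    return sse + lam * (k1 - k2) ** 2   # regularization: shared decay rate

# Minimal global-best particle swarm optimization.
n, d, iters = 40, 4, 300
lo, hi = np.zeros(d), np.array([10.0, 5.0, 10.0, 5.0])
x = rng.uniform(lo, hi, (n, d))
v = np.zeros((n, d))
pbest, pcost = x.copy(), np.apply_along_axis(cost, 1, x)
for _ in range(iters):
    g = pbest[pcost.argmin()]                       # global best
    v = 0.7 * v + 1.5 * rng.random((n, d)) * (pbest - x) \
        + 1.5 * rng.random((n, d)) * (g - x)
    x = np.clip(x + v, lo, hi)
    cst = np.apply_along_axis(cost, 1, x)
    better = cst < pcost
    pbest[better], pcost[better] = x[better], cst[better]

best = pbest[pcost.argmin()]
print(best)  # amplitudes near 4.0 and 2.5, shared decay rate near 0.8
```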
Optimizing modelling in iterative image reconstruction for preclinical pinhole PET
NASA Astrophysics Data System (ADS)
Goorden, Marlies C.; van Roosmalen, Jarno; van der Have, Frans; Beekman, Freek J.
2016-05-01
The recently developed versatile emission computed tomography (VECTor) technology enables high-energy SPECT and simultaneous SPECT and PET of small animals at sub-mm resolutions. VECTor uses dedicated clustered pinhole collimators mounted in a scanner with three stationary large-area NaI(Tl) gamma detectors. Here, we develop and validate dedicated image reconstruction methods that compensate for image degradation by incorporating accurate models for the transport of high-energy annihilation gamma photons. Ray tracing software was used to calculate photon transport through the collimator structures and into the gamma detector. Input to this code are several geometric parameters estimated from system calibration with a scanning 99mTc point source. Effects on reconstructed images of (i) modelling variable depth-of-interaction (DOI) in the detector, (ii) incorporating photon paths that go through multiple pinholes (‘multiple-pinhole paths’ (MPP)), and (iii) including various amounts of point spread function (PSF) tail were evaluated. Imaging 18F in resolution and uniformity phantoms showed that including large parts of PSFs is essential to obtain good contrast-noise characteristics and that DOI modelling is highly effective in removing deformations of small structures, together leading to 0.75 mm resolution PET images of a hot-rod Derenzo phantom. Moreover, MPP modelling reduced the level of background noise. These improvements were also clearly visible in mouse images. Performance of VECTor can thus be significantly improved by accurately modelling annihilation gamma photon transport.
On the model-based optimization of secreting mammalian cell (GS-NS0) cultures.
Kiparissides, A; Pistikopoulos, E N; Mantalaris, A
2015-03-01
The global bio-manufacturing industry requires improved process efficiency to satisfy the increasing demands for biochemicals, biofuels, and biologics. The use of model-based techniques can facilitate the reduction of unnecessary experimentation and reduce labor and operating costs by identifying the most informative experiments and providing strategies to optimize the bioprocess at hand. Herein, we investigate the potential of a research methodology that combines model development, parameter estimation, global sensitivity analysis, and selection of optimal feeding policies via dynamic optimization methods to improve the efficiency of an industrially relevant bioprocess. Data from a set of batch experiments was used to estimate values for the parameters of an unstructured model describing monoclonal antibody (mAb) production in GS-NS0 cell cultures. Global Sensitivity Analysis (GSA) highlighted parameters with a strong effect on the model output and data from a fed-batch experiment were used to refine their estimated values. Model-based optimization was used to identify a feeding regime that maximized final mAb titer. An independent fed-batch experiment was conducted to validate both the results of the optimization and the predictive capabilities of the developed model. The successful integration of wet-lab experimentation and mathematical model development, analysis, and optimization represents a unique, novel, and interdisciplinary approach that addresses the complicated research and industrial problem of model-based optimization of cell based processes. PMID:25219609
Optimal speech motor control and token-to-token variability: a Bayesian modeling approach.
Patri, Jean-François; Diard, Julien; Perrier, Pascal
2015-12-01
The remarkable capacity of the speech motor system to adapt to various speech conditions is due to an excess of degrees of freedom, which enables producing similar acoustical properties with different sets of control strategies. To explain how the central nervous system selects one of the possible strategies, a common approach, in line with optimal motor control theories, is to model speech motor planning as the solution of an optimality problem based on cost functions. Despite the success of this approach, one of its drawbacks is the intrinsic contradiction between the concept of optimality and the observed experimental intra-speaker token-to-token variability. The present paper proposes an alternative approach by formulating feedforward optimal control in a probabilistic Bayesian modeling framework. This is illustrated by controlling a biomechanical model of the vocal tract for speech production and by comparing it with an existing optimal control model (GEPPETO). The essential elements of this optimal control model are presented first. From them the Bayesian model is constructed in a progressive way. Performance of the Bayesian model is evaluated based on computer simulations and compared to the optimal control model. This approach is shown to be appropriate for solving the speech planning problem while accounting for variability in a principled way. PMID:26497359
Optimized continuous pharmaceutical manufacturing via model-predictive control.
Rehrl, Jakob; Kruisz, Julia; Sacher, Stephan; Khinast, Johannes; Horn, Martin
2016-08-20
This paper demonstrates the application of model-predictive control to a feeding blending unit used in continuous pharmaceutical manufacturing. The goal of this contribution is, on the one hand, to highlight the advantages of the proposed concept compared to conventional PI-controllers, and, on the other hand, to present a step-by-step guide for controller synthesis. The derivation of the required mathematical plant model is given in detail and all the steps required to develop a model-predictive controller are shown. Compared to conventional concepts, the proposed approach conveniently accommodates constraints (e.g., mass hold-up in the blender) and offers a straightforward, easy-to-tune controller setup. The concept is implemented in a simulation environment. In order to realize it on a real system, additional aspects (e.g., state estimation, measurement equipment) will have to be investigated. PMID:27317987
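The receding-horizon idea can be sketched with a toy blender-style mass balance: at each step, feed rates over a short horizon are optimized subject to a hold-up constraint, and only the first move is applied. The plant model, constraint values, and weights below are assumptions, and scipy's SLSQP solver stands in for a dedicated QP solver.

```python
import numpy as np
from scipy.optimize import minimize

# Toy plant: blender mass hold-up m[k+1] = m[k] + Ts*(u[k] - dout).
Ts, dout, m_ref = 1.0, 2.0, 10.0   # sample time, outflow, hold-up setpoint
H = 5                              # prediction horizon

def predict(m0, u):
    """Predicted hold-up trajectory for a feed-rate sequence u."""
    m, traj = m0, []
    for uk in u:
        m = m + Ts * (uk - dout)
        traj.append(m)
    return np.array(traj)

def mpc_step(m0):
    """One receding-horizon step: optimize over the horizon, apply first move."""
    obj = lambda u: np.sum((predict(m0, u) - m_ref) ** 2) + 0.1 * np.sum(np.diff(u) ** 2)
    cons = [{"type": "ineq", "fun": lambda u: 15.0 - predict(m0, u)},  # hold-up cap
            {"type": "ineq", "fun": lambda u: predict(m0, u)}]        # non-negative
    res = minimize(obj, np.full(H, dout), bounds=[(0.0, 5.0)] * H, constraints=cons)
    return res.x[0]

# Closed loop from an initial hold-up of 4.
m = 4.0
for _ in range(10):
    m = m + Ts * (mpc_step(m) - dout)
print(round(m, 2))  # settles near the setpoint of 10
```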
Reproducing Phenomenology of Peroxidation Kinetics via Model Optimization
NASA Astrophysics Data System (ADS)
Ruslanov, Anatole D.; Bashylau, Anton V.
2010-06-01
We studied mathematical modeling of lipid peroxidation using a biochemical model system of iron (II)-ascorbate-dependent lipid peroxidation of rat hepatocyte mitochondrial fractions. We found that antioxidants extracted from plants demonstrate a high intensity of peroxidation inhibition. We simplified the system of differential equations that describes the kinetics of the mathematical model to a first order equation, which can be solved analytically. Moreover, we endeavor to algorithmically and heuristically recreate the processes and construct an environment that closely resembles the corresponding natural system. Our results demonstrate that it is possible to theoretically predict both the kinetics of oxidation and the intensity of inhibition without resorting to analytical and biochemical research, which is important for cost-effective discovery and development of medical agents with antioxidant action from the medicinal plants.
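The reduction to an analytically solvable first-order equation can be illustrated with a generic form dP/dt = k(S0 − P), whose closed-form solution is P(t) = S0(1 − e^(−kt)); the rate constant and substrate pool below are hypothetical, and a forward-Euler integration is used only to check the analytic solution.

```python
import numpy as np

# Hypothetical reduced first-order kinetics: dP/dt = k*(S0 - P).
k, S0 = 0.5, 1.0
t = np.linspace(0, 10, 101)
P_analytic = S0 * (1 - np.exp(-k * t))   # closed-form solution

# Forward-Euler integration of the same ODE as a numerical check.
P, P_num = 0.0, []
dt = t[1] - t[0]
for _ in t:
    P_num.append(P)
    P += dt * k * (S0 - P)

print(np.max(np.abs(P_analytic - np.array(P_num))))  # small O(dt) error
```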
Grain size modeling and optimization of rotary forged Alloy 718
Domblesky, J.P.; Shivpuri, R.
1997-04-01
The study presented describes the simulation procedure and methodology used to develop two models for predicting recrystallized grain size in Alloy 718 billet. To simulate multiple pass forging of billet, controlled, high temperature compression testing was used to apply alternate deformation and dwell cycles to Alloy 718 specimens. Grain size obtained by simulation was found to be in excellent agreement with grain size from forged billet when cooling rate was included. The study also revealed that strain per pass and forging temperature were the predominant factors in controlling the recrystallized grain size. Both models were found to accurately predict the recrystallized grain size obtained by compression tests performed at super-solvus temperatures.
Optimal Estimation of Phenological Crop Model Parameters for Rice (Oryza sativa)
NASA Astrophysics Data System (ADS)
Sharifi, H.; Hijmans, R. J.; Espe, M.; Hill, J. E.; Linquist, B.
2015-12-01
Crop phenology models are important components of crop growth models. In the case of phenology models, generally only a few parameters are calibrated and default cardinal temperatures are used, which can lead to a temperature-dependent systematic phenology prediction error. Our objective was to evaluate different optimization approaches in the Oryza2000 and CERES-Rice phenology sub-models to assess the importance of optimizing cardinal temperatures on model performance and systematic error. We used two optimization approaches: the typical single-stage optimization (planting to heading) and a three-stage optimization (planting to panicle initiation (PI), PI to heading (HD), and HD to physiological maturity (MT)) in which all model parameters are optimized simultaneously. Data for this study were collected over three years and six locations on seven California rice cultivars. A temperature-dependent systematic error was found for all cultivars and stages; however, it was generally small (systematic error < 2.2). Both optimization approaches in both models resulted in only small changes in cardinal temperatures relative to the default values, and thus optimization of cardinal temperatures did not affect systematic error or model performance. Compared to single-stage optimization, three-stage optimization had little effect on determining the time to PI or HD but significantly improved the precision in determining the time from HD to MT: the RMSE was reduced from an average of 6 to 3.3 in Oryza2000 and from 6.6 to 3.8 in CERES-Rice. With regard to systematic error, we found a trade-off between RMSE and systematic error when the optimization objective was set to minimize either one. It is therefore important to find the limits within which the trade-offs between RMSE and systematic error are acceptable, especially in climate change studies, where this can prevent erroneous conclusions.
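The two quantities traded off above can be sketched on toy numbers: RMSE measures overall scatter, while a "systematic error" can be taken as the trend of the prediction error across the season (a temperature-dependent drift rather than random noise). The data and this particular definition of systematic error are illustrative assumptions; the study's metric may be defined differently.

```python
import numpy as np

# Hypothetical predicted vs. observed days-to-heading for one cultivar.
obs = np.array([78, 82, 85, 90, 95, 99], dtype=float)
pred = np.array([80, 83, 84, 93, 96, 103], dtype=float)
err = pred - obs

rmse = np.sqrt(np.mean(err ** 2))
# Systematic error (here): slope of the error against observed timing,
# i.e., a drift that grows through the season rather than random scatter.
slope = np.polyfit(obs, err, 1)[0]
print(round(rmse, 2), round(slope, 3))
```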
A Concept Model for Optimizing Contact Time in Geography Teacher Training Programs
ERIC Educational Resources Information Center
Golightly, A.; Nieuwoudt, H. D.; Richter, B. W.
2006-01-01
This article describes the results of a study of a concept model for optimizing contact time between the Geography educator and the student teacher within a learner-centered teaching paradigm. A quasi-experimental time series approach was used in order to change aspects of the model as the research progressed. Changes to the model were made on the…
The Search for "Optimal" Cutoff Properties: Fit Index Criteria in Structural Equation Modeling
ERIC Educational Resources Information Center
Sivo, Stephen A.; Xitao, Fan; Witta, E. Lea; Willse, John T.
2006-01-01
This study is a partial replication of L. Hu and P. M. Bentler's (1999) fit criteria work. The purpose of this study was twofold: (a) to determine whether cut-off values vary according to which model is the true population model for a dataset and (b) to identify which of 13 fit indexes behave optimally by retaining all of the correct models while…
Building Restoration Operations Optimization Model Beta Version 1.0
2007-05-31
The Building Restoration Operations Optimization Model (BROOM), developed by Sandia National Laboratories, is a software product designed to aid in the restoration of large facilities contaminated by a biological material. BROOM's integrated data collection, data management, and visualization software improves the efficiency of cleanup operations, minimizes facility downtime, and provides a transparent basis for reopening the facility.

Secure remote access to building floor plans: Floor plan drawings and knowledge of the HVAC system are critical to the design and implementation of effective sampling plans. In large facilities, access to these data may be complicated by their sheer abundance and the disorganized state in which they are often stored. BROOM avoids potentially costly delays by providing a means of organizing and storing mechanical and floor plan drawings in a secure remote database that is easily accessed.

Sampling design tools: BROOM provides an array of tools to answer the questions of where to sample and how many samples to take. In addition to simple judgmental and random sampling plans, the software includes two sophisticated methods of adaptively developing a sampling strategy. Both tools strive to choose sampling locations that best satisfy a specified objective (e.g., minimizing kriging variance) but use numerically different strategies to do so. Surface samples are collected early in the restoration process to characterize the extent of contamination and then again later to verify that the facility is safe to reenter. BROOM supports sample collection using a ruggedized PDA equipped with a barcode scanner and laser range finder. The PDA displays building floor drawings, sampling plans, and electronic forms for data entry. Barcodes are placed on sample containers for the purpose of tracking the specimen and linking acquisition data (e.g., location, surface type, texture) to laboratory results. Sample location is determined by activating the integrated laser
Optimizing the Informal Curriculum: One Counselor Education Program Model
ERIC Educational Resources Information Center
Myers, Jane E.; Borders, L. DiAnne; Kress, Victoria E.; Shoffner, Marie
2005-01-01
An annual project involving students and faculty in a collaborative, 6-month planning process that culminates in a half-day program with both didactic and experiential components is presented as a model for creating powerful learning experiences external to the classroom. This article examines the CACREP (The 2001 Standards of the Council for…
Optimizing technology investments: a broad mission model approach
NASA Technical Reports Server (NTRS)
Shishko, R.
2003-01-01
A long-standing problem in NASA is how to allocate scarce technology development resources across advanced technologies in order to best support a large set of future potential missions. Within NASA, two orthogonal paradigms have received attention in recent years: the real-options approach and the broad mission model approach. This paper focuses on the latter.
3D head model classification using optimized EGI
NASA Astrophysics Data System (ADS)
Tong, Xin; Wong, Hau-san; Ma, Bo
2006-02-01
With the general availability of 3D digitizers and scanners, 3D graphical models have been used widely in a variety of applications. This has led to the development of search engines for 3D models. In particular, 3D head model classification and retrieval have received more and more attention in view of their many potential applications in criminal identification, computer animation, the movie industry, and the medical industry. This paper addresses the 3D head model classification problem using 2D subspace analysis methods such as 2D principal component analysis (2DPCA [3]) and 2D Fisher discriminant analysis (2DLDA [5]). It takes advantage of the fact that the EGI histogram is a 2D image, from which the most useful information can be extracted directly. As a result, there are two main advantages: first, the same classification rate can be obtained with less computation; second, the dimensionality can be reduced further than with PCA, yielding higher efficiency.
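The 2DPCA step can be sketched as follows: the image covariance matrix is accumulated directly from the 2D samples, without vectorizing them as classical PCA would, and each sample is projected onto the leading eigenvectors. Random matrices stand in for the EGI histograms here; sizes and the number of retained axes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical stand-ins for EGI histograms: 20 samples of 16x16 "images".
A = rng.normal(size=(20, 16, 16))
Abar = A.mean(axis=0)

# 2DPCA: image covariance built directly from the 2D samples.
G = sum((a - Abar).T @ (a - Abar) for a in A) / len(A)
vals, vecs = np.linalg.eigh(G)      # ascending eigenvalues
W = vecs[:, ::-1][:, :4]            # top-4 projection axes

features = A @ W                    # each sample becomes a 16x4 feature matrix
print(features.shape)
```

Because each feature matrix is only 16x4 instead of a 256-dimensional vector, downstream classification needs far less computation, which is the efficiency argument made in the abstract.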
NASA Astrophysics Data System (ADS)
Vesselinov, V. V.; Harp, D.
2010-12-01
The process of decision making to protect groundwater resources requires a detailed estimation of uncertainties in model predictions. Various uncertainties associated with modeling a natural system, such as: (1) measurement and computational errors; (2) uncertainties in the conceptual model and model-parameter estimates; (3) simplifications in model setup and numerical representation of governing processes, contribute to the uncertainties in the model predictions. Due to this combination of factors, the sources of predictive uncertainties are generally difficult to quantify individually. Decision support related to optimal design of monitoring networks requires (1) detailed analyses of existing uncertainties related to model predictions of groundwater flow and contaminant transport, (2) optimization of the proposed monitoring network locations in terms of their efficiency to detect contaminants and provide early warning. We apply existing and newly-proposed methods to quantify predictive uncertainties and to optimize well locations. An important aspect of the analysis is the application of newly-developed optimization technique based on coupling of Particle Swarm and Levenberg-Marquardt optimization methods which proved to be robust and computationally efficient. These techniques and algorithms are bundled in a software package called MADS. MADS (Model Analyses for Decision Support) is an object-oriented code that is capable of performing various types of model analyses and supporting model-based decision making. The code can be executed under different computational modes, which include (1) sensitivity analyses (global and local), (2) Monte Carlo analysis, (3) model calibration, (4) parameter estimation, (5) uncertainty quantification, and (6) model selection. The code can be externally coupled with any existing model simulator through integrated modules that read/write input and output files using a set of template and instruction files (consistent with the PEST
NASA Astrophysics Data System (ADS)
Reimer, J.; Schürch, M.; Slawig, T.
2014-09-01
The weighted least squares estimator for model parameters was presented together with its asymptotic properties. A popular approach to optimizing experimental designs, called locally optimal experimental design, was described together with a lesser-known approach that takes into account potential nonlinearity of the model parameters. These two approaches were combined with two different methods of solving their underlying discrete optimization problem. All presented methods were implemented in an open-source MATLAB toolbox called the Optimal Experimental Design Toolbox, whose structure and handling were described. In numerical experiments, the model parameters and experimental design were optimized using this toolbox. Two models of different complexity for sediment concentration in seawater served as application examples. The advantages and disadvantages of the different approaches were compared, and an evaluation of the approaches was performed.
Applying Mathematical Optimization Methods to an ACT-R Instance-Based Learning Model
Said, Nadia; Engelhart, Michael; Kirches, Christian; Körkel, Stefan; Holt, Daniel V.
2016-01-01
Computational models of cognition provide an interface to connect advanced mathematical tools and methods to empirically supported theories of behavior in psychology, cognitive science, and neuroscience. In this article, we consider a computational model of instance-based learning, implemented in the ACT-R cognitive architecture. We propose an approach for obtaining mathematical reformulations of such cognitive models that improve their computational tractability. For the well-established Sugar Factory dynamic decision making task, we conduct a simulation study to analyze central model parameters. We show how mathematical optimization techniques can be applied to efficiently identify optimal parameter values with respect to different optimization goals. Beyond these methodological contributions, our analysis reveals the sensitivity of this particular task with respect to initial settings and yields new insights into how average human performance deviates from potential optimal performance. We conclude by discussing possible extensions of our approach as well as future steps towards applying more powerful derivative-based optimization methods. PMID:27387139
Comparison of optimization methods for the hyperspectral semi-analytical model
NASA Astrophysics Data System (ADS)
Du, KePing; Xi, Ying; Sun, LiRan; Zhang, Xuegang
2009-01-01
In recent years, increasing effort has been devoted to developing models based on ocean optics theory that retrieve the bio-geo-chemical parameters or inherent optical properties (IOPs) of water from either ocean color imagery or in situ measurements. These models are typically sophisticated and hard to invert directly, so look-up table (LUT) techniques or optimization methods are employed to retrieve the unknown parameters, e.g., chlorophyll concentration, CDOM absorption, etc. Many studies favor time-consuming global optimization methods, e.g., genetic or evolutionary algorithms. In this study, different optimization methods - smooth nonlinear programming (NLP), global optimization (GO), and nonsmooth optimization (NSP) - are compared on the sophisticated hyperspectral semi-analytical (SA) algorithm developed by Lee et al., and retrieval accuracy and performance are evaluated. It is found that retrieval accuracy differs little among the methods, but the performance difference is much larger, and NLP works very well for the SA model. For a given model, it is better to determine in advance whether the problem is linear, nonlinear, or nonsmooth (and sometimes whether it is convex), or to linearize nonsmooth behavior caused by conditional branches, and then select an optimization method of the corresponding category. The selection of initial values is a major issue for optimization; here, simple statistical models (e.g., OC2 or OC4) are used to retrieve the unknowns as initial values.
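The two-step recipe recommended above (seed a smooth local optimizer with an estimate from a simple statistical model) can be illustrated on a toy one-parameter inversion. The forward function below is an invented stand-in, not the semi-analytical model of Lee et al., and the OC2/OC4 step is replaced by a fixed first guess:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical forward model: remote-sensing reflectance as a function of one
# unknown (a stand-in; the real SA model has several IOP unknowns).
def forward(chl, wavelengths):
    return 0.02 / (1.0 + 0.1 * chl * np.exp(-((wavelengths - 440.0) / 80.0)**2))

wl = np.linspace(400.0, 700.0, 31)
rrs_obs = forward(2.5, wl)          # synthetic "measurement", true chl = 2.5

# Step 1: a simple statistical model (stand-in for OC2/OC4) gives the start.
chl0 = 1.0
# Step 2: a smooth local (NLP-style) optimizer refines it.
res = minimize(lambda p: ((forward(p[0], wl) - rrs_obs)**2).sum(),
               x0=[chl0], method="Nelder-Mead")
```

With a good starting point from the statistical model, the local search recovers the true value without resorting to an expensive global method.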
NASA Astrophysics Data System (ADS)
Zakynthinaki, M. S.; Stirling, J. R.
2007-01-01
Stochastic optimization is applied to the problem of optimizing the fit of a model to the time series of raw physiological (heart rate) data. The physiological response to exercise has been recently modeled as a dynamical system. Fitting the model to a set of raw physiological time series data is, however, not a trivial task. For this reason and in order to calculate the optimal values of the parameters of the model, the present study implements the powerful stochastic optimization method ALOPEX IV, an algorithm that has been proven to be fast, effective and easy to implement. The optimal parameters of the model, calculated by the optimization method for the particular athlete, are very important as they characterize the athlete's current condition. The present study applies the ALOPEX IV stochastic optimization to the modeling of a set of heart rate time series data corresponding to different exercises of constant intensity. An analysis of the optimization algorithm, together with an analytic proof of its convergence (in the absence of noise), is also presented.
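The correlation-based update at the heart of ALOPEX can be sketched compactly. This is a simplified generic variant on a toy quadratic cost, not the ALOPEX IV algorithm of the paper or its heart-rate model; the gains, noise level, and step clip are invented:

```python
import numpy as np

def alopex_minimize(f, p0, gamma=1.0, sigma=0.05, max_step=0.2,
                    iters=4000, seed=0):
    """Correlation-based stochastic minimization in the spirit of ALOPEX.

    Each parameter moves against the correlation between its previous
    change and the previous change in the cost, plus Gaussian noise.
    """
    rng = np.random.default_rng(seed)
    p = np.asarray(p0, dtype=float)
    e = f(p)
    p_best, e_best = p.copy(), e
    dp = rng.normal(0.0, sigma, p.shape)          # initial random step
    for _ in range(iters):
        p_new = p + dp
        e_new = f(p_new)
        if e_new < e_best:
            p_best, e_best = p_new.copy(), e_new
        # anti-correlate the next step with (previous step) x (cost change)
        dp = -gamma * dp * (e_new - e) + rng.normal(0.0, sigma, p.shape)
        dp = np.clip(dp, -max_step, max_step)     # keep steps bounded
        p, e = p_new, e_new
    return p_best, e_best

p_best, e_best = alopex_minimize(lambda p: ((p - 3.0)**2).sum(), [0.0, 0.0])
```

Note that only cost evaluations are used; no gradient of the model with respect to its parameters is ever required, which is what makes this class of methods attractive for fitting dynamical models to raw physiological data.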
Yang, Qidong; Zuo, Hongchao; Li, Weidong
2016-01-01
Improving the capability of land-surface process models to simulate soil moisture assists in better understanding the atmosphere-land interaction. In semi-arid regions, limited near-surface observational data and large errors in large-scale parameters obtained by remote sensing lead to uncertainties in land-surface parameters, which can cause large offsets between the simulated results of land-surface process models and the observational data for soil moisture. In this study, observational data from the Semi-Arid Climate Observatory and Laboratory (SACOL) station in the semi-arid loess plateau of China were divided into three datasets: summer, autumn, and summer-autumn. By combining the particle swarm optimization (PSO) algorithm with the land-surface process model SHAW (Simultaneous Heat and Water), the soil and vegetation parameters that are related to soil moisture but difficult to obtain by observation were optimized using the three datasets. On this basis, the SHAW model was run with the optimized parameters to simulate the characteristics of the land-surface process in the semi-arid loess plateau. Simultaneously, the default SHAW model was run with the same atmospheric forcing as a comparison test. Simulation results revealed the following: the parameters optimized by the PSO algorithm improved the simulations of soil moisture and latent heat flux in all tests, and the differences between simulated results and observational data were clearly reduced; however, the tests adopting optimized parameters could not simultaneously improve the simulation results for net radiation, sensible heat flux, and soil temperature. The optimized soil and vegetation parameters based on the different datasets have the same order of magnitude but are not identical; the soil parameters vary only slightly, while the variation range of the vegetation parameters is large. PMID:26991786
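A minimal particle swarm optimizer of the kind used above can be written in a few lines. This sketch minimizes a toy quadratic as a stand-in for the SHAW calibration objective; the swarm size, inertia, and acceleration constants are conventional textbook values, not those of the study:

```python
import numpy as np

def pso_minimize(f, bounds, n_particles=20, iters=200,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Standard global-best PSO with inertia weight w and accelerations c1, c2."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    x = rng.uniform(lo, hi, (n_particles, lo.size))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.array([f(xi) for xi in x])
    g = pbest[pbest_val.argmin()].copy()          # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)                # stay inside the bounds
        val = np.array([f(xi) for xi in x])
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g, float(pbest_val.min())

best, best_val = pso_minimize(
    lambda p: float(((p - np.array([1.0, 2.0]))**2).sum()),
    bounds=[(-5.0, 5.0), (-5.0, 5.0)])
```

In a calibration setting, `f` would run the land-surface model with the candidate parameter vector and return a misfit against the observations, which is why a derivative-free swarm search is convenient.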
A method for generating numerical pilot opinion ratings using the optimal pilot model
NASA Technical Reports Server (NTRS)
Hess, R. A.
1976-01-01
A method for generating numerical pilot opinion ratings using the optimal pilot model is introduced. The method is contained in a rating hypothesis which states that the numerical rating which a human pilot assigns to a specific vehicle and task can be directly related to the numerical value of the index of performance resulting from the optimal pilot modeling procedure as applied to that vehicle and task. The hypothesis is tested using the data from four piloted simulations. The results indicate that the hypothesis is reasonable, but that the predictive capability of the method is a strong function of the accuracy of the pilot model itself. This accuracy is, in turn, dependent upon the parameters which define the optimal modeling problem. A procedure for specifying the parameters for the optimal pilot model in the absence of experimental data is suggested.
Dynamic multiobjective optimization algorithm based on average distance linear prediction model.
Li, Zhiyong; Chen, Hengyong; Xie, Zhaoxin; Chen, Chao; Sallam, Ahmed
2014-01-01
Many real-world optimization problems involve objectives, constraints, and parameters which constantly change with time. Optimization in a changing environment is a challenging task, especially when multiple objectives must be optimized simultaneously. Currently, the common way to solve dynamic multiobjective optimization problems (DMOPs) is to utilize historical information to guide the future search, but no single method succeeds on all DMOPs. In this paper, we define a class of dynamic multiobjective problems with a translational Pareto-optimal set (DMOP-TPS) and propose a new prediction model, named ADLM, for solving DMOP-TPS. We have tested and compared the proposed prediction model (ADLM) with three traditional prediction models on several classic DMOP-TPS test problems. The simulation results show that our proposed prediction model outperforms the other prediction models for DMOP-TPS. PMID:24616625
NASA Astrophysics Data System (ADS)
Deng, Lujuan; Xie, Songhe; Cui, Jiantao; Liu, Tao
2006-11-01
Enhancing grower income while saving energy is the essential goal of optimal control of the intelligent greenhouse environment. Greenhouse environment control systems exhibit uncertainty, imprecision, nonlinearity, strong coupling, large inertia, and multiple time scales, so optimal control of the greenhouse environment is not easy, and model-based optimal control methods are especially difficult. The optimal control problem of the plant environment in an intelligent greenhouse was therefore researched. A hierarchical greenhouse environment control system was constructed: in the first level, data measurement is carried out and the actuators are controlled; in the second level, optimal setpoints for the controlled climate variables in the greenhouse are calculated and chosen; in the third level, market analysis and planning are completed. The problem of the optimal setpoints is discussed in this paper. First, a model of plant canopy photosynthesis response and a greenhouse climate model were constructed. Then, drawing on the experience of planting experts, the daytime optimization goals were set according to the principle of maximal photosynthesis rate, while the nighttime goals, subject to conditions for good plant growth, were set by the principle of energy saving. The environmental optimal-control setpoints were then computed by a genetic algorithm (GA). Comparing the optimized results with data recorded in the real system shows that the method is reasonable and can achieve energy saving and a maximal photosynthesis rate in the intelligent greenhouse.
Using models for the optimization of hydrologic monitoring
Fienen, Michael N.; Hunt, Randall J.; Doherty, John E.; Reeves, Howard W.
2011-01-01
Hydrologists are often asked what kind of monitoring network can most effectively support science-based water-resources management decisions. Currently (2011), hydrologic monitoring locations often are selected by addressing observation gaps in the existing network or non-science issues such as site access. A model might then be calibrated to available data and applied to a prediction of interest (regardless of how well-suited that model is for the prediction). However, modeling tools are available that can inform which locations and types of data provide the most 'bang for the buck' for a specified prediction. Put another way, the hydrologist can determine which observation data most reduce the model uncertainty around a specified prediction. An advantage of such an approach is the maximization of limited monitoring resources because it focuses on the difference in prediction uncertainty with or without additional collection of field data. Data worth can be calculated either through the addition of new data or through the subtraction of existing information by reducing monitoring efforts (Beven, 1993). The latter generally is not widely requested, as there is explicit recognition that the worth calculated is fundamentally dependent on the prediction specified. If a water manager needs a new prediction, the benefits of reducing the scope of a monitoring effort, based on an old prediction, may be erased by the loss of information important for the new prediction. This fact sheet focuses on the worth or value of new data collection by quantifying the reduction in prediction uncertainty achieved by adding a monitoring observation. This calculation of worth can be performed for multiple potential locations (and types) of observations, which then can be ranked for their effectiveness in reducing uncertainty around the specified prediction. This is implemented using a Bayesian approach with the PREDUNC utility in the parameter estimation software suite PEST (Doherty, 2010). The
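For a linear model with Gaussian priors, the prediction-uncertainty reduction described above has a closed form: the prediction variance is yᵀCy − yᵀCXᵀ(XCXᵀ + R)⁻¹XCy, where y holds the prediction's parameter sensitivities, X the observations' sensitivities, C the prior parameter covariance, and R the observation noise covariance. The sketch below is a generic linear-Bayes illustration in the spirit of PREDUNC, with invented numbers, not PEST code:

```python
import numpy as np

def prediction_variance(y, X, C, R):
    """Linear-Bayes variance of the prediction y.p given observations X.p + noise."""
    if X is None or len(X) == 0:
        return float(y @ C @ y)                   # prior prediction variance
    X = np.atleast_2d(X)
    S = X @ C @ X.T + R
    reduction = y @ C @ X.T @ np.linalg.solve(S, X @ C @ y)
    return float(y @ C @ y - reduction)

# Two-parameter example: which candidate observation is worth more?
C = np.diag([1.0, 1.0])           # prior parameter covariance
y = np.array([1.0, 0.0])          # prediction depends only on parameter 1
obs_a = np.array([[1.0, 0.0]])    # candidate A also senses parameter 1
obs_b = np.array([[0.0, 1.0]])    # candidate B senses only parameter 2
R = np.array([[0.01]])            # observation noise variance
base = prediction_variance(y, None, C, R)
worth_a = base - prediction_variance(y, obs_a, C, R)
worth_b = base - prediction_variance(y, obs_b, C, R)
```

Candidate A, which informs the parameter the prediction depends on, has high worth; candidate B has none, which is exactly the ranking logic the fact sheet describes.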
Tholudur, A.; Ramirez, W.F.; McMillan, J.D.
1999-07-01
The enzyme cellulase, a multienzyme complex made up of several proteins, catalyzes the conversion of cellulose to glucose in an enzymatic hydrolysis-based biomass-to-ethanol process. Production of cellulase enzyme proteins in large quantities using the fungus Trichoderma reesei requires understanding the dynamics of growth and enzyme production. The method of neural network parameter function modeling, which combines the approximation capabilities of neural networks with fundamental process knowledge, is utilized to develop a mathematical model of this dynamic system. In addition, kinetic models are also developed. Laboratory data from bench-scale fermentations involving growth and protein production by T. reesei on lactose and xylose are used to estimate the parameters in these models. The relative performance of the various models and the results of optimizing these models on two different performance measures are presented. An approximately 33% lower root-mean-squared error (RMSE) in protein predictions and about 40% lower total RMSE are obtained with the neural network-based model; the RMSE in predicting optimal conditions for the two performance indices is about 67% and 40% lower, respectively, compared with the kinetic models. Thus, both model predictions and optimization results from the neural network-based model are found to be closer to the experimental data than those of the kinetic models developed in this work. It is shown that the neural network parameter function modeling method can be useful as a macromodeling technique to rapidly develop dynamic models of a process.
Manual of phosphoric acid fuel cell power plant optimization model and computer program
NASA Technical Reports Server (NTRS)
Lu, C. Y.; Alkasab, K. A.
1984-01-01
An optimized cost and performance model for a phosphoric acid fuel cell power plant system was derived and developed into a modular FORTRAN computer code. Cost, energy, mass, and electrochemical analyses were combined to develop a mathematical model for optimizing the steam-to-methane ratio in the reformer, the hydrogen utilization in the PAFC, and the number of plates per stack. The nonlinear programming code COMPUTE was used to solve this model, in which a mixed penalty function method combined with Hooke and Jeeves pattern search was chosen to evaluate this specific optimization problem.
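The Hooke and Jeeves pattern search named above is a derivative-free method: exploratory moves probe each coordinate, a successful exploration triggers a "pattern" move through the improved point, and the step shrinks when nothing helps. The sketch below shows only the unconstrained pattern-search component on a toy quadratic; the report's mixed penalty function handling of constraints is omitted:

```python
import numpy as np

def hooke_jeeves(f, x0, step=0.5, shrink=0.5, tol=1e-6, max_iter=10000):
    """Hooke-Jeeves pattern search (derivative-free local minimization)."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    it = 0
    while step > tol and it < max_iter:
        it += 1
        base, fbase = x.copy(), fx
        for i in range(x.size):              # exploratory moves per axis
            for s in (step, -step):
                trial = x.copy()
                trial[i] += s
                ft = f(trial)
                if ft < fx:
                    x, fx = trial, ft
                    break
        if fx < fbase:                       # pattern move through new point
            pattern = x + (x - base)
            fp = f(pattern)
            if fp < fx:
                x, fx = pattern, fp
        else:                                # no improvement: shrink the step
            step *= shrink
    return x, fx

x, fx = hooke_jeeves(lambda p: (p[0] - 1.0)**2 + 2.0 * (p[1] + 0.5)**2,
                     x0=[0.0, 0.0])
```

Because no gradients are needed, the method pairs naturally with penalty functions, where the penalized objective may be poorly behaved near constraint boundaries.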
Global stability and optimal control of an SIRS epidemic model on heterogeneous networks
NASA Astrophysics Data System (ADS)
Chen, Lijuan; Sun, Jitao
2014-09-01
In this paper, we consider an SIRS epidemic model with vaccination on heterogeneous networks. By constructing suitable Lyapunov functions, the global stability of the disease-free equilibrium and the endemic equilibrium of the model is investigated. We also present a first study of an optimally controlled SIRS epidemic model on complex networks. We show that an optimal control exists for the control problem. Finally, some examples are presented to show the global stability and the efficiency of this optimal control. These results can help in adopting pragmatic treatment strategies for diseases in structured populations.
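For readers unfamiliar with the underlying dynamics, the homogeneous (mean-field) SIRS model can be simulated in a few lines; this is the classic well-mixed version, not the heterogeneous-network formulation of the paper, and the rates are invented for illustration:

```python
def simulate_sirs(beta, gamma, delta, s0, i0, r0, dt=0.01, steps=20000):
    """Forward-Euler run of the classic homogeneous SIRS model:
       S' = -beta*S*I + delta*R
       I' =  beta*S*I - gamma*I
       R' =  gamma*I  - delta*R
    """
    s, i, r = s0, i0, r0
    for _ in range(steps):
        ds = -beta * s * i + delta * r
        di = beta * s * i - gamma * i
        dr = gamma * i - delta * r
        s += dt * ds
        i += dt * di
        r += dt * dr
    return s, i, r

# beta/gamma = 2.5 > 1, so the run settles at the endemic equilibrium
# S* = gamma/beta = 0.4, I* = 0.12, R* = 0.48.
s, i, r = simulate_sirs(beta=0.5, gamma=0.2, delta=0.05,
                        s0=0.99, i0=0.01, r0=0.0)
```

The endemic equilibrium whose global stability the paper proves (in the network setting) is exactly the state this simple simulation relaxes to when the basic reproduction number exceeds one.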
The application of temporal difference learning in optimal diet models.
Teichmann, Jan; Broom, Mark; Alonso, Eduardo
2014-01-01
An experience-based aversive learning model of foraging behaviour in uncertain environments is presented. We use Q-learning as a model-free implementation of temporal difference learning, motivated by growing evidence for neural correlates in natural reinforcement settings. The predator has the choice of including an aposematic prey in its diet or foraging on alternative food sources. We show how the predator's foraging behaviour and energy intake depend on the toxicity of the defended prey and the presence of Batesian mimics. We introduce the precondition of exploration of the action space for successful aversion formation and show how it predicts foraging behaviour in the presence of conflicting rewards, which is conditionally suboptimal in a fixed environment but allows better adaptation in changing environments. PMID:24036204
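The core TD(0) update can be illustrated with a single-state, two-action version of the foraging choice. All reward and learning-rate values below are invented for illustration, not the paper's parameterization:

```python
import random

def q_learning_foraging(toxicity, reward_prey=1.0, reward_alt=0.3,
                        alpha=0.1, epsilon=0.1, episodes=3000, seed=0):
    """Single-state Q-learning: a predator chooses between an aposematic
    prey (energy reward minus toxin cost) and a safe alternative.
    epsilon-greedy exploration is the precondition for aversion formation."""
    rng = random.Random(seed)
    q = {"prey": 0.0, "alt": 0.0}
    for _ in range(episodes):
        if rng.random() < epsilon:                    # explore
            action = rng.choice(["prey", "alt"])
        else:                                         # exploit current values
            action = max(q, key=q.get)
        reward = (reward_prey - toxicity) if action == "prey" else reward_alt
        q[action] += alpha * (reward - q[action])     # TD(0) update
    return q

q_mild = q_learning_foraging(toxicity=0.2)   # prey net 0.8 > 0.3: include it
q_toxic = q_learning_foraging(toxicity=0.9)  # prey net 0.1 < 0.3: avoid it
```

Note that in the high-toxicity case the correct preference for the alternative only emerges because exploration keeps sampling it; a purely greedy forager would lock onto the mildly rewarding prey, which is the exploration precondition the paper formalizes.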
Model for optimal parallax in stereo radar imagery
NASA Technical Reports Server (NTRS)
Pisaruck, M. A.; Kaupp, V. H.; Macdonald, H. C.; Waite, W. P.
1984-01-01
Simulated stereo radar imagery is used to investigate parameters for a spaceborne imaging radar. Incidence angles ranging from small to intermediate to large are used with three digital terrain model areas which are representative of relatively flat, moderately rough, and mountainous terrain. The simulated radar imagery was evaluated by interpreters for ease of stereo perception and information content, and rank ordered within each class of terrain. The interpreters' results are analyzed for trends between the height of a feature and either the parallax or the vertical exaggeration of a stereo pair. A model is developed which predicts the amount of parallax (or vertical exaggeration) an interpreter would desire for best stereo perception of a feature of a specific height. Results indicate that the selection of the angle of incidence and the stereo intersection angle depends upon the relief of the terrain. Examples of the simulated stereo imagery are presented for a candidate spaceborne imaging radar having four selectable angles of incidence.
Ayvaz, M Tamer
2010-09-20
This study proposes a linked simulation-optimization model for solving unknown groundwater pollution source identification problems. In the proposed model, the MODFLOW and MT3DMS packages are used to simulate the flow and transport processes in the groundwater system. These models are then integrated with an optimization model based on the heuristic harmony search (HS) algorithm. In the proposed simulation-optimization model, the locations and release histories of the pollution sources are treated as the explicit decision variables and determined through the optimization model. In addition, an implicit solution procedure is proposed to determine the optimum number of pollution sources, which is an advantage of this model. The performance of the proposed model is evaluated on two hypothetical examples with simple and complex aquifer geometries, measurement error conditions, and different HS solution parameter sets. The identified results indicate that the proposed simulation-optimization model is effective and may be used to solve inverse pollution source identification problems. PMID:20633952
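A minimal harmony search of the kind used as the optimizer above can be sketched as follows. This is the generic textbook algorithm on a toy objective, not the paper's MODFLOW/MT3DMS-linked model; `hmcr`, `par`, and `bw` are the standard harmony-memory-considering rate, pitch-adjustment rate, and bandwidth:

```python
import random

def harmony_search(f, bounds, hms=10, hmcr=0.9, par=0.3, bw=0.1,
                   iters=5000, seed=0):
    """Basic harmony search: improvise a new harmony from memory, pitch
    adjustment, or random consideration, and replace the worst if better."""
    rng = random.Random(seed)
    memory = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    costs = [f(h) for h in memory]
    for _ in range(iters):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:                  # draw from harmony memory
                x = rng.choice(memory)[d]
                if rng.random() < par:               # pitch adjustment
                    x = min(hi, max(lo, x + rng.uniform(-bw, bw)))
            else:                                    # random consideration
                x = rng.uniform(lo, hi)
            new.append(x)
        c = f(new)
        worst = max(range(hms), key=costs.__getitem__)
        if c < costs[worst]:                         # replace worst harmony
            memory[worst], costs[worst] = new, c
    best = min(range(hms), key=costs.__getitem__)
    return memory[best], costs[best]

loc, cost = harmony_search(lambda p: (p[0] - 2.0)**2 + (p[1] + 1.0)**2,
                           [(-5.0, 5.0), (-5.0, 5.0)])
```

In the source-identification setting, `f` would run the flow and transport simulation for a candidate set of source locations and release histories and return the misfit against concentration measurements.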
Simulation-Optimization Model for Seawater Intrusion Management at Pingtung Coastal Area, Taiwan
NASA Astrophysics Data System (ADS)
Huang, P. S.; Chiu, Y.
2015-12-01
In the 1970s, agriculture and aquaculture developed rapidly in the Pingtung coastal area of southern Taiwan. The groundwater aquifers were over-pumped, causing seawater intrusion. In order to remediate the contaminated groundwater and find the best strategies for groundwater use, a management model that searches for optimal groundwater operational strategies is developed in this study. The objective function minimizes the total amount of injection water, and a set of constraints is applied to ensure that the groundwater levels and concentrations are satisfied. A three-dimensional density-dependent flow and transport simulation model, SEAWAT, developed by the U.S. Geological Survey, is selected to simulate the phenomenon of seawater intrusion. The simulation model is well calibrated against field measurements and is replaced by a surrogate model of trained artificial neural networks (ANNs) to reduce computational time. The ANNs are embedded in the management model to link the simulation and optimization models, and the global optimizer of differential evolution (DE) is applied to solve the management model. The optimal results show that the fully trained ANNs can substitute for the original simulation model and greatly reduce computational time. Under an appropriate setting of the objective function and constraints, DE can find the optimal injection rates at the predefined barriers. The concentrations at the target locations decrease by more than 50 percent within the planning horizon of 20 years. Keywords: seawater intrusion, groundwater management, numerical model, artificial neural networks, differential evolution
Vertical slot fishways: Mathematical modeling and optimal management
NASA Astrophysics Data System (ADS)
Alvarez-Vázquez, L. J.; Martínez, A.; Vázquez-Méndez, M. E.; Vilar, M. A.
2008-09-01
Fishways are the main type of hydraulic device currently used to facilitate the migration of fish past obstructions (dams, waterfalls, rapids, ...) in rivers. In this paper we present a mathematical formulation of an optimal control problem related to the optimal management of a vertical slot fishway, where the state system is given by the shallow water equations, the control is the flux of inflow water, and the cost function reflects the need for rest areas for fish and for a water velocity suited to fish leaping and swimming capabilities. We give a first-order optimality condition for characterizing the optimal solutions of this problem. From a numerical point of view, we use a characteristic-Galerkin method for solving the shallow water equations, and we use an optimization algorithm for the computation of the optimal control. Finally, we present numerical results obtained for the realistic case of a standard nine-pool fishway.
Study on modeling of multispectral emissivity and optimization algorithm.
Yang, Chunling; Yu, Yong; Zhao, Dongyang; Zhao, Guoliang
2006-01-01
A target's spectral emissivity varies in complex ways, and obtaining a target's continuous spectral emissivity remains a difficult, largely unsolved problem. In this letter, a neural network with tunable activation functions is established, and a multistep search method that can be used to train the model is proposed. The proposed method can effectively calculate an object's continuous spectral emissivity from multispectral radiation measurements. It is a universal method, which can be used to realize on-line emissivity calibration. PMID:16526491
Bottom friction optimization for a better barotropic tide modelling
NASA Astrophysics Data System (ADS)
Boutet, Martial; Lathuilière, Cyril; Son Hoang, Hong; Baraille, Rémy
2015-04-01
At a regional scale, barotropic tides are the dominant source of variability in currents and water heights. A precise representation of these processes is essential because of their great impact on human activities (submersion risks, marine renewable energies, ...). The identified sources of error for tide modelling at a regional scale are the following: bathymetry, boundary forcing, and dissipation due to bottom friction. Nevertheless, bathymetric databases are nowadays known with good accuracy, especially over shelves, and global tide model performance is better than ever. The most promising avenue for improvement is thus the representation of bottom friction. The method used to estimate bottom friction is simultaneous perturbation stochastic approximation (SPSA), which approximates the gradient from a fixed number of cost function measurements, regardless of the dimension of the vector to be estimated; each cost function measurement is obtained by randomly perturbing every component of the parameter vector. An important feature of SPSA is its relative ease of implementation: in particular, the method does not require the development of tangent-linear and adjoint versions of the circulation model. Experiments are carried out to estimate bottom friction with the HYbrid Coordinate Ocean Model (HYCOM) in barotropic mode (one isopycnal layer). The study area is the Northeastern Atlantic margin, which is characterized by strong currents and intense dissipation. Bottom friction is parameterized with a quadratic term, and the friction coefficient is computed from the water height and the bottom roughness; the latter parameter is the one to be estimated. The assimilated data are the available tide gauge observations. First, the bottom roughness is estimated taking into account bottom sediment types and bathymetric ranges. Then, it is estimated with geographical degrees of freedom. Finally, the impact of the estimation of a mixed quadratic/linear friction
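The SPSA iteration described above needs only two cost evaluations per step, whatever the number of parameters, by perturbing every component simultaneously with a random sign vector. A generic sketch on a toy quadratic (standard textbook gain sequences, not the study's HYCOM setup):

```python
import numpy as np

def spsa_minimize(f, theta0, a=0.1, c=0.1, alpha=0.602, gamma=0.101,
                  iters=2000, seed=0):
    """Simultaneous perturbation stochastic approximation: the gradient is
    estimated from two cost evaluations per iteration, so no adjoint or
    tangent-linear model is needed."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    for k in range(1, iters + 1):
        ak = a / k**alpha                 # decaying step-size gain
        ck = c / k**gamma                 # decaying perturbation size
        delta = rng.choice([-1.0, 1.0], size=theta.shape)  # Rademacher signs
        g_hat = (f(theta + ck * delta) - f(theta - ck * delta)) / (2 * ck * delta)
        theta -= ak * g_hat
    return theta

theta = spsa_minimize(lambda t: ((t - np.array([1.0, -2.0]))**2).sum(),
                      theta0=[0.0, 0.0])
```

The two-evaluation property is what makes the method attractive when each cost evaluation is a full circulation-model run compared against tide gauge data.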
Development and optimization of a nonlinear multiparameter model for the human operator
NASA Technical Reports Server (NTRS)
Johannsen, G.
1972-01-01
A systematic method is proposed for the development, optimization, and comparison of controller models for the human operator. It is suitable for any designed model, even multiparameter systems. A random search technique is chosen for the parameter optimization. The quality of the model is evaluated using the criterion function (a comparison between the input and output signals of the human operator and those of the model) and the most important characteristic quantities and functions of statistical signal theory. A nonlinear multiparameter model for the human operator is designed which accounts for the complex input information rate per unit time in a single display. The nonlinear features of the model are implemented by a modified threshold element and a decision algorithm. Different display configurations, as well as various transfer functions of the controlled element, are accounted for by different optimized parameter combinations.
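The random search technique mentioned above is the simplest derivative-free optimizer: sample parameter vectors within bounds and keep the best. A baseline sketch on an invented two-parameter criterion function, not the study's actual procedure:

```python
import random

def random_search(f, bounds, iters=20000, seed=0):
    """Pure random search: uniform sampling within bounds, keep the best."""
    rng = random.Random(seed)
    best, best_cost = None, float("inf")
    for _ in range(iters):
        x = [rng.uniform(lo, hi) for lo, hi in bounds]
        c = f(x)
        if c < best_cost:
            best, best_cost = x, c
    return best, best_cost

params, cost = random_search(lambda p: (p[0] - 0.2)**2 + (p[1] - 0.8)**2,
                             [(0.0, 1.0), (0.0, 1.0)])
```

Random search makes no smoothness assumptions about the criterion function, which suits models containing threshold elements and decision logic where gradients are unavailable or discontinuous.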
Fuzzy linear model for production optimization of mining systems with multiple entities
NASA Astrophysics Data System (ADS)
Vujic, Slobodan; Benovic, Tomo; Miljanovic, Igor; Hudej, Marjan; Milutinovic, Aleksandar; Pavlovic, Petar
2011-12-01
Planning and production optimization within mining systems comprising multiple mines or several work sites (entities) by using fuzzy linear programming (LP) is studied. LP is among the most commonly used operations research methods in mining engineering. After an introductory review of the properties and limitations of applying LP, short reviews of the general settings of deterministic and fuzzy LP models are presented. For the purpose of comparative analysis, the application of both LP models is presented using the example of the Bauxite Basin Niksic with five mines. The assessment shows that LP is an efficient mathematical modeling tool for production planning and for solving many other single-criteria optimization problems in mining engineering. After comparing the advantages and deficiencies of the deterministic and fuzzy LP models, the conclusion presents the benefits of the fuzzy LP model while also noting that seeking the optimal production plan requires an overall analysis encompassing both LP modeling approaches.
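A crisp (deterministic) LP of the kind reviewed above can be stated and solved in a few lines; the two-mine numbers below are invented for illustration and have nothing to do with the Bauxite Basin Niksic data. The fuzzy variant would additionally soften the capacity constraints through membership functions rather than treating them as hard bounds:

```python
from scipy.optimize import linprog

# Hypothetical two-mine production plan: maximize profit 5*x1 + 4*x2
# subject to a shared processing capacity and per-mine output limits.
res = linprog(
    c=[-5.0, -4.0],                      # linprog minimizes, so negate profit
    A_ub=[[1.0, 1.0]], b_ub=[100.0],     # joint processing capacity
    bounds=[(0.0, 70.0), (0.0, 60.0)],   # per-mine capacities
)
plan = res.x          # optimal tonnage from each mine: (70, 30)
profit = -res.fun     # 470
```

The solver saturates the more profitable mine first and fills the remaining joint capacity with the second, which is the kind of multi-entity allocation the paper's models formalize.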
Kwok, T; Smith, K A
2000-09-01
The aim of this paper is to study both the theoretical and experimental properties of chaotic neural network (CNN) models for solving combinatorial optimization problems. Previously we have proposed a unifying framework which encompasses the three main model types, namely, Chen and Aihara's chaotic simulated annealing (CSA) with decaying self-coupling, Wang and Smith's CSA with decaying timestep, and the Hopfield network with chaotic noise. Each of these models can be represented as a special case under the framework for certain conditions. This paper combines the framework with experimental results to provide new insights into the effect of the chaotic neurodynamics of each model. By solving the N-queen problem of various sizes with computer simulations, the CNN models are compared in different parameter spaces, with optimization performance measured in terms of feasibility, efficiency, robustness and scalability. Furthermore, characteristic chaotic neurodynamics crucial to effective optimization are identified, together with a guide to choosing the corresponding model parameters. PMID:11152205
CVXPY: A Python-Embedded Modeling Language for Convex Optimization
Diamond, Steven; Boyd, Stephen
2016-01-01
CVXPY is a domain-specific language for convex optimization embedded in Python. It allows the user to express convex optimization problems in a natural syntax that follows the math, rather than in the restrictive standard form required by solvers. CVXPY makes it easy to combine convex optimization with high-level features of Python such as parallelism and object-oriented design. CVXPY is available at http://www.cvxpy.org/ under the GPL license, along with documentation and examples. PMID:27375369
Engineering models for merging wakes in wind farm optimization applications
NASA Astrophysics Data System (ADS)
Machefaux, E.; Larsen, G. C.; Murcia Leon, J. P.
2015-06-01
The present paper deals with the validation of four different engineering wake superposition approaches against detailed CFD simulations, covering different turbine spacings, ambient turbulence intensities, and mean wind speeds. The first engineering model is a simple linear superposition of wake deficits, as applied in e.g. Fuga. The second is the square-root-of-sum-of-squares approach, which is applied in the widely used PARK program. The third, presently used with the Dynamic Wake Meandering (DWM) model, assumes that the wake-affected downstream flow field is determined by a superposition of the ambient flow field and the dominating wake among the contributions from all upstream turbines, at any spatial position and at any time. The last, newly developed by G. C. Larsen, is based on a parabolic type of approach which combines wake deficits successively. The study indicates that wake interaction depends strongly on the relative wake deficit magnitude, i.e. the deficit magnitude normalized with respect to the ambient mean wind speed, and that the dominant-wake assumption within the DWM framework is the most accurate.
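The first two superposition rules compared above are simple enough to state directly; the sketch below combines normalized single-wake deficits at a point under each rule, using invented deficit values:

```python
import numpy as np

def combined_deficit(deficits, method="rss"):
    """Combine normalized single-wake velocity deficits at one point.

    'linear' sums the deficits (as in e.g. Fuga); 'rss' takes the square
    root of the sum of squares (as in the PARK program).
    """
    d = np.asarray(deficits, dtype=float)
    if method == "linear":
        return float(d.sum())
    if method == "rss":
        return float(np.sqrt((d**2).sum()))
    raise ValueError(method)

# Two overlapping wakes, each a 20% deficit of the free-stream speed
u_inf = 10.0
u_linear = u_inf * (1.0 - combined_deficit([0.2, 0.2], "linear"))
u_rss = u_inf * (1.0 - combined_deficit([0.2, 0.2], "rss"))
```

Linear superposition always predicts a slower merged wake than the RSS rule (here 6.0 m/s versus about 7.17 m/s), which is why the choice of rule matters for farm-level power estimates.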
Ocampo, Cesar
2004-05-01
The modeling, design, and optimization of finite burn maneuvers for a generalized trajectory design and optimization system are presented. A generalized trajectory design and optimization system uses a single unified framework to facilitate the modeling and optimization of complex spacecraft trajectories that may operate in complex gravitational force fields, use multiple propulsion systems, and involve multiple spacecraft. The modeling and optimization issues associated with controlled engine burn maneuvers of finite thrust magnitude and duration are presented in the context of designing and optimizing a wide class of finite thrust trajectories. Optimal control theory is used to examine the optimization of these maneuvers in arbitrary force fields that may depend on position, velocity, mass, and time. The associated numerical methods used to obtain these solutions involve either the solution of a system of nonlinear equations, an explicit parameter optimization method, or a hybrid parameter optimization that combines certain aspects of both. The theoretical and numerical methods presented here have been implemented in copernicus, a prototype trajectory design and optimization system under development at the University of Texas at Austin. PMID:15220149
Web-Based Model Visualization Tools to Aid in Model Optimization and Uncertainty Analysis
NASA Astrophysics Data System (ADS)
Alder, J.; van Griensven, A.; Meixner, T.
2003-12-01
Individuals applying hydrologic models need quick, easy-to-use visualization tools that permit them to assess and understand model performance. We present here the Interactive Hydrologic Modeling (IHM) visualization toolbox. The IHM utilizes high-speed Internet access, the portability of the web, and the increasing power of modern computers to provide an online toolbox for quick and easy visualization of model results. This visualization interface allows for the interpretation and analysis of Monte-Carlo and batch model simulation results. Often a given project will generate several thousand or even hundreds of thousands of simulations, a number that creates a challenge for post-simulation analysis. IHM addresses this problem by loading all of the data into a database with a web interface that can dynamically generate graphs for the user according to their needs. IHM currently supports: a global sample statistics table (e.g. sum of squares error, sum of absolute differences, etc.), top-ten simulation tables and graphs, graphs of an individual simulation using time-step data, objective-based dotty plots, threshold-based parameter cumulative density function graphs (as used in the regional sensitivity analysis of Spear and Hornberger), and 2D error surface graphs of the parameter space. IHM scales from the simplest bucket model to the largest set of Monte-Carlo model simulations with a multi-dimensional parameter and model output space. By using a web interface, IHM offers users complete flexibility: they can be anywhere in the world, using any operating system. IHM can be a time- and money-saving alternative to producing graphs or conducting analyses that may not be informative, or to purchasing expensive proprietary software. IHM is a simple, free method of interpreting and analyzing batch model results, and is suitable for novice to expert hydrologic modelers.
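The batch statistics listed above can be computed offline in a few lines. A sketch with a synthetic stand-in model in place of a real hydrologic simulator (the data and the trivial one-parameter "model" are assumptions for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)
obs = np.sin(np.linspace(0.0, 6.0, 50))          # "observed" series (synthetic)
params = rng.uniform(0.5, 1.5, size=(1000, 1))   # 1000 Monte-Carlo parameter draws
sims = params * obs                              # stand-in batch of model runs

sse = np.sum((sims - obs) ** 2, axis=1)          # global statistic per simulation
top10 = np.argsort(sse)[:10]                     # the "top ten simulations" table

# threshold split for a regional-sensitivity-analysis style parameter CDF:
behavioural = params[sse <= np.quantile(sse, 0.1), 0]
print(top10.size, behavioural.size)
```

From `params` and `sse` one can then draw the dotty plot (parameter value vs. objective) and compare the empirical CDFs of `behavioural` vs. the remaining draws, as in Spear and Hornberger's method.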
Autonomous Modelling of X-ray Spectra Using Robust Global Optimization Methods
NASA Astrophysics Data System (ADS)
Rogers, Adam; Safi-Harb, Samar; Fiege, Jason
2015-08-01
The standard approach to model fitting in X-ray astronomy is by means of local optimization methods. However, these local optimizers suffer from a number of problems, such as a tendency for the fit parameters to become trapped in local minima, and can require an involved process of detailed user intervention to guide them through the optimization process. In this work we introduce a general GUI-driven global optimization method for fitting models to X-ray data, written in MATLAB, which searches for optimal models with minimal user interaction. We directly interface with the commonly used XSPEC libraries to access the full complement of pre-existing spectral models that describe a wide range of physics appropriate for modelling astrophysical sources, including supernova remnants and compact objects. Our algorithm is powered by the Ferret genetic algorithm and Locust particle swarm optimizer from the Qubist Global Optimization Toolbox, which are robust at finding families of solutions and identifying degeneracies. This technique will be particularly instrumental for multi-parameter models and high-fidelity data. In this presentation, we provide details of the code and use our techniques to analyze X-ray data obtained from a variety of astrophysical sources.
Continuously Optimized Reliable Energy (CORE) Microgrid: Models & Tools (Fact Sheet)
Not Available
2013-07-01
This brochure describes Continuously Optimized Reliable Energy (CORE), a trademarked process NREL employs to produce conceptual microgrid designs. This systems-based process enables designs to be optimized for economic value, energy surety, and sustainability. Capabilities NREL offers in support of microgrid design are explained.
Using a Model to Compute the Optimal Schedule of Practice
ERIC Educational Resources Information Center
Pavlik, Philip I.; Anderson, John R.
2008-01-01
By balancing the spacing effect against the effects of recency and frequency, this paper explains how practice may be scheduled to maximize learning and retention. In an experiment, an optimized condition using an algorithm determined with this method was compared with other conditions. The optimized condition showed significant benefits with…
Review: Simulation-optimization models for the management and monitoring of coastal aquifers
NASA Astrophysics Data System (ADS)
Sreekanth, J.; Datta, Bithin
2015-09-01
The literature on the application of simulation-optimization approaches for management and monitoring of coastal aquifers is reviewed. Both sharp- and dispersive-interface modeling approaches have been applied in conjunction with optimization algorithms in the past to develop management solutions for saltwater intrusion. Simulation-optimization models based on sharp-interface approximation are often based on the Ghyben-Herzberg relationship and provide an efficient framework for preliminary designs of saltwater-intrusion management schemes. Models based on dispersive-interface numerical models have wider applicability but are challenged by the computational burden involved when applied in the simulation-optimization framework. The use of surrogate models to substitute the physically based model during optimization has been found to be successful in many cases. Scalability is still a challenge for the surrogate modeling approach as the computational advantage accrued is traded-off with the training time required for the surrogate models as the problem size increases. Few studies have attempted to solve stochastic coastal-aquifer management problems considering model prediction uncertainty. Approaches that have been reported in the wider groundwater management literature need to be extended and adapted to address the challenges posed by the stochastic coastal-aquifer management problem. Similarly, while abundant literature is available on simulation-optimization methods for the optimal design of groundwater monitoring networks, applications targeting coastal aquifer systems are rare. Methods to optimize compliance monitoring strategies for coastal aquifers need to be developed considering the importance of monitoring feedback information in improving the management strategies.
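The surrogate idea highlighted above, substituting a cheap approximation for the physically based model during optimization, can be sketched with a one-dimensional toy "simulator" and a Gaussian radial-basis-function surrogate. Everything here (the function, the design size, the shape parameter) is an illustrative assumption, not taken from the reviewed studies:

```python
import numpy as np

def expensive_model(x):          # stand-in for a dispersive-interface simulation
    return (x - 0.3) ** 2 + 0.1 * np.sin(8.0 * x)

X = np.linspace(0.0, 1.0, 12)    # small design: only 12 "expensive" runs
y = expensive_model(X)

eps = 5.0                        # Gaussian RBF shape parameter (assumed)
Phi = np.exp(-(eps * (X[:, None] - X[None, :])) ** 2)
w = np.linalg.solve(Phi + 1e-8 * np.eye(X.size), y)  # interpolation weights

def surrogate(x):                # cheap approximation used during the search
    return np.exp(-(eps * (x[:, None] - X[None, :])) ** 2) @ w

grid = np.linspace(0.0, 1.0, 2001)       # dense search is nearly free now
x_best = grid[np.argmin(surrogate(grid))]
print(round(float(x_best), 3))
```

The trade-off noted in the review shows up even here: the surrogate is only trustworthy near its training points, so as the problem dimension grows the training design (and its cost) must grow with it.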
Simulation-optimization framework for multi-season hybrid stochastic models
NASA Astrophysics Data System (ADS)
Srivastav, R. K.; Srinivasan, K.; Sudheer, K. P.
2011-07-01
A novel simulation-optimization framework is proposed that enables the automation of the hybrid stochastic modeling process for synthetic generation of multi-season streamflows. This framework aims to minimize the drudgery, judgment, and subjectivity involved in the selection of the most appropriate hybrid stochastic model. It consists of a multi-objective optimization model as the driver and the hybrid multi-season stochastic streamflow generation model, hybrid matched-block bootstrap (HMABB), as the simulation engine. For the estimation of the hybrid model parameters, the proposed framework employs objective functions that aim to minimize the overall errors in the preservation of storage capacities at various demand levels, unlike the traditional approaches that are simulation based. Moreover, this framework yields a number of competent hybrid stochastic models in a single run of the simulation-optimization framework. The efficacy of the proposed simulation-optimization framework is brought out through application to two monthly streamflow data sets from the USA of varying sample sizes that exhibit multi-modality and a complex dependence structure. The results show that the hybrid models obtained from the proposed framework are able to preserve the statistical characteristics as well as the storage characteristics better than the simulation-based HMABB model, while minimizing the manual effort and the subjectivity involved in the modeling process. The proposed framework can be easily extended to model multi-site multi-season streamflow data.
NASA Astrophysics Data System (ADS)
Shoemaker, Christine; Espinet, Antoine; Pang, Min
2015-04-01
Models of complex environmental systems can be computationally expensive because they must describe the dynamic interactions of many components over a sizeable time period. Diagnostics of these systems can include forward simulations of calibrated models under uncertainty and analysis of alternatives for systems management. This discussion will focus on applications of new surrogate optimization and uncertainty analysis methods to environmental models that can enhance our ability to extract information and understanding. For complex models, optimization and especially uncertainty analysis can require a large number of model simulations, which is not feasible for computationally expensive models. Surrogate response surfaces can be used in global optimization and uncertainty methods to obtain accurate answers with far fewer model evaluations, making the methods practical for computationally expensive models for which conventional methods are not feasible. In this paper we will discuss the application of the SOARS surrogate method for estimating Bayesian posterior density functions of model parameters for a TOUGH2 model of geologic carbon sequestration. We will also briefly discuss a new parallel surrogate global optimization algorithm, applied to two groundwater remediation sites, that was implemented on a supercomputer with up to 64 processors. The applications will illustrate the use of these methods to predict the impact of monitoring and management on subsurface contaminants.
Vrugt, Jasper A; Wohling, Thomas
2008-01-01
Most studies in vadose zone hydrology use a single conceptual model for predictive inference and analysis. Focusing on the outcome of a single model is prone to statistical bias and underestimation of uncertainty. In this study, we combine multi-objective optimization and Bayesian Model Averaging (BMA) to generate forecast ensembles of soil hydraulic models. To illustrate our method, we use observed tensiometric pressure head data at three different depths in a layered vadose zone of volcanic origin in New Zealand. A set of seven different soil hydraulic models is calibrated using a multi-objective formulation with three different objective functions that each measure the mismatch between observed and predicted soil water pressure head at one specific depth. The Pareto solution space corresponding to these three objectives is estimated with AMALGAM, and used to generate four different model ensembles. These ensembles are post-processed with BMA and used for predictive analysis and uncertainty estimation. Our most important conclusions for the vadose zone under consideration are: (1) the mean BMA forecast exhibits similar predictive capabilities as the best individual performing soil hydraulic model, (2) the size of the BMA uncertainty ranges increase with increasing depth and dryness in the soil profile, (3) the best performing ensemble corresponds to the compromise (or balanced) solution of the three-objective Pareto surface, and (4) the combined multi-objective optimization and BMA framework proposed in this paper is very useful to generate forecast ensembles of soil hydraulic models.
Optimization of global model composed of radial basis functions using the term-ranking approach
Cai, Peng; Tao, Chao; Liu, Xiao-Jun
2014-03-15
A term-ranking method is put forward to optimize the global model composed of radial basis functions to improve the predictability of the model. The effectiveness of the proposed method is examined by numerical simulation and experimental data. Numerical simulations indicate that this method can significantly lengthen the prediction time and decrease the Bayesian information criterion of the model. The application to real voice signal shows that the optimized global model can capture more predictable component in chaos-like voice data and simultaneously reduce the predictable component (periodic pitch) in the residual signal.
Demonstration of structural optimization applied to wind-tunnel model design
NASA Astrophysics Data System (ADS)
French, Mark; Kolonay, Raymond M.
1992-10-01
Results are presented which indicate that using structural optimization to design wind-tunnel models can result in a procedure that matches design stiffnesses well enough to be very useful in sizing the structures of aeroelastic models. The design procedure that is presented demonstrates that optimization can be useful in the design of aeroelastically scaled wind-tunnel models. The resulting structure effectively models an aeroelastically tailored composite wing with a simple aluminum beam structure, a structure that should be inexpensive to manufacture compared with a composite one.
Modeling of Euclidean braided fiber architectures to optimize composite properties
NASA Technical Reports Server (NTRS)
Armstrong-Carroll, E.; Pastore, C.; Ko, F. K.
1992-01-01
Three-dimensional braided fiber reinforcements are a very effective toughening mechanism for composite materials. The integral yarn path inherent to this fiber architecture allows for effective multidirectional dispersion of strain energy and negates delamination problems. In this paper a geometric model of Euclidean braid fiber architectures is presented. This information is used to determine the degree of geometric isotropy in the braids. This information, when combined with candidate material properties, can be used to quickly generate an estimate of the available load-carrying capacity of Euclidean braids at any arbitrary angle.
International Symposium on Technology Management: Modeling, Simulation, and Optimization
NASA Astrophysics Data System (ADS)
Li, Yiming
2007-12-01
This symposium provides a forum for scientists and researchers from academia and industry to exchange knowledge, ideas, and results in computational aspects of social and management science. It will cover the theory and practice of computational methods, models, and empirical analysis for decision making and forecasting in economics, finance, management, transportation, and related aspects of information and system engineering. Welcome to this interdisciplinary symposium at the International Conference of Computational Methods in Sciences and Engineering (ICCMSE 2007). We look forward to seeing you in Corfu, Greece!
Liu, Liqiang; Dai, Yuntao; Gao, Jinyu
2014-01-01
Ant colony optimization for continuous domains is a major research direction within ant colony optimization. In this paper, we propose a distribution model of ant colony foraging, derived from an analysis of the relationship between the position distribution and the food source in the process of ant colony foraging. We design a continuous-domain optimization algorithm based on the model and give the form of the solution for the algorithm, the distribution model of the pheromone, the update rules for ant colony positions, and the processing method for constraint conditions. The performance of the algorithm was evaluated on a set of unconstrained optimization test functions and a set of constrained optimization test functions, and the results were compared with those of other algorithms to verify the correctness and effectiveness of the proposed algorithm. PMID:24955402
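A minimal sketch of the continuous-domain idea in the spirit of ACO_R-style algorithms: a ranked solution archive plays the role of the pheromone model, and new ants sample Gaussian kernels centred on archive members. The parameter values and the sphere test function are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)
dim, k, n_ants, q, xi = 2, 10, 20, 0.3, 0.85   # assumed parameter values

def sphere(x):                                  # simple unconstrained test function
    return float(np.sum(x ** 2))

archive = rng.uniform(-5.0, 5.0, size=(k, dim))
fitness = np.array([sphere(x) for x in archive])

for _ in range(200):
    order = np.argsort(fitness)                 # rank archive (pheromone model)
    archive, fitness = archive[order], fitness[order]
    ranks = np.arange(k)
    w = np.exp(-ranks ** 2 / (2.0 * (q * k) ** 2))
    w /= w.sum()                                # rank-based kernel weights
    for _ in range(n_ants):
        j = rng.choice(k, p=w)                  # pick a guiding solution
        sigma = xi * np.mean(np.abs(archive - archive[j]), axis=0)
        x = archive[j] + sigma * rng.standard_normal(dim)
        fx = sphere(x)
        worst = int(np.argmax(fitness))
        if fx < fitness[worst]:                 # greedy archive update
            archive[worst], fitness[worst] = x, fx

print(float(fitness.min()))
```

The per-dimension spread `sigma` shrinks as the archive contracts, which is what lets a rank-weighted Gaussian archive behave like evaporating pheromone in the discrete algorithm.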
Optimal Vaccination of an Endemic Model with Variable Infectivity and Infinite Delay
NASA Astrophysics Data System (ADS)
Zaman, Gul; Saito, Yasuhisa; Khan, Madad
2013-11-01
In this work, we consider a nonlinear SEIR (susceptible, exposed, infectious, and removed) endemic model, which describes the dynamics of the interaction between susceptible and infected individuals in a population. The model represents the disease evolution through a system of nonlinear differential equations with variable infectivity, reflecting that the infectivity of an infected individual may not be constant over the time since infection. To control the spread of infection and to find a vaccination schedule for an endemic situation, we use optimal control strategies which reduce the susceptible, exposed, and infected individuals and increase the total number of recovered individuals. To do this, we introduce the optimal control problem with a suitable control function and an objective functional. We first show the existence of an optimal control for the control problem and then derive the optimality system. Finally, numerical simulations of the model are fitted to realistic measurements, which shows the effectiveness of the model.
Akbaş, Halil; Bilgen, Bilge; Turhan, Aykut Melih
2015-11-01
This study proposes an integrated prediction and optimization model by using multi-layer perceptron neural network and particle swarm optimization techniques. Three different objective functions are formulated. The first one is the maximization of methane percentage with single output. The second one is the maximization of biogas production with single output. The last one is the maximization of biogas quality and biogas production with two outputs. Methane percentage, carbon dioxide percentage, and other contents' percentage are used as the biogas quality criteria. Based on the formulated models and data from a wastewater treatment facility, optimal values of input variables and their corresponding maximum output values are found out for each model. It is expected that the application of the integrated prediction and optimization models increases the biogas production and biogas quality, and contributes to the quantity of electricity production at the wastewater treatment facility. PMID:26295443
An optimal spacecraft scheduling model for the NASA deep space network
NASA Technical Reports Server (NTRS)
Webb, W. A.
1985-01-01
A computer model is described which uses mixed-integer linear programming to provide optimal DSN spacecraft schedules given a mission set and specified scheduling requirements. A solution technique is proposed which uses Benders' method and a heuristic starting algorithm.
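The 0-1 assignment structure of such a schedule can be illustrated with a toy exhaustive search standing in for the mixed-integer solver. The mission names, priority weights, and two-slot setup are hypothetical:

```python
from itertools import product

passes = ["Voyager", "Pioneer", "Viking"]       # requested tracking passes
slots = [0, 1]                                   # available antenna time slots
weight = {"Voyager": 3, "Pioneer": 2, "Viking": 2}  # hypothetical priorities

best_value, best_plan = -1, None
# x[(p, s)] = 1 if pass p is assigned to slot s
for bits in product([0, 1], repeat=len(passes) * len(slots)):
    x = dict(zip(product(passes, slots), bits))
    if any(sum(x[p, s] for p in passes) > 1 for s in slots):
        continue                                 # at most one pass per slot
    if any(sum(x[p, s] for s in slots) > 1 for p in passes):
        continue                                 # each pass scheduled at most once
    value = sum(weight[p] * x[p, s] for p in passes for s in slots)
    if value > best_value:
        best_value, best_plan = value, x

print(best_value)  # 5: Voyager plus one of the two weight-2 passes
```

A real mission set makes the binary space far too large to enumerate, which is why the model resorts to mixed-integer linear programming with Benders' decomposition and a heuristic starting solution.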
Identification-free adaptive optimal control based on switching predictive models
NASA Astrophysics Data System (ADS)
Luo, Wenguang; Pan, Shenghui; Ma, Zhaomin; Lan, Hongli
2008-10-01
An identification-free adaptive optimal control based on switching predictive models is proposed for systems with large inertia, long time delays, and multiple models. Multiple predictive models are embedded in the identification-free adaptive predictive controller and are switched at the optimal switching instants, under control of the switching law, according to the system's operating conditions in real time. The switching law is designed based on the most important characteristic parameter of the system, and the optimal switching instants are computed with optimal theory for switched systems. The simulation results show that the proposed method is well suited to such systems, for example superheated steam temperature systems of electric power plants, and can provide excellent control performance, improve disturbance rejection and self-adaptability, and place lower demands on predictive model precision.
Optimizing light availability for forages in silvopastoral systems: Modeled results
Technology Transfer Automated Retrieval System (TEKTRAN)
Silvopastoral management optimizes the biophysical interactions between pasture species, trees, and grazing animals to increase the production efficiency and sustainability of the entire system. Synchronizing light availability for forage production with grazing animal production requirements requi...
NASA Astrophysics Data System (ADS)
Chen, Y.; Li, J.; Xu, H.
2016-01-01
Physically based distributed hydrological models (hereafter referred to as PBDHMs) divide the terrain of the whole catchment into a number of grid cells at fine resolution and assimilate different terrain data and precipitation to different cells. They are regarded as having the potential to improve catchment hydrological process simulation and prediction capability. In the early stage, physically based distributed hydrological models were assumed to derive model parameters from the terrain properties directly, so that there would be no need to calibrate model parameters. Unfortunately, the uncertainties associated with this model derivation are very high, which has limited their application in flood forecasting, so parameter optimization may also be necessary. There are two main purposes for this study: the first is to propose a parameter optimization method for physically based distributed hydrological models in catchment flood forecasting by using the particle swarm optimization (PSO) algorithm, to test its competence, and to improve its performance; the second is to explore the possibility of improving physically based distributed hydrological model capability in catchment flood forecasting by parameter optimization. In this paper, based on the scalar concept, a general framework for parameter optimization of PBDHMs for catchment flood forecasting is first proposed that could be used for all PBDHMs. Then, with the Liuxihe model as the study model, which is a physically based distributed hydrological model proposed for catchment flood forecasting, the improved PSO algorithm is developed for the parameter optimization of the Liuxihe model in catchment flood forecasting. The improvements include adoption of the linearly decreasing inertia weight strategy to change the inertia weight and the arccosine function strategy to adjust the acceleration coefficients. This method has been tested in two catchments in southern China with different sizes, and the results show
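The two PSO improvements named above can be sketched as follows. The linearly decreasing inertia weight is standard; the exact arccosine schedule for the acceleration coefficients is an assumption for illustration (not the Liuxihe implementation), and a simple quadratic stands in for the flood-forecasting error function:

```python
import numpy as np

rng = np.random.default_rng(2)
n, dim, iters = 30, 2, 300
w_max, w_min = 0.9, 0.4          # inertia weight bounds (typical values)
c_max, c_min = 2.5, 0.5          # acceleration coefficient bounds (assumed)

def objective(x):                # stand-in for model error vs. observed flood
    return np.sum((x - 1.0) ** 2, axis=-1)

pos = rng.uniform(-5.0, 5.0, (n, dim))
vel = np.zeros((n, dim))
pbest, pbest_f = pos.copy(), objective(pos)
g = pbest[np.argmin(pbest_f)].copy()

for t in range(iters):
    w = w_max - (w_max - w_min) * t / (iters - 1)       # linear decrease
    frac = np.arccos(-1.0 + 2.0 * t / (iters - 1)) / np.pi  # 1 -> 0, arccos-shaped
    c1 = c_min + (c_max - c_min) * frac                 # cognitive term: shrinks
    c2 = c_min + (c_max - c_min) * (1.0 - frac)         # social term: grows
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
    pos = pos + vel
    f = objective(pos)
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    g = pbest[np.argmin(pbest_f)].copy()

print(float(objective(g)))
```

The arccos-shaped `frac` changes slowly at the start and end and quickly in the middle, so the swarm stays exploratory longer than with a linear coefficient schedule before handing over to the social (convergence) term.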
NASA Astrophysics Data System (ADS)
Chen, Y.; Li, J.; Xu, H.
2015-10-01
Physically based distributed hydrological models discretize the terrain of the whole catchment into a number of grid cells at fine resolution, assimilate different terrain data and precipitation to different cells, and are regarded as having the potential to improve catchment hydrological process simulation and prediction capability. In the early stage, physically based distributed hydrological models were assumed to derive model parameters from the terrain properties directly, so that there would be no need to calibrate model parameters; unfortunately, the uncertainties associated with this parameter derivation are very high, which impacted their application in flood forecasting, so parameter optimization may also be necessary. There are two main purposes for this study: the first is to propose a parameter optimization method for physically based distributed hydrological models in catchment flood forecasting by using the PSO algorithm, to test its competence, and to improve its performance; the second is to explore the possibility of improving physically based distributed hydrological model capability in catchment flood forecasting by parameter optimization. In this paper, based on the scalar concept, a general framework for parameter optimization of the PBDHMs for catchment flood forecasting is first proposed that could be used for all PBDHMs. Then, with the Liuxihe model as the study model, which is a physically based distributed hydrological model proposed for catchment flood forecasting, the improved particle swarm optimization (PSO) algorithm is developed for the parameter optimization of the Liuxihe model in catchment flood forecasting; the improvements include adopting the linearly decreasing inertia weight strategy to change the inertia weight and the arccosine function strategy to adjust the acceleration coefficients. This method has been tested in two catchments in southern China with different sizes, and the results show that the improved PSO algorithm could be
He, Shi-wei; Song, Rui; Sun, Yang; Li, Hao-dong
2014-01-01
Railway freight center location is an important issue in railway freight transport programming. This paper focuses on the railway freight center location problem in an uncertain environment. Because the expected-value model ignores the negative influence of disadvantageous scenarios, a robust optimization model is proposed. The robust optimization model takes the expected cost and the deviation value across scenarios as its objective. A cloud adaptive clonal selection algorithm (C-ACSA) is presented; it combines an adaptive clonal selection algorithm with the Cloud Model, which can improve the convergence rate. The coding design and the procedure of the algorithm are described. Results of the example demonstrate that the model and algorithm are effective. Compared with the expected-value case, the number of disadvantageous scenarios in the robust model is reduced from 163 to 21, which shows that the result of the robust model is more reliable. PMID:25435867
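The robust objective described above, expected cost plus a deviation penalty over scenarios, can be illustrated numerically. The candidate sites, scenario costs, probabilities, and penalty weight below are made up for illustration:

```python
import numpy as np

cost = np.array([                 # rows: candidate freight centers, cols: scenarios
    [100.0, 105.0, 110.0],        # site A: stable across demand scenarios
    [ 60.0, 100.0, 150.0],        # site B: cheap on average, risky
])
prob = np.array([0.3, 0.4, 0.3])  # scenario probabilities
lam = 1.0                          # weight on the deviation (robustness) term

expected = cost @ prob
deviation = np.sqrt(((cost - expected[:, None]) ** 2) @ prob)
robust = expected + lam * deviation

print(np.argmin(expected), np.argmin(robust))  # expected favors B (1), robust favors A (0)
```

This is the essential point of the paper's model: the expected-value criterion alone picks the risky site, while the deviation term steers the choice toward the site that performs acceptably in every scenario.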
Optimization of large-scale heterogeneous system-of-systems models.
Parekh, Ojas; Watson, Jean-Paul; Phillips, Cynthia Ann; Siirola, John; Swiler, Laura Painton; Hough, Patricia Diane; Lee, Herbert K. H.; Hart, William Eugene; Gray, Genetha Anne; Woodruff, David L.
2012-01-01
Decision makers increasingly rely on large-scale computational models to simulate and analyze complex man-made systems. For example, computational models of national infrastructures are being used to inform government policy, assess economic and national security risks, evaluate infrastructure interdependencies, and plan for the growth and evolution of infrastructure capabilities. A major challenge for decision makers is the analysis of national-scale models that are composed of interacting systems: effective integration of system models is difficult, there are many parameters to analyze in these systems, and fundamental modeling uncertainties complicate analysis. This project is developing optimization methods to effectively represent and analyze large-scale heterogeneous system of systems (HSoS) models, which have emerged as a promising approach for describing such complex man-made systems. These optimization methods enable decision makers to predict future system behavior, manage system risk, assess tradeoffs between system criteria, and identify critical modeling uncertainties.
On point spread function modelling: towards optimal interpolation
NASA Astrophysics Data System (ADS)
Bergé, Joel; Price, Sedona; Amara, Adam; Rhodes, Jason
2012-01-01
Point spread function (PSF) modelling is a central part of any astronomy data analysis relying on measuring the shapes of objects. It is especially crucial for weak gravitational lensing, in order to beat down systematics and allow one to reach the full potential of weak lensing in measuring dark energy. A PSF modelling pipeline is made of two main steps: the first one is to assess its shape on stars, and the second is to interpolate it at any desired position (usually galaxies). We focus on the second part, and compare different interpolation schemes, including polynomial interpolation, radial basis functions, Delaunay triangulation and Kriging. For that purpose, we develop simulations of PSF fields, in which stars are built from a set of basis functions defined from a principal components analysis of a real ground-based image. We find that Kriging gives the most reliable interpolation, significantly better than the traditionally used polynomial interpolation. We also note that although a Kriging interpolation on individual images is enough to control systematics at the level necessary for current weak lensing surveys, more elaborate techniques will have to be developed to reach future ambitious surveys' requirements.
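The Kriging step can be sketched with a simple-kriging estimate of a scalar PSF property at a "galaxy" position from surrounding star measurements. The squared-exponential covariance model, its length scale, and the synthetic PSF field below are illustrative assumptions, not the paper's simulations:

```python
import numpy as np

rng = np.random.default_rng(3)
stars = rng.uniform(0.0, 1.0, (40, 2))              # star positions on the image

def psf_size(p):                                     # smooth synthetic PSF field
    return 1.0 + 0.2 * np.sin(3.0 * p[..., 0]) * np.cos(2.0 * p[..., 1])

z = psf_size(stars)                                  # PSF property measured on stars

def cov(a, b, ell=0.3):                              # squared-exponential covariance
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2.0 * ell ** 2))

C = cov(stars, stars) + 1e-6 * np.eye(stars.shape[0])  # small nugget for stability
galaxy = np.array([[0.37, 0.62]])                    # position needing the PSF
w = np.linalg.solve(C, cov(stars, galaxy))           # simple-kriging weights
estimate = float(w[:, 0] @ (z - z.mean()) + z.mean())
print(round(estimate, 3), round(float(psf_size(galaxy[0])), 3))
```

Unlike a global polynomial fit, the kriging weights adapt to the local star density, which is one reason it can outperform polynomial interpolation on realistic PSF fields.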
Optimization of Experimental Design for Estimating Groundwater Pumping Using Model Reduction
NASA Astrophysics Data System (ADS)
Ushijima, T.; Cheng, W.; Yeh, W. W.
2012-12-01
An optimal experimental design algorithm is developed to choose locations for a network of observation wells for estimating unknown groundwater pumping rates in a confined aquifer. The design problem can be expressed as an optimization problem which employs a maximal information criterion to choose among competing designs subject to the specified design constraints. Because of the combinatorial search required in this optimization problem, given a realistic, large-scale groundwater model, the dimensionality of the optimal design problem becomes very large, and the problem can be difficult if not impossible to solve using mathematical programming techniques such as integer programming or the simplex method with relaxation. Global search techniques, such as genetic algorithms (GAs), can be used to solve this type of combinatorial optimization problem; however, because a GA requires an inordinately large number of calls to the groundwater model, this approach may still be infeasible for finding the optimal design in a realistic groundwater model. Proper orthogonal decomposition (POD) is therefore applied to the groundwater model to reduce the model space and thereby reduce the computational burden of solving the optimization problem. Results for a one-dimensional test case show identical results among GA, integer programming, and an exhaustive search, demonstrating that the GA is a valid method for a global optimum search and has potential for solving large-scale optimal design problems. Additionally, other results show that the algorithm using GA with POD model reduction is several orders of magnitude faster than an algorithm that employs GA without POD model reduction, in terms of the time required to find the optimal solution. Application of the proposed methodology is being made to a large-scale, real-world groundwater problem.
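The POD model-reduction step can be sketched with a snapshot SVD: collect model states, extract the dominant modes, and work in the low-dimensional mode coordinates. The snapshot field below is a synthetic stand-in for groundwater-model states, not the paper's aquifer model:

```python
import numpy as np

rng = np.random.default_rng(4)
x = np.linspace(0.0, 1.0, 200)
snapshots = np.column_stack([                  # 200 state values x 50 snapshots
    a * np.sin(np.pi * x) + b * np.sin(2.0 * np.pi * x)
    + 0.01 * rng.standard_normal(200)          # small "model noise"
    for a, b in rng.uniform(-1.0, 1.0, (50, 2))
])

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s ** 2) / np.sum(s ** 2)
r = int(np.searchsorted(energy, 0.99) + 1)     # modes capturing 99% of the energy
basis = U[:, :r]                               # reduced POD basis

reduced = basis.T @ snapshots                  # r-dimensional coordinates
recon_err = np.linalg.norm(basis @ reduced - snapshots) / np.linalg.norm(snapshots)
print(r, round(float(recon_err), 4))
```

Here two underlying modes generate the data, so the 200-dimensional state compresses to r = 2 coordinates with negligible reconstruction error; each GA fitness evaluation then runs against the reduced system instead of the full model.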
Effects of noise variance model on optimal feedback design and actuator placement
NASA Technical Reports Server (NTRS)
Ruan, Mifang; Choudhury, Ajit K.
1994-01-01
In optimal placement of actuators for stochastic systems, it is commonly assumed that the actuator noise variances are not related to the feedback matrix and the actuator locations. In this paper, we will discuss the limitation of that assumption and develop a more practical noise variance model. Various properties associated with optimal actuator placement under the assumption of this noise variance model are discovered through the analytical study of a second order system.
Šiljić Tomić, Aleksandra N; Antanasijević, Davor Z; Ristić, Mirjana Đ; Perić-Grujić, Aleksandra A; Pocajt, Viktor V
2016-05-01
This paper describes the application of artificial neural network models for the prediction of biological oxygen demand (BOD) levels in the Danube River. Eighteen regularly monitored water quality parameters at 17 stations on the river stretch passing through Serbia were used as input variables. The optimization of the model was performed in three consecutive steps: firstly, the spatial influence of a monitoring station was examined; secondly, the monitoring period necessary to reach satisfactory performance was determined; and lastly, correlation analysis was applied to evaluate the relationship among water quality parameters. Root-mean-square error (RMSE) was used to evaluate model performance in the first two steps, whereas in the last step, multiple statistical indicators of performance were utilized. As a result, two optimized models were developed, a general regression neural network model (labeled GRNN-1) that covers the monitoring stations from the Danube inflow to the city of Novi Sad and a GRNN model (labeled GRNN-2) that covers the stations from the city of Novi Sad to the border with Romania. Both models demonstrated good agreement between the predicted and actually observed BOD values. PMID:27094057
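A general regression neural network of the kind used here is, in essence, one-pass Gaussian kernel regression: the prediction is a distance-weighted average of the training targets. A minimal sketch with synthetic stand-ins for the water-quality inputs (not the Danube data; the smoothing parameter is an assumption):

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.uniform(0.0, 1.0, (100, 3))             # training water-quality vectors
y = 2.0 * X[:, 0] + X[:, 1] - X[:, 2]           # synthetic "BOD" target

def grnn_predict(x, X, y, sigma=0.15):
    d2 = np.sum((X - x) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))        # pattern-layer activations
    return float(w @ y / w.sum())               # summation / output layer

x_new = np.array([0.5, 0.5, 0.5])
pred = grnn_predict(x_new, X, y)
print(round(pred, 2))
```

There is no iterative training: the network memorizes the samples, and only the smoothing width `sigma` needs tuning, which is one reason GRNNs suit monitoring data sets like the one described above.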
Shape Optimization and Supremal Minimization Approaches in Landslides Modeling
Hassani, Riad; Ionescu, Ioan R.; Lachand-Robert, Thomas
2005-10-15
The steady-state unidirectional (anti-plane) flow of a Bingham fluid is considered. We take into account the inhomogeneous yield limit of the fluid, which is well adjusted to the description of landslides. The blocking property is analyzed and we introduce the safety factor, which is connected to two optimization problems stated in terms of velocities and stresses. Concerning the velocity analysis, the minimum problem in BV(Ω) is equivalent to a shape-optimization problem. The optimal set is the part of the land which slides whenever the loading parameter becomes greater than the safety factor. This is proved in the one-dimensional case and conjectured for the two-dimensional flow. For the stress-optimization problem we give a stream function formulation in order to deduce a minimum problem in W^{1,∞}(Ω), and we prove the existence of a minimizer. The L^p(Ω) approximation technique is used to get a sequence of minimum problems for smooth functionals. We propose two numerical approaches following the two analyses presented before. First, we describe a numerical method to compute the safety factor through equivalence with the shape-optimization problem. Then the finite-element approach and a Newton method are used to obtain a numerical scheme for the stress formulation. Some numerical results are given in order to compare the two methods. The shape-optimization method is sharp in detecting the sliding zones but the convergence is very sensitive to the choice of the parameters. The stress-optimization method is more robust and gives precise safety factors, but the results cannot be easily processed to obtain the sliding zone.
Nie, Yongfeng; Li, Tianwei; Yan, Gang; Wang, Yeyao; Ma, Xiaofan
2004-02-01
Based on the basic characteristics of municipal solid waste (MSW) from regional small cities in China, some optimal management principles have been put forward: regional optimization, long-term optimization, and integrated treatment/disposal optimization. According to these principles, an optimal MSW management model for regional small cities is developed and provides a useful method to manage MSW from regional small cities. A case study application of the optimal model is described and shows that the optimal management scenarios in the controlling region can be gained, adequately validating and accounting for the advantages of the optimal model. PMID:14977320
Optimization Control of the Color-Coating Production Process for Model Uncertainty.
He, Dakuo; Wang, Zhengsong; Yang, Le; Mao, Zhizhong
2016-01-01
Optimized control of the color-coating production process (CCPP) aims at reducing production costs and improving economic efficiency while meeting quality requirements. However, because optimization control of the CCPP is hampered by model uncertainty, a strategy that considers model uncertainty is proposed. Previous work has introduced a mechanistic model of CCPP based on process analysis to simulate the actual production process and generate process data. The partial least squares method is then applied to develop predictive models of film thickness and economic efficiency. To manage the model uncertainty, the robust optimization approach is introduced to improve the feasibility of the optimized solution. Iterative learning control is then utilized to further refine the model uncertainty. The constrained film thickness is transformed into one of the tracked targets to overcome the drawback that traditional iterative learning control cannot address constraints. The goal setting of economic efficiency is updated continuously according to the film thickness setting until this reaches its desired value. Finally, fuzzy parameter adjustment is adopted to ensure that the economic efficiency and film thickness converge rapidly to their optimized values under the constraint conditions. The effectiveness of the proposed optimization control strategy is validated by simulation results. PMID:27247563
Optimization Control of the Color-Coating Production Process for Model Uncertainty
He, Dakuo; Wang, Zhengsong; Yang, Le; Mao, Zhizhong
2016-01-01
Optimized control of the color-coating production process (CCPP) aims at reducing production costs and improving economic efficiency while meeting quality requirements. However, because optimization control of the CCPP is hampered by model uncertainty, a strategy that considers model uncertainty is proposed. Previous work has introduced a mechanistic model of CCPP based on process analysis to simulate the actual production process and generate process data. The partial least squares method is then applied to develop predictive models of film thickness and economic efficiency. To manage the model uncertainty, the robust optimization approach is introduced to improve the feasibility of the optimized solution. Iterative learning control is then utilized to further refine the model uncertainty. The constrained film thickness is transformed into one of the tracked targets to overcome the drawback that traditional iterative learning control cannot address constraints. The goal setting of economic efficiency is updated continuously according to the film thickness setting until this reaches its desired value. Finally, fuzzy parameter adjustment is adopted to ensure that the economic efficiency and film thickness converge rapidly to their optimized values under the constraint conditions. The effectiveness of the proposed optimization control strategy is validated by simulation results. PMID:27247563
Locating monitoring wells in groundwater systems using embedded optimization and simulation models.
Bashi-Azghadi, Seyyed Nasser; Kerachian, Reza
2010-04-15
In this paper, a new methodology is proposed for optimally locating monitoring wells in groundwater systems in order to identify an unknown pollution source using monitoring data. The methodology is comprised of two different single and multi-objective optimization models, a Monte Carlo analysis, MODFLOW, MT3D groundwater quantity and quality simulation models and a Probabilistic Support Vector Machine (PSVM). The single-objective optimization model, which uses the results of the Monte Carlo analysis and maximizes the reliability of contamination detection, provides the initial location of monitoring wells. The objective functions of the multi-objective optimization model are minimizing the monitoring cost, i.e. the number of monitoring wells, maximizing the reliability of contamination detection and maximizing the probability of detecting an unknown pollution source. The PSVMs are calibrated and verified using the results of the single-objective optimization model and the Monte Carlo analysis. Then, the PSVMs are linked with the multi-objective optimization model, which maximizes both the reliability of contamination detection and probability of detecting an unknown pollution source. To evaluate the efficiency and applicability of the proposed methodology, it is applied to Tehran Refinery in Iran. PMID:20189633
Duffy, R.; Shayesteh, M.
2011-01-07
In this review paper the challenges that face doping optimization in 3-dimensional (3D) thin-body silicon devices will be discussed, within the context of material science studies, metrology methodologies, process modeling insight, ultimately leading to optimized device performance. The focus will be on ion implantation at the method to introduce the dopants to the target material.
Pump-and-treat optimization using analytic element method flow models
NASA Astrophysics Data System (ADS)
Matott, L. Shawn; Rabideau, Alan J.; Craig, James R.
2006-05-01
Plume containment using pump-and-treat (PAT) technology continues to be a popular remediation technique for sites with extensive groundwater contamination. As such, optimization of PAT systems, where cost is minimized subject to various remediation constraints, is the focus of an important and growing body of research. While previous pump-and-treat optimization (PATO) studies have used discretized (finite element or finite difference) flow models, the present study examines the use of analytic element method (AEM) flow models. In a series of numerical experiments, two PATO problems adapted from the literature are optimized using a multi-algorithmic optimization software package coupled with an AEM flow model. The experiments apply several different optimization algorithms and explore the use of various pump-and-treat cost and constraint formulations. The results demonstrate that AEM models can be used to optimize the number, locations and pumping rates of wells in a pump-and-treat containment system. Furthermore, the results illustrate that a total outflux constraint placed along the plume boundary can be used to enforce plume containment. Such constraints are shown to be efficient and reliable alternatives to conventional particle tracking and gradient control techniques. Finally, the particle swarm optimization (PSO) technique is identified as an effective algorithm for solving pump-and-treat optimization problems. A parallel version of the PSO algorithm is shown to have linear speedup, suggesting that the algorithm is suitable for application to problems that are computationally demanding and involve large numbers of wells.
What's in a Grammar? Modeling Dominance and Optimization in Contact
ERIC Educational Resources Information Center
Sharma, Devyani
2013-01-01
Muysken's article is a timely call for us to seek deeper regularities in the bewildering diversity of language contact outcomes. His model provocatively suggests that most such outcomes can be subsumed under four speaker optimization strategies. I consider two aspects of the proposal here: the formalization in Optimality Theory (OT) and the…
A Markov decision model for determining optimal outpatient scheduling.
Patrick, Jonathan
2012-06-01
Managing an efficient outpatient clinic can often be complicated by significant no-show rates and escalating appointment lead times. One method that has been proposed for avoiding the wasted capacity due to no-shows is called open or advanced access. The essence of open access is "do today's demand today". We develop a Markov Decision Process (MDP) model that demonstrates that a short booking window does significantly better than open access. We analyze a number of scenarios that explore the trade-off between patient-related measures (lead times) and physician- or system-related measures (revenue, overtime and idle time). Through simulation, we demonstrate that, over a wide variety of potential scenarios and clinics, the MDP policy does as well or better than open access in terms of minimizing costs (or maximizing profits) as well as providing more consistent throughput. PMID:22089944
MOGO: Model-Oriented Global Optimization of Petascale Applications
Malony, Allen D.; Shende, Sameer S.
2012-09-14
The MOGO project was initiated under in 2008 under the DOE Program Announcement for Software Development Tools for Improved Ease-of-Use on Petascale systems (LAB 08-19). The MOGO team consisted of Oak Ridge National Lab, Argonne National Lab, and the University of Oregon. The overall goal of MOGO was to attack petascale performance analysis by developing a general framework where empirical performance data could be efficiently and accurately compared with performance expectations at various levels of abstraction. This information could then be used to automatically identify and remediate performance problems. MOGO was be based on performance models derived from application knowledge, performance experiments, and symbolic analysis. MOGO was able to make reasonable impact on existing DOE applications and systems. New tools and techniques were developed, which, in turn, were used on important DOE applications on DOE LCF systems to show significant performance improvements.
An engagement model of cognitive optimization through adulthood.
Stine-Morrow, Elizabeth A L; Parisi, Jeanine M; Morrow, Daniel G; Greene, Jennifer; Park, Denise C
2007-06-01
The engagement hypothesis suggests that social and intellectual engagement may buffer age-related declines in intellectual functioning. At the same time, some have argued that social structures that afford opportunities for intellectual engagement throughout the life span have lagged behind the demographic shift toward an expanding older population. Against this backdrop, we developed the Senior Odyssey, an existing team-based program of creative problem solving. We tested the engagement hypothesis in a field experiment. Relative to controls, Senior Odyssey participants showed improved speed of processing, marginally improved divergent thinking, and higher levels of mindfulness and need for cognition after the program. This pilot translational project suggests that the Senior Odyssey program may serve as one effective model of engagement with good scaling-up potential. PMID:17565166
Mathematical models and lymphatic filariasis control: endpoints and optimal interventions.
Michael, Edwin; Malecela-Lazaro, Mwele N; Kabali, Conrad; Snow, Lucy C; Kazura, James W
2006-05-01
The current global initiative to eliminate lymphatic filariasis is a major renewed commitment to reduce or eliminate the burden of one of the major helminth infections from resource-poor communities of the world. Mathematical models of filariasis transmission can serve as an effective tool for guiding the scientific development and management of successful community-level intervention programmes by acting as analytical frameworks for integrating knowledge regarding parasite transmission dynamics with programmatic factors. However, the power of these tools for supporting control interventions will be realized fully only if researchers address the current uncertainties and gaps in data and knowledge of filarial population dynamics and the effectiveness of currently proposed filariasis intervention options. PMID:16564745
Model reduction using new optimal Routh approximant technique
NASA Technical Reports Server (NTRS)
Hwang, Chyi; Guo, Tong-Yi; Sheih, Leang-San
1992-01-01
An optimal Routh approximant of a single-input single-output dynamic system is a reduced-order transfer function of which the denominator is obtained by the Routh approximation method while the numerator is determined by minimizing a time-response integral-squared-error (ISE) criterion. In this paper, a new elegant approach is presented for obtaining the optimal Routh approximants for linear time-invariant continuous-time systems. The approach is based on the Routh canonical expansion, which is a finite-term orthogonal series of rational basis functions, and minimization of the ISE criterion. A procedure for combining the above approach with the bilinear transformation is also presented in order to obtain the optimal bilinear Routh approximants of linear time-invariant discrete-time systems. The proposed technique is simple in formulation and is amenable to practical implementation.
NASA Astrophysics Data System (ADS)
Reimer, Joscha; Piwonski, Jaroslaw; Slawig, Thomas
2016-04-01
The statistical significance of any model-data comparison strongly depends on the quality of the used data and the criterion used to measure the model-to-data misfit. The statistical properties (such as mean values, variances and covariances) of the data should be taken into account by choosing a criterion as, e.g., ordinary, weighted or generalized least squares. Moreover, the criterion can be restricted onto regions or model quantities which are of special interest. This choice influences the quality of the model output (also for not measured quantities) and the results of a parameter estimation or optimization process. We have estimated the parameters of a three-dimensional and time-dependent marine biogeochemical model describing the phosphorus cycle in the ocean. For this purpose, we have developed a statistical model for measurements of phosphate and dissolved organic phosphorus. This statistical model includes variances and correlations varying with time and location of the measurements. We compared the obtained estimations of model output and parameters for different criteria. Another question is if (and which) further measurements would increase the model's quality at all. Using experimental design criteria, the information content of measurements can be quantified. This may refer to the uncertainty in unknown model parameters as well as the uncertainty regarding which model is closer to reality. By (another) optimization, optimal measurement properties such as locations, time instants and quantities to be measured can be identified. We have optimized such properties for additional measurement for the parameter estimation of the marine biogeochemical model. For this purpose, we have quantified the uncertainty in the optimal model parameters and the model output itself regarding the uncertainty in the measurement data using the (Fisher) information matrix. Furthermore, we have calculated the uncertainty reduction by additional measurements depending on time
Phonetically optimized speaker modeling for robust speaker recognition.
Lee, Bong-Jin; Choi, Jeung-Yoon; Kang, Hong-Goo
2009-09-01
This paper proposes an efficient method to improve speaker recognition performance by dynamically controlling the ratio of phoneme class information. It utilizes the fact that each phoneme contains different amounts of speaker discriminative information that can be measured by mutual information. After classifying phonemes into five classes, the optimal ratio of each class in both training and testing processes is adjusted using a non-linear optimization technique, i.e., the Nelder-Mead method. Speaker identification results verify that the proposed method achieves 18% improvement in terms of error rate compared to a baseline system. PMID:19739699
Comparison of Response Surface and Kriging Models for Multidisciplinary Design Optimization
NASA Technical Reports Server (NTRS)
Simpson, Timothy W.; Korte, John J.; Mauery, Timothy M.; Mistree, Farrokh
1998-01-01
In this paper, we compare and contrast the use of second-order response surface models and kriging models for approximating non-random, deterministic computer analyses. After reviewing the response surface method for constructing polynomial approximations, kriging is presented as an alternative approximation method for the design and analysis of computer experiments. Both methods are applied to the multidisciplinary design of an aerospike nozzle which consists of a computational fluid dynamics model and a finite-element model. Error analysis of the response surface and kriging models is performed along with a graphical comparison of the approximations, and four optimization problems m formulated and solved using both sets of approximation models. The second-order response surface models and kriging models-using a constant underlying global model and a Gaussian correlation function-yield comparable results.
Optimization models and techniques for implementation and pricing of electricity markets
NASA Astrophysics Data System (ADS)
Madrigal Martinez, Marcelino
Vertically integrated electric power systems extensively use optimization models and solution techniques to guide their optimal operation and planning. The advent of electric power systems re-structuring has created needs for new optimization tools and the revision of the inherited ones from the vertical integration era into the market environment. This thesis presents further developments on the use of optimization models and techniques for implementation and pricing of primary electricity markets. New models, solution approaches, and price setting alternatives are proposed. Three different modeling groups are studied. The first modeling group considers simplified continuous and discrete models for power pool auctions driven by central-cost minimization. The direct solution of the dual problems, and the use of a Branch-and-Bound algorithm to solve the primal, allows to identify the effects of disequilibrium, and different price setting alternatives over the existence of multiple solutions. It is shown that particular pricing rules worsen the conflict of interest that arise when multiple solutions exist under disequilibrium. A price-setting alternative based on dual variables is shown to diminish such conflict. The second modeling group considers the unit commitment problem. An interior-point/cutting-plane method is proposed for the solution of the dual problem. The new method has better convergence characteristics and does not suffer from the parameter tuning drawback as previous methods The robustness characteristics of the interior-point/cutting-plane method, combined with a non-uniform price setting alternative, show that the conflict of interest is diminished when multiple near optimal solutions exist. The non-uniform price setting alternative is compared to a classic average pricing rule. The last modeling group concerns to a new type of linear network-constrained clearing system models for daily markets for power and spinning reserve. A new model and
Automated optimization of a reduced layer 5 pyramidal cell model based on experimental data.
Bahl, Armin; Stemmler, Martin B; Herz, Andreas V M; Roth, Arnd
2012-09-15
The construction of compartmental models of neurons involves tuning a set of parameters to make the model neuron behave as realistically as possible. While the parameter space of single-compartment models or other simple models can be exhaustively searched, the introduction of dendritic geometry causes the number of parameters to balloon. As parameter tuning is a daunting and time-consuming task when performed manually, reliable methods for automatically optimizing compartmental models are desperately needed, as only optimized models can capture the behavior of real neurons. Here we present a three-step strategy to automatically build reduced models of layer 5 pyramidal neurons that closely reproduce experimental data. First, we reduce the pattern of dendritic branches of a detailed model to a set of equivalent primary dendrites. Second, the ion channel densities are estimated using a multi-objective optimization strategy to fit the voltage trace recorded under two conditions - with and without the apical dendrite occluded by pinching. Finally, we tune dendritic calcium channel parameters to model the initiation of dendritic calcium spikes and the coupling between soma and dendrite. More generally, this new method can be applied to construct families of models of different neuron types, with applications ranging from the study of information processing in single neurons to realistic simulations of large-scale network dynamics. PMID:22524993
NASA Astrophysics Data System (ADS)
Venkataraman, Satchithanandam
The design of reusable launch vehicles is driven by the need for minimum weight structures. Preliminary design of reusable launch vehicles requires many optimizations to select among competing structural concepts. Accurate models and analysis methods are required for such structural optimizations. Model, analysis, and optimization complexities have to be compromised to meet constraints on design cycle time and computational resources. Stiffened panels used in reusable launch vehicle tanks exhibit complex buckling failure modes. Using detailed finite element models for buckling analysis is too expensive for optimization. Many approximate models and analysis methods have been developed for design of stiffened panels. This dissertation investigates the use of approximate models and analysis methods implemented in PANDA2 software for preliminary design of stiffened panels. PANDA2 is also used for a trade study to compare weight efficiencies of stiffened panel concepts for a liquid hydrogen tank of a reusable launch vehicle. Optimum weights of stiffened panels are obtained for different materials, constructions and stiffener geometry. The study investigates the influence of modeling and analysis choices in PANDA2 on optimum designs. Complex structures usually require finite element analysis models to capture the details of their response. Design of complex structures must account for failure modes that are both global and local in nature. Often, different analysis models or computer programs are employed to calculate global and local structural response. Integration of different analysis programs is cumbersome and computationally expensive. Response surface approximation provides a global polynomial approximation that filters numerical noise present in discretized analysis models. The computational costs are transferred from optimization to development of approximate models. Using this process, the analyst can create structural response models that can be used by
NASA Astrophysics Data System (ADS)
Janardhanan, S.; Datta, B.
2011-12-01
Surrogate models are widely used to develop computationally efficient simulation-optimization models to solve complex groundwater management problems. Artificial intelligence based models are most often used for this purpose where they are trained using predictor-predictand data obtained from a numerical simulation model. Most often this is implemented with the assumption that the parameters and boundary conditions used in the numerical simulation model are perfectly known. However, in most practical situations these values are uncertain. Under these circumstances the application of such approximation surrogates becomes limited. In our study we develop a surrogate model based coupled simulation optimization methodology for determining optimal pumping strategies for coastal aquifers considering parameter uncertainty. An ensemble surrogate modeling approach is used along with multiple realization optimization. The methodology is used to solve a multi-objective coastal aquifer management problem considering two conflicting objectives. Hydraulic conductivity and the aquifer recharge are considered as uncertain values. Three dimensional coupled flow and transport simulation model FEMWATER is used to simulate the aquifer responses for a number of scenarios corresponding to Latin hypercube samples of pumping and uncertain parameters to generate input-output patterns for training the surrogate models. Non-parametric bootstrap sampling of this original data set is used to generate multiple data sets which belong to different regions in the multi-dimensional decision and parameter space. These data sets are used to train and test multiple surrogate models based on genetic programming. The ensemble of surrogate models is then linked to a multi-objective genetic algorithm to solve the pumping optimization problem. Two conflicting objectives, viz, maximizing total pumping from beneficial wells and minimizing the total pumping from barrier wells for hydraulic control of
Technology Transfer Automated Retrieval System (TEKTRAN)
For several decades, optimization and sensitivity/uncertainty analysis of environmental models has been the subject of extensive research. Although much progress has been made and sophisticated methods developed, the growing complexity of environmental models to represent real-world systems makes it...
Using genetic algorithm to solve a new multi-period stochastic optimization model
NASA Astrophysics Data System (ADS)
Zhang, Xin-Li; Zhang, Ke-Cun
2009-09-01
This paper presents a new asset allocation model based on the CVaR risk measure and transaction costs. Institutional investors manage their strategic asset mix over time to achieve favorable returns subject to various uncertainties, policy and legal constraints, and other requirements. One may use a multi-period portfolio optimization model in order to determine an optimal asset mix. Recently, an alternative stochastic programming model with simulated paths was proposed by Hibiki [N. Hibiki, A hybrid simulation/tree multi-period stochastic programming model for optimal asset allocation, in: H. Takahashi, (Ed.) The Japanese Association of Financial Econometrics and Engineering, JAFFE Journal (2001) 89-119 (in Japanese); N. Hibiki A hybrid simulation/tree stochastic optimization model for dynamic asset allocation, in: B. Scherer (Ed.), Asset and Liability Management Tools: A Handbook for Best Practice, Risk Books, 2003, pp. 269-294], which was called a hybrid model. However, the transaction costs weren't considered in that paper. In this paper, we improve Hibiki's model in the following aspects: (1) The risk measure CVaR is introduced to control the wealth loss risk while maximizing the expected utility; (2) Typical market imperfections such as short sale constraints, proportional transaction costs are considered simultaneously. (3) Applying a genetic algorithm to solve the resulting model is discussed in detail. Numerical results show the suitability and feasibility of our methodology.
Technology Transfer Automated Retrieval System (TEKTRAN)
For several decades, optimization and sensitivity/uncertainty analysis of environmental models has been the subject of extensive research. Although much progress has been made and sophisticated methods developed, the growing complexity of environmental models to represent real-world systems makes it...
Perceived and Implicit Ranking of Academic Journals: An Optimization Choice Model
ERIC Educational Resources Information Center
Xie, Frank Tian; Cai, Jane Z.; Pan, Yue
2012-01-01
A new system of ranking academic journals is proposed in this study and optimization choice model used to analyze data collected from 346 faculty members in a business discipline. The ranking model uses the aggregation of perceived, implicit sequencing of academic journals by academicians, therefore eliminating several key shortcomings of previous…
An approach to the multi-axis problem in manual control. [optimal pilot model
NASA Technical Reports Server (NTRS)
Harrington, W. W.
1977-01-01
The multiaxis control problem is addressed within the context of the optimal pilot model. The problem is developed to provide efficient adaptation of the optimal pilot model to complex aircraft systems and real world, multiaxis tasks. This is accomplished by establishing separability of the longitudinal and lateral control problems subject to the constraints of multiaxis attention and control allocation. Control solution adaptation to the constrained single axis attention allocations is provided by an optimal control frequency response algorithm. An algorithm is developed to solve the multiaxis control problem. The algorithm is then applied to an attitude hold task for a bare airframe fighter aircraft case with interesting multiaxis properties.
NASA Astrophysics Data System (ADS)
Billy, Frédérique; Clairambault, Jean; Fercoq, Olivier; Lorenzi, Tommaso; Lorz, Alexander; Perthame, Benoît
2012-09-01
The main two pitfalls of therapeutics in clinical oncology, that limit increasing drug doses, are unwanted toxic side effects on healthy cell populations and occurrence of resistance to drugs in cancer cell populations. Depending on the constraint considered in the control problem at stake, toxicity or drug resistance, we present two different ways to model the evolution of proliferating cell populations, healthy and cancer, under the control of anti-cancer drugs. In the first case, we use a McKendrick age-structured model of the cell cycle, whereas in the second case, we use a model of evolutionary dynamics, physiologically structured according to a continuous phenotype standing for drug resistance. In both cases, we mention how drug targets may be chosen so as to accurately represent the effects of cytotoxic and of cytostatic drugs, separately, and how one may consider the problem of optimisation of combined therapies.
Wind Tunnel Management and Resource Optimization: A Systems Modeling Approach
NASA Technical Reports Server (NTRS)
Jacobs, Derya, A.; Aasen, Curtis A.
2000-01-01
Time, money, and, personnel are becoming increasingly scarce resources within government agencies due to a reduction in funding and the desire to demonstrate responsible economic efficiency. The ability of an organization to plan and schedule resources effectively can provide the necessary leverage to improve productivity, provide continuous support to all projects, and insure flexibility in a rapidly changing environment. Without adequate internal controls the organization is forced to rely on external support, waste precious resources, and risk an inefficient response to change. Management systems must be developed and applied that strive to maximize the utility of existing resources in order to achieve the goal of "faster, cheaper, better". An area of concern within NASA Langley Research Center was the scheduling, planning, and resource management of the Wind Tunnel Enterprise operations. Nine wind tunnels make up the Enterprise. Prior to this research, these wind tunnel groups did not employ a rigorous or standardized management planning system. In addition, each wind tunnel unit operated from a position of autonomy, with little coordination of clients, resources, or project control. For operating and planning purposes, each wind tunnel operating unit must balance inputs from a variety of sources. Although each unit is managed by individual Facility Operations groups, other stakeholders influence wind tunnel operations. These groups include, for example, the various researchers and clients who use the facility, the Facility System Engineering Division (FSED) tasked with wind tunnel repair and upgrade, the Langley Research Center (LaRC) Fabrication (FAB) group which fabricates repair parts and provides test model upkeep, the NASA and LARC Strategic Plans, and unscheduled use of the facilities by important clients. Expanding these influences horizontally through nine wind tunnel operations and vertically along the NASA management structure greatly increases the
Optimization of Evaporative Demand Models for Seasonal Drought Forecasting
NASA Astrophysics Data System (ADS)
McEvoy, D.; Huntington, J. L.; Hobbins, M.
2015-12-01
Providing reliable seasonal drought forecasts continues to pose a major challenge for scientists, end-users, and the water resources and agricultural communities. Precipitation (Prcp) forecasts beyond weather time scales are largely unreliable, so exploring new avenues to improve seasonal drought prediction is necessary to move towards applications and decision-making based on seasonal forecasts. A recent study has shown that evaporative demand (E0) anomaly forecasts from the Climate Forecast System Version 2 (CFSv2) are consistently more skillful than Prcp anomaly forecasts during drought events over CONUS, and E0 drought forecasts may be particularly useful during the growing season in the farming belts of the central and Midwestern CONUS. For this recent study, we used CFSv2 reforecasts to assess the skill of E0 and of its individual drivers (temperature, humidity, wind speed, and solar radiation), using the American Society of Civil Engineers Standardized Reference Evapotranspiration (ET0) Equation. Moderate skill was found in ET0, temperature, and humidity, with lesser skill in solar radiation, and no skill in wind. Therefore, forecasts of E0 based on models with no wind or solar radiation inputs may prove to be more skillful than the ASCE ET0. For this presentation we evaluate CFSv2 E0 reforecasts (1982-2009) from three different E0 models: (1) ASCE ET0; (2) Hargreaves and Samani (ET-HS), which is estimated from maximum and minimum temperature alone; and (3) Valiantzas (ET-V), which is a modified version of the Penman method for use when wind speed data are not available (or of poor quality) and is driven only by temperature, humidity, and solar radiation. The University of Idaho's gridded meteorological data (METDATA) were used as observations to evaluate CFSv2 and also to determine if ET0, ET-HS, and ET-V identify similar historical drought periods. We focus specifically on CFSv2 lead times of one, two, and three months, and season-one forecasts, which are
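Of the three E0 models compared, Hargreaves-Samani (ET-HS) is the simplest, requiring only temperature extremes. A minimal sketch of the standard Hargreaves-Samani form, with coefficients from the commonly cited 1985 formulation; the extraterrestrial radiation value in the example is an illustrative assumption, not a value from the study:

```python
import math

def et_hs(tmax_c, tmin_c, ra_mm_day):
    """Hargreaves-Samani reference ET (mm/day) from temperature alone.

    tmax_c, tmin_c : daily max/min air temperature (deg C)
    ra_mm_day      : extraterrestrial radiation, expressed as
                     equivalent evaporation (mm/day)
    """
    tmean = 0.5 * (tmax_c + tmin_c)
    return 0.0023 * ra_mm_day * (tmean + 17.8) * math.sqrt(tmax_c - tmin_c)

# Example: a warm summer day with an assumed Ra of 16.5 mm/day
print(round(et_hs(32.0, 18.0, 16.5), 2))
```

Because the diurnal temperature range serves as a proxy for radiation and humidity, the estimate degrades in humid or coastal climates, which is one reason the study compares it against fuller formulations.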
Models for optimal harvest with convex function of growth rate of a population
Lyashenko, O.I.
1995-12-10
Two models for growth of a population, which are described by a Cauchy problem for an ordinary differential equation with right-hand side depending on the population size and time, are investigated. The first model is time-discrete, i.e., the moments of harvest are fixed and discrete. The second model is time-continuous, i.e., a crop is harvested continuously in time. For autonomous systems, the second model is a particular case of the variational model for optimal control with constraints investigated in. However, the prerequisites and the method of investigation are somewhat different, for they are based on Lemma 1 presented below. In this paper, the existence and uniqueness theorem for the solution of the discrete and continuous problems of optimal harvest is proved, and the corresponding algorithms are presented. The results obtained are illustrated by a model for growth of the light-requiring green alga Chlorella.
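The time-continuous harvest setting can be illustrated with the classic logistic growth model (the paper's models are more general; this sketch uses hypothetical parameter values and shows why constant harvest rates above the maximum sustainable yield rK/4 drive the population to collapse):

```python
def simulate(r, K, N0, h, dt=0.01, T=50.0):
    """Euler integration of dN/dt = r*N*(1 - N/K) - h (continuous harvest)."""
    N = N0
    steps = int(T / dt)
    for _ in range(steps):
        N += dt * (r * N * (1.0 - N / K) - h)
        if N <= 0.0:
            return 0.0  # population collapses under over-harvesting
    return N

r, K = 0.5, 100.0
msy = r * K / 4.0   # maximum sustainable yield for the logistic model
print(simulate(r, K, N0=80.0, h=0.9 * msy) > 0)   # below MSY: persists
print(simulate(r, K, N0=80.0, h=1.5 * msy) > 0)   # above MSY: collapses
```

For h below rK/4 the population settles at the upper equilibrium of r*N*(1 - N/K) = h; above it, growth can never offset the harvest and the stock is driven to zero.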
NASA Astrophysics Data System (ADS)
Shen, Chengcheng; Shi, Honghua; Liu, Yongzhi; Li, Fen; Ding, Dewen
2015-12-01
Marine ecosystem dynamic models (MEDMs) are important tools for the simulation and prediction of marine ecosystems. This article summarizes the methods and strategies used for the improvement and assessment of MEDM skill, and it attempts to establish a technical framework to inspire further ideas concerning MEDM skill improvement. The skill of MEDMs can be improved by parameter optimization (PO), which is an important step in model calibration. An efficient approach to solve the problem of PO constrained by MEDMs is the global treatment of both sensitivity analysis and PO. Model validation is an essential step following PO, which validates the efficiency of model calibration by analyzing and estimating the goodness-of-fit of the optimized model. Additionally, by focusing on the degree of impact of various factors on model skill, model uncertainty analysis can supply model users with a quantitative assessment of model confidence. Research on MEDMs is ongoing; however, improvement in model skill still lacks global treatments and its assessment is not integrated. Thus, the predictive performance of MEDMs is not strong and model uncertainties lack quantitative descriptions, limiting their application. Therefore, a large number of case studies concerning model skill should be performed to promote the development of a scientific and normative technical framework for the improvement of MEDM skill.
Peck, J A
2016-01-01
Drug addiction is a significant health and societal problem for which there is no highly effective long-term behavioral or pharmacological treatment. A rising concern is the use of illegal opiate drugs such as heroin and the misuse of legally available pain relievers, which have led to serious deleterious health effects or even death. Therefore, treatment strategies that prolong opiate abstinence should be the primary focus of opiate treatment. Further, because the factors that support abstinence in humans and laboratory animals are similar, several animal models of abstinence and relapse have been developed. Here, we review a few animal models of abstinence and relapse and evaluate their validity and utility in addressing human behavior that leads to long-term drug abstinence. Then, a novel abstinence "conflict" model that more closely mimics human drug-seeking episodes by incorporating negative consequences for drug seeking (as are typical in humans, e.g., incarceration and job loss) while the drug remains readily available is discussed. Additionally, recent research investigating both cocaine and heroin seeking in rats using the animal conflict model is presented and the implications for heroin treatments are examined. Finally, it is argued that the use of animal abstinence/relapse models that more closely approximate human drug addiction, such as the abstinence-conflict model, could lead to a better understanding of the neurobiological and environmental factors that support long-term drug abstinence. In turn, this will lead to the development of more effective environmental and pharmacotherapeutic interventions to treat opiate addiction and addiction to other drugs of abuse. PMID:27055619
Multi-objective optimization of gear forging process based on adaptive surrogate meta-models
NASA Astrophysics Data System (ADS)
Meng, Fanjuan; Labergere, Carl; Lafon, Pascal; Daniel, Laurent
2013-05-01
In forging industry, net shape or near net shape forging of gears has been the subject of considerable research effort in the last few decades. So in this paper, a multi-objective optimization methodology of net shape gear forging process design has been discussed. The study is mainly done in four parts: building a parametric CAD geometry model, simulating the forging process, fitting surrogate meta-models and optimizing the process by using an advanced algorithm. In order to make the meta-models approximate the real response as closely as possible, an adaptive meta-model based design strategy has been applied. This is a continuous process: first, build a preliminary version of the meta-models after the initial simulated calculations; second, improve the accuracy and update the meta-models by adding some new representative samplings. By using this iterative strategy, the number of the initial sample points for real numerical simulations is greatly decreased and the time for the forged gear design is significantly shortened. Finally, an optimal design for an industrial application of a 27-tooth gear forging process was introduced, which includes three optimization variables and two objective functions. A 3D FE numerical simulation model is used to realize the process and an advanced thermo-elasto-visco-plastic constitutive equation is considered to represent the material behavior. The meta-model applied for this example is kriging and the optimization algorithm is NSGA-II. At last, a relatively good Pareto optimal front (POF) is obtained with the gradually improved surrogate meta-models.
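The adaptive strategy described above (build a preliminary meta-model, then enrich it with new samples) can be sketched in miniature. This is a generic illustration, not the paper's kriging/NSGA-II setup: a quadratic surrogate is refit each time a new sample is added at the surrogate's predicted minimiser, and the "expensive simulation" is a hypothetical 1-D stand-in:

```python
import numpy as np

def expensive_sim(x):
    """Stand-in for a costly FE forging simulation (hypothetical 1-D response)."""
    return (x - 0.6) ** 2 + 0.05 * np.sin(8 * x)

# Step 1: small initial design of experiments
X = list(np.linspace(0.0, 1.0, 4))
Y = [expensive_sim(x) for x in X]

# Step 2: iteratively refit a quadratic surrogate and sample at its minimiser
for _ in range(6):
    a, b, c = np.polyfit(X, Y, 2)                 # surrogate: a*x^2 + b*x + c
    x_new = float(np.clip(-b / (2 * a), 0.0, 1.0)) if a > 0 else 0.5
    X.append(x_new)                               # enrich the sample set
    Y.append(expensive_sim(x_new))                # one more "simulation"

best = X[int(np.argmin(Y))]
print(best)
```

The point of the strategy is visible even in this toy: most samples cluster near the predicted optimum, so far fewer expensive simulations are spent than a uniform design of the same accuracy would need.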
Gong, P. (Dept. of Forest Economics)
1998-08-01
Different decision models can be constructed and used to analyze a regeneration decision in even-aged stand management. However, the optimal decision and management outcomes determined in an analysis may depend on the decision model used in the analysis. This paper examines the proper choice of decision model for determining the optimal planting density and land expectation value (LEV) for a Scots pine (Pinus sylvestris L.) plantation in northern Sweden. First, a general adaptive decision model for determining the regeneration alternative that maximizes the LEV is presented. This model recognizes future stand state and timber price uncertainties by including multiple stand state and timber price scenarios, and assumes that the harvest decision in each future period will be made conditional on the observed stand state and timber prices. Alternative assumptions about future stand states, timber prices, and harvest decisions can be incorporated into this general decision model, resulting in several different decision models that can be used to analyze a specific regeneration problem. Next, the consequences of choosing different modeling assumptions are determined using the example Scots pine plantation problem. Numerical results show that the most important sources of uncertainty that affect the optimal planting density and LEV are variations of the optimal clearcut time due to short-term fluctuations of timber prices. It is appropriate to determine the optimal planting density and harvest policy using an adaptive decision model that recognizes uncertainty only in future timber prices. After the optimal decisions have been found, however, the LEV should be re-estimated by incorporating both future stand state and timber price uncertainties.
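The deterministic baseline for the LEV in such analyses is the Faustmann formula, LEV(T) = (p·V(T)·e^(−rT) − c) / (1 − e^(−rT)), maximised over the rotation age T. A sketch with a hypothetical yield curve; all parameter values are illustrative, not taken from the paper:

```python
import math

def lev(T, price, cost, r, volume):
    """Deterministic land expectation value (Faustmann) for rotation age T.

    price  : stumpage price per unit volume
    cost   : regeneration (planting) cost at the start of each rotation
    r      : discount rate
    volume : yield function, stand volume as a function of age
    """
    d = math.exp(-r * T)
    return (price * volume(T) * d - cost) / (1.0 - d)

# Hypothetical sigmoid yield curve for the stand (m^3/ha versus age in years)
volume = lambda T: 500.0 / (1.0 + math.exp(-0.08 * (T - 55.0)))

# Grid search for the rotation age maximising LEV
ages = range(20, 121)
best_T = max(ages, key=lambda T: lev(T, price=40.0, cost=1500.0, r=0.03, volume=volume))
print(best_T)
```

The adaptive model in the paper generalises this by letting the clearcut decision respond to observed prices and stand states, which is why the optimal planting density can differ from the deterministic answer.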
Two New PRP Conjugate Gradient Algorithms for Minimization Optimization Models
Yuan, Gonglin; Duan, Xiabin; Liu, Wenjie; Wang, Xiaoliang; Cui, Zengru; Sheng, Zhou
2015-01-01
Two new PRP conjugate gradient algorithms are proposed in this paper based on two modified PRP conjugate gradient methods: the first algorithm is proposed for solving unconstrained optimization problems, and the second algorithm is proposed for solving nonlinear equations. The first method contains two aspects of information: function value and gradient value. Both methods possess some good properties, as follows: (1) βk ≥ 0; (2) the search direction has the trust region property without the use of any line search method; and (3) the search direction has the sufficient descent property without the use of any line search method. Under some suitable conditions, we establish the global convergence of the two algorithms. We conduct numerical experiments to evaluate our algorithms. The numerical results indicate that the first algorithm is effective and competitive for solving unconstrained optimization problems and that the second algorithm is effective for solving large-scale nonlinear equations. PMID:26502409
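The nonnegativity property βk ≥ 0 corresponds to the well-known PRP+ choice βk = max(0, gk·(gk − gk−1)/||gk−1||²). The sketch below is a generic PRP+ iteration, not the paper's algorithms: it uses an Armijo backtracking line search with steepest-descent restarts for robustness, whereas the paper's methods are constructed so that the direction properties hold without any line search:

```python
import numpy as np

def prp_plus_cg(f, grad, x0, iters=500, tol=1e-8):
    """PRP+ conjugate gradient with Armijo backtracking and descent restarts.

    beta_k = max(0, g_k.(g_k - g_prev) / ||g_prev||^2), so beta_k >= 0.
    """
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        if g @ d >= 0:            # not a descent direction: restart
            d = -g
        t, slope = 1.0, g @ d
        while f(x + t * d) > f(x) + 1e-4 * t * slope and t > 1e-14:
            t *= 0.5              # Armijo backtracking
        x_new = x + t * d
        g_new = grad(x_new)
        beta = max(0.0, float(g_new @ (g_new - g)) / float(g @ g))
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# Convex quadratic test: f(x) = 0.5 x.A x - b.x with symmetric positive definite A
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
x_star = prp_plus_cg(f, lambda x: A @ x - b, [0.0, 0.0])
print(np.allclose(A @ x_star, b, atol=1e-5))
```

Truncating β at zero is what prevents the classical PRP failure mode of generating ascent directions after a poor step.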
NASA Astrophysics Data System (ADS)
Xia, Youlong; Yang, Zong-Liang; Stoffa, Paul L.; Sen, Mrinal K.
2005-01-01
Most previous land-surface model calibration studies have defined global ranges for their parameters to search for optimal parameter sets. Little work has been conducted to study the impacts of realistic versus global ranges as well as model complexities on the calibration and uncertainty estimates. The primary purpose of this paper is to investigate these impacts by employing Bayesian Stochastic Inversion (BSI) to the Chameleon Surface Model (CHASM). The CHASM was designed to explore the general aspects of land-surface energy balance representation within a common modeling framework that can be run from a simple energy balance formulation to a complex mosaic type structure. The BSI is an uncertainty estimation technique based on Bayes' theorem, importance sampling, and very fast simulated annealing. The model forcing data and surface flux data were collected at seven sites representing a wide range of climate and vegetation conditions. For each site, four experiments were performed with simple and complex CHASM formulations as well as realistic and global parameter ranges. Twenty-eight experiments were conducted and 50 000 parameter sets were used for each run. The results show that the use of global and realistic ranges gives similar simulations for both modes for most sites, but the global ranges tend to produce some unreasonable optimal parameter values. Comparison of simple and complex modes shows that the simple mode has more parameters with unreasonable optimal values. Use of parameter ranges and model complexities has significant impacts on frequency distribution of parameters, marginal posterior probability density functions, and estimates of uncertainty of simulated sensible and latent heat fluxes. Comparison between model complexity and parameter ranges shows that the former has more significant impacts on parameter and uncertainty estimations.
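The stochastic search inside BSI can be illustrated with a plain simulated-annealing loop. This is a much-simplified stand-in for very fast simulated annealing (which uses a different, temperature-dependent generating distribution), applied to a toy two-parameter misfit surface with hypothetical parameter ranges:

```python
import math
import random

def simulated_annealing(cost, lo, hi, iters=3000, t0=1.0, seed=42):
    """Generic simulated annealing over the box [lo, hi]^n."""
    rng = random.Random(seed)
    n = len(lo)
    x = [rng.uniform(lo[i], hi[i]) for i in range(n)]
    cx = cost(x)
    best, cbest = list(x), cx
    for k in range(1, iters + 1):
        temp = t0 / k                       # simple 1/k cooling schedule
        # Gaussian perturbation, clipped back into the parameter range
        y = [min(hi[i], max(lo[i], x[i] + rng.gauss(0, 0.1 * (hi[i] - lo[i]))))
             for i in range(n)]
        cy = cost(y)
        # accept improvements always; accept worse moves with Metropolis probability
        if cy < cx or rng.random() < math.exp(-(cy - cx) / max(temp, 1e-12)):
            x, cx = y, cy
            if cx < cbest:
                best, cbest = list(x), cx
    return best, cbest

# Toy "misfit" with optimum at (0.3, 0.7) inside realistic parameter ranges
cost = lambda p: (p[0] - 0.3) ** 2 + (p[1] - 0.7) ** 2
best, cbest = simulated_annealing(cost, [0.0, 0.0], [1.0, 1.0])
print(cbest < 1e-2)
```

In BSI the accepted samples are additionally reweighted via importance sampling to approximate the marginal posterior densities, rather than only reporting the single best parameter set.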
CFD modeling could optimize sorbent injection system efficiency
Blankinship, S.
2006-01-15
Several technologies will probably be needed to remove mercury from coal-plant stack emissions as mandated by new mercury emission control legislation in the USA. One of the most promising mercury removal approaches is the injection of a sorbent, such as powdered activated carbon (PAC), which makes the removal process much more controllable. ADA-ES recently simulated field tests of sorbent injection at New England Power Company's Brayton Point Power Plant in Somerset, Mass., where activated carbon sorbent was injected using a set of eight lances upstream of the second of two electrostatic precipitators (ESPs). Consultants from Fluent created a computational model of the ductwork and injection lances. The simulation results showed that the flue gas flow was poorly distributed at the sorbent injection plane, and that a small region of reverse flow occurred, a result of the flow pattern at the exit of the first ESP. The results also illustrated that the flow was predominantly in the lower half of the duct, and affected by some upstream turning vanes. The simulations demonstrated the value of CFD as a diagnostic tool. They were performed in a fraction of the time and cost required for the physical tests yet provided far more diagnostic information, such as the distribution of mercury and sorbent at each point in the computational domain.
Towards optimization in digital chest radiography using Monte Carlo modelling.
Ullman, Gustaf; Sandborg, Michael; Dance, David R; Hunt, Roger A; Alm Carlsson, Gudrun
2006-06-01
A Monte Carlo based computer model of the x-ray imaging system was used to investigate how various image quality parameters of interest in chest PA radiography and the effective dose E vary with tube voltage (90-150 kV), additional copper filtration (0-0.5 mm), anti-scatter method (grid ratios 8-16 and air gap lengths 20-40 cm) and patient thickness (20-28 cm) in a computed radiography (CR) system. Calculated quantities were normalized to a fixed value of air kerma (5.0 microGy) at the automatic exposure control chambers. Soft-tissue nodules were positioned at different locations in the anatomy and calcifications in the apical region. The signal-to-noise ratio, SNR, of the nodules and the nodule contrast relative to the contrast of bone (C/C(B)) as well as relative to the dynamic range in the image (C(rel)) were used as image quality measures. In all anatomical regions, except in the densest regions in the thickest patients, the air gap technique provides higher SNR and contrast ratios than the grid technique and at a lower effective dose E. Choice of tube voltage depends on whether quantum noise (SNR) or the contrast ratios are most relevant for the diagnostic task. SNR increases with decreasing tube voltage while C/C(B) increases with increasing tube voltage. PMID:16723762
Modeling and optimization of ultra high speed devices and circuits
Jandaghi-Semnani, M.
1989-01-01
This thesis consists of two parts. In part one, we have developed an optimization scheme for designing submicron metal-oxide-semiconductor field effect transistors (MOSFETs). The scheme, which is based on the concepts of a mathematical programming problem, considers all the necessary performance and reliability issues and attempts to approach a desired set of target values. The modified pattern search method is used to implement the optimization scheme selected in this work. Simulated results have been compared with experimental data, and excellent agreement has been observed. Using the optimization scheme, a 0.6 μm channel length MOSFET for possible dynamic random access memory (DRAM) applications has been designed. The other part of this thesis is devoted to the design of an ultra-fast 8 x 8-bit multiplier/accumulator circuit based on a resonant tunneling transistor (RTT) technology. The multiplier circuit has a parallel architecture and uses the carry save adder technique. The design of all the logic gates of the multiplier/accumulator circuit is based on three logic functions: NAND, NOR, and NOT. The number of transistors used in the RTT circuit is 2371, and the active chip area is about 0.30 mm². The multiplier speed is 79 ps with an average power dissipation of 2.28 milliwatts (mW). The clock signals required for the operation of the chip are generated by a clock driver circuit which was designed using a ring oscillator and a binary counter circuit.
Turner, D P; Ritts, W D; Wharton, S; Thomas, C; Monson, R; Black, T A
2009-02-26
The combination of satellite remote sensing and carbon cycle models provides an opportunity for regional to global scale monitoring of terrestrial gross primary production, ecosystem respiration, and net ecosystem production. FPAR (the fraction of photosynthetically active radiation absorbed by the plant canopy) is a critical input to diagnostic models; however, little is known about the relative effectiveness of FPAR products from different satellite sensors or about the sensitivity of flux estimates to different parameterization approaches. In this study, we used multiyear observations of carbon flux at four eddy covariance flux tower sites within the conifer biome to evaluate these factors. FPAR products from the MODIS and SeaWiFS sensors, and the effects of single site vs. cross-site parameter optimization were tested with the CFLUX model. The SeaWiFS FPAR product showed greater dynamic range across sites and resulted in slightly reduced flux estimation errors relative to the MODIS product when using cross-site optimization. With site-specific parameter optimization, the flux model was effective in capturing seasonal and interannual variation in the carbon fluxes at these sites. The cross-site prediction errors were lower when using parameters from a cross-site optimization compared to parameter sets from optimization at single sites. These results support the practice of multisite optimization within a biome for parameterization of diagnostic carbon flux models.
The human operator in manual preview tracking /an experiment and its modeling via optimal control/
NASA Technical Reports Server (NTRS)
Tomizuka, M.; Whitney, D. E.
1976-01-01
A manual preview tracking experiment and its results are presented. The preview drastically improves the tracking performance compared to zero-preview tracking. Optimal discrete finite preview control is applied to determine the structure of a mathematical model of the manual preview tracking experiment. Variable parameters in the model are adjusted to values consistent with published data in manual control. The model with the adjusted parameters is found to be well correlated with the experimental results.
NASA Astrophysics Data System (ADS)
Shoemaker, C. A.; Singh, A.
2008-12-01
This paper will describe some new optimization algorithms and their application to hydrologic models. The approaches include a parallel version of a new heuristic algorithm combined with tabu search and a mathematically derived global optimization method that is based on trust region methods. The goals of these methods are to find optimal solutions to calibration problems and to design problems with relatively few simulations or (in a parallel environment) relatively little wallclock time. This is important because currently it is not possible to apply global optimization methods like genetic algorithms to computationally expensive simulation models like partial differential equations (with many nodes in groundwater) because it is not feasible to do thousands of simulations to evaluate the objective/fitness function. Results of the application of the algorithms to some complex models of groundwater contamination and phosphorous transport in watersheds will be presented.
An effective model for ergonomic optimization applied to a new automotive assembly line
NASA Astrophysics Data System (ADS)
Duraccio, Vincenzo; Elia, Valerio; Forcina, Antonio
2016-06-01
An efficient ergonomic optimization can lead to a significant improvement in production performance and a considerable reduction of costs. In the present paper, a new model for ergonomic optimization is proposed. The new approach is based on the criteria defined by the National Institute for Occupational Safety and Health (NIOSH) and adapted to Italian legislation. The proposed model provides an ergonomic optimization by analyzing the ergonomic conditions under which manual work is performed. The model includes a schematic and systematic analysis method of the operations, and identifies all possible ergonomic aspects to be evaluated. The proposed approach has been applied to an automotive assembly line, where the repeatability of operations makes optimization fundamental. The application clearly demonstrates the effectiveness of the new approach.
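Since the model builds on the NIOSH criteria, the core of the revised NIOSH lifting equation is worth sketching. The multiplier values below are illustrative assumptions; in practice each multiplier is computed from the task geometry and frequency using the NIOSH tables:

```python
def recommended_weight_limit(hm, vm, dm, am, fm, cm, lc=23.0):
    """Revised NIOSH lifting equation: RWL = LC*HM*VM*DM*AM*FM*CM.

    LC is the load constant (23 kg in the metric formulation); the six
    multipliers (horizontal, vertical, distance, asymmetry, frequency,
    coupling) each lie in (0, 1] and are read from the NIOSH tables.
    """
    return lc * hm * vm * dm * am * fm * cm

def lifting_index(load_kg, rwl):
    """LI = actual load / RWL; LI > 1 flags an increased risk of injury."""
    return load_kg / rwl

# Illustrative task: ideal horizontal/vertical position, moderate travel
# distance, 30-degree asymmetry, moderate frequency, good coupling
rwl = recommended_weight_limit(hm=1.0, vm=1.0, dm=0.93, am=0.9, fm=0.85, cm=1.0)
print(round(rwl, 2))                    # 16.36
print(lifting_index(15.0, rwl) > 1.0)   # False: 15 kg is within the limit
```

An optimization model of the kind described can then, for example, reassign tasks or re-sequence operations so that each station's lifting index stays below 1.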
NASA Astrophysics Data System (ADS)
Sarac, Vasilija; Atanasova-Pacemska, Tatjana; Minovski, Dragan; Cogelja, Goran; Smitková, Miroslava; Schulze, Christian
2015-01-01
The method of genetic algorithms is used to optimize the efficiency factor of two objects: a single-phase shaded-pole motor and the main inductor for an LCL filter, intended for independent operation. By varying the construction parameters, three motor and two inductor models have been designed and optimized. The optimized motors exhibited a gradual increase of the efficiency factor achieved for the same input power. An increased output power has also been achieved, which considerably improved the low efficiency factor typical for this type of motor. The optimized filter models have an increased efficiency due to lower losses and reduced heating. All models are evaluated by the finite element method, which allows plotting the magnetic flux density distribution in the cross section, thereby revealing possible weak parts of the construction with a high flux density.
NASA Astrophysics Data System (ADS)
Zhang, Xiao-Ming; Ding, Han
2008-11-01
The concept of uncertainty plays an important role in the design of practical mechanical systems. The most common approach to such problems is to model the uncertain parameters as a random vector. A natural way to handle the randomness is to admit that a given probability density function represents the uncertainty distribution. However, the drawback of this approach is that the probability distribution is difficult to obtain. In this paper, we use the non-probabilistic convex model to deal with the uncertain parameters, in which there is no need for probability density functions. Using the convex model theory, a new method to optimize the dynamic response of a mechanical system with uncertain parameters is derived. Because the uncertain parameters can be selected as the optimization parameters, the present method can provide more information about the optimization results than that obtained by deterministic optimization. The present method is implemented for a torsional vibration system. The numerical results show that the method is effective.
Algorithms of D-optimal designs for Morgan Mercer Flodin (MMF) models with three parameters
NASA Astrophysics Data System (ADS)
Widiharih, Tatik; Haryatmi, Sri; Gunardi; Wilandari, Yuciana
2016-02-01
Morgan Mercer Flodin (MMF) models are used in many areas including biological growth studies, animal husbandry, chemistry, finance, pharmacokinetics and pharmacodynamics. Locally D-optimal designs for Morgan Mercer Flodin (MMF) models with three parameters are investigated. We used the Generalized Equivalence Theorem of Kiefer and Wolfowitz to determine the D-optimality criteria. The number of roots of the standardized variance is determined using the Tchebysheff system concept and is used to establish that the design is a minimally supported design. In these models, the designs are minimally supported designs with uniform weights on their supports, and the upper bound of the design region is a support point.
A MILP-Based Distribution Optimal Power Flow Model for Microgrid Operation
Liu, Guodong; Starke, Michael R; Zhang, Xiaohu; Tomsovic, Kevin
2016-01-01
This paper proposes a distribution optimal power flow (D-OPF) model for the operation of microgrids. The proposed model minimizes not only the operating cost, including fuel cost, purchasing cost and demand charge, but also several performance indices, including voltage deviation, network power loss and power factor. It co-optimizes the real and reactive power from distributed generators (DGs) and batteries considering their capacity and power factor limits. The D-OPF is formulated as a mixed-integer linear programming (MILP) problem. Numerical simulation results show the effectiveness of the proposed model.
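The MILP structure (binary commitment status plus continuous dispatch) can be illustrated by brute force on a toy single-period microgrid. All numbers are hypothetical, and a real D-OPF would also carry the network, voltage and power-factor constraints described above; here the binaries are enumerated and the committed units dispatched in merit order:

```python
from itertools import product

# Hypothetical single-period microgrid: two DGs with on/off (binary) status,
# min/max output (kW), linear fuel cost, and a fixed cost when committed;
# any remaining demand is bought from the grid.
gens = [  # (p_min, p_max, cost_per_kWh, fixed_on_cost)
    (10.0, 50.0, 0.08, 1.0),
    (20.0, 80.0, 0.06, 2.0),
]
grid_price = 0.12
demand = 90.0

best = None
for status in product([0, 1], repeat=len(gens)):      # binary variables
    # dispatch committed units in merit order (cheapest first)
    on = sorted((g for g, u in zip(gens, status) if u), key=lambda g: g[2])
    remaining, cost, feasible = demand, 0.0, True
    for p_min, p_max, c, fixed in on:
        p = min(p_max, max(p_min, remaining))         # continuous dispatch
        if p > remaining:                             # forced above demand
            feasible = False
            break
        cost += fixed + c * p
        remaining -= p
    if feasible:
        cost += grid_price * remaining                # buy the rest
        best = min(best, (cost, status)) if best else (cost, status)

print(best)
```

A MILP solver explores the same binary tree implicitly via branch-and-bound, which is what keeps the approach tractable when there are many units, storage devices and time periods.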
Finding Bayesian Optimal Designs for Nonlinear Models: A Semidefinite Programming-Based Approach
Duarte, Belmiro P. M.; Wong, Weng Kee
2014-01-01
This paper uses semidefinite programming (SDP) to construct Bayesian optimal designs for nonlinear regression models. The setup here extends the formulation of the optimal design problem as an SDP problem from linear to nonlinear models. Gaussian quadrature formulas (GQF) are used to compute the expectation in the Bayesian design criterion, such as D-, A- or E-optimality. As an illustrative example, we demonstrate the approach using the power-logistic model and compare results with those in the literature. Additionally, we investigate how the optimal design is impacted by different discretising schemes for the design space, different amounts of uncertainty in the parameter values, different choices of GQF and different prior distributions for the vector of model parameters, including normal priors with and without correlated components. Further applications to find Bayesian D-optimal designs with two regressors for a logistic model and a two-variable generalised linear model with a gamma distributed response are discussed, and some limitations of our approach are noted. PMID:26512159
A system-level cost-of-energy wind farm layout optimization with landowner modeling
Chen, Le; MacDonald, Erin
2013-10-01
This work applies an enhanced levelized wind farm cost model, including landowner remittance fees, to determine optimal turbine placements under three landowner participation scenarios and two land-plot shapes. Instead of assuming a continuous piece of land is available for the wind farm construction, as in most layout optimizations, the problem formulation represents landowner participation scenarios as a binary string variable, along with the number of turbines. The cost parameters and model are a combination of models from the National Renewable Energy Laboratory (NREL), Lawrence Berkeley National Laboratory, and Windustry. The system-level cost-of-energy (COE) optimization model is also tested under two land-plot shapes: equally-sized square land plots and unequal rectangle land plots. The optimal COE results are compared to actual COE data and found to be realistic. The results show that landowner remittances account for approximately 10% of farm operating costs across all cases. Irregular land-plot shapes are easily handled by the model. We find that larger land plots do not necessarily receive higher remittance fees. The model can help site developers identify the most crucial land plots for project success and the optimal positions of turbines, with realistic estimates of costs and profitability.
Improving flash flood forecasting with distributed hydrological model by parameter optimization
NASA Astrophysics Data System (ADS)
Chen, Yangbo
2016-04-01
In China, a flash flood is usually understood as a flood occurring in a small or medium-sized watershed with a drainage area of less than 200 km2, mainly induced by heavy rain and typically occurring where hydrological observations are lacking. Flash floods are widely observed in China and currently cause the most flood casualties in the country. Due to hydrological data scarcity, lumped hydrological models are difficult to employ for flash flood forecasting, as they require extensive observed hydrological data to calibrate model parameters. A physically based distributed hydrological model discretizes the terrain of the whole watershed into a number of grid cells at fine resolution, assimilates different terrain data and precipitation to different cells, and derives model parameters from the terrain properties, thus having the potential to be used in flash flood forecasting and improving flash flood prediction capability. In this study, the Liuxihe Model, a physically based distributed hydrological model proposed mainly for watershed flood forecasting, is employed to simulate flash floods in the Ganzhou area in southeast China, and models have been set up in 5 watersheds. Model parameters have been derived from the terrain properties, including the DEM, soil type, and land use type, but the results show that the flood simulation uncertainty is high, which may be caused by parameter uncertainty; some kind of uncertainty control is needed before the model can be used in real-time flash flood forecasting. Considering that many Chinese small and medium-sized watersheds have now set up hydrological observation networks, and a few flood events can be collected, these data may be used for model parameter optimization. For this reason, an automatic model parameter optimization algorithm using Particle Swarm Optimization (PSO) is developed to optimize the model parameters, and it has been found that model parameters optimized with even only one observed flood event could largely reduce the flood
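The parameter optimization step can be sketched with a generic particle swarm optimizer. This is a minimal textbook PSO on a toy two-parameter calibration problem, not the Liuxihe Model's actual objective function or parameter set:

```python
import random

def pso(cost, lo, hi, n_particles=20, iters=100, seed=1):
    """Minimal particle swarm optimiser over the box [lo, hi]^n.

    Standard velocity update with inertia w and cognitive/social weights c1, c2.
    """
    rng = random.Random(seed)
    n = len(lo)
    w, c1, c2 = 0.7, 1.5, 1.5
    X = [[rng.uniform(lo[d], hi[d]) for d in range(n)] for _ in range(n_particles)]
    V = [[0.0] * n for _ in range(n_particles)]
    P = [x[:] for x in X]                       # personal bests
    pc = [cost(x) for x in X]
    gi = min(range(n_particles), key=lambda i: pc[i])
    G, gc = P[gi][:], pc[gi]                    # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(n):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (P[i][d] - X[i][d])
                           + c2 * rng.random() * (G[d] - X[i][d]))
                X[i][d] = min(hi[d], max(lo[d], X[i][d] + V[i][d]))
            c = cost(X[i])
            if c < pc[i]:
                P[i], pc[i] = X[i][:], c
                if c < gc:
                    G, gc = X[i][:], c
    return G, gc

# Toy calibration: two "model parameters" with optimum at (0.4, 1.6);
# in practice cost(p) would run the hydrological model and score the
# simulated hydrograph against the observed flood event
cost = lambda p: (p[0] - 0.4) ** 2 + (p[1] - 1.6) ** 2
G, gc = pso(cost, lo=[0.0, 0.0], hi=[1.0, 2.0])
print(gc < 1e-4)
```

PSO is attractive in this setting because it needs only objective evaluations, no gradients, which suits a distributed model whose output is produced by simulation.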
Modeling and optimization of laser beam percussion drilling of thin aluminum sheet
NASA Astrophysics Data System (ADS)
Mishra, Sanjay; Yadava, Vinod
2013-06-01
Modeling and optimization of machining processes using coupled methodologies has been an area of interest for manufacturing engineers in recent times. The present paper deals with the development of a prediction model for Laser Beam Percussion Drilling (LBPD) using the coupled methodology of the Finite Element Method (FEM) and Artificial Neural Networks (ANN). First, 2D axisymmetric FEM-based thermal models for LBPD were developed, incorporating the temperature-dependent thermal properties, optical properties, and phase change phenomena of aluminum. The model is validated by comparing the results obtained using the FEM model with self-conducted experimental results in terms of hole taper. Second, input and output data generated using the FEM model are used for the training and testing of the ANN model. Further, Grey Relational Analysis (GRA) coupled with Principal Component Analysis (PCA) is effectively used for the multi-objective optimization of the LBPD process using data predicted by the trained ANN model. The developed ANN model predicts that hole taper and material removal rate are highly affected by pulse width, whereas pulse frequency plays the most significant role in determining the extent of the heat-affected zone (HAZ). The optimal process parameter setting shows a reduction of hole taper by 67.5%, an increase of material removal rate by 605%, and a reduction of the extent of the HAZ by 3.24%.
Automated optimization of water-water interaction parameters for a coarse-grained model.
Fogarty, Joseph C; Chiu, See-Wing; Kirby, Peter; Jakobsson, Eric; Pandit, Sagar A
2014-02-13
We have developed an automated parameter optimization software framework (ParOpt) that implements the Nelder-Mead simplex algorithm and applied it to a coarse-grained polarizable water model. The model employs a tabulated, modified Morse potential with decoupled short- and long-range interactions incorporating four water molecules per interaction site. Polarizability is introduced by the addition of a harmonic angle term defined among three charged points within each bead. The target function for parameter optimization was based on the experimental density, surface tension, electric field permittivity, and diffusion coefficient. The model was validated by comparison of statistical quantities with experimental observation. We found very good performance of the optimization procedure and good agreement of the model with experiment. PMID:24460506
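A hedged sketch of the simplex-driven fitting step, using SciPy's Nelder-Mead in place of the ParOpt framework described above. The `predict_observables` surrogate and its targets are invented stand-ins (the targets are generated from known parameters so the sketch has an exact optimum to find); the real target function would run the coarse-grained simulation and compare density, surface tension, permittivity, and diffusion against experiment.

```python
import numpy as np
from scipy.optimize import minimize

def predict_observables(p):
    """Hypothetical cheap surrogate for a coarse-grained water model:
    maps force-field parameters (eps, sigma) to (density, surface tension)."""
    eps, sigma = p
    density = 1000.0 * np.exp(-0.1 * (sigma - 3.0) ** 2) * (0.8 + 0.2 * np.tanh(eps))
    tension = 72.0 * eps / (1.0 + 0.5 * abs(sigma - 3.0))
    return np.array([density, tension])

# Pretend these are the experimental targets (generated from known parameters
# so that a perfect fit exists for the sketch)
true_params = np.array([1.2, 3.4])
targets = predict_observables(true_params)
weights = 1.0 / targets          # put both properties on a comparable scale

def objective(p):
    """Weighted least-squares mismatch between predicted and target observables."""
    return np.sum((weights * (predict_observables(p) - targets)) ** 2)

res = minimize(objective, x0=[0.5, 2.5], method="Nelder-Mead",
               options={"xatol": 1e-8, "fatol": 1e-12})
```

Nelder-Mead needs no gradients, which is why it suits target functions evaluated by running a simulation rather than by a closed-form expression.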
Ore-blending optimization model for sintering process based on characteristics of iron ores
NASA Astrophysics Data System (ADS)
Wu, Sheng-Li; Oliveira, Dauter; Dai, Yu-Ming; Xu, Jian
2012-03-01
An ore-blending optimization model for the sintering process is an intelligent system that includes iron ore characteristics, expert knowledge and material balance. In the present work, 14 indices are proposed to represent chemical composition, granulating properties and high temperature properties of iron ores. After the relationships between iron ore characteristics and sintering performance are established, the "two-step" method and the simplex method are introduced to build the model by distinguishing the calculation of optimized blending proportion of iron ores from that of other sintering materials in order to improve calculation efficiency. The ore-blending optimization model, programmed by Access and Visual Basic, is applied to practical production in steel mills and the results prove that the present model can take advantage of the available iron ore resource with stable sinter yield and quality performance but at a lower cost.
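The blending-proportion step can be illustrated as a small linear program solved by a simplex-type method, in the spirit of the model above; the three ores, their costs, chemistry, and the constraint limits below are hypothetical numbers, not mill data.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data for three iron ores: cost ($/t), Fe content (%), SiO2 (%)
cost = np.array([90.0, 110.0, 130.0])
fe   = np.array([58.0, 62.0, 65.0])
sio2 = np.array([6.5, 5.0, 3.8])

# Minimize blend cost s.t. proportions sum to 1, blend Fe >= 61%, SiO2 <= 5.5%
res = linprog(
    c=cost,
    A_ub=np.array([-fe, sio2]),          # -Fe.x <= -61  and  SiO2.x <= 5.5
    b_ub=np.array([-61.0, 5.5]),
    A_eq=np.array([[1.0, 1.0, 1.0]]),    # proportions sum to one
    b_eq=np.array([1.0]),
    bounds=[(0, 1)] * 3,
    method="highs",
)
blend = res.x
```

In the paper's "two-step" scheme, a program like this would optimize the iron-ore proportions first, with the remaining sintering materials balanced in a separate calculation.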
NASA Astrophysics Data System (ADS)
Quental, Paulo; Almeida, José António; Simões, Manuela
2012-04-01
Despite inequalities in spatial resolution between stochastic geological models and flow simulator models, geostatistical algorithms are used for the characterisation of groundwater systems. Workflows typically proceed from the available data to grid-block hydraulic parameters through the development of a detailed geostatistical model (morphology and properties) followed by upscaling. This work aims to design and test a two-step methodology encompassing the generation of a high-resolution 3D stochastic geological model and its simplification into a low-resolution groundwater layer-type model. First, a high-resolution 3D stochastic model of rock types or hydrofacies (sets of rock types with similar hydraulic characteristics) is generated using an enhanced version of sequential indicator simulation (SIS) with corrections for local probabilities and for two- and three-point template statistics. In a second step, the high-resolution geological model provided by SIS is optimally simplified into a small set of layers according to a supervised simulated annealing (SA) optimisation procedure, and finally equivalent hydraulic properties are upscaled. Two outcomes are provided by this methodology: (1) a regular 2D mesh of the top and bottom limits of each hydrogeological unit or layer from a conceptual model and (2), for each layer, a 2D grid-block of equivalent hydraulic parameters ready to be input into an aquifer simulator. This methodology was tested for the upper aquifer area of SPEL (Sociedade Portuguesa de Explosivos), an explosives deactivation plant in Seixal municipality, Portugal.
González-Sáiz, José M; Pizarro, Consuelo; Garrido-Vidal, Diego
2003-01-01
The most important kinetic models developed for acetic fermentation were evaluated to study their ability to explain the behavior of the industrial acetification process. Each model was introduced into a simulation environment capable of replicating the conditions of the industrial plant. In this paper, it is shown, by comparing the simulation results with an average sequence calculated from the industrial data, that these models are not suitable for predicting the evolution of the industrial fermentation. Therefore, a new kinetic model for industrial acetic fermentation was developed. The kinetic parameters of the model were optimized by a specifically designed genetic algorithm; only the representative sequence of industrial acetic acid concentrations was required. The main novelty of the algorithm is the four-composed desirability function that works properly as the response to maximize. The new model is capable of explaining the behavior of the industrial process, and its predictive ability has been compared with that of the other models studied. PMID:12675605
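A sketch of how a composite desirability function can serve as the single response a genetic algorithm maximizes. The paper's actual four components are not specified here, so the criteria, limits, and values below are illustrative assumptions in a Derringer-style formulation.

```python
import numpy as np

def desirability(value, lo, hi, target=None):
    """Map one criterion onto [0, 1]. Without a target: larger-is-better
    between lo and hi. With a target: fall off linearly on both sides."""
    if target is None:
        return float(np.clip((value - lo) / (hi - lo), 0.0, 1.0))
    if value <= target:
        return float(np.clip((value - lo) / (target - lo), 0.0, 1.0))
    return float(np.clip((hi - value) / (hi - target), 0.0, 1.0))

def composite_desirability(ds):
    """Geometric mean: one unacceptable criterion (d = 0) zeroes the fitness."""
    ds = np.asarray(ds, dtype=float)
    return float(ds.prod() ** (1.0 / len(ds)))

# Hypothetical criteria for a fermentation fit (names and numbers invented):
# model-fit quality, final acidity near a target, and two yield-type measures
d = [desirability(0.95, 0.0, 1.0),                  # fit quality, higher better
     desirability(12.0, 8.0, 16.0, target=12.0),    # acidity near target value
     desirability(0.8, 0.0, 1.0),
     desirability(0.7, 0.0, 1.0)]
score = composite_desirability(d)
```

The geometric mean is the usual composition choice because an arithmetic mean would let one failed criterion be averaged away.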
Computational wing optimization and comparisons with experiment for a semi-span wing model
NASA Technical Reports Server (NTRS)
Waggoner, E. G.; Haney, H. P.; Ballhaus, W. F.
1978-01-01
A computational wing optimization procedure was developed and verified by an experimental investigation of a semi-span variable camber wing model in the NASA Ames Research Center 14 foot transonic wind tunnel. The Bailey-Ballhaus transonic potential flow analysis and Woodward-Carmichael linear theory codes were linked to Vanderplaats constrained minimization routine to optimize model configurations at several subsonic and transonic design points. The 35 deg swept wing is characterized by multi-segmented leading and trailing edge flaps whose hinge lines are swept relative to the leading and trailing edges of the wing. By varying deflection angles of the flap segments, camber and twist distribution can be optimized for different design conditions. Results indicate that numerical optimization can be both an effective and efficient design tool. The optimized configurations had as good or better lift to drag ratios at the design points as the best designs previously tested during an extensive parametric study.
Lee, Soo Min; Lee, Jae-Won
2014-11-01
In this study, the optimal conditions for biomass torrefaction were determined by comparing the gain in energy content of the final products to the weight loss of the biomass. Torrefaction experiments were performed at temperatures ranging from 220 to 280 °C using 20-80 min reaction times. Polynomial regression models from the 1st to the 3rd order were used to relate the severity factor (SF) to calorific value and to weight loss. The intersection of the two regression models for calorific value and weight loss was determined and taken as the optimized SF. The optimized SFs for the biomasses studied ranged from 6.056 to 6.372. Optimized torrefaction conditions were determined at reaction times of 15, 30, and 60 min. The average optimized temperature across the studied biomasses was 248.55 °C when torrefaction was performed for 60 min. PMID:25266685
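The intersection step can be sketched as follows: fit one polynomial to energy gain versus SF and another to weight loss versus SF, then take the in-range root of their difference as the optimized SF. The data points below are invented for illustration, not the paper's measurements.

```python
import numpy as np

# Hypothetical torrefaction data: severity factor vs. energy-content gain (%)
# and weight loss (%) -- illustrative numbers only
sf          = np.array([5.8, 6.0, 6.2, 6.4, 6.6])
energy_gain = np.array([5.0, 10.0, 17.0, 26.0, 37.0])
weight_loss = np.array([20.0, 21.0, 23.0, 26.0, 30.0])

# Fit 2nd-order regression models to each response
p_gain = np.polyfit(sf, energy_gain, 2)
p_loss = np.polyfit(sf, weight_loss, 2)

# The curves intersect where their difference polynomial has a root;
# keep only the real root inside the experimental SF window
diff = np.polysub(p_gain, p_loss)
roots = np.roots(diff)
opt_sf = [r.real for r in roots
          if abs(r.imag) < 1e-9 and sf.min() <= r.real <= sf.max()][0]
```

Restricting to the experimental SF window matters: a polynomial difference generally has extra roots that are extrapolation artifacts.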
NASA Astrophysics Data System (ADS)
Zhou, Yanlai; Guo, Shenglian; Xu, Chong-Yu; Liu, Dedi; Chen, Lu; Ye, Yushi
2015-12-01
Due to the adaptive, dynamic and multi-objective characteristics of complex water resources systems, it is a considerable challenge to manage water resources in an efficient, equitable and sustainable way. An integrated optimal allocation model is proposed for a complex adaptive system of water resources management. The model consists of three modules: (1) an agent-based module for revealing the evolution mechanism of the complex adaptive system using agent-based, system dynamics and non-dominated sorting genetic algorithm II methods, (2) an optimization module for deriving the decision set of water resources allocation using a multi-objective genetic algorithm, and (3) a multi-objective evaluation module for evaluating the efficiency of the optimization module and selecting the optimal water resources allocation scheme using the projection pursuit method. This study provides a theoretical framework for adaptive allocation, dynamic allocation and multi-objective optimization for a complex adaptive system of water resources management.
Modelling of Microalgae Culture Systems with Applications to Control and Optimization.
Bernard, Olivier; Mairet, Francis; Chachuat, Benoît
2016-01-01
Mathematical modeling is becoming ever more important to assess the potential, guide the design, and enable the efficient operation and control of industrial-scale microalgae culture systems (MCS). The development of overall, inherently multiphysics, models involves coupling separate submodels of (i) the intrinsic biological properties, including growth, decay, and biosynthesis as well as the effect of light and temperature on these processes, and (ii) the physical properties, such as the hydrodynamics, light attenuation, and temperature in the culture medium. When considering high-density microalgae culture, in particular, the coupling between biology and physics becomes critical. This chapter reviews existing models, with a particular focus on the Droop model, which is a precursor model, and it highlights the structure common to many microalgae growth models. It summarizes the main developments and difficulties towards multiphysics models of MCS as well as applications of these models for monitoring, control, and optimization purposes. PMID:25604163
Capacity Fade Analysis and Model Based Optimization of Lithium-ion Batteries
NASA Astrophysics Data System (ADS)
Ramadesigan, Venkatasailanathan
Electrochemical power sources have seen significant improvements in design, economy, and operating range and are expected to play a vital role in a wide range of future applications. The lithium-ion battery is an ideal candidate for a wide variety of applications due to its high energy/power density and operating voltage. Limitations of existing lithium-ion battery technology include underutilization, stress-induced material damage, capacity fade, and the potential for thermal runaway. This dissertation contributes to efforts in the modeling, simulation and optimization of lithium-ion batteries and their use in the design of better batteries for the future. While physics-based models have been widely developed and studied for these systems, the rigorous models have not been employed for parameter estimation or dynamic optimization of operating conditions. The first chapter discusses a systems engineering based approach to illustrate different critical issues and possible ways to overcome them using modeling, simulation and optimization of lithium-ion batteries. Chapters 2-5 explain some of these ways to facilitate (i) capacity fade analysis of Li-ion batteries using different approaches for modeling capacity fade, (ii) model based optimal design of Li-ion batteries and (iii) optimum operating conditions (current profile) for lithium-ion batteries based on dynamic optimization techniques. The major outcomes of this thesis are (i) a comparison of different types of modeling efforts that will help predict and understand capacity fade in lithium-ion batteries and thereby help design better batteries for the future, (ii) a methodology for the optimal design of next-generation porous electrodes for lithium-ion batteries, with spatially graded porosity distributions offering improved energy efficiency and battery lifetime, and (iii) optimized operating conditions of batteries for high energy and utilization efficiency, safer operation
NASA Astrophysics Data System (ADS)
Chen, Duan; Leon, Arturo S.; Gibson, Nathan L.; Hosseini, Parnian
2016-01-01
Optimizing the operation of a multireservoir system is challenging due to the high dimension of the decision variables, which leads to a large and complex search space. A spectral optimization model (SOM), which transforms the decision variables from the time domain to the frequency domain, is proposed to reduce the dimensionality. The SOM couples a spectral dimensionality-reduction method called the Karhunen-Loeve (KL) expansion within the routine of the Nondominated Sorting Genetic Algorithm II (NSGA-II). The KL expansion is used to represent the decision variables as a series of deterministic orthogonal functions with undetermined coefficients. The KL expansion can be truncated to a predetermined number of significant terms, and consequently fewer coefficients. During optimization, operators of the NSGA-II (e.g., crossover) are conducted only on the coefficients of the KL expansion rather than on the large number of decision variables, significantly reducing the search space. The SOM is applied to the short-term operation of a 10-reservoir system on the Columbia River in the United States. Two scenarios are considered herein, the first with 140 decision variables and the second with 3360 decision variables. The hypervolume index is used to evaluate the optimization performance in terms of convergence and diversity. The evaluation of optimization performance is conducted for both the conventional optimization model (i.e., NSGA-II without KL) and the SOM with different numbers of KL terms. The results show that the number of decision variables can be greatly reduced in the SOM to achieve a similar or better performance compared to the conventional optimization model. For the scenario with 140 decision variables, the optimal performance of the SOM is found with six KL terms. For the scenario with 3360 decision variables, the optimal performance of the SOM is obtained with 11 KL terms.
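The KL truncation idea can be sketched in a few lines: build an orthogonal basis from an ensemble of plausible decision trajectories, keep the leading modes, and let the genetic operators act on the mode coefficients instead of the full decision vector. Everything below (the synthetic ensemble, the 99% variance cutoff) is an illustrative assumption, not the SOM's actual construction.

```python
import numpy as np

rng = np.random.default_rng(1)
n_vars, n_samples = 140, 500

# Ensemble of plausible release schedules (hypothetical): smooth random
# curves built from a handful of low-frequency modes
t = np.linspace(0, 1, n_vars)
samples = np.array([sum(rng.normal() * np.sin((m + 1) * np.pi * t) / (m + 1)
                        for m in range(8)) for _ in range(n_samples)])

# Discrete KL expansion = eigen-decomposition of the sample covariance
cov = np.cov(samples, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Truncate: keep the fewest modes explaining 99% of the ensemble variance
k = int(np.searchsorted(np.cumsum(eigvals) / eigvals.sum(), 0.99) + 1)
basis = eigvecs[:, :k]                 # n_vars x k orthonormal modes

# A GA individual is now just k coefficients; decode to 140 decision variables
mean = samples.mean(axis=0)
coeffs = rng.normal(size=k)
schedule = mean + basis @ coeffs
```

Crossover and mutation on `coeffs` then search a k-dimensional space rather than the original 140-dimensional one, which is the dimensionality reduction the abstract describes.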
Reduced-Order Model for Dynamic Optimization of Pressure Swing Adsorption
Agarwal, Anshul; Biegler, L.T.; Zitney, S.E.
2007-11-01
The last few decades have seen a considerable increase in the applications of adsorptive gas separation technologies, such as pressure swing adsorption (PSA). From an economic and environmental point of view, hydrogen separation and carbon dioxide capture from flue gas streams are the most promising applications of PSA. With extensive industrial applications, there is significant interest in efficient modeling, simulation, and optimization strategies. However, the design and optimization of PSA processes have largely remained an experimental effort because of the complex nature of the mathematical models describing practical PSA processes. The separation processes are based on solid-gas equilibrium and operate under periodic transient conditions. Models for PSA processes are therefore multiple instances of partial differential equations (PDEs) in time and space, with periodic boundary conditions that link the processing steps together and high nonlinearities arising from non-isothermal effects. The computational effort required to solve such systems is usually quite expensive and prohibitively time consuming. In addition, stringent product specifications, required by many industrial processes, often lead to convergence failures in the optimizers. The solution of this coupled stiff PDE system is governed by steep concentration and temperature fronts moving with time. As a result, the optimization of such systems for either design or operation represents a significant computational challenge to current differential algebraic equation (DAE) optimization techniques and nonlinear programming algorithms. Sophisticated optimization strategies have been developed and applied to PSA systems with significant improvement in the performance of the process. However, most of these approaches have been quite time consuming. This gives a strong motivation to develop cost-efficient and robust optimization strategies for PSA processes. Moreover, in case of flowsheet
Optimized Finite-Difference Coefficients for Hydroacoustic Modeling
NASA Astrophysics Data System (ADS)
Preston, L. A.
2014-12-01
Responsible utilization of marine renewable energy sources through the use of current energy converter (CEC) and wave energy converter (WEC) devices requires an understanding of the noise generation and propagation from these systems in the marine environment. Acoustic noise produced by rotating turbines, for example, could adversely affect marine animals and human-related marine activities if not properly understood and mitigated. We are utilizing a 3-D finite-difference acoustic simulation code developed at Sandia that can accurately propagate noise in the complex bathymetry in the near-shore to open ocean environment. As part of our efforts to improve computation efficiency in the large, high-resolution domains required in this project, we investigate the effects of using optimized finite-difference coefficients on the accuracy of the simulations. We compare accuracy and runtime of various finite-difference coefficients optimized via criteria such as maximum numerical phase speed error, maximum numerical group speed error, and L-1 and L-2 norms of weighted numerical group and phase speed errors over a given spectral bandwidth. We find that those coefficients optimized for L-1 and L-2 norms are superior in accuracy to those based on maximal error and can produce runtimes of 10% of the baseline case, which uses Taylor Series finite-difference coefficients at the Courant time step limit. We will present comparisons of the results for the various cases evaluated as well as recommendations for utilization of the cases studied. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
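The coefficient-optimization idea can be sketched for a 1-D first-derivative stencil: instead of matching Taylor-series orders, choose the coefficients that minimize the L2 norm of the numerical-wavenumber (phase) error over a target bandwidth. The half-width, bandwidth fraction, and comparison below are illustrative assumptions, not the coefficients used in the paper's 3-D code.

```python
import numpy as np

h, M = 1.0, 3                                   # grid spacing, stencil half-width
k = np.linspace(1e-6, 0.85 * np.pi / h, 400)    # target band: 85% of Nyquist

# Central antisymmetric stencil: numerical wavenumber
#   k_num(k) = (2/h) * sum_m a_m * sin(m k h)
S = 2.0 * np.sin(np.outer(k, np.arange(1, M + 1)) * h) / h   # 400 x M matrix

# L2-optimal coefficients: least-squares fit of k_num to the exact wavenumber k
a_opt, *_ = np.linalg.lstsq(S, k, rcond=None)

# Classical 6th-order Taylor-series coefficients for the same stencil width
a_taylor = np.array([3/4, -3/20, 1/60])

err = lambda a: np.sqrt(np.mean((S @ a - k) ** 2))   # L2 wavenumber error on band
```

Taylor coefficients are extremely accurate at low wavenumbers but degrade near Nyquist; spreading the error across the whole band is what allows coarser grids or larger time steps for a given accuracy target.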
Lauzeral, Christine; Grenouillet, Gaël; Brosse, Sébastien
2012-01-01
Species distribution models (SDMs) are widespread in ecology and conservation biology, but their accuracy can be lowered by non-environmental (noisy) absences that are common in species occurrence data. Here we propose an iterative ensemble modelling (IEM) method to deal with noisy absences and hence improve the predictive reliability of ensemble modelling of species distributions. In the IEM approach, outputs of a classical ensemble model (EM) were used to update the raw occurrence data. The revised data was then used as input for a new EM run. This process was iterated until the predictions stabilized. The outputs of the iterative method were compared to those of the classical EM using virtual species. The IEM process tended to converge rapidly. It increased the consensus between predictions provided by the different methods as well as between those provided by different learning data sets. Comparing IEM and EM showed that for high levels of non-environmental absences, iterations significantly increased prediction reliability measured by the Kappa and TSS indices, as well as the percentage of well-predicted sites. Compared to EM, IEM also reduced biases in estimates of species prevalence. Compared to the classical EM method, IEM improves the reliability of species predictions. It particularly deals with noisy absences that are replaced in the data matrices by simulated presences during the iterative modelling process. IEM thus constitutes a promising way to increase the accuracy of EM predictions of difficult-to-detect species, as well as of species that are not in equilibrium with their environment. PMID:23166691
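A toy numpy-only sketch of the IEM loop on a virtual species: fit an ensemble to the noisy occurrences, flip absences that the ensemble confidently predicts as presences, and refit until the labels stabilize. The threshold-classifier ensemble, the 0.9 consensus cutoff, and the 30% noise rate are illustrative assumptions, not the paper's EM setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Virtual species: present wherever a single environmental gradient exceeds 0.5
env = rng.random(400)
truth = (env > 0.5).astype(int)

# Observations: presences kept, but 30% of true presences recorded as
# (noisy) absences, mimicking imperfect detection
y = truth.copy()
y[(truth == 1) & (rng.random(400) < 0.3)] = 0

def ensemble_predict(env, y, n_members=25):
    """Toy EM: bag of single-threshold classifiers fit on bootstrap resamples."""
    probs = np.zeros_like(env)
    grid = np.linspace(0, 1, 51)
    for _ in range(n_members):
        idx = rng.integers(0, len(y), len(y))            # bootstrap sample
        acc = [np.mean((env[idx] > g).astype(int) == y[idx]) for g in grid]
        thr = grid[int(np.argmax(acc))]                  # best single threshold
        probs += (env > thr)
    return probs / n_members

# IEM: replace absences the ensemble confidently contradicts, then refit
y_work = y.copy()
for _ in range(5):
    p = ensemble_predict(env, y_work)
    updated = y_work.copy()
    updated[(y_work == 0) & (p > 0.9)] = 1   # consensus overrides a noisy absence
    if np.array_equal(updated, y_work):      # labels stabilized
        break
    y_work = updated

recovered = np.mean(y_work == truth)
baseline = np.mean(y == truth)
```

The update only ever promotes absences to presences, which matches the paper's framing of noisy absences as the dominant error source; a symmetric rule would need a second confidence cutoff.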
Ghiasi, Mohammad Sadegh; Arjmand, Navid; Boroushaki, Mehrdad; Farahmand, Farzam
2016-03-01
A six-degree-of-freedom musculoskeletal model of the lumbar spine was developed to predict the activity of trunk muscles during light, moderate and heavy lifting tasks in a standing posture. The model was formulated as a multi-objective optimization problem, minimizing the sum of the cubed muscle stresses and maximizing the spinal stability index. Two intelligent optimization algorithms, i.e., vector evaluated particle swarm optimization (VEPSO) and the nondominated sorting genetic algorithm (NSGA), were employed to solve the optimization problem. The optimal solution for each task was then selected such that the corresponding in vivo intradiscal pressure could be reproduced. Results indicated that both algorithms predicted co-activity in the antagonistic abdominal muscles, as well as an increase in the stability index when going from the light to the heavy task. For all of the light, moderate and heavy tasks, the muscle activity predictions of the VEPSO and the NSGA were generally consistent and of the same order as the in vivo electromyography data. The proposed methodology is thought to provide improved estimations of muscle activities by considering spinal stability and incorporating the in vivo intradiscal pressure data. PMID:26088358
NASA Astrophysics Data System (ADS)
Shang, Linyuan; Zhao, Guozhong
2016-06-01
This article investigates topology optimization of a bi-material model for acoustic-structural coupled systems. The design variables are volume fractions of inclusion material in a bi-material model constructed by the microstructure-based design domain method (MDDM). The design objective is the minimization of sound pressure level (SPL) in an interior acoustic medium. Sensitivities of SPL with respect to topological design variables are derived concretely by the adjoint method. A relaxed form of optimality criteria (OC) is developed for solving the acoustic-structural coupled optimization problem to find the optimum bi-material distribution. Based on OC and the adjoint method, a topology optimization method to deal with large calculations in acoustic-structural coupled problems is proposed. Numerical examples are given to illustrate the applications of topology optimization for a bi-material plate under a low single-frequency excitation and an aerospace structure under a low frequency-band excitation, and to prove the efficiency of the adjoint method and the relaxed form of OC.
Optimization of a Two-Fluid Hydrodynamic Model of Churn-Turbulent Flow
Donna Post Guillen
2009-07-01
A hydrodynamic model of two-phase, churn-turbulent flows is being developed using the computational multiphase fluid dynamics (CMFD) code, NPHASE-CMFD. The numerical solutions obtained by this model are compared with experimental data obtained at the TOPFLOW facility of the Institute of Safety Research at the Forschungszentrum Dresden-Rossendorf. The TOPFLOW data is a high quality experimental database of upward, co-current air-water flows in a vertical pipe suitable for validation of computational fluid dynamics (CFD) codes. A five-field CMFD model was developed for the continuous liquid phase and four bubble size groups using mechanistic closure models for the ensemble-averaged Navier-Stokes equations. Mechanistic models for the drag and non-drag interfacial forces are implemented to include the governing physics to describe the hydrodynamic forces controlling the gas distribution. The closure models provide the functional form of the interfacial forces, with user defined coefficients to adjust the force magnitude. An optimization strategy was devised for these coefficients using commercial design optimization software. This paper demonstrates an approach to optimizing CMFD model parameters using a design optimization approach. Computed radial void fraction profiles predicted by the NPHASE-CMFD code are compared to experimental data for four bubble size groups.
Geometry optimization method versus predictive ability in QSPR modeling for ionic liquids.
Rybinska, Anna; Sosnowska, Anita; Barycki, Maciej; Puzyn, Tomasz
2016-02-01
Computational techniques, such as Quantitative Structure-Property Relationship (QSPR) modeling, are very useful in predicting the physicochemical properties of various chemicals. Building QSPR models requires calculating molecular descriptors and properly choosing a geometry optimization method suited to the specific structure of the tested compounds. Herein, we examine the influence of the ionic liquids' (ILs) geometry optimization method on the predictive ability of QSPR models by comparing three models. The models were developed from the same experimental density data collected for 66 ionic liquids, but employed molecular descriptors calculated from molecular geometries optimized at three different levels of theory, namely: (1) semi-empirical (PM7), (2) ab initio (HF/6-311+G*) and (3) density functional theory (B3LYP/6-311+G*). The model whose descriptors were calculated using the ab initio HF/6-311+G* method showed the best predictive ability ([Formula: see text] = 0.87). However, the PM7-based model had comparable quality parameters ([Formula: see text] = 0.84). The results indicate that semi-empirical methods (faster and less expensive in CPU time) can be successfully employed for geometry optimization in QSPR studies of ionic liquids. PMID:26830600
NASA Astrophysics Data System (ADS)
Kourakos, George; Mantoglou, Aristotelis
2013-02-01
The demand for fresh water in coastal areas and islands can be very high due to increased local needs and tourism. A multi-objective optimization methodology is developed, involving minimization of economic and environmental costs while satisfying water demand. The methodology considers desalinization of pumped water and injection of treated water into the aquifer. Variable density aquifer models are computationally intractable when integrated in optimization algorithms. In order to alleviate this problem, a multi-objective optimization algorithm is developed combining surrogate models based on Modular Neural Networks [MOSA(MNNs)]. The surrogate models are trained adaptively during optimization based on a genetic algorithm. In the crossover step, each pair of parents generates a pool of offspring which are evaluated using the fast surrogate model. Then, the most promising offspring are evaluated using the exact numerical model. This procedure eliminates errors in Pareto solution due to imprecise predictions of the surrogate model. The method has important advancements compared to previous methods such as precise evaluation of the Pareto set and alleviation of propagation of errors due to surrogate model approximations. The method is applied to an aquifer in the Greek island of Santorini. The results show that the new MOSA(MNN) algorithm offers significant reduction in computational time compared to previous methods (in the case study it requires only 5% of the time required by other methods). Further, the Pareto solution is better than the solution obtained by alternative algorithms.
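The crossover pre-screening step can be sketched as follows; the quadratic `exact_model`, its perturbed `surrogate`, and the pool sizes are invented stand-ins for the variable-density aquifer simulator and the trained modular neural networks.

```python
import numpy as np

rng = np.random.default_rng(0)
calls = {"exact": 0}                       # count expensive evaluations

def exact_model(x):
    """Stand-in for one expensive variable-density aquifer simulation."""
    calls["exact"] += 1
    return float(np.sum((x - 0.3) ** 2))

def surrogate(x):
    """Cheap trained approximation: here the exact response plus a small
    systematic error, mimicking an imperfect neural-network surrogate."""
    return float(np.sum((x - 0.3) ** 2) + 0.05 * np.sin(10 * x).sum())

def prescreened_offspring(p1, p2, pool_size=10, keep=2):
    """Each parent pair breeds a pool of children; the surrogate ranks the
    pool, and only the most promising children get an exact evaluation."""
    alpha = rng.random((pool_size, len(p1)))
    pool = alpha * p1 + (1 - alpha) * p2                 # blend crossover pool
    order = np.argsort([surrogate(c) for c in pool])
    best = pool[order[:keep]]
    scores = np.array([exact_model(c) for c in best])    # exact check removes
    return best, scores                                  # surrogate bias

p1, p2 = rng.random(4), rng.random(4)
children, scores = prescreened_offspring(p1, p2)
```

The exact re-evaluation of the surviving children is what prevents surrogate errors from propagating into the Pareto set, as the abstract emphasizes; the saving is that only `keep` of the `pool_size` children ever touch the expensive model.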
Modeling Network Intrusion Detection System Using Feature Selection and Parameters Optimization
NASA Astrophysics Data System (ADS)
Kim, Dong Seong; Park, Jong Sou
Previous approaches to modeling Intrusion Detection Systems (IDS) have been twofold: improving detection models in terms of (i) feature selection of audit data through wrapper and filter methods and (ii) parameter optimization of the detection model design, based on classification, clustering algorithms, etc. In this paper, we present three approaches to modeling IDS in the context of feature selection and parameter optimization: First, we present Fusion of Genetic Algorithm (GA) and Support Vector Machines (SVM) (FuGAS), which employs combinations of GA and SVM through genetic operations and is capable of building an optimal detection model with only the selected important features and optimal parameter values. Second, we present Correlation-based Hybrid Feature Selection (CoHyFS), which utilizes a filter method in conjunction with GA for feature selection in order to reduce the long training time. Third, we present Simultaneous Intrinsic Model Identification (SIMI), which adopts Random Forest (RF) and shows better intrusion detection rates and feature selection results, with no additional computational overhead. We show the experimental results and analysis of the three approaches on the KDD 1999 intrusion detection datasets.
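A compact sketch of wrapper-style feature selection with a genetic algorithm, in the spirit of the GA-based approaches above. A nearest-centroid classifier stands in for the SVM, and the synthetic "audit data" is invented; a real wrapper would evaluate each candidate feature mask by training the actual detector.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "audit data": 200 samples, 10 features, only features 0-2 informative
n, d = 200, 10
X = rng.normal(size=(n, d))
y = (X[:, 0] + X[:, 1] - X[:, 2] > 0).astype(int)

def fitness(mask):
    """Wrapper fitness: hold-out accuracy of a nearest-centroid classifier
    restricted to the selected features, minus a small size penalty."""
    if not mask.any():
        return 0.0
    Xs = X[:, mask]
    tr, te = slice(0, 150), slice(150, None)
    c0 = Xs[tr][y[tr] == 0].mean(axis=0)
    c1 = Xs[tr][y[tr] == 1].mean(axis=0)
    pred = (np.linalg.norm(Xs[te] - c1, axis=1)
            < np.linalg.norm(Xs[te] - c0, axis=1)).astype(int)
    return np.mean(pred == y[te]) - 0.01 * mask.sum()

# Minimal GA over feature bitmasks: tournament selection, uniform crossover,
# bit-flip mutation
pop = rng.random((30, d)) < 0.5
for _ in range(40):
    fit = np.array([fitness(m) for m in pop])
    idx = [max(rng.integers(0, 30, 3), key=lambda i: fit[i]) for _ in range(30)]
    parents = pop[idx]
    cross = rng.random((30, d)) < 0.5
    children = np.where(cross, parents, parents[::-1])   # uniform crossover
    children ^= rng.random((30, d)) < 0.02               # bit-flip mutation
    pop = children
fit = np.array([fitness(m) for m in pop])
best_mask = pop[fit.argmax()]
```

The size penalty is what pushes the GA toward the small "important feature" subsets that FuGAS and CoHyFS aim for; without it, superfluous features survive because they rarely hurt hold-out accuracy enough to be selected against.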
An Optimization Model for Plug-In Hybrid Electric Vehicles
Malikopoulos, Andreas; Smith, David E
2011-01-01
The necessity for environmentally conscious vehicle designs, in conjunction with increasing concerns regarding U.S. dependency on foreign oil and climate change, has induced significant investment towards enhancing the propulsion portfolio with new technologies. More recently, plug-in hybrid electric vehicles (PHEVs) have held great intuitive appeal and have attracted considerable attention. PHEVs have the potential to reduce petroleum consumption and greenhouse gas (GHG) emissions in the commercial transportation sector. They are especially appealing in situations where daily commuting covers a small number of miles with excessive stop-and-go driving. The research effort outlined in this paper aims to investigate the implications of motor/generator and battery size on fuel economy and GHG emissions in a medium-duty PHEV. An optimization framework is developed and applied to two different parallel powertrain configurations, i.e., pre-transmission and post-transmission, to derive the optimal design with respect to motor/generator and battery size. A comparison between conventional and PHEV configurations of equivalent size and performance under the same driving conditions is conducted, allowing an assessment of the potential improvement in fuel economy and GHG emissions. The post-transmission parallel configuration yields higher fuel economy and lower GHG emissions than the pre-transmission configuration, partly attributable to enhanced regenerative braking efficiency.
An improved swarm optimization for parameter estimation and biological model selection.
Abdullah, Afnizanfaizal; Deris, Safaai; Mohamad, Mohd Saberi; Anwar, Sohail
2013-01-01
One of the key aspects of computational systems biology is the investigation of dynamic biological processes within cells. Computational models are often required to elucidate the mechanisms and principles driving these processes because of their nonlinearity and complexity. The models usually incorporate a set of parameters that signify the physical properties of the actual biological systems. In most cases, these parameters are estimated by fitting the model outputs to the corresponding experimental data. However, this is a challenging task because the available experimental data are frequently noisy and incomplete. In this paper, a new hybrid optimization method is proposed to estimate these parameters from noisy and incomplete experimental data. The proposed method, called Swarm-based Chemical Reaction Optimization, integrates the evolutionary searching strategy employed by Chemical Reaction Optimization into the neighbouring searching strategy of the Firefly Algorithm. The effectiveness of the method was evaluated using a simulated nonlinear model and two biological models: synthetic transcriptional oscillators and extracellular protease production models. The results showed that the accuracy and computational speed of the proposed method were better than those of the existing Differential Evolution, Firefly Algorithm and Chemical Reaction Optimization methods. The reliability of the estimated parameters was statistically validated, which suggests that the model outputs produced by these parameters were valid even when noisy and incomplete experimental data were used. Additionally, the Akaike Information Criterion was employed to evaluate model selection, which highlighted the capability of the proposed method in choosing a plausible model based on the experimental data. In conclusion, this paper demonstrates the effectiveness of the proposed method for parameter estimation and model selection problems using noisy and incomplete experimental data.
2010-01-01
Background The success of molecular systems biology hinges on the ability to use computational models to design predictive experiments, and ultimately unravel underlying biological mechanisms. A problem commonly encountered in the computational modelling of biological networks is that alternative, structurally different models of similar complexity fit a set of experimental data equally well. In this case, more than one molecular mechanism can explain available data. In order to rule out the incorrect mechanisms, one needs to invalidate incorrect models. At this point, new experiments maximizing the difference between the measured values of alternative models should be proposed and conducted. Such experiments should be optimally designed to produce data that are most likely to invalidate incorrect model structures. Results In this paper we develop methodologies for the optimal design of experiments with the aim of discriminating between different mathematical models of the same biological system. The first approach determines the 'best' initial condition that maximizes the L2 (energy) distance between the outputs of the rival models. In the second approach, we maximize the L2-distance of the outputs by designing the optimal external stimulus (input) profile of unit L2-norm. Our third method uses optimized structural changes (corresponding, for example, to parameter value changes reflecting gene knock-outs) to achieve the same goal. The numerical implementation of each method is considered in an example, signal processing in starving Dictyostelium amœbæ. Conclusions Model-based design of experiments improves both the reliability and the efficiency of biochemical network model discrimination. This opens the way to model invalidation, which can be used to perfect our understanding of biochemical networks. Our general problem formulation together with the three proposed experiment design methods give the practitioner new tools for a systems biology approach to
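The first design approach above, picking the initial condition that maximizes the L2 (energy) distance between rival model outputs, can be sketched with two toy models. The two decay laws, the admissible range of initial conditions, and the grid search are illustrative assumptions standing in for the paper's biochemical network models and optimization method.

```python
import math

def simulate(f, x0, dt=0.01, steps=500):
    # Forward-Euler integration of dx/dt = f(x) from x0.
    xs, x = [], x0
    for _ in range(steps):
        xs.append(x)
        x += dt * f(x)
    return xs

# Two rival model structures for the same observed decay process
# (hypothetical kinetics; stand-ins for competing network models).
model_a = lambda x: -1.0 * x               # first-order degradation
model_b = lambda x: -1.5 * x / (0.5 + x)   # saturable (Michaelis-Menten-like)

def l2_distance(x0):
    # Discrete approximation of the L2 distance between the two outputs.
    ya, yb = simulate(model_a, x0), simulate(model_b, x0)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(ya, yb)) * 0.01)

# Search the admissible initial conditions for the most discriminating one.
candidates = [0.1 * k for k in range(1, 21)]   # x0 in (0, 2]
best_x0 = max(candidates, key=l2_distance)
print(best_x0, round(l2_distance(best_x0), 3))
```

An experiment started at `best_x0` produces trajectories where the two mechanisms disagree most, so the measured data are most likely to invalidate the wrong one.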
Selection of optimal complexity for ENSO-EMR model by minimum description length principle
NASA Astrophysics Data System (ADS)
Loskutov, E. M.; Mukhin, D.; Mukhina, A.; Gavrilov, A.; Kondrashov, D. A.; Feigin, A. M.
2012-12-01
One of the main problems arising in modeling data taken from a natural system is finding a phase space suitable for constructing a model of the evolution operator. Since we usually deal with very high-dimensional behavior, we are forced to construct a model working in some projection of the system phase space corresponding to the time scales of interest. Selecting an optimal projection is a non-trivial problem, since there are many ways to reconstruct phase variables from a given time series, especially in the case of a spatio-temporal data field. Actually, finding an optimal projection is a significant part of model selection because, on the one hand, the transformation of data to some phase-variable vector can be considered a required component of the model. On the other hand, such an optimization of the phase space makes sense only in relation to the parametrization of the model we use, i.e. the representation of the evolution operator, so we should find an optimal structure of the model together with the phase-variable vector. In this paper we propose to use the minimum description length principle (Molkov et al., 2009) to select models of optimal complexity. The proposed method is applied to the optimization of the Empirical Model Reduction (EMR) of the ENSO phenomenon (Kravtsov et al., 2005; Kondrashov et al., 2005). This model operates within a subset of leading EOFs constructed from the spatio-temporal field of SST in the Equatorial Pacific, and has the form of multi-level stochastic differential equations (SDE) with polynomial parameterization of the right-hand side. Optimal values for the number of EOFs, the polynomial order and the number of levels are estimated from the Equatorial Pacific SST dataset. References: Ya. Molkov, D. Mukhin, E. Loskutov, G. Fidelin and A. Feigin, Using the minimum description length principle for global reconstruction of dynamic systems from noisy time series, Phys. Rev. E, Vol. 80, p. 046207, 2009; Kravtsov S, Kondrashov D, Ghil M, 2005: Multilevel regression
Optimal observation network design for conceptual model discrimination and uncertainty reduction
NASA Astrophysics Data System (ADS)
Pham, Hai V.; Tsai, Frank T.-C.
2016-02-01
This study expands the Box-Hill discrimination function to design an optimal observation network to discriminate between conceptual models and, in turn, identify the most favored model. The Box-Hill discrimination function measures the expected decrease in Shannon entropy (for model identification) before and after the optimal design for one additional observation. This study modifies the discrimination function to account for multiple future observations that are assumed spatiotemporally independent and Gaussian-distributed. Bayesian model averaging (BMA) is used to incorporate existing observation data and quantify future observation uncertainty arising from conceptual and parametric uncertainties in the discrimination function. In addition, the BMA method is adopted to predict future observation data in a statistical sense. The design goal is to find the optimal locations and the minimum number of observations by maximizing the Box-Hill discrimination function value subject to a posterior model probability threshold. The optimal observation network design is illustrated using a groundwater study in Baton Rouge, Louisiana, to collect additional groundwater heads from USGS wells. The sources of uncertainty creating multiple groundwater models are geological architecture, boundary condition, and fault permeability architecture. Impacts of considering homoscedastic and heteroscedastic future observation data and the sources of uncertainties on potential observation areas are analyzed. Results show that heteroscedasticity should be considered in the design procedure to account for various sources of future observation uncertainty. After the optimal design is obtained and the corresponding data are collected for model updating, total variances of head predictions can be significantly reduced by identifying a model with a superior posterior model probability.
Modeling for deformable mirrors and the adaptive optics optimization program
Henesian, M.A.; Haney, S.W.; Trenholme, J.B.; Thomas, M.
1997-03-18
We discuss aspects of adaptive optics optimization for large fusion laser systems such as the 192-arm National Ignition Facility (NIF) at LLNL. By way of example, we considered the discrete-actuator deformable mirror and Hartmann sensor system used on the Beamlet laser. Beamlet is a single-aperture prototype of the 11-0-5 slab amplifier design for NIF, and so we expect similar optical distortion levels and deformable mirror correction requirements. We are now in the process of developing a numerically efficient object-oriented C++ implementation of our adaptive optics and wavefront sensor code, but this code is not yet operational. Results are based instead on the prototype algorithms, coded up in an interpreted array-processing computer language.
Optimal reconstruction value for DCT dequantization using Laplacian pdf model
NASA Astrophysics Data System (ADS)
Kang, So-Yeon; Lee, Byung-Uk
2004-01-01
Many image compression standards, such as JPEG, MPEG or H.263, are based on the discrete cosine transform (DCT), quantization, and Huffman coding. Quantization error is the major source of image quality degradation. The current dequantization method assumes a uniform distribution of the DCT coefficients, so the reconstruction value is the center of each quantization interval. However, DCT coefficients are better modeled by a Laplacian probability density function (pdf). We derive an optimal reconstruction value in closed form assuming a Laplacian pdf, and show the effect of the correction on image quality. We estimate the Laplacian pdf parameter for each DCT coefficient, and obtain a correction to the reconstruction value from the proposed theoretical predictions. The corrected value depends on the Laplacian pdf parameter and the quantization step size Q. The PSNR improvement due to the change in dequantization value is about 0.2-0.4 dB. We also analyze the reason for the limited improvement.
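The closed-form correction is the conditional mean of the Laplacian pdf over a quantization bin. For a bin [x_low, x_low + Q] on the positive axis with density proportional to exp(-lam*x), the centroid is x_low + 1/lam - Q/(e^(lam*Q) - 1); this is the standard result for the conditional mean, though the paper's notation may differ. The sketch below checks the formula against direct numerical integration, with arbitrary illustrative values of Q, lam, and x_low.

```python
import math

def optimal_reconstruction(x_low, Q, lam):
    """Centroid of a Laplacian pdf over [x_low, x_low + Q], x_low >= 0.

    Closed form of the conditional mean E[X | X in bin] for
    p(x) proportional to exp(-lam * x) on the positive axis.
    """
    return x_low + 1.0 / lam - Q / math.expm1(lam * Q)

def centroid_numeric(x_low, Q, lam, n=100000):
    # Direct midpoint-rule integration of the centroid, as a check.
    dx = Q / n
    num = den = 0.0
    for i in range(n):
        x = x_low + (i + 0.5) * dx
        w = math.exp(-lam * x)
        num += x * w * dx
        den += w * dx
    return num / den

Q, lam, x_low = 10.0, 0.1, 20.0
closed = optimal_reconstruction(x_low, Q, lam)
numeric = centroid_numeric(x_low, Q, lam)
print(round(closed, 4), round(numeric, 4))  # both approx 24.18 (bin midpoint is 25)
```

The optimal value lies below the bin midpoint, which is exactly the shift toward zero that the corrected dequantization exploits.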
Modeling Reservoir-River Networks in Support of Optimizing Seasonal-Scale Reservoir Operations
NASA Astrophysics Data System (ADS)
Villa, D. L.; Lowry, T. S.; Bier, A.; Barco, J.; Sun, A.
2011-12-01
HydroSCOPE (Hydropower Seasonal Concurrent Optimization of Power and the Environment) is a seasonal time-scale tool for scenario analysis and optimization of reservoir-river networks. Developed in MATLAB, HydroSCOPE is an object-oriented model that simulates basin-scale dynamics with an objective of optimizing reservoir operations to maximize revenue from power generation, reliability in the water supply, environmental performance, and flood control. HydroSCOPE is part of a larger toolset that is being developed through a Department of Energy multi-laboratory project. This project's goal is to provide conventional hydropower decision makers with better information to execute their day-ahead and seasonal operations and planning activities by integrating water balance and operational dynamics across a wide range of spatial and temporal scales. This presentation details the modeling approach and functionality of HydroSCOPE. HydroSCOPE consists of a river-reservoir network model and an optimization routine. The river-reservoir network model simulates the heat and water balance of river-reservoir networks for time-scales up to one year. The optimization routine software, DAKOTA (Design Analysis Kit for Optimization and Terascale Applications - dakota.sandia.gov), is seamlessly linked to the network model and is used to optimize daily volumetric releases from the reservoirs to best meet a set of user-defined constraints, such as maximizing revenue while minimizing environmental violations. The network model uses 1-D approximations for both the reservoirs and river reaches and is able to account for surface and sediment heat exchange as well as ice dynamics for both models. The reservoir model also accounts for inflow, density, and withdrawal zone mixing, and diffusive heat exchange. Routing for the river reaches is accomplished using a modified Muskingum-Cunge approach that automatically calculates the internal timestep and sub-reach lengths to match the conditions of
Optimization of ultrasonic array inspections using an efficient hybrid model and real crack shapes
Felice, Maria V.; Velichko, Alexander; Wilcox, Paul D.; Barden, Tim; Dunhill, Tony
2015-03-31
Models which simulate the interaction of ultrasound with cracks can be used to optimize ultrasonic array inspections, but this approach can be time-consuming. To overcome this issue an efficient hybrid model is implemented which includes a finite element method that requires only a single layer of elements around the crack shape. Scattering matrices are used to capture the scattering behavior of the individual cracks, and a discussion on the angular degrees of freedom of elastodynamic scatterers is included. Real crack shapes are obtained from X-ray Computed Tomography images of cracked parts and these shapes are input into the hybrid model. The effect of using real crack shapes instead of straight notch shapes is demonstrated. An array optimization methodology which incorporates the hybrid model, an approximate single-scattering relative noise model and the real crack shapes is then described.
Cancer risk assessment: Optimizing human health through linear dose-response models.
Calabrese, Edward J; Shamoun, Dima Yazji; Hanekamp, Jaap C
2015-07-01
This paper proposes that generic cancer risk assessments be based on the integration of the Linear Non-Threshold (LNT) and hormetic dose-response models, since optimal hormetic beneficial responses are estimated to occur at the dose associated with a 10^-4 risk level based on the use of an LNT model as applied to animal cancer studies. The adoption of the 10^-4 risk estimate provides a theoretical and practical integration of two competing risk assessment models whose predictions cannot be validated in human population studies or with standard chronic animal bioassay data. This model integration reveals substantial protection of the population from cancer effects (i.e. the functional utility of the LNT model) while offering the possibility of significant reductions in cancer incidence should the hormetic dose-response model predictions be correct. The dose yielding the 10^-4 cancer risk therefore yields the optimized, toxicologically based "regulatory sweet spot". PMID:25916915
On meeting capital requirements with a chance-constrained optimization model.
Atta Mills, Ebenezer Fiifi Emire; Yu, Bo; Gu, Lanlan
2016-01-01
This paper deals with a capital-to-risk-asset-ratio chance-constrained optimization model in the presence of loans, treasury bills, fixed assets and non-interest-earning assets. To model the dynamics of loans, we introduce a modified CreditMetrics approach. This leads to the development of a deterministic convex counterpart of the capital-to-risk-asset-ratio chance constraint. We analyze our model under the worst-case scenario, i.e., loan default. The theoretical model is analyzed by applying numerical procedures in order to deliver valuable insights from a financial outlook. Our results suggest that our capital-to-risk-asset-ratio chance-constrained optimization model guarantees that banks meet the capital requirements of Basel III with a likelihood of 95% irrespective of changes in the future market value of assets. PMID:27186464
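The idea of a deterministic counterpart to a chance constraint can be sketched in one line. Suppose, purely for illustration, that future risk-weighted assets (RWA) are Normal(mu, sigma); then P(capital / RWA >= ratio_min) >= 0.95 holds iff capital covers the ratio at the 95% quantile of RWA. The Gaussian assumption and all numbers are hypothetical; the paper derives its convex counterpart from a modified CreditMetrics loan model instead.

```python
from statistics import NormalDist

def required_capital(mu_rwa, sigma_rwa, ratio_min=0.105, confidence=0.95):
    """Deterministic counterpart of
       P( capital / RWA >= ratio_min ) >= confidence
    when future risk-weighted assets are modelled as Normal(mu_rwa, sigma_rwa):
    capital must cover the ratio at the upper confidence quantile of RWA.
    """
    z = NormalDist().inv_cdf(confidence)
    return ratio_min * (mu_rwa + z * sigma_rwa)

# Hypothetical bank: expected RWA of 800 with std 50 (arbitrary units),
# Basel III-style total capital ratio of 10.5%.
cap = required_capital(800.0, 50.0)
print(round(cap, 2))
```

The chance constraint is thereby replaced by a single linear (hence convex) inequality that a standard solver can handle.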
Using ILOG OPL-CPLEX and ILOG Optimization Decision Manager (ODM) to Develop Better Models
NASA Astrophysics Data System (ADS)
2008-10-01
This session will provide an in-depth overview of building state-of-the-art decision support applications and models. You will learn how to harness the full power of the ILOG OPL-CPLEX-ODM Development System (ODMS) to develop optimization models and decision support applications that solve complex problems ranging from near real-time scheduling to long-term strategic planning. We will demonstrate how to use ILOG's Optimization Programming Language (OPL) to quickly model problems solved by ILOG CPLEX, and how to use ILOG ODM to gain further insight into the model. By the end of the session, attendees will understand how to take advantage of the powerful combination of ILOG OPL (to describe an optimization model) and ILOG ODM (to understand the relationships between data, decision variables and constraints).
Surrogate modelling and optimization using shape-preserving response prediction: A review
NASA Astrophysics Data System (ADS)
Leifsson, Leifur; Koziel, Slawomir
2016-03-01
Computer simulation models are ubiquitous in modern engineering design. In many cases, they are the only way to evaluate a given design with sufficient fidelity. Unfortunately, an added computational expense is associated with higher fidelity models. Moreover, the systems being considered are often highly nonlinear and may feature a large number of designable parameters. Therefore, it may be impractical to solve the design problem with conventional optimization algorithms. A promising approach to alleviate these difficulties is surrogate-based optimization (SBO). Among proven SBO techniques, the methods utilizing surrogates constructed from corrected physics-based low-fidelity models are, in many cases, the most efficient. This article reviews a particular technique of this type, namely, shape-preserving response prediction (SPRP), which works on the level of the model responses to correct the underlying low-fidelity models. The formulation and limitations of SPRP are discussed. Applications to several engineering design problems are provided.
On models of design optimization with bounded-but-unknown uncertainty
NASA Astrophysics Data System (ADS)
Yi, Ping
2012-09-01
When the amount of information available on uncertain parameters is not enough to accurately define probability distribution functions and only bounds of the uncertain parameters are available, non-probabilistic reliability methods have recently been used. Interval variables and convex models are usually used to quantify bounded-but-unknown uncertainty, and the corresponding models of non-probabilistic reliability measures and design optimization have been brought forward. Furthermore, probabilistic reliability theory can also be utilized by treating the bounded-but-unknown variables as uniform random variables based on the principle of maximum entropy. In this paper, these three models of design optimization with bounded-but-unknown uncertainty are discussed and compared. It is pointed out that the non-probabilistic interval model is too conservative and that the probabilistic model is a rational alternative.
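The contrast between the interval model and the maximum-entropy (uniform) model can be made concrete on a toy limit state g = R - S with bounded strength R and load S. The bounds, the interval reliability index eta = midpoint(g)/radius(g), and the Monte Carlo estimate are all illustrative assumptions, not the paper's formulation.

```python
import random

random.seed(1)

# Limit state g = R - S with bounded-but-unknown strength R and load S
# (hypothetical bounds; stand-ins for real interval data).
R_LO, R_HI = 90.0, 110.0
S_LO, S_HI = 60.0, 95.0

# Non-probabilistic (interval/convex) reliability index:
# eta = midpoint(g) / radius(g); eta > 1 means g > 0 for every
# realization inside the bounds (worst case included).
g_mid = (R_LO + R_HI) / 2 - (S_LO + S_HI) / 2
g_rad = (R_HI - R_LO) / 2 + (S_HI - S_LO) / 2
eta = g_mid / g_rad

# Probabilistic alternative via maximum entropy: treat R and S as
# independent uniform variables on the same bounds and estimate P(g > 0).
n = 200_000
safe = sum(random.uniform(R_LO, R_HI) > random.uniform(S_LO, S_HI)
           for _ in range(n))
print(round(eta, 3), round(safe / n, 3))
```

Here eta < 1 (the worst case R = 90, S = 95 fails) even though the uniform model reports a survival probability near 0.98, which is exactly the conservatism of the interval model that the paper points out.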
A Model Optimization Approach to the Automatic Segmentation of Medical Images
NASA Astrophysics Data System (ADS)
Afifi, Ahmed; Nakaguchi, Toshiya; Tsumura, Norimichi; Miyake, Yoichi
The aim of this work is to develop an efficient medical image segmentation technique by fitting a nonlinear shape model to pre-segmented images. In this technique, kernel principal component analysis (KPCA) is used to capture the shape variations and to build the nonlinear shape model. The pre-segmentation is carried out by classifying the image pixels according to high-level texture features extracted using an over-complete wavelet packet decomposition. Additionally, the model fitting is completed using particle swarm optimization (PSO) to adapt the model parameters. The proposed technique is fully automated, is able to deal with complex shape variations, can efficiently optimize the model to fit new cases, and is robust to noise and occlusion. In this paper, we demonstrate the proposed technique by applying it to liver segmentation from computed tomography (CT) scans, and the obtained results are very promising.
Zhang, Dezhi; Li, Shuangyan
2014-01-01
This paper proposes a new model for the simultaneous optimization of three-level logistics decisions, for logistics authorities, logistics operators, and logistics users, in a regional logistics network with environmental impact considerations. The proposed model addresses the interaction among the three logistics players in a fully competitive logistics service market with CO2 emission charges. We also explicitly incorporate the impacts of the scale economies of the logistics park and the logistics users' demand elasticity into the model. The logistics authorities aim to maximize the total social welfare of the system, considering the demands of green logistics development through two different methods: optimal location of logistics nodes and charging a CO2 emission tax. Logistics operators are assumed to compete on logistics service fare and frequency, while logistics users minimize their own perceived logistics disutility given the logistics operators' service fares and frequencies. A heuristic algorithm based on the multinomial logit model is presented for the three-level decision model, and a numerical example is given to illustrate the model and its algorithm. The proposed model provides a useful tool for modeling competitive logistics services and evaluating logistics policies at the strategic level. PMID:24977209
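The multinomial logit model at the heart of the heuristic splits user demand across alternatives according to their perceived disutility. A minimal sketch, with hypothetical costs and dispersion parameter (the paper's actual disutility function is richer):

```python
import math

def logit_shares(costs, theta=0.5):
    """Multinomial logit split of logistics demand across alternatives.

    costs: perceived disutility of each route/operator (hypothetical units);
    theta: dispersion parameter (larger = users more cost-sensitive).
    """
    weights = [math.exp(-theta * c) for c in costs]
    total = sum(weights)
    return [w / total for w in weights]

shares = logit_shares([4.0, 5.0, 7.0])
print([round(s, 3) for s in shares])
```

Cheaper alternatives attract larger shares, but every alternative keeps a positive share, which is what lets the heuristic model the users' demand elasticity smoothly.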
Stochastic modelling of turbulent combustion for design optimization of gas turbine combustors
NASA Astrophysics Data System (ADS)
Mehanna Ismail, Mohammed Ali
The present work covers the development and the implementation of an efficient algorithm for the design optimization of gas turbine combustors. The purpose is to explore the possibilities and indicate constructive suggestions for optimization techniques as alternative methods for designing gas turbine combustors. The algorithm is general to the extent that no constraints are imposed on the combustion phenomena or on the combustor configuration. The optimization problem is broken down into two elementary problems: the first is the optimum search algorithm, and the second is the turbulent combustion model used to determine the combustor performance parameters. These performance parameters constitute the objective and physical constraints in the optimization problem formulation. The examination of both turbulent combustion phenomena and the gas turbine design process suggests that the turbulent combustion model represents a crucial part of the optimization algorithm. The basic requirements needed for a turbulent combustion model to be successfully used in a practical optimization algorithm are discussed. In principle, the combustion model should comply with the conflicting requirements of high fidelity, robustness and computational efficiency. To that end, the problem of turbulent combustion is discussed and the current state of the art of turbulent combustion modelling is reviewed. According to this review, turbulent combustion models based on the composition PDF transport equation are found to be good candidates for application in the present context. However, these models are computationally expensive. To overcome this difficulty, two different models based on the composition PDF transport equation were developed: an improved Lagrangian Monte Carlo composition PDF algorithm and the generalized stochastic reactor model. Improvements in the Lagrangian Monte Carlo composition PDF model performance and its computational efficiency were achieved through the
NASA Astrophysics Data System (ADS)
Sanborn, C. J.; Fitzpatrick, M.; Cormier, V. F.
2012-12-01
The differences between earthquakes and explosions are largest in the highest recordable frequency band. In this band, scattering of elastic energy by small-scale heterogeneity (less than a wavelength) can equilibrate energy on components of motion and stabilize the behavior of the Lg wave trapped in the Earth's crust. Larger-scale structure (greater than a wavelength) can still assume major control over the efficiency or blockage of the Lg and other regional/local seismic waves. We seek to model the combined effects of the large-scale (deterministic) and the small-scale (statistical) structure to invert for improved structural models and to evaluate the performance of yield estimators and discriminants at selected IMS monitoring stations in Eurasia. To that end we have modified a 3-D ray tracing code for calculating ray trajectories [1] in large-scale deterministic structure by adding new code to calculate the mean free path, scattering angle, polarization, and amplitude required by radiative transport theory for the effects of small-scale statistical structure [2]. This poster explores the methods of radiative transport for both deterministic and statistical structure, with particular attention given to the scattering model, and presents preliminary synthetic seismograms generated by the code both with and without the effects of statistical scattering. References: (1) Menke, W., www.iris.edu/software/downloads/plotting/. (2) Shearer, P. M., and P. S. Earle, in Advances in Geophysics, Volume 50: Earth Heterogeneity and Scattering Effects on Seismic Waves, H. Sato and M. C. Fehler (ed.), 2008.
Modeling and optimization of a multi-product biosynthesis factory for multiple objectives.
Lee, Fook Choon; Pandu Rangaiah, Gade; Lee, Dong-Yup
2010-05-01
Genetic algorithms, and optimization in general, enable us to probe deeper into the metabolic pathway recipe for multi-product biosynthesis. An augmented model for simultaneously optimizing serine and tryptophan flux ratios in Escherichia coli was developed by linking the dynamic tryptophan operon model and the aromatic amino acid-tryptophan biosynthesis pathways to the central carbon metabolism model. Six new kinetic parameters of the augmented model were estimated with consideration of available experimental data and other published works. Major differences between calculated and reference concentrations and fluxes were explained. Sensitivities and the underlying competition among fluxes for carbon sources were consistent with intuitive expectations based on the metabolic network and previous results. Biosynthesis rates of serine and tryptophan were simultaneously maximized using the augmented model via concurrent gene knockout and manipulation. The optimization results were obtained using the elitist non-dominated sorting genetic algorithm (NSGA-II) supported by pattern recognition heuristics. A range of Pareto-optimal enzyme activities regulating the amino acid biosynthesis was successfully obtained and elucidated wherever possible vis-à-vis fermentation work based on recombinant DNA technology. The predicted potential improvements in various metabolic pathway recipes using the multi-objective optimization strategy are highlighted and discussed in detail. PMID:20051269
Automatic Calibration of a Semi-Distributed Hydrologic Model Using Particle Swarm Optimization
NASA Astrophysics Data System (ADS)
Bekele, E. G.; Nicklow, J. W.
2005-12-01
Hydrologic simulation models need to be calibrated and validated before they are used for operational predictions. Spatially distributed hydrologic models generally have a large number of parameters to capture the various physical characteristics of a hydrologic system. Manual calibration of such models is a tedious and daunting task, and its success depends on the subjective assessment of a particular modeler, which includes knowledge of the basic approaches and interactions in the model. In order to alleviate these shortcomings, an automatic calibration model, which employs an evolutionary optimization technique known as Particle Swarm Optimization (PSO) for parameter estimation, is developed. PSO is a heuristic search algorithm inspired by the social behavior of bird flocking and fish schooling. The newly developed calibration model is integrated with the U.S. Department of Agriculture's Soil and Water Assessment Tool (SWAT). SWAT is a physically based, semi-distributed hydrologic model developed to predict the long-term impacts of land management practices on water, sediment and agricultural chemical yields in large complex watersheds with varying soils, land use, and management conditions. SWAT was calibrated for streamflow and sediment concentration. The calibration process involves parameter specification, whereby sensitive model parameters are identified, and parameter estimation. In order to reduce the number of parameters to be calibrated, parameterization was performed. The methodology is applied to a demonstration watershed known as Big Creek, located in southern Illinois. Application results show the effectiveness of the approach, and model predictions are significantly improved.
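The PSO loop used for calibration can be sketched in a few lines: each particle is a candidate parameter vector pulled toward its personal best and the swarm's global best. The objective below is a hypothetical stand-in (a simple quadratic) for the streamflow/sediment error that a SWAT run would return; inertia and acceleration constants are common textbook defaults, not the paper's settings.

```python
import random

random.seed(42)

def pso_minimize(f, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer for model calibration.

    f: objective (e.g. sum of squared errors between simulated and
       observed streamflow); bounds: [(lo, hi), ...] per parameter.
    """
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # Clamp to the feasible parameter range.
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Stand-in calibration problem: recover two "model parameters" that
# minimize squared error against a synthetic target (2.0, -1.0).
obj = lambda p: (p[0] - 2.0) ** 2 + (p[1] + 1.0) ** 2
best, err = pso_minimize(obj, [(-5, 5), (-5, 5)])
print([round(x, 2) for x in best], round(err, 6))
```

In the real workflow, `f` wraps a full SWAT simulation, so each fitness evaluation is a model run and the swarm size and iteration count trade accuracy against compute time.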
Zhang, Zili; Gao, Chao; Liu, Yuxin; Qian, Tao
2014-09-01
Ant colony optimization (ACO) algorithms often fall into local optima and exhibit low search efficiency when solving the travelling salesman problem (TSP). To address these shortcomings, this paper proposes a universal optimization strategy for updating the pheromone matrix in ACO algorithms. The new strategy takes advantage of the critical paths preserved in the process of evolving adaptive networks in the Physarum-inspired mathematical model (PMM). The optimized algorithms, denoted PMACO algorithms, enhance the amount of pheromone on the critical paths and promote exploitation of the optimal solution. Experimental results on synthetic and real networks show that the PMACO algorithms are more efficient and robust than traditional ACO algorithms and are adaptable to TSPs with single or multiple objectives. We further analyse the influence of parameters on the performance of the PMACO algorithms and, based on these analyses, work out the best values of these parameters for the TSP. PMID:24613939
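The pheromone update described above can be sketched as the standard ACO evaporation-and-deposit rule plus an extra deposit on critical-path edges; the form and magnitude of that extra term (`boost`) and the edge list are assumptions for illustration, not the paper's exact PMM coupling:

```python
def update_pheromone(tau, tours, tour_lengths, critical_edges, rho=0.5, q=1.0, boost=0.5):
    """Evaporate pheromone, deposit along ant tours, and extra-reinforce critical edges."""
    n = len(tau)
    for i in range(n):
        for j in range(n):
            tau[i][j] *= (1.0 - rho)                 # evaporation
    for tour, length in zip(tours, tour_lengths):
        deposit = q / length                         # shorter tours deposit more
        for i, j in zip(tour, tour[1:] + tour[:1]):  # closed tour edges
            tau[i][j] += deposit
            tau[j][i] += deposit
    for i, j in critical_edges:                      # PMM-inspired reinforcement (assumed form)
        tau[i][j] += boost
        tau[j][i] += boost
    return tau

# 4-city example: one ant tour of length 10, with (0, 1) marked as a critical edge
tau = [[1.0] * 4 for _ in range(4)]
tau = update_pheromone(tau, tours=[[0, 1, 2, 3]], tour_lengths=[10.0],
                       critical_edges=[(0, 1)])
```

After the update, the critical edge carries more pheromone (1.1) than ordinary tour edges (0.6) or unused edges (0.5), biasing subsequent ants toward it.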
Factor analysis models via I-divergence optimization.
Finesso, Lorenzo; Spreij, Peter
2016-09-01
Given a positive definite covariance matrix Sigma of dimension n, we approximate it with a covariance of the form HH^T + D, where H has a prescribed number k of columns and D is diagonal. The quality of the approximation is gauged by the I-divergence between the zero mean normal laws with covariances Sigma and HH^T + D, respectively. To determine a pair (H, D) that minimizes the I-divergence we construct, by lifting the minimization into a larger space, an iterative alternating minimization algorithm (AML) à la Csiszár-Tusnády. As it turns out, the proper choice of the enlarged space is crucial for optimization. The convergence of the algorithm is studied, with special attention given to the case where D is singular. The theoretical properties of the AML are compared to those of the popular EM algorithm for exploratory factor analysis. Inspired by ECME (a Newton-Raphson variation on EM), we develop a similar variant of AML, called ACML, and in a few numerical experiments we compare the performances of the four algorithms. PMID:26608962
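For intuition, an alternating scheme for the same Sigma ≈ HH^T + D structure can be sketched as follows. Note this is a classical least-squares principal-factor iteration, not the paper's I-divergence AML: fix D and refit H from the top-k eigenpairs of Sigma - D, then refit D as the diagonal residual:

```python
import numpy as np

def factor_approx(sigma, k, iters=50):
    """Alternating least-squares-style fit of sigma ≈ H H^T + D with D diagonal.
    (Illustrative principal-factor iteration, not the I-divergence AML.)"""
    D = np.diag(np.diag(sigma)) * 0.5               # crude initial diagonal part
    for _ in range(iters):
        vals, vecs = np.linalg.eigh(sigma - D)
        idx = np.argsort(vals)[::-1][:k]            # top-k eigenpairs
        vals_k = np.clip(vals[idx], 0.0, None)      # keep the factor part PSD
        H = vecs[:, idx] * np.sqrt(vals_k)
        D = np.diag(np.clip(np.diag(sigma - H @ H.T), 1e-12, None))
    return H, D

# Synthetic test matrix with exact low-rank-plus-diagonal structure
rng = np.random.default_rng(0)
H_true = rng.normal(size=(5, 2))
sigma = H_true @ H_true.T + np.diag(rng.uniform(0.5, 1.0, 5))
H, D = factor_approx(sigma, k=2)
err = np.linalg.norm(sigma - (H @ H.T + D))
```

The alternating structure (optimize one block with the other fixed) is the same skeleton the AML and EM variants share; they differ in the divergence being minimized and in the update formulas.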
Recent developments in DYNSUB: New models, code optimization and parallelization
Daeubler, M.; Trost, N.; Jimenez, J.; Sanchez, V.
2013-07-01
DYNSUB is a high-fidelity coupled code system consisting of the reactor simulator DYN3D and the sub-channel code SUBCHANFLOW. It describes nuclear reactor core behavior with pin-by-pin resolution for both steady-state and transient scenarios. In the course of the coupled code system's active development, super-homogenization (SPH) and generalized equivalence theory (GET) discontinuity factors may be computed with and employed in DYNSUB to compensate for pin-level homogenization errors. Because of the greatly increased numerical problem size for pin-by-pin simulations, DYNSUB has benefited from HPC techniques to improve its numerical performance. DYNSUB's coupling scheme has been structurally revised. Computational bottlenecks have been identified and parallelized for shared memory systems using OpenMP. Comparing the elapsed time for simulating a PWR core with one-eighth symmetry under hot zero power conditions with the original and the optimized DYNSUB using 8 cores, overall speed-up factors greater than 10 have been observed. The corresponding reduction in execution time enables routine application of DYNSUB to study pin-level safety parameters for engineering-sized cases in a scientific environment. (authors)
Basafa, Ehsan; Armand, Mehran
2014-01-01
A potentially effective treatment for prevention of osteoporotic hip fractures is augmentation of the mechanical properties of the femur by injecting it with agents such as polymethylmethacrylate (PMMA) bone cement, a procedure known as femoroplasty. The operation, however, is still at the research stage and can benefit substantially from computer planning and optimization. We report the results of computational planning and optimization of the procedure for biomechanical evaluation. An evolutionary optimization method was used to optimally place the cement in finite element (FE) models of seven osteoporotic bone specimens. The optimization, with some inter-specimen variation, suggested that areas close to the cortex in the superior and inferior neck and the supero-lateral aspect of the greater trochanter benefit most from augmentation. We then used a particle-based model of bone cement diffusion to match the optimized pattern, taking into account the limitations of the actual surgery, including a limited volume of injection to prevent thermal necrosis. Simulations showed that the yield load can be significantly increased, by more than 30%, using only 9 ml of bone cement. This increase is comparable to previous literature reports where gross filling of the bone was employed instead, using more than 40 ml of cement. These findings, along with the differences in the optimized plans between specimens, emphasize the need for subject-specific models for effective planning of femoral augmentation. PMID:24856887
Parameter identification of a distributed runoff model by the optimization software Colleo
NASA Astrophysics Data System (ADS)
Matsumoto, Kazuhiro; Miyamoto, Mamoru; Yamakage, Yuzuru; Tsuda, Morimasa; Anai, Hirokazu; Iwami, Yoichi
2015-04-01
The introduction of Colleo (Collection of Optimization software) is presented, and case studies of parameter identification for a distributed runoff model are illustrated. In order to calculate river discharge accurately, distributed runoff models have become widely used to take into account variations in land use, soil type and rainfall distribution. A feasibility study of parameter optimization is best done in two steps. The first step is to survey which optimization algorithms are suitable for the problems of interest. The second step is to investigate the performance of the specific optimization algorithm. Most previous studies seem to focus on the second step. This study focuses on the first step and complements the previous studies. Many optimization algorithms have been proposed in the computational science field, and a large number of optimization software packages have been developed and released to the public with practically applicable performance and quality. It is well known that using algorithms suited to the problem is important for obtaining good optimization results efficiently. To make algorithm comparison straightforward, optimization software is needed with which the performance of many algorithms can be compared and which can be connected to various simulation software. Colleo was developed to satisfy such needs. Colleo provides a unified user interface to several optimization packages, such as pyOpt, NLopt, inspyred and R, and helps investigate the suitability of optimization algorithms. 74 different implementations of optimization algorithms, including Nelder-Mead, Particle Swarm Optimization and Genetic Algorithms, are available with Colleo. The effectiveness of Colleo was demonstrated with flood events of the Gokase River basin in Japan (1820 km2). From 2002 to 2010, there were 15 flood events in which the discharge exceeded 1000 m3/s. The discharge was calculated with the PWRI distributed hydrological model developed by ICHARM. The target
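The unified-interface idea behind Colleo can be sketched as a registry that dispatches a common `optimize(name, f, bounds)` call to interchangeable backends. The two toy backends below are illustrative stand-ins, not Colleo's actual wrappers for pyOpt, NLopt, inspyred, or R:

```python
import random

def random_search(f, bounds, budget=200, seed=0):
    """Baseline backend: sample uniformly within bounds, keep the best point."""
    rng = random.Random(seed)
    best_x, best_f = None, float("inf")
    for _ in range(budget):
        x = [rng.uniform(lo, hi) for lo, hi in bounds]
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f

def coordinate_descent(f, bounds, budget=200, seed=0):
    """Second backend: greedy axis-aligned steps with a halving step size."""
    x = [(lo + hi) / 2 for lo, hi in bounds]
    step = [(hi - lo) / 4 for lo, hi in bounds]
    evals = 0
    while evals < budget:
        for d in range(len(x)):
            for delta in (step[d], -step[d]):
                cand = x[:]
                cand[d] = min(bounds[d][1], max(bounds[d][0], x[d] + delta))
                if f(cand) < f(x):
                    x = cand
                evals += 1
        step = [s * 0.5 for s in step]
    return x, f(x)

ALGORITHMS = {"random": random_search, "coord": coordinate_descent}

def optimize(name, f, bounds, **kw):
    """Unified entry point: dispatch to any registered backend by name."""
    return ALGORITHMS[name](f, bounds, **kw)

# Compare all registered backends on the same problem with one loop
shifted = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 0.5) ** 2
results = {name: optimize(name, shifted, [(-2, 2), (-2, 2)])[1] for name in ALGORITHMS}
```

Because every backend shares one signature, surveying which algorithm suits a given runoff-calibration problem (the "first step" above) reduces to iterating over the registry.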
Oyster Creek cycle 10 nodal model parameter optimization study using PSMS
Dougher, J.D.
1987-01-01
The power shape monitoring system (PSMS) is an on-line core monitoring system that uses a three-dimensional nodal code (NODE-B) to perform nodal power calculations and compute thermal margins. The PSMS contains a parameter optimization function that improves the ability of NODE-B to accurately monitor core power distributions. This function iterates on the model normalization parameters (albedos and mixing factors) to obtain the best agreement between predicted and measured traversing in-core probe (TIP) readings on a statepoint-by-statepoint basis. Following several statepoint optimization runs, an average set of optimized normalization parameters can be determined and implemented in the current or subsequent cycle core model for on-line core monitoring. A statistical analysis of 19 high-power steady-state statepoints throughout Oyster Creek cycle 10 operation has shown consistently poor virgin model performance. The normalization parameters used in the cycle 10 NODE-B model were based on a cycle 8 study, which evaluated only Exxon fuel types. The introduction of General Electric (GE) fuel into cycle 10 (172 assemblies) was a significant fuel/core design change that could have altered the optimum set of normalization parameters. Based on the need to evaluate a potential change in the model normalization parameters for cycle 11, and in an attempt to account for the poor cycle 10 model performance, a parameter optimization study was performed.
Modified optimal control pilot model for computer-aided design and analysis
NASA Technical Reports Server (NTRS)
Davidson, John B.; Schmidt, David K.
1992-01-01
This paper presents the theoretical development of a modified optimal control pilot model based upon the optimal control model (OCM) of the human operator developed by Kleinman, Baron, and Levison. This model is input compatible with the OCM and retains other key aspects of the OCM, such as a linear quadratic solution for the pilot gains with inclusion of control rate in the cost function, a Kalman estimator, and the ability to account for attention allocation and perception threshold effects. An algorithm designed for easy implementation in current dynamic systems analysis and design software is presented. Example results based upon the analysis of a tracking task using three basic dynamic systems are compared with measured results and with similar analyses performed with the OCM and two previously proposed simplified optimal pilot models. The pilot frequency responses and error statistics obtained with this modified optimal control model are shown to compare more favorably to the measured experimental results than those of the other previously proposed simplified models evaluated.
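One ingredient of such optimal control pilot models, the linear quadratic feedback gain, can be sketched via backward Riccati iteration. The double-integrator plant and weights below are hypothetical; the full OCM additionally includes control-rate weighting, a Kalman estimator, and attention/threshold effects:

```python
import numpy as np

def dlqr_gain(A, B, Q, R, iters=500):
    """Steady-state discrete-time LQR feedback gain via backward Riccati iteration."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # gain for current P
        P = Q + A.T @ P @ (A - B @ K)                        # Riccati recursion
    return K

# Hypothetical double-integrator plant (simplified tracking dynamics), dt = 0.1 s
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
Q = np.diag([1.0, 0.1])     # penalize tracking error and rate
R = np.array([[0.01]])      # control effort weight
K = dlqr_gain(A, B, Q, R)
```

The resulting gain stabilizes the closed loop (all eigenvalues of A - BK inside the unit circle), which is the property the pilot-gain solution in the OCM framework relies on.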
NASA Technical Reports Server (NTRS)
Loos, Alfred C.; Weideman, Mark H.; Kranbuehl, David E.; Long, Edward R., Jr.
1991-01-01
Process simulation models and cure monitoring sensors are discussed for use in optimal processing of fiber-reinforced composites. Analytical models relate the specified temperature and pressure cure cycle to the thermal, chemical, and physical processes occurring in the composite during consolidation and cure. Frequency-dependent electromagnetic sensing (FDEMS) is described as an in situ sensor for monitoring the composite curing process and for verification of process simulation models. A model for resin transfer molding of textile composites is used to illustrate the predictive capabilities of a process simulation model. The model is used to calculate the resin infiltration time, fiber volume fraction, resin viscosity, and resin degree of cure. Results of the model are compared with in situ FDEMS measurements.
Stygar, A H; Kristensen, A R; Makulska, J
2014-08-01
The aim of this study was to provide farmers with an efficient tool for supporting optimal decisions in the beef heifer rearing process. The complexity of beef heifer management prompted the development of a model including decisions on the feeding level during prepuberty (age <10 mo), the time of weaning (age, BW, calendar month), the feeding level during the reproductive period (age ≥10 mo), and the time of breeding (age, BW, and calendar month). The model was formulated as a 3-level hierarchic Markov process. The founder level of the model has 12 states, representing all possible birth months of a heifer. Based on the birth month information from the founder level, feeding and breeding costs (natural service cost in the outdoor season and AI cost in the indoor season) were applied for the indoor season (November to April) and the outdoor season (May to October). The optimal rearing strategy was found by maximizing the total discounted net revenues from the predicted future productivity of Polish Limousine heifers, defined as the cumulative BW of calves born from a cow calved until the age of 5 yr, standardized on the 210th day of age. According to the modeled optimal policy, heifers fed during the whole rearing period at an ADG of 810 g/d and generally weaned after the maximum suckling period of 9 mo should already be bred at the age of 13.2 mo and at a BW constituting 55.6% of the average mature BW. Based on the optimal strategy, 52% of all heifers conceived from May to July and calved from February to April. This optimal rearing pattern resulted in an average net return of EUR 311.6 per pregnant heifer. It was found that the economic efficiency of beef operations can be improved by applying herd management practices different from those currently used in Poland. Breeding at 55.6% of the average mature BW, after a shorter and less expensive rearing period, resulted in an increase in the average net return per heifer by almost 18% compared to the conventional system, in which heifers were
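The backward-induction logic of such Markov decision models can be sketched with generic value iteration on a toy rearing problem. The states, rewards, and discount factor below are invented for illustration, not the paper's 3-level hierarchic model:

```python
def value_iteration(n_states, actions, trans, reward, gamma=0.95, iters=100):
    """Generic value iteration: V(s) = max_a [ r(s,a) + gamma * sum_s' P(s'|s,a) V(s') ]."""
    V = [0.0] * n_states
    for _ in range(iters):
        V = [max(reward(s, a) + gamma * sum(p * V[s2] for s2, p in trans(s, a))
                 for a in actions(s)) for s in range(n_states)]
    policy = [max(actions(s), key=lambda a: reward(s, a)
                  + gamma * sum(p * V[s2] for s2, p in trans(s, a)))
              for s in range(n_states)]
    return V, policy

# Toy rearing problem: states 0-2 are age bins, state 3 is "bred" (absorbing).
# Invented numbers: breeding return peaks at the middle age bin; waiting costs feed.
RETURNS = [2.0, 5.0, 4.0]

def actions(s):
    return ["wait", "breed"] if s < 3 else ["done"]

def trans(s, a):
    if a == "wait":
        return [(min(s + 1, 2), 1.0)]   # age by one period
    return [(3, 1.0)]                   # "breed" and "done" lead to the absorbing state

def reward(s, a):
    if a == "breed":
        return RETURNS[s]
    if a == "wait":
        return -0.5                     # feeding cost per period
    return 0.0

V, policy = value_iteration(4, actions, trans, reward)
```

The optimal policy waits through the youngest bin and breeds at the middle one, the same shape of trade-off (rearing cost now versus calf revenue later) the hierarchic model resolves at a much finer resolution.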
Using Agent Base Models to Optimize Large Scale Network for Large System Inventories
NASA Technical Reports Server (NTRS)
Shameldin, Ramez Ahmed; Bowling, Shannon R.
2010-01-01
The aim of this paper is to use Agent Base Models (ABM) to optimize large scale network handling capabilities for large system inventories and to implement strategies for the purpose of reducing capital expenses. The models used in this paper employ computational algorithms and procedures implemented in MATLAB to simulate agent-based models, run in parallel on computing clusters that provide the required high-performance computation. In both cases, a model is defined as a compilation of a set of structures and processes assumed to underlie the behavior of a network system.
Hiraishi, Kunihiko
2014-01-01
One of the significant topics in systems biology is the development of control theory for gene regulatory networks (GRNs). In typical control of GRNs, expression of some genes is inhibited (or activated) by manipulating external stimuli and the expression of other genes. Control theory of GRNs is expected to be applied to gene therapy technologies in the future. In this paper, a control method using a Boolean network (BN) is studied. A BN is widely used as a model of GRNs, in which gene expression is represented by a binary value (ON or OFF). In particular, a context-sensitive probabilistic Boolean network (CS-PBN), one of the extended models of BNs, is used. For CS-PBNs, the verification problem and the optimal control problem are considered. For the verification problem, a solution method using the probabilistic model checker PRISM is proposed. For the optimal control problem, a solution method using polynomial optimization is proposed. Finally, a numerical example on the WNT5A network, which is related to melanoma, is presented. The proposed methods provide useful tools for the control theory of GRNs. PMID:24587766
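A synchronous Boolean network update and attractor search can be sketched as follows. The 3-gene wiring is hypothetical, and the model is a plain deterministic BN rather than a CS-PBN:

```python
def bn_step(state, funcs):
    """Synchronous update: each gene's next value is a Boolean function of the state."""
    return tuple(f(state) for f in funcs)

# Hypothetical 3-gene network: g0 is inhibited by g2, g1 activated by g0, g2 by g1
funcs = [
    lambda s: not s[2],        # g0' = NOT g2
    lambda s: s[0],            # g1' = g0
    lambda s: s[1],            # g2' = g1
]

def attractor(state, funcs, max_steps=64):
    """Iterate until a previously seen state recurs; return the resulting cycle."""
    seen = {}
    trajectory = []
    for t in range(max_steps):
        if state in seen:
            return trajectory[seen[state]:]
        seen[state] = t
        trajectory.append(state)
        state = bn_step(state, funcs)
    return []

cycle = attractor((True, False, False), funcs)
```

Control questions in the BN setting amount to choosing interventions (fixing certain genes ON or OFF, i.e. replacing entries of `funcs`) so that the network's attractors avoid undesirable expression patterns; the CS-PBN adds probabilistic switching among several such function sets.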
Three-dimensional magnetic optimization of accelerator magnets using an analytic strip model
Rochepault, Etienne; Aubert, Guy; Vedrine, Pierre
2014-07-14
The end design is a critical step in the design of superconducting accelerator magnets. First, the strain energy of the conductors must be minimized, which can be achieved using differential geometry. The end design also requires an optimization of the magnetic field homogeneity. A mechanical and magnetic model for the conductors, using developable strips, is described in this paper. This model can be applied to superconducting Rutherford cables, and it is particularly suitable for High Temperature Superconducting tapes. The great advantage of this approach is analytic simplifications in the field computation, allowing for very fast and accurate computations, which save a considerable computational time during the optimization process. Some 3D designs for dipoles are finally proposed, and it is shown that the harmonic integrals can be easily optimized using this model.
Optimal vaccination in a stochastic epidemic model of two non-interacting populations.
Yuan, Edwin C; Alderson, David L; Stromberg, Sean; Carlson, Jean M
2015-01-01
Developing robust, quantitative methods to optimize resource allocations in response to epidemics has the potential to save lives and minimize health care costs. In this paper, we develop and apply a computationally efficient algorithm that enables us to calculate the complete probability distribution for the final epidemic size in a stochastic Susceptible-Infected-Recovered (SIR) model. Based on these results, we determine the optimal allocations of a limited quantity of vaccine between two non-interacting populations. We compare the stochastic solution to results obtained for the traditional, deterministic SIR model. For intermediate quantities of vaccine, the deterministic model is a poor estimate of the optimal strategy for the more realistic, stochastic case. PMID:25688857
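The exact final-size distribution for a small stochastic SIR model can be computed by recursing over the embedded Markov chain: at each event, the next transition is an infection with probability beta*S/(beta*S + gamma) and a recovery otherwise. The parameter values below are illustrative:

```python
def final_size_dist(S0, I0, beta, gamma):
    """Exact distribution of the final epidemic size (number of initially susceptible
    individuals ever infected) for a stochastic SIR model, via the embedded chain."""
    dist = [0.0] * (S0 + 1)

    def walk(S, I, prob):
        if I == 0:                       # epidemic over: record final size
            dist[S0 - S] += prob
            return
        p_inf = beta * S / (beta * S + gamma) if S > 0 else 0.0
        if S > 0:
            walk(S - 1, I + 1, prob * p_inf)        # next event: infection
        walk(S, I - 1, prob * (1.0 - p_inf))        # next event: recovery

    walk(S0, I0, 1.0)
    return dist

# Small illustrative population: 10 susceptibles, 1 initial infective
dist = final_size_dist(S0=10, I0=1, beta=0.3, gamma=1.0)
```

With these numbers the probability of no further infections is gamma / (beta*S0 + gamma) = 1/4, visible as `dist[0]`. Comparing allocations of vaccine between two non-interacting populations then amounts to computing such distributions for each candidate split of the susceptibles.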
Inverse hydrograph routing optimization model based on the kinematic wave approach
NASA Astrophysics Data System (ADS)
Saghafian, B.; Jannaty, M. H.; Ezami, N.
2015-08-01
This article presents and validates an inverse flood hydrograph routing optimization model under the kinematic wave (KW) approximation, in order to produce the upstream (inflow) hydrograph given the downstream (outflow) hydrograph of a river reach. The cost function involves minimization of the error between the observed outflow hydrograph and the corresponding directly routed outflow hydrograph. The decision variables are the inflow hydrograph ordinates. The KW and genetic algorithm (GA) methods are coupled, representing the selected methods of direct routing and optimization, respectively. A local search technique is also employed to achieve better agreement of the routed outflow hydrograph with the observed hydrograph. Computer programs handling the direct flood routing, the cost function and the local search are linked with the optimization model. The results show that the case study inflow hydrographs were reconstructed accurately by the GA. It was also concluded that the coupled KW-GA model framework can perform inverse hydrograph routing with numerical stability.
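The inverse-routing structure can be sketched with a tiny elitist GA recovering inflow ordinates from a routed outflow. A linear-reservoir model stands in for the kinematic wave here, and all hydrograph values, GA settings, and the local-search omission are illustrative simplifications:

```python
import random

def route(inflow, k=2.0, dt=1.0, out0=0.0):
    """Toy linear-reservoir stand-in for direct routing (the paper uses the
    kinematic wave; this keeps the inverse-problem structure visible)."""
    out, o = [], out0
    for i in inflow:
        o = o + (dt / k) * (i - o)
        out.append(o)
    return out

def cost(inflow, observed):
    """Squared error between the routed outflow and the observed outflow."""
    routed = route(inflow)
    return sum((a - b) ** 2 for a, b in zip(routed, observed))

def ga_invert(observed, n_gen=200, pop_size=40, sigma=0.5, seed=1):
    """Tiny elitist GA over inflow ordinates: blend-crossover elites, mutate, repeat."""
    rng = random.Random(seed)
    T = len(observed)
    pop = [[rng.uniform(0, 10) for _ in range(T)] for _ in range(pop_size)]
    for _ in range(n_gen):
        pop.sort(key=lambda ind: cost(ind, observed))
        elite = pop[: pop_size // 2]            # keep best half (elitism)
        children = []
        for _ in range(pop_size - len(elite)):
            a, b = rng.sample(elite, 2)
            child = [(x + y) / 2 + rng.gauss(0, sigma) for x, y in zip(a, b)]
            children.append([max(0.0, v) for v in child])   # inflow is non-negative
        pop = elite + children
    return min(pop, key=lambda ind: cost(ind, observed))

# Generate a synthetic observed outflow from a known inflow, then invert it
true_inflow = [0.0, 2.0, 6.0, 9.0, 5.0, 2.0, 1.0, 0.0]
observed = route(true_inflow)
best = ga_invert(observed)
```

Replacing `route` with a KW solver and adding the paper's local search around the GA's best individual recovers the full KW-GA framework.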