Modeling using optimization routines
NASA Technical Reports Server (NTRS)
Thomas, Theodore
1995-01-01
Modeling using mathematical optimization routines is a design tool used in magnetic suspension system development. MATLAB is used to calculate minimum cost and other desired constraints. The parameters to be measured are programmed into mathematical equations, and MATLAB calculates answers for each set of inputs, which cover the boundary limits of the design. A Magnetic Suspension System Using Electromagnets Mounted in a Planar Array is one design system that makes use of optimization modeling.
HOMER® Micropower Optimization Model
Lilienthal, P.
2005-01-01
NREL has developed the HOMER micropower optimization model. The model can analyze all of the available small power technologies individually and in hybrid configurations to identify least-cost solutions to energy requirements. This capability is valuable to a diverse set of energy professionals and applications. NREL has actively supported its growing user base and developed training programs around the model. These activities are helping to grow the global market for solar technologies.
Optimization in Cardiovascular Modeling
NASA Astrophysics Data System (ADS)
Marsden, Alison L.
2014-01-01
Fluid mechanics plays a key role in the development, progression, and treatment of cardiovascular disease. Advances in imaging methods and patient-specific modeling now reveal increasingly detailed information about blood flow patterns in health and disease. Building on these tools, there is now an opportunity to couple blood flow simulation with optimization algorithms to improve the design of surgeries and devices, incorporating more information about the flow physics in the design process to augment current medical knowledge. In doing so, a major challenge is the need for efficient optimization tools that are appropriate for unsteady fluid mechanics problems, particularly for the optimization of complex patient-specific models in the presence of uncertainty. This article reviews the state of the art in optimization tools for virtual surgery, device design, and model parameter identification in cardiovascular flow and mechanobiology applications. In particular, it reviews trade-offs between traditional gradient-based methods and derivative-free approaches, as well as the need to incorporate uncertainties. Key future challenges are outlined, which extend to the incorporation of biological response and the customization of surgeries and devices for individual patients.
NEMO Oceanic Model Optimization
NASA Astrophysics Data System (ADS)
Epicoco, I.; Mocavero, S.; Murli, A.; Aloisio, G.
2012-04-01
NEMO is an oceanic model used by the climate community for stand-alone or coupled experiments. Its parallel implementation, based on MPI, limits the exploitation of emerging computational infrastructures at the peta- and exascale due to the weight of communications. As a case study we considered the MFS configuration developed at INGV, with a resolution of 1/16° tailored to the Mediterranean Basin. The work focuses on the analysis of the code on the MareNostrum cluster and on the optimization of critical routines. The first performance analysis of the model aimed at establishing how much the computational performance is influenced by the GPFS file system versus the local disks, and which domain decomposition is best. The results highlight that exploiting the local disks can reduce the wall-clock time by up to 40%, and that the best performance is achieved with a 2D decomposition when the local domain has a square shape. A deeper performance analysis highlights that the obc_rad, dyn_spg and tra_adv routines are the most time-consuming. The obc_rad routine, which evaluates the open boundaries, was the first to be optimized, and its communication pattern was redesigned. Before the introduction of the optimizations all processes were involved in the communication, but only the processes on the boundaries hold the actual data to be exchanged, and only the data on the boundaries must be exchanged. Moreover, the data along the vertical levels are packed and sent with a single MPI_Send invocation. The overall efficiency increases compared with the original version, as does the parallel speed-up; the execution time was reduced by about 33.81%. The second phase of optimization involved the SOR solver routine, which implements the Red-Black Successive Over-Relaxation method. The high frequency of data exchange among processes represents most of the overall communication time. The number of communication is
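The Red-Black Successive Over-Relaxation scheme mentioned above can be sketched in a few lines. This serial Python version is illustrative only (not NEMO's Fortran/MPI code, and the grid and boundary values are assumptions); it shows the two-colour sweep structure that makes the method attractive for parallel solvers:

```python
def sor_red_black(n=15, omega=1.7, tol=1e-8, max_iter=5000):
    """Red-Black SOR for the 2D Laplace equation on an (n+2)x(n+2) grid.

    Boundary: u = 1 on the top edge, 0 elsewhere (illustrative only).
    Returns the grid and the number of iterations to convergence.
    """
    u = [[0.0] * (n + 2) for _ in range(n + 2)]
    for j in range(n + 2):
        u[0][j] = 1.0  # hot top boundary
    for it in range(max_iter):
        max_change = 0.0
        # two half-sweeps: "red" cells (i+j even), then "black" (i+j odd)
        for color in (0, 1):
            for i in range(1, n + 1):
                for j in range(1, n + 1):
                    if (i + j) % 2 != color:
                        continue
                    new = (1.0 - omega) * u[i][j] + omega * 0.25 * (
                        u[i - 1][j] + u[i + 1][j] + u[i][j - 1] + u[i][j + 1]
                    )
                    max_change = max(max_change, abs(new - u[i][j]))
                    u[i][j] = new
        if max_change < tol:
            return u, it + 1
    return u, max_iter

u, iters = sor_red_black(n=15)
# center of a square with one hot side settles at 0.25 by symmetry
```

The colouring matters because every red cell depends only on black neighbours and vice versa, so each half-sweep can be updated concurrently, and in a distributed setting all halo data for a half-sweep can be exchanged in one batched communication.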
Pyomo: Python Optimization Modeling Objects.
Siirola, John; Laird, Carl Damon; Hart, William Eugene; Watson, Jean-Paul
2010-11-01
The Python Optimization Modeling Objects (Pyomo) package [1] is an open source tool for modeling optimization applications within Python. Pyomo provides an object-oriented approach to optimization modeling, and it can be used to define symbolic problems, create concrete problem instances, and solve these instances with standard solvers. While Pyomo provides a capability that is commonly associated with algebraic modeling languages such as AMPL, AIMMS, and GAMS, Pyomo's modeling objects are embedded within a full-featured high-level programming language with a rich set of supporting libraries. Pyomo leverages the capabilities of the Coopr software library [2], which integrates Python packages (including Pyomo) for defining optimizers, modeling optimization applications, and managing computational experiments. A central design principle within Pyomo is extensibility: Pyomo is built upon a flexible component architecture [3] that allows users and developers to readily extend the core Pyomo functionality. Through these interface points, extensions and applications can have direct access to an optimization model's expression objects, which facilitates the rapid development and implementation of new modeling constructs as well as high-level solution strategies (e.g. using decomposition- and reformulation-based techniques). In this presentation, we will give an overview of the Pyomo modeling environment and model syntax, and present several extensions to the core Pyomo environment, including support for Generalized Disjunctive Programming (Coopr GDP), Stochastic Programming (PySP), a generic Progressive Hedging solver [4], and a tailored implementation of Benders decomposition.
Risk modelling in portfolio optimization
NASA Astrophysics Data System (ADS)
Lam, W. H.; Jaaman, Saiful Hafizah Hj.; Isa, Zaidi
2013-09-01
Risk management is very important in portfolio optimization. The mean-variance model is used in portfolio optimization to minimize investment risk: its objective is to minimize the portfolio risk while achieving a target rate of return, with variance serving as the risk measure. The purpose of this study is to compare the portfolio composition and performance of the optimal portfolio of the mean-variance model with those of an equally weighted portfolio, in which equal proportions are invested in each asset. The results show that the compositions of the mean-variance optimal portfolio and the equally weighted portfolio are different. Moreover, the mean-variance optimal portfolio gives better performance, achieving a higher performance ratio than the equally weighted portfolio.
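For two assets the minimum-variance portfolio of the mean-variance model has a closed form, which makes the comparison with an equally weighted portfolio easy to sketch. The variance and covariance figures below are hypothetical, chosen only for illustration:

```python
def min_variance_weights(var1, var2, cov):
    """Closed-form global minimum-variance weights for two assets."""
    denom = var1 + var2 - 2.0 * cov
    w1 = (var2 - cov) / denom
    return w1, 1.0 - w1

def portfolio_variance(w1, w2, var1, var2, cov):
    """Variance of a two-asset portfolio with weights w1, w2."""
    return w1 * w1 * var1 + w2 * w2 * var2 + 2.0 * w1 * w2 * cov

# hypothetical annualized variances and covariance
var_a, var_b, cov_ab = 0.04, 0.09, 0.01
w1, w2 = min_variance_weights(var_a, var_b, cov_ab)
opt_var = portfolio_variance(w1, w2, var_a, var_b, cov_ab)
eq_var = portfolio_variance(0.5, 0.5, var_a, var_b, cov_ab)
# the optimized portfolio's variance is never above the equal-weight one
```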
Optimal designs for copula models
Perrone, E.; Müller, W.G.
2016-01-01
Copula modelling has in the past decade become a standard tool in many areas of applied statistics. However, a largely neglected aspect concerns the design of related experiments, particularly whether the estimation of copula parameters can be enhanced by optimizing experimental conditions, and how robust the parameter estimates for the model are with respect to the type of copula employed. In this paper an equivalence theorem for (bivariate) copula models is provided that allows formulation of efficient design algorithms and quick checks of whether designs are optimal or at least efficient. Some examples illustrate that in practical situations considerable gains in design efficiency can be achieved. A natural comparison between different copula models with respect to design efficiency is provided as well. PMID:27453616
Optimizing a tandem disk model
Healey, J.V.
1983-07-01
A very simple physicomathematical model, in which thin straight blades with zero drag skim across a plane rectangular disk, shows that the maximum power coefficient attains the classical maximum of 0.593 over a range of T and a zero or small negative value of alpha_0. This maximum appears independent of sigma, and there are values of T and alpha_0 for which the speed through the disk becomes complex and the model breaks down. Extending this model to a tandem disk system leads to a difficulty in defining the power coefficient. Attempts to optimize the system output based on reference areas A_1, A_2, and A_4 prove futile, and the sum of the coefficients is chosen for this purpose. For thin blades and zero drag the analytic solution is available, and it shows that the maximum value of 2 x 0.593 is attained over a narrow range of slightly negative alpha_0 (blade nose in) and medium values of T. The maximum is independent of sigma. As T is increased, the model breaks down either after C_p,sum becomes large and negative or after backflow through the downwind disk occurs. There appears to be no requirement on load distribution between the disks. By comparison, modeling a machine with NACA 0012 blades at Re = 1.34 x 10^6 shows that the maximum value of C_p,sum depends on the solidity. For example, at sigma = 0.4, the maximum value of C_p,sum is 83% of 2 x 0.593. At such high values of sigma, however, the ranges of alpha_0 and T over which solutions are available become very limited.
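The 0.593 ceiling cited above is the classical Betz limit, 16/27, from one-dimensional actuator-disk theory, where the power coefficient in terms of the axial induction factor a is C_p(a) = 4a(1 - a)^2. A quick numerical scan recovers both the limit and its location at a = 1/3:

```python
def power_coefficient(a):
    """Actuator-disk power coefficient as a function of induction factor a."""
    return 4.0 * a * (1.0 - a) ** 2

# scan the induction factor on a fine grid to locate the maximum
best_a = max((i / 10000.0 for i in range(1, 10000)), key=power_coefficient)
cp_max = power_coefficient(best_a)
# cp_max is within rounding of 16/27 ~ 0.593, attained near a = 1/3
```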
Branch strategies - Modeling and optimization
NASA Technical Reports Server (NTRS)
Dubey, Pradeep K.; Flynn, Michael J.
1991-01-01
The authors provide a common platform for modeling different schemes for reducing the branch-delay penalty in pipelined processors as well as evaluating the associated increased instruction bandwidth. Their objective is twofold: to develop a model for different approaches to the branch problem and to help select an optimal strategy after taking into account additional i-traffic generated by branch strategies. The model presented provides a flexible tool for comparing different branch strategies in terms of the reduction it offers in average branch delay and also in terms of the associated cost of wasted instruction fetches. This additional criterion turns out to be a valuable consideration in choosing between two strategies that perform almost equally. More importantly, it provides a better insight into the expected overall system performance. Simple compiler-support-based low-implementation-cost strategies can be very effective under certain conditions. An active branch prediction scheme based on loop buffers can be as competitive as a branch-target-buffer based strategy.
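The two evaluation criteria described above (average branch delay and wasted instruction fetches) can be sketched as a small expected-cost model. The formula and every number below are illustrative assumptions, not the authors' exact formulation:

```python
def branch_cost(branch_freq, p_taken, delay_taken, delay_not_taken,
                wasted_taken, wasted_not_taken):
    """Expected per-instruction branch delay and wasted fetches.

    branch_freq: fraction of instructions that are branches;
    p_taken: probability a branch is taken; the delay/wasted pairs give
    cycles lost and fetches discarded for taken vs not-taken outcomes
    under a given strategy. All values are hypothetical.
    """
    avg_delay = branch_freq * (p_taken * delay_taken
                               + (1.0 - p_taken) * delay_not_taken)
    avg_wasted = branch_freq * (p_taken * wasted_taken
                                + (1.0 - p_taken) * wasted_not_taken)
    return avg_delay, avg_wasted

# compare "predict not taken" against a branch target buffer that hits
# 90% of the time (2-cycle penalty on a mispredicted/missed taken branch)
pnt = branch_cost(0.2, 0.6, 2.0, 0.0, 2.0, 0.0)
btb = branch_cost(0.2, 0.6, 0.1 * 2.0, 0.0, 0.1 * 2.0, 0.0)
```

Tracking the wasted-fetch term alongside the delay term is what lets two strategies with nearly equal average delay be separated by their i-traffic cost, as the abstract argues.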
Optimal Appearance Model for Visual Tracking
Wang, Yuru; Jiang, Longkui; Liu, Qiaoyuan; Yin, Minghao
2016-01-01
Many studies argue that integrating multiple cues in an adaptive way increases tracking performance. However, what is the definition of adaptiveness and how to realize it remains an open issue. On the premise that the model with optimal discriminative ability is also optimal for tracking the target, this work realizes adaptiveness and robustness through the optimization of multi-cue integration models. Specifically, based on prior knowledge and current observation, a set of discrete samples are generated to approximate the foreground and background distribution. With the goal of optimizing the classification margin, an objective function is defined, and the appearance model is optimized by introducing optimization algorithms. The proposed optimized appearance model framework is embedded into a particle filter for a field test, and it is demonstrated to be robust against various kinds of complex tracking conditions. This model is general and can be easily extended to other parameterized multi-cue models. PMID:26789639
How Optimal Is the Optimization Model?
ERIC Educational Resources Information Center
Heine, Bernd
2013-01-01
Pieter Muysken's article on modeling and interpreting language contact phenomena constitutes an important contribution. The approach chosen is a top-down one, building on the author's extensive knowledge of all matters relating to language contact. The paper aims at integrating a wide range of factors and levels of social, cognitive, and…
Optimization of solver for gas flow modeling
NASA Astrophysics Data System (ADS)
Savichkin, D.; Dodulad, O.; Kloss, Yu
2014-05-01
The main purpose of this work is optimization of a solver for rarefied gas flow modeling based on the Boltzmann equation. The optimization method is based on SIMD extensions for x86 processors: the computational code is profiled and manually optimized with SSE instructions. Heat flow, shock waves and a Knudsen pump are modeled with the optimized solver. Dependencies of the computational time on mesh size and CPU capabilities are provided.
Optimal Decision Making in Neural Inhibition Models
ERIC Educational Resources Information Center
van Ravenzwaaij, Don; van der Maas, Han L. J.; Wagenmakers, Eric-Jan
2012-01-01
In their influential "Psychological Review" article, Bogacz, Brown, Moehlis, Holmes, and Cohen (2006) discussed optimal decision making as accomplished by the drift diffusion model (DDM). The authors showed that neural inhibition models, such as the leaky competing accumulator model (LCA) and the feedforward inhibition model (FFI), can mimic the…
Multiobjective Optimization Of An Extremal Evolution Model
NASA Astrophysics Data System (ADS)
Elettreby, Mohamed Fathey
2005-05-01
We propose a two-dimensional model for a co-evolving ecosystem that generalizes the extremal coupled map lattice model. The model takes into account the concept of multiobjective optimization. We find that the system is self-organized into a critical state. The distribution of avalanche sizes follows a power law.
Portfolio optimization with mean-variance model
NASA Astrophysics Data System (ADS)
Hoe, Lam Weng; Siew, Lam Weng
2016-06-01
Investors wish to achieve the target rate of return at the minimum level of risk in their investment. Portfolio optimization is an investment strategy that can be used to minimize the portfolio risk while achieving the target rate of return. The mean-variance model has been proposed for portfolio optimization; it aims to minimize the portfolio risk, measured by the portfolio variance. The objective of this study is to construct the optimal portfolio using the mean-variance model. The data of this study consist of weekly returns of 20 component stocks of the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI). The results of this study show that the portfolio composition across the stocks differs, and that investors can obtain the return at the minimum level of risk with the constructed optimal mean-variance portfolio.
A DSN optimal spacecraft scheduling model
NASA Technical Reports Server (NTRS)
Webb, W. A.
1982-01-01
A computer model is described which uses mixed-integer linear programming to provide optimal DSN spacecraft schedules given a mission set and specified scheduling requirements. A solution technique is proposed which uses Bender's Method and a heuristic starting algorithm.
Evaluation of stochastic reservoir operation optimization models
NASA Astrophysics Data System (ADS)
Celeste, Alcigeimes B.; Billib, Max
2009-09-01
This paper investigates the performance of seven stochastic models used to define optimal reservoir operating policies. The models are based on implicit (ISO) and explicit stochastic optimization (ESO) as well as on the parameterization-simulation-optimization (PSO) approach. The ISO models include multiple regression, two-dimensional surface modeling and a neuro-fuzzy strategy. The ESO model is the well-known and widely used stochastic dynamic programming (SDP) technique. The PSO models comprise a variant of the standard operating policy (SOP), reservoir zoning, and a two-dimensional hedging rule. The models are applied to the operation of a single reservoir damming an intermittent river in northeastern Brazil. The standard operating policy is also included in the comparison and operational results provided by deterministic optimization based on perfect forecasts are used as a benchmark. In general, the ISO and PSO models performed better than SDP and the SOP. In addition, the proposed ISO-based surface modeling procedure and the PSO-based two-dimensional hedging rule showed superior overall performance as compared with the neuro-fuzzy approach.
Optimization-based models of muscle coordination.
Prilutsky, Boris I; Zatsiorsky, Vladimir M
2002-01-01
Optimization-based models may provide reasonably accurate estimates of activation and force patterns of individual muscles in selected well-learned tasks with submaximal efforts. Such optimization criteria as minimum energy expenditure, minimum muscle fatigue, and minimum sense of effort seem most promising. PMID:11800497
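A "minimum effort" criterion of the kind named above can be solved in closed form for a static single-joint task: minimizing the sum of squared muscle stresses (F_i / PCSA_i)^2 subject to a joint-moment constraint yields, via a Lagrange multiplier, forces proportional to moment arm times PCSA squared. The moment arms and cross-sectional areas below are hypothetical:

```python
def distribute_muscle_forces(moment, arms, pcsa):
    """Minimize sum((F_i / PCSA_i)^2) subject to sum(r_i * F_i) = moment.

    arms: moment arms r_i (m); pcsa: cross-sectional areas (cm^2).
    Closed-form Lagrange solution: F_i = M * r_i * A_i^2 / sum(r_j^2 * A_j^2).
    """
    denom = sum(r * r * a * a for r, a in zip(arms, pcsa))
    return [moment * r * a * a / denom for r, a in zip(arms, pcsa)]

# two hypothetical elbow flexors sharing a 40 N*m flexion moment
arms = [0.05, 0.03]
pcsa = [12.0, 6.0]
forces = distribute_muscle_forces(40.0, arms, pcsa)
# the larger, better-leveraged muscle takes most of the load
```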
Modelling and Optimizing Mathematics Learning in Children
ERIC Educational Resources Information Center
Käser, Tanja; Busetto, Alberto Giovanni; Solenthaler, Barbara; Baschera, Gian-Marco; Kohn, Juliane; Kucian, Karin; von Aster, Michael; Gross, Markus
2013-01-01
This study introduces a student model and control algorithm, optimizing mathematics learning in children. The adaptive system is integrated into a computer-based training system for enhancing numerical cognition aimed at children with developmental dyscalculia or difficulties in learning mathematics. The student model consists of a dynamic…
Enhanced index tracking modelling in portfolio optimization
NASA Astrophysics Data System (ADS)
Lam, W. S.; Hj. Jaaman, Saiful Hafizah; Ismail, Hamizun bin
2013-09-01
Enhanced index tracking is a popular form of passive fund management in the stock market. It is a dual-objective optimization problem, a trade-off between maximizing the mean return and minimizing the risk. Enhanced index tracking aims to generate excess return over that achieved by the index, without purchasing all of the stocks that make up the index, by establishing an optimal portfolio. The objective of this study is to determine the optimal portfolio composition and performance by using a weighted model in enhanced index tracking. The weighted model focuses on the trade-off between the excess return and the risk. The results of this study show that the optimal portfolio for the weighted model is able to outperform the Malaysian market index, the Kuala Lumpur Composite Index, because of its higher mean return and lower risk, without purchasing all the stocks in the market index.
Optimal combinations of specialized conceptual hydrological models
NASA Astrophysics Data System (ADS)
Kayastha, Nagendra; Lal Shrestha, Durga; Solomatine, Dimitri
2010-05-01
In hydrological modelling it is usual practice to use a single lumped conceptual model for simulations at all regimes. However, the simplicity of this modelling paradigm often leads to errors in representing all the complexity of the physical processes in the catchment. A solution could be to model various hydrological processes separately by differently parameterized models and to combine them. Different hydrological models vary in their performance in reproducing catchment response, and generally the response cannot be represented precisely in all segments of the hydrograph: some models perform well in simulating peak flows, while others do well in capturing low flows. Better performance can be achieved if a model is applied to the catchment with different parameter sets calibrated using criteria favoring high or low flows. In this work we use a modular approach to simulate the hydrology of a catchment, wherein multiple models are applied to replicate the catchment responses and each "specialist" model is calibrated according to a specific objective function, chosen so as to force the model to capture certain aspects of the hydrograph; the outputs of the models are combined using a so-called "fuzzy committee". Such a multi-model approach has previously been implemented in the development of data-driven and conceptual models (Fenicia et al., 2007), but its performance was considered only during the calibration period. In this study we tested an application to conceptual models in both the calibration and verification periods. In addition, we tested the sensitivity of the result to the weightings used in the objective function formulations and to the membership functions used in the committee. The study was carried out for the Bagmati catchment in Nepal and the Brue catchment in the United Kingdom, with a MATLAB-based implementation of the HBV model. A multi-objective evolutionary optimization genetic algorithm (Deb, 2001) was used to
An overview of the optimization modelling applications
NASA Astrophysics Data System (ADS)
Singh, Ajay
2012-10-01
The optimal use of available resources is of paramount importance against the backdrop of the increasing food, fiber, and other demands of the burgeoning global population and the shrinking resources. The optimal use of these resources can be determined by employing an optimization technique. This paper provides comprehensive reviews of the use of various programming techniques for the solution of different optimization problems. The past reviews are grouped into nine sections based on the theme-based real-world problems they address: conjunctive use planning, groundwater management, seawater intrusion management, irrigation management, achieving optimal cropping patterns, management of reservoir system operation, management of resources in arid and semi-arid regions, solid waste management, and miscellaneous uses comprising problems of hydropower generation and the sugar industry. Conclusions are drawn on where gaps exist and where further research should be focused.
An optimization model of communications satellite planning
NASA Astrophysics Data System (ADS)
Dutta, Amitava; Rama, Dasaratha V.
1992-09-01
A mathematical planning model is developed to help make cost effective decisions on key physical and operational parameters, for a satellite intended to provide customer premises services (CPS). The major characteristics of the model are: (1) interactions and tradeoffs among technical variables are formally captured; (2) values for capacity and operational parameters are obtained through optimization, greatly reducing the need for heuristic choices of parameter values; (3) effects of physical and regulatory constraints are included; and (4) the effects of market prices for transmission capacity on planning variables are explicitly captured. The model is solved optimally using geometric programming methods. Sensitivity analysis yields coefficients, analogous to shadow prices, that quantitatively indicate the change in objective function value resulting from variations in input parameter values. This helps in determining the robustness of planning decisions and in coping with some of the uncertainty that exists at the planning stage. The model can therefore be useful in making economically viable planning decisions for communications satellites.
Model-based optimization of ultrasonic transducers.
Heikkola, Erkki; Laitinen, Mika
2005-01-01
Numerical simulation and automated optimization of Langevin-type ultrasonic transducers are investigated. These kinds of transducers are standard components in various high-power ultrasonics applications such as ultrasonic cleaning and chemical processing. Vibration of the transducer is simulated numerically by the standard finite element method, and the dimensions and shape parameters of a transducer are optimized with respect to different criteria. The novelty of this work is the combination of the simulation model and the optimization problem through efficient automatic differentiation techniques. The capabilities of this approach are demonstrated with practical test cases in which various aspects of the operation of a transducer are improved. PMID:15474952
Model test optimization using the virtual environment for test optimization
Klenke, S.E.; Reese, G.M.; Schoof, L.A.; Shierling, C.
1995-11-01
We present a software environment integrating analysis and test-based models to support optimal modal test design through a Virtual Environment for Test Optimization (VETO). The VETO assists analysis and test engineers in maximizing the value of each modal test. It is particularly advantageous for structural dynamics model reconciliation applications. The VETO enables an engineer to interact with a finite element model of a test object to optimally place sensors and exciters and to investigate the selection of data acquisition parameters needed to conduct a complete modal survey. Additionally, the user can evaluate the use of different types of instrumentation, such as filters, amplifiers and transducers, for which models are available in the VETO. The dynamic response of most of the virtual instruments (including the device under test) is modeled in the state-space domain. Design of modal excitation levels and appropriate test instrumentation is facilitated by the VETO's ability to simulate such features as unmeasured external inputs, A/D quantization effects, and electronic noise. Measures of the quality of the experimental design, including the Modal Assurance Criterion and the Normal Mode Indicator Function, are available. The VETO also integrates tools such as Effective Independence and minamac to assist in the selection of optimal sensor locations. The software is designed around three distinct modules: (1) a main controller and GUI written in C++, (2) a visualization module, taken from FEAVR, running under AVS, and (3) a state-space model and time integration module built in SIMULINK. These modules are designed to run as separate processes on interconnected machines.
Modeling the dynamics of ant colony optimization.
Merkle, Daniel; Middendorf, Martin
2002-01-01
The dynamics of Ant Colony Optimization (ACO) algorithms is studied using a deterministic model that assumes an average expected behavior of the algorithms. The ACO optimization metaheuristic is an iterative approach in which, in every iteration, artificial ants construct solutions randomly but guided by pheromone information stemming from former ants that found good solutions. The behavior of ACO algorithms and the ACO model are analyzed for certain types of permutation problems. It is shown analytically that the decisions of an ant are influenced in an intriguing way by the use of the pheromone information and the properties of the pheromone matrix. This explains why ACO algorithms can show a complex dynamic behavior even when there is only one ant per iteration and no competition occurs. The ACO model is used to describe the algorithm's behavior as a combination of situations with different degrees of competition between the ants. This helps to better understand the dynamics of the algorithm when there are several ants per iteration, as is always the case when using ACO algorithms for optimization. Simulations are done to compare the behavior of the ACO model with the ACO algorithm. Results show that the deterministic model describes essential features of the dynamics of ACO algorithms quite accurately, while other aspects of the algorithms' behavior cannot be found in the model. PMID:12227995
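The construct-then-deposit loop that the deterministic model abstracts can be sketched as a minimal Ant System for the travelling salesman problem. This is a toy sketch under assumed parameter values, not the paper's model or a reference ACO implementation:

```python
import math
import random

def aco_tsp(points, n_ants=8, n_iters=60, alpha=1.0, beta=2.0, rho=0.5, seed=1):
    """Minimal Ant System for the symmetric TSP.

    alpha/beta weight pheromone vs heuristic (1/distance); rho is the
    evaporation rate. Returns the best tour found and its length.
    """
    rng = random.Random(seed)
    n = len(points)
    dist = [[math.dist(points[i], points[j]) for j in range(n)] for i in range(n)]
    tau = [[1.0] * n for _ in range(n)]  # pheromone matrix
    best_tour, best_len = None, float("inf")
    for _ in range(n_iters):
        tours = []
        for _ in range(n_ants):
            tour = [rng.randrange(n)]
            unvisited = set(range(n)) - {tour[0]}
            while unvisited:
                i = tour[-1]
                weights = [(j, (tau[i][j] ** alpha) * ((1.0 / dist[i][j]) ** beta))
                           for j in unvisited]
                total = sum(w for _, w in weights)
                r = rng.random() * total
                for j, w in weights:  # roulette-wheel selection
                    r -= w
                    if r <= 0.0:
                        break
                tour.append(j)
                unvisited.remove(j)
            length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
            tours.append((tour, length))
            if length < best_len:
                best_tour, best_len = tour, length
        # evaporation, then deposit pheromone proportional to tour quality
        for i in range(n):
            for j in range(n):
                tau[i][j] *= (1.0 - rho)
        for tour, length in tours:
            for k in range(n):
                i, j = tour[k], tour[(k + 1) % n]
                tau[i][j] += 1.0 / length
                tau[j][i] += 1.0 / length
    return best_tour, best_len

# four corners of the unit square: the optimal tour is the perimeter, length 4
points = [(0.0, 0.0), (0.0, 1.0), (1.0, 1.0), (1.0, 0.0)]
tour, length = aco_tsp(points)
```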
Improving Vortex Models via Optimal Control Theory
NASA Astrophysics Data System (ADS)
Hemati, Maziar; Eldredge, Jeff; Speyer, Jason
2012-11-01
Flapping wing kinematics, common in biological flight, can allow for agile flight maneuvers. On the other hand, we currently lack sufficiently accurate low-order models that enable such agility in man-made micro air vehicles. Low-order point vortex models have had reasonable success in predicting the qualitative behavior of the aerodynamic forces resulting from such maneuvers. However, these models tend to over-predict the force response when compared to experiments and high-fidelity simulations, in part because they neglect small excursions of separation from the wing's edges. In the present study, we formulate a constrained minimization problem which allows us to relax the usual edge regularity conditions in favor of empirical determination of vortex strengths. The optimal vortex strengths are determined by minimizing the error with respect to empirical force data, while the vortex positions are constrained to evolve according to the impulse matching model developed in previous work. We consider a flat plate undergoing various canonical maneuvers. The optimized model leads to force predictions remarkably close to the empirical data. Additionally, we compare the optimized and original models in an effort to distill appropriate edge conditions for unsteady maneuvers.
Modeling optimal mineral nutrition for hazelnut micropropagation
Technology Transfer Automated Retrieval System (TEKTRAN)
Micropropagation of hazelnut (Corylus avellana L.) is typically difficult due to the wide variation in response among cultivars. This study was designed to overcome that difficulty by modeling the optimal mineral nutrients for micropropagation of C. avellana selections using a response surface desig...
Generalized mathematical models in design optimization
NASA Technical Reports Server (NTRS)
Papalambros, Panos Y.; Rao, J. R. Jagannatha
1989-01-01
The theory of optimality conditions of extremal problems can be extended to problems continuously deformed by an input vector. The connection between the sensitivity, well-posedness, stability and approximation of optimization problems is steadily emerging. The authors believe that the important realization here is that the underlying basis of all such work is still the study of point-to-set maps and of small perturbations, yet what has been identified previously as being just related to solution procedures is now being extended to study modeling itself in its own right. Many important studies related to the theoretical issues of parametric programming and large deformation in nonlinear programming have been reported in the last few years, and the challenge now seems to be in devising effective computational tools for solving these generalized design optimization models.
Optimal Experimental Design for Model Discrimination
Myung, Jay I.; Pitt, Mark A.
2009-01-01
Models of a psychological process can be difficult to discriminate experimentally because it is not easy to determine the values of the critical design variables (e.g., presentation schedule, stimulus structure) that will be most informative in differentiating them. Recent developments in sampling-based search methods in statistics make it possible to determine these values, and thereby identify an optimal experimental design. After describing the method, it is demonstrated in two content areas in cognitive psychology in which models are highly competitive: retention (i.e., forgetting) and categorization. The optimal design is compared with the quality of designs used in the literature. The findings demonstrate that design optimization has the potential to increase the informativeness of the experimental method. PMID:19618983
Toward "optimal" integration of terrestrial biosphere models
NASA Astrophysics Data System (ADS)
Schwalm, Christopher R.; Huntzinger, Deborah N.; Fisher, Joshua B.; Michalak, Anna M.; Bowman, Kevin; Ciais, Philippe; Cook, Robert; El-Masri, Bassil; Hayes, Daniel; Huang, Maoyi; Ito, Akihiko; Jain, Atul; King, Anthony W.; Lei, Huimin; Liu, Junjie; Lu, Chaoqun; Mao, Jiafu; Peng, Shushi; Poulter, Benjamin; Ricciuto, Daniel; Schaefer, Kevin; Shi, Xiaoying; Tao, Bo; Tian, Hanqin; Wang, Weile; Wei, Yaxing; Yang, Jia; Zeng, Ning
2015-06-01
Multimodel ensembles (MME) are commonplace in Earth system modeling. Here we perform MME integration using a 10-member ensemble of terrestrial biosphere models (TBMs) from the Multi-scale Synthesis and Terrestrial Model Intercomparison Project (MsTMIP). We contrast optimal (skill-based for present-day carbon cycling) versus naïve ("one model-one vote") integration. MsTMIP optimal and naïve mean land sink strength estimates (-1.16 versus -1.15 Pg C per annum respectively) are statistically indistinguishable. This also holds for grid cell values and extends to gross uptake, biomass, and net ecosystem productivity. TBM skill is similarly indistinguishable. The added complexity of skill-based integration does not materially change MME values. This suggests that carbon metabolism has predictability limits and/or that all models and references are misspecified. Resolving this issue requires addressing specific uncertainty types (initial conditions, structure, and references) and a change in model development paradigms currently dominant in the TBM community.
Optimal Empirical Prognostic Models of Climate Dynamics
NASA Astrophysics Data System (ADS)
Loskutov, E. M.; Mukhin, D.; Gavrilov, A.; Feigin, A. M.
2014-12-01
In this report an empirical methodology for the prediction of climate dynamics is suggested. We construct dynamical models of data patterns connected with climate indices from observed spatially distributed time series. The models are based on artificial neural network (ANN) parameterization and have the form of a discrete stochastic evolution operator mapping a sequence of system states onto the next one [1]. Different approaches to the reconstruction of an empirical basis (phase variables) for the system's phase space representation, appropriate for forecasting the climate index of interest, are discussed in the report; for this purpose both linear and non-linear data expansions are considered. The most important point of the methodology is finding the optimal structural parameters of the model, such as the dimension of the variable vector (i.e. the number of principal components used for modeling), the time lag used for prediction, and the number of neurons in the ANN determining the quality of approximation. In effect, we need to solve the model selection problem: we want to obtain a model of optimal complexity in relation to the analyzed time series. We use the MDL approach [2] for this purpose: the model providing the best data compression is chosen. The method is applied to spatially distributed time series of sea surface temperature and sea level pressure taken from IRI datasets [3]; the ability of the proposed models to predict different climate indices (incl. the Multivariate ENSO index, the Pacific Decadal Oscillation index, and the North Atlantic Oscillation index) is investigated.
References:
1. Molkov, Ya.I., E.M. Loskutov, D.N. Mukhin, and A.M. Feigin, Random dynamical models from time series. Phys. Rev. E, 85, 036216, 2012.
2. Molkov, Ya.I., D.N. Mukhin, E.M. Loskutov, A.M. Feigin, and G.A. Fidelin, Using the minimum description length principle for global reconstruction of dynamic systems from noisy time series. Phys. Rev. E, 80, 046207, 2009.
3. IRI/LDEO Climate Data Library (http://iridl.ldeo.columbia.edu/)
Optimal hierarchies for fuzzy object models
NASA Astrophysics Data System (ADS)
Matsumoto, Monica M. S.; Udupa, Jayaram K.
2013-03-01
In radiologic clinical practice, the analyses underlying image examinations are qualitative, descriptive, and to some extent subjective. Quantitative radiology (QR) is valuable in clinical radiology, and computerized automatic anatomy recognition (AAR) is an essential step toward that goal. AAR is a body-wide organ recognition strategy. The AAR framework is based on fuzzy object models (FOMs) wherein the models for the different objects are encoded in a hierarchy. We investigated ways of optimally designing the hierarchy tree while building the models. The hierarchy among the objects is a core concept of AAR. The parent-offspring relationships have two main purposes in this context: (i) to bring into AAR more understanding and knowledge about the form, geography, and relationships among objects, and (ii) to guide object recognition and object delineation. In this approach, the relationship among objects is represented by a graph, where the vertices are the objects (organs) and the edges connect all pairs of vertices into a complete graph. Each pair of objects is assigned a weight described by the spatial distance between them, their intensity profile differences, and their correlation in size, all estimated over a population. The optimal hierarchy tree is obtained by a shortest-path algorithm as an optimal spanning tree. To evaluate the optimal hierarchies, we have performed some preliminary tests involving the subsequent recognition step. The body region used for initial investigation was the thorax.
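The optimal-hierarchy construction described above reduces to a spanning-tree computation over the weighted object graph. The sketch below illustrates the idea with Prim's algorithm; the object names and pairwise costs are hypothetical stand-ins, not the AAR population estimates.

```python
# Sketch: derive an object hierarchy as a minimum spanning tree over a
# complete weighted graph of anatomical objects (hypothetical weights).
import heapq

def minimum_spanning_tree(weights, root):
    """Prim's algorithm; weights[(a, b)] = combined distance/intensity/size cost."""
    nodes = {n for pair in weights for n in pair}
    cost = lambda a, b: weights.get((a, b), weights.get((b, a)))
    visited = {root}
    tree = []  # (parent, child) edges of the resulting hierarchy
    heap = [(cost(root, n), root, n) for n in nodes - visited]
    heapq.heapify(heap)
    while heap:
        w, parent, child = heapq.heappop(heap)
        if child in visited:
            continue
        visited.add(child)
        tree.append((parent, child))
        for n in nodes - visited:
            heapq.heappush(heap, (cost(child, n), child, n))
    return tree

# Hypothetical pairwise costs between thoracic objects
w = {("skin", "lungs"): 1.0, ("skin", "heart"): 2.5,
     ("lungs", "heart"): 1.2, ("skin", "aorta"): 3.0,
     ("heart", "aorta"): 0.8, ("lungs", "aorta"): 2.0}
print(minimum_spanning_tree(w, "skin"))  # → [('skin', 'lungs'), ('lungs', 'heart'), ('heart', 'aorta')]
```

Rooting the tree at an easily recognized enclosing object (here "skin") mirrors the parent-offspring guidance idea: each object is recognized relative to the parent with which it correlates most strongly.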
Probabilistic computer model of optimal runway turnoffs
NASA Technical Reports Server (NTRS)
Schoen, M. L.; Preston, O. W.; Summers, L. G.; Nelson, B. A.; Vanderlinden, L.; Mcreynolds, M. C.
1985-01-01
Landing delays are currently a problem at major air carrier airports and many forecasters agree that airport congestion will get worse by the end of the century. It is anticipated that some types of delays can be reduced by an efficient optimal runway exit system allowing the increased approach volumes necessary at congested airports. A computerized Probabilistic Runway Turnoff Model which locates exits and defines path geometry for a selected maximum occupancy time appropriate for each TERPS aircraft category is defined. The model includes an algorithm for lateral ride comfort limits.
Global optimization of bilinear engineering design models
Grossmann, I.; Quesada, I.
1994-12-31
Recently Quesada and Grossmann have proposed a global optimization algorithm for solving NLP problems involving linear fractional and bilinear terms. This model has been motivated by a number of applications in process design. The proposed method relies on the derivation of a convex NLP underestimator problem that is used within a spatial branch and bound search. This paper explores the use of alternative bounding approximations for constructing the underestimator problem. These are applied in the global optimization of problems arising in different engineering areas and for which different relaxations are proposed depending on the mathematical structure of the models. These relaxations include linear and nonlinear underestimator problems. Reformulations that generate additional estimator functions are also employed. Examples from process design, structural design, portfolio investment and layout design are presented.
Optimized Null Model for Protein Structure Networks
Lappe, Michael; Pržulj, Nataša
2009-01-01
Much attention has recently been given to the statistical significance of topological features observed in biological networks. Here, we consider residue interaction graphs (RIGs) as network representations of protein structures with residues as nodes and inter-residue interactions as edges. Degree-preserving randomized models have been widely used for this purpose in biomolecular networks. However, such a single summary statistic of a network may not be detailed enough to capture the complex topological characteristics of protein structures and their network counterparts. Here, we investigate a variety of topological properties of RIGs to find a well fitting network null model for them. The RIGs are derived from a structurally diverse protein data set at various distance cut-offs and for different groups of interacting atoms. We compare the network structure of RIGs to several random graph models. We show that 3-dimensional geometric random graphs, that model spatial relationships between objects, provide the best fit to RIGs. We investigate the relationship between the strength of the fit and various protein structural features. We show that the fit depends on protein size, structural class, and thermostability, but not on quaternary structure. We apply our model to the identification of significantly over-represented structural building blocks, i.e., network motifs, in protein structure networks. As expected, choosing geometric graphs as a null model results in the most specific identification of motifs. Our geometric random graph model may facilitate further graph-based studies of protein conformation space and have important implications for protein structure comparison and prediction. The choice of a well-fitting null model is crucial for finding structural motifs that play an important role in protein folding, stability and function. To our knowledge, this is the first study that addresses the challenge of finding an optimized null model for RIGs, by
Modeling and Global Optimization of DNA separation
Fahrenkopf, Max A.; Ydstie, B. Erik; Mukherjee, Tamal; Schneider, James W.
2014-01-01
We develop a non-convex non-linear programming problem that determines the minimum run time to resolve different lengths of DNA using a gel-free micelle end-labeled free solution electrophoresis separation method. Our optimization framework allows for efficient determination of the utility of different DNA separation platforms and enables the identification of the optimal operating conditions for these DNA separation devices. The non-linear programming problem requires a model for signal spacing and signal width, which is known for many DNA separation methods. As a case study, we show how our approach is used to determine the optimal run conditions for micelle end-labeled free-solution electrophoresis and examine the trade-offs between a single capillary system and a parallel capillary system. Parallel capillaries are shown to only be beneficial for DNA lengths above 230 bases using a polydisperse micelle end-label; otherwise, single capillaries produce faster separations. PMID:24764606
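The core trade-off above, run time falling while resolution degrades as the field rises, can be illustrated with a toy feasibility search. All constants and the mobility/resolution expressions below are illustrative assumptions, not the paper's end-labeled free-solution model.

```python
# Sketch: minimum-run-time search for an electrophoretic separation under a
# resolution constraint (toy models, hypothetical constants).
import math

L_CAP = 0.3      # effective capillary length, m (assumed)
MU = 2.0e-8      # effective electrophoretic mobility, m^2/(V*s) (assumed)
K_RES = 60.0     # resolution prefactor (assumed)
R_MIN = 1.5      # required baseline resolution between adjacent lengths

def run_time(E):
    """Migration time across the capillary at field strength E (V/m)."""
    return L_CAP / (MU * E)

def resolution(E):
    """Toy diffusion-limited resolution: degrades as the field rises."""
    return K_RES / math.sqrt(E)

# Run time decreases with E while resolution also decreases, so the minimum
# feasible run time sits at the largest field still meeting R_MIN.
feasible = [E for E in range(100, 20001, 100) if resolution(E) >= R_MIN]
best_E = max(feasible)
print(best_E, run_time(best_E))  # fastest feasible field and its run time (s)
```

In the paper's formulation this feasibility boundary becomes a constraint of a non-convex NLP over several operating variables at once; the one-dimensional grid here only shows the structure of the trade-off.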
Optimizing electroslag cladding with finite element modeling
Li, M.V.; Atteridge, D.G.; Meekisho, L.
1996-12-31
Electroslag cladding of nickel alloys onto carbon steel propeller shafts was optimized in terms of interpass temperatures. A two dimensional finite element model was used in this study to analyze the heat transfer induced by multipass electroslag cladding. Changes of interpass temperatures during a cladding experiment with uniform initial temperature distribution on a section of shaft were first simulated. It was concluded that a uniform initial temperature distribution would lead to interpass temperatures outside the optimal range if continuous cladding is expected. The difference in the cooling conditions between experimental and full-size shafts and its impact on interpass temperatures during the cladding were discussed. Electroslag cladding onto a much longer shaft, virtually a semi-infinite shaft, was analyzed with specific reference to the practical applications of electroslag cladding. An optimal initial preheating temperature distribution was obtained for continuous cladding on full-size shafts which would keep the interpass temperatures within the required range.
Combined optimization model for sustainable energization strategy
NASA Astrophysics Data System (ADS)
Abtew, Mohammed Seid
Access to energy is a foundation to establish a positive impact on multiple aspects of human development. Both developed and developing countries have a common concern of achieving a sustainable energy supply to fuel economic growth and improve the quality of life with minimal environmental impacts. The Least Developed Countries (LDCs), however, have different economic, social, and energy systems. Prevalence of power outages, lack of access to electricity, structural dissimilarity between rural and urban regions, and traditional fuel dominance for cooking and the resultant health and environmental hazards are some of the distinguishing characteristics of these nations. Most energy planning models have been designed for developed countries' socio-economic demographics and have missed the opportunity to address special features of the poor countries. An improved mixed-integer programming energy-source optimization model is developed to address limitations associated with using current energy optimization models for LDCs, to support development of sustainable energization strategies, and to ensure diversification and risk-management provisions in the selected energy mix. The model predicted a shift from a traditional-fuel-reliant and weather-vulnerable energy source mix to a least-cost and reliable portfolio of modern clean energy sources, a climb up the energy ladder, and multifaceted economic, social, and environmental benefits. At the same time, it represented a transition strategy that evolves to increasingly cleaner energy technologies with growth, as opposed to an expensive solution that leapfrogs immediately to the cleanest possible, overreaching technologies.
ITER central solenoid model coil impregnation optimization
NASA Astrophysics Data System (ADS)
Schutz, J. B.; Munshi, N. A.; Smith, K. B.
The success of the vacuum-pressure impregnation of the International Thermonuclear Experimental Reactor central solenoid is critical to the success of the magnet system. Analysis of fluid flow through a fabric bed is extremely complicated, and complete analytical solutions are not available, but semiempirical methods can be adapted to model these flows. Several of these models were evaluated to predict the impregnation characteristics of a liquid resin through a mat of reinforcing glass fabric, and an experiment was performed to validate these models. The effects of applied pressure differential, glass fibre volume fraction, resin viscosity and impregnation time were examined analytically. From the results of this optimization, it is apparent that the use of resin systems with elevated processing temperatures offers significant advantages due to their lower viscosity and longer working life, and they may be essential for large-scale impregnations.
Centerline optimization using vessel quantification model
NASA Astrophysics Data System (ADS)
Cai, Wenli; Dachille, Frank; Meissner, Michael
2005-04-01
An accurate and reproducible centerline is needed in many vascular applications, such as virtual angioscopy, vessel quantification, and surgery planning. This paper presents a progressive optimization algorithm to refine a centerline after it is extracted. A new centerline model definition is proposed that allows quantifiable minimum cross-sectional area. A centerline is divided into a number of segments. Each segment corresponds to a local generalized cylinder. A reference frame (cross-section) is set up at the center point of each cylinder. The position and the orientation of the cross-section are optimized within each cylinder by finding the minimum cross-sectional area. All local-optimized center points are approximated by a NURBS curve globally, and the curve is re-sampled to the refined set of center points. This refinement iteration, local optimization plus global approximation, converges to the optimal centerline, yielding a smooth and accurate central axis curve. The application discussed in this paper is vessel quantification and virtual angioscopy. However, the algorithm is a general centerline refinement method that can be applied to other applications that need accurate and reproducible centerlines.
Parameter optimization in S-system models
Vilela, Marco; Chou, I-Chun; Vinga, Susana; Vasconcelos, Ana Tereza R; Voit, Eberhard O; Almeida, Jonas S
2008-01-01
Background: The inverse problem of identifying the topology of biological networks from their time series responses is a cornerstone challenge in systems biology. We tackle this challenge here through the parameterization of S-system models. It was previously shown that parameter identification can be performed as an optimization based on the decoupling of the differential S-system equations, which results in a set of algebraic equations. Results: A novel parameterization solution is proposed for the identification of S-system models from time series when no information about the network topology is known. The method is based on eigenvector optimization of a matrix formed from multiple regression equations of the linearized decoupled S-system. Furthermore, the algorithm is extended to the optimization of network topologies with constraints on metabolites and fluxes. These constraints rejoin the system in cases where it had been fragmented by decoupling. We demonstrate with synthetic time series why the algorithm can be expected to converge in most cases. Conclusion: A procedure was developed that facilitates automated reverse engineering tasks for biological networks using S-systems. The proposed method of eigenvector optimization constitutes an advancement over S-system parameter identification from time series using a recent method called Alternating Regression. The proposed method overcomes convergence issues encountered in alternating regression by identifying nonlinear constraints that restrict the search space to computationally feasible solutions. Because the parameter identification is still performed for each metabolite separately, the modularity and linear time characteristics of the alternating regression method are preserved. Simulation studies illustrate how the proposed algorithm identifies the correct network topology out of a collection of models which all fit the dynamical time series essentially equally well. PMID:18416837
Modeling, Analysis, and Optimization Issues for Large Space Structures
NASA Technical Reports Server (NTRS)
Pinson, L. D. (Compiler); Amos, A. K. (Compiler); Venkayya, V. B. (Compiler)
1983-01-01
Topics concerning the modeling, analysis, and optimization of large space structures are discussed including structure-control interaction, structural and structural dynamics modeling, thermal analysis, testing, and design.
Extremal Optimization for p-Spin Models
NASA Astrophysics Data System (ADS)
Falkner, Stefan; Boettcher, Stefan
2012-02-01
It was shown recently that finding ground states in the 3-spin model on a 2-dimensional triangular lattice poses an NP-hard problem [1]. We use the extremal optimization (EO) heuristic [2] to explore ground state energies and finite-size scaling corrections [3]. EO predicts the thermodynamic ground state energy with high accuracy, based on the observation that finite size corrections appear to decay purely with system size. Just as found in 3-spin models on r-regular graphs, there are no noticeable anomalous corrections to these energies. Interestingly, the results are sufficiently accurate to detect alternating patterns in the energies when the lattice size L is divisible by 6. Although ground states seem very prolific and might seem easy to obtain with simple greedy algorithms, our tests show significant improvement in the data with EO. [1] PRE 83 (2011) 046709, [2] PRL 86 (2001) 5211, [3] S. Boettcher and S. Falkner (in preparation).
Optimal evolution models for quantum tomography
NASA Astrophysics Data System (ADS)
Czerwiński, Artur
2016-02-01
The research presented in this article concerns the stroboscopic approach to quantum tomography, which is an area of science where quantum physics and linear algebra overlap. In this article we introduce the algebraic structure of the parameter-dependent quantum channels for 2-level and 3-level systems such that the generator of evolution corresponding to the Kraus operators has no degenerate eigenvalues. In such cases the index of cyclicity of the generator is equal to 1, which physically means that there exists one observable the measurement of which, performed a sufficient number of times at distinct instants, provides enough data to reconstruct the initial density matrix and, consequently, the trajectory of the state. The necessary conditions for the parameters and relations between them are introduced. The results presented in this paper seem to have considerable potential applications in experiments due to the fact that one can perform quantum tomography by conducting only one kind of measurement. Therefore, the analyzed evolution models can be considered optimal in the context of quantum tomography. Finally, we introduce some remarks concerning optimal evolution models in the case of n-dimensional Hilbert space.
Designing Sensor Networks by a Generalized Highly Optimized Tolerance Model
NASA Astrophysics Data System (ADS)
Miyano, Takaya; Yamakoshi, Miyuki; Higashino, Sadanori; Tsutsui, Takako
A variant of the highly optimized tolerance model is applied to a toy problem of bioterrorism to determine the optimal arrangement of hypothetical bio-sensors to avert an epidemic outbreak. A nonlinear loss function is used in searching for the optimal structure of the sensor network. The proposed method successfully averts disastrously large events, which cannot be achieved by the original highly optimized tolerance model.
Application of simulation models for the optimization of business processes
NASA Astrophysics Data System (ADS)
Jašek, Roman; Sedláček, Michal; Chramcov, Bronislav; Dvořák, Jiří
2016-06-01
The paper deals with applications of modeling and simulation tools to the optimization of business processes, in particular to optimizing signal flow in a security company. Simul8 was selected as the modeling tool; it supports process modeling based on discrete-event simulation and enables the creation of a visual model of production and distribution processes.
Model Identification for Optimal Diesel Emissions Control
Stevens, Andrew J.; Sun, Yannan; Song, Xiaobo; Parker, Gordon
2013-06-20
In this paper we develop a model-based controller for diesel emission reduction using system identification methods. Specifically, our method minimizes the downstream readings from a production NOx sensor while injecting a minimal amount of urea upstream. Based on the linear quadratic estimator, we derive the closed-form solution to a cost function that accounts for the case where some of the system inputs are not controllable. Our cost function can also be tuned to trade off between input usage and output optimization. Our approach performs better than a production controller in simulation. Our NOx conversion efficiency was 92.7% while the production controller achieved 92.4%. For NH3 conversion, our efficiency was 98.7% compared to 88.5% for the production controller.
Optimization approaches to nonlinear model predictive control
Biegler, L.T. (Dept. of Chemical Engineering); Rawlings, J.B. (Dept. of Chemical Engineering)
1991-01-01
With the development of sophisticated methods for nonlinear programming and powerful computer hardware, it now becomes useful and efficient to formulate and solve nonlinear process control problems through on-line optimization methods. This paper explores and reviews control techniques based on repeated solution of nonlinear programming (NLP) problems. Here several advantages present themselves. These include minimization of readily quantifiable objectives, coordinated and accurate handling of process nonlinearities and interactions, and systematic ways of dealing with process constraints. We motivate this NLP-based approach with small nonlinear examples and present a basic algorithm for optimization-based process control. As can be seen this approach is a straightforward extension of popular model-predictive controllers (MPCs) that are used for linear systems. The statement of the basic algorithm raises a number of questions regarding stability and robustness of the method, efficiency of the control calculations, incorporation of feedback into the controller and reliable ways of handling process constraints. Each of these will be treated through analysis and/or modification of the basic algorithm. To highlight and support this discussion, several examples are presented and key results are examined and further developed. 74 refs., 11 figs.
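The receding-horizon idea the abstract describes, repeatedly solving an open-loop optimization and applying only the first move, can be sketched with a toy plant. The model, horizon length, weights, and the crude grid search standing in for an NLP solver are all illustrative assumptions, not the authors' formulation.

```python
# Sketch of a receding-horizon nonlinear MPC loop (toy plant, grid search
# standing in for the NLP solver; all parameters are hypothetical).

def plant(x, u, dt=0.1):
    """Toy nonlinear process: cubic self-damping plus a control input."""
    return x + dt * (-x ** 3 + u)

def horizon_cost(x, u, xref, horizon=5):
    """Cost of holding input u constant over the prediction horizon."""
    cost = 0.0
    for _ in range(horizon):
        x = plant(x, u)
        cost += (x - xref) ** 2 + 0.01 * u ** 2
    return cost

def nmpc_step(x, xref, u_grid):
    """Crude stand-in for the NLP solve: pick the best input on a grid."""
    return min(u_grid, key=lambda u: horizon_cost(x, u, xref))

u_grid = [i / 10.0 for i in range(-30, 31)]  # input constraint |u| <= 3
x, xref = 2.0, 1.0
for _ in range(40):
    x = plant(x, nmpc_step(x, xref, u_grid))  # apply only the first move
print(round(x, 3))  # settles near the setpoint xref = 1
```

The grid search handles the input constraint trivially by construction; in a real NMPC the same step is an NLP whose stability, robustness, and feasibility are exactly the questions the paper raises.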
Optimized Markov state models for metastable systems
NASA Astrophysics Data System (ADS)
Guarnera, Enrico; Vanden-Eijnden, Eric
2016-07-01
A method is proposed to identify target states that optimize a metastability index amongst a set of trial states and use these target states as milestones (or core sets) to build Markov State Models (MSMs). If the optimized metastability index is small, this automatically guarantees the accuracy of the MSM, in the sense that the transitions between the target milestones are indeed approximately Markovian. The method is simple to implement and use, it does not require that the dynamics on the trial milestones be Markovian, and it also offers the possibility to partition the system's state-space by assigning every trial milestone to the target milestone it is most likely to visit next and to identify transition state regions. Here the method is tested on the Gly-Ala-Gly peptide, where it is shown to correctly identify the expected metastable states in the dihedral angle space of the molecule without a priori information about these states. It is also applied to analyze the folding landscape of the Beta3s mini-protein, where it is shown to identify the folded basin as a connecting hub between a helix-rich region, which is entropically stabilized, and a beta-rich region, which is energetically stabilized and acts as a kinetic trap.
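Once a trajectory has been discretized onto milestones, estimating the MSM itself is a counting exercise: tally lag-time transitions and row-normalize. A minimal sketch on a synthetic trajectory (not the Gly-Ala-Gly or Beta3s data):

```python
# Sketch: estimate a Markov state model transition matrix from a discrete
# trajectory of milestone (core-set) hits; the trajectory is synthetic.

def msm_transition_matrix(traj, n_states, lag=1):
    """Row-stochastic transition matrix from observed lag-time transitions."""
    counts = [[0] * n_states for _ in range(n_states)]
    for a, b in zip(traj[:-lag], traj[lag:]):
        counts[a][b] += 1
    rows = []
    for row in counts:
        total = sum(row)
        rows.append([c / total if total else 0.0 for c in row])
    return rows

# Synthetic two-milestone trajectory with rare 0<->1 switches (metastability)
traj = [0] * 50 + [1] * 50 + [0] * 50
T = msm_transition_matrix(traj, 2)
# Diagonal entries near 1 signal that the chosen milestones are metastable
print(T)
```

A metastability index in the spirit of the abstract can then be read off this matrix: milestones whose diagonal entries stay close to 1 yield transitions that are approximately Markovian.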
Determining Reduced Order Models for Optimal Stochastic Reduced Order Models
Bonney, Matthew S.; Brake, Matthew R.W.
2015-08-01
The use of parameterized reduced order models (PROMs) within the stochastic reduced order model (SROM) framework is a logical progression for both methods. In this report, five different parameterized reduced order models are selected and critiqued against one another and against the truth model for the example of the Brake-Reuss beam. The models are: a Taylor series using finite differences, a proper orthogonal decomposition of the output, a Craig-Bampton representation of the model, a method that uses Hyper-Dual numbers to determine the sensitivities, and a Meta-Model method that uses the Hyper-Dual results and constructs a polynomial curve to better represent the output data. The methods are compared against a parameter sweep and a distribution propagation where the first four statistical moments are used as a comparison. Each method produces very accurate results, with the Craig-Bampton reduction being the least accurate. The models are also compared on the time required to evaluate each model, where the Meta-Model requires the least computation time by a significant amount. Each of the five models provided accurate results in a reasonable time frame. The determination of which model to use depends on the availability of the high-fidelity model and how many evaluations can be performed. Analysis of the output distribution is examined by using a large Monte-Carlo simulation along with a reduced simulation using Latin hypercube sampling and the stochastic reduced order model sampling technique. Both techniques produced accurate results. The stochastic reduced order modeling technique produced less error when compared to an exhaustive sampling for the majority of methods.
Response Surface Model Building and Multidisciplinary Optimization Using D-Optimal Designs
NASA Technical Reports Server (NTRS)
Unal, Resit; Lepsch, Roger A.; McMillin, Mark L.
1998-01-01
This paper discusses response surface methods for approximation model building and multidisciplinary design optimization. The response surface methods discussed are central composite designs, Bayesian methods and D-optimal designs. An over-determined D-optimal design is applied to a configuration design and optimization study of a wing-body launch vehicle. Results suggest that over-determined D-optimal designs may provide an efficient approach for approximation model building and for multidisciplinary design optimization.
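Once the design points are fixed, fitting a second-order response surface is an ordinary least-squares problem. The sketch below uses a small face-centered design and a synthetic "simulation" response as stand-ins; neither the points nor the responses come from the launch-vehicle study.

```python
# Sketch: least-squares fit of a full quadratic response surface at a set of
# design points (synthetic design and responses, hypothetical coefficients).
import numpy as np

def quadratic_features(x1, x2):
    """Basis for a full second-order response surface in two design variables."""
    return [1.0, x1, x2, x1 * x2, x1 ** 2, x2 ** 2]

# A small face-centered design, a simple stand-in for a D-optimal point set
pts = [(-1, -1), (-1, 1), (1, -1), (1, 1),
       (-1, 0), (1, 0), (0, -1), (0, 1), (0, 0)]
# Synthetic responses evaluated from a known quadratic "true" surface
y = [3.0 + 2.0 * a - 1.0 * b + 0.5 * a * b + 4.0 * a ** 2 + 1.5 * b ** 2
     for a, b in pts]

X = np.array([quadratic_features(a, b) for a, b in pts])
coef, *_ = np.linalg.lstsq(X, np.array(y), rcond=None)
print(np.round(coef, 6))  # recovers the generating coefficients exactly
```

An over-determined design, more points than coefficients as here (9 points, 6 terms), is what makes the least-squares step well-posed; a D-optimal criterion would additionally choose the points to maximize the determinant of X'X.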
Geomagnetic modeling by optimal recursive filtering
NASA Technical Reports Server (NTRS)
Gibbs, B. P.; Estes, R. H.
1981-01-01
The results of a preliminary study to determine the feasibility of using Kalman filter techniques for geomagnetic field modeling are given. Specifically, five separate field models were computed using observatory annual means, satellite, survey and airborne data for the years 1950 to 1976. Each of the individual field models used approximately five years of data. These five models were combined using a recursive information filter (a Kalman filter written in terms of information matrices rather than covariance matrices). The resulting estimate of the geomagnetic field and its secular variation was propagated four years past the data to the time of the MAGSAT data. The accuracy with which this field model matched the MAGSAT data was evaluated by comparisons with predictions from other pre-MAGSAT field models. The field estimate obtained by recursive estimation was found to be superior to all other models.
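The information-filter combination step can be illustrated in scalar form: information (inverse variance) from independent estimates simply adds, and the fused estimate is the information-weighted mean. The numbers below are hypothetical, not the study's actual field-model estimates.

```python
# Sketch: information-filter fusion of independent estimates (scalar toy;
# the real field models are multivariate spherical-harmonic fits).

def information_fuse(estimates):
    """Combine independent (estimate, variance) pairs; information adds."""
    total_info = sum(1.0 / var for _, var in estimates)
    weighted = sum(x / var for x, var in estimates)
    return weighted / total_info, 1.0 / total_info

# Five hypothetical epoch estimates of one field coefficient (value, variance)
epochs = [(-30420.0, 25.0), (-30405.0, 16.0), (-30398.0, 16.0),
          (-30390.0, 9.0), (-30385.0, 9.0)]
x, var = information_fuse(epochs)
print(round(x, 1), round(var, 2))
```

Working with information matrices rather than covariance matrices makes this fusion a plain sum, which is why the recursive information filter is convenient for merging the five epoch models.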
Integrative systems modeling and multi-objective optimization
This presentation describes a number of algorithms, tools, and methods for utilizing multi-objective optimization within integrated systems modeling frameworks. We first present innovative methods using a genetic algorithm to optimally calibrate the VELMA and SWAT ecohydrological ...
Quantitative Modeling and Optimization of Magnetic Tweezers
Lipfert, Jan; Hao, Xiaomin; Dekker, Nynke H.
2009-01-01
Magnetic tweezers are a powerful tool to manipulate single DNA or RNA molecules and to study nucleic acid-protein interactions in real time. Here, we have modeled the magnetic fields of permanent magnets in magnetic tweezers and computed the forces exerted on superparamagnetic beads from first principles. For simple, symmetric geometries the magnetic fields can be calculated semianalytically using the Biot-Savart law. For complicated geometries and in the presence of an iron yoke, we employ a finite-element three-dimensional PDE solver to numerically solve the magnetostatic problem. The theoretical predictions are in quantitative agreement with direct Hall-probe measurements of the magnetic field and with measurements of the force exerted on DNA-tethered beads. Using these predictive theories, we systematically explore the effects of magnet alignment, magnet spacing, magnet size, and of adding an iron yoke to the magnets on the forces that can be exerted on tethered particles. We find that the optimal configuration for maximal stretching forces is a vertically aligned pair of magnets, with a minimal gap between the magnets and minimal flow cell thickness. Following these principles, we present a configuration that allows one to apply ≥40 pN stretching forces on ≈1-μm tethered beads. PMID:19527664
Optimal estimator model for human spatial orientation
NASA Technical Reports Server (NTRS)
Borah, J.; Young, L. R.; Curry, R. E.
1979-01-01
A model is being developed to predict pilot dynamic spatial orientation in response to multisensory stimuli. Motion stimuli are first processed by dynamic models of the visual, vestibular, tactile, and proprioceptive sensors. Central nervous system function is then modeled as a steady-state Kalman filter which blends information from the various sensors to form an estimate of spatial orientation. Where necessary, this linear central estimator has been augmented with nonlinear elements to reflect more accurately some highly nonlinear human response characteristics. Computer implementation of the model has shown agreement with several important qualitative characteristics of human spatial orientation, and it is felt that with further modification and additional experimental data the model can be improved and extended. Possible means are described for extending the model to better represent the active pilot with varying skill and work load levels.
A MILP-Model for the Optimization of Transports
NASA Astrophysics Data System (ADS)
Björk, Kaj-Mikael
2010-09-01
This paper presents work on developing a mathematical model for the optimization of transports. The decisions to be made are routing decisions, truck assignment, and the determination of the pickup order for a set of loads and available trucks. The model presented takes these aspects into account simultaneously. The MILP model is implemented in the Microsoft Excel environment, utilizing the LP-solve freeware as the optimization engine and Visual Basic for Applications as the modeling interface.
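The truck-assignment part of such a model can be illustrated in miniature. The sketch below brute-forces a one-truck-per-load assignment with a hypothetical cost matrix; the paper's actual MILP also handles routing and pickup order, which this toy omits.

```python
from itertools import permutations

# Hypothetical cost[i][j]: cost of truck i carrying load j
cost = [[4, 2, 8],
        [3, 7, 5],
        [6, 4, 3]]

def best_assignment(cost):
    """Brute-force the minimum-cost one-truck-per-load assignment."""
    n = len(cost)
    best, best_cost = None, float("inf")
    for perm in permutations(range(n)):   # perm[i] = load carried by truck i
        c = sum(cost[i][perm[i]] for i in range(n))
        if c < best_cost:
            best, best_cost = perm, c
    return best, best_cost

assignment, total = best_assignment(cost)
```

Enumeration is only viable for a handful of trucks; a MILP solver such as lp_solve handles the same structure at realistic problem sizes via branch-and-bound.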
Optimal Scaling of Interaction Effects in Generalized Linear Models
ERIC Educational Resources Information Center
van Rosmalen, Joost; Koning, Alex J.; Groenen, Patrick J. F.
2009-01-01
Multiplicative interaction models, such as Goodman's (1981) RC(M) association models, can be a useful tool for analyzing the content of interaction effects. However, most models for interaction effects are suitable only for data sets with two or three predictor variables. Here, we discuss an optimal scaling model for analyzing the content of…
On Optimal Input Design and Model Selection for Communication Channels
Li, Yanyan; Djouadi, Seddik M; Olama, Mohammed M
2013-01-01
In this paper, the optimal model (structure) selection and input design which minimize the worst-case identification error for communication systems are provided. The problem is formulated using metric complexity theory in a Hilbert space setting. It is pointed out that model selection and input design can be handled independently. Kolmogorov n-width is used to characterize the representation error introduced by model selection, while Gel'fand and time n-widths are used to represent the inherent error introduced by input design. After the model is selected, an optimal input which minimizes the worst-case identification error is shown to exist. In particular, it is proven that the optimal model for reducing the representation error is a Finite Impulse Response (FIR) model, and the optimal input is an impulse at the start of the observation interval. FIR models are widely popular in communication systems, such as in Orthogonal Frequency Division Multiplexing (OFDM) systems.
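The intuition behind the impulse-input result can be seen in a tiny noiseless demonstration: driving an FIR channel with an impulse makes the output equal to the channel's taps, so the model is read off directly. The channel coefficients below are hypothetical.

```python
import numpy as np

h = np.array([0.9, 0.4, -0.2])   # hypothetical FIR channel taps
u = np.zeros(8)
u[0] = 1.0                       # impulse at the start of the observation interval
y = np.convolve(u, h)[:len(u)]   # noiseless channel output
h_hat = y[:len(h)]               # output directly reveals the taps
```

With noise present the same idea holds in expectation, and the worst-case error analysis in the paper quantifies how well this input performs over the whole model set.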
Model Specification Searches Using Ant Colony Optimization Algorithms
ERIC Educational Resources Information Center
Marcoulides, George A.; Drezner, Zvi
2003-01-01
Ant colony optimization is a recently proposed heuristic procedure inspired by the behavior of real ants. This article applies the procedure to model specification searches in structural equation modeling and reports the results. The results demonstrate the capabilities of ant colony optimization algorithms for conducting automated searches.
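A minimal version of this idea can be sketched as a binary specification search: each candidate path (parameter inclusion) carries a pheromone level that is reinforced along the best specification found. The target specification and fitness criterion below are toy stand-ins, not the structural-equation criteria used in the article.

```python
import random

random.seed(1)
target = [1, 0, 1, 1, 0]            # hypothetical best-fitting specification

def fitness(spec):                  # toy fit criterion: agreement with target
    return sum(s == t for s, t in zip(spec, target))

tau = [0.5] * len(target)           # pheromone = inclusion probability per path
best, best_fit = None, -1
for iteration in range(60):
    for ant in range(10):           # each ant samples a candidate specification
        spec = [1 if random.random() < t else 0 for t in tau]
        f = fitness(spec)
        if f > best_fit:
            best, best_fit = spec, f
    # evaporate pheromone and reinforce along the best specification so far
    tau = [0.9 * t + 0.1 * b for t, b in zip(tau, best)]
```

In the article's setting the fitness would be a model fit index from a structural equation modeling run rather than this toy matching score.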
Stochastic Robust Mathematical Programming Model for Power System Optimization
Liu, Cong; Changhyeok, Lee; Haoyong, Chen; Mehrotra, Sanjay
2016-01-01
This paper presents a stochastic robust framework for two-stage power system optimization problems with uncertainty. The model optimizes the probabilistic expectation of different worst-case scenarios with different uncertainty sets. A case study of unit commitment shows the effectiveness of the proposed model and algorithms.
Optimal Model Discovery of Periodic Variable Stars
NASA Astrophysics Data System (ADS)
Bellinger, Earl Patrick; Kanbur, Shashi; Wysocki, Daniel
2015-01-01
Precision modeling of periodic variable stars is important for various pursuits such as establishing the extragalactic distance scale and measuring the Hubble constant. Many difficulties exist, however, when attempting to model the light curves of these objects, as photometric observations of variable stars tend to be noisy and sparsely sampled. As a consequence, existing methods commonly fail to produce models that accurately describe their light curves. In this talk, I introduce a new machine learning approach for modeling light curves of periodic variables that is robust to the presence of these effects. I demonstrate this method on fifty thousand Cepheid and RR Lyrae variable stars in the Galaxy as well as the Magellanic Clouds and show that it significantly outperforms existing methods.
A new algorithm for L2 optimal model reduction
NASA Technical Reports Server (NTRS)
Spanos, J. T.; Milman, M. H.; Mingori, D. L.
1992-01-01
In this paper the quadratically optimal model reduction problem for single-input, single-output systems is considered. The reduced order model is determined by minimizing the integral of the magnitude-squared of the transfer function error. It is shown that the numerator coefficients of the optimal approximant satisfy a weighted least squares problem and, on this basis, a two-step iterative algorithm is developed combining a least squares solver with a gradient minimizer. Convergence of the proposed algorithm to stationary values of the quadratic cost function is proved. The formulation is extended to handle the frequency-weighted optimal model reduction problem. Three examples demonstrate the optimization algorithm.
Mathematical Model For Engineering Analysis And Optimization
NASA Technical Reports Server (NTRS)
Sobieski, Jaroslaw
1992-01-01
Computational support for engineering design process reveals behavior of designed system in response to external stimuli; and finds out how behavior modified by changing physical attributes of system. System-sensitivity analysis combined with extrapolation forms model of design complementary to model of behavior, capable of direct simulation of effects of changes in design variables. Algorithms developed for this method applicable to design of large engineering systems, especially those consisting of several subsystems involving many disciplines.
An optimization strategy for a biokinetic model of inhaled radionuclides
Shyr, L.J.; Griffith, W.C.; Boecker, B.B.
1991-04-01
Models for material disposition and dosimetry involve predictions of the biokinetics of the material among compartments representing organs and tissues in the body. Because of a lack of human data for most toxicants, many of the basic data are derived by modeling the results obtained from studies using laboratory animals. Such a biomathematical model is usually developed by adjusting the model parameters to make the model predictions match the measured retention and excretion data visually. The fitting process can be very time-consuming for a complicated model, and visual model selections may be subjective and easily biased by the scale or the data used. Due to the development of computerized optimization methods, manual fitting could benefit from an automated process. However, for a complicated model, an automated process without an optimization strategy will not be efficient, and may not produce fruitful results. In this paper, procedures for, and implementation of, an optimization strategy for a complicated mathematical model is demonstrated by optimizing a biokinetic model for 144Ce in fused aluminosilicate particles inhaled by beagle dogs. The optimized results using SimuSolv were compared to manual fitting results obtained previously using the model simulation software GASP. Also, statistical criteria provided by SimuSolv, such as likelihood function values, were used to help or verify visual model selections.
Optimal control of a delayed SLBS computer virus model
NASA Astrophysics Data System (ADS)
Chen, Lijuan; Hattaf, Khalid; Sun, Jitao
2015-06-01
In this paper, a delayed SLBS computer virus model is first proposed. To the best of our knowledge, this is the first work to discuss optimal control of the SLBS model. Using an optimal control strategy, we present an optimal strategy to minimize the total number of breaking-out computers and the cost associated with toxication or detoxication. We show that an optimal control solution exists for the control problem. Some examples are presented to show the efficiency of this optimal control.
Hierarchical models and iterative optimization of hybrid systems
NASA Astrophysics Data System (ADS)
Rasina, Irina V.; Baturina, Olga V.; Nasatueva, Soelma N.
2016-06-01
A class of hybrid control systems based on a two-level discrete-continuous model is considered. The concept of this model was proposed and developed in preceding works as a concretization of the general multi-step system with related optimality conditions. A new iterative optimization procedure for such systems is developed, based on localization of the global optimality conditions via contraction of the control set.
Multipurpose optimization models for high level waste vitrification
Hoza, M.
1994-08-01
Optimal Waste Loading (OWL) models have been developed as multipurpose tools for high-level waste studies for the Tank Waste Remediation Program at Hanford. Using nonlinear programming techniques, these models maximize the waste loading of the vitrified waste and optimize the glass formers composition such that the glass produced has the appropriate properties within the melter, and the resultant vitrified waste form meets the requirements for disposal. The OWL model can be used for a single waste stream or for blended streams. The models can determine optimal continuous blends or optimal discrete blends of a number of different wastes. The OWL models have been used to identify the most restrictive constraints, to evaluate prospective waste pretreatment methods, to formulate and evaluate blending strategies, and to determine the impacts of variability in the wastes. The OWL models will be used to aid in the design of frits and to maximize the waste loading in the glass for High-Level Waste (HLW) vitrification.
Optimization modeling for industrial waste reduction planning
Roberge, H.D.; Baetz, B.W. (Dept. of Civil Engineering)
1994-01-01
A model is developed for planning the implementation of industrial waste reduction and waste management strategies. The model is based on minimizing the overall cost of waste reduction and waste management for an industrial facility over a certain time period. The problem is formulated as a general mixed integer linear programming (MILP) problem, where the objective function includes capital and operating costs and is subject to a number of constraints that define the system under consideration. The information required to use the modeling approach includes the capital and operating costs of the various options being considered, discount rates, escalation factors, the capacity limitations on various options for waste treatment, disposal and management, as well as treatment efficiencies and the potential for waste reduction. The general modeling approach is applied to a case study facility. The MILP formulation was solved using a commercially available software package. The model could be used by an environmental engineer or a planner in an industry that is considering implementing waste reduction projects. Ideally, the industry would have generated information on modifications that could reduce their waste generation, as well as information on their current waste management practices. In the event that specific waste reduction projects have not been identified, the economic feasibility of potential future projects could be determined.
Geomagnetic field modeling by optimal recursive filtering
NASA Technical Reports Server (NTRS)
1980-01-01
Five individual 5 year mini-batch geomagnetic models were generated and two computer programs were developed to process the models. The first program computes statistics (mean sigma, weighted sigma) on the changes in the first derivatives (linear terms) of the spherical harmonic coefficients between mini-batches. The program ran successfully. The statistics are intended for use in computing the state noise matrix required in the information filter. The second program is the information filter. Most subroutines used in the filter were tested, but the coefficient statistics must be analyzed before the filter is run.
Optimizing glassy p-spin models
NASA Astrophysics Data System (ADS)
Thomas, Creighton K.; Katzgraber, Helmut G.
2011-04-01
Computing the ground state of Ising spin-glass models with p-spin interactions is, in general, an NP-hard problem. In this work we show that unlike in the case of the standard Ising spin glass with two-spin interactions, computing ground states with p=3 is an NP-hard problem even in two space dimensions. Furthermore, we present generic exact and heuristic algorithms for finding ground states of p-spin models with high confidence for systems of up to several thousand spins.
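For very small systems the ground state of a p=3 model can still be found by exhaustive enumeration, which illustrates the objective the abstract's exact and heuristic algorithms target at scale. The couplings below are a hypothetical 4-spin instance.

```python
from itertools import product

# Hypothetical small instance: couplings J_ijk on three spin triples
J = {(0, 1, 2): 1.0, (1, 2, 3): -1.0, (0, 2, 3): 0.5}
N = 4

def energy(s):
    """H = -sum_{(i,j,k)} J_ijk * s_i * s_j * s_k, with spins s_i in {-1, +1}."""
    return -sum(j * s[i] * s[k] * s[m] for (i, k, m), j in J.items())

ground = min(product((-1, 1), repeat=N), key=energy)  # exhaustive search
e0 = energy(ground)
```

Enumeration scales as 2^N, which is exactly why NP-hardness matters: beyond a few dozen spins, only the kind of heuristic algorithms discussed in this work remain practical.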
COBRA-SFS modifications and cask model optimization
Rector, D.R.; Michener, T.E.
1989-01-01
Spent-fuel storage systems are complex systems and developing a computational model for one can be a difficult task. The COBRA-SFS computer code provides many capabilities for modeling the details of these systems, but these capabilities can also allow users to specify a more complex model than necessary. This report provides important guidance to users that dramatically reduces the size of the model while maintaining the accuracy of the calculation. A series of model optimization studies was performed, based on the TN-24P spent-fuel storage cask, to determine the optimal model geometry. Expanded modeling capabilities of the code are also described. These include adding fluid shear stress terms and a detailed plenum model. The mathematical models for each code modification are described, along with the associated verification results. 22 refs., 107 figs., 7 tabs.
Multi-objective parameter optimization of common land model using adaptive surrogate modelling
NASA Astrophysics Data System (ADS)
Gong, W.; Duan, Q.; Li, J.; Wang, C.; Di, Z.; Dai, Y.; Ye, A.; Miao, C.
2014-06-01
Parameter specification usually has significant influence on the performance of land surface models (LSMs). However, estimating the parameters properly is a challenging task due to the following reasons: (1) LSMs usually have too many adjustable parameters (20-100 or even more), leading to the curse of dimensionality in the parameter input space; (2) LSMs usually have many output variables involving water/energy/carbon cycles, so that calibrating LSMs is actually a multi-objective optimization problem; (3) regional LSMs are expensive to run, while conventional multi-objective optimization methods need a huge number of model runs (typically 10^5-10^6), making parameter optimization computationally prohibitive. An uncertainty quantification framework was developed to meet the aforementioned challenges: (1) use parameter screening to reduce the number of adjustable parameters; (2) use surrogate models to emulate the response of dynamic models to the variation of adjustable parameters; (3) use an adaptive strategy to promote the efficiency of surrogate-modeling-based optimization; (4) use a weighting function to transfer multi-objective optimization to single-objective optimization. In this study, we demonstrate the uncertainty quantification framework on a single-column case study of a land surface model, the Common Land Model (CoLM), and evaluate the effectiveness and efficiency of the proposed framework. The results indicated that this framework can achieve an optimal parameter set using only 411 model runs in total, and is worth extending to other large complex dynamic models, such as regional land surface models, atmospheric models, and climate models.
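The surrogate-based adaptive loop can be sketched in one dimension: fit a cheap surrogate to a few samples of an expensive model, minimize the surrogate, evaluate the true model at that minimizer, and refit. This toy uses a quadratic surrogate in place of the framework's actual emulators and CoLM runs.

```python
import numpy as np

def expensive_model(x):           # stand-in for one costly model run
    return (x - 2.0) ** 2 + 1.0

X = [0.0, 1.0, 4.0]               # small initial design
Y = [expensive_model(x) for x in X]
for _ in range(5):                # adaptive loop: refit, minimize, resample
    a, b, c = np.polyfit(X, Y, 2)                      # quadratic surrogate
    x_new = float(np.clip(-b / (2.0 * a), 0.0, 4.0))   # surrogate minimizer
    X.append(x_new)
    Y.append(expensive_model(x_new))
best = X[int(np.argmin(Y))]       # best point found with 8 model runs total
```

The appeal is the run budget: the surrogate absorbs most of the search effort, so the expensive model is only queried where the surrogate predicts improvement.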
An integrated model for optimizing weld quality
Zacharia, T.; Radhakrishnan, B.; Paul, A.J.; Cheng, C.
1995-06-01
Welding has evolved in the last few decades from almost an empirical art to an activity embodying the most advanced tools of various basic and applied sciences. Significant progress has been made in understanding the welding process and welded materials. The improved knowledge base has been useful in automation and process control. In view of the large number of variables involved, creating an adequately large database to understand and control the welding process is expensive and time consuming, if not impractical. A recourse is to simulate welding processes through a set of mathematical equations representing the essential physical processes of welding. Results obtained from the phenomenological models depend crucially on the quality of the physical relations in the models and the trustworthiness of input data. In this paper, recent advances in the mathematical modeling of fundamental phenomena in welds are summarized. State-of-the-art mathematical models, advances in computational techniques, emerging high performance computers, and experimental validation techniques have provided significant insight into the fundamental factors that control the development of the weldment. Current status and scientific issues in heat and fluid flow in welds, heat source-metal interaction, and solidification microstructure are assessed. Future research areas of major importance for understanding the fundamental phenomena in weld behavior are identified.
Optimal Experimental Design for Model Discrimination
ERIC Educational Resources Information Center
Myung, Jay I.; Pitt, Mark A.
2009-01-01
Models of a psychological process can be difficult to discriminate experimentally because it is not easy to determine the values of the critical design variables (e.g., presentation schedule, stimulus structure) that will be most informative in differentiating them. Recent developments in sampling-based search methods in statistics make it…
Optimal model reduction and frequency-weighted extension
NASA Technical Reports Server (NTRS)
Spanos, J. T.; Milman, M. H.; Mingori, D. L.
1990-01-01
In this paper the quadratically optimal model reduction problem for single-input, single-output systems is considered. The reduced order model is determined by minimizing the integral of the magnitude-squared of the transfer function error. It is shown that the numerator coefficients of the optimal approximant satisfy a weighted least squares problem and, on this basis, a two-step iterative algorithm is developed combining a least squares solver with a gradient minimizer. The existence of globally optimal stable solutions to the optimization problem is established, and convergence of the algorithm to stationary values of the cost function is proved. The formulation is extended to handle the frequency-weighted optimal model reduction problem. Three examples demonstrate the optimization algorithm.
A Modified Particle Swarm Optimization Technique for Finding Optimal Designs for Mixture Models.
Wong, Weng Kee; Chen, Ray-Bing; Huang, Chien-Chih; Wang, Weichung
2015-01-01
Particle Swarm Optimization (PSO) is a meta-heuristic algorithm that has been shown to be successful in solving a wide variety of real and complicated optimization problems in engineering and computer science. This paper introduces a projection based PSO technique, named ProjPSO, to efficiently find different types of optimal designs, or nearly optimal designs, for mixture models with and without constraints on the components, and also for related models, like the log contrast models. We also compare the modified PSO performance with Fedorov's algorithm, a popular algorithm used to generate optimal designs, Cocktail algorithm, and the recent algorithm proposed by [1]. PMID:26091237
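The core PSO update that methods like ProjPSO build on can be shown in a minimal one-dimensional form; the projection step and mixture-design constraints of the paper are omitted here, and the objective is a toy stand-in for a design criterion.

```python
import random

random.seed(0)

def f(x):                         # toy objective standing in for a design criterion
    return (x - 3.0) ** 2

n, w, c1, c2 = 12, 0.7, 1.5, 1.5  # swarm size, inertia, cognitive/social weights
xs = [random.uniform(-10, 10) for _ in range(n)]
vs = [0.0] * n
pbest = xs[:]                     # each particle's personal best position
gbest = min(xs, key=f)            # swarm's global best position
for _ in range(100):
    for i in range(n):
        r1, r2 = random.random(), random.random()
        vs[i] = w * vs[i] + c1 * r1 * (pbest[i] - xs[i]) + c2 * r2 * (gbest - xs[i])
        xs[i] += vs[i]
        if f(xs[i]) < f(pbest[i]):
            pbest[i] = xs[i]
    gbest = min(pbest, key=f)
```

For mixture designs the positions would be design points constrained to the simplex, which is where the paper's projection operator enters.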
Optimization Research of Generation Investment Based on Linear Programming Model
NASA Astrophysics Data System (ADS)
Wu, Juan; Ge, Xueqian
Linear programming is an important branch of operational research and a mathematical method to assist people in carrying out scientific management. GAMS is an advanced simulation and optimization modeling language that combines a large number of complex mathematical programming formulations, such as linear programming (LP), nonlinear programming (NLP), and mixed-integer programming (MIP), with system simulation. In this paper, based on a linear programming model, the optimized investment decision-making of generation is simulated and analyzed. Finally, the optimal installed capacity of power plants and the final total cost are obtained, which provides a rational decision-making basis for optimized investments.
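A generation-investment LP of the kind described can be sketched with SciPy in place of GAMS. The two technologies, costs, demand, and build limits below are hypothetical illustrative numbers.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical two-technology capacity plan: minimize total cost subject to
# meeting 100 MW of demand within per-technology build limits.
c = [50.0, 70.0]              # unit cost of each technology
A_ub = [[-1.0, -1.0]]         # -x1 - x2 <= -100, i.e. x1 + x2 >= 100 (demand)
b_ub = [-100.0]
bounds = [(0, 60), (0, 80)]   # build limits per technology

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
```

The solver fills the cheaper technology to its limit and covers the remaining demand with the dearer one, which is the intuition the full model scales up across many plants and periods.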
Nonlinear model predictive control based on collective neurodynamic optimization.
Yan, Zheng; Wang, Jun
2015-04-01
In general, nonlinear model predictive control (NMPC) entails solving a sequential global optimization problem with a nonconvex cost function or constraints. This paper presents a novel collective neurodynamic optimization approach to NMPC without linearization. Utilizing a group of recurrent neural networks (RNNs), the proposed collective neurodynamic optimization approach searches for optimal solutions to global optimization problems by emulating brainstorming. Each RNN is guaranteed to converge to a candidate solution by performing constrained local search. By exchanging information and iteratively improving the starting and restarting points of each RNN using the information of local and global best known solutions in a framework of particle swarm optimization, the group of RNNs is able to reach global optimal solutions to global optimization problems. The essence of the proposed collective neurodynamic optimization approach lies in the integration of capabilities of global search and precise local search. The simulation results of many cases are discussed to substantiate the effectiveness and the characteristics of the proposed approach. PMID:25608315
Optimal Complexity of Nonlinear Rainfall-Runoff Models
NASA Astrophysics Data System (ADS)
Schoups, G.; Vrugt, J.; van de Giesen, N.; Fenicia, F.
2008-12-01
Identification of an appropriate level of model complexity to accurately translate rainfall into runoff remains an unresolved issue. The model has to be complex enough to generate accurate predictions, but not too complex such that its parameters cannot be reliably estimated from the data. Earlier work with linear models (Jakeman and Hornberger, 1993) concluded that a model with 4 to 5 parameters is sufficient. However, more recent results with a nonlinear model (Vrugt et al., 2006) suggest that 10 or more parameters may be identified from daily rainfall-runoff time-series. The goal here is to systematically investigate optimal complexity of nonlinear rainfall-runoff models, yielding accurate models with identifiable parameters. Our methodology consists of four steps: (i) a priori specification of a family of model structures from which to pick an optimal one, (ii) parameter optimization of each model structure to estimate empirical or calibration error, (iii) estimation of parameter uncertainty of each calibrated model structure, and (iv) estimation of prediction error of each calibrated model structure. For the first step we formulate a flexible model structure that allows us to systematically vary the complexity with which physical processes are simulated. The second and third steps are achieved using a recently developed Markov chain Monte Carlo algorithm (DREAM), which minimizes calibration error yielding optimal parameter values and their underlying posterior probability density function. Finally, we compare several methods for estimating prediction error of each model structure, including statistical methods based on information criteria and split-sample calibration-validation. Estimates of parameter uncertainty and prediction error are then used to identify optimal complexity for rainfall-runoff modeling, using data from dry and wet MOPEX catchments as case studies.
Life cycle optimization of automobile replacement: model and application.
Kim, Hyung Chul; Keoleian, Gregory A; Grande, Darby E; Bean, James C
2003-12-01
Although recent progress in automotive technology has reduced exhaust emissions per mile for new cars, the continuing use of inefficient, higher-polluting old cars as well as increasing vehicle miles driven are undermining the benefits of this progress. As a way to address the "inefficient old vehicle" contribution to this problem, a novel life cycle optimization (LCO) model is introduced and applied to the automobile replacement policy question. The LCO model determines optimal vehicle lifetimes, accounting for technology improvements of new models while considering deteriorating efficiencies of existing models. Life cycle inventories for different vehicle models that represent materials production, manufacturing, use, maintenance, and end-of-life environmental burdens are required as inputs to the LCO model. As a demonstration, the LCO model was applied to mid-sized passenger car models between 1985 and 2020. An optimization was conducted to minimize cumulative carbon monoxide (CO), non-methane hydrocarbon (NMHC), oxides of nitrogen (NOx), carbon dioxide (CO2), and energy use over the time horizon (1985-2020). For CO, NMHC, and NOx pollutants with 12000 mi of annual mileage, automobile lifetimes ranging from 3 to 6 yr are optimal for the 1980s and early 1990s model years while the optimal lifetimes are expected to be 7-14 yr for model year 2000s and beyond. On the other hand, a lifetime of 18 yr minimizes cumulative energy and CO2 based on driving 12000 miles annually. Optimal lifetimes are inversely correlated to annual vehicle mileage, especially for CO, NMHC, and NOx emissions. On the basis of the optimization results, policies improving durability of emission controls, retiring high-emitting vehicles, and improving fuel economies are discussed. PMID:14700326
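The replacement-timing question can be posed as a small dynamic program: at each year, either keep the current car (whose per-year burden grows with age) or replace it (paying a fixed production burden). All numbers below are illustrative, not the paper's life cycle inventories.

```python
from functools import lru_cache

HORIZON = 20          # planning horizon in years
PRODUCTION = 10.0     # burden of manufacturing a replacement car (hypothetical)

def use_burden(age):  # per-year use burden, rising as the car deteriorates
    return 1.0 + 0.5 * age

@lru_cache(maxsize=None)
def best(year, age):
    """Minimum cumulative burden from `year` onward, holding a car of `age`."""
    if year == HORIZON:
        return 0.0
    keep = use_burden(age) + best(year + 1, age + 1)
    replace = PRODUCTION + use_burden(0) + best(year + 1, 1)
    return min(keep, replace)

total = best(0, 0)    # optimal keep-vs-replace policy over the horizon
```

The trade-off mirrors the abstract's finding: a higher production burden or slower deterioration lengthens the optimal lifetime, while rapidly worsening use-phase burdens shorten it.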
First-Order Frameworks for Managing Models in Engineering Optimization
NASA Technical Reports Server (NTRS)
Alexandrov, Natalia M.; Lewis, Robert Michael
2000-01-01
Approximation/model management optimization (AMMO) is a rigorous methodology for attaining solutions of high-fidelity optimization problems with minimal expense in high-fidelity function and derivative evaluation. First-order AMMO frameworks allow for a wide variety of models and underlying optimization algorithms. Recent demonstrations with aerodynamic optimization achieved three-fold savings in terms of high-fidelity function and derivative evaluation in the case of variable-resolution models and five-fold savings in the case of variable-fidelity physics models. The savings are problem dependent but certain trends are beginning to emerge. We give an overview of the first-order frameworks, current computational results, and an idea of the scope of the first-order framework applicability.
Modeling, Instrumentation, Automation, and Optimization of Water Resource Recovery Facilities.
Sweeney, Michael W; Kabouris, John C
2016-10-01
A review of the literature published in 2015 on topics relating to water resource recovery facilities (WRRF) in the areas of modeling, automation, measurement and sensors and optimization of wastewater treatment (or water resource reclamation) is presented. PMID:27620091
Research on web performance optimization principles and models
NASA Astrophysics Data System (ADS)
Wang, Xin
2013-03-01
The rapid development of the Internet has made Web performance optimization an increasingly prominent, and ultimately unavoidable, concern. The first principle of Web performance optimization is to understand the trade-offs involved: every gain has a cost, and returns diminish; at the same time, added complexity tends to degrade Web performance, and optimizing from the highest level yields the largest gains. Technical models for improving Web performance include cost sharing, caching, profiling, parallel processing, and simplified processing. Based on this study, key Web performance optimization recommendations are given, which improve the performance of Web usage and are of significance for the efficient use of the Internet.
An optimization model of a New Zealand dairy farm.
Doole, Graeme J; Romera, Alvaro J; Adler, Alfredo A
2013-04-01
Optimization models are a key tool for the analysis of emerging policies, prices, and technologies within grazing systems. A detailed, nonlinear optimization model of a New Zealand dairy farming system is described. This framework is notable for its inclusion of pasture residual mass, pasture utilization, and intake regulation as key management decisions. Validation of the model shows that the detailed representation of key biophysical relationships in the model provides an enhanced capacity to provide reasonable predictions outside of calibrated scenarios. Moreover, the flexibility of management plans in the model enhances its stability when faced with significant perturbations. In contrast, the inherent rigidity present in a less-detailed linear programming model is shown to limit its capacity to provide reasonable predictions away from the calibrated baseline. A sample application also demonstrates how the model can be used to identify pragmatic strategies to reduce greenhouse gas emissions. PMID:23415534
Optimizing Tsunami Forecast Model Accuracy
NASA Astrophysics Data System (ADS)
Whitmore, P.; Nyland, D. L.; Huang, P. Y.
2015-12-01
Recent tsunamis provide a means to determine the accuracy that can be expected of real-time tsunami forecast models. Forecast accuracy of two different tsunami forecast models is compared for seven events since 2006, based on both real-time application and optimized, after-the-fact "forecasts". Lessons learned by comparing the forecast accuracy determined during an event to modified applications of the models after the fact provide improved methods for real-time forecasting of future events. Variables such as source definition, data assimilation, and model scaling factors are examined to optimize forecast accuracy. Forecast accuracy is also compared for direct forward modeling based on earthquake source parameters versus accuracy obtained by assimilating sea level data into the forecast model. Results show that assimilating sea level data into the models increases accuracy by approximately 15% for the events examined.
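The gain from assimilating sea level data can be illustrated with the scalar analogue of a Kalman analysis step: blend a forecast with an observation by inverse-variance weighting. The numbers below are hypothetical, and this is a conceptual sketch, not the operational assimilation scheme of the forecast models above.

```python
def assimilate(forecast, observation, var_f, var_o):
    """Blend a model forecast with an observation by inverse-variance
    weighting (the scalar analogue of a Kalman analysis step)."""
    gain = var_f / (var_f + var_o)      # weight given to the observation
    analysis = forecast + gain * (observation - forecast)
    var_a = (1.0 - gain) * var_f        # analysis uncertainty shrinks
    return analysis, var_a

# A forecast wave amplitude of 1.2 m (variance 0.04) corrected by a
# tide-gauge reading of 1.5 m (variance 0.01): the analysis moves most
# of the way toward the more trusted observation.
amp, var = assimilate(1.2, 1.5, 0.04, 0.01)
```

The same weighted-update structure, applied field-wide, is what pulls a forward model toward observed sea levels during an event.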
An Optimality-Based Fully-Distributed Watershed Ecohydrological Model
NASA Astrophysics Data System (ADS)
Chen, L., Jr.
2015-12-01
Watershed ecohydrological models are essential tools to assess the impact of climate change and human activities on hydrological and ecological processes for watershed management. Existing models can be classified as empirically based, quasi-mechanistic, or mechanistic models. The empirically based and quasi-mechanistic models usually adopt empirical or quasi-empirical equations, which may be incapable of capturing the non-stationary dynamics of target processes. Mechanistic models that are designed to represent process feedbacks may capture vegetation dynamics, but often have more demanding spatial and temporal parameterization requirements to represent vegetation physiological variables. In recent years, optimality-based ecohydrological models have been proposed, which have the advantage of reducing the need for model calibration by assuming critical aspects of system behavior. However, this work to date has been limited to the plot scale, considering only one-dimensional exchange of soil moisture, carbon, and nutrients in vegetation parameterization without lateral hydrological transport. Conceptual isolation of individual ecosystem patches from upslope and downslope flow paths compromises the ability to represent and test the relationships between hydrology and vegetation in mountainous and hilly terrain. This work presents an optimality-based watershed ecohydrological model, which incorporates the influence of lateral hydrological processes on the hydrological flow-path patterns that emerge from the optimality assumption. The model has been tested in the Walnut Gulch watershed and shows good agreement with observed temporal and spatial patterns of evapotranspiration (ET) and gross primary productivity (GPP). The spatial variability of ET and GPP produced by the model matches the spatial distribution of TWI, SCA, and slope well over the area. Compared with the one-dimensional vegetation optimality model (VOM), we find that the distributed VOM (DisVOM) produces more reasonable spatial
Jet Pump Design Optimization by Multi-Surrogate Modeling
NASA Astrophysics Data System (ADS)
Mohan, S.; Samad, A.
2015-01-01
A basic approach to reducing design and optimization time via surrogate modeling is to select the right type of surrogate model for a particular problem, one with good accuracy and prediction capability. A multi-surrogate approach can protect a designer from selecting a wrong surrogate that has high uncertainty in the optimal zone of the design space. Numerical analysis and optimization of a jet pump via multi-surrogate modeling are reported in this work. Design variables including area ratio, mixing-tube length-to-diameter ratio, and setback ratio were introduced to increase the hydraulic efficiency of the jet pump. Reynolds-averaged Navier-Stokes equations were solved and responses were computed. Among the different surrogate models, the Sheppard-function-based surrogate showed better accuracy in data fitting, while the radial basis neural network produced the highest efficiency enhancement. The efficiency enhancement was due to the reduction of losses in the flow passage.
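A minimal sketch of the multi-surrogate idea: fit two candidate surrogates to the same samples and compare their error on held-out points before trusting either in an optimizer. Here the candidates are a hand-rolled Gaussian RBF and a quadratic polynomial over one design variable (a stand-in for the study's Sheppard-function and radial-basis-network surrogates fitted to RANS responses).

```python
import numpy as np

def rbf_fit(x, y, eps=0.5):
    """Gaussian radial-basis surrogate: solve Phi w = y for the weights."""
    phi = np.exp(-((x[:, None] - x[None, :]) / eps) ** 2)
    return np.linalg.solve(phi, y)

def rbf_predict(x_train, w, x_new, eps=0.5):
    phi = np.exp(-((x_new[:, None] - x_train[None, :]) / eps) ** 2)
    return phi @ w

# Samples of a hypothetical "efficiency" response over one design variable.
x = np.linspace(0.0, 1.0, 9)
y = np.sin(2 * np.pi * x) + 0.5 * x

w = rbf_fit(x, y)            # surrogate 1: Gaussian RBF
poly = np.polyfit(x, y, 2)   # surrogate 2: quadratic polynomial

x_test = np.linspace(0.05, 0.95, 50)
y_true = np.sin(2 * np.pi * x_test) + 0.5 * x_test
err_rbf = np.max(np.abs(rbf_predict(x, w, x_test) - y_true))
err_poly = np.max(np.abs(np.polyval(poly, x_test) - y_true))
# Held-out error comparisons like this guide the choice of surrogate.
```

For this oscillatory response the quadratic surrogate is badly biased, which is exactly the "wrong surrogate" risk the multi-surrogate approach guards against.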
Portfolio optimization for index tracking modelling in Malaysia stock market
NASA Astrophysics Data System (ADS)
Siew, Lam Weng; Jaaman, Saiful Hafizah; Ismail, Hamizun
2016-06-01
Index tracking is an investment strategy in portfolio management which aims to construct an optimal portfolio that generates a mean return similar to that of the stock market index without purchasing all of the stocks that make up the index. The objective of this paper is to construct an optimal portfolio using an optimization model which adopts a regression approach to tracking the benchmark stock market index return. In this study, the data consist of weekly prices of the stocks in the Malaysia market index, the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI), from January 2010 until December 2013. The results of this study show that the optimal portfolio is able to track the FBMKLCI Index with a minimum tracking error of 1.0027% and a 0.0290% excess mean return over the mean return of the FBMKLCI Index. The significance of this study is the construction of an optimal portfolio using an optimization model which adopts a regression approach to tracking the stock market index without purchasing all index components.
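The regression approach to index tracking can be sketched as an ordinary least-squares problem: choose portfolio weights that minimize the squared deviation between the portfolio return and the index return. The data below are synthetic stand-ins, not the FBMKLCI series used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_weeks, n_stocks = 200, 5
# Synthetic weekly stock returns; the "index" is a fixed blend of the
# stocks plus a little idiosyncratic noise.
R = rng.normal(0.002, 0.02, size=(n_weeks, n_stocks))
true_w = np.array([0.40, 0.25, 0.15, 0.12, 0.08])
r_index = R @ true_w + rng.normal(0.0, 0.001, n_weeks)

# Regression-based tracking: weights minimising the sum of squared
# deviations between portfolio return and index return.
w, *_ = np.linalg.lstsq(R, r_index, rcond=None)
tracking_error = np.std(R @ w - r_index)
```

In practice the regression would also carry budget (weights sum to one) and no-short-sale constraints, which turns it into a constrained quadratic program.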
Review: Optimization methods for groundwater modeling and management
NASA Astrophysics Data System (ADS)
Yeh, William W.-G.
2015-09-01
Optimization methods have been used in groundwater modeling as well as for the planning and management of groundwater systems. This paper reviews and evaluates the various optimization methods that have been used for solving the inverse problem of parameter identification (estimation), experimental design, and groundwater planning and management. Various model selection criteria are discussed, as well as criteria used for model discrimination. The inverse problem of parameter identification concerns the optimal determination of model parameters using water-level observations. In general, the optimal experimental design seeks to find sampling strategies for the purpose of estimating the unknown model parameters. A typical objective of optimal conjunctive-use planning of surface water and groundwater is to minimize the operational costs of meeting water demand. The optimization methods include mathematical programming techniques such as linear programming, quadratic programming, dynamic programming, stochastic programming, nonlinear programming, and the global search algorithms such as genetic algorithms, simulated annealing, and tabu search. Emphasis is placed on groundwater flow problems as opposed to contaminant transport problems. A typical two-dimensional groundwater flow problem is used to explain the basic formulations and algorithms that have been used to solve the formulated optimization problems.
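Among the mathematical programming techniques listed above, linear programming is the simplest to illustrate. The following is a toy management formulation with invented numbers, not a model from the review: minimize the pumping cost at two hypothetical wells subject to a demand requirement and a drawdown limit at a control point.

```python
from scipy.optimize import linprog

# Hypothetical two-well problem: meet a demand of 100 units/day at minimum
# pumping cost while keeping drawdown at a control point below 4 m.
cost = [1.0, 1.5]          # $ per unit pumped at wells 1 and 2
A_ub = [[0.05, 0.02]]      # drawdown response coefficients (m per unit)
b_ub = [4.0]               # allowable drawdown (m)
A_eq = [[1.0, 1.0]]
b_eq = [100.0]             # total demand (units/day)
res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None), (0, None)])
# The cheaper well is limited by the drawdown constraint, so the optimal
# plan splits pumping between the two wells.
```

The drawdown coefficients play the role of the response-matrix entries that couple a groundwater flow model to the management optimization.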
Optimization of a new mathematical model for bacterial growth
Technology Transfer Automated Retrieval System (TEKTRAN)
The objective of this research is to optimize a new mathematical equation as a primary model to describe the growth of bacteria under constant temperature conditions. An optimization algorithm was used in combination with a numerical (Runge-Kutta) method to solve the differential form of the new gr...
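Since the abstract is truncated, the sketch below pairs a classical fourth-order Runge-Kutta integrator with a one-parameter search on a stand-in logistic growth law; the paper's actual differential equation and optimization algorithm may differ.

```python
import numpy as np

def rk4(f, y0, t):
    """Classical 4th-order Runge-Kutta integration of dy/dt = f(t, y)."""
    y = np.empty_like(t); y[0] = y0
    for i in range(len(t) - 1):
        h = t[i + 1] - t[i]
        k1 = f(t[i], y[i])
        k2 = f(t[i] + h / 2, y[i] + h * k1 / 2)
        k3 = f(t[i] + h / 2, y[i] + h * k2 / 2)
        k4 = f(t[i] + h, y[i] + h * k3)
        y[i + 1] = y[i] + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return y

# Stand-in growth law (logistic); "observed" counts generated with mu = 0.8.
t = np.linspace(0, 10, 51)
obs = rk4(lambda s, n: 0.8 * n * (1 - n / 9.0), 0.1, t)

# One-parameter optimization: pick the growth rate minimising squared error
# between the integrated model and the observations.
rates = np.linspace(0.1, 2.0, 96)
sse = [np.sum((rk4(lambda s, n, r=r: r * n * (1 - n / 9.0), 0.1, t) - obs) ** 2)
       for r in rates]
best = rates[int(np.argmin(sse))]
```

A production fit would replace the grid search with a proper optimizer, but the structure (numerical integration inside an objective function) is the same.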
Groundwater modeling and remedial optimization design using graphical user interfaces
Deschaine, L.M.
1997-05-01
The ability to accurately predict the behavior of chemicals in groundwater systems under natural flow circumstances or remedial screening and design conditions is the cornerstone of the environmental industry. The ability to do this efficiently, and to communicate the information effectively to the client and regulators, is what differentiates effective consultants from ineffective ones. Recent advances in groundwater modeling graphical user interfaces (GUIs) are doing for numerical modeling what Windows™ did for DOS™. GUIs facilitate both the modeling process and the information exchange. This Test Drive evaluates the performance of two GUIs--Groundwater Vistas and ModIME--on an actual groundwater model calibration and remedial design optimization project. In the early days of numerical modeling, data input consisted of large arrays of numbers that required intensive labor to input and troubleshoot. Model calibration was also manual, as was interpreting the reams of computer output for each of the tens or hundreds of simulations required to calibrate and perform optimal groundwater remedial design. During this period, the majority of the modeler's effort (and budget) was spent just getting the model running, as opposed to solving the environmental challenge at hand. GUIs take the majority of the grunt work out of the modeling process, thereby allowing the modeler to focus on designing optimal solutions.
Optimizing Classroom Acoustics Using Computer Model Studies.
ERIC Educational Resources Information Center
Reich, Rebecca; Bradley, John
1998-01-01
Investigates conditions relating to the maximum useful-to-detrimental sound ratios present in classrooms, determining the optimum conditions for speech intelligibility. Reveals that speech intelligibility is more strongly influenced by ambient noise levels and that the optimal location for sound absorbing material is on a classroom's upper…
Contingency contractor optimization. Phase 3, model description and formulation.
Gearhart, Jared Lee; Adair, Kristin Lynn; Jones, Katherine A.; Bandlow, Alisa; Durfee, Justin D.; Jones, Dean A.; Martin, Nathaniel; Detry, Richard Joseph; Nanco, Alan Stewart; Nozick, Linda Karen
2013-10-01
The goal of Phase 3 of the OSD ATL Contingency Contractor Optimization (CCO) project is to create an engineering prototype of a tool for the contingency contractor element of total force planning during Support for Strategic Analysis (SSA). An optimization model was developed to determine the optimal mix of military personnel, Department of Defense (DoD) civilians, and contractors that accomplishes a set of user-defined mission requirements at the lowest possible cost while honoring resource limitations and manpower-use rules. An additional feature allows the model to capture the variability of the Total Force Mix when there is uncertainty in mission requirements.
Contingency contractor optimization. Phase 3, model description and formulation.
Gearhart, Jared Lee; Adair, Kristin Lynn; Jones, Katherine A.; Bandlow, Alisa; Detry, Richard Joseph; Durfee, Justin D.; Jones, Dean A.; Martin, Nathaniel; Nanco, Alan Stewart; Nozick, Linda Karen
2013-06-01
The goal of Phase 3 of the OSD ATL Contingency Contractor Optimization (CCO) project is to create an engineering prototype of a tool for the contingency contractor element of total force planning during Support for Strategic Analysis (SSA). An optimization model was developed to determine the optimal mix of military personnel, Department of Defense (DoD) civilians, and contractors that accomplishes a set of user-defined mission requirements at the lowest possible cost while honoring resource limitations and manpower-use rules. An additional feature allows the model to capture the variability of the Total Force Mix when there is uncertainty in mission requirements.
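A force-mix problem of this kind can be sketched as a small mixed-integer linear program. The costs, demand, and contractor cap below are invented for illustration and are not the CCO model's data.

```python
import numpy as np
from scipy.optimize import Bounds, LinearConstraint, milp

# Hypothetical data: annual cost per person for military, DoD civilian,
# and contractor labor; a 120-position mission requirement; and a policy
# cap of 40 contractor positions.
cost = np.array([9.0, 7.0, 5.0])
demand = LinearConstraint(np.ones(3), lb=120, ub=np.inf)
contractor_cap = LinearConstraint(np.array([0.0, 0.0, 1.0]), lb=0, ub=40)

res = milp(c=cost, constraints=[demand, contractor_cap],
           integrality=np.ones(3), bounds=Bounds(0, np.inf))
# Cheapest feasible mix: fill the contractor cap, staff the rest with
# civilians -> x = [0, 80, 40].
```

Uncertainty in mission requirements would turn this into a stochastic program over demand scenarios, which is the "variability of the Total Force Mix" feature the abstract mentions.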
Optimal vaccination and treatment of an epidemic network model
NASA Astrophysics Data System (ADS)
Chen, Lijuan; Sun, Jitao
2014-08-01
In this Letter, we first propose an epidemic network model incorporating two controls, vaccination and treatment. For constant controls, the global stability of the disease-free equilibrium and the endemic equilibrium of the model is investigated using a Lyapunov function. For non-constant controls, we use optimal control theory to derive a strategy that minimizes the total number of infected individuals and the cost associated with vaccination and treatment. Table 1 and Figs. 1-5 are presented to show the global stability and the efficiency of this optimal control.
Low Complexity Models to improve Incomplete Sensitivities for Shape Optimization
NASA Astrophysics Data System (ADS)
Stanciu, Mugurel; Mohammadi, Bijan; Moreau, Stéphane
2003-01-01
The present global platform for simulation and design of multi-model configurations treats shape optimization problems in aerodynamics. Flow solvers are coupled with optimization algorithms based on CAD-free and CAD-connected frameworks. Newton methods are used together with incomplete expressions of gradients. Such incomplete sensitivities are improved using reduced models based on physical assumptions. The validity and application of this approach to real-life problems are presented. The numerical examples concern shape optimization of an airfoil, a business jet, and a car engine cooling axial fan.
Fuzzy multiobjective models for optimal operation of a hydropower system
NASA Astrophysics Data System (ADS)
Teegavarapu, Ramesh S. V.; Ferreira, André R.; Simonovic, Slobodan P.
2013-06-01
Optimal operation models for a hydropower system, based on new fuzzy multiobjective mathematical programming formulations, are developed and evaluated in this study. The models (i) use mixed integer nonlinear programming (MINLP) with binary variables and (ii) integrate a new turbine unit commitment formulation along with water quality constraints used for evaluation of reservoir downstream impairment. The Reardon method, used in the solution of genetic algorithm optimization problems, forms the basis for the development of a new fuzzy multiobjective hydropower system optimization model with Reardon-type fuzzy membership functions. The models are applied to a real-life hydropower reservoir system in Brazil. Genetic algorithms (GAs) are used to (i) solve the optimization formulations to avoid computational intractability and the combinatorial problems associated with binary variables in unit commitment, (ii) efficiently address the Reardon method formulations, and (iii) deal with the local optimal solutions obtained from the use of traditional gradient-based solvers. Decision makers' preferences are incorporated within the fuzzy mathematical programming formulations to obtain compromise operating rules for a multiobjective reservoir operation problem dominated by the conflicting goals of energy production, water quality, and conservation releases. Results provide insight into the compromise operation rules obtained using the new Reardon fuzzy multiobjective optimization framework and confirm its applicability to a variety of multiobjective water resources problems.
Abstract models for the synthesis of optimization algorithms.
NASA Technical Reports Server (NTRS)
Meyer, G. G. L.; Polak, E.
1971-01-01
Systematic approach to the problem of synthesis of optimization algorithms. Abstract models for algorithms are developed which guide the inventive process toward 'conceptual' algorithms that may consist of operations that are inadmissible in a practical method. Once the abstract models are established, a set of methods is presented for converting 'conceptual' algorithms falling into the class defined by the abstract models into 'implementable' iterative procedures.
Assessment of optimized Markov models in protein fold classification.
Lampros, Christos; Simos, Thomas; Exarchos, Themis P; Exarchos, Konstantinos P; Papaloukas, Costas; Fotiadis, Dimitrios I
2014-08-01
Protein fold classification is a challenging task strongly associated with the determination of proteins' structure. In this work, we tested an optimization strategy on a Markov chain and a recently introduced Hidden Markov Model (HMM) with reduced state-space topology. The proteins with unknown structure were scored against both these models. Then the derived scores were optimized following a local optimization method. The Protein Data Bank (PDB) and the annotation of the Structural Classification of Proteins (SCOP) database were used for the evaluation of the proposed methodology. The results demonstrated that the fold classification accuracy of the optimized HMM was substantially higher compared to that of the Markov chain or the reduced state-space HMM approaches. The proposed methodology achieved an accuracy of 41.4% on fold classification, while Sequence Alignment and Modeling (SAM), which was used for comparison, reached an accuracy of 38%. PMID:25152041
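Scoring a sequence against competing Markov chains reduces to comparing transition log-likelihoods. The sketch below uses a toy three-state alphabet and invented transition matrices; the paper's models are trained on SCOP fold classes and then further optimized.

```python
import numpy as np

def log_score(seq, T, states):
    """Log-likelihood of a state sequence under a first-order Markov chain."""
    idx = [states.index(s) for s in seq]
    return sum(np.log(T[i, j]) for i, j in zip(idx, idx[1:]))

states = ["H", "E", "C"]               # helix / strand / coil, toy alphabet
T_alpha = np.array([[0.8, 0.1, 0.1],   # hypothetical helix-rich fold class
                    [0.3, 0.4, 0.3],
                    [0.4, 0.2, 0.4]])
T_beta = np.array([[0.4, 0.2, 0.4],    # hypothetical strand-rich fold class
                   [0.1, 0.8, 0.1],
                   [0.2, 0.4, 0.4]])

seq = "HHHHCHHHH"
pred = ("alpha" if log_score(seq, T_alpha, states) > log_score(seq, T_beta, states)
        else "beta")
```

The optimization step in the paper adjusts such scores after the fact; the classification decision itself remains a comparison of model likelihoods.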
AN OPTIMAL MAINTENANCE MANAGEMENT MODEL FOR AIRPORT CONCRETE PAVEMENT
NASA Astrophysics Data System (ADS)
Shimomura, Taizo; Fujimori, Yuji; Kaito, Kiyoyuki; Obama, Kengo; Kobayashi, Kiyoshi
In this paper, an optimal management model is formulated for the performance-based rehabilitation/maintenance contract for airport concrete pavement, whereby two types of life-cycle cost risks, i.e., ground consolidation risk and concrete depreciation risk, are explicitly considered. A non-homogeneous Markov chain model is formulated to represent the deterioration processes of concrete pavement, which are conditional upon the ground consolidation processes. An optimal non-homogeneous Markov decision model with multiple types of risk is presented to design the optimal rehabilitation/maintenance plans, together with a methodology to revise those plans based upon monitoring data using Bayesian updating rules. The validity of the methodology presented in this paper is examined through case studies carried out for the H airport.
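The non-homogeneous Markov chain idea can be sketched by propagating a condition-state distribution through year-dependent transition matrices; the three states and transition probabilities below are hypothetical, and the decision and Bayesian-updating layers of the paper are omitted.

```python
import numpy as np

def condition_forecast(p0, years):
    """Propagate a pavement condition distribution through a non-homogeneous
    Markov chain whose deterioration probability grows year by year
    (a stand-in for the effect of ongoing ground consolidation)."""
    p = np.asarray(p0, dtype=float)
    for t in range(years):
        q = min(0.05 + 0.01 * t, 0.3)       # year-dependent deterioration prob.
        P = np.array([[1 - q, q,     0.0],
                      [0.0,   1 - q, q],
                      [0.0,   0.0,   1.0]])  # state 3 (failed) is absorbing
        p = p @ P
    return p

# Start with all pavement in the best state and forecast 10 years ahead.
p10 = condition_forecast([1.0, 0.0, 0.0], 10)
```

A maintenance policy enters by replacing some transitions with repair actions; the Markov decision model then chooses those actions to minimize expected life-cycle cost.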
Model Assessment and Optimization Using a Flow Time Transformation
NASA Astrophysics Data System (ADS)
Smith, T. J.; Marshall, L. A.; McGlynn, B. L.
2012-12-01
Hydrologic modeling is a particularly complex problem that is commonly confronted with complications due to multiple dominant streamflow states, temporal switching of streamflow generation mechanisms, and dynamic responses to model inputs based on antecedent conditions. These complexities can inhibit the development of model structures and their fitting to observed data. As a result of these complexities and the heterogeneity that can exist within a catchment, optimization techniques are typically employed to obtain reasonable estimates of model parameters. However, when calibrating a model, the cost function itself plays a large role in determining the "optimal" model parameters. In this study, we introduce a transformation that allows for the estimation of model parameters in the "flow time" domain. The flow time transformation dynamically weights streamflows in the time domain, effectively stretching time during high streamflows and compressing time during low streamflows. Given the impact of cost functions on model optimization, such transformations focus on the hydrologic fluxes themselves rather than on equal time weighting common to traditional approaches. The utility of such a transform is of particular note to applications concerned with total hydrologic flux (water resources management, nutrient loading, etc.). The flow time approach can improve the predictive consistency of total fluxes in hydrologic models and provide insights into model performance by highlighting model strengths and deficiencies in an alternate modeling domain. Flow time transformations can also better remove positive skew from the streamflow time series, resulting in improved model fits, satisfaction of the normality assumption of model residuals, and enhanced uncertainty quantification. We illustrate the value of this transformation for two distinct sets of catchment conditions (snow-dominated and subtropical).
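One way to realize the flow time idea is to weight each time step's squared error by the observed flow, so that storm-peak errors dominate the objective. This is a minimal sketch of the weighting concept, not the paper's exact transformation. In the toy series below, plain SSE prefers the model that misses the peak but nails baseflow, while the flow-weighted objective prefers the model that captures the peak.

```python
import numpy as np

def flow_time_sse(sim, obs):
    """Sum of squared errors with each step weighted by observed flow,
    i.e. evaluated in a 'flow time' sense: high flows stretch time,
    low flows compress it."""
    w = obs / obs.sum()
    return np.sum(w * (sim - obs) ** 2)

obs = np.array([1.0, 2.0, 50.0, 8.0, 2.0])           # storm peak amid baseflow
sim_a = obs + np.array([3.0, 3.0, 0.0, 3.0, 3.0])    # perfect peak, poor baseflow
sim_b = obs + np.array([0.0, 0.0, 5.0, 0.0, 0.0])    # perfect baseflow, misses peak

plain_a = np.sum((sim_a - obs) ** 2)   # 36 -> plain SSE prefers sim_b
plain_b = np.sum((sim_b - obs) ** 2)   # 25
ft_a = flow_time_sse(sim_a, obs)       # peak fidelity now dominates
ft_b = flow_time_sse(sim_b, obs)
```

Calibrating against the flow-weighted objective therefore pushes parameters toward fidelity in total flux, as the abstract argues for water-resources and nutrient-loading applications.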
A flow path model for regional water distribution optimization
NASA Astrophysics Data System (ADS)
Cheng, Wei-Chen; Hsu, Nien-Sheng; Cheng, Wen-Ming; Yeh, William W.-G.
2009-09-01
We develop a flow path model for the optimization of a regional water distribution system. The model simultaneously describes a water distribution system in two parts: (1) the water delivery relationship between suppliers and receivers and (2) the physical water delivery network. In the first part, the model considers waters from different suppliers as multiple commodities. This helps the model clearly describe water deliveries by identifying the relationship between suppliers and receivers. The physical part characterizes a physical water distribution network by all possible flow paths. The flow path model can be used to optimize not only the suppliers to each receiver but also their associated flow paths for supplying water. This characteristic leads to the optimum solution that contains the optimal scheduling results and detailed information concerning water distribution in the physical system. That is, the water rights owner, water quantity, water location, and associated flow path of each delivery action are represented explicitly in the results rather than merely as an optimized total flow quantity in each arc of a distribution network. We first verify the proposed methodology on a hypothetical water distribution system. Then we apply the methodology to the water distribution system associated with the Tou-Qian River basin in northern Taiwan. The results show that the flow path model can be used to optimize the quantity of each water delivery, the associated flow path, and the water trade and transfer strategy.
Optimization of murine model for Besnoitia caprae.
Oryan, A; Sadoughifar, R; Namavari, M
2016-09-01
It has been shown that mice, particularly BALB/c mice, are susceptible to infection by some of the apicomplexan parasites. To compare the susceptibility of inbred BALB/c, outbred BALB/c and C57BL/6 mice to Besnoitia caprae inoculation and to determine the LD50, 30 male inbred BALB/c, 30 outbred BALB/c and 30 C57BL/6 mice were assigned into 18 groups of 5 mice. Each group was inoculated intraperitoneally with 12.5 × 10^3, 25 × 10^3, 5 × 10^4, 1 × 10^5, or 2 × 10^5 tachyzoites, or a control inoculum of DMEM, respectively. The inbred BALB/c was found to be the most susceptible of the examined mouse strains: the LD50 per inbred BALB/c mouse was calculated as 12.5 × 10^3.6 tachyzoites, while the LD50 for the outbred BALB/c and C57BL/6 was 25 × 10^3.4 and 5 × 10^4 tachyzoites per mouse, respectively. To investigate the impact of different routes of inoculation in the most susceptible mouse strain, another seventy-five male inbred BALB/c mice were inoculated with 2 × 10^5 tachyzoites of B. caprae via various inoculation routes, including subcutaneous, intramuscular, intraperitoneal, infraorbital and oral. All the mice in the oral and infraorbital groups survived for 60 days, whereas the IM group showed quicker death and more severe pathologic lesions, followed by the SC and IP groups. Therefore, the BALB/c mouse is a proper laboratory model and IM inoculation is an ideal method for inducing besnoitiosis and a candidate for treatment, prevention and vaccine-efficacy studies of besnoitiosis. PMID:27605770
Optimal calibration method for water distribution water quality model.
Wu, Zheng Yi
2006-01-01
A water quality model predicts water quality transport and fate throughout a water distribution system. The model is not only a promising alternative for analyzing disinfectant residuals in a cost-effective manner, but also a means of providing enormous engineering insight into the characteristics of water quality variation and constituent reactions. However, a water quality model is a reliable tool only if it predicts how the real system behaves. This paper presents a methodology that enables a modeler to efficiently calibrate a water quality model so that the field-observed water quality values match the model-simulated values. The method is formulated to adjust the global water quality parameters and also the element-dependent water quality reaction rates for pipelines and tank storage. A genetic algorithm is applied to optimize the model parameters by minimizing the difference between the model-predicted values and the field-observed values. It is seamlessly integrated with a well-developed hydraulic and water quality modeling system. The approach provides a generic tool and methodology for engineers to construct a sound water quality model in an expedient manner. The method is applied to a real water system, demonstrating that a water quality model can be optimized for managing adequate water supply to public communities. PMID:16854809
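The calibration loop can be sketched with a minimal real-coded genetic algorithm recovering a single bulk decay rate from synthetic chlorine residuals. The paper's integrated hydraulic/water-quality simulator is replaced here by a one-parameter exponential decay model, so this shows only the shape of the GA-based calibration, not the actual system.

```python
import math
import random

random.seed(7)
TRUE_K = 0.5                       # hidden "field" bulk decay rate (1/h)
TIMES = [1, 2, 4, 8, 12, 24]       # sampling times (h)
C0 = 1.2                           # source chlorine concentration (mg/L)
OBS = [C0 * math.exp(-TRUE_K * t) for t in TIMES]

def sse(k):
    """Squared mismatch between modelled and 'observed' residuals."""
    return sum((C0 * math.exp(-k * t) - o) ** 2 for t, o in zip(TIMES, OBS))

# Minimal real-coded GA: elitist selection, blend crossover, Gaussian mutation.
pop = [random.uniform(0.0, 2.0) for _ in range(30)]
for _ in range(60):
    pop.sort(key=sse)
    elite = pop[:10]                       # keep the fittest decay rates
    children = []
    while len(children) < 20:
        a, b = random.sample(elite, 2)     # blend crossover + mutation
        child = 0.5 * (a + b) + random.gauss(0.0, 0.05)
        children.append(min(max(child, 0.0), 2.0))
    pop = elite + children
best_k = min(pop, key=sse)
```

In the real method, evaluating `sse` means running a full hydraulic and water-quality simulation, and the chromosome carries many global and element-dependent reaction rates instead of one.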
Optimizing experimental design for comparing models of brain function.
Daunizeau, Jean; Preuschoff, Kerstin; Friston, Karl; Stephan, Klaas
2011-11-01
This article presents the first attempt to formalize the optimization of experimental design with the aim of comparing models of brain function based on neuroimaging data. We demonstrate our approach in the context of Dynamic Causal Modelling (DCM), which relates experimental manipulations to observed network dynamics (via hidden neuronal states) and provides an inference framework for selecting among candidate models. Here, we show how to optimize the sensitivity of model selection by choosing among experimental designs according to their respective model selection accuracy. Using Bayesian decision theory, we (i) derive the Laplace-Chernoff risk for model selection, (ii) disclose its relationship with classical design optimality criteria and (iii) assess its sensitivity to basic modelling assumptions. We then evaluate the approach when identifying brain networks using DCM. Monte-Carlo simulations and empirical analyses of fMRI data from a simple bimanual motor task in humans serve to demonstrate the relationship between network identification and the optimal experimental design. For example, we show that deciding whether there is a feedback connection requires shorter epoch durations, relative to asking whether there is experimentally induced change in a connection that is known to be present. Finally, we discuss limitations and potential extensions of this work. PMID:22125485
Optimization of Analytical Potentials for Coarse-Grained Biopolymer Models.
Mereghetti, Paolo; Maccari, Giuseppe; Spampinato, Giulia Lia Beatrice; Tozzini, Valentina
2016-08-25
The increasing trend in the recent literature on coarse-grained (CG) models testifies to their impact in the study of complex systems. However, the CG model landscape is variegated: even considering a given resolution level, the force fields are very heterogeneous and optimized with very different parametrization procedures. Along the road toward standardization of CG models for biopolymers, here we describe a strategy to aid the building and optimization of statistics-based analytical force fields, and its implementation in the software package AsParaGS (Assisted Parameterization platform for coarse Grained modelS). Our method is based on the use of analytical potentials, optimized by targeting statistical distributions of internal variables by means of a combination of different algorithms (i.e., relative-entropy-driven stochastic exploration of the parameter space and iterative Boltzmann inversion). This allows designing a custom model that endows the force field terms with a physically sound meaning. Furthermore, the level of transferability and accuracy can be tuned through the choice of the statistical data set composition. The method, illustrated by means of applications to helical polypeptides, also involves the analysis of two- and three-variable distributions, and allows handling issues related to force field term correlations. AsParaGS is interfaced with general-purpose molecular dynamics codes and currently implements the "minimalist" subclass of CG models (i.e., one bead per amino acid, Cα based). Extensions to nucleic acids and different levels of coarse graining are in progress. PMID:27150459
Optimization and analysis of a CFJ-airfoil using adaptive meta-model based design optimization
NASA Astrophysics Data System (ADS)
Whitlock, Michael D.
Although strong potential for the Co-Flow Jet (CFJ) flow separation control system has been demonstrated in the existing literature, little effort has been applied toward optimization of the design for a given application. The high-dimensional design space makes any optimization computationally intensive. This work presents the optimization of a CFJ airfoil applied to a low Reynolds number regime using meta-model based design optimization (MBDO). The approach consists of computational fluid dynamics (CFD) analysis coupled with a surrogate model derived using Kriging. A genetic algorithm (GA) is then used to perform optimization on the efficient surrogate model. MBDO was shown to be an effective and efficient approach to solving the CFJ design problem. The final solution set was found to decrease drag by 100% while increasing lift by 42%. When validated, the final solution was found to be within one standard deviation of the CFD model it was representing.
Block-oriented modeling of superstructure optimization problems
Friedman, Z; Ingalls, J; Siirola, JD; Watson, JP
2013-10-15
We present a novel software framework for modeling large-scale engineered systems as mathematical optimization problems. A key motivating feature in such systems is their hierarchical, highly structured topology. Existing mathematical optimization modeling environments do not facilitate the natural expression and manipulation of hierarchically structured systems. Rather, the modeler is forced to "flatten" the system description, hiding structure that may be exploited by solvers, and obfuscating the system that the modeling environment is attempting to represent. To correct this deficiency, we propose a Python-based "block-oriented" modeling approach for representing the discrete components within the system. Our approach is an extension of the Pyomo library for specifying mathematical optimization problems. Through the use of a modeling components library, the block-oriented approach facilitates a clean separation of system superstructure from the details of individual components. This approach also naturally lends itself to expressing design and operational decisions as disjunctive expressions over the component blocks. By expressing a mathematical optimization problem in a block-oriented manner, inherent structure (e.g., multiple scenarios) is preserved for potential exploitation by solvers. In particular, we show that block-structured mathematical optimization problems can be straightforwardly manipulated by decomposition-based multi-scenario algorithmic strategies, specifically in the context of the PySP stochastic programming library. We illustrate our block-oriented modeling approach using a case study drawn from the electricity grid operations domain: unit commitment with transmission switching and N - 1 reliability constraints. Finally, we demonstrate that the overhead associated with block-oriented modeling only minimally increases model instantiation times, and need not adversely impact solver behavior. (C) 2013 Elsevier Ltd. All rights reserved.
A dynamic, optimal disease control model for foot-and-mouth disease: I. Model description.
Kobayashi, Mimako; Carpenter, Tim E; Dickey, Bradley F; Howitt, Richard E
2007-05-16
A dynamic optimization model was developed and used to evaluate alternative foot-and-mouth disease (FMD) control strategies. The model chose daily control strategies of depopulation and vaccination that minimized total regional cost for the entire epidemic duration, given disease dynamics and resource constraints. The disease dynamics and the impacts of control strategies on these dynamics were characterized in a set of difference equations; effects of movement restrictions on the disease dynamics were also considered. The model was applied to a three-county region in the Central Valley of California; the epidemic relationships were parameterized and validated using the information obtained from an FMD simulation model developed for the same region. The optimization model enables more efficient searches for desirable control strategies by considering all strategies simultaneously, providing the simulation model with optimization results to direct it in generating detailed predictions of potential FMD outbreaks. PMID:17280729
Optimal control of information epidemics modeled as Maki Thompson rumors
NASA Astrophysics Data System (ADS)
Kandhway, Kundan; Kuri, Joy
2014-12-01
We model the spread of information in a homogeneously mixed population using the Maki-Thompson rumor model. We formulate an optimal control problem, from the perspective of a single campaigner, to maximize the spread of information when the campaign budget is fixed. Control signals, such as advertising in the mass media, attempt to convert ignorants and stiflers into spreaders. We show the existence of a solution to the optimal control problem when the campaigning incurs non-linear costs under the isoperimetric budget constraint. The solution employs Pontryagin's Minimum Principle and a modified version of the forward-backward sweep technique for numerical computation to accommodate the isoperimetric budget constraint. The techniques developed in this paper are general and can be applied to similar optimal control problems in other areas. We have allowed the spreading rate of the information epidemic to vary over the campaign duration to model practical situations in which the interest level of the population in the subject of the campaign changes with time. The shape of the optimal control signal is studied for different model parameters and spreading rate profiles. We have also studied the variation of the optimal campaigning costs with respect to various model parameters. Results indicate that, for some model parameters, significant improvements can be achieved by the optimal strategy compared to the static control strategy. The static strategy respects the same budget constraint as the optimal strategy and has a constant value throughout the campaign horizon. This work finds application in election and social awareness campaigns, product advertising, movie promotion and crowdfunding campaigns.
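The controlled dynamics in the abstract above can be sketched with a forward-Euler integration of the Maki-Thompson equations augmented by a constant media control that converts ignorants into spreaders. All rates, the control level, and the initial fractions below are illustrative assumptions, not values from the paper, and a constant control is a simplification of the paper's time-varying optimal control.

```python
# Forward-Euler sketch of the Maki-Thompson rumor model with a constant
# campaigning control u converting ignorants directly into spreaders.
# Parameter values are illustrative, not taken from the paper.

k = 1.0       # pairwise contact rate
u = 0.02      # constant control: media-driven ignorant -> spreader conversion
dt = 0.001
x, y, z = 0.99, 0.01, 0.0   # ignorants, spreaders, stiflers (fractions)

for _ in range(int(20.0 / dt)):
    dx = -k * x * y - u * x                    # ignorants recruited by spreaders or media
    dy = k * x * y + u * x - k * y * (y + z)   # spreader-spreader/stifler meetings stifle
    dz = k * y * (y + z)
    x, y, z = x + dt * dx, y + dt * dy, z + dt * dz

print(round(x, 3), round(y, 3), round(z, 3))
```

Because the right-hand sides sum to zero, the population fraction x + y + z is conserved up to floating-point error, which is a quick sanity check on the integration.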
Hydro- abrasive jet machining modeling for computer control and optimization
NASA Astrophysics Data System (ADS)
Groppetti, R.; Jovane, F.
1993-06-01
Use of hydro-abrasive jet machining (HAJM) for machining a wide variety of materials—metals, polymers, ceramics, fiber-reinforced composites, metal-matrix composites, and bonded or hybridized materials—primarily for two- and three-dimensional cutting and also for drilling, turning, milling, and deburring, has been reported. However, the potential of this innovative process has not been explored fully. This article discusses process control, integration, and optimization of HAJM to establish a platform for the implementation of real-time adaptive control constraint (ACC), adaptive control optimization (ACO), and CAD/CAM integration. It presents the approach followed and the main results obtained during the development, implementation, automation, and integration of a HAJM cell and its computerized controller. After a critical analysis of the process variables and models reported in the literature, undertaken to identify process variables, to define a process model suitable for HAJM real-time control and optimization, to correlate process variables and parameters with machining results, and to avoid expensive and time-consuming experiments for the determination of optimal machining conditions, a process prediction and optimization model was identified and implemented. Then, the configuration of the HAJM cell, architecture, and multiprogramming operation of the controller in terms of monitoring, control, process result prediction, and process condition optimization were analyzed. This prediction and optimization model for selection of optimal machining conditions using multi-objective programming was analyzed. Based on the definition of an economy function and a productivity function, with suitable constraints relevant to required machining quality, required kerfing depth, and available resources, the model was applied to test cases based on experimental results.
Optimization models for flight test scheduling
NASA Astrophysics Data System (ADS)
Holian, Derreck
As threats around the world increase with nations developing new generations of warfare technology, the United States is keen on maintaining its position at the top of the defense technology curve. This in turn means that the U.S. military/government must research, develop, procure, and sustain new systems in the defense sector to safeguard this position. Currently, the Lockheed Martin F-35 Joint Strike Fighter (JSF) Lightning II is being developed, tested, and deployed to the U.S. military at Low Rate Initial Production (LRIP). The simultaneous act of testing and deployment is due to the contracted procurement process intended to provide a rapid Initial Operating Capability (IOC) release of the 5th-generation fighter. For this reason, many factors go into the determination of what is to be tested, in what order, and at which time due to the military requirements. A certain system or envelope of the aircraft must be assessed prior to releasing that capability into service. The objective of this praxis is to aid in the determination of what testing can be achieved on an aircraft at a point in time. Furthermore, it will define the optimum allocation of test points to aircraft and determine a prioritization of restrictions to be mitigated so that the test program can be best supported. The system described in this praxis has been deployed across the F-35 test program and testing sites. It has discovered hundreds of available test points for an aircraft to fly when it was thought none existed, thus preventing an aircraft from being grounded. Additionally, it has saved hundreds of labor hours and greatly reduced the occurrence of test point reflight. Due to the proprietary nature of the JSF program, details regarding the actual test points, test plans, and all other program-specific information have not been presented. Generic, representative data is used for example and proof-of-concept purposes. Apart from the data correlation algorithms, the optimization associated
Computational modeling and optimization of proton exchange membrane fuel cells
NASA Astrophysics Data System (ADS)
Secanell Gallart, Marc
Improvements in performance, reliability and durability as well as reductions in production costs remain critical prerequisites for the commercialization of proton exchange membrane fuel cells. In this thesis, a computational framework for fuel cell analysis and optimization is presented as an innovative alternative to the time-consuming trial-and-error process currently used for fuel cell design. The framework is based on a two-dimensional through-the-channel isothermal, isobaric and single-phase membrane electrode assembly (MEA) model. The model input parameters are the manufacturing parameters used to build the MEA: platinum loading, platinum to carbon ratio, electrolyte content and gas diffusion layer porosity. The governing equations of the fuel cell model are solved using Newton's algorithm and an adaptive finite element method in order to achieve quadratic convergence and a mesh-independent solution, respectively. The analysis module is used to solve two optimization problems: (i) maximize performance; and (ii) maximize performance while minimizing the production cost of the MEA. To solve these problems a gradient-based optimization algorithm is used in conjunction with analytical sensitivities. The presented computational framework is the first attempt in the literature to combine highly efficient analysis and optimization methods in order to tackle large-scale problems. The framework is capable of solving a complete MEA optimization problem with state-of-the-art electrode models in approximately 30 minutes. The optimization results show that it is possible to achieve a Pt-specific power density for the optimized MEAs of 0.422 gPt/kW. This value is extremely close to the target of 0.4 gPt/kW for large-scale implementation and demonstrates the potential of using numerical optimization for fuel cell design.
Modelling complex terrain effects for wind farm layout optimization
NASA Astrophysics Data System (ADS)
Schmidt, Jonas; Stoevesandt, Bernhard
2014-06-01
The flow over four analytical hill geometries was calculated by CFD RANS simulations. For each hill, the results were converted into numerical models that transform arbitrary undisturbed inflow profiles by rescaling the effect of the obstacle. The predictions of such models are compared to full CFD results, first for atmospheric boundary layer flow, and then for a single turbine wake in the presence of an isolated hill. The implementation of the models into the wind farm modelling software flapFOAM is reported, advancing their inclusion into a fully modular wind farm layout optimization routine.
A dynamic optimization model for solid waste recycling.
Anghinolfi, Davide; Paolucci, Massimo; Robba, Michela; Taramasso, Angela Celeste
2013-02-01
Recycling is an important part of waste management, which involves environmental, technological, economic, legislative and social issues. Differently from many works in the literature, this paper focuses on recycling management and on the dynamic optimization of materials collection. The developed dynamic decision model is characterized by state variables, corresponding to the quantity of waste in each bin per each day, and control variables determining the quantity of material that is collected in the area each day and the routes for collecting vehicles. The objective function minimizes the sum of costs minus benefits. The developed decision model is integrated in a GIS-based Decision Support System (DSS). A case study related to the Cogoleto municipality is presented to show the effectiveness of the proposed model. From the optimal results, it has been found that the net benefits of the optimized collection are about 2.5 times greater than those of the estimated current policy. PMID:23158873
The effect of model uncertainty on some optimal routing problems
NASA Technical Reports Server (NTRS)
Mohanty, Bibhu; Cassandras, Christos G.
1991-01-01
The effect of model uncertainties on optimal routing in a system of parallel queues is examined. The uncertainty arises in modeling the service time distribution for the customers (jobs, packets) to be served. For a Poisson arrival process and Bernoulli routing, the optimal mean system delay generally depends on the variance of this distribution. However, as the input traffic load approaches the system capacity the optimal routing assignment and corresponding mean system delay are shown to converge to a variance-invariant point. The implications of these results are examined in the context of gradient-based routing algorithms. An example of a model-independent algorithm using online gradient estimation is also included.
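The variance dependence described above can be made concrete with the Pollaczek-Khinchine formula for two parallel M/G/1 queues fed by Bernoulli routing. The arrival rate and the two service-time distributions below (one exponential, one deterministic, equal means) are illustrative assumptions; the sketch shows that the optimal split depends on the second moment of service time, not just its mean.

```python
# Sketch: mean system delay for Bernoulli routing to two parallel M/G/1
# queues via the Pollaczek-Khinchine formula. Parameters are illustrative.
# Note the explicit dependence on the second moment E[S^2] (variance).

def mg1_sojourn(lam, s_mean, s_m2):
    """Mean time in an M/G/1 queue: service time + P-K waiting time."""
    rho = lam * s_mean
    assert rho < 1.0, "queue must be stable"
    return s_mean + lam * s_m2 / (2.0 * (1.0 - rho))

lam = 0.8                      # total Poisson arrival rate
s1_mean, s1_m2 = 1.0, 2.0      # server 1: exponential service, E[S]=1, E[S^2]=2
s2_mean, s2_m2 = 1.0, 1.0      # server 2: deterministic service, E[S]=1, E[S^2]=1

def mean_delay(p):
    """Average sojourn time when a fraction p is routed to server 1."""
    return (p * mg1_sojourn(p * lam, s1_mean, s1_m2)
            + (1 - p) * mg1_sojourn((1 - p) * lam, s2_mean, s2_m2))

# Grid search for the optimal Bernoulli routing probability.
grid = [i / 1000.0 for i in range(1, 1000)]
p_star = min(grid, key=mean_delay)
print("optimal p:", p_star)
```

Even though both servers have the same mean service time, the optimum routes less than half the traffic to the high-variance server, which is exactly the variance sensitivity the abstract discusses away from saturation.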
Analysis of Sting Balance Calibration Data Using Optimized Regression Models
NASA Technical Reports Server (NTRS)
Ulbrich, N.; Bader, Jon B.
2010-01-01
Calibration data of a wind tunnel sting balance was processed using a candidate math model search algorithm that recommends an optimized regression model for the data analysis. During the calibration the normal force and the moment at the balance moment center were selected as independent calibration variables. The sting balance itself had two moment gages. Therefore, after analyzing the connection between calibration loads and gage outputs, it was decided to choose the difference and the sum of the gage outputs as the two responses that best describe the behavior of the balance. The math model search algorithm was applied to these two responses. An optimized regression model was obtained for each response. Classical strain gage balance load transformations and the equations of the deflection of a cantilever beam under load are used to show that the search algorithm's two optimized regression models are supported by a theoretical analysis of the relationship between the applied calibration loads and the measured gage outputs. The analysis of the sting balance calibration data set is a rare example of a situation when terms of a regression model of a balance can be derived directly from first principles of physics. In addition, it is interesting to note that the search algorithm recommended the correct regression model term combinations using only a set of statistical quality metrics that were applied to the experimental data during the algorithm's term selection process.
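The flavor of a candidate-term search driven purely by a statistical quality metric can be sketched as follows. The data, the candidate terms, and the use of adjusted R^2 as the selection metric are all illustrative assumptions standing in for the actual balance calibration data and the algorithm's metric set.

```python
# Sketch of a candidate-term regression search: fit every subset of
# candidate math-model terms by least squares and keep the subset with the
# best adjusted R^2. Synthetic data, not the actual balance calibration.
import itertools
import numpy as np

rng = np.random.default_rng(0)
N = rng.standard_normal(200)            # stand-in "normal force" load
M = rng.standard_normal(200)            # stand-in "moment" load
response = 1.5 * N + 0.7 * M + 0.05 * rng.standard_normal(200)  # gage output

terms = {"N": N, "M": M, "N*M": N * M, "N^2": N ** 2}

def adjusted_r2(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    ss_res = float(resid @ resid)
    ss_tot = float(((y - y.mean()) ** 2).sum())
    n, p = X.shape
    return 1 - (ss_res / (n - p)) / (ss_tot / (n - 1))

best = None
for r in range(1, len(terms) + 1):
    for combo in itertools.combinations(terms, r):
        X = np.column_stack([np.ones_like(response)]
                            + [terms[t] for t in combo])
        score = adjusted_r2(X, response)
        if best is None or score > best[0]:
            best = (score, combo)

print("selected terms:", sorted(best[1]))
```

The search recovers the two physically meaningful terms because the metric rewards fit while penalizing extra parameters, which is the same principle the abstract's algorithm applies with a richer metric set.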
Optimization of Operations Resources via Discrete Event Simulation Modeling
NASA Technical Reports Server (NTRS)
Joshi, B.; Morris, D.; White, N.; Unal, R.
1996-01-01
The resource levels required for operation and support of reusable launch vehicles are typically defined through discrete event simulation modeling. Minimizing these resources constitutes an optimization problem involving discrete variables and simulation. Conventional approaches to solve such optimization problems involving integer valued decision variables are the pattern search and statistical methods. However, in a simulation environment that is characterized by search spaces of unknown topology and stochastic measures, these optimization approaches often prove inadequate. In this paper, we have explored the applicability of genetic algorithms to the simulation domain. Genetic algorithms provide a robust search strategy that does not require continuity and differentiability of the problem domain. The genetic algorithm successfully minimized the operation and support activities for a space vehicle, through a discrete event simulation model. The practical issues associated with simulation optimization, such as stochastic variables and constraints, were also taken into consideration.
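A minimal genetic algorithm over integer-valued decision variables can be sketched as below. The "simulation" here is a cheap deterministic surrogate (a service-level constraint on resource levels), and the population sizes, rates, and penalty value are illustrative assumptions, not the paper's setup.

```python
# Toy genetic algorithm for an integer-valued resource-allocation problem,
# standing in for simulation-based optimization. The constraint is handled
# with a penalty, as is common when the simulation reports infeasibility.
import random

random.seed(1)
LO, HI, GENES = 1, 6, 3

def cost(x):
    """Total resources, penalized when the service-level constraint
    min(x) >= 3 (our stand-in simulation output) is violated."""
    penalty = 100 if min(x) < 3 else 0
    return sum(x) + penalty

def mutate(x):
    i = random.randrange(GENES)
    y = list(x)
    y[i] = max(LO, min(HI, y[i] + random.choice([-1, 1])))
    return y

def crossover(a, b):
    cut = random.randrange(1, GENES)
    return a[:cut] + b[cut:]

pop = [[random.randint(LO, HI) for _ in range(GENES)] for _ in range(30)]
for _ in range(60):
    pop.sort(key=cost)                    # rank by (penalized) cost
    elite = pop[:10]                      # elitism preserves the best found
    children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                for _ in range(20)]
    pop = elite + children

best = min(pop, key=cost)
print("best allocation:", best, "cost:", cost(best))
```

Note that nothing here requires continuity or derivatives of the cost, which is why the approach carries over to stochastic discrete-event simulation outputs.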
Model-Based Optimization for Flapping Foil Actuation
NASA Astrophysics Data System (ADS)
Izraelevitz, Jacob; Triantafyllou, Michael
2014-11-01
Flapping foil actuation in nature, such as by wings and flippers, often consists of highly complex joint kinematics which present an impossibly large parameter space for designing bioinspired mechanisms. Designers therefore often build a simplified model to limit the parameter space so that an optimum motion trajectory can be found experimentally, or attempt to replicate exactly the joint geometry and kinematics of a suitable organism whose behavior is assumed to be optimal. We present a compromise: using a simple local fluids model to guide the design of optimized trajectories through a succession of experimental trials, even when the parameter space is too large to search effectively. As an example, we illustrate an optimization routine capable of designing asymmetric flapping trajectories for a large-aspect-ratio pitching and heaving foil, with the added degree of freedom of allowing the foil to move parallel to the flow. We then present PIV flow visualizations of the optimized trajectories.
Strategies for Model Reduction: Comparing Different Optimal Bases.
NASA Astrophysics Data System (ADS)
Crommelin, D. T.; Majda, A. J.
2004-09-01
Several different ways of constructing optimal bases for efficient dynamical modeling are compared: empirical orthogonal functions (EOFs), optimal persistence patterns (OPPs), and principal interaction patterns (PIPs). Past studies on fluid-dynamical topics have pointed out that EOF-based models can have difficulties reproducing behavior dominated by irregular transitions between different dynamical states. This issue is addressed in a geophysical context, by assessing the ability of these strategies for efficient dynamical modeling to reproduce the chaotic regime transitions in a simple atmosphere model. The atmosphere model is the well-known Charney-DeVore model, a six-dimensional truncation of the equations describing barotropic flow over topography in a β-plane channel geometry. This model is able to generate regime transitions for well-chosen parameter settings. The models based on PIPs are found to be superior to the EOF- and OPP-based models, in spite of some undesirable sensitivities inherent to the PIP method.
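The first of the three bases, the EOFs, can be computed directly from a data matrix via the singular value decomposition. The sketch below uses synthetic data (two fixed spatial patterns with random amplitudes plus noise), not output of the atmosphere model; pattern shapes and variances are illustrative assumptions.

```python
# Minimal sketch: computing EOFs from a time-by-space data matrix via SVD.
# Synthetic data with two planted spatial patterns, not model output.
import numpy as np

rng = np.random.default_rng(42)
nt, nx = 500, 32
grid = np.linspace(0, 2 * np.pi, nx)
p1, p2 = np.sin(grid), np.cos(2 * grid)        # two planted spatial patterns

data = (np.outer(rng.standard_normal(nt) * 3.0, p1)
        + np.outer(rng.standard_normal(nt) * 1.5, p2)
        + 0.1 * rng.standard_normal((nt, nx)))

anomalies = data - data.mean(axis=0)           # remove the time mean
U, s, Vt = np.linalg.svd(anomalies, full_matrices=False)

explained = s ** 2 / np.sum(s ** 2)            # variance fraction per mode
eofs = Vt                                      # rows are the EOF patterns
print("variance explained by first two EOFs:", round(float(explained[:2].sum()), 3))
```

OPPs and PIPs require additional information (time-lagged statistics and the model dynamics, respectively), which is precisely why they can outperform the purely variance-ranked EOF basis on regime-transition behavior.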
Optimal control of vaccine distribution in a rabies metapopulation model.
Asano, Erika; Gross, Louis J; Lenhart, Suzanne; Real, Leslie A
2008-04-01
We consider an SIR metapopulation model for the spread of rabies in raccoons. This system of ordinary differential equations considers subpopulations connected by movement. Vaccine for raccoons is distributed through food baits. We apply optimal control theory to find the best timing for distribution of vaccine in each of the linked subpopulations across the landscape. This strategy is chosen to limit the disease optimally by making the number of infections as small as possible while accounting for the cost of vaccination. PMID:18613731
The optimal inventory policy for EPQ model under trade credit
NASA Astrophysics Data System (ADS)
Chung, Kun-Jen
2010-09-01
Huang and Huang [(2008), 'Optimal Inventory Replenishment Policy for the EPQ Model Under Trade Credit without Derivatives', International Journal of Systems Science, 39, 539-546] use the algebraic method to determine the optimal inventory replenishment policy for the retailer in the extended model under trade credit. However, the algebraic method has limits of applicability, such that the validity of the proofs of Theorems 1-4 in Huang and Huang (2008) is questionable. The main purpose of this article is not only to indicate these shortcomings but also to present accurate proofs for Huang and Huang (2008).
Multi-objective parameter optimization of common land model using adaptive surrogate modeling
NASA Astrophysics Data System (ADS)
Gong, W.; Duan, Q.; Li, J.; Wang, C.; Di, Z.; Dai, Y.; Ye, A.; Miao, C.
2015-05-01
Parameter specification usually has a significant influence on the performance of land surface models (LSMs). However, estimating the parameters properly is a challenging task due to the following reasons: (1) LSMs usually have too many adjustable parameters (20 to 100 or even more), leading to the curse of dimensionality in the parameter input space; (2) LSMs usually have many output variables involving water/energy/carbon cycles, so that calibrating LSMs is actually a multi-objective optimization problem; (3) regional LSMs are expensive to run, while conventional multi-objective optimization methods need a large number of model runs (typically ~10^5-10^6), making parameter optimization computationally prohibitive. An uncertainty quantification framework was developed to meet the aforementioned challenges, which includes the following steps: (1) using parameter screening to reduce the number of adjustable parameters, (2) using surrogate models to emulate the responses of dynamic models to the variation of adjustable parameters, (3) using an adaptive strategy to improve the efficiency of surrogate modeling-based optimization, and (4) using a weighting function to transform multi-objective optimization into single-objective optimization. In this study, we demonstrate the uncertainty quantification framework on a single-column application of an LSM - the Common Land Model (CoLM) - and evaluate the effectiveness and efficiency of the proposed framework. The results indicate that this framework can achieve optimal parameters efficiently and effectively. Moreover, this result implies the possibility of calibrating other large complex dynamic models, such as regional-scale LSMs, atmospheric models and climate models.
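The adaptive surrogate idea in step (3) can be sketched in one dimension: fit a cheap surrogate to a few samples of an "expensive" model, minimize the surrogate, then add the proposed point to the design and refit. The objective function, the quadratic surrogate, and all parameter values below are illustrative assumptions standing in for CoLM and the framework's actual surrogate.

```python
# Sketch of adaptive surrogate-assisted optimization in one dimension.
# The "expensive" model is a stand-in; each real LSM run would cost hours.
import numpy as np

def expensive_model(x):
    return (x - 1.3) ** 2 + 0.1 * np.sin(5 * x)

xs = list(np.linspace(-2.0, 4.0, 5))       # small initial design
ys = [expensive_model(x) for x in xs]

for _ in range(8):                         # adaptive refinement loop
    a, b, c = np.polyfit(xs, ys, 2)        # cheap quadratic surrogate fit
    x_new = float(np.clip(-b / (2 * a), -2.0, 4.0))  # surrogate minimizer
    xs.append(x_new)                       # evaluate the expensive model there
    ys.append(expensive_model(x_new))      # and enrich the design adaptively

best_x = xs[int(np.argmin(ys))]
print("best x found:", round(best_x, 2))
```

Only 13 expensive evaluations are spent in total, which is the point: the surrogate, not the expensive model, absorbs the bulk of the optimizer's queries.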
Optimization methods and silicon solar cell numerical models
NASA Technical Reports Server (NTRS)
Girardini, K.; Jacobsen, S. E.
1986-01-01
An optimization algorithm for use with numerical silicon solar cell models was developed. By coupling an optimization algorithm with a solar cell model, it is possible to simultaneously vary design variables such as impurity concentrations, front junction depth, back junction depth, and cell thickness to maximize the predicted cell efficiency. An optimization algorithm was developed and interfaced with the Solar Cell Analysis Program in 1 Dimension (SCAP1D). SCAP1D uses finite difference methods to solve the differential equations which, along with several relations from the physics of semiconductors, describe mathematically the performance of a solar cell. A major obstacle is that the numerical methods used in SCAP1D require a significant amount of computer time, and during an optimization the model is called iteratively until the design variables converge to the values associated with the maximum efficiency. This problem was alleviated by designing an optimization code specifically for use with numerically intensive simulations, to reduce the number of times the efficiency has to be calculated to achieve convergence to the optimal solution.
Shell model of optimal passive-scalar mixing
NASA Astrophysics Data System (ADS)
Miles, Christopher; Doering, Charles
2015-11-01
Optimal mixing is significant to process engineering within industries such as food, chemical, pharmaceutical, and petrochemical. An important question in this field is "How should one stir to create a homogeneous mixture while being energetically efficient?" To answer this question, we consider an initially unmixed scalar field representing some concentration within a fluid on a periodic domain. This passive-scalar field is advected by the velocity field, our control variable, which is constrained by a physical quantity such as energy or enstrophy. We consider two objectives: local-in-time (LIT) optimization (what will maximize the mixing rate now?) and global-in-time (GIT) optimization (what will maximize mixing at the end time?). Throughout this work we use the H^-1 mix-norm to measure mixing. To gain a better understanding, we provide a simplified mixing model by using a shell model of passive-scalar advection. LIT optimization in this shell model gives perfect mixing in finite time for the energy-constrained case and exponential decay to the perfectly mixed state for the enstrophy-constrained case. Although we only enforce that the time-averaged energy (or enstrophy) equals a chosen value in GIT optimization, interestingly, the optimal control keeps this value constant over time.
Data visualization optimization via computational modeling of perception.
Pineo, Daniel; Ware, Colin
2012-02-01
We present a method for automatically evaluating and optimizing visualizations using a computational model of human vision. The method relies on a neural network simulation of early perceptual processing in the retina and primary visual cortex. The neural activity resulting from viewing flow visualizations is simulated and evaluated to produce a metric of visualization effectiveness. Visualization optimization is achieved by applying this effectiveness metric as the utility function in a hill-climbing algorithm. We apply this method to the evaluation and optimization of 2D flow visualizations, using two visualization parameterizations: streaklet-based and pixel-based. An emergent property of the streaklet-based optimization is head-to-tail streaklet alignment. It had previously been hypothesized that the effectiveness of head-to-tail alignment results from the perceptual processing of the visual system, but this theory had not been computationally modeled. A second optimization using a pixel-based parameterization resulted in a LIC-like result. The implications in terms of the selection of primitives are discussed. We argue that computational models can be used for optimizing complex visualizations. In addition, we argue that they can provide a means of computationally evaluating perceptual theories of visualization, and a method for quality control of display methods. PMID:21383402
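The optimization loop described above can be sketched with a stand-in effectiveness metric in place of the neural-network perceptual model. The target parameters, step size, and iteration count below are illustrative assumptions; the structure (perturb a parameter, keep the change only if the metric improves) is the hill-climbing pattern itself.

```python
# Sketch of the hill-climbing loop: a surrogate "effectiveness" function
# stands in for the neural-network perceptual metric, and visualization
# parameters are adjusted greedily while the metric improves.
import random

random.seed(7)
target = [0.6, 0.2, 0.9]            # parameters an ideal display would have

def effectiveness(params):
    """Higher is better; peaks when params match the (hidden) target."""
    return -sum((p - t) ** 2 for p, t in zip(params, target))

params = [0.0, 0.0, 0.0]
score = effectiveness(params)
for _ in range(5000):
    i = random.randrange(len(params))
    candidate = list(params)
    candidate[i] += random.uniform(-0.05, 0.05)   # small random perturbation
    cand_score = effectiveness(candidate)
    if cand_score > score:                        # accept only improvements
        params, score = candidate, cand_score

print("final score:", round(score, 4))
```

In the paper's setting each metric evaluation requires simulating neural responses to the rendered visualization, so the same loop is far more expensive per step but identical in shape.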
Plasma jet accelerator optimization with supple membrane model
NASA Astrophysics Data System (ADS)
Galkin, S. A.; Bogatu, I. N.; Kim, J. S.
2006-10-01
High-density (>=3x10^17 cm^-3) and high-Mach-number (M>10) plasma jets have important applications such as plasma rotation, refueling and disruption mitigation in tokamaks. The most deleterious blow-by instability occurs in coaxial plasma accelerators; hence electrode shape optimization is required to accelerate plasmas to ˜200 km/s [1]. A full 3D particle simulation takes a huge computational time. We have developed a membrane model to provide a good starting point and further physical insight for a full 3D optimization. Our model approximates the axisymmetric plasma by a thin supple conducting membrane with a distributed mass, located between the electrodes and connecting them, in order to model the dynamics of the blow-by instability and to conduct the optimization. The supple membrane is allowed to slip along the conductors freely or with some friction, as driven by the Lorentz force generated by the magnetic field inside the chamber and the current on the membrane. The total mass and the density distribution represent the initial plasma. The density is redistributed adiabatically during the acceleration. An external electrical circuit with capacitance, inductance and resistivity is a part of the model. The membrane model simulation results will be compared to the 2D fluid MACH2 results and then will be used to guide a full 3D optimization by the LSP code. 1. http://hyperv.com/projects/pic/
Applied topology optimization of vibro-acoustic hearing instrument models
NASA Astrophysics Data System (ADS)
Søndergaard, Morten Birkmose; Pedersen, Claus B. W.
2014-02-01
Designing hearing instruments remains an acoustic challenge, as users request small designs for comfortable wear and cosmetic appeal and at the same time require sufficient amplification from the device. First, to ensure proper amplification, a critical design challenge in the hearing instrument is to minimize the feedback between the outputs (generated sound and vibrations) from the receiver looping back into the microphones. Second, the feedback signal is currently minimized using time-consuming trial-and-error design procedures on physical prototypes and on virtual models based on finite element analysis. In the present work it is demonstrated that structural topology optimization of vibro-acoustic finite element models can be used both to sufficiently minimize the feedback signal and to reduce the time-consuming trial-and-error design approach. The structural topology optimization of a vibro-acoustic finite element model is shown for an industrial full-scale model hearing instrument.
Time dependent optimal switching controls in online selling models
Bradonjic, Milan; Cohen, Albert
2010-01-01
We present a method to incorporate dishonesty in online selling via a stochastic optimal control problem. In our framework, the seller wishes to maximize her average wealth level W at a fixed time T of her choosing. The corresponding Hamilton-Jacobi-Bellman (HJB) equation is analyzed for a basic case. For more general models, the admissible control set is restricted to a jump process that switches between extreme values. We propose a new approach, where the optimal control problem is reduced to a multivariable optimization problem.
Pumping Optimization Model for Pump and Treat Systems - 15091
Baker, S.; Ivarson, Kristine A.; Karanovic, M.; Miller, Charles W.; Tonkin, M.
2015-01-15
Pump and Treat systems are being utilized to remediate contaminated groundwater in the Hanford 100 Areas adjacent to the Columbia River in Eastern Washington. Design of the systems was supported by a three-dimensional (3D) fate and transport model. This model provided sophisticated simulation capabilities but requires many hours to calculate results for each simulation considered. Many simulations are required to optimize system performance, so a two-dimensional (2D) model was created to reduce run time. The 2D model was developed as an equivalent-property version of the 3D model that derives boundary conditions and aquifer properties from the 3D model. It produces predictions that are very close to the 3D model predictions, allowing it to be used for comparative remedy analyses. Any potential system modifications identified by using the 2D version are verified for use by running the 3D model to confirm performance. The 2D model was incorporated into a comprehensive analysis system (the Pumping Optimization Model, POM) to simplify analysis of multiple simulations. It allows rapid turnaround by utilizing a graphical user interface that: (1) allows operators to create hypothetical scenarios for system operation, (2) feeds the input to the 2D fate and transport model, and (3) displays the scenario results to evaluate performance improvement. All of the above is accomplished within the user interface. Complex analyses can be completed within a few hours and multiple simulations can be compared side-by-side. The POM utilizes standard office computing equipment and established groundwater modeling software.
Aeroelastic Optimization Study Based on X-56A Model
NASA Technical Reports Server (NTRS)
Li, Wesley; Pak, Chan-Gi
2014-01-01
A design process which incorporates the object-oriented multidisciplinary design, analysis, and optimization (MDAO) tool and the aeroelastic effects of high fidelity finite element models to characterize the design space was successfully developed and established. Two multidisciplinary design optimization studies using an object-oriented MDAO tool developed at NASA Armstrong Flight Research Center were presented. The first study demonstrates the use of aeroelastic tailoring concepts to minimize the structural weight while meeting the design requirements including strength, buckling, and flutter. A hybrid and discretization optimization approach was implemented to improve accuracy and computational efficiency of a global optimization algorithm. The second study presents a flutter mass balancing optimization study. The results provide guidance to modify the fabricated flexible wing design and move the design flutter speeds back into the flight envelope so that the original objective of X-56A flight test can be accomplished.
Optimization methods and silicon solar cell numerical models
NASA Technical Reports Server (NTRS)
Girardini, K.
1986-01-01
The goal of this project is the development of an optimization algorithm for use with a solar cell model. It is possible to simultaneously vary design variables such as impurity concentrations, front junction depth, back junction depth, and cell thickness to maximize the predicted cell efficiency. An optimization algorithm has been developed and interfaced with the Solar Cell Analysis Program in 1 Dimension (SCAP1D). SCAP1D uses finite difference methods to solve the differential equations which, along with several relations from the physics of semiconductors, describe mathematically the operation of a solar cell. A major obstacle is that the numerical methods used in SCAP1D require a significant amount of computer time, and during an optimization the model is called iteratively until the design variables converge to the values associated with the maximum efficiency. This problem has been alleviated by designing an optimization code specifically for use with numerically intensive simulations, to reduce the number of times the efficiency has to be calculated to achieve convergence to the optimal solution. Adapting SCAP1D so that it could be called iteratively by the optimization code provided another means of reducing the CPU time required to complete an optimization. Instead of calculating the entire I-V curve, as is usually done in SCAP1D, only the efficiency is calculated (maximum power voltage and current), and the solution from previous calculations is used to initiate the next solution.
GRAVITATIONAL LENS MODELING WITH GENETIC ALGORITHMS AND PARTICLE SWARM OPTIMIZERS
Rogers, Adam; Fiege, Jason D.
2011-02-01
Strong gravitational lensing of an extended object is described by a mapping from source to image coordinates that is nonlinear and cannot generally be inverted analytically. Determining the structure of the source intensity distribution also requires a description of the blurring effect due to a point-spread function. This initial study uses an iterative gravitational lens modeling scheme based on the semilinear method to determine the linear parameters (source intensity profile) of a strongly lensed system. Our 'matrix-free' approach avoids construction of the lens and blurring operators while retaining the least-squares formulation of the problem. The parameters of an analytical lens model are found through nonlinear optimization by an advanced genetic algorithm (GA) and particle swarm optimizer (PSO). These global optimization routines are designed to explore the parameter space thoroughly, mapping model degeneracies in detail. We develop a novel method that determines the L-curve for each solution automatically, which represents the trade-off between the image χ^2 and regularization effects, and allows an estimate of the optimally regularized solution for each lens parameter set. In the final step of the optimization procedure, the lens model with the lowest χ^2 is used while the global optimizer solves for the source intensity distribution directly. This allows us to accurately determine the number of degrees of freedom in the problem to facilitate comparison between lens models and enforce positivity on the source profile. In practice, we find that the GA conducts a more thorough search of the parameter space than the PSO.
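The L-curve trade-off mentioned above can be sketched with a small Tikhonov-regularized deblurring problem. The Gaussian blurring matrix, the boxcar source profile, and the noise level are illustrative stand-ins for the lens-plus-PSF operator and the lensed image; only the residual-norm versus solution-norm trade-off is the point.

```python
# Sketch of the L-curve trade-off for Tikhonov-regularized deblurring:
# as the regularization weight grows, the residual norm grows and the
# solution norm shrinks. The corner of that curve is the L-curve estimate
# of the optimal regularization. Operator and data are synthetic.
import numpy as np

rng = np.random.default_rng(3)
n = 40
idx = np.arange(n)
# Gaussian blurring operator (stand-in for the lens + PSF operator)
A = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / 2.0) ** 2)
x_true = np.zeros(n)
x_true[15:25] = 1.0                              # simple source profile
b = A @ x_true + 0.05 * rng.standard_normal(n)   # blurred, noisy image

residual_norms, solution_norms = [], []
for lam in [1e-4, 1e-3, 1e-2, 1e-1, 1.0, 10.0]:
    # Solve min ||Ax - b||^2 + lam * ||x||^2 via the normal equations.
    x = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
    residual_norms.append(float(np.linalg.norm(A @ x - b)))
    solution_norms.append(float(np.linalg.norm(x)))

print([round(r, 3) for r in residual_norms])
```

Plotting solution norm against residual norm on log axes produces the characteristic "L" shape; the paper's contribution is locating its corner automatically inside the GA/PSO loop.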
Groundwater Pollution Source Identification using Linked ANN-Optimization Model
NASA Astrophysics Data System (ADS)
Ayaz, Md; Srivastava, Rajesh; Jain, Ashu
2014-05-01
Groundwater is the principal source of drinking water in several parts of the world. Contamination of groundwater has become a serious health and environmental problem today. Human activities, including industrial and agricultural activities, are generally responsible for this contamination. Identification of the groundwater pollution source is a major step in groundwater pollution remediation. Complete knowledge of the pollution source in terms of its source characteristics is essential to adopt an effective remediation strategy. A groundwater pollution source is said to be identified completely when the source characteristics - location, strength, and release period - are known. Identification of an unknown groundwater pollution source is an ill-posed inverse problem. It becomes more difficult for real field conditions, when the lag time between the first reading at the observation well and the time at which the source becomes active is not known. We developed a linked ANN-Optimization model for complete identification of an unknown groundwater pollution source. The model comprises two parts: an optimization model and an ANN model. Decision variables of the linked ANN-Optimization model contain the source location and release period of the pollution source. An objective function is formulated using the spatial and temporal data of observed and simulated concentrations, and then minimized to identify the pollution source parameters. The formulation of the objective function requires the lag time, which is not known. An ANN model with one hidden layer is trained using the Levenberg-Marquardt algorithm to find the lag time. Different combinations of source locations and release periods are used as inputs, and the lag time is obtained as the output. Performance of the proposed model is evaluated for two- and three-dimensional cases with error-free and erroneous data. Erroneous data were generated by adding uniformly distributed random error (error level 0-10%) to the analytically computed concentrations.
Optimizing the Teaching-Learning Process Through a Linear Programming Model--Stage Increment Model.
ERIC Educational Resources Information Center
Belgard, Maria R.; Min, Leo Yoon-Gee
An operations research method to optimize the teaching-learning process is introduced in this paper. In particular, a linear programming model is proposed which, unlike dynamic or control theory models, allows the computer to react to the responses of a learner in seconds or less. To satisfy the assumptions of linearity, the seemingly complicated…
Geometry Modeling and Grid Generation for Design and Optimization
NASA Technical Reports Server (NTRS)
Samareh, Jamshid A.
1998-01-01
Geometry modeling and grid generation (GMGG) have played and will continue to play an important role in computational aerosciences. During the past two decades, tremendous progress has occurred in GMGG; however, GMGG is still the biggest bottleneck to routine applications for complicated Computational Fluid Dynamics (CFD) and Computational Structures Mechanics (CSM) models for analysis, design, and optimization. We are still far from incorporating GMGG tools in a design and optimization environment for complicated configurations. It is still a challenging task to parameterize an existing model in today's Computer-Aided Design (CAD) systems, and the models created are not always good enough for automatic grid generation tools. Designers may believe their models are complete and accurate, but unseen imperfections (e.g., gaps, unwanted wiggles, free edges, slivers, and transition cracks) often cause problems in gridding for CSM and CFD. Despite many advances in grid generation, the process is still the most labor-intensive and time-consuming part of the computational aerosciences for analysis, design, and optimization. In an ideal design environment, a design engineer would use a parametric model to evaluate alternative designs effortlessly and optimize an existing design for a new set of design objectives and constraints. For this ideal environment to be realized, the GMGG tools must have the following characteristics: (1) be automated, (2) provide consistent geometry across all disciplines, (3) be parametric, and (4) provide sensitivity derivatives. This paper will review the status of GMGG for analysis, design, and optimization processes, and it will focus on some emerging ideas that will advance the GMGG toward the ideal design environment.
Verifying and Validating Proposed Models for FSW Process Optimization
NASA Technical Reports Server (NTRS)
Schneider, Judith
2008-01-01
This slide presentation reviews Friction Stir Welding (FSW) and attempts to model the process in order to optimize and improve it. Studies are ongoing to validate and refine the model of metal flow in the FSW process. There are slides showing the conventional FSW process, a couple of weld tool designs, and how the design interacts with the metal flow path. The two basic components of the weld tool are shown, along with geometries of the shoulder design. Modeling of the FSW process is reviewed. Other topics include: (1) Microstructure Features, (2) Flow Streamlines, (3) Steady-State Nature, and (4) Grain Refinement Mechanisms.
Hyperopt: a Python library for model selection and hyperparameter optimization
NASA Astrophysics Data System (ADS)
Bergstra, James; Komer, Brent; Eliasmith, Chris; Yamins, Dan; Cox, David D.
2015-01-01
Sequential model-based optimization (also known as Bayesian optimization) is one of the most efficient methods (per function evaluation) of function minimization. This efficiency makes it appropriate for optimizing the hyperparameters of machine learning algorithms that are slow to train. The Hyperopt library provides algorithms and parallelization infrastructure for performing hyperparameter optimization (model selection) in Python. This paper presents an introductory tutorial on the usage of the Hyperopt library, including the description of search spaces, minimization (in serial and parallel), and the analysis of the results collected in the course of minimization. This paper also gives an overview of Hyperopt-Sklearn, a software project that provides automatic algorithm configuration of the Scikit-learn machine learning library. Following Auto-Weka, we take the view that the choice of classifier and even the choice of preprocessing module can be taken together to represent a single large hyperparameter optimization problem. We use Hyperopt to define a search space that encompasses many standard components (e.g. SVM, RF, KNN, PCA, TFIDF) and common patterns of composing them together. We demonstrate, using search algorithms in Hyperopt and standard benchmarking data sets (MNIST, 20-newsgroups, convex shapes), that searching this space is practical and effective. In particular, we improve on best-known scores for the model space for both MNIST and convex shapes. The paper closes with some discussion of ongoing and future work.
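The core loop of sequential model-based optimization, fit a cheap surrogate to all evaluations so far and propose the surrogate's most promising point, can be illustrated without the library itself. The sketch below is a deliberately simplified one-dimensional version with a quadratic surrogate and pure exploitation; it is not Hyperopt's tree-structured Parzen estimator, and none of its names come from the Hyperopt API:

```python
import random

def toy_smbo(f, lo, hi, n_init=5, n_iter=20, seed=0):
    """Minimal 1D sequential model-based optimization: fit a quadratic
    surrogate to the evaluations so far and evaluate its minimizer next."""
    rng = random.Random(seed)
    xs = [rng.uniform(lo, hi) for _ in range(n_init)]
    ys = [f(x) for x in xs]
    for _ in range(n_iter):
        # Least-squares fit of y ~ a*x^2 + b*x + c via the normal equations.
        S = [sum(x ** k for x in xs) for k in range(5)]           # power sums
        T = [sum(y * x ** k for x, y in zip(xs, ys)) for k in range(3)]
        A = [[S[4], S[3], S[2], T[2]],
             [S[3], S[2], S[1], T[1]],
             [S[2], S[1], S[0], T[0]]]
        for i in range(3):                       # Gauss-Jordan elimination
            p = max(range(i, 3), key=lambda r: abs(A[r][i]))
            A[i], A[p] = A[p], A[i]
            for r in range(3):
                if r != i and A[i][i]:
                    m = A[r][i] / A[i][i]
                    A[r] = [a - m * b for a, b in zip(A[r], A[i])]
        a, b = A[0][3] / A[0][0], A[1][3] / A[1][1]
        if a > 0:
            cand = min(max(-b / (2 * a), lo), hi)  # surrogate minimizer
        else:
            cand = rng.uniform(lo, hi)             # surrogate not convex: explore
        # Small jitter so we do not resample an identical point.
        cand = min(max(cand + rng.gauss(0, 0.01 * (hi - lo)), lo), hi)
        xs.append(cand)
        ys.append(f(cand))
    i = min(range(len(ys)), key=ys.__getitem__)
    return xs[i], ys[i]

best_x, best_y = toy_smbo(lambda x: (x - 3.0) ** 2, 0.0, 10.0)
```

Real SMBO libraries such as Hyperopt replace the quadratic surrogate with a probabilistic model and balance exploitation against exploration through an acquisition criterion, but the evaluate-refit-propose loop is the same.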
Electrochemical model based charge optimization for lithium-ion batteries
NASA Astrophysics Data System (ADS)
Pramanik, Sourav; Anwar, Sohel
2016-05-01
In this paper, we propose the design of a novel optimal strategy for charging the lithium-ion battery based on an electrochemical battery model, aimed at improved performance. A performance index that aims at minimizing the charging effort along with a minimum deviation from the rated maximum thresholds for cell temperature and charging current has been defined. The method proposed in this paper aims at achieving a faster charging rate while maintaining safe limits for various battery parameters. Safe operation of the battery is achieved by including the battery bulk temperature as a control component in the performance index, which is of critical importance for electric vehicles. Another important aspect of the performance objective proposed here is the efficiency of the algorithm, which would allow higher charging rates without compromising the internal electrochemical kinetics of the battery, preventing abusive conditions and thereby improving long-term durability. A more realistic model, based on battery electrochemistry, has been used for the design of the optimal algorithm as opposed to the conventional equivalent circuit models. To solve the optimization problem, Pontryagin's principle has been used, which is very effective for constrained optimization problems with both state and input constraints. Simulation results show that the proposed optimal charging algorithm is capable of shortening the charging time of a lithium-ion cell while maintaining the temperature constraint when compared with standard constant-current charging. The designed method also maintains the internal states within limits that can avoid abusive operating conditions.
Optimal Control of a Dengue Epidemic Model with Vaccination
NASA Astrophysics Data System (ADS)
Rodrigues, Helena Sofia; Monteiro, M. Teresa T.; Torres, Delfim F. M.
2011-09-01
We present an SIR+ASI epidemic model to describe the interaction between human and dengue fever mosquito populations. A control strategy in the form of vaccination, to decrease the number of infected individuals, is used. An optimal control approach is applied in order to find the best way to fight the disease.
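The effect of a vaccination control in a compartmental model can be illustrated with a minimal sketch. The model below is a plain SIR system with a constant vaccination rate v moving susceptibles directly to the recovered class; it is a hypothetical simplification for illustration, not the paper's SIR+ASI human-mosquito system, and all parameter values are invented:

```python
def sir_epidemic(days=100, dt=0.01, beta=0.4, gamma=0.1, v=0.0):
    """Euler integration of a basic SIR model with vaccination rate v.
    Returns the peak infected fraction over the simulated period."""
    S, I, R = 0.99, 0.01, 0.0
    peak = I
    for _ in range(int(days / dt)):
        dS = -beta * S * I - v * S          # infection + vaccination outflow
        dI = beta * S * I - gamma * I       # infection inflow, recovery outflow
        dR = gamma * I + v * S              # recovered + vaccinated
        S, I, R = S + dt * dS, I + dt * dI, R + dt * dR
        peak = max(peak, I)
    return peak

peak_no_vaccine = sir_epidemic(v=0.0)
peak_vaccine = sir_epidemic(v=0.05)
```

Even this toy version shows the mechanism the optimal control exploits: removing susceptibles lowers the epidemic peak, and the control problem is to choose how much vaccination to apply over time at acceptable cost.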
Metabolic engineering with multi-objective optimization of kinetic models.
Villaverde, Alejandro F; Bongard, Sophia; Mauch, Klaus; Balsa-Canto, Eva; Banga, Julio R
2016-03-20
Kinetic models have a great potential for metabolic engineering applications. They can be used for testing which genetic and regulatory modifications can increase the production of metabolites of interest, while simultaneously monitoring other key functions of the host organism. This work presents a methodology for increasing productivity in biotechnological processes exploiting dynamic models. It uses multi-objective dynamic optimization to identify the combination of targets (enzymatic modifications) and the degree of up- or down-regulation that must be performed in order to optimize a set of pre-defined performance metrics subject to process constraints. The capabilities of the approach are demonstrated on a realistic and computationally challenging application: a large-scale metabolic model of Chinese Hamster Ovary cells (CHO), which are used for antibody production in a fed-batch process. The proposed methodology manages to provide a sustained and robust growth in CHO cells, increasing productivity while simultaneously increasing biomass production, product titer, and keeping the concentrations of lactate and ammonia at low values. The approach presented here can be used for optimizing metabolic models by finding the best combination of targets and their optimal level of up/down-regulation. Furthermore, it can accommodate additional trade-offs and constraints with great flexibility. PMID:26826510
Discover for Yourself: An Optimal Control Model in Insect Colonies
ERIC Educational Resources Information Center
Winkel, Brian
2013-01-01
We describe the enlightening path of self-discovery afforded to the teacher of undergraduate mathematics. This is demonstrated as we find and develop background material on an application of optimal control theory to model the evolutionary strategy of an insect colony to produce the maximum number of queen or reproducer insects in the colony at…
Review of Optimization Methods in Groundwater Modeling and Management
NASA Astrophysics Data System (ADS)
Yeh, W. W.
2001-12-01
This paper surveys nonlinear optimization methods developed for groundwater modeling and management. The first part reviews algorithms used for model calibration, that is, the inverse problem of parameter estimation. In recent years, groundwater models have been combined with optimization models to identify the best management alternatives. Once the objectives and constraints are specified, most problems lend themselves to solution techniques developed in operations research, optimal control, and combinatorial optimization. The second part reviews methods developed for groundwater management. Algorithms and methods reviewed include quadratic programming, differential dynamic programming, nonlinear programming, mixed integer programming, stochastic programming, and non-gradient-based search algorithms. Advantages and drawbacks associated with each approach are discussed. A recent tendency has been toward combining the gradient-based algorithms with the non-gradient-based search algorithms, whereby a non-gradient-based search algorithm identifies a near-optimum solution, which a gradient-based algorithm then uses as its initial estimate for rapid convergence.
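The hybrid strategy described in the last sentence can be sketched in a few lines: a non-gradient global stage seeds a gradient-based local stage. The multimodal test function, sample counts, and step sizes below are illustrative assumptions, not taken from the survey:

```python
import random

def f(x):
    # Tilted double-well test function: local minima near x = +1 and x = -1,
    # with the global minimum on the negative side (near x = -1.02).
    return (x * x - 1.0) ** 2 + 0.2 * x

def dfdx(x, h=1e-6):
    """Central-difference derivative, as if f were a black-box simulator."""
    return (f(x + h) - f(x - h)) / (2 * h)

def hybrid_minimize(lo=-3.0, hi=3.0, n_random=200, seed=1):
    rng = random.Random(seed)
    # Stage 1: non-gradient global search locates the right basin.
    x0 = min((rng.uniform(lo, hi) for _ in range(n_random)), key=f)
    # Stage 2: gradient descent refines the near-optimum seed rapidly.
    x = x0
    for _ in range(2000):
        x -= 1e-3 * dfdx(x)
    return x

x_star = hybrid_minimize()
```

Gradient descent alone could be trapped in the shallower well near x = +1; the random search stage makes the seed land in the global basin, which is exactly the division of labor the survey describes.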
Analytical models integrated with satellite images for optimized pest management
Technology Transfer Automated Retrieval System (TEKTRAN)
The global field protection (GFP) was developed to protect and optimize pest management resources integrating satellite images for precise field demarcation with physical models of controlled release devices of pesticides to protect large fields. The GFP was implemented using a graphical user interf...
To the optimization problem in minority game model
Yanishevsky, Vasyl
2009-12-14
The article presents research results for the optimization problem in the minority game model in a Gaussian approximation using one-step replica symmetry breaking (1RSB). A comparison is made with the replica-symmetric (RS) approximation and with results in the literature obtained by other methods.
Water-resources optimization model for Santa Barbara, California
Nishikawa, T.
1998-01-01
A simulation-optimization model has been developed for the optimal management of the city of Santa Barbara's water resources during a drought. The model, which links groundwater simulation with linear programming, has a planning horizon of 5 years. The objective is to minimize the cost of water supply subject to: water demand constraints, hydraulic head constraints to control seawater intrusion, and water capacity constraints. The decision variables are monthly water deliveries from surface water and groundwater. The state variables are hydraulic heads. The drought of 1947-51 is the city's worst drought on record, and simulated surface-water supplies for this period were used as a basis for testing optimal management of current water resources under drought conditions. The simulation-optimization model was applied using three reservoir operation rules. In addition, the model's sensitivity to demand, carryover [the storage of water in one year for use in a later year or years], head constraints, and capacity constraints was tested.
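The linear-programming core of such a model can be sketched for a single period. The costs, demand, and capacities below are made up for illustration (the real model couples monthly decisions across a 5-year horizon to a groundwater simulation); a two-variable LP can be solved exactly by enumerating the vertices of the feasible polygon:

```python
from itertools import combinations

# Hypothetical monthly allocation: choose surface-water (sw) and groundwater
# (gw) deliveries. Each constraint is written as a*sw + b*gw <= c.
constraints = [
    (-1.0, -1.0, -10.0),   # sw + gw >= 10   (demand)
    ( 1.0,  0.0,   8.0),   # sw <= 8         (surface-water capacity)
    ( 0.0,  1.0,   6.0),   # gw <= 6         (well-field capacity, head limit)
    (-1.0,  0.0,   0.0),   # sw >= 0
    ( 0.0, -1.0,   0.0),   # gw >= 0
]

def cost(sw, gw):
    return 100.0 * sw + 60.0 * gw          # illustrative unit costs

def solve_2var_lp():
    """An LP optimum lies at a vertex of the feasible region, so intersect
    every pair of constraint boundaries and keep the cheapest feasible point."""
    best = None
    for (a1, b1, c1), (a2, b2, c2) in combinations(constraints, 2):
        det = a1 * b2 - a2 * b1
        if abs(det) < 1e-12:
            continue                        # parallel boundaries
        sw = (c1 * b2 - c2 * b1) / det
        gw = (a1 * c2 - a2 * c1) / det
        if all(a * sw + b * gw <= c + 1e-9 for a, b, c in constraints):
            if best is None or cost(sw, gw) < best[0]:
                best = (cost(sw, gw), sw, gw)
    return best

c_opt, sw_opt, gw_opt = solve_2var_lp()
```

As expected, the cheaper source (groundwater here) is used to capacity and surface water covers the remaining demand; in the real model the groundwater capacity itself comes from head constraints supplied by the simulation.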
Parallel Optimization of 3D Cardiac Electrophysiological Model Using GPU
Xia, Yong; Wang, Kuanquan; Zhang, Henggui
2015-01-01
Large-scale 3D virtual heart model simulations are highly demanding in computational resources. This imposes a big challenge to traditional CPU-based computation resources, which cannot meet the demands of whole-heart computation or are not easily available due to high costs. GPU as a parallel computing environment therefore provides an alternative for solving the large-scale computational problems of whole-heart modeling. In this study, using a 3D sheep atrial model as a test bed, we developed a GPU-based simulation algorithm to simulate the conduction of electrical excitation waves in the 3D atria. In the GPU algorithm, a multicellular tissue model was split into two components: one is the single-cell model (ordinary differential equation) and the other is the diffusion term of the monodomain model (partial differential equation). Such a decoupling enabled realization of the GPU parallel algorithm. Furthermore, several optimization strategies were proposed based on the features of the virtual heart model, which enabled a 200-fold speedup as compared to a CPU implementation. In conclusion, an optimized GPU algorithm has been developed that provides an economic and powerful platform for 3D whole heart simulations. PMID:26581957
Multiobjective adaptive surrogate modeling-based optimization for parameter estimation of large, complex geophysical models
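The ODE/PDE splitting can be illustrated in miniature. The sketch below uses a one-variable bistable reaction (a toy stand-in, not the sheep atrial cell model) on a 1D cable: each time step first advances the per-cell reaction ODE, then the diffusion PDE by an explicit stencil. Both inner loops are independent per cell/grid point, which is exactly what maps onto GPU threads; all parameter values are illustrative:

```python
def simulate_monodomain_1d(n=40, steps=2400, dt=0.1, D=0.5, a=0.1):
    """Operator splitting for a toy 1D monodomain cable. Returns the peak
    excitation reached at the far (right) end, to check that the wave
    propagated the full length of the cable."""
    v = [0.0] * n
    for i in range(5):
        v[i] = 1.0                      # stimulate the left end
    peak_right = 0.0
    for _ in range(steps):
        # Reaction step: one independent ODE per cell (one GPU thread each).
        for i in range(n):
            v[i] += dt * v[i] * (1.0 - v[i]) * (v[i] - a)
        # Diffusion step: explicit finite-difference stencil, no-flux ends.
        vn = v[:]
        for i in range(n):
            left = v[i - 1] if i > 0 else v[i]
            right = v[i + 1] if i < n - 1 else v[i]
            vn[i] = v[i] + dt * D * (left - 2.0 * v[i] + right)
        v = vn
        peak_right = max(peak_right, v[-1])
    return peak_right

peak = simulate_monodomain_1d()
```

The excitation wave initiated at the left end travels down the cable and depolarizes the far end, the 1D analogue of conduction through the 3D atrial mesh; in the GPU version each of the two inner loops becomes a kernel launch.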
NASA Astrophysics Data System (ADS)
Gong, Wei; Duan, Qingyun; Li, Jianduo; Wang, Chen; Di, Zhenhua; Ye, Aizhong; Miao, Chiyuan; Dai, Yongjiu
2016-03-01
Parameter specification is an important source of uncertainty in large, complex geophysical models. These models generally have multiple model outputs that require multiobjective optimization algorithms. Although such algorithms have long been available, they usually require a large number of model runs and are therefore computationally expensive for large, complex dynamic models. In this paper, a multiobjective adaptive surrogate modeling-based optimization (MO-ASMO) algorithm is introduced that aims to reduce computational cost while maintaining optimization effectiveness. Geophysical dynamic models usually have a prior parameterization scheme derived from the physical processes involved, and our goal is to improve all of the objectives by parameter calibration. In this study, we developed a method for directing the search processes toward the region that can improve all of the objectives simultaneously. We tested the MO-ASMO algorithm against NSGA-II and SUMO with 13 test functions and a land surface model - the Common Land Model (CoLM). The results demonstrated the effectiveness and efficiency of MO-ASMO.
Optimal thermalization in a shell model of homogeneous turbulence
NASA Astrophysics Data System (ADS)
Thalabard, Simon; Turkington, Bruce
2016-04-01
We investigate the turbulence-induced dissipation of the large scales in a statistically homogeneous flow using an ‘optimal closure,’ which one of us (BT) has recently introduced in the context of Hamiltonian dynamics. This statistical closure employs a Gaussian model for the turbulent scales, with corresponding vanishing third cumulant, and yet it captures an intrinsic damping. The key to this apparent paradox lies in a clear distinction between true ensemble averages and their proxies, most easily grasped when one works directly with the Liouville equation rather than the cumulant hierarchy. We focus on a simple problem for which the optimal closure can be fully and exactly worked out: the relaxation, arbitrarily far from equilibrium, of a single energy shell towards Gibbs equilibrium in an inviscid shell model of 3D turbulence. The predictions of the optimal closure are validated against DNS and contrasted with those derived from EDQNM closure.
Modeling of biological intelligence for SCM system optimization.
Chen, Shengyong; Zheng, Yujun; Cattani, Carlo; Wang, Wanliang
2012-01-01
This article summarizes some methods from biological intelligence for modeling and optimization of supply chain management (SCM) systems, including genetic algorithms, evolutionary programming, differential evolution, swarm intelligence, artificial immune systems, and other biological intelligence related methods. An SCM system is adaptive, dynamic, open, and self-organizing, maintained by flows of information, materials, goods, funds, and energy. Traditional methods for modeling and optimizing complex SCM systems require huge amounts of computing resources, and biological intelligence-based solutions can often provide valuable alternatives for efficiently solving problems. The paper summarizes the recent related methods for the design and optimization of SCM systems, covering the most widely used genetic algorithms and other evolutionary algorithms. PMID:22162724
Asymmetric optimal-velocity car-following model
NASA Astrophysics Data System (ADS)
Xu, Xihua; Pang, John; Monterola, Christopher
2015-10-01
Taking the asymmetric character of the velocity differences between vehicles into account, we present an asymmetric optimal velocity model of car-following. The asymmetry between acceleration and deceleration is represented by an exponential function with an asymmetry factor, which agrees with published experimental data. This model avoids the disadvantage of the unrealistically high acceleration appearing in previous models when the velocity difference becomes large. The model is simple and has only two independent parameters. The linear stability condition is derived, and the phase transition of the traffic flow appears beyond the critical density. The strength of interaction between clusters is shown to increase with the asymmetry factor in our model.
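An optimal-velocity-type model with an exponential asymmetry factor can be sketched as follows. The optimal-velocity function, the way the factor enters the sensitivity, and every parameter value below are illustrative assumptions, not the paper's calibrated model:

```python
import math

def ov(h, vmax=2.0, hc=2.0):
    """Optimal (desired) velocity as a function of headway h, using the
    common tanh form of OV models."""
    return vmax * (math.tanh(h - hc) + math.tanh(hc)) / (1 + math.tanh(hc))

def simulate_ring(n=20, L=60.0, kappa=1.5, gamma=0.5, steps=8000, dt=0.05):
    """Cars on a ring road. Each car relaxes toward the optimal velocity of
    its headway; the sensitivity is scaled by exp(gamma * dv), where
    dv > 0 means the car is closing in on its leader, so deceleration is
    stronger than acceleration (a generic stand-in for the paper's
    asymmetric exponential factor)."""
    x = [i * L / n + (0.5 if i == 0 else 0.0) for i in range(n)]  # perturbed
    v = [ov(L / n)] * n
    for _ in range(steps):
        acc = []
        for i in range(n):
            h = (x[(i + 1) % n] - x[i]) % L        # headway to the leader
            dv = v[i] - v[(i + 1) % n]             # closing rate
            acc.append(kappa * math.exp(gamma * dv) * (ov(h) - v[i]))
        for i in range(n):
            v[i] = max(0.0, v[i] + dt * acc[i])
            x[i] = (x[i] + dt * v[i]) % L
    return v

v_final = simulate_ring()
```

With this sensitivity the uniform-flow fixed point is unchanged (dv = 0 there), so below the critical density a small perturbation decays and all cars settle back to the optimal velocity of the mean headway; raising the density or lowering kappa past the linear stability threshold produces the stop-and-go clusters the abstract refers to.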
Optimization of Regression Models of Experimental Data Using Confirmation Points
NASA Technical Reports Server (NTRS)
Ulbrich, N.
2010-01-01
A new search metric is discussed that may be used to better assess the predictive capability of different math term combinations during the optimization of a regression model of experimental data. The new search metric can be determined for each tested math term combination if the given experimental data set is split into two subsets. The first subset consists of data points that are only used to determine the coefficients of the regression model. The second subset consists of confirmation points that are exclusively used to test the regression model. The new search metric value is assigned after comparing two values that describe the quality of the fit of each subset. The first value is the standard deviation of the PRESS residuals of the data points. The second value is the standard deviation of the response residuals of the confirmation points. The greater of the two values is used as the new search metric value. This choice guarantees that both standard deviations are always less than or equal to the value that is used during the optimization. Experimental data from the calibration of a wind tunnel strain-gage balance are used to illustrate the application of the new search metric. The new search metric ultimately generates an optimized regression model that has already been tested at regression-model-independent confirmation points before it is ever used to predict an unknown response from a set of regressors.
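The split-and-compare metric can be sketched on synthetic data. For brevity the sketch below uses ordinary fit residuals where the paper uses PRESS residuals, and the data, polynomial term combinations, and noise level are all made up:

```python
import math, random

def fit_poly(xs, ys, degree):
    """Least-squares polynomial fit via the normal equations and
    Gauss-Jordan elimination (adequate for small degrees)."""
    m = degree + 1
    A = [[sum(x ** (i + j) for x in xs) for j in range(m)]
         + [sum(y * x ** i for x, y in zip(xs, ys))] for i in range(m)]
    for i in range(m):
        p = max(range(i, m), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        A[i] = [a / A[i][i] for a in A[i]]
        for r in range(m):
            if r != i:
                A[r] = [a - A[r][i] * b for a, b in zip(A[r], A[i])]
    return [row[m] for row in A]

def predict(coef, x):
    return sum(c * x ** i for i, c in enumerate(coef))

def search_metric(fit_pts, conf_pts, degree):
    """max(std of fit residuals, std of confirmation residuals); the fit
    residuals stand in here for the paper's PRESS residuals."""
    coef = fit_poly([x for x, _ in fit_pts], [y for _, y in fit_pts], degree)
    def std(pts):
        r = [y - predict(coef, x) for x, y in pts]
        mu = sum(r) / len(r)
        return math.sqrt(sum((e - mu) ** 2 for e in r) / len(r))
    return max(std(fit_pts), std(conf_pts))

rng = random.Random(0)
data = [(x, 1.0 + 2.0 * x + 0.5 * x * x + rng.gauss(0, 0.05))
        for x in [i / 10 for i in range(40)]]
fit_pts, conf_pts = data[::2], data[1::2]          # split into the two subsets
metrics = {d: search_metric(fit_pts, conf_pts, d) for d in (1, 2, 5)}
```

Taking the larger of the two standard deviations penalizes both underfitting (large residuals everywhere) and overfitting (small fit residuals but large confirmation residuals), so on this quadratic data the degree-2 candidate scores best.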
Optimal control in a model of malaria with differential susceptibility
NASA Astrophysics Data System (ADS)
Hincapié, Doracelly; Ospina, Juan
2014-06-01
A malaria model with differential susceptibility is analyzed using the optimal control technique. In the model the human population is classified as susceptible, infected, and recovered. Susceptibility is assumed dependent on genetic, physiological, or social characteristics that vary between individuals. The model is described by a system of differential equations that relate the human and vector populations, so that the infection is transmitted to humans by vectors, and the infection is transmitted to vectors by humans. The model is analyzed using the optimal control method when the control consists of the use of insecticide-treated nets and educational campaigns, and the optimality criterion is to minimize the number of infected humans while keeping the cost as low as possible. The first goal is to determine the effects of differential susceptibility on the proposed control mechanism, and the second goal is to determine the algebraic form of the basic reproductive number of the model. All computations are performed using computer algebra, specifically Maple. It is claimed that the analytical results obtained are important for the design and implementation of control measures for malaria. Some future investigations are suggested, such as applying the method to other vector-borne diseases such as dengue or yellow fever, and the possible use of free computer algebra software such as Maxima.
Optimized volume models of earthquake-triggered landslides
Xu, Chong; Xu, Xiwei; Shen, Lingling; Yao, Qi; Tan, Xibin; Kang, Wenjun; Ma, Siyuan; Wu, Xiyan; Cai, Juntao; Gao, Mingxing; Li, Kang
2016-01-01
In this study, we proposed three optimized models for calculating the total volume of landslides triggered by the 2008 Wenchuan, China Mw 7.9 earthquake. First, we calculated the volume of each deposit of 1,415 landslides triggered by the quake based on pre- and post-quake DEMs at 20 m resolution. These samples were used to fit the conventional landslide “volume-area” power-law relationship and the three optimized models we proposed. Two data-fitting methods, i.e. log-transformed linear and original-data nonlinear least squares, were applied to the four models. Results show that original-data nonlinear least squares combined with an optimized model considering length, width, height, lithology, slope, peak ground acceleration, and slope aspect shows the best performance. This model was subsequently applied to the database of landslides triggered by the quake, except for the two largest ones with known volumes. It indicates that the total volume of the 196,007 landslides is about 1.2 × 10¹⁰ m³ in deposit materials and 1 × 10¹⁰ m³ in source areas, respectively. The result from the relationship between quake magnitude and total landslide volume for individual earthquakes is much less than that from this study, which highlights the need to update the power-law relationship. PMID:27404212
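The log-transformed linear fit of a volume-area power law can be sketched on synthetic data; the coefficients, sample size, and noise level below are illustrative, not the Wenchuan inventory values:

```python
import math, random

rng = random.Random(42)
# Synthetic landslide areas A (m^2) and volumes V following V = a * A^b
# with multiplicative lognormal scatter.
a_true, b_true = 0.05, 1.3
samples = []
for _ in range(500):
    A = 10 ** rng.uniform(2, 5)                               # 1e2 to 1e5 m^2
    V = a_true * A ** b_true * math.exp(rng.gauss(0, 0.2))    # noisy volume
    samples.append((A, V))

# Log-transformed linear least squares: log V = log a + b * log A.
lx = [math.log(A) for A, _ in samples]
ly = [math.log(V) for _, V in samples]
n = len(samples)
mx, my = sum(lx) / n, sum(ly) / n
b_hat = (sum((x - mx) * (y - my) for x, y in zip(lx, ly))
         / sum((x - mx) ** 2 for x in lx))
a_hat = math.exp(my - b_hat * mx)

# Total volume estimated from areas alone via the fitted power law.
total_volume = sum(a_hat * A ** b_hat for A, _ in samples)
```

One reason the study also fits the original data with nonlinear least squares is visible here: under multiplicative scatter the log-space fit targets the median rather than the mean volume, which biases totals summed over a large inventory.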
Simulation/optimization modeling for robust pumping strategy design.
Kalwij, Ineke M; Peralta, Richard C
2006-01-01
A new simulation/optimization modeling approach is presented for addressing uncertain knowledge of aquifer parameters. The Robustness Enhancing Optimizer (REO) couples a genetic algorithm and tabu search as optimizers and incorporates aquifer parameter sensitivity analysis to guide multiple-realization optimization. The REO maximizes strategy robustness for a pumping strategy that is optimal for a primary objective function (OF), such as cost. The more robust a strategy, the more likely it is to achieve management goals in the field, even if the physical system differs from the model. The REO is applied to trinitrotoluene and Royal Demolition Explosive plumes at Umatilla Chemical Depot in Oregon to develop robust least-cost strategies. The REO efficiently develops robust pumping strategies while maintaining the optimal value of the primary OF, in contrast to the common situation in which the primary OF value degrades as strategy reliability increases. The REO is especially valuable where data to develop realistic probability density functions (PDFs) or statistically derived realizations are unavailable. Because they require much less field data, REO-developed strategies might not achieve as high a mathematical reliability as strategies developed using many realizations based upon real aquifer parameter PDFs. REO-developed strategies might or might not yield a better OF value in the field. PMID:16857035
Aeroelastic Optimization Study Based on the X-56A Model
NASA Technical Reports Server (NTRS)
Li, Wesley W.; Pak, Chan-Gi
2014-01-01
One way to increase the aircraft fuel efficiency is to reduce structural weight while maintaining adequate structural airworthiness, both statically and aeroelastically. A design process which incorporates the object-oriented multidisciplinary design, analysis, and optimization (MDAO) tool and the aeroelastic effects of high fidelity finite element models to characterize the design space was successfully developed and established. This paper presents two multidisciplinary design optimization studies using an object-oriented MDAO tool developed at NASA Armstrong Flight Research Center. The first study demonstrates the use of aeroelastic tailoring concepts to minimize the structural weight while meeting the design requirements including strength, buckling, and flutter. Such an approach exploits the anisotropic capabilities of the fiber composite materials chosen for this analytical exercise with ply stacking sequence. A hybrid and discretization optimization approach improves accuracy and computational efficiency of a global optimization algorithm. The second study presents a flutter mass balancing optimization study for the fabricated flexible wing of the X-56A model since a desired flutter speed band is required for the active flutter suppression demonstration during flight testing. The results of the second study provide guidance to modify the wing design and move the design flutter speeds back into the flight envelope so that the original objective of X-56A flight test can be accomplished successfully. The second case also demonstrates that the object-oriented MDAO tool can handle multiple analytical configurations in a single optimization run.
Optimal uncertainty quantification with model uncertainty and legacy data
NASA Astrophysics Data System (ADS)
Kamga, P.-H. T.; Li, B.; McKerns, M.; Nguyen, L. H.; Ortiz, M.; Owhadi, H.; Sullivan, T. J.
2014-12-01
We present an optimal uncertainty quantification (OUQ) protocol for systems that are characterized by an existing physics-based model and for which only legacy data are available, i.e., no additional experimental testing of the system is possible. Specifically, the OUQ strategy developed in this work consists of using the legacy data to establish, in a probabilistic sense, the level of error of the model, or modeling error, and to subsequently use the validated model as a basis for the determination of probabilities of outcomes. The quantification of modeling uncertainty specifically establishes, to a specified confidence, the probability that the actual response of the system lies within a certain distance of the model. Once the extent of model uncertainty has been established in this manner, the model can be conveniently used to stand in for the actual or empirical response of the system in order to compute probabilities of outcomes. To this end, we resort to the OUQ reduction theorem of Owhadi et al. (2013) in order to reduce the computation of optimal upper and lower bounds on probabilities of outcomes to a finite-dimensional optimization problem. We illustrate the resulting UQ protocol by means of an application concerned with the response to hypervelocity impact of 6061-T6 aluminum plates by Nylon 6/6 impactors at impact velocities in the range of 5-7 km/s. The application demonstrates the ability of the legacy OUQ protocol to process diverse information on the system and to supply rigorous bounds on system performance under realistic, less-than-ideal scenarios.
A Simple Model of Optimal Population Coding for Sensory Systems
Doi, Eizaburo; Lewicki, Michael S.
2014-01-01
A fundamental task of a sensory system is to infer information about the environment. It has long been suggested that an important goal of the first stage of this process is to encode the raw sensory signal efficiently by reducing its redundancy in the neural representation. Some redundancy, however, would be expected because it can provide robustness to noise inherent in the system. Encoding the raw sensory signal itself is also problematic, because it contains distortion and noise. The optimal solution would be constrained further by limited biological resources. Here, we analyze a simple theoretical model that incorporates these key aspects of sensory coding, and apply it to conditions in the retina. The model specifies the optimal way to incorporate redundancy in a population of noisy neurons, while also optimally compensating for sensory distortion and noise. Importantly, it allows an arbitrary input-to-output cell ratio between sensory units (photoreceptors) and encoding units (retinal ganglion cells), providing predictions of retinal codes at different eccentricities. Compared to earlier models based on redundancy reduction, the proposed model conveys more information about the original signal. Interestingly, redundancy reduction can be near-optimal when the number of encoding units is limited, such as in the peripheral retina. We show that there exist multiple, equally-optimal solutions whose receptive field structure and organization vary significantly. Among these, the one which maximizes the spatial locality of the computation, but not the sparsity of either synaptic weights or neural responses, is consistent with known basic properties of retinal receptive fields. The model further predicts that receptive field structure changes less with light adaptation at higher input-to-output cell ratios, such as in the periphery. PMID:25121492
Health benefit modelling and optimization of vehicular pollution control strategies
NASA Astrophysics Data System (ADS)
Sonawane, Nayan V.; Patil, Rashmi S.; Sethi, Virendra
2012-12-01
This study asserts that the evaluation of pollution reduction strategies should be approached on the basis of health benefits. The framework presented could be used for decision making on the basis of cost effectiveness when the strategies are applied concurrently. Several vehicular pollution control strategies have been proposed in the literature for effective management of urban air pollution. The effectiveness of these strategies has mostly been studied with a one-at-a-time approach on the basis of change in pollution concentration. The adequacy and practicality of such an approach is examined in the present work, and the respective benefits of these strategies are assessed when they are implemented simultaneously. An integrated model has been developed which can be used as a tool for optimal prioritization of various pollution management strategies. The model estimates health benefits associated with specific control strategies. ISC-AERMOD View has been used to provide the cause-effect relation between control options and change in ambient air quality. BenMAP, developed by the U.S. EPA, has been applied for estimation of health and economic benefits associated with various management strategies. Valuation of health benefits has been done for the impact indicators of premature mortality, hospital admissions and respiratory syndrome. An optimization model has been developed to maximize overall social benefits by determining optimized percentage implementations for multiple strategies. The model has been applied to the vehicular sector in a suburban region of Mumbai. Several control scenarios have been considered, such as revised emission standards and electric, CNG, LPG and hybrid vehicles. Reductions in concentration and the resultant health benefits for the pollutants CO, NOx and particulate matter are estimated for different control scenarios. Finally, the optimization model has been applied to determine the optimized percentage implementation of specific control strategies.
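The final optimization step described above can be illustrated with a toy version: choose percentage implementation levels for two control strategies to maximize total health benefit under a budget. The benefit and cost coefficients and the budget below are hypothetical, not values from the Mumbai case study:

```python
# Illustrative sketch of maximizing social benefit over percentage
# implementation levels of two strategies, subject to a budget.
# Coefficients are hypothetical: (benefit per %, cost per %).

strategies = {"CNG_conversion": (0.8, 1.0), "revised_standards": (0.5, 0.4)}
budget = 60.0

best = None
for x in range(101):            # % implementation of CNG conversion
    for y in range(101):        # % implementation of revised standards
        cost = strategies["CNG_conversion"][1] * x + strategies["revised_standards"][1] * y
        if cost <= budget:
            benefit = strategies["CNG_conversion"][0] * x + strategies["revised_standards"][0] * y
            if best is None or benefit > best[0]:
                best = (benefit, x, y)

benefit, pct_cng, pct_std = best
```

The cheaper-per-benefit strategy is implemented fully and the remaining budget goes to the other, mirroring how an optimizer allocates partial implementation percentages across strategies.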
A model for HIV/AIDS pandemic with optimal control
NASA Astrophysics Data System (ADS)
Sule, Amiru; Abdullah, Farah Aini
2015-05-01
Human immunodeficiency virus and acquired immune deficiency syndrome (HIV/AIDS) is pandemic: it has affected nearly 60 million people since the detection of the disease in 1981. In this paper a basic deterministic HIV/AIDS model with a mass-action incidence function is developed and its stability analysis is carried out. The disease-free equilibrium of the basic model is found to be locally asymptotically stable whenever the threshold parameter (R0) is less than one, and unstable otherwise. The model is extended by introducing two optimal control strategies, namely CD4 counts and treatment for the infective, using optimal control theory. Numerical simulations are carried out to illustrate the analytic results.
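A minimal numerical sketch of such a mass-action model (not the paper's exact system; all parameter values are illustrative) shows the disease-free equilibrium attracting the dynamics when R0 < 1:

```python
# Hypothetical mass-action SI model:
#   S' = Lambda - beta*S*I - mu*S
#   I' = beta*S*I - (mu + delta)*I
# with basic reproduction number R0 = beta*Lambda / (mu*(mu + delta)).
# Parameter values are illustrative, not from the paper.

Lambda, mu, delta = 10.0, 0.1, 0.2
beta = 0.002                      # chosen so that R0 < 1
R0 = beta * Lambda / (mu * (mu + delta))

def simulate(S0, I0, dt=0.01, steps=200_000):
    # Forward-Euler integration of the ODE system.
    S, I = S0, I0
    for _ in range(steps):
        dS = Lambda - beta * S * I - mu * S
        dI = beta * S * I - (mu + delta) * I
        S, I = S + dt * dS, I + dt * dI
    return S, I

S_end, I_end = simulate(S0=90.0, I0=10.0)
```

With R0 below one, the infected compartment decays toward zero and the susceptibles approach the disease-free level Lambda/mu, consistent with the local stability result quoted in the abstract.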
NASA Astrophysics Data System (ADS)
Pham, H. V.; Tsai, F. T. C.
2014-12-01
Groundwater systems are complex and subject to multiple interpretations and conceptualizations due to a lack of sufficient information. As a result, multiple conceptual models are often developed, and their mean predictions are preferably used to avoid the biased predictions of a single conceptual model. Yet considering too many conceptual models may lead to high prediction uncertainty and may defeat the purpose of model development. In order to reduce the number of models, an optimal observation network design is proposed based on maximizing the Kullback-Leibler (KL) information to discriminate between competing models. The KL discrimination function derived by Box and Hill [1967] for one additional observation datum at a time is expanded to account for multiple independent spatiotemporal observations. The Bayesian model averaging (BMA) method is used to incorporate existing data and quantify future observation uncertainty arising from conceptual and parametric uncertainties in the discrimination function. To consider the future observation uncertainty, Monte Carlo realizations of BMA-predicted future observations are used to calculate the mean and variance of the posterior model probabilities of the competing models. The goal of the optimal observation network design is to find the number and location of observation wells and sampling rounds such that the highest posterior model probability of a model is larger than a desired probability criterion (e.g., 95%). The optimal observation network design is applied to a groundwater study in the Baton Rouge area, Louisiana, to collect new groundwater heads from USGS wells. The considered sources of uncertainty that create multiple groundwater models are the geological architecture, the boundary condition, and the fault permeability architecture. All possible design solutions are enumerated using high performance computing systems. Results show that the total model variance (the sum of the within-model variance and the between-model variance)
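The Bayesian updating at the core of the design criterion can be sketched as follows, with three hypothetical conceptual models giving Gaussian predictions of a future head observation. A design is accepted when the best model's posterior probability exceeds the criterion. Numbers are illustrative, not from the Baton Rouge study:

```python
import math

# Competing conceptual models give Gaussian predictions of a future head
# observation; Bayes' rule yields posterior model probabilities, and a
# design is accepted when the best model's posterior exceeds e.g. 0.95.

def gauss_pdf(x, mean, sd):
    return math.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

# Predicted head (m) and predictive standard deviation for three models.
models = {"M1": (10.0, 0.5), "M2": (12.0, 0.5), "M3": (14.0, 0.5)}
prior = {name: 1.0 / len(models) for name in models}

def posterior(observed_head):
    weights = {m: prior[m] * gauss_pdf(observed_head, mu, sd)
               for m, (mu, sd) in models.items()}
    total = sum(weights.values())
    return {m: w / total for m, w in weights.items()}

post = posterior(10.1)                 # suppose the new well observes 10.1 m
best_model = max(post, key=post.get)
meets_criterion = post[best_model] >= 0.95
```

In the full method the observation itself is uncertain, so this calculation is repeated over Monte Carlo realizations of the BMA-predicted observation to obtain the mean and variance of the posterior probabilities.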
Linear versus quadratic portfolio optimization model with transaction cost
NASA Astrophysics Data System (ADS)
Razak, Norhidayah Bt Ab; Kamil, Karmila Hanim; Elias, Siti Masitah
2014-06-01
Optimization models have become one of the decision-making tools in investment, and it is always a big challenge for investors to select the model that best fulfills their investment goals with respect to risk and return. In this paper we aim to discuss and compare the portfolio allocations and performance generated by quadratic and linear portfolio optimization models, namely the Markowitz and Maximin models respectively. The application of these models has been proven to be significant and popular. However, transaction cost is an important aspect that should be considered in portfolio reallocation, as portfolio return can be significantly reduced when transaction cost is taken into account. Recognizing the importance of transaction cost in calculating portfolio return, we use data from Shariah-compliant securities listed on Bursa Malaysia. It is expected that the results will effectively justify the advantage of one model over another and shed some light on the quest to find the best decision-making tool in investment for individual investors.
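The contrast between the two objectives, and the effect of transaction cost, can be sketched with a toy two-asset example. Scenario returns, the cost rate, and the risk-aversion coefficient are hypothetical, not Bursa Malaysia data:

```python
# Quadratic mean-variance objective (Markowitz-style) versus linear maximin
# objective for a two-asset portfolio, with a proportional transaction cost
# charged on the change from the current allocation and netted out of each
# scenario return. All data are illustrative.

scenarios = [(0.10, 0.02), (0.04, 0.03), (-0.05, 0.01), (0.08, 0.02)]
current_w = 0.5          # current weight in asset A (rest in asset B)
cost_rate = 0.01         # proportional transaction cost
risk_aversion = 2.0

def net_returns(w):
    cost = cost_rate * abs(w - current_w) * 2      # buy one side, sell the other
    return [w * ra + (1 - w) * rb - cost for ra, rb in scenarios]

def mean_variance_score(w):                        # quadratic objective
    r = net_returns(w)
    mean = sum(r) / len(r)
    var = sum((x - mean) ** 2 for x in r) / len(r)
    return mean - risk_aversion * var

def maximin_score(w):                              # linear objective
    return min(net_returns(w))

grid = [i / 100 for i in range(101)]
w_mv = max(grid, key=mean_variance_score)
w_mm = max(grid, key=maximin_score)
```

Here the maximin model moves entirely into the low-volatility asset, while the mean-variance model stays at the current allocation: the transaction-cost kink at the current weight outweighs the marginal gain from reallocating, which is exactly the effect the paper argues should not be ignored.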
Parameter optimization in differential geometry based solvation models.
Wang, Bao; Wei, G W
2015-10-01
Differential geometry (DG) based solvation models are a new class of variational implicit solvent approaches that are able to avoid unphysical solvent-solute boundary definitions and associated geometric singularities, and dynamically couple polar and non-polar interactions in a self-consistent framework. Our earlier study indicates that the DG based non-polar solvation model outperforms other methods in non-polar solvation energy predictions. However, the DG based full solvation model has not shown its superiority in solvation analysis, due to its difficulty in parametrization, which must ensure the stability of the solution of the strongly coupled nonlinear Laplace-Beltrami and Poisson-Boltzmann equations. In this work, we introduce new parameter learning algorithms based on perturbation and convex optimization theories to stabilize the numerical solution and thus achieve an optimal parametrization of the DG based solvation models. An interesting feature of the present DG based solvation model is that it provides accurate solvation free energy predictions for both polar and non-polar molecules in a unified formulation. Extensive numerical experiments demonstrate that the present DG based solvation model delivers some of the most accurate predictions of the solvation free energies for a large number of molecules. PMID:26450304
The PDB_REDO server for macromolecular structure model optimization
Joosten, Robbie P.; Long, Fei; Murshudov, Garib N.; Perrakis, Anastassis
2014-01-01
The refinement and validation of a crystallographic structure model is the last step before the coordinates and the associated data are submitted to the Protein Data Bank (PDB). The success of the refinement procedure is typically assessed by validating the models against geometrical criteria and the diffraction data, and is an important step in ensuring the quality of the PDB public archive [Read et al. (2011), Structure, 19, 1395–1412]. The PDB_REDO procedure aims for ‘constructive validation’, aspiring to consistent and optimal refinement parameterization and pro-active model rebuilding, not only correcting errors but striving for optimal interpretation of the electron density. A web server for PDB_REDO has been implemented, allowing thorough, consistent and fully automated optimization of the refinement procedure in REFMAC and partial model rebuilding. The goal of the web server is to help practicing crystallographers to improve their model prior to submission to the PDB. For this, additional steps were implemented in the PDB_REDO pipeline, both in the refinement procedure, e.g. testing of resolution limits and k-fold cross-validation for small test sets, and as new validation criteria, e.g. the density-fit metrics implemented in EDSTATS and ligand validation as implemented in YASARA. Innovative ways to present the refinement and validation results to the user are also described, which together with auto-generated Coot scripts can guide users to subsequent model inspection and improvement. It is demonstrated that using the server can lead to substantial improvement of structure models before they are submitted to the PDB. PMID:25075342
Optimal model-free prediction from multivariate time series.
Runge, Jakob; Donner, Reik V; Kurths, Jürgen
2015-05-01
Forecasting a time series from multivariate predictors constitutes a challenging problem, especially using model-free approaches. Most techniques, such as nearest-neighbor prediction, quickly suffer from the curse of dimensionality and overfitting for more than a few predictors which has limited their application mostly to the univariate case. Therefore, selection strategies are needed that harness the available information as efficiently as possible. Since often the right combination of predictors matters, ideally all subsets of possible predictors should be tested for their predictive power, but the exponentially growing number of combinations makes such an approach computationally prohibitive. Here a prediction scheme that overcomes this strong limitation is introduced utilizing a causal preselection step which drastically reduces the number of possible predictors to the most predictive set of causal drivers making a globally optimal search scheme tractable. The information-theoretic optimality is derived and practical selection criteria are discussed. As demonstrated for multivariate nonlinear stochastic delay processes, the optimal scheme can even be less computationally expensive than commonly used suboptimal schemes like forward selection. The method suggests a general framework to apply the optimal model-free approach to select variables and subsequently fit a model to further improve a prediction or learn statistical dependencies. The performance of this framework is illustrated on a climatological index of El Niño Southern Oscillation. PMID:26066231
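The two-stage idea, preselecting informative predictors and then running a model-free nearest-neighbor forecast in the reduced space, can be sketched on synthetic data. Absolute lagged correlation stands in here for the paper's information-theoretic causal criterion, and the data are synthetic rather than a climate index:

```python
import random, math

# Stage 1: score candidate lag-1 predictors of the target (correlation as a
# simple stand-in for the causal/information-theoretic preselection).
# Stage 2: 1-nearest-neighbor forecast using only the selected predictor.

random.seed(1)
n = 500
driver = [random.gauss(0, 1) for _ in range(n)]
noise_var = [[random.gauss(0, 1) for _ in range(n)] for _ in range(3)]
# The target depends on the driver at lag 1 plus small noise.
target = [0.0] + [0.9 * driver[t - 1] + 0.1 * random.gauss(0, 1) for t in range(1, n)]

def corr(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    dx = math.sqrt(sum((a - mx) ** 2 for a in x))
    dy = math.sqrt(sum((b - my) ** 2 for b in y))
    return num / (dx * dy)

candidates = {"driver": driver, "n0": noise_var[0], "n1": noise_var[1], "n2": noise_var[2]}
scores = {name: abs(corr(x[:-1], target[1:])) for name, x in candidates.items()}
selected = max(scores, key=scores.get)

train = list(range(1, 400))
def knn_forecast(t):
    nearest = min(train, key=lambda s: abs(candidates[selected][s - 1] - candidates[selected][t - 1]))
    return target[nearest]

errors = [abs(knn_forecast(t) - target[t]) for t in range(400, n)]
mae = sum(errors) / len(errors)
```

Preselection identifies the true driver among the noise candidates, so the nearest-neighbor search runs in one dimension instead of four, which is the dimensionality reduction the abstract argues makes the optimal scheme tractable.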
Modeling and Optimizing Space Networks for Improved Communication Capacity
NASA Astrophysics Data System (ADS)
Spangelo, Sara C.
There are a growing number of individual and constellation small satellite missions seeking to download large quantities of science, observation, and surveillance data. The existing ground station infrastructure to support these missions constrains the potential data throughput because the stations are low-cost, are not always available because they are independently owned and operated, and their ability to collect data is often inefficient. The constraints of the small satellite form factor (e.g. mass, size, power) coupled with the ground network limitations lead to significant operational and communication scheduling challenges. Faced with these challenges, our goal is to maximize capacity, defined as the amount of data that is successfully downloaded from space to ground communication nodes. In this thesis, we develop models, tools, and optimization algorithms for spacecraft and ground network operations. First, we develop an analytical modeling framework and a high-fidelity simulation environment that capture the interaction of on-board satellite energy and data dynamics, ground stations, and the external space environment. Second, we perform capacity-based assessments to identify excess and deficient resources for comparison to mission-specific requirements. Third, we formulate and solve communication scheduling problems that maximize communication capacity for a satellite downloading to a network of globally and functionally heterogeneous ground stations. Numeric examples demonstrate the applicability of the models and tools to assess and optimize real-world existing and upcoming small satellite mission scenarios that communicate to global ground station networks as well as generic communication scheduling problem instances. We study properties of optimal satellite communication schedules and sensitivity of communication capacity to various deterministic and stochastic satellite vehicle and network parameters. The models, tools, and optimization techniques we
A new adaptive hybrid electromagnetic damper: modelling, optimization, and experiment
NASA Astrophysics Data System (ADS)
Asadi, Ehsan; Ribeiro, Roberto; Behrad Khamesee, Mir; Khajepour, Amir
2015-07-01
This paper presents the development of a new electromagnetic hybrid damper which provides regenerative adaptive damping force for various applications. Recently, the introduction of electromagnetic technologies to damping systems has provided researchers with new opportunities for the realization of adaptive semi-active damping systems with the added benefit of energy recovery. In this research, a hybrid electromagnetic damper is proposed. The hybrid damper is configured to operate with viscous and electromagnetic subsystems. The viscous medium provides a bias and fail-safe damping force while the electromagnetic component adds adaptability and the capacity for regeneration to the hybrid design. The electromagnetic component is modeled and analyzed using analytical (lumped equivalent magnetic circuit) and electromagnetic finite element method (FEM) (COMSOL® software package) approaches. By implementing both modeling approaches, an optimization for the geometric aspects of the electromagnetic subsystem is obtained. Based on the proposed electromagnetic hybrid damping concept and the preliminary optimization solution, a prototype is designed and fabricated. A good agreement is observed between the experimental and FEM results for the magnetic field distribution and electromagnetic damping forces. These results validate the accuracy of the modeling approach and the preliminary optimization solution. An analytical model is also presented for the viscous damping force and is compared with experimental results. The results show that the damper is able to produce damping coefficients of 1300 N s m-1 through the viscous component and 0-238 N s m-1 through the electromagnetic component.
A Formal Approach to Empirical Dynamic Model Optimization and Validation
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Morelli, Eugene A.; Kenny, Sean P.; Giesy, Daniel P.
2014-01-01
A framework was developed for the optimization and validation of empirical dynamic models subject to an arbitrary set of validation criteria. The validation requirements imposed upon the model, which may involve several sets of input-output data and arbitrary specifications in time and frequency domains, are used to determine if model predictions are within admissible error limits. The parameters of the empirical model are estimated by finding the parameter realization for which the smallest of the margins of requirement compliance is as large as possible. The uncertainty in the value of this estimate is characterized by studying the set of model parameters yielding predictions that comply with all the requirements. Strategies are presented for bounding this set, studying its dependence on admissible prediction error set by the analyst, and evaluating the sensitivity of the model predictions to parameter variations. This information is instrumental in characterizing uncertainty models used for evaluating the dynamic model at operating conditions differing from those used for its identification and validation. A practical example based on the short period dynamics of the F-16 is used for illustration.
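The estimation rule described above, choosing the parameter whose smallest requirement-compliance margin is largest, can be sketched with a hypothetical one-parameter model and two input-output data sets (the model form, data, and error limits are all illustrative, not the F-16 example):

```python
# Max-min parameter estimation: the margin of a validation requirement is
# its admissible error limit minus the worst prediction error on that data
# set; the estimate maximizes the smallest margin, and parameters with all
# margins positive form the requirement-compliant set.

# Two hypothetical data sets: (inputs, outputs, admissible error limit)
datasets = [
    ([1.0, 2.0, 3.0], [2.1, 3.9, 6.2], 0.5),
    ([1.5, 2.5],      [3.2, 4.8],      0.4),
]

def min_margin(a):
    margins = []
    for xs, ys, limit in datasets:
        worst = max(abs(a * x - y) for x, y in zip(xs, ys))
        margins.append(limit - worst)   # positive = requirement met with room
    return min(margins)

grid = [1.5 + i * 0.001 for i in range(1001)]   # candidate parameter values
a_hat = max(grid, key=min_margin)
# Parameters with min_margin > 0 comply with all validation requirements.
compliant = [a for a in grid if min_margin(a) > 0]
```

The compliant set plays the role of the abstract's uncertainty characterization: its extent shows how tightly the validation requirements pin down the parameter.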
Multidisciplinary optimization of aeroservoelastic systems using reduced-size models
NASA Technical Reports Server (NTRS)
Karpel, Mordechay
1992-01-01
Efficient analytical and computational tools for simultaneous optimal design of the structural and control components of aeroservoelastic systems are presented. The optimization objective is to achieve aircraft performance requirements and sufficient flutter and control stability margins with a minimal weight penalty and without violating the design constraints. Analytical sensitivity derivatives facilitate an efficient optimization process which allows a relatively large number of design variables. Standard finite element and unsteady aerodynamic routines are used to construct a modal data base. Minimum State aerodynamic approximations and dynamic residualization methods are used to construct a high accuracy, low order aeroservoelastic model. Sensitivity derivatives of flutter dynamic pressure, control stability margins and control effectiveness with respect to structural and control design variables are presented. The performance requirements are utilized by equality constraints which affect the sensitivity derivatives. A gradient-based optimization algorithm is used to minimize an overall cost function. A realistic numerical example of a composite wing with four controls is used to demonstrate the modeling technique, the optimization process, and their accuracy and efficiency.
Optimal inference with suboptimal models: Addiction and active Bayesian inference
Schwartenbeck, Philipp; FitzGerald, Thomas H.B.; Mathys, Christoph; Dolan, Ray; Wurst, Friedrich; Kronbichler, Martin; Friston, Karl
2015-01-01
When casting behaviour as active (Bayesian) inference, optimal inference is defined with respect to an agent’s beliefs – based on its generative model of the world. This contrasts with normative accounts of choice behaviour, in which optimal actions are considered in relation to the true structure of the environment – as opposed to the agent’s beliefs about worldly states (or the task). This distinction shifts an understanding of suboptimal or pathological behaviour away from aberrant inference as such, to understanding the prior beliefs of a subject that cause them to behave less ‘optimally’ than our prior beliefs suggest they should behave. Put simply, suboptimal or pathological behaviour does not speak against understanding behaviour in terms of (Bayes optimal) inference, but rather calls for a more refined understanding of the subject’s generative model upon which their (optimal) Bayesian inference is based. Here, we discuss this fundamental distinction and its implications for understanding optimality, bounded rationality and pathological (choice) behaviour. We illustrate our argument using addictive choice behaviour in a recently described ‘limited offer’ task. Our simulations of pathological choices and addictive behaviour also generate some clear hypotheses, which we hope to pursue in ongoing empirical work. PMID:25561321
Multiobjective sensitivity analysis and optimization of distributed hydrologic model MOBIDIC
NASA Astrophysics Data System (ADS)
Yang, J.; Castelli, F.; Chen, Y.
2014-10-01
Calibration of distributed hydrologic models usually involves dealing with a large number of distributed parameters, and optimization problems with multiple, often conflicting objectives arise in a natural fashion. This study presents a multiobjective sensitivity and optimization approach to handle these problems for the MOBIDIC (MOdello di Bilancio Idrologico DIstribuito e Continuo) distributed hydrologic model, which combines two sensitivity analysis techniques (the Morris method and the state-dependent parameter (SDP) method) with the multiobjective optimization (MOO) algorithm ɛ-NSGAII (Non-dominated Sorting Genetic Algorithm-II). This approach was used to calibrate MOBIDIC in an application to the Davidson watershed, North Carolina, with three objective functions: the standardized root mean square error (SRMSE) of logarithmic transformed discharge, the water balance index, and the mean absolute error of the logarithmic transformed flow duration curve. Its results were compared with those of a single-objective optimization (SOO) using the traditional Nelder-Mead simplex algorithm in MOBIDIC, taking as the objective function the Euclidean norm of these three objectives. Results show that (1) the two sensitivity analysis techniques are effective and efficient for determining the sensitive processes and insensitive parameters: surface runoff and evaporation are very sensitive processes for all three objective functions, while groundwater recession and soil hydraulic conductivity are not sensitive and were excluded from the optimization. (2) Both MOO and SOO lead to acceptable simulations; e.g., for MOO, the average Nash-Sutcliffe value is 0.75 in the calibration period and 0.70 in the validation period. (3) Evaporation and surface runoff show similar importance for watershed water balance, while the contribution of baseflow can be ignored. (4) Compared to SOO, which was dependent on the initial starting location, MOO provides more
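The Pareto machinery underlying ɛ-NSGAII can be illustrated by extracting the non-dominated front from a handful of candidate calibrations. The objective vectors below are illustrative triples of minimized criteria (e.g. SRMSE, water-balance error, flow-duration-curve error), not values from the Davidson study:

```python
# Core of any non-dominated sorting scheme: identify the candidates that
# no other candidate dominates under multiple minimized objectives.

def dominates(a, b):
    # a dominates b if it is no worse in every objective and better in one.
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def first_front(points):
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

candidates = [
    (0.30, 0.10, 0.20),   # non-dominated
    (0.25, 0.15, 0.25),   # non-dominated (trades objectives with the first)
    (0.35, 0.12, 0.22),   # dominated by the first
    (0.40, 0.20, 0.30),   # dominated by the first
]
front = first_front(candidates)
```

A single-objective run collapses these trade-offs into one scalar and returns one point; the front above is the extra information MOO provides.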
Advanced Nuclear Fuel Cycle Transitions: Optimization, Modeling Choices, and Disruptions
NASA Astrophysics Data System (ADS)
Carlsen, Robert W.
Many nuclear fuel cycle simulators have evolved over time to help understand the nuclear industry/ecosystem at a macroscopic level. Cyclus is one of the first fuel cycle simulators to accommodate larger-scale analysis with its liberal open-source licensing and first-class Linux support. Cyclus also has features that uniquely enable investigating the effects of modeling choices on fuel cycle simulators and scenarios. This work is divided into three experiments focusing on optimization, effects of modeling choices, and fuel cycle uncertainty. Effective optimization techniques are developed for automatically determining desirable facility deployment schedules with Cyclus. A novel method for mapping optimization variables to deployment schedules is developed. This allows relationships between reactor types and scenario constraints to be represented implicitly in the variable definitions, enabling the usage of optimizers lacking constraint support. It also prevents wasting computational resources evaluating infeasible deployment schedules. Deployed power capacity over time and deployment of non-reactor facilities are also included as optimization variables. There are many fuel cycle simulators built with different combinations of modeling choices, and comparing results between them is often difficult. Cyclus' flexibility allows comparing the effects of many such modeling choices. Reactor refueling cycle synchronization and inter-facility competition, among other effects, are compared in four cases, each using combinations of fleet-based or individually modeled reactors with 1-month or 3-month time steps. There are noticeable differences in results for the different cases; the largest differences occur during periods of constrained reactor fuel availability. This and similar work can help improve the quality of fuel cycle analysis generally. There is significant uncertainty associated with deploying new nuclear technologies, such as time-frames for technology availability and the cost of building advanced reactors.
An optimization model for long-range transmission expansion planning
Santos, A. Jr.; Franca, P.M.; Said, A.
1989-02-01
This paper presents a static network synthesis method applied to transmission expansion planning. The static synthesis problem is formulated as a mixed-integer network flow model that is solved by an implicit enumeration algorithm. The objective function of this model seeks the most productive trade-off, resulting in low investment costs and good electrical performance. The load and generation nodal equations are included in the constraints of the model, and the power transmission law of DC load flow is implicit in the optimization model. Results of computational tests are presented, and they show the advantage of this method compared with a heuristic procedure. The case studies compare the computational times and solution costs obtained for the Brazilian North-Northeast transmission system.
CPOPT : optimization for fitting CANDECOMP/PARAFAC models.
Dunlavy, Daniel M.; Kolda, Tamara Gibson; Acar, Evrim
2008-10-01
Tensor decompositions (e.g., higher-order analogues of matrix decompositions) are powerful tools for data analysis. In particular, the CANDECOMP/PARAFAC (CP) model has proved useful in many applications such as chemometrics, signal processing, and web analysis. The problem of computing the CP decomposition is typically solved using an alternating least squares (ALS) approach. We discuss the use of optimization-based algorithms for CP, including how to efficiently compute the derivatives necessary for the optimization methods. Numerical studies highlight the positive features of our CPOPT algorithms, as compared with ALS and Gauss-Newton approaches.
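For reference, the ALS baseline that CPOPT is compared against can be sketched with NumPy for a third-order tensor. The sizes, rank, and near-truth initialization below are illustrative choices for a quick, reliable demonstration, not part of the paper:

```python
import numpy as np

# CP model for a third-order tensor: X[i,j,k] = sum_r A[i,r] B[j,r] C[k,r].
# ALS fixes two factors and solves a linear least-squares problem for the third.

rng = np.random.default_rng(0)
I, J, K, R = 6, 5, 4, 2
A0 = rng.standard_normal((I, R))
B0 = rng.standard_normal((J, R))
C0 = rng.standard_normal((K, R))
X = np.einsum('ir,jr,kr->ijk', A0, B0, C0)   # exact rank-R tensor

def khatri_rao(U, V):
    # Column-wise Kronecker product; rows ordered as (row of U, row of V).
    return np.einsum('ir,jr->ijr', U, V).reshape(-1, U.shape[1])

def unfold(T, mode):
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

# Initialize near the true factors so ALS converges quickly in this demo.
A = A0 + 0.1 * rng.standard_normal((I, R))
B = B0 + 0.1 * rng.standard_normal((J, R))
C = C0 + 0.1 * rng.standard_normal((K, R))
for _ in range(500):   # ALS sweeps
    A = unfold(X, 0) @ np.linalg.pinv(khatri_rao(B, C).T)
    B = unfold(X, 1) @ np.linalg.pinv(khatri_rao(A, C).T)
    C = unfold(X, 2) @ np.linalg.pinv(khatri_rao(A, B).T)

fit_error = np.linalg.norm(X - np.einsum('ir,jr,kr->ijk', A, B, C)) / np.linalg.norm(X)
```

CPOPT instead treats the same fit function as a single optimization problem over all factors at once, using analytically computed gradients rather than alternating exact solves.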
A mathematical model on the optimal timing of offspring desertion.
Seno, Hiromi; Endo, Hiromi
2007-06-01
We consider the offspring desertion as the optimal strategy for the deserter parent, analyzing a mathematical model for its expected reproductive success. It is shown that the optimality of the offspring desertion significantly depends on the offsprings' birth timing in the mating season, and on the other ecological parameters characterizing the innate nature of considered animals. Especially, the desertion is less likely to occur for the offsprings born in the later period of mating season. It is also implied that the offspring desertion after a partially biparental care would be observable only with a specific condition. PMID:17328918
Dynamic stochastic optimization models for air traffic flow management
NASA Astrophysics Data System (ADS)
Mukherjee, Avijit
This dissertation presents dynamic stochastic optimization models for Air Traffic Flow Management (ATFM) that enable decisions to adapt to new information on evolving capacities of National Airspace System (NAS) resources. Uncertainty is represented by a set of capacity scenarios, each depicting a particular time-varying capacity profile of NAS resources. We use the concept of a scenario tree in which multiple scenarios are possible initially. Scenarios are eliminated as possibilities in a succession of branching points, until the specific scenario that will be realized on a particular day is known. The scenario tree branching thus provides updated information on evolving scenarios, and allows ATFM decisions to be re-addressed and revised. First, we propose a dynamic stochastic model for the single airport ground holding problem (SAGHP) that can be used for planning Ground Delay Programs (GDPs) when there is uncertainty about future airport arrival capacities. Ground delays of non-departed flights can be revised based on updated information from scenario tree branching. The problem is formulated so that a wide range of objective functions, including non-linear delay cost functions and functions that reflect equity concerns, can be optimized. Furthermore, the model improves on existing practice by ensuring efficient use of available capacity without necessarily exempting long-haul flights. Following this, we present a methodology and optimization models that can be used for decentralized decision making by individual airlines in the GDP planning process, using the solutions from the stochastic dynamic SAGHP. Airlines are allowed to perform cancellations and re-allocate slots to remaining flights by substitutions. We also present an optimization model that can be used by the FAA, after the airlines perform cancellations and substitutions, to re-utilize vacant arrival slots that are created due to cancellations. Finally, we present three stochastic integer programming
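The core ground-holding trade-off, cheap ground delay committed before the arrival-capacity scenario is known versus expensive airborne delay afterward, can be sketched by enumeration. The costs, capacities, and probabilities below are illustrative, not from the dissertation:

```python
# Single-period toy version of the stochastic ground-holding decision:
# choose how many flights to hold on the ground before capacity is known,
# minimizing expected total delay cost over the scenarios.

flights = 3                      # flights scheduled into the congested period
ground_cost = 1.0                # cost per flight held on the ground
airborne_cost = 5.0              # cost per flight delayed in the air
scenarios = [(0.3, 1), (0.7, 3)] # (probability, realized arrival capacity)

def expected_cost(held):
    cost = ground_cost * held
    for prob, capacity in scenarios:
        overflow = max(0, (flights - held) - capacity)  # absorbed in the air
        cost += prob * airborne_cost * overflow
    return cost

best_hold = min(range(flights + 1), key=expected_cost)
```

The optimal plan hedges against the low-capacity scenario by holding some flights even though capacity is usually sufficient; the scenario-tree models in the dissertation extend this by letting the holds be revised as branching reveals which scenario is unfolding.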
Fabrication, modeling and optimization of an ionic polymer gel actuator
NASA Astrophysics Data System (ADS)
Jo, Choonghee; Naguib, Hani E.; Kwon, Roy H.
2011-04-01
The modeling of the electro-active behavior of ionic polymer gel is studied and the optimum conditions that maximize the deflection of the gel are investigated. The bending deformation of polymer gel under an electric field is formulated by using chemo-electro-mechanical parameters. In the modeling, swelling and shrinking phenomena due to the differences in ion concentration at the boundary between the gel and solution are considered prior to the application of an electric field, and then bending actuation is applied. As the driving force of swelling, shrinking and bending deformation, differential osmotic pressure at the boundary of the gel and solution is considered. From this behavior, the strain or deflection of the gel is calculated. To find the optimum design parameter settings (electric voltage, thickness of gel, concentration of polyion in the gel, ion concentration in the solution, and degree of cross-linking in the gel) for bending deformation, a nonlinear constrained optimization model is formulated. In the optimization model, a bending deflection equation of the gel is used as an objective function, and a range of decision variables and their relationships are used as constraint equations. Also, actuation experiments are conducted using poly(2-acrylamido-2-methylpropane sulfonic acid) (PAMPS) gel and the optimum conditions predicted by the proposed model have been verified by the experiments.
Discrete-Time ARMAv Model-Based Optimal Sensor Placement
Song Wei; Dyke, Shirley J.
2008-07-08
This paper concentrates on the optimal sensor placement problem in ambient vibration based structural health monitoring. More specifically, the paper examines the covariance of estimated parameters during system identification using an auto-regressive and moving average vector (ARMAv) model. By utilizing the discrete-time steady state Kalman filter, this paper realizes the structure's finite element (FE) model under broad-band white noise excitations using an ARMAv model. Based on the asymptotic distribution of the parameter estimates of the ARMAv model, both a theoretical closed form and a numerical estimate form of the covariance of the estimates are obtained. Introducing the information entropy (differential entropy) measure, as well as various matrix norms, this paper attempts to find a reasonable measure of the uncertainties embedded in the ARMAv model estimates. Thus, it is possible to select the optimal sensor placement that would lead to the smallest uncertainties during the ARMAv identification process. Two numerical examples are provided to demonstrate the methodology and compare the sensor placement results under various measures.
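The core selection step can be sketched with a much simpler information criterion than the paper's ARMAv covariance: score each candidate sensor subset by the log-determinant of a Fisher-information-style matrix built from mode shapes, and keep the subset with the largest value (equivalently, the smallest entropy-style uncertainty). The mode-shape matrix here is random placeholder data, not a real structure.

```python
import itertools
import numpy as np

# Hypothetical mode shapes of a 4-DOF structure (rows: modes, columns: DOFs).
# Placing sensors at a DOF subset keeps only those columns; the information
# matrix phi_S @ phi_S.T measures how well the modal parameters are
# constrained, and log det is the entropy-style scalar summary.
rng = np.random.default_rng(0)
phi = rng.standard_normal((3, 4))  # 3 modes observed at 4 candidate locations

def log_det_info(sensors):
    sub = phi[:, list(sensors)]          # mode shapes at the chosen sensors
    sign, logdet = np.linalg.slogdet(sub @ sub.T)
    return logdet if sign > 0 else -np.inf

best = max(itertools.combinations(range(4), 3), key=log_det_info)
print(best)
```

For realistic problems the exhaustive enumeration would be replaced by a greedy or heuristic search, and the information matrix by the ARMAv parameter covariance the paper derives.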
Optimization of wind farm performance using low-order models
NASA Astrophysics Data System (ADS)
Dabiri, John; Brownstein, Ian
2015-11-01
A low order model that captures the dominant flow behaviors in a vertical-axis wind turbine (VAWT) array is used to maximize the power output of wind farms utilizing VAWTs. The leaky Rankine body model (LRB) was shown by Araya et al. (JRSE 2014) to predict the ranking of individual turbine performances in an array to within measurement uncertainty as compared to field data collected from full-scale VAWTs. Further, this model is able to predict array performance with significantly less computational expense than higher fidelity numerical simulations of the flow, making it ideal for use in optimization of wind farm performance. This presentation will explore the ability of the LRB model to rank the relative power output of different wind turbine array configurations as well as the ranking of individual array performance over a variety of wind directions, using various complex configurations tested in the field and simpler configurations tested in a wind tunnel. Results will be presented in which the model is used to determine array fitness in an evolutionary algorithm seeking to find optimal array configurations given a number of turbines, area of available land, and site wind direction profile. Comparison with field measurements will be presented.
Modeling and optimization of defense high level waste removal sequencing
NASA Astrophysics Data System (ADS)
Paul, Pran Krishna
A novel methodology has been developed which makes possible a very fast running computational tool, capable of performing 30 to 50 years of simulation of the entire Savannah River Site (SRS) high level waste complex in less than 2 minutes on a workstation. The methodology has been implemented in the Production Planning Model (ProdMod) simulation code which uses Aspen Technology's dynamic simulation software development package SPEEDUP. ProdMod is a pseudo-dynamic simulation code solely based on algebraic equations, using no differential equations. The dynamic nature of the plant process is captured using linear constructs in which the time dependence is implicit. Another innovative approach implemented in ProdMod development is the mapping of event-space on to time-space and vice versa, which accelerates the computation without sacrificing the necessary details in the event-space. ProdMod uses this approach in coupling the time-space continuous simulation with the event-space batch simulation, avoiding the discontinuities inherent in dynamic simulation batch processing. In addition, a general purpose optimization scheme has been devised based on the pseudo-dynamic constructs and the event- and time-space algorithms of ProdMod. The optimization scheme couples a FORTRAN based stand-alone optimization driver with the SPEEDUP based ProdMod simulator to perform dynamic optimization. The scheme is capable of generating single or multiple optimal input conditions for different types of objective functions over single or multiple years of operations depending on the nature of the objective function and operating constraints. The resultant optimal inputs are then interfaced with ProdMod to simulate the dynamic behavior of the waste processing operations. At the conclusion of an optimized advancement step, the simulation parameters are then passed to the optimization driver to generate the next set of optimized parameters. An optimization algorithm using linear programming
Rapid Modeling, Assembly and Simulation in Design Optimization
NASA Technical Reports Server (NTRS)
Housner, Jerry
1997-01-01
A new capability for design is reviewed. This capability provides for rapid assembly of detailed finite element models early in the design process where costs are most effectively impacted. This creates an engineering environment which enables comprehensive analysis and design optimization early in the design process. Graphical interactive computing makes it possible for the engineer to interact with the design while performing comprehensive design studies. This rapid assembly capability is enabled by the use of Interface Technology to couple independently created models which can be archived and made accessible to the designer. Results are presented to demonstrate the capability.
Robust model predictive control for optimal continuous drug administration.
Sopasakis, Pantelis; Patrinos, Panagiotis; Sarimveis, Haralambos
2014-10-01
In this paper the model predictive control (MPC) technology is used for tackling the optimal drug administration problem. The important advantage of MPC compared to other control technologies is that it explicitly takes into account the constraints of the system. In particular, for drug treatments of living organisms, MPC can guarantee satisfaction of the minimum toxic concentration (MTC) constraints. A whole-body physiologically-based pharmacokinetic (PBPK) model serves as the dynamic prediction model of the system after it is formulated as a discrete-time state-space model. Only plasma measurements are assumed to be measured on-line. The rest of the states (drug concentrations in other organs and tissues) are estimated in real time by designing an artificial observer. The complete system (observer and MPC controller) is able to drive the drug concentration to the desired levels at the organs of interest, while satisfying the imposed constraints, even in the presence of modelling errors, disturbances and noise. A case study on a PBPK model with 7 compartments, constraints on 5 tissues and a variable drug concentration set-point illustrates the efficiency of the methodology in drug dosing control applications. The proposed methodology is also tested in an uncertain setting and proves successful in presence of modelling errors and inaccurate measurements. PMID:24986530
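The constraint-handling idea central to this abstract can be sketched with a drastically simplified plant: a one-compartment discrete-time concentration model and a one-step receding-horizon controller that tracks a target level while never letting the predicted concentration exceed the toxicity bound. The dynamics, parameters, and grid search are illustrative stand-ins for the paper's PBPK model, observer, and MPC solver.

```python
# One-compartment sketch of constrained dosing: x[k+1] = a*x[k] + b*u[k],
# where x is plasma concentration and u the infusion rate. At every step the
# controller picks the dose that best tracks the target while keeping the
# predicted concentration below the minimum toxic concentration (MTC).
# Parameters are illustrative, not from the paper's PBPK model.
A, B = 0.8, 0.5
TARGET, MTC = 1.0, 1.2
grid = [i * 0.001 for i in range(2001)]   # candidate doses u in [0, 2]

def mpc_step(x):
    feasible = [u for u in grid if A * x + B * u <= MTC]
    return min(feasible, key=lambda u: (A * x + B * u - TARGET) ** 2)

x, history = 0.0, []
for _ in range(30):
    u = mpc_step(x)
    x = A * x + B * u
    history.append(x)
print(round(x, 3))
```

The closed loop settles at the target, and because feasibility is enforced on the prediction at every step, the trajectory never crosses the MTC, which is the guarantee the abstract highlights.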
Analysis of Sting Balance Calibration Data Using Optimized Regression Models
NASA Technical Reports Server (NTRS)
Ulbrich, Norbert; Bader, Jon B.
2009-01-01
Calibration data of a wind tunnel sting balance was processed using a search algorithm that identifies an optimized regression model for the data analysis. The selected sting balance had two moment gages that were mounted forward and aft of the balance moment center. The difference and the sum of the two gage outputs were fitted in the least squares sense using the normal force and the pitching moment at the balance moment center as independent variables. The regression model search algorithm predicted that the difference of the gage outputs should be modeled using the intercept and the normal force. The sum of the two gage outputs, on the other hand, should be modeled using the intercept, the pitching moment, and the square of the pitching moment. Equations of the deflection of a cantilever beam are used to show that the search algorithm s two recommended math models can also be obtained after performing a rigorous theoretical analysis of the deflection of the sting balance under load. The analysis of the sting balance calibration data set is a rare example of a situation when regression models of balance calibration data can directly be derived from first principles of physics and engineering. In addition, it is interesting to see that the search algorithm recommended the same regression models for the data analysis using only a set of statistical quality metrics.
An automatic and effective parameter optimization method for model tuning
NASA Astrophysics Data System (ADS)
Zhang, T.; Li, L.; Lin, Y.; Xue, W.; Xie, F.; Xu, H.; Huang, X.
2015-11-01
Physical parameterizations in general circulation models (GCMs), having various uncertain parameters, greatly impact model performance and model climate sensitivity. Traditional manual and empirical tuning of these parameters is time-consuming and ineffective. In this study, a "three-step" methodology is proposed to automatically and effectively obtain the optimum combination of some key parameters in cloud and convective parameterizations according to a comprehensive objective evaluation metric. Different from the traditional optimization methods, two extra steps, one determining the model's sensitivity to the parameters and the other choosing the optimum initial value for those sensitive parameters, are introduced before the downhill simplex method. This new method reduces the number of parameters to be tuned and accelerates the convergence of the downhill simplex method. Atmospheric GCM simulation results show that the optimum combination of these parameters determined using this method is able to improve the model's overall performance by 9%. The proposed methodology and software framework can be easily applied to other GCMs to speed up the model development process, especially regarding unavoidable comprehensive parameter tuning during the model development stage.
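The three steps can be mimicked on a toy objective: (1) screen parameters one at a time and drop the insensitive ones, (2) pick the better of a few candidate starting points, (3) run the downhill simplex (Nelder-Mead) only over the sensitive parameters. The "skill" function and parameter values below are invented placeholders, not GCM quantities.

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-in for a GCM skill metric: mismatch between "model output"
# (a function of tunable parameters) and synthetic observations.
obs = np.array([1.0, 2.0])

def skill(params):
    p1, p2, p3 = params                # p3 is deliberately inert
    model_out = np.array([p1 ** 2, p1 + p2])
    return float(np.sum((model_out - obs) ** 2))

default = np.array([0.5, 0.5, 0.5])

# Step 1: one-at-a-time sensitivity screening around the default values.
sensitive = []
for i in range(3):
    bumped = default.copy()
    bumped[i] += 0.1
    if abs(skill(bumped) - skill(default)) > 1e-8:
        sensitive.append(i)

# Step 2 (crude): pick the better of a few candidate starts for the simplex.
starts = [default, np.array([2.0, 0.0, 0.5])]
start = min(starts, key=skill)

# Step 3: downhill simplex (Nelder-Mead) over the sensitive parameters only.
def reduced(x):
    full = default.copy()
    full[sensitive] = x
    return skill(full)

res = minimize(reduced, start[sensitive], method="Nelder-Mead",
               options={"xatol": 1e-8, "fatol": 1e-12})
print(sensitive, res.x)
```

Screening removes the inert parameter, so the simplex searches a 2-D instead of 3-D space, which is exactly the convergence speed-up the abstract claims.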
Ozonation optimization and modeling for treating diesel-contaminated water.
Ziabari, Seyedeh-Somayeh Haghighat; Khezri, Seyed-Mostafa; Kalantary, Roshanak Rezaei
2016-03-15
The effect of ozonation on treatment of diesel-contaminated water was investigated on a laboratory scale. Factorial design and response surface methodology (RSM) were used to evaluate and optimize the effects of pH, ozone flow rate, and contact time on the treatment process. A Box-Behnken design was successfully applied for modeling and optimizing the removal of total petroleum hydrocarbons (TPHs). The results showed that ozonation is an efficient technique for removing diesel from aqueous solution. The coefficient of determination (R2) was found to be 0.9437, indicating that the proposed model was capable of predicting the removal of TPHs by ozonation. The optimum values of initial pH, ozone flow rate, and reaction time were 7.0, 1.5, and 35 min, respectively, which achieved approximately 60% TPH removal. This result is in good agreement with the predicted value of 57.28%. PMID:26846995
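The RSM step can be illustrated by fitting a full quadratic in two coded factors to designed experiments and solving for the stationary point of the fitted surface. The synthetic "true" response below is itself quadratic, so the least-squares fit recovers it exactly; the factors, coefficients, and design are illustrative, not the paper's Box-Behnken data.

```python
import numpy as np

# Response-surface sketch: fit y = b0 + b1*x1 + b2*x2 + b12*x1*x2
#                                 + b11*x1^2 + b22*x2^2 to design points,
# then locate the stationary point of the fitted quadratic.
def true_response(x1, x2):
    return 60 - 2 * (x1 - 0.3) ** 2 - 3 * (x2 + 0.2) ** 2 + 0.5 * x1 * x2

# 3-level factorial design points in coded units [-1, 1].
pts = [(a, b) for a in (-1, 0, 1) for b in (-1, 0, 1)]
X = np.array([[1, x1, x2, x1 * x2, x1 ** 2, x2 ** 2] for x1, x2 in pts])
y = np.array([true_response(x1, x2) for x1, x2 in pts])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
b0, b1, b2, b12, b11, b22 = beta

# Stationary point: solve grad = 0 for the fitted quadratic.
H = np.array([[2 * b11, b12], [b12, 2 * b22]])
opt = np.linalg.solve(H, -np.array([b1, b2]))
print(opt)
```

With noisy experimental data the fit would no longer be exact, and the R2 value would summarize how much of the observed variation the quadratic surface captures.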
[Optimized models of logging-tending system in cutting areas].
Guo, J; Jing, Y; Zhang, R; Xiong, W; Su, J
2000-12-01
The comprehensive advantages of different logging-tending systems in Pinus massoniana forest cutting area were evaluated by set-pair analysis, based on the comparison of their economic and ecological benefits. The results showed that the optimized model for P. massoniana forests in Northern Fujian comprised 40% selective cutting, manual skidding, clear-cutting in ribbon, and natural regeneration with artificial aids, which could also be used in the nearby forests with conditions similar to the experimental area. PMID:11767550
Modeling Microinverters and DC Power Optimizers in PVWatts
MacAlpine, S.; Deline, C.
2015-02-01
Module-level distributed power electronics including microinverters and DC power optimizers are increasingly popular in residential and commercial PV systems. Consumers are realizing their potential to increase design flexibility, monitor system performance, and improve energy capture. It is becoming increasingly important to accurately model PV systems employing these devices. This document summarizes existing published documents to provide uniform, impartial recommendations for how the performance of distributed power electronics can be reflected in NREL's PVWatts calculator (http://pvwatts.nrel.gov/).
Optimization in generalized linear models: A case study
NASA Astrophysics Data System (ADS)
Silva, Eliana Costa e.; Correia, Aldina; Lopes, Isabel Cristina
2016-06-01
The maximum likelihood method is usually chosen to estimate the regression parameters of Generalized Linear Models (GLM) and also for hypothesis testing and goodness of fit tests. The classical method for estimating GLM parameters is Fisher scoring. In this work we propose to compute the estimates of the parameters with two alternative methods: a derivative-based optimization method, namely the BFGS method, which is one of the most popular quasi-Newton algorithms, and the PSwarm derivative-free optimization method, which combines features of a pattern search optimization method with a global Particle Swarm scheme. As a case study we use a dataset of biological parameters (phytoplankton) and chemical and environmental parameters of the water column of a Portuguese reservoir. The results show that, for this dataset, the BFGS and PSwarm methods provided a better fit than the Fisher scoring method, and can be good alternatives for finding the estimates of the parameters of a GLM.
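A sketch of the derivative-based alternative: maximize a GLM log-likelihood with BFGS instead of Fisher scoring. A logistic (Bernoulli) GLM on synthetic data stands in here; the paper's reservoir/phytoplankton dataset is not reproduced.

```python
import numpy as np
from scipy.optimize import minimize

# Logistic-regression GLM fitted by BFGS with an analytic gradient.
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(200), rng.standard_normal((200, 2))])
true_beta = np.array([-0.5, 1.0, 2.0])
y = (rng.random(200) < 1 / (1 + np.exp(-X @ true_beta))).astype(float)

def neg_log_lik(beta):
    eta = X @ beta
    # Bernoulli GLM with logit link: -loglik = sum log(1+e^eta) - y*eta
    return float(np.sum(np.log1p(np.exp(eta)) - y * eta))

def grad(beta):
    mu = 1 / (1 + np.exp(-(X @ beta)))   # fitted probabilities
    return X.T @ (mu - y)

res = minimize(neg_log_lik, np.zeros(3), jac=grad, method="BFGS")
print(res.x)
```

A derivative-free method such as PSwarm would call only `neg_log_lik`, trading the gradient requirement for more function evaluations.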
Optimization model of vaccination strategy for dengue transmission
NASA Astrophysics Data System (ADS)
Widayani, H.; Kallista, M.; Nuraini, N.; Sari, M. Y.
2014-02-01
Dengue fever is an emerging tropical and subtropical disease caused by dengue virus infection. Vaccination should be carried out to prevent epidemics in a population. The host-vector model is modified to include a vaccination factor that prevents the occurrence of epidemic dengue in a population. An optimal vaccination strategy using a non-linear objective function is proposed. Genetic algorithm programming techniques are combined with the fourth-order Runge-Kutta method to construct the optimal vaccination. In this paper, an appropriate vaccination strategy, based on the optimal minimum cost function that can reduce the scale of the epidemic, is analyzed. Numerical simulations for some specific cases of the vaccination strategy are shown.
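The simulation half of that pairing can be sketched with fourth-order Runge-Kutta on a simplified compartment model. A plain SIR model with a constant vaccination rate stands in for the full host-vector dengue dynamics; all rates are illustrative.

```python
import numpy as np

# Simplified SIR-with-vaccination stand-in for the host-vector dengue model.
# Vaccination moves susceptibles directly to the recovered/immune class at
# rate v; beta and gamma are illustrative transmission and recovery rates.
def deriv(state, beta=0.5, gamma=0.1, v=0.0):
    s, i, r = state
    return np.array([-beta * s * i - v * s,
                     beta * s * i - gamma * i,
                     gamma * i + v * s])

def rk4(v, days=200, dt=0.1):
    state = np.array([0.99, 0.01, 0.0])
    peak = state[1]
    for _ in range(int(days / dt)):
        k1 = deriv(state, v=v)
        k2 = deriv(state + dt / 2 * k1, v=v)
        k3 = deriv(state + dt / 2 * k2, v=v)
        k4 = deriv(state + dt * k3, v=v)
        state = state + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        peak = max(peak, state[1])
    return state, peak

final, peak_no_vax = rk4(v=0.0)
_, peak_vax = rk4(v=0.05)
print(peak_no_vax, peak_vax)
```

In the paper's setting, a genetic algorithm would search over vaccination strategies, calling an integrator like this to evaluate each candidate's epidemic size and cost.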
Mathematical model of the metal mould surface temperature optimization
Mlynek, Jaroslav; Knobloch, Roman; Srb, Radek
2015-11-30
The article is focused on the problem of generating a uniform temperature field on the inner surface of shell metal moulds. Such moulds are used e.g. in the automotive industry for artificial leather production. To produce artificial leather with uniform surface structure and colour shade the temperature on the inner surface of the mould has to be as homogeneous as possible. The heating of the mould is realized by infrared heaters located above the outer mould surface. The conceived mathematical model allows us to optimize the locations of infrared heaters over the mould, so that approximately uniform heat radiation intensity is generated. A version of the differential evolution algorithm programmed in the Matlab development environment was created by the authors for the optimization process. For temperature calculations the software system ANSYS was used. A practical example of optimization of heater locations and calculation of the temperature of the mould is included at the end of the article.
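The optimization step can be sketched with SciPy's differential evolution on a toy version of the problem: place two heaters at a fixed height above a 1-D mould surface so the received radiation intensity is as uniform as possible. The inverse-square intensity model and all dimensions are assumptions for illustration, not the article's radiation model.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Toy heater placement: choose x positions of two heaters at height H above
# a unit-length 1-D surface, minimizing the variance of the total intensity
# (inverse-square falloff assumed) sampled along the surface.
surface = np.linspace(0.0, 1.0, 101)
H = 0.3  # heater height above the surface

def nonuniformity(positions):
    intensity = sum(1.0 / ((surface - p) ** 2 + H ** 2) for p in positions)
    return float(np.var(intensity))

result = differential_evolution(nonuniformity, bounds=[(0, 1), (0, 1)],
                                seed=42, tol=1e-10)
print(np.sort(result.x))
```

As expected, the optimizer spreads the heaters symmetrically rather than stacking them over the centre, which is the qualitative behaviour the article's Matlab implementation exploits at much larger scale.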
NASA Astrophysics Data System (ADS)
Wöhling, Thomas; Vrugt, Jasper A.
2008-12-01
Most studies in vadose zone hydrology use a single conceptual model for predictive inference and analysis. Focusing on the outcome of a single model is prone to statistical bias and underestimation of uncertainty. In this study, we combine multiobjective optimization and Bayesian model averaging (BMA) to generate forecast ensembles of soil hydraulic models. To illustrate our method, we use observed tensiometric pressure head data at three different depths in a layered vadose zone of volcanic origin in New Zealand. A set of seven different soil hydraulic models is calibrated using a multiobjective formulation with three different objective functions that each measure the mismatch between observed and predicted soil water pressure head at one specific depth. The Pareto solution space corresponding to these three objectives is estimated with AMALGAM and used to generate four different model ensembles. These ensembles are postprocessed with BMA and used for predictive analysis and uncertainty estimation. Our most important conclusions for the vadose zone under consideration are (1) the mean BMA forecast exhibits similar predictive capabilities as the best individual performing soil hydraulic model, (2) the size of the BMA uncertainty ranges increase with increasing depth and dryness in the soil profile, (3) the best performing ensemble corresponds to the compromise (or balanced) solution of the three-objective Pareto surface, and (4) the combined multiobjective optimization and BMA framework proposed in this paper is very useful to generate forecast ensembles of soil hydraulic models.
Optimization and Performance Modeling of Stencil Computations on Modern Microprocessors
Datta, Kaushik; Kamil, Shoaib; Williams, Samuel; Oliker, Leonid; Shalf, John; Yelick, Katherine
2007-06-01
Stencil-based kernels constitute the core of many important scientific applications on block-structured grids. Unfortunately, these codes achieve a low fraction of peak performance, due primarily to the disparity between processor and main memory speeds. In this paper, we explore the impact of trends in memory subsystems on a variety of stencil optimization techniques and develop performance models to analytically guide our optimizations. Our work targets cache reuse methodologies across single and multiple stencil sweeps, examining cache-aware algorithms as well as cache-oblivious techniques on the Intel Itanium2, AMD Opteron, and IBM Power5. Additionally, we consider stencil computations on the heterogeneous multicore design of the Cell processor, a machine with an explicitly managed memory hierarchy. Overall our work represents one of the most extensive analyses of stencil optimizations and performance modeling to date. Results demonstrate that recent trends in memory system organization have reduced the efficacy of traditional cache-blocking optimizations. We also show that a cache-aware implementation is significantly faster than a cache-oblivious approach, while the explicitly managed memory on Cell enables the highest overall efficiency: Cell attains 88% of algorithmic peak while the best competing cache-based processor achieves only 54% of algorithmic peak performance.
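The cache-blocking idea can be demonstrated in miniature: a 2-D Jacobi stencil sweep computed once with whole-array slicing and once tile by tile. Blocking changes only the traversal order (and hence cache behaviour on real hardware), not the arithmetic, so the two results match exactly. Tile size and grid are arbitrary.

```python
import numpy as np

# 2-D Jacobi sweep: out[i,j] = average of the four neighbours of a[i,j].
def sweep_naive(a):
    out = a.copy()
    out[1:-1, 1:-1] = 0.25 * (a[:-2, 1:-1] + a[2:, 1:-1] +
                              a[1:-1, :-2] + a[1:-1, 2:])
    return out

# Same sweep, but the interior is updated one cache-sized tile at a time.
def sweep_blocked(a, tile=32):
    out = a.copy()
    n, m = a.shape
    for i0 in range(1, n - 1, tile):
        for j0 in range(1, m - 1, tile):
            i1, j1 = min(i0 + tile, n - 1), min(j0 + tile, m - 1)
            out[i0:i1, j0:j1] = 0.25 * (a[i0 - 1:i1 - 1, j0:j1] +
                                        a[i0 + 1:i1 + 1, j0:j1] +
                                        a[i0:i1, j0 - 1:j1 - 1] +
                                        a[i0:i1, j0 + 1:j1 + 1])
    return out

grid = np.random.default_rng(3).standard_normal((130, 130))
print(np.array_equal(sweep_naive(grid), sweep_blocked(grid)))
```

In a compiled language the tiled loop is what keeps each working set resident in cache across the sweep; the paper's contribution is modeling when (and on which memory systems) that reuse actually pays off.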
An optimization model for the US Air-Traffic System
NASA Technical Reports Server (NTRS)
Mulvey, J. M.
1986-01-01
A systematic approach for monitoring U.S. air traffic was developed in the context of system-wide planning and control. Towards this end, a network optimization model with nonlinear objectives was chosen as the central element in the planning/control system. The network representation was selected because: (1) it provides a comprehensive structure for depicting essential aspects of the air traffic system, (2) it can be solved efficiently for large scale problems, and (3) the design can be easily communicated to non-technical users through computer graphics. Briefly, the network planning models consider the flow of traffic through a graph as the basic structure. Nodes depict locations and time periods for either individual planes or for aggregated groups of airplanes. Arcs define variables as actual airplanes flying through space or as delays across time periods. As such, a special case of the network can be used to model the so-called flow control problem. Due to the large number of interacting variables and the difficulty in subdividing the problem into relatively independent subproblems, an integrated model was designed which will depict the entire high level (above 29,000 feet) jet route system for the 48 contiguous states in the U.S. As a first step in demonstrating the concept's feasibility a nonlinear risk/cost model was developed for the Indianapolis Airspace. The nonlinear network program --NLPNETG-- was employed in solving the resulting test cases. This optimization program uses the Truncated-Newton method (quadratic approximation) for determining the search direction at each iteration in the nonlinear algorithm. It was shown that aircraft could be re-routed in an optimal fashion whenever traffic congestion increased beyond an acceptable level, as measured by the nonlinear risk function.
Optimizing Muscle Parameters in Musculoskeletal Modeling Using Monte Carlo Simulations
NASA Technical Reports Server (NTRS)
Hanson, Andrea; Reed, Erik; Cavanagh, Peter
2011-01-01
Astronauts assigned to long-duration missions experience bone and muscle atrophy in the lower limbs. The use of musculoskeletal simulation software has become a useful tool for modeling joint and muscle forces during human activity in reduced gravity as access to direct experimentation is limited. Knowledge of muscle and joint loads can better inform the design of exercise protocols and exercise countermeasure equipment. In this study, the LifeModeler(TM) (San Clemente, CA) biomechanics simulation software was used to model a squat exercise. The initial model using default parameters yielded physiologically reasonable hip-joint forces. However, no activation was predicted in some large muscles such as rectus femoris, which have been shown to be active in 1-g performance of the activity. Parametric testing was conducted using Monte Carlo methods and combinatorial reduction to find a muscle parameter set that more closely matched physiologically observed activation patterns during the squat exercise. Peak hip joint force using the default parameters was 2.96 times body weight (BW) and increased to 3.21 BW in an optimized, feature-selected test case. The rectus femoris was predicted to peak at 60.1% activation following muscle recruitment optimization, compared to 19.2% activation with default parameters. These results indicate the critical role that muscle parameters play in joint force estimation and the need for exploration of the solution space to achieve physiologically realistic muscle activation.
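The Monte Carlo parameter search described here reduces, in outline, to sampling candidate parameter sets, scoring each by the mismatch between predicted and observed muscle activation, and keeping the best. The activation model, parameter ranges, and targets below are invented placeholders, not LifeModeler quantities.

```python
import numpy as np

# Monte Carlo sketch of muscle-parameter selection.
rng = np.random.default_rng(7)
target_activation = np.array([0.60, 0.35])   # hypothetical observed levels

def predicted_activation(max_force, opt_fiber_len):
    # Hypothetical monotone response of activation to the two parameters;
    # a real study would run the musculoskeletal simulation here.
    return np.array([1 / (1 + max_force / 1500.0),
                     opt_fiber_len / (opt_fiber_len + 0.12)])

best_err, best_params = np.inf, None
for _ in range(5000):
    params = (rng.uniform(500, 3000), rng.uniform(0.05, 0.30))
    err = np.abs(predicted_activation(*params) - target_activation).max()
    if err < best_err:
        best_err, best_params = err, params
print(best_err)
```

The paper's combinatorial reduction step would then prune the sampled space before refining, which matters when each "score" costs a full dynamics simulation rather than two arithmetic expressions.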
Dynamic imaging model and parameter optimization for a star tracker.
Yan, Jinyun; Jiang, Jie; Zhang, Guangjun
2016-03-21
Under dynamic conditions, star spots move across the image plane of a star tracker and form a smeared star image. This smearing effect increases errors in star position estimation and degrades attitude accuracy. First, an analytical energy distribution model of a smeared star spot is established based on a line segment spread function because the dynamic imaging process of a star tracker is equivalent to the static imaging process of linear light sources. The proposed model, which has a clear physical meaning, explicitly reflects the key parameters of the imaging process, including incident flux, exposure time, velocity of a star spot in an image plane, and Gaussian radius. Furthermore, an analytical expression of the centroiding error of the smeared star spot is derived using the proposed model. An accurate and comprehensive evaluation of centroiding accuracy is obtained based on the expression. Moreover, analytical solutions of the optimal parameters are derived to achieve the best performance in centroid estimation. Finally, we perform numerical simulations and a night sky experiment to validate the correctness of the dynamic imaging model, the centroiding error expression, and the optimal parameters. PMID:27136791
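The line-segment-spread idea can be checked numerically: sum a Gaussian point-spread function over the spot's uniform motion during the exposure to get the smeared profile, then take its centroid. By symmetry the centroid sits at the segment midpoint; the spot size, path length, and sampling below are illustrative.

```python
import numpy as np

# Gaussian star spot whose center moves uniformly from 0 to L during the
# exposure. Summing the Gaussian over the motion gives the smeared profile.
L, sigma = 2.0, 0.5
x = np.linspace(L / 2 - 5, L / 2 + 5, 2001)   # detector coordinate samples
centers = np.linspace(0.0, L, 201)            # spot path during exposure

profile = sum(np.exp(-(x - c) ** 2 / (2 * sigma ** 2)) for c in centers)
centroid = float(np.sum(x * profile) / np.sum(profile))
print(centroid)
```

Noise and pixel truncation break this ideal symmetry in practice, which is why the paper derives an explicit centroiding-error expression in terms of flux, exposure time, spot velocity, and Gaussian radius.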
Model reduction for chemical kinetics: An optimization approach
Petzold, L.; Zhu, W.
1999-04-01
The kinetics of a detailed chemically reacting system can potentially be very complex. Although the chemist may be interested in only a few species, the reaction model almost always involves a much larger number of species. Some of those species are radicals, which are very reactive species and can be important intermediaries in the reaction scheme. A large number of elementary reactions can occur among the species; some of these reactions are fast and some are slow. The aim of simplified kinetics modeling is to derive the simplest reaction system which retains the essential features of the full system. An optimization-based method for reduction of the number of species and reactions in chemical kinetics models is described. Numerical results for several reaction mechanisms illustrate the potential of this approach.
Finite state aeroelastic model for use in rotor design optimization
NASA Technical Reports Server (NTRS)
He, Chengjian; Peters, David A.
1993-01-01
In this article, a rotor aeroelastic model based on a newly developed finite state dynamic wake, coupled with blade finite element analysis, is described. The analysis is intended for application in rotor blade design optimization. A coupled simultaneous system of differential equations combining blade structural dynamics and aerodynamics is established in a formulation well-suited for design sensitivity computation. Each blade is assumed to be an elastic beam undergoing flap bending, lead-lag bending, elastic twist, and axial deflections. Aerodynamic loads are computed from unsteady blade element theory where the rotor three-dimensional unsteady wake is described by a generalized dynamic wake model. Correlation of results obtained from the analysis with flight test data is provided to assess model accuracy.
A comparison of motor submodels in the optimal control model
NASA Technical Reports Server (NTRS)
Lancraft, R. E.; Kleinman, D. L.
1978-01-01
Properties of several structural variations in the neuromotor interface portion of the optimal control model (OCM) are investigated. For example, it is known that commanding control-rate introduces an open-loop pole at s = 0 and will generate low frequency phase and magnitude characteristics similar to experimental data. However, this gives rise to unusually high sensitivities with respect to motor and sensor noise-ratios, thereby reducing the model's predictive capabilities. Relationships for different motor submodels are discussed to show sources of these sensitivities. The models investigated include both pseudo motor-noise and actual (system driving) motor-noise characterizations. The effects of explicit proprioceptive feedback in the OCM are also examined. To show graphically the effects of each submodel on system outputs, sensitivity studies are included and compared to data obtained from other tests.
Optimal control of CPR procedure using hemodynamic circulation model
Lenhart, Suzanne M.; Protopopescu, Vladimir A.; Jung, Eunok
2007-12-25
A method for determining a chest pressure profile for cardiopulmonary resuscitation (CPR) includes the steps of representing a hemodynamic circulation model based on a plurality of difference equations for a patient, applying an optimal control (OC) algorithm to the circulation model, and determining a chest pressure profile. The chest pressure profile defines a timing pattern of externally applied pressure to a chest of the patient to maximize blood flow through the patient. A CPR device includes a chest compressor, a controller communicably connected to the chest compressor, and a computer communicably connected to the controller. The computer determines the chest pressure profile by applying an OC algorithm to a hemodynamic circulation model based on the plurality of difference equations.
A control model for dependable hydropower capacity optimization
NASA Astrophysics Data System (ADS)
Georgakakos, Aris P.; Yao, Huaming; Yu, Yongqing
In this article a control model that can be used to determine the dependable power capacity of a hydropower system is presented and tested. The model structure consists of a turbine load allocation module and a reservoir control module and allows for a detailed representation of hydroelectric facilities and various aspects of water management. Although this scheme is developed for planning purposes, it can also be used operationally with minor modifications. The model is applied to the Lanier-Allatoona-Carters reservoir system on the Chattahoochee and Coosa River Basins, in the southeastern United States. The case studies demonstrate that the more traditional simulation-based approaches often underestimate dependable power capacity. Firm energy optimization with or without dependable capacity constraints is taken up in a companion article [Georgakakos et al., this issue].
Multiobjective Optimization for Model Selection in Kernel Methods in Regression
You, Di; Benitez-Quiroz, C. Fabian; Martinez, Aleix M.
2016-01-01
Regression plays a major role in many scientific and engineering problems. The goal of regression is to learn the unknown underlying function from a set of sample vectors with known outcomes. In recent years, kernel methods in regression have facilitated the estimation of nonlinear functions. However, two major (interconnected) problems remain open. The first problem is given by the bias-vs-variance trade-off. If the model used to estimate the underlying function is too flexible (i.e., high model complexity), the variance will be very large. If the model is fixed (i.e., low complexity), the bias will be large. The second problem is to define an approach for selecting the appropriate parameters of the kernel function. To address these two problems, this paper derives a new smoothing kernel criterion, which measures the roughness of the estimated function as a measure of model complexity. Then, we use multiobjective optimization to derive a criterion for selecting the parameters of that kernel. The goal of this criterion is to find a trade-off between the bias and the variance of the learned function. That is, the goal is to increase the model fit while keeping the model complexity in check. We provide extensive experimental evaluations using a variety of problems in machine learning, pattern recognition and computer vision. The results demonstrate that the proposed approach yields smaller estimation errors as compared to methods in the state of the art. PMID:25291740
Parameter Optimization for the Gaussian Model of Folded Proteins
NASA Astrophysics Data System (ADS)
Erman, Burak; Erkip, Albert
2000-03-01
Recently, we proposed an analytical model of protein folding (B. Erman, K. A. Dill, J. Chem. Phys, 112, 000, 2000) and showed that this model successfully approximates the known minimum energy configurations of two-dimensional HP chains. All attractions (covalent and non-covalent) as well as repulsions were treated as if the monomer units interacted with each other through linear spring forces. Since the governing potential of the linear springs is derived from a Gaussian potential, the model is called the "Gaussian Model". The predicted conformations from the model for the hexamer and various 9mer sequences all lie on the square lattice, although the model does not contain information about the lattice structure. Results of predictions for chains with 20 or more monomers also agreed well with the corresponding known minimum energy lattice structures. However, these predicted conformations did not lie exactly on the square lattice. In the present work, we treat the specific problem of optimizing the potentials (the strengths of the spring constants) so that the predictions are in better agreement with the known minimum energy structures.
An automatic and effective parameter optimization method for model tuning
NASA Astrophysics Data System (ADS)
Zhang, T.; Li, L.; Lin, Y.; Xue, W.; Xie, F.; Xu, H.; Huang, X.
2015-05-01
Physical parameterizations in General Circulation Models (GCMs) contain many uncertain parameters that greatly impact model performance and model climate sensitivity. Traditional manual and empirical tuning of these parameters is time consuming and ineffective. In this study, a "three-step" methodology is proposed to automatically and effectively obtain the optimum combination of some key parameters in cloud and convective parameterizations according to a comprehensive set of objective evaluation metrics. Unlike traditional optimization methods, two extra steps, one determining parameter sensitivity and the other choosing the optimum initial values of the sensitive parameters, are introduced before the downhill simplex method to reduce the computational cost and improve the tuning performance. Atmospheric GCM simulation results show that the optimum combination of these parameters determined using this method is able to improve the model's overall performance by 9%. The proposed methodology and software framework can be easily applied to other GCMs to speed up the model development process, especially regarding the unavoidable comprehensive parameter tuning during the model development stage.
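A minimal sketch of the three-step flow on a toy objective may clarify the idea; the parameter names and the quadratic metric below are invented stand-ins, whereas a real application would run and score a GCM simulation at every evaluation:

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-in for the comprehensive evaluation metric; a real study would
# run the GCM for each parameter set and score it against observations.
def metric(p):
    return (p[0] - 0.3) ** 2 + 4.0 * (p[1] - 1.5) ** 2 + 1e-4 * p[2] ** 2

default = np.array([1.0, 1.0, 1.0])   # hypothetical default parameter values

# Step 1: screen sensitivity with one-at-a-time 10% perturbations.
base = metric(default)
sens = np.array([abs(metric(default + 0.1 * default * np.eye(3)[i]) - base)
                 for i in range(3)])
sensitive = sens > 1e-3 * base            # keep only influential parameters

# Step 2: coarse scan of each sensitive parameter for a good starting value.
start = default.copy()
for i in np.where(sensitive)[0]:
    grid = np.linspace(0.1, 2.0, 20)
    trials = [metric(np.where(np.arange(3) == i, g, start)) for g in grid]
    start[i] = grid[int(np.argmin(trials))]

# Step 3: downhill simplex (Nelder-Mead) refinement from the chosen start.
result = minimize(metric, start, method="Nelder-Mead")
print(np.round(result.x, 3))
```

Pre-screening keeps the expensive simplex search in a low-dimensional space with a good initial point, which is the source of the computational savings the abstract describes.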
Optimization Model for Web Based Multimodal Interactive Simulations
Halic, Tansel; Ahn, Woojin; De, Suvranu
2015-01-01
This paper presents a technique for optimizing the performance of web-based multimodal interactive simulations. For such applications, where visual quality and simulation performance directly influence user experience, overloading of hardware resources may result in an unsatisfactory reduction in the quality of the simulation and in user satisfaction. However, optimizing simulation performance individually for each hardware platform is not practical. Hence, we present a mixed integer programming model that optimizes graphical rendering and simulation performance while satisfying application-specific constraints. Our approach includes three distinct phases: identification, optimization and update. In the identification phase, the computing and rendering capabilities of the client device are evaluated using an exploratory proxy code. This data is utilized in conjunction with user-specified design requirements in the optimization phase to ensure the best possible computational resource allocation. The optimum solution is used to set rendering parameters (e.g., texture size, canvas resolution) and simulation parameters (e.g., simulation domain) in the update phase. Test results are presented on multiple hardware platforms with diverse computing and graphics capabilities to demonstrate the effectiveness of our approach. PMID:26085713
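The shape of the optimization phase can be sketched as a small discrete program; the cost models, budgets and utility function below are all invented for illustration, and at this toy scale the integer program is solved by plain enumeration rather than a general MIP solver:

```python
from itertools import product

# Hypothetical discrete design options, standing in for the rendering and
# simulation parameters the update phase would set.
texture_sizes = [256, 512, 1024]          # pixels per side
canvas_res = [(640, 480), (1280, 720)]    # rendering resolution
mesh_nodes = [500, 2000, 8000]            # simulation domain size

# Budgets as a proxy-code identification phase might measure them.
gpu_budget = 30.0   # sustainable render cost (ms/frame), assumed
cpu_budget = 20.0   # sustainable simulation cost (ms/step), assumed

def render_cost(tex, res):
    return 1e-5 * tex + 2e-5 * res[0] * res[1]

def sim_cost(nodes):
    return 5e-3 * nodes

def quality(tex, res, nodes):
    # application-specific utility rewarding fidelity on all three axes
    return tex / 1024 + (res[0] * res[1]) / (1280 * 720) + nodes / 8000

# Exhaustive search over the feasible integer choices.
best = max(
    (c for c in product(texture_sizes, canvas_res, mesh_nodes)
     if render_cost(c[0], c[1]) <= gpu_budget and sim_cost(c[2]) <= cpu_budget),
    key=lambda c: quality(*c),
)
print(best)  # → (1024, (1280, 720), 2000)
```

Note how the CPU budget forces a smaller simulation domain while full texture and canvas resolution remain affordable, the kind of trade-off the optimization phase resolves.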
Automated Finite Element Modeling of Wing Structures for Shape Optimization
NASA Technical Reports Server (NTRS)
Harvey, Michael Stephen
1993-01-01
The displacement formulation of the finite element method is the most general and most widely used technique for structural analysis of airplane configurations. Modern structural synthesis techniques based on the finite element method have reached a certain maturity in recent years, and large airplane structures can now be optimized with respect to sizing type design variables for many load cases subject to a rich variety of constraints including stress, buckling, frequency, stiffness and aeroelastic constraints (Refs. 1-3). These structural synthesis capabilities use gradient based nonlinear programming techniques to search for improved designs. For these techniques to be practical a major improvement was required in the computational cost of finite element analyses (needed repeatedly in the optimization process). Thus, associated with the progress in structural optimization, a new perspective of structural analysis has emerged, namely, structural analysis specialized for design optimization application, or what is known as "design oriented structural analysis" (Ref. 4). This discipline includes approximation concepts and methods for obtaining behavior sensitivity information (Ref. 1), all needed to make the optimization of large structural systems (modeled by thousands of degrees of freedom and thousands of design variables) practical and cost effective.
H2-optimal control with generalized state-space models for use in control-structure optimization
NASA Technical Reports Server (NTRS)
Wette, Matt
1991-01-01
Several advances are provided for solving combined control-structure optimization problems. The author has extended solutions from H2 optimal control theory to the use of generalized state-space models. The generalized state-space models preserve the sparsity inherent in finite element models and hence show some promise for handling very large problems. Also, expressions for the gradient of the optimal control cost are derived which use the generalized state-space models.
Optimized diagnostic model combination for improving diagnostic accuracy
NASA Astrophysics Data System (ADS)
Kunche, S.; Chen, C.; Pecht, M. G.
Identifying the most suitable classifier for diagnostics is a challenging task. In addition to using domain expertise, a trial-and-error method has been widely used to identify the most suitable classifier. Classifier fusion can be used to overcome this challenge, and it is widely known to perform better than a single classifier. Classifier fusion helps in overcoming the error due to the inductive bias of the various classifiers. The combination rule also plays a vital role in classifier fusion, and it has not been well studied which combination rules provide the best performance. A good combination rule will achieve good generalizability while taking advantage of the diversity of the classifiers. In this work, we develop an approach for ensemble learning consisting of an optimized combination rule. Generalizability implies the ability of a classifier to learn the underlying model from the training data and to predict unseen observations; it has been acknowledged to be a challenge when training a diverse set of classifiers, but it can be achieved by an optimal balance between bias and variance errors using the combination rule developed in this paper. Cross-validation has been employed during the performance evaluation of each classifier to obtain an unbiased performance estimate. An objective function is constructed and optimized based on the performance evaluation to achieve the optimal bias-variance balance; it can be solved as a constrained nonlinear optimization problem. Sequential quadratic programming, with its good convergence properties, has been employed for the optimization. We have demonstrated the applicability of the algorithm using support vector machines and neural networks as classifiers, but the methodology is broadly applicable to combining other classifier algorithms as well. The method has been applied to the fault diagnosis of analog circuits. The performance of the proposed
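A hedged sketch of the core idea: optimize the weights of a linear combination rule under a sum-to-one constraint with an SQP solver. The two base classifiers here are simulated by noisy probability outputs rather than trained SVMs and neural networks, and the validation data are synthetic:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Held-out validation labels and the class-1 probabilities predicted by two
# hypothetical base classifiers (stand-ins for an SVM and a neural network).
y = rng.integers(0, 2, size=200)
p_svm = np.clip(y + rng.normal(0, 0.35, 200), 0, 1)   # stronger classifier
p_net = np.clip(y + rng.normal(0, 0.55, 200), 0, 1)   # weaker classifier

def objective(w):
    # validation loss of the weighted combination rule
    fused = w[0] * p_svm + w[1] * p_net
    return np.mean((fused - y) ** 2)

# Constrained nonlinear program: weights are nonnegative and sum to one,
# solved with SLSQP (a sequential quadratic programming method).
res = minimize(objective, x0=[0.5, 0.5], method="SLSQP",
               bounds=[(0, 1), (0, 1)],
               constraints={"type": "eq", "fun": lambda w: w.sum() - 1})
print(np.round(res.x, 2))
```

The optimizer shifts weight toward the stronger classifier while keeping some diversity, which is exactly the bias-variance balance the combination rule is meant to strike.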
Best management practices (BMPs) are perceived as being effective in reducing nutrient loads transported from non-point sources (NPS) to receiving water bodies. The objective of this study was to develop a modeling-optimization framework that can be used by watershed management p...
WE-D-BRE-04: Modeling Optimal Concurrent Chemotherapy Schedules
Jeong, J; Deasy, J O
2014-06-15
Purpose: Concurrent chemo-radiation therapy (CCRT) has become a more common cancer treatment option, with a better tumor control rate for several tumor sites, including head and neck and lung cancer. In this work, possible optimal chemotherapy schedules were investigated by implementing chemotherapy cell-kill into a tumor response model of RT. Methods: The chemotherapy effect has been added into a published model (Jeong et al., PMB (2013) 58:4897), in which the tumor response to RT can be simulated with the effects of hypoxia and proliferation. Based on a two-compartment pharmacokinetic model, the temporal concentration of the chemotherapy agent was estimated. Log cell-kill was assumed, and the cell-kill constant was estimated from the observed increase in local control due to concurrent chemotherapy. For a simplified two-cycle CCRT regimen, several different starting times and intervals were simulated with a conventional RT regimen (2 Gy/fx, 5 fx/wk). The effectiveness of CCRT was evaluated in terms of the reduction in radiation dose required for 50% control, to find the optimal chemotherapy schedule. Results: Assuming a typical slope of the dose-response curve (γ50=2), the observed 10% increase in local control rate was evaluated to be equivalent to an extra RT dose of about 4 Gy, from which the cell-kill rate of chemotherapy was derived to be about 0.35. The best response was obtained when chemotherapy was started about 3 weeks after RT began. As the interval between the two cycles decreases, the efficacy of chemotherapy increases, with a broader range of optimal starting times. Conclusion: The effect of chemotherapy has been implemented into the resource-conservation tumor response model to investigate CCRT. The results suggest that concurrent chemotherapy might be more effective when delayed for about 3 weeks, due to lower tumor burden and a larger fraction of proliferating cells after reoxygenation.
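The dose-equivalence step can be reproduced as a back-of-envelope calculation. Near the 50% control level, the normalized slope γ50 relates a control-rate change to an equivalent dose change via ΔTCP ≈ γ50·ΔD/D50. The D50 value below is an assumption chosen only so the numbers echo the abstract, not a figure from the paper:

```python
# Back-of-envelope version of the dose-equivalence step.
gamma50 = 2.0      # normalized slope of the dose-response curve
delta_tcp = 0.10   # observed local-control gain from concurrent chemo
D50 = 80.0         # dose giving 50% control (assumed value, in Gy)

# Invert dTCP ≈ gamma50 * dD / D50 for the equivalent extra dose.
extra_dose = delta_tcp * D50 / gamma50
print(extra_dose)  # → 4.0 (Gy of "equivalent" radiation dose)
```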
Modeling and optimization of a hybrid solar combined cycle (HYCS)
NASA Astrophysics Data System (ADS)
Eter, Ahmad Adel
2011-12-01
The main objective of this thesis is to investigate the feasibility of integrating concentrated solar power (CSP) technology with conventional combined cycle technology for electric generation in Saudi Arabia. The generated electricity can be used locally to meet the annually increasing demand. Specifically, it can be utilized to meet the demand during the hours of 10 am-3 pm and prevent blackout hours in some industrial sectors. The proposed CSP design gives flexibility in system operation, since it works as a conventional combined cycle during nighttime and switches to work as a hybrid solar combined cycle during daytime. The first objective of the thesis is to develop a thermo-economical mathematical model that can simulate the performance of a hybrid solar-fossil fuel combined cycle. The second objective is to develop a computer simulation code that can solve the thermo-economical mathematical model using available software such as E.E.S. The developed simulation code is used to analyze the thermo-economic performance of different configurations of integrating the CSP with the conventional fossil fuel combined cycle to achieve the optimal integration configuration. This optimal integration configuration has been investigated further to achieve the optimal design of the solar field that gives the optimal solar share. Thermo-economical performance metrics available in the literature have been used in the present work to assess the thermo-economic performance of the investigated configurations. The economic and environmental impacts of integrating CSP with the conventional fossil fuel combined cycle are estimated and discussed. Finally, the optimal integration configuration is found to be solarization of the steam side of the conventional combined cycle with a solar multiple of 0.38, which needs 29 hectares of solar field and gives a LEC for the HYCS of 63.17 $/MWh under Dhahran weather conditions.
Jayachandran, Devaraj; Laínez-Aguirre, José; Rundell, Ann; Vik, Terry; Hannemann, Robert; Reklaitis, Gintaras; Ramkrishna, Doraiswami
2015-01-01
6-Mercaptopurine (6-MP) is one of the key drugs in the treatment of many pediatric cancers, autoimmune diseases and inflammatory bowel disease. 6-MP is a prodrug, converted to an active metabolite, 6-thioguanine nucleotide (6-TGN), through an enzymatic reaction involving thiopurine methyltransferase (TPMT). Pharmacogenomic variation observed in the TPMT enzyme produces a significant variation in drug response among the patient population. Despite 6-MP's widespread use and the observed variation in treatment response, efforts at quantitative optimization of dose regimens for individual patients are limited. In addition, research efforts devoted to pharmacogenomics to predict clinical responses are proving far from ideal. In this work, we present a Bayesian population modeling approach to develop a pharmacological model for 6-MP metabolism in humans. In the face of the scarcity of data in clinical settings, a global sensitivity analysis based model reduction approach is used to minimize the parameter space. For accurate estimation of sensitive parameters, robust optimal experimental design based on the D-optimality criterion was exploited. With the patient-specific model, a model predictive control algorithm is used to optimize the dose scheduling with the objective of maintaining the 6-TGN concentration within its therapeutic window. More importantly, for the first time, we show how the incorporation of information from different levels of the biological chain of response (i.e., gene expression-enzyme phenotype-drug phenotype) plays a critical role in determining the uncertainty in predicting the therapeutic target. The model and the control approach can be utilized in the clinical setting to individualize 6-MP dosing based on the patient's ability to metabolize the drug, instead of the traditional standard-dose-for-all approach. PMID:26226448
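A heavily simplified sketch of the dose-scheduling idea: a one-compartment stand-in for the 6-TGN kinetics with a greedy one-step "model predictive" dose choice. Every constant below (rate constants, dose grid, therapeutic window) is invented for illustration and is not a patient or paper value:

```python
import numpy as np

# One-compartment stand-in for 6-TGN pharmacokinetics.
k_elim = 0.1                           # fraction of 6-TGN cleared per day (assumed)
k_conv = 0.05                          # concentration gain per unit dose (assumed)
window = (3.0, 8.0)                    # therapeutic window for 6-TGN (assumed)
doses = np.arange(0.0, 101.0, 25.0)    # candidate daily doses (assumed grid)

conc, history = 0.0, []
target = sum(window) / 2.0             # aim for the window midpoint
for day in range(60):
    # predict tomorrow's concentration for each candidate dose and pick
    # the dose whose prediction lands closest to the target
    predicted = conc * (1.0 - k_elim) + k_conv * doses
    dose = doses[int(np.argmin(np.abs(predicted - target)))]
    conc = conc * (1.0 - k_elim) + k_conv * dose
    history.append(conc)

print(round(min(history), 2), round(max(history), 2))
```

A real controller would use the full patient-specific model and a multi-step horizon; the point of the sketch is only that dose selection by forward prediction keeps the simulated concentration inside the window.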
Optimal aeroassisted coplanar orbital transfer using an energy model
NASA Technical Reports Server (NTRS)
Halyo, Nesim; Taylor, Deborah B.
1989-01-01
The atmospheric portion of the trajectories for aeroassisted coplanar orbit transfer was investigated. The equations of motion for the problem are expressed using a reduced-order model, with total vehicle energy, kinetic plus potential, as the independent variable rather than time. The order reduction is achieved analytically without an approximation of the vehicle dynamics. In this model, the problem of coplanar orbit transfer is seen as one in which a given amount of energy must be transferred from the vehicle to the atmosphere during the trajectory without overheating the vehicle. An optimal control problem is posed in which a linear combination of the integrated square of the heating rate and the vehicle drag is the cost function to be minimized. The necessary conditions for optimality are obtained. These result in a fourth-order two-point boundary value problem. A parametric study of the optimal guidance trajectory is made in which the proportion of the heating rate term relative to the drag term is varied. Simulations of the guidance trajectories are presented.
Verification of immune response optimality through cybernetic modeling.
Batt, B C; Kompala, D S
1990-02-01
An immune response cascade that is T cell independent begins with the stimulation of virgin lymphocytes by antigen to differentiate into large lymphocytes. These immune cells can either replicate themselves or differentiate into plasma cells or memory cells. Plasma cells produce antibody at a specific rate up to two orders of magnitude greater than large lymphocytes. However, plasma cells have short life-spans and cannot replicate. Memory cells produce only surface antibody, but in the event of a subsequent infection by the same antigen, memory cells revert rapidly to large lymphocytes. Immunologic memory is maintained throughout the organism's lifetime. Many immunologists believe that the optimal response strategy calls for large lymphocytes to replicate first, then differentiate into plasma cells and when the antigen has been nearly eliminated, they form memory cells. A mathematical model incorporating the concept of cybernetics has been developed to study the optimality of the immune response. Derived from the matching law of microeconomics, cybernetic variables control the allocation of large lymphocytes to maximize the instantaneous antibody production rate at any time during the response in order to most efficiently inactivate the antigen. A mouse is selected as the model organism and bacteria as the replicating antigen. In addition to verifying the optimal switching strategy, results showing how the immune response is affected by antigen growth rate, initial antigen concentration, and the number of antibodies required to eliminate an antigen are included. PMID:2338827
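A toy rendering of the matching-law idea, not the paper's actual equations: the large-lymphocyte pool divides its activity among replication, plasma-cell differentiation and memory-cell differentiation in proportion to each option's instantaneous return on antibody production. All numbers are illustrative:

```python
import numpy as np

# Matching-law ("cybernetic") allocation: each option's share of the pool
# is its fractional return. Returns are hypothetical payoff rates.
def cybernetic_weights(returns):
    returns = np.asarray(returns, dtype=float)
    return returns / returns.sum()

# Early response: replication pays off most (compounds future antibody);
# late response: plasma-cell differentiation pays off most.
#                               [replicate, plasma, memory]
early = cybernetic_weights([5.0, 1.0, 0.1])
late = cybernetic_weights([0.5, 4.0, 1.5])
print(early.round(2), late.round(2))
```

Driving such weights with time-varying returns reproduces the replicate-first, then-plasma, finally-memory switching strategy whose optimality the abstract sets out to verify.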
Martian Radiative Transfer Modeling Using the Optimal Spectral Sampling Method
NASA Technical Reports Server (NTRS)
Eluszkiewicz, J.; Cady-Pereira, K.; Uymin, G.; Moncet, J.-L.
2005-01-01
The large volume of existing and planned infrared observations of Mars has prompted the development of a new martian radiative transfer model that could be used in retrievals of atmospheric and surface properties. The model is based on the Optimal Spectral Sampling (OSS) method [1]. The method is a fast and accurate monochromatic technique applicable to a wide range of remote sensing platforms (from microwave to UV) and was originally developed for the real-time processing of infrared and microwave data acquired by instruments aboard the satellites forming part of the next-generation global weather satellite system NPOESS (National Polar-orbiting Operational Environmental Satellite System) [2]. As part of our ongoing research related to the radiative properties of the martian polar caps, we have begun the development of a martian OSS model with the goal of using it to perform the self-consistent atmospheric corrections necessary to retrieve cap emissivity from Thermal Emission Spectrometer (TES) spectra. While the caps will provide the initial focus area for applying the new model, it is hoped that the model will be of interest to the wider Mars remote sensing community.
Considerations for parameter optimization and sensitivity in climate models.
Neelin, J David; Bracco, Annalisa; Luo, Hao; McWilliams, James C; Meyerson, Joyce E
2010-12-14
Climate models exhibit high sensitivity in some respects, such as for differences in predicted precipitation changes under global warming. Despite successful large-scale simulations, regional climatology features prove difficult to constrain toward observations, with challenges including high dimensionality, computationally expensive simulations, and ambiguity in the choice of objective function. In an atmospheric General Circulation Model forced by observed sea surface temperature or coupled to a mixed-layer ocean, many climatic variables yield rms-error objective functions that vary smoothly through the feasible parameter range. This smoothness occurs despite nonlinearity strong enough to reverse the curvature of the objective function in some parameters, and to imply limitations on multimodel ensemble means as an estimator of global warming precipitation changes. Low-order polynomial fits to the model output spatial fields as a function of parameter (quadratic in model field, fourth-order in objective function) yield surprisingly successful metamodels for many quantities and facilitate a multiobjective optimization approach. Tradeoffs arise as optima for different variables occur at different parameter values, but with agreement in certain directions. Optima often occur at the limit of the feasible parameter range, identifying key parameterization aspects warranting attention; here, the interaction of convection with free tropospheric water vapor. Analytic results for spatial fields of leading contributions to the optimization help to visualize tradeoffs at a regional level, e.g., how mismatches between sensitivity and error spatial fields yield regional error under minimization of global objective functions. The approach is sufficiently simple to guide parameter choices and to aid intercomparison of sensitivity properties among climate models. PMID:21115841
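The metamodel idea can be illustrated with synthetic numbers standing in for GCM output: fit the simulated field quadratically in a parameter, so the rms-error objective becomes a cheap quartic that can be scanned without rerunning the model. Everything below, including the field shapes, is invented:

```python
import numpy as np

# Five "model runs" at sampled parameter values (e.g. an entrainment
# coefficient), each producing a 3-point spatial field; all synthetic.
params = np.linspace(0.5, 2.0, 5)
obs = np.array([1.0, 2.0, 1.5])                   # "observed" field

fields = np.array([[1.0 + 0.4 * (p - 1.2) ** 2,
                    2.0 - 0.3 * (p - 1.2) ** 2,
                    1.5 + 0.1 * (p - 1.2)] for p in params])

# Quadratic metamodel per spatial point: field(p) ≈ a*p**2 + b*p + c.
coeffs = np.polyfit(params, fields, deg=2)        # shape (3, n_points)

def rms_error(p):
    predicted = np.polyval(coeffs, p)             # evaluate all points at once
    return np.sqrt(np.mean((predicted - obs) ** 2))

# Scan the (quartic) objective cheaply instead of rerunning the model.
scan = np.linspace(0.5, 2.0, 301)
best = scan[int(np.argmin([rms_error(p) for p in scan]))]
print(round(float(best), 2))
```

The scan recovers the parameter value at which the synthetic fields match the observations, without any further "model runs" beyond the five used for fitting.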
Traveling waves in an optimal velocity model of freeway traffic.
Berg, P; Woods, A
2001-03-01
Car-following models provide both a tool to describe traffic flow and algorithms for autonomous cruise control systems. Recently developed optimal velocity models contain a relaxation term that assigns a desirable speed to each headway and a response time over which drivers adjust to optimal velocity conditions. These models predict traffic breakdown phenomena analogous to real traffic instabilities. In order to deepen our understanding of these models, in this paper, we examine the transition from a linear stable stream of cars of one headway into a linear stable stream of a second headway. Numerical results of the governing equations identify a range of transition phenomena, including monotonic and oscillating traveling waves and a time-dependent dispersive adjustment wave. However, for certain conditions, we find that the adjustment takes the form of a nonlinear traveling wave from the upstream headway to a third, intermediate headway, followed by either another traveling wave or a dispersive wave further downstream matching the downstream headway. This intermediate value of the headway is selected such that the nonlinear traveling wave is the fastest stable traveling wave which is observed to develop in the numerical calculations. The development of these nonlinear waves, connecting linear stable flows of two different headways, is somewhat reminiscent of stop-start waves in congested flow on freeways. The different types of adjustments are classified in a phase diagram depending on the upstream and downstream headway and the response time of the model. The results have profound consequences for autonomous cruise control systems. For an autocade of both identical and different vehicles, the control system itself may trigger formations of nonlinear, steep wave transitions. Further information is available [Y. Sugiyama, Traffic and Granular Flow (World Scientific, Singapore, 1995), p. 137]. PMID:11308709
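A minimal optimal velocity model of the kind discussed (in the spirit of Bando-type OV models; the velocity function and parameter values are illustrative) relaxes each car toward a headway-dependent desired speed:

```python
import numpy as np

# Optimal velocity car-following model on a ring road: each car relaxes,
# with sensitivity a = 1/response-time, toward a desired speed set by its
# headway. N cars, ring length L; values are illustrative.
N, L, a, dt = 20, 200.0, 1.0, 0.01

def V(h):                       # optimal velocity function of headway h
    return np.tanh(h - 2.0) + np.tanh(2.0)

x = np.arange(N) * (L / N)      # equally spaced cars, headway L/N = 10
v = np.zeros(N)

for _ in range(int(50 / dt)):   # integrate dv/dt = a * (V(headway) - v)
    headway = (np.roll(x, -1) - x) % L
    v += dt * a * (V(headway) - v)
    x = (x + dt * v) % L

# With uniform headway 10, every car settles at V(10) ≈ 1.96.
print(np.round(v, 2))
```

This uniform flow is linearly stable here because the sensitivity a exceeds twice the slope V'(h) at the operating headway; shrinking the headway (or the sensitivity) pushes the model into the instability regime where the traveling-wave transitions of the paper appear.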
Optimal control model of arm configuration in a reaching task
NASA Astrophysics Data System (ADS)
Yamaguchi, Gary T.; Kakavand, Ali
1996-05-01
It was hypothesized that the configuration of the upper limb during a hand static positioning task could be predicted using a dynamic musculoskeletal model and an optimal control routine. Both rhesus monkey and human upper extremity models were formulated, and had seven degrees of freedom (7-DOF) and 39 musculotendon pathways. A variety of configurations were generated about a physiologically measured configuration using the dynamic models and perturbations. The pseudoinverse optimal control method was applied to compute the minimum cost C at each of the generated configurations. Cost function C is described by the Crowninshield-Brand (1981) criterion which relates C (the sum of muscle stresses squared) to the endurance time of a physiological task. The configuration with the minimum cost was compared to the configurations chosen by one monkey (four trials) and by eight human subjects (eight trials each). Results are generally good, but not for all joint angles, suggesting that muscular effort is likely to be one major factor in choosing a preferred static arm posture.
Mutation Size Optimizes Speciation in an Evolutionary Model
Dees, Nathan D.; Bahar, Sonya
2010-01-01
The role of mutation rate in optimizing key features of evolutionary dynamics has recently been investigated in various computational models. Here, we address the related question of how maximum mutation size affects the formation of species in a simple computational evolutionary model. We find that the number of species is maximized for intermediate values of a mutation size parameter μ; the result is observed for evolving organisms on a randomly changing landscape as well as in a version of the model where negative feedback exists between the local population size and the fitness provided by the landscape. The same result is observed for various distributions of mutation values within the limits set by μ. When organisms with various values of μ compete against each other, those with intermediate μ values are found to survive. The surviving values of μ from these competition simulations, however, do not necessarily coincide with the values that maximize the number of species. These results suggest that various complex factors are involved in determining optimal mutation parameters for any population, and may also suggest approaches for building a computational bridge between the (micro) dynamics of mutations at the level of individual organisms and (macro) evolutionary dynamics at the species level. PMID:20689827
Biomechanical modeling and optimal control of human posture.
Menegaldo, Luciano Luporini; Fleury, Agenor de Toledo; Weber, Hans Ingo
2003-11-01
The present work describes the biomechanical modeling of human postural mechanics in the sagittal plane and the use of optimal control to generate open-loop raising-up movements from a squatting position. The biomechanical model comprises 10 equivalent musculotendon actuators, based on a 40-muscle model, and three links (shank, thigh and HAT: head, arms and trunk). Optimal control solutions are achieved through algorithms based on the Consistent Approximations Theory (Schwartz and Polak, 1996), where the continuous nonlinear dynamics is represented in a discrete space by means of a Runge-Kutta integration and the control signals in a spline-coefficient functional space. This leads to nonlinear programming problems solved by a sequential quadratic programming (SQP) method. Due to the highly nonlinear and unstable nature of the posture dynamics, numerical convergence is difficult, and specific strategies must be implemented in order to achieve it. Results for controls (muscular excitations) and angular trajectories are shown for two final simulation times, and specific control strategies are discussed. PMID:14522212
Multi-model groundwater-management optimization: reconciling disparate conceptual models
NASA Astrophysics Data System (ADS)
Timani, Bassel; Peralta, Richard
2015-09-01
Disagreement among policymakers often involves policy issues and differences between the decision makers' implicit utility functions. Significant disagreement can also exist concerning conceptual models of the physical system. Disagreement on the validity of a single simulation model delays discussion on policy issues and prevents the adoption of consensus management strategies. For such a contentious situation, the proposed multi-conceptual model optimization (MCMO) can help stakeholders reach a compromise strategy. MCMO computes mathematically optimal strategies that simultaneously satisfy analogous constraints and bounds in multiple numerical models that differ in boundary conditions, hydrogeologic stratigraphy, and discretization. Shadow prices and trade-offs guide the process of refining the first MCMO-developed 'multi-model strategy' into a realistic compromise management strategy. By employing automated cycling, MCMO is practical for linear and nonlinear aquifer systems. In this reconnaissance study, MCMO is applied to the multilayer Cache Valley (Utah and Idaho, USA) river-aquifer system using two simulation models with analogous background conditions but different vertical discretization and boundary conditions. The objective is to maximize additional safe pumping (beyond current pumping), subject to constraints on groundwater head and seepage from the aquifer to surface waters. MCMO application reveals that, in order to protect the local ecosystem, increased groundwater pumping can satisfy only 40% of the projected increase in water demand. To explore the possibility of increasing that pumping while protecting the ecosystem, MCMO clearly identifies localities requiring additional field data. MCMO is applicable to areas and optimization problems other than those used here; the steps to prepare comparable sub-models for MCMO use are area-dependent.
Vibroacoustic optimization using a statistical energy analysis model
NASA Astrophysics Data System (ADS)
Culla, Antonio; D'Ambrogio, Walter; Fregolent, Annalisa; Milana, Silvia
2016-08-01
In this paper, an optimization technique for medium-to-high-frequency dynamic problems based on the Statistical Energy Analysis (SEA) method is presented. In a SEA model, the subsystem energies are controlled by internal loss factors (ILFs) and coupling loss factors (CLFs), which in turn depend on the physical parameters of the subsystems. A preliminary sensitivity analysis of subsystem energy to the CLFs is performed to select the CLFs that are most effective on subsystem energies. Since the injected power depends not only on the external loads but also on the physical parameters of the subsystems, it must be taken into account under certain conditions. This is accomplished in the optimization procedure, where approximate relationships between CLFs, injected power and physical parameters are derived. The approach is applied to a typical aeronautical structure: the cabin of a helicopter.
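The SEA power balance underlying such a model reduces to a small linear system for the subsystem energies. The loss factors and injected power below are illustrative placeholders, not values from the paper:

```python
import numpy as np

# Two-subsystem SEA sketch. Power balance for subsystem i:
#   P_i = omega * [ (eta_i + sum_j eta_ij) * E_i - sum_j eta_ji * E_j ]
omega = 2 * np.pi * 1000.0        # band centre frequency [rad/s]
ilf = np.array([0.01, 0.02])      # internal loss factors eta_i
clf = np.array([[0.0, 0.004],     # coupling loss factors eta_ij
                [0.002, 0.0]])
P_in = np.array([1.0, 0.0])       # injected power [W], subsystem 1 only

A = np.diag(ilf + clf.sum(axis=1)) - clf.T
E = np.linalg.solve(omega * A, P_in)   # subsystem energies [J]
print(np.round(E, 6))
```

An optimization loop of the kind the paper describes would wrap this solve, expressing the CLFs (and the injected power) as functions of the physical design parameters.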
Timber harvest planning: a combined optimization/simulation model
Arthur, J.L.; Dykstra, D.P.
1980-11-01
A special cascading fixed charge model can be used to characterize a forest management planning problem in which the objectives are to identify the optimal shape of forest harvest cutting units and simultaneously to assign facilities for logging those units. A four-part methodology was developed to assist forest managers in analyzing areas proposed for harvesting. This methodology: analyzes harvesting feasibility; computes the optimal solution to the cascading fixed charge problem; undertakes a GASP IV simulation to provide additional information about the proposed harvesting operation; and permits the forest manager to perform a time-cost analysis that may lead to a more realistic, and thus improved, solution. (5 diagrams, 16 references, 3 tables)
Logit Model based Performance Analysis of an Optimization Algorithm
NASA Astrophysics Data System (ADS)
Hernández, J. A.; Ospina, J. D.; Villada, D.
2011-09-01
In this paper, the performance of the Multi Dynamics Algorithm for Global Optimization (MAGO) is studied through simulation using five standard test functions. To guarantee that the algorithm converges to a global optimum, a set of experiments searching for the best combination of the only two MAGO parameters (number of iterations and number of potential solutions) is considered. These parameters are varied sequentially while the dimension of several test functions is increased, and performance curves are obtained. The MAGO was originally designed to perform well with small populations; the self-adaptation task with small populations therefore becomes more challenging as the problem dimension grows. The results show that the probability of convergence to an optimal solution increases with growing numbers of iterations and potential solutions. However, the success rates fall off as the dimension of the problem escalates. A logit model is used to determine the mutual effects between the parameters of the algorithm.
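A logit analysis of this kind can be reproduced in a few lines: regress a binary convergence outcome on the two algorithm parameters. The synthetic success data and effect sizes below are invented for illustration, not the paper's experiments:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400
iters = rng.uniform(10, 200, n)          # number of iterations per run
pop   = rng.uniform(5, 50, n)            # number of potential solutions
# Synthetic ground truth: success odds grow with both parameters.
p_true = 1 / (1 + np.exp(-(-4 + 0.02 * iters + 0.05 * pop)))
y = (rng.random(n) < p_true).astype(float)

# Standardize predictors and fit the logit model by Newton's method (IRLS).
Z = np.column_stack([iters, pop])
Z = (Z - Z.mean(0)) / Z.std(0)
X = np.column_stack([np.ones(n), Z])
beta = np.zeros(3)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))
    W = p * (1 - p)
    beta += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (y - p))
print(np.round(beta, 2))   # both slope estimates come out positive here
```

The fitted coefficients quantify the "mutual effects" the abstract refers to: how strongly each parameter shifts the log-odds of reaching the optimum.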
Optimal boson energy for superconductivity in the Holstein model
NASA Astrophysics Data System (ADS)
Lin, Chungwei; Wang, Bingnan; Teo, Koon Hoo
2016-06-01
We examine the superconducting solution of the Holstein model, where the conduction electrons couple to dispersionless boson fields, using Migdal-Eliashberg theory and dynamical mean field theory. Although they differ in numerical values, both methods imply the existence of an optimal boson energy for superconductivity at a given electron-boson coupling. This nonmonotonic behavior can be understood as an interplay between polaron and superconducting physics: the electron-boson coupling is the origin of superconductivity, but at the same time it traps the conduction electrons, making the system more insulating. Our calculation provides a simple explanation of the recent experiment on sulfur hydride, where an optimal pressure for superconductivity was observed. The validity of both methods is discussed.
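The nonmonotonic dependence can be caricatured with a McMillan-style estimate: at fixed coupling strength g², the dimensionless coupling scales as λ ∝ g²/ω, so the transition temperature vanishes at both small and large boson energies. All numbers below are illustrative; this sketch is not the Migdal-Eliashberg or DMFT calculation of the paper:

```python
import numpy as np

mu_star = 0.1      # Coulomb pseudopotential (assumed)
g2 = 50.0          # fixed electron-boson coupling strength g^2 (arbitrary units)

def tc(omega):
    """McMillan-type Tc with lambda ~ g^2/omega at fixed coupling."""
    lam = g2 / omega
    denom = lam - mu_star * (1 + 0.62 * lam)
    if denom <= 0:
        return 0.0
    return (omega / 1.2) * np.exp(-1.04 * (1 + lam) / denom)

omegas = np.linspace(5, 2000, 400)
tcs = np.array([tc(w) for w in omegas])
print(round(omegas[np.argmax(tcs)], 1), round(tcs.max(), 3))
```

The maximum of Tc sits at an intermediate ω: small ω suppresses the prefactor (polaronic limit), large ω suppresses λ and hence the pairing exponent.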
Optimal dividends in the Brownian motion risk model with interest
NASA Astrophysics Data System (ADS)
Fang, Ying; Wu, Rong
2009-07-01
In this paper, we consider a Brownian motion risk model in which, in addition, the surplus earns investment income at a constant force of interest. The objective is to find a dividend policy that maximizes the expected discounted value of dividend payments. It is well known that optimality is achieved by a barrier strategy when the dividend rate is unrestricted. However, ultimate ruin of the company is certain if a barrier strategy is applied, which in many circumstances is not desirable. This consideration leads us to impose a restriction on the dividend stream: we assume that dividends are paid to the shareholders according to admissible strategies whose dividend rate is bounded by a constant. Under this additional constraint, we show that the optimal dividend strategy is a threshold strategy.
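A threshold strategy of this kind is easy to value by Monte Carlo: pay at the maximal admissible rate whenever the surplus is above the threshold, and stop at ruin. The parameter values below are arbitrary illustrations, not results from the paper:

```python
import numpy as np

def discounted_dividends(b, c_max, x0=5.0, mu=1.0, sigma=2.0, r=0.02,
                         delta=0.05, T=100.0, dt=0.01, n_paths=1000, seed=0):
    """Monte Carlo value of a threshold strategy: dividends are paid at the
    maximal admissible rate c_max whenever the surplus exceeds the level b;
    payment stops at ruin."""
    rng = np.random.default_rng(seed)
    x = np.full(n_paths, x0)              # surplus paths
    alive = np.ones(n_paths, dtype=bool)  # not yet ruined
    value = np.zeros(n_paths)             # accumulated discounted dividends
    for k in range(int(T / dt)):
        pay = c_max * ((x > b) & alive)
        value += np.exp(-delta * k * dt) * pay * dt
        x = x + (mu + r * x - pay) * dt + sigma * rng.normal(0.0, np.sqrt(dt), n_paths)
        alive &= x > 0
    return value.mean()

v = discounted_dividends(b=3.0, c_max=1.5)
print(round(v, 3))
```

Sweeping the threshold b in such a simulation approximates the optimal level that the paper characterizes analytically; the value is always bounded above by the perpetuity c_max/δ.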
Using Cotton Model Simulations to Estimate Optimally Profitable Irrigation Strategies
NASA Astrophysics Data System (ADS)
Mauget, S. A.; Leiker, G.; Sapkota, P.; Johnson, J.; Maas, S.
2011-12-01
In recent decades irrigation pumping from the Ogallala Aquifer has led to declines in saturated thickness that have not been compensated for by natural recharge, which has led to questions about the long-term viability of agriculture in the cotton producing areas of west Texas. Adopting irrigation management strategies that optimize profitability while reducing irrigation waste is one way of conserving the aquifer's water resource. Here, a database of modeled cotton yields generated under drip and center pivot irrigated and dryland production scenarios is used in a stochastic dominance analysis that identifies such strategies under varying commodity price and pumping cost conditions. This database and analysis approach will serve as the foundation for a web-based decision support tool that will help producers identify optimal irrigation treatments under specified cotton price, electricity cost, and depth to water table conditions.
In vitro placental model optimization for nanoparticle transport studies
Cartwright, Laura; Poulsen, Marie Sønnegaard; Nielsen, Hanne Mørck; Pojana, Giulio; Knudsen, Lisbeth E; Saunders, Margaret; Rytting, Erik
2012-01-01
Background Advances in biomedical nanotechnology raise hopes in patient populations but may also raise questions regarding biodistribution and biocompatibility, especially during pregnancy. Special consideration must be given to the placenta as a biological barrier because a pregnant woman’s exposure to nanoparticles could have significant effects on the fetus developing in the womb. Therefore, the purpose of this study is to optimize an in vitro model for characterizing the transport of nanoparticles across human placental trophoblast cells. Methods The growth of BeWo (clone b30) human placental choriocarcinoma cells for nanoparticle transport studies was characterized in terms of optimized Transwell® insert type and pore size, the investigation of barrier properties by transmission electron microscopy, tight junction staining, transepithelial electrical resistance, and fluorescein sodium transport. Following the determination of nontoxic concentrations of fluorescent polystyrene nanoparticles, the cellular uptake and transport of 50 nm and 100 nm diameter particles was measured using the in vitro BeWo cell model. Results Particle size measurements, fluorescence readings, and confocal microscopy indicated both cellular uptake of the fluorescent polystyrene nanoparticles and the transcellular transport of these particles from the apical (maternal) to the basolateral (fetal) compartment. Over the course of 24 hours, the apparent permeability across BeWo cells grown on polycarbonate membranes (3.0 μm pore size) was four times higher for the 50 nm particles compared with the 100 nm particles. Conclusion The BeWo cell line has been optimized and shown to be a valid in vitro model for studying the transplacental transport of nanoparticles. Fluorescent polystyrene nanoparticle transport was size-dependent, as smaller particles reached the basal (fetal) compartment at a higher rate. PMID:22334780
Design Oriented Structural Modeling for Airplane Conceptual Design Optimization
NASA Technical Reports Server (NTRS)
Livne, Eli
1999-01-01
The main goal of the research conducted with the support of this grant was to develop design-oriented structural optimization methods for the conceptual design of airplanes. Traditionally, in conceptual design, airframe weight is estimated from statistical equations developed over years of fitting weight data from databases of similar existing airplanes. Using such regression equations for the design of new airplanes is justified only if the new airplanes use structural technology similar to that of the airplanes in those weight databases. If any new structural technology is to be pursued, or any unconventional configurations designed, the statistical weight equations cannot be used. In such cases structural weight estimation must be based on rigorous "physics based" structural analysis and optimization of the airframes under consideration. Work under this grant explored airframe design-oriented structural optimization techniques along two lines of research: methods based on "fast" design-oriented finite element technology, and methods based on equivalent plate / equivalent shell models of airframes, in which the vehicle is modelled as an assembly of plate and shell components, each simulating a lifting surface or nacelle / fuselage pieces. Since responses to changes in geometry are essential in the conceptual design of airplanes, as is the capability to optimize the shape itself, research supported by this grant sought to develop efficient techniques for parametrization of airplane shape and sensitivity analysis with respect to shape design variables. Towards the end of the grant period, a prototype automated structural analysis code designed to work with the NASA Aircraft Synthesis conceptual design code ACSYNT was delivered to NASA Ames.
Proficient brain for optimal performance: the MAP model perspective
Bertollo, Maurizio; di Fronso, Selenia; Filho, Edson; Conforto, Silvia; Schmid, Maurizio; Bortoli, Laura; Comani, Silvia; Robazza, Claudio
2016-01-01
Background. The main goal of the present study was to explore theta and alpha event-related desynchronization/synchronization (ERD/ERS) activity during shooting performance. We adopted the idiosyncratic framework of the multi-action plan (MAP) model to investigate different processing modes underpinning four types of performance. In particular, we were interested in examining the neural activity associated with optimal-automated (Type 1) and optimal-controlled (Type 2) performances. Methods. Ten elite shooters (6 male and 4 female) with extensive international experience participated in the study. ERD/ERS analysis was used to investigate cortical dynamics during performance. A 4 × 3 (performance types × time) repeated measures analysis of variance was performed to test the differences among the four types of performance during the three seconds preceding the shots for theta, low alpha, and high alpha frequency bands. The dependent variables were the ERD/ERS percentages in each frequency band (i.e., theta, low alpha, high alpha) for each electrode site across the scalp. This analysis was conducted on 120 shots for each participant in three different frequency bands and the individual data were then averaged. Results. We found ERS to be mainly associated with optimal-automatic performance, in agreement with the “neural efficiency hypothesis.” We also observed more ERD as related to optimal-controlled performance in conditions of “neural adaptability” and proficient use of cortical resources. Discussion. These findings are congruent with the MAP conceptualization of four performance states, in which unique psychophysiological states underlie distinct performance-related experiences. From an applied point of view, our findings suggest that the MAP model can be used as a framework to develop performance enhancement strategies based on cognitive and neurofeedback techniques. PMID:27257557
The reproductive value in distributed optimal control models.
Wrzaczek, Stefan; Kuhn, Michael; Prskawetz, Alexia; Feichtinger, Gustav
2010-05-01
We show that in a large class of distributed optimal control models (DOCM), where the population is described by a McKendrick-type equation with an endogenous number of newborns, Fisher's reproductive value shows up as part of the shadow price of the population. Depending on the objective function, the reproductive value may be negative. Moreover, we show how the reproductive value behaves under changing vital rates. To motivate and demonstrate the general framework, we provide examples from health economics, epidemiology, and population biology. PMID:20096297
Numerical Modeling and Optimization of Warm-water Heat Sinks
NASA Astrophysics Data System (ADS)
Hadad, Yaser; Chiarot, Paul
2015-11-01
For cooling in large data-centers and supercomputers, water is increasingly replacing air as the working fluid in heat sinks. Utilizing water provides unique capabilities; for example: higher heat capacity, Prandtl number, and convection heat transfer coefficient. The use of warm, rather than chilled, water has the potential to provide increased energy efficiency. The geometric and operating parameters of the heat sink govern its performance. Numerical modeling is used to examine the influence of geometry and operating conditions on key metrics such as thermal and flow resistance. This model also facilitates studies on cooling of electronic chip hot spots and failure scenarios. We report on the optimal parameters for a warm-water heat sink to achieve maximum cooling performance.
Read, Mark N; Bailey, Jacqueline; Timmis, Jon; Chtanova, Tatyana
2016-09-01
The advent of two-photon microscopy now reveals unprecedented, detailed spatio-temporal data on cellular motility and interactions in vivo. Understanding cellular motility patterns is key to gaining insight into the development and possible manipulation of the immune response. Computational simulation has become an established technique for understanding immune processes and evaluating hypotheses in the context of experimental data, and there is clear scope to integrate microscopy-informed motility dynamics. However, determining which motility model best reflects in vivo motility is non-trivial: 3D motility is an intricate process requiring several metrics to characterize. This complicates model selection and parameterization, which must be performed against several metrics simultaneously. Here we evaluate Brownian motion, Lévy walk and several correlated random walks (CRWs) against the motility dynamics of neutrophils and lymph node T cells under inflammatory conditions by simultaneously considering cellular translational and turn speeds, and meandering indices. Heterogeneous cells exhibiting a continuum of inherent translational speeds and directionalities comprise both datasets, a feature significantly improving capture of in vivo motility when simulated as a CRW. Furthermore, translational and turn speeds are inversely correlated, and the corresponding CRW simulation again improves capture of our in vivo data, albeit to a lesser extent. In contrast, Brownian motion poorly reflects our data. Lévy walk is competitive in capturing some aspects of neutrophil motility, but T cell directional persistence only, therein highlighting the importance of evaluating models against several motility metrics simultaneously. This we achieve through novel application of multi-objective optimization, wherein each model is independently implemented and then parameterized to identify optimal trade-offs in performance against each metric. The resultant Pareto fronts of optimal
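The multi-objective selection step described above, keeping only the model parameterizations that are not outperformed on every motility metric at once, amounts to extracting a Pareto front. The metric values below are invented for illustration:

```python
import numpy as np

def pareto_front(costs):
    """Indices of non-dominated points (minimization in every column):
    a point is dropped if some other point is <= in all objectives and
    strictly < in at least one."""
    costs = np.asarray(costs, dtype=float)
    keep = []
    for i, c in enumerate(costs):
        dominated = np.any(np.all(costs <= c, axis=1) & np.any(costs < c, axis=1))
        if not dominated:
            keep.append(i)
    return keep

# Hypothetical per-metric fitting errors for five parameterizations:
# columns = error vs translational speed, turn speed, meandering index.
errs = [[0.9, 0.2, 0.5],
        [0.4, 0.4, 0.4],
        [0.2, 0.9, 0.3],
        [0.5, 0.5, 0.6],   # dominated by the second row
        [0.3, 0.3, 0.9]]
print(pareto_front(errs))  # → [0, 1, 2, 4]
```

The surviving parameterizations represent optimal trade-offs: no single one is best on all metrics, which is exactly why the evaluation must consider the metrics simultaneously.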
Optimization model for UV-Riboflavin corneal cross-linking
NASA Astrophysics Data System (ADS)
Schumacher, S.; Wernli, J.; Scherrer, S.; Bueehler, M.; Seiler, T.; Mrochen, M.
2011-03-01
UV cross-linking is nowadays an established method for the treatment of keratectasia, and a standardized protocol is currently used for the treatment. We present a theoretical model that predicts the number of crosslinks induced in the corneal tissue as a function of the riboflavin concentration, the radiation intensity, the pre-treatment time and the treatment time. The model is developed by merging the diffusion equation, the equation for the light distribution as a function of the absorbers in the tissue, and a rate equation for the polymerization process. A higher concentration of the riboflavin solution, as well as a higher irradiation intensity, increases the number of induced crosslinks. However, stress-strain experiments performed to support the model showed that higher riboflavin concentrations (> 0.125%) do not result in a further increase in the stability of the corneal tissue. This is caused by the inhomogeneous distribution of induced crosslinks throughout the cornea due to the uneven absorption of the UV light. The new model offers the possibility of optimizing the treatment individually for every patient, depending on corneal thickness, in terms of efficiency, safety and treatment time.
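The three ingredients named above (riboflavin diffusion, light attenuation by the absorber, and a polymerization rate) can be combined in a crude 1-D sketch. All parameter values and the frozen-profile simplification below are assumptions for illustration, not the paper's calibrated model:

```python
import numpy as np

# 1-D sketch through corneal depth (assumed parameters).
nz, L = 100, 400e-6                 # grid points, corneal thickness [m]
dz = L / nz
D = 6.5e-11                         # riboflavin diffusivity [m^2/s] (assumed)
C = np.zeros(nz); C_top = 1.0       # concentration, held fixed at the surface
dt = 0.2 * dz**2 / D                # stable explicit time step

def diffuse(C, t_total):
    """Explicit finite-difference diffusion during the pre-treatment soak."""
    for _ in range(int(t_total / dt)):
        C[0] = C_top                          # riboflavin film at the surface
        C[1:-1] += D * dt / dz**2 * (C[2:] - 2*C[1:-1] + C[:-2])
        C[-1] = C[-2]                         # zero-flux at the endothelium
    return C

C = diffuse(C, 30 * 60)             # 30 min pre-treatment

# Beer-Lambert attenuation by riboflavin, then crosslink rate ~ k * I * C.
eps, I0, k = 1e4, 3.0, 1e-3         # absorption coeff., surface irradiance, rate const.
I = I0 * np.exp(-eps * np.cumsum(C) * dz)
crosslinks = k * I * C * (30 * 60)  # 30 min irradiation, frozen-profile approximation
print(round(float(crosslinks.max()), 3))
```

Even this crude version reproduces the qualitative point of the abstract: because the UV light is absorbed unevenly, the induced crosslinks concentrate near the surface rather than distributing homogeneously through the depth.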
Computer model for characterizing, screening, and optimizing electrolyte systems
Gering, Kevin L.
2015-06-15
Electrolyte systems in contemporary batteries are tasked with operating under increasing performance requirements. All battery operation is in some way tied to the electrolyte and how it interacts with various regions within the cell environment. Because the electrolyte plays a crucial role in battery performance and longevity, it is imperative that accurate, physics-based models be developed that will characterize key electrolyte properties while keeping pace with the increasing complexity of these liquid systems. Advanced models are needed since laboratory measurements require significant resources to carry out for even a modest experimental matrix. The Advanced Electrolyte Model (AEM) developed at the INL is a proven capability designed to explore molecular-to-macroscale level aspects of electrolyte behavior, and can be used to drastically reduce the time required to characterize and optimize electrolytes. Although it is applied most frequently to lithium-ion battery systems, it is general in its theory and can be used toward numerous other targets and intended applications. This capability is unique, powerful, relevant to present and future electrolyte development, and without peer. It redefines electrolyte modeling for highly-complex contemporary systems, wherein significant steps have been taken to capture the reality of electrolyte behavior in the electrochemical cell environment. This capability can have a very positive impact on accelerating domestic battery development to support aggressive vehicle and energy goals in the 21st century.
Performance Optimization of NEMO Oceanic Model at High Resolution
NASA Astrophysics Data System (ADS)
Epicoco, Italo; Mocavero, Silvia; Aloisio, Giovanni
2014-05-01
The NEMO oceanic model is based on the Navier-Stokes equations along with a nonlinear equation of state, which couples the two active tracers (temperature and salinity) to the fluid velocity. The code is written in Fortran 90 and parallelized using MPI. The resolution of the global ocean models used today for climate change studies limits the prediction accuracy. To overcome this limit, a new high-resolution global model based on NEMO, simulating at 1/16° with 100 vertical levels, has been developed at CMCC. The model is computationally and memory intensive, so it requires many resources to be run, and an optimization activity is needed. The strategy requires a preliminary analysis to highlight scalability bottlenecks; this analysis has been performed on a SandyBridge architecture at CMCC, where an efficiency of 48% on 7K cores (the maximum available) was achieved. The analysis has also been carried out at the routine level, so that improvement actions could be designed for the entire code or for a single kernel. The analysis highlighted, for example, a loss of performance due to the routine used to implement the north fold algorithm (i.e. handling the points at the north pole of tri-polar grids), indicating that an optimization of the routine implementation is needed. The folding is achieved by considering only the last 4 rows at the top of the global domain and applying a rotation pivoting on the point in the middle. During the folding, the point at the top left is updated with the value of the point at the bottom right, and so on. The current version of the parallel algorithm is based on domain decomposition: each MPI process takes care of a block of points, and each process can update its points using values belonging to the symmetric process. In the current implementation, each received message is placed in a buffer with a number of elements equal to the total dimension of the global domain. Each process sweeps the entire buffer, but only a part of that computation is really useful for the
20nm CMP model calibration with optimized metrology data and CMP model applications
NASA Astrophysics Data System (ADS)
Katakamsetty, Ushasree; Koli, Dinesh; Yeo, Sky; Hui, Colin; Ghulghazaryan, Ruben; Aytuna, Burak; Wilson, Jeff
2015-03-01
Chemical Mechanical Polishing (CMP) is an essential process for planarization of the wafer surface in semiconductor manufacturing. The CMP process helps to produce smaller ICs with more electronic circuits, improving chip speed and performance. CMP also helps to increase throughput and yield, which reduces an IC manufacturer's total production costs. A CMP simulation model helps to predict CMP manufacturing hotspots early and to minimize CMP and CMP-induced lithography and etch defects [2]. At advanced process nodes, conventional dummy fill insertion for uniform density cannot address all of the CMP short-range, long-range and multi-layer stacking effects, or other effects such as pad conditioning and slurry selectivity. In this paper, we present the flow for 20nm CMP modeling using Mentor Graphics CMP modeling tools to build a multilayer Cu-CMP model and study hotspots. We present the inputs required for good CMP model calibration, the challenges faced with metrology collection, and techniques to optimize the wafer cost. We showcase the CMP model validation results and the model applications to predict multilayer topography accumulation effects for hotspot detection. We provide the flow for early detection of CMP hotspots with Calibre CMPAnalyzer to improve Design-for-Manufacturability (DFM) robustness.
Modeling marine surface microplastic transport to assess optimal removal locations
NASA Astrophysics Data System (ADS)
Sherman, Peter; van Sebille, Erik
2016-01-01
Marine plastic pollution is an ever-increasing problem that demands immediate mitigation and reduction plans. Here, a model based on satellite-tracked buoy observations, scaled to a large set of microplastic observations from surface trawls, was used to simulate the transport of plastics floating on the ocean surface from 2015 to 2025, with the goal of assessing the optimal marine microplastic removal locations under two scenarios: removing the most surface microplastic, and reducing the impact on ecosystems, using plankton growth as a proxy. The simulations show that the optimal removal locations are primarily located off the coast of China and in the Indonesian Archipelago for both scenarios. Our estimates show that 31% of the modeled microplastic mass can be removed by 2025 using 29 plastic collectors operating at 45% capture efficiency at these locations, compared with only 17% when the 29 collectors are moored in the North Pacific garbage patch, between Hawaii and California. The overlap of ocean surface microplastics and phytoplankton growth can be reduced by 46% at our proposed locations, whereas sinks in the North Pacific can reduce the overlap by only 14%. These results indicate that oceanic plastic removal may be more effective, both in removing a greater microplastic mass and in reducing potential harm to marine life, when performed closer to shore than inside the plastic accumulation zones at the centers of the gyres.
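The siting question above can be caricatured as picking the highest-flux cells from a transport simulation and applying a capture efficiency. The flux field and its distribution below are invented placeholders, not the paper's buoy-calibrated model:

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical microplastic mass flux passing through each ocean grid cell.
flux = rng.lognormal(0.0, 1.5, size=500)
n_collectors, efficiency = 29, 0.45

best = np.argsort(flux)[-n_collectors:]   # site collectors at the highest-flux cells
removed = efficiency * flux[best].sum()
pct = 100 * removed / flux.sum()
print(round(pct, 1), "% of the modelled mass intercepted")
```

Because surface flux is far higher in near-shore source regions than inside the gyre accumulation zones, a flux-based placement of this kind favors coastal sites, consistent with the abstract's conclusion.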