Samaan, Michael A; Weinhandl, Joshua T; Bawab, Sebastian Y; Ringleb, Stacie I
2016-12-01
Musculoskeletal modeling allows various parameters to be determined during dynamic maneuvers by using in vivo kinematic and ground reaction force (GRF) data as inputs. Discrepancies between experimental and model marker data, and inconsistencies in the GRFs applied to these models, can produce inaccurate simulations. Therefore, residual forces and moments are applied to these models in order to reduce these differences. Numerical optimization techniques can be used to determine optimal tracking weights for each degree of freedom of a musculoskeletal model, reducing the differences between experimental and model marker data as well as the residual forces and moments. In this study, the particle swarm optimization (PSO) and simplex simulated annealing (SIMPSA) algorithms were used to determine optimal tracking weights for the simulation of a sidestep cut. Both algorithms produced model kinematics within 1.4° of experimental kinematics, with residual forces and moments of less than 10 N and 18 Nm, respectively. The PSO algorithm replicated the experimental kinematic data more closely and produced more dynamically consistent kinematic data for the sidestep cut than the SIMPSA algorithm. Future studies should use external optimization routines to determine dynamically consistent kinematic data and report the differences between experimental and model data for these musculoskeletal simulations.
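The PSO step described above can be sketched in a few lines. The cost function below is a stand-in (a squared tracking error over a hypothetical three-element target), not the paper's residual-based objective, and all swarm parameters are conventional defaults rather than the study's settings:

```python
import random

def pso(cost, dim, n_particles=20, iters=200, lo=-5.0, hi=5.0, seed=1):
    """Minimal particle swarm optimizer (global-best topology)."""
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                     # personal bests
    pbest_val = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]    # global best
    w, c1, c2 = 0.7, 1.5, 1.5                       # inertia, cognitive, social
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = cost(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val

# Stand-in tracking cost: squared deviation from hypothetical target weights.
target = [1.0, 2.0, 3.0]
cost = lambda x: sum((xi - ti) ** 2 for xi, ti in zip(x, target))
best, val = pso(cost, dim=3)
```

In the study, each cost evaluation would instead run a full tracking simulation and score marker errors plus residuals, which is why the choice of metaheuristic matters.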
An optimization model for metabolic pathways.
Planes, F J; Beasley, J E
2009-10-15
Different mathematical methods have emerged in the post-genomic era to determine metabolic pathways. These methods can be divided into stoichiometric methods and path finding methods. In this paper we detail a novel optimization model, based upon integer linear programming, to determine metabolic pathways. Our model links reaction stoichiometry with path finding in a single approach. We test the ability of our model to determine 40 annotated Escherichia coli metabolic pathways. We show that our model is able to determine 36 of these 40 pathways in a computationally effective manner.
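A minimal illustration of the path-finding half of the problem, assuming a toy four-reaction network with one-substrate, one-product reactions. The actual model couples full stoichiometric balance with path finding in a single integer linear program; plain breadth-first search is shown here only to convey the graph-search flavor:

```python
from collections import deque

# Toy reaction network (hypothetical, drastically simplified stoichiometry):
# each reaction consumes one metabolite and produces another.
reactions = {
    "r1": ("glucose", "g6p"),
    "r2": ("g6p", "f6p"),
    "r3": ("f6p", "pyruvate"),
    "r4": ("g6p", "6pg"),      # branch not on the shortest route
}

def shortest_pathway(source, target):
    """Breadth-first search for the fewest-reaction route."""
    queue = deque([(source, [])])
    seen = {source}
    while queue:
        met, path = queue.popleft()
        if met == target:
            return path
        for rid, (sub, prod) in reactions.items():
            if sub == met and prod not in seen:
                seen.add(prod)
                queue.append((prod, path + [rid]))
    return None

pathway = shortest_pathway("glucose", "pyruvate")
```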
Optimal moment determination in POME-copula based hydrometeorological dependence modelling
NASA Astrophysics Data System (ADS)
Liu, Dengfeng; Wang, Dong; Singh, Vijay P.; Wang, Yuankun; Wu, Jichun; Wang, Lachun; Zou, Xinqing; Chen, Yuanfang; Chen, Xi
2017-07-01
Copula has been commonly applied in multivariate modelling in various fields where marginal distribution inference is a key element. To develop a flexible, unbiased mathematical inference framework for hydrometeorological multivariate applications, the principle of maximum entropy (POME) is increasingly being coupled with copula. However, previous POME-based studies have generally not considered the determination of optimal moment constraints. The main contribution of this study is the determination of optimal moments for POME, yielding a coupled optimal moment-POME-copula framework for modelling hydrometeorological multivariate events. In this framework, margins (marginal distributions) are derived using POME, subject to optimal moment constraints. Various candidate copulas are then constructed according to the derived margins, and the most probable one is determined based on goodness-of-fit statistics. This optimal moment-POME-copula framework is applied to model the dependence patterns of three types of hydrometeorological events: (i) single-site streamflow-water level; (ii) multi-site streamflow; and (iii) multi-site precipitation, with data collected from Yichang and Hankou in the Yangtze River basin, China. Results indicate that the optimal-moment POME is more accurate in margin fitting, and the corresponding copulas show good statistical performance in correlation simulation. The derived copulas also capture patterns that traditional correlation coefficients cannot reflect, providing an efficient approach for other applied scenarios in hydrometeorological multivariate modelling.
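The moment-constrained POME step can be illustrated on a discrete support. This sketch assumes a single first-moment constraint (target mean 3.0 on support 0..10, both hypothetical) and solves for the Lagrange multiplier by bisection; the study's framework additionally selects which moments to constrain:

```python
import math

def maxent_pmf(support, target_mean, tol=1e-10):
    """POME with a first-moment constraint: p_i ~ exp(-lam * x_i),
    with lam chosen by bisection so the mean matches the constraint."""
    support = list(support)
    def mean_for(lam):
        w = [math.exp(-lam * x) for x in support]
        z = sum(w)
        return sum(x * wi for x, wi in zip(support, w)) / z
    lo, hi = -5.0, 5.0                 # mean_for is decreasing in lam
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean_for(mid) > target_mean:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(-lam * x) for x in support]
    z = sum(w)
    return [wi / z for wi in w]

pmf = maxent_pmf(range(11), target_mean=3.0)
```

With only a mean constraint the result is the discrete exponential family member; adding higher moments (variance, skew) changes the family, which is exactly the choice the paper optimizes.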
Portfolio optimization by using linear programing models based on genetic algorithm
NASA Astrophysics Data System (ADS)
Sukono; Hidayat, Y.; Lesmana, E.; Putra, A. S.; Napitupulu, H.; Supian, S.
2018-01-01
In this paper, we discuss investment portfolio optimization using a linear programming model based on genetic algorithms. Portfolio risk is assumed to be measured by absolute standard deviation, and each investor has a risk tolerance for the investment portfolio. The investment portfolio optimization problem is formulated as a linear programming model, and the optimum solution is then determined using a genetic algorithm. As a numerical illustration, we analyze some of the stocks traded on the capital market in Indonesia. The analysis shows that portfolio optimization by the genetic algorithm approach produces a more efficient portfolio than optimization by a linear programming algorithm alone. Genetic algorithms can therefore be considered an alternative for determining the optimal investment portfolio, particularly with linear programming models.
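A toy version of the GA step, assuming hypothetical scenario returns for three assets and a soft penalty when mean absolute deviation exceeds the investor's risk tolerance. The paper's exact LP formulation and Indonesian stock data are not reproduced:

```python
import random

# Hypothetical scenario returns for 3 assets over 5 periods (rows = periods).
R = [[0.02, 0.01, 0.03],
     [0.01, 0.02, -0.01],
     [0.03, 0.00, 0.02],
     [-0.01, 0.01, 0.04],
     [0.02, 0.02, 0.00]]
RISK_TOL = 0.01   # tolerance on mean absolute deviation (hypothetical)

def fitness(w):
    port = [sum(wi * ri for wi, ri in zip(w, row)) for row in R]
    mean = sum(port) / len(port)
    mad = sum(abs(p - mean) for p in port) / len(port)  # absolute-deviation risk
    penalty = 100.0 * max(0.0, mad - RISK_TOL)          # soft risk constraint
    return mean - penalty

def normalize(w):
    s = sum(w)
    return [wi / s for wi in w]

def ga(pop_size=40, gens=100, seed=7):
    rng = random.Random(seed)
    pop = [normalize([rng.random() for _ in range(3)]) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]                    # elitist selection
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]         # crossover
            i = rng.randrange(3)
            child[i] = max(1e-9, child[i] + rng.gauss(0, 0.1))  # mutation
            children.append(normalize(child))
        pop = elite + children
    return max(pop, key=fitness)

w = ga()
```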
Development of a 3D log sawing optimization system for small sawmills in central Appalachia, US
Wenshu Lin; Jingxin Wang; Edward Thomas
2011-01-01
A 3D log sawing optimization system was developed to perform log generation, opening face determination, sawing simulation, and lumber grading using 3D modeling techniques. Heuristic and dynamic programming algorithms were used to determine opening face and grade sawing optimization. Positions and shapes of internal log defects were predicted using a model developed by...
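The dynamic-programming idea behind value-maximizing sawing can be conveyed by the classic rod-cutting recurrence, here with hypothetical lumber prices by integer length. The real system works on 3D log models with internal defects and grading rules:

```python
def best_sawing(log_len, prices):
    """Dynamic program: best value from a log of integer length,
    cut into pieces whose prices are known."""
    best = [0.0] * (log_len + 1)
    choice = [0] * (log_len + 1)
    for n in range(1, log_len + 1):
        for piece, price in prices.items():
            if piece <= n and price + best[n - piece] > best[n]:
                best[n] = price + best[n - piece]
                choice[n] = piece
    cuts, n = [], log_len          # recover the cutting pattern
    while n > 0:
        cuts.append(choice[n])
        n -= choice[n]
    return best[log_len], cuts

prices = {1: 1.0, 2: 5.0, 3: 8.0, 4: 9.0}   # hypothetical prices by length
value, cuts = best_sawing(7, prices)
```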
Ayvaz, M Tamer
2010-09-20
This study proposes a linked simulation-optimization model for solving unknown groundwater pollution source identification problems. In the proposed model, the MODFLOW and MT3DMS packages simulate the flow and transport processes in the groundwater system. These models are then integrated with an optimization model based on the heuristic harmony search (HS) algorithm. In the simulation-optimization model, the locations and release histories of the pollution sources are treated as explicit decision variables and determined through the optimization model. An implicit solution procedure is also proposed to determine the optimum number of pollution sources, which is an advantage of this model. The performance of the proposed model is evaluated on two hypothetical examples with simple and complex aquifer geometries, measurement error conditions, and different HS solution parameter sets. The results indicate that the proposed simulation-optimization model is effective and may be used to solve inverse pollution source identification problems.
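A minimal harmony search, the metaheuristic used in the paper, applied to a stand-in misfit function (recovering a hidden two-parameter "source location"). In the actual model, each candidate would be scored by running MODFLOW/MT3DMS; all parameter values below are conventional defaults, not the paper's:

```python
import random

def harmony_search(cost, dim, lo, hi, hms=10, iters=500,
                   hmcr=0.9, par=0.3, bw=0.05, seed=3):
    """Minimal harmony search: new candidates mix memory recall (hmcr),
    pitch adjustment (par, bandwidth bw), and random re-initialization."""
    rng = random.Random(seed)
    memory = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    scores = [cost(h) for h in memory]
    for _ in range(iters):
        new = []
        for d in range(dim):
            if rng.random() < hmcr:
                x = rng.choice(memory)[d]              # recall from memory
                if rng.random() < par:
                    x += rng.uniform(-bw, bw)          # pitch adjustment
            else:
                x = rng.uniform(lo, hi)                # random consideration
            new.append(min(hi, max(lo, x)))
        s = cost(new)
        worst = max(range(hms), key=lambda i: scores[i])
        if s < scores[worst]:                          # replace worst harmony
            memory[worst], scores[worst] = new, s
    best = min(range(hms), key=lambda i: scores[i])
    return memory[best], scores[best]

# Stand-in inverse problem: recover a hidden "source location" from a misfit.
truth = [0.3, -0.7]
misfit = lambda x: sum((xi - ti) ** 2 for xi, ti in zip(x, truth))
best, err = harmony_search(misfit, dim=2, lo=-1.0, hi=1.0)
```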
Method to determine the optimal constitutive model from spherical indentation tests
NASA Astrophysics Data System (ADS)
Zhang, Tairui; Wang, Shang; Wang, Weiqiang
2018-03-01
The limitations of current indentation theories were investigated, and a method to determine the optimal constitutive model from spherical indentation tests was proposed. Two constitutive models, a power law and a linear law, were used in finite element (FE) calculations, and a set of indentation governing equations was established for each model. Load-depth data from the normal indentation depth were used to fit the best parameters in each constitutive model, while data from the further loading part were compared with FE calculations; the model that better predicted the further deformation was considered the optimal one. Moreover, a Young's modulus calculation model that takes previous plastic deformation and the phenomenon of pile-up (or sink-in) into consideration was proposed to revise the original Sneddon-Pharr-Oliver model. Indentation results on six materials (304, 321, SA508, SA533, 15CrMoR, and Fv520B) were compared with tensile results, validating the reliability of the revised E calculation model and the optimal constitutive model determination method in this study.
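The model-selection logic (fit each constitutive law on the calibration range, then compare predictions on the further-loading range) can be sketched with synthetic data. All constants here are hypothetical, and simple least squares stands in for the paper's FE-based governing equations:

```python
import math

def fit_linear(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return a, (sy - a * sx) / n

def fit_power(xs, ys):
    """Least squares for y = K * x**m, fit linearly in log-log space."""
    m, logK = fit_linear([math.log(x) for x in xs], [math.log(y) for y in ys])
    return math.exp(logK), m

# Synthetic flow curve generated from a power law; constants are hypothetical.
K_true, n_true = 500.0, 0.25
strain = [0.01 * i for i in range(1, 11)]           # calibration range
stress = [K_true * e ** n_true for e in strain]
extra_strain = [0.12, 0.15, 0.20]                   # "further loading" range
extra_stress = [K_true * e ** n_true for e in extra_strain]

a, b = fit_linear(strain, stress)
K, m = fit_power(strain, stress)
err_lin = sum((a * e + b - s) ** 2 for e, s in zip(extra_strain, extra_stress))
err_pow = sum((K * e ** m - s) ** 2 for e, s in zip(extra_strain, extra_stress))
optimal = "power" if err_pow < err_lin else "linear"
```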
DOE Office of Scientific and Technical Information (OSTI.GOV)
DuPont, Bryony; Cagan, Jonathan; Moriarty, Patrick
This paper presents a system of modeling advances that can be applied in the computational optimization of wind plants. These modeling advances include accurate cost and power modeling, partial wake interaction, and the effects of varying atmospheric stability. To validate the use of this advanced modeling system, it is employed within an Extended Pattern Search (EPS)-Multi-Agent System (MAS) optimization approach for multiple wind scenarios. The wind farm layout optimization problem involves optimizing the position and size of wind turbines such that the aerodynamic effects of upstream turbines are reduced, which increases the effective wind speed and resultant power at each turbine. The EPS-MAS optimization algorithm employs a profit objective, and an overarching search determines individual turbine positions, with a concurrent EPS-MAS determining the optimal hub height and rotor diameter for each turbine. Two wind cases are considered: (1) constant, unidirectional wind, and (2) three discrete wind speeds and varying wind directions, each of which have a probability of occurrence. Results show the advantages of applying the series of advanced models compared to previous application of an EPS with less advanced models to wind farm layout optimization, and imply best practices for computational optimization of wind farms with improved accuracy.
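The pattern-search core of an EPS can be sketched as a compass search: poll a step in each coordinate direction, move on improvement, shrink the step otherwise. The objective below is a stand-in quadratic, not the paper's profit model:

```python
def pattern_search(f, x0, step=1.0, shrink=0.5, tol=1e-6):
    """Compass pattern search: poll +/- step along each axis,
    move to any improvement, otherwise shrink the step."""
    x = list(x0)
    fx = f(x)
    while step > tol:
        improved = False
        for d in range(len(x)):
            for delta in (step, -step):
                trial = x[:]
                trial[d] += delta
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
                    break                 # accept first improving poll on axis d
        if not improved:
            step *= shrink
    return x, fx

# Stand-in objective for a two-parameter layout decision (hypothetical optimum).
f = lambda p: (p[0] - 2.0) ** 2 + (p[1] + 1.0) ** 2
x, fx = pattern_search(f, [0.0, 0.0])
```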
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Qingda; Gao, Xiaoyang; Krishnamoorthy, Sriram
Empirical optimizers like ATLAS have been very effective in optimizing computational kernels in libraries. The best choice of parameters such as tile size and degree of loop unrolling is determined by executing different versions of the computation. In contrast, optimizing compilers use a model-driven approach to program transformation. While the model-driven approach of optimizing compilers is generally orders of magnitude faster than ATLAS-like library generators, its effectiveness can be limited by the accuracy of the performance models used. In this paper, we describe an approach where a class of computations is modeled in terms of constituent operations that are empirically measured, thereby allowing modeling of the overall execution time. The performance model with empirically determined cost components is used to perform data layout optimization together with the selection of library calls and layout transformations in the context of the Tensor Contraction Engine, a compiler for a high-level domain-specific language for expressing computational models in quantum chemistry. The effectiveness of the approach is demonstrated through experimental measurements on representative computations from quantum chemistry.
HIV Treatment and Prevention: A Simple Model to Determine Optimal Investment.
Juusola, Jessie L; Brandeau, Margaret L
2016-04-01
To create a simple model to help public health decision makers determine how best to invest limited resources in HIV treatment scale-up and prevention. A linear model was developed for determining the optimal mix of investment in HIV treatment and prevention, given a fixed budget. The model incorporates estimates of secondary health benefits accruing from HIV treatment and prevention and allows for diseconomies of scale in program costs and subadditive benefits from concurrent program implementation. Data sources were the published literature. The target population was individuals infected with HIV or at risk of acquiring it. Illustrative examples of interventions include preexposure prophylaxis (PrEP), community-based education (CBE), and antiretroviral therapy (ART) for men who have sex with men (MSM) in the US. Outcome measures were incremental cost, quality-adjusted life-years gained, and HIV infections averted. Base case analysis indicated that it is optimal to invest in ART before PrEP and to invest in CBE before scaling up ART. Diseconomies of scale reduced the optimal investment level. Subadditivity of benefits did not affect the optimal allocation for relatively low implementation levels. The sensitivity analysis indicated that investment in ART before PrEP was optimal in all scenarios tested. Investment in ART before CBE became optimal when CBE reduced risky behavior by 4% or less. Limitations of the study are that dynamic effects are approximated with a static model. Our model provides a simple yet accurate means of determining optimal investment in HIV prevention and treatment. For MSM in the US, HIV control funds should be prioritized on inexpensive, effective programs like CBE, then on ART scale-up, with only minimal investment in PrEP.
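For a linear objective with divisible, capacity-capped programs, the optimal budget split reduces to ranking by cost-effectiveness and funding down the list. The program costs, benefits, and caps below are hypothetical placeholders, not the paper's estimates; only the CBE-before-ART-before-PrEP ordering mirrors the abstract's finding:

```python
# (name, cost per person, QALYs gained per person, max people reachable).
# All numbers are hypothetical placeholders.
programs = [
    ("PrEP", 10000.0, 0.5, 500),
    ("ART",   8000.0, 2.0, 1000),
    ("CBE",     50.0, 0.02, 100000),
]

def allocate(budget):
    """Greedy by QALYs per dollar: optimal for a linear objective with
    independent, divisible programs and capacity caps."""
    plan, total_qalys = {}, 0.0
    ranked = sorted(programs, key=lambda p: p[2] / p[1], reverse=True)
    for name, cost, qaly, cap in ranked:
        people = min(cap, budget / cost)      # fund until cap or budget runs out
        plan[name] = people * cost
        total_qalys += people * qaly
        budget -= plan[name]
    return plan, total_qalys

plan, qalys = allocate(10_000_000.0)
```

With these placeholder numbers the budget is exhausted on CBE and partial ART scale-up, leaving nothing for PrEP, which is the qualitative shape of the paper's base case.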
The optimization problems of CP operation
NASA Astrophysics Data System (ADS)
Kler, A. M.; Stepanova, E. L.; Maximov, A. S.
2017-11-01
The problem of enhancing the energy and economic efficiency of CPs is urgent, and one of the main methods of solving it is optimization of CP operation. To solve such optimization problems, Energy Systems Institute SB RAS has developed software for optimization calculations of CP operation, based on techniques and software tools for mathematical modeling and optimization of heat and power installations. Detailed mathematical models of new equipment were developed that describe with sufficient accuracy the processes occurring in the installations. They include steam turbine models (based on the checking calculation) that account for all steam turbine compartments and the regeneration system and also allow calculations with regenerative heaters disconnected. The software implements the technique for optimizing CP operating conditions and integrates the modeling and optimization tools in a common user interface. Optimization of CP operation often requires determining the minimum and maximum possible total useful electricity capacity of the plant at the set heat loads of consumers, i.e., the interval over which the CP capacity may vary. The software has been applied to optimize the operating conditions of the Novo-Irkutskaya CP of JSC “Irkutskenergo”. The efficiency of operating-condition optimization is shown, along with the possibility of determining the CP energy characteristics needed for optimizing power system operation.
Optimizing model: insemination, replacement, seasonal production, and cash flow.
DeLorenzo, M A; Spreen, T H; Bryan, G R; Beede, D K; Van Arendonk, J A
1992-03-01
Dynamic programming to solve the Markov decision process problem of optimal insemination and replacement decisions was adapted to address large dairy herd management decision problems in the US. Expected net present values of cow states (151,200) were used to determine the optimal policy. States were specified by class of parity (n = 12), production level (n = 15), month of calving (n = 12), month of lactation (n = 16), and days open (n = 7). Methodology optimized decisions based on net present value of an individual cow and all replacements over a 20-yr decision horizon. Length of decision horizon was chosen to ensure that optimal policies were determined for an infinite planning horizon. Optimization took 286 s of central processing unit time. The final probability transition matrix was determined, in part, by the optimal policy. It was estimated iteratively to determine post-optimization steady state herd structure, milk production, replacement, feed inputs and costs, and resulting cash flow on a calendar month and annual basis if optimal policies were implemented. Implementation of the model included seasonal effects on lactation curve shapes, estrus detection rates, pregnancy rates, milk prices, replacement costs, cull prices, and genetic progress. Other inputs included calf values, values of dietary TDN and CP per kilogram, and discount rate. Stochastic elements included conception (and, thus, subsequent freshening), cow milk production level within herd, and survival. Validation of optimized solutions was by separate simulation model, which implemented policies on a simulated herd and also described herd dynamics during transition to optimized structure.
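The keep-versus-replace structure of the dairy problem can be shown on a three-state toy MDP solved by value iteration; the paper's model has 151,200 states, a 20-yr horizon, and seasonal stochastics. All numbers here are hypothetical:

```python
STATES = {"low": 100.0, "avg": 250.0, "high": 400.0}   # annual margin by level
FRESH  = {"low": 0.3, "avg": 0.5, "high": 0.2}         # replacement heifer mix
REPLACE_COST, BETA = 300.0, 0.9                        # cost and discount factor

def value_iteration(tol=1e-10):
    """Solve the keep-vs-replace Markov decision process by value iteration."""
    V = {s: 0.0 for s in STATES}
    while True:
        # Replacing pays the cost and draws a fresh cow from FRESH.
        replace = -REPLACE_COST + sum(
            p * (STATES[t] + BETA * V[t]) for t, p in FRESH.items())
        newV, policy = {}, {}
        for s, margin in STATES.items():
            keep = margin + BETA * V[s]         # simplification: cow keeps her level
            newV[s], policy[s] = max((keep, "keep"), (replace, "replace"))
        if max(abs(newV[s] - V[s]) for s in STATES) < tol:
            return newV, policy
        V = newV

V, policy = value_iteration()
```

The fixed point says replace low producers and keep the rest, the qualitative shape of an optimal culling policy.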
Optimizing preventive maintenance policy: A data-driven application for a light rail braking system.
Corman, Francesco; Kraijema, Sander; Godjevac, Milinko; Lodewijks, Gabriel
2017-10-01
This article presents a case study determining the optimal preventive maintenance policy for a light rail rolling stock system in terms of reliability, availability, and maintenance costs. The maintenance policy defines one of the three predefined preventive maintenance actions at fixed time-based intervals for each of the subsystems of the braking system. Based on work, maintenance, and failure data, we model the reliability degradation of the system and its subsystems under the current maintenance policy by a Weibull distribution. We then analytically determine the relation between reliability, availability, and maintenance costs. We validate the model against recorded reliability and availability and get further insights by a dedicated sensitivity analysis. The model is then used in a sequential optimization framework determining preventive maintenance intervals to improve on the key performance indicators. We show the potential of data-driven modelling to determine optimal maintenance policy: same system availability and reliability can be achieved with 30% maintenance cost reduction, by prolonging the intervals and re-grouping maintenance actions.
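The reliability-versus-cost trade-off behind interval optimization can be sketched with a Weibull minimal-repair model: expected failures in an interval T grow as (T/eta)**beta, so the long-run cost rate balances preventive and corrective costs. The scale, shape, and cost figures are hypothetical, not the case study's fitted values:

```python
ETA, BETA = 100.0, 2.5            # Weibull scale (days) and shape (hypothetical)
C_PREV, C_CORR = 1.0, 10.0        # preventive vs corrective action cost

def cost_rate(T):
    """Cost per day of a preventive interval T under a Weibull intensity."""
    return (C_PREV + C_CORR * (T / ETA) ** BETA) / T

# Grid search for the interval minimizing long-run cost per day.
candidates = [t / 10 for t in range(10, 3001)]
T_opt = min(candidates, key=cost_rate)
```

With these numbers the optimum falls near 34 days; the closed form eta * (C_PREV / (C_CORR * (BETA - 1))) ** (1 / BETA) gives the same answer and is a quick sanity check on the grid.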
Simulation Research on Vehicle Active Suspension Controller Based on G1 Method
NASA Astrophysics Data System (ADS)
Li, Gen; Li, Hang; Zhang, Shuaiyang; Luo, Qiuhui
2017-09-01
Based on the order relation analysis (G1) method, an optimal linear controller for a vehicle active suspension is designed. The active and passive suspension system of a single-wheel vehicle is modeled, and the system input signal model is determined. The state-space equation of the system motion is then established from the equations of motion, and the optimal linear controller design is completed with optimal control theory. The weighting coefficients of the suspension performance index are determined by the G1 method. Finally, the model is simulated in Simulink. The simulation results show that, with the optimal weights determined by the G1 method under the given road conditions, vehicle body acceleration, suspension stroke, and tire motion displacement are optimized, improving the comprehensive performance of the vehicle while keeping the active control within its requirements.
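The optimal linear controller can be illustrated on a scalar stand-in: a one-dimensional discrete-time plant with an LQR gain found by iterating the Riccati equation to a fixed point. The weights q and r stand in for coefficients a G1 ranking would supply; all numbers are hypothetical, and a real quarter-car model has four states rather than one:

```python
# Scalar "suspension" plant: x_{k+1} = a*x_k + b*u_k, cost sum(q*x^2 + r*u^2).
a, b, q, r = 0.95, 0.1, 1.0, 0.01

def lqr_gain(tol=1e-12):
    """Iterate the scalar discrete algebraic Riccati equation to a fixed point."""
    P = q
    while True:
        K = (b * P * a) / (r + b * P * b)       # optimal feedback gain u = -K*x
        P_next = q + a * P * a - a * P * b * K
        if abs(P_next - P) < tol:
            return K
        P = P_next

K = lqr_gain()
closed = a - b * K      # closed-loop pole; must lie in (0, 1) for stability
```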
A multidimensional model of optimal participation of children with physical disabilities.
Kang, Lin-Ju; Palisano, Robert J; King, Gillian A; Chiarello, Lisa A
2014-01-01
To present a conceptual model of optimal participation in recreational and leisure activities for children with physical disabilities. The conceptualization of the model was based on review of contemporary theories and frameworks, empirical research and the authors' practice knowledge. A case scenario is used to illustrate application to practice. The model proposes that optimal participation in recreational and leisure activities involves the dynamic interaction of multiple dimensions and determinants of participation. The three dimensions of participation are physical, social and self-engagement. Determinants of participation encompass attributes of the child, family and environment. Experiences of optimal participation are hypothesized to result in long-term benefits including better quality of life, a healthier lifestyle and emotional and psychosocial well-being. Consideration of relevant child, family and environment determinants of dimensions of optimal participation should assist children, families and health care professionals to identify meaningful goals and outcomes and guide the selection and implementation of innovative therapy approaches and methods of service delivery. Implications for Rehabilitation: Optimal participation is proposed to involve the dynamic interaction of physical, social and self-engagement and attributes of the child, family and environment. The model emphasizes the importance of self-perceptions and participation experiences of children with physical disabilities. Optimal participation may have a positive influence on quality of life, a healthy lifestyle and emotional and psychosocial well-being. Knowledge of child, family, and environment determinants of physical, social and self-engagement should assist children, families and professionals in identifying meaningful goals and guiding innovative therapy approaches.
Determination of the optimal mesh parameters for Iguassu centrifuge flow and separation calculations
NASA Astrophysics Data System (ADS)
Romanihin, S. M.; Tronin, I. V.
2016-09-01
We present a method and results for determining optimal computational mesh parameters for axisymmetric modeling of flow and separation in the Iguassu gas centrifuge. The aim of this work was to determine mesh parameters that provide relatively low computational cost without loss of accuracy. We use a direct search optimization algorithm to calculate the optimal mesh parameters, which were then tested by calculating the optimal working regime of the Iguassu GC. The separative power calculated using the optimal mesh parameters differs by less than 0.5% from the result obtained on the detailed mesh. The presented method can be used to determine optimal mesh parameters of the Iguassu GC at different rotor speeds.
Foo, Lee Kien; McGree, James; Duffull, Stephen
2012-01-01
Optimal design methods have been proposed to determine the best sampling times when sparse blood sampling is required in clinical pharmacokinetic studies. However, the optimal blood sampling time points may not be feasible in clinical practice. Sampling windows, a time interval for blood sample collection, have been proposed to provide flexibility in blood sampling times while preserving efficient parameter estimation. Because of the complexity of the population pharmacokinetic models, which are generally nonlinear mixed effects models, there is no analytical solution available to determine sampling windows. We propose a method for determination of sampling windows based on MCMC sampling techniques. The proposed method attains a stationary distribution rapidly and provides time-sensitive windows around the optimal design points. The proposed method is applicable to determine sampling windows for any nonlinear mixed effects model although our work focuses on an application to population pharmacokinetic models.
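A minimal random-walk Metropolis sampler of the kind such MCMC methods build on, with a stand-in one-dimensional "design utility" peaked at a hypothetical optimal sampling time of 2 h; a window is then read off as a central interval of the draws. The target, step size, and window coverage are all illustrative assumptions:

```python
import math
import random

def metropolis(logpdf, x0, n=20000, step=0.5, seed=11):
    """Random-walk Metropolis sampler for a one-dimensional target."""
    rng = random.Random(seed)
    x, lp = x0, logpdf(x0)
    out = []
    for _ in range(n):
        y = x + rng.gauss(0.0, step)
        lq = logpdf(y)
        if rng.random() < math.exp(min(0.0, lq - lp)):   # accept/reject
            x, lp = y, lq
        out.append(x)
    return out

# Stand-in utility concentrated around a hypothetical optimal time t* = 2 h.
logpdf = lambda t: -0.5 * ((t - 2.0) / 0.3) ** 2
draws = sorted(metropolis(logpdf, x0=2.0)[5000:])        # drop burn-in
lo = draws[int(0.05 * len(draws))]                       # 90% central window
hi = draws[int(0.95 * len(draws))]
```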
NASA Astrophysics Data System (ADS)
Veerakamolmal, Pitipong; Lee, Yung-Joon; Fasano, J. P.; Hale, Rhea; Jacques, Mary
2002-02-01
In recent years, there has been increased focus by regulators, manufacturers, and consumers on the issue of product end of life management for electronics. This paper presents an overview of a conceptual study designed to examine the costs and benefits of several different Product Take Back (PTB) scenarios for used electronics equipment. The study utilized a reverse logistics supply chain model to examine the effects of several different factors in PTB programs. The model was done using the IBM supply chain optimization tool known as WIT (Watson Implosion Technology). Using the WIT tool, we were able to determine a theoretical optimal cost scenario for PTB programs. The study was designed to assist IBM internally in determining theoretical optimal Product Take Back program models and determining potential incentives for increasing participation rates.
NASA Astrophysics Data System (ADS)
Sundara Rajan, R.; Uthayakumar, R.
2017-12-01
In this paper we develop an economic order quantity model to investigate optimal replenishment policies for instantaneously deteriorating items under inflation and trade credit. The demand rate is a linear function of selling price and decreases exponentially with time over a finite planning horizon. Shortages are allowed and partially backlogged. Under these conditions, we model the retailer's inventory system as a profit maximization problem to determine the optimal selling price, order quantity, and replenishment time. An easy-to-use algorithm is developed to determine the retailer's optimal replenishment policies. We also provide the optimal present value of profit when shortages are completely backlogged as a special case. Numerical examples illustrate the algorithm, and managerial implications are drawn from them to substantiate the model. The results show an improvement in total profit under complete backlogging compared with partial backlogging.
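The base economic order quantity formula that such models extend, with hypothetical demand and cost figures; the paper layers deterioration, inflation, trade credit, and partial backlogging on top of this:

```python
import math

def eoq(demand_rate, order_cost, holding_cost):
    """Classic economic order quantity: Q* = sqrt(2*D*K / h)."""
    return math.sqrt(2.0 * demand_rate * order_cost / holding_cost)

def total_cost(Q, demand_rate, order_cost, holding_cost):
    """Annual ordering cost plus average holding cost at lot size Q."""
    return demand_rate / Q * order_cost + Q / 2.0 * holding_cost

# Hypothetical figures: 1200 units/yr demand, $100 per order, $6/unit/yr holding.
Q = eoq(demand_rate=1200.0, order_cost=100.0, holding_cost=6.0)
```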
NASA Astrophysics Data System (ADS)
Jiang, Hao; Lu, Jiangang
2018-05-01
Corn starch is an important material traditionally used in the food and chemical industries. To enhance the rapidness and reliability of determining starch content in corn, a methodology is proposed in this work using an optimal CC-PLSR-RBFNN calibration model and near-infrared (NIR) spectroscopy. The proposed model was developed based on the optimal selection of crucial parameters and the combination of the correlation coefficient method (CC), partial least squares regression (PLSR), and radial basis function neural network (RBFNN). To test the performance of the model, a standard NIR spectroscopy data set was introduced, containing spectral information and chemical reference measurements of 80 corn samples. For comparison, several other models based on the identical data set were also briefly discussed. The root mean square error of prediction (RMSEP) and the coefficient of determination (Rp2) on the prediction set were used for evaluation. The proposed model presented the best predictive performance, with the smallest RMSEP (0.0497%) and the highest Rp2 (0.9968). Therefore, the proposed method combining NIR spectroscopy with the optimal CC-PLSR-RBFNN model can help determine starch content in corn.
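The CC screening step can be sketched on toy data: score each wavelength by the absolute Pearson correlation with the reference measurement and keep the strongest. The "spectra" and starch values below are fabricated for illustration only, and the real method feeds the selected variables into PLSR and an RBF network rather than stopping here:

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Fabricated toy data: 4 wavelengths x 6 samples; wavelength 2 carries the signal.
X = [
    [0.10, 0.11, 0.09, 0.10, 0.12, 0.11],   # wavelength 1: noise
    [0.20, 0.30, 0.25, 0.40, 0.35, 0.45],   # wavelength 2: tracks starch
    [0.50, 0.49, 0.51, 0.50, 0.52, 0.49],   # wavelength 3: nearly flat
    [0.90, 0.80, 0.86, 0.70, 0.74, 0.66],   # wavelength 4: anti-correlated
]
y = [60.0, 64.0, 62.0, 68.0, 66.0, 70.0]    # starch content (%)

scores = [abs(pearson(row, y)) for row in X]
selected = max(range(len(X)), key=lambda i: scores[i])
```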
Optimal Experimental Design for Model Discrimination
Myung, Jay I.; Pitt, Mark A.
2009-01-01
Models of a psychological process can be difficult to discriminate experimentally because it is not easy to determine the values of the critical design variables (e.g., presentation schedule, stimulus structure) that will be most informative in differentiating them. Recent developments in sampling-based search methods in statistics make it possible to determine these values, and thereby identify an optimal experimental design. After describing the method, it is demonstrated in two content areas in cognitive psychology in which models are highly competitive: retention (i.e., forgetting) and categorization. The optimal design is compared with the quality of designs used in the literature. The findings demonstrate that design optimization has the potential to increase the informativeness of the experimental method. PMID:19618983
Correlation techniques to determine model form in robust nonlinear system realization/identification
NASA Technical Reports Server (NTRS)
Stry, Greselda I.; Mook, D. Joseph
1991-01-01
The fundamental challenge in identification of nonlinear dynamic systems is determining the appropriate form of the model. A robust technique is presented which essentially eliminates this problem for many applications. The technique is based on the Minimum Model Error (MME) optimal estimation approach. A detailed literature review is included in which fundamental differences between the current approach and previous work are described. The most significant feature is the ability to identify nonlinear dynamic systems without prior assumptions about the form of the nonlinearities, in contrast to existing nonlinear identification approaches, which usually require detailed assumptions about the nonlinearities. Model form is determined via statistical correlation of the MME optimal state estimates with the MME optimal model error estimates. The example illustrations indicate that the method is robust with respect to prior ignorance of the model and with respect to measurement noise, measurement frequency, and measurement record length.
Sel, Davorka; Lebar, Alenka Macek; Miklavcic, Damijan
2007-05-01
In electrochemotherapy (ECT), electropermeabilization parameters (pulse amplitude, electrode setup) need to be customized in order to expose the whole tumor to electric field intensities above the permeabilizing threshold and thus achieve effective ECT. In this paper, we present a model-based optimization approach to determining optimal electropermeabilization parameters for effective ECT. The optimization is carried out by minimizing the difference between the permeabilization threshold and the electric field intensities computed by a finite element model at selected points of the tumor. We examined the feasibility of model-based optimization of electropermeabilization parameters on a model geometry generated from computed tomography images, representing brain tissue with a tumor. The continuous parameter subject to optimization was pulse amplitude; the distance between electrode pairs was optimized as a discrete parameter. Optimization also considered the pulse generator's constraints on voltage and current. During optimization the two constraints were reached, preventing exposure of the entire volume of the tumor to electric field intensities above the permeabilizing threshold. Nevertheless, although the entire tumor volume was not permeabilized with the particular needle array holder and pulse generator, the maximal extent of permeabilization for the particular case (electrodes, tissue) was determined with the proposed approach. The model-based optimization approach could also be used for electro-gene transfer, where electric field intensities should be distributed between the permeabilizing threshold and the irreversible threshold, the latter causing tissue necrosis. This can be achieved by adding a constraint on maximum electric field intensity to the optimization procedure.
Optimal Control Inventory Stochastic With Production Deteriorating
NASA Astrophysics Data System (ADS)
Affandi, Pardi
2018-01-01
In this paper, we use an optimal control approach to determine the optimal production rate. Most inventory production models deal with a single item. We first build a stochastic mathematical model of the inventory; in this model we also assume that the items are kept in the same store. The mathematical model of the inventory problem can be deterministic or stochastic. This research discusses how to formulate the stochastic model and how to solve the inventory model using optimal control techniques. The main tool for deriving the necessary optimality conditions is the Pontryagin maximum principle, which involves the Hamiltonian function. In this way we obtain the optimal production rate in a production inventory system whose items are subject to deterioration.
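As a concrete illustration of the kind of production-inventory control problem described in this abstract, the following sketch solves a discrete-time linear-quadratic analogue by a backward Riccati-style recursion on the quadratic value function. All numbers (deterioration rate, demands, weights) are illustrative assumptions, and the discrete dynamic-programming route is a stand-in for the paper's continuous-time Pontryagin analysis; constraints such as nonnegative production are ignored.

```python
# Dynamics: x[k+1] = (1 - theta) * x[k] + u[k] - d[k]   (theta = deterioration)
# Cost:     sum_k q*(x[k]-x_ref)^2 + r*u[k]^2  +  qT*(x[T]-x_ref)^2
# Value function V_k(x) = P[k]*x^2 + 2*s[k]*x + const, found by backward recursion.

theta, x_ref = 0.1, 5.0            # deterioration rate, target inventory level
q, r, qT = 1.0, 0.2, 10.0          # tracking, production, terminal weights
demand = [3.0, 4.0, 2.0, 5.0, 3.0, 4.0]
T = len(demand)
a = 1.0 - theta

# Backward pass: compute value-function coefficients P[k], s[k].
P = [0.0] * (T + 1); s = [0.0] * (T + 1)
P[T], s[T] = qT, -qT * x_ref
for k in range(T - 1, -1, -1):
    Pn, sn, d = P[k + 1], s[k + 1], demand[k]
    P[k] = q + r * Pn * a * a / (r + Pn)
    s[k] = -q * x_ref + a * r * (sn - Pn * d) / (r + Pn)

def feedback(k, x):
    """Optimal production rate at stage k given current inventory x."""
    Pn, sn = P[k + 1], s[k + 1]
    m = a * x - demand[k]
    return -(Pn * m + sn) / (r + Pn)

def rollout(x0, override=None):
    """Simulate with the feedback law; optionally perturb one control."""
    x, cost = x0, 0.0
    for k in range(T):
        u = feedback(k, x)
        if override and override[0] == k:
            u += override[1]
        cost += q * (x - x_ref) ** 2 + r * u * u
        x = a * x + u - demand[k]
    return cost + qT * (x - x_ref) ** 2, x

opt_cost, x_final = rollout(2.0)
print(round(opt_cost, 3), round(x_final, 3))
```

By the dynamic-programming principle, perturbing any single control while keeping the optimal feedback thereafter can only increase total cost, which gives a simple correctness check.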
Optimal Dynamic Advertising Strategy Under Age-Specific Market Segmentation
NASA Astrophysics Data System (ADS)
Krastev, Vladimir
2011-12-01
We consider the model proposed by Faggian and Grosset for determining the advertising efforts and goodwill in the long run of a company under age segmentation of consumers. Reducing this model to optimal control subproblems, we find the optimal advertising strategy and goodwill.
The American-Soviet Symposium on Use of Mathematical Models to Optimize Water Quality Management examines methodological questions related to simulation and optimization modeling of processes that determine water quality of river basins. Discussants describe the general state of ...
Reactive flow model development for PBXW-126 using modern nonlinear optimization methods
NASA Astrophysics Data System (ADS)
Murphy, M. J.; Simpson, R. L.; Urtiew, P. A.; Souers, P. C.; Garcia, F.; Garza, R. G.
1996-05-01
The initiation and detonation behavior of PBXW-126 has been characterized and is described. PBXW-126 is a composite explosive consisting of approximately equal amounts of RDX, AP, AL, and NTO with a polyurethane binder. The three term ignition and growth of reaction model parameters (ignition+two growth terms) have been found using nonlinear optimization methods to determine the "best" set of model parameters. The ignition term treats the initiation of up to 0.5% of the RDX. The first growth term in the model treats the RDX growth of reaction up to 20% reacted. The second growth term treats the subsequent growth of reaction of the remaining AP/AL/NTO. The unreacted equation of state (EOS) was determined from the wave profiles of embedded gauge tests while the JWL product EOS was determined from cylinder expansion test results. The nonlinear optimization code, NLQPEB/GLO, was used to determine the "best" set of coefficients for the three term Lee-Tarver ignition and growth of reaction model.
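The parameter-calibration step described above (fitting reaction-model coefficients to experimental records by nonlinear optimization) can be sketched on a toy problem. The two-parameter growth law and the shrinking random search below are illustrative stand-ins for the Lee-Tarver model and the NLQPEB/GLO code; none of the numbers come from the paper.

```python
import math, random

# Toy "reacted fraction" law with two unknown coefficients (G, b),
# fitted to synthetic noiseless data by direct nonlinear optimization.
def reacted_fraction(t, G, b):
    return 1.0 - math.exp(-((G * t) ** b))

true_G, true_b = 2.0, 1.5
times = [0.05 * i for i in range(1, 21)]
data = [reacted_fraction(t, true_G, true_b) for t in times]

def sse(G, b):
    """Sum of squared errors between model and data."""
    return sum((reacted_fraction(t, G, b) - y) ** 2 for t, y in zip(times, data))

random.seed(0)
best = (1.0, 1.0)
best_err = sse(*best)
radius = 1.0
for it in range(4000):
    G = best[0] + random.uniform(-radius, radius)
    b = best[1] + random.uniform(-radius, radius)
    if G <= 0 or b <= 0:
        continue
    err = sse(G, b)
    if err < best_err:
        best, best_err = (G, b), err
    if it % 500 == 499:
        radius *= 0.5        # shrink the search radius: global -> local refinement
print(best, best_err)
```

The shrinking-radius loop mimics, very loosely, the global-then-local character of production optimizers; a real calibration would fit all ignition and growth coefficients against embedded-gauge and cylinder-test records simultaneously.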
SCI model structure determination program (OSR) user's guide. [optimal subset regression
NASA Technical Reports Server (NTRS)
1979-01-01
The computer program, OSR (Optimal Subset Regression) which estimates models for rotorcraft body and rotor force and moment coefficients is described. The technique used is based on the subset regression algorithm. Given time histories of aerodynamic coefficients, aerodynamic variables, and control inputs, the program computes correlation between various time histories. The model structure determination is based on these correlations. Inputs and outputs of the program are given.
Neighboring extremal optimal control design including model mismatch errors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, T.J.; Hull, D.G.
1994-11-01
The mismatch control technique that is used to simplify model equations of motion in order to determine analytic optimal control laws is extended using neighboring extremal theory. The first variation optimal control equations are linearized about the extremal path to account for perturbations in the initial state and the final constraint manifold. A numerical example demonstrates that the tuning procedure inherent in the mismatch control method increases the performance of the controls to the level of a numerically-determined piecewise-linear controller.
NASA Astrophysics Data System (ADS)
Sutrisno; Widowati; Heru Tjahjana, R.
2017-01-01
In this paper, we propose a mathematical model in the form of a dynamic/multi-stage optimization problem to solve an integrated supplier selection and tracking control problem for a single-product inventory system with product discounts. The product discount is stated as a piecewise-linear function. We use dynamic programming to solve this optimization problem, determining the optimal supplier and the optimal product volume to be purchased from that supplier in each time period so that the inventory level tracks a reference trajectory given by the decision maker with minimal total cost. We give a numerical experiment to evaluate the proposed model. In the results, the optimal supplier was determined for each time period and the inventory level followed the given reference well.
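A minimal sketch of the dynamic-programming formulation in this abstract: at each stage we choose a supplier and an order quantity (with a piecewise-linear quantity discount) so that inventory tracks a reference level at minimum cost. All demands, prices and discount breakpoints are invented for illustration.

```python
from functools import lru_cache

demand = [4, 6, 5, 7]
ref    = [3, 3, 3, 3]              # reference inventory trajectory
suppliers = {                      # name: (base price, discount threshold, discounted price)
    "S1": (10.0, 5, 8.0),
    "S2": (9.0, 8, 7.5),
}
MAX_INV, MAX_ORDER = 12, 10
T = len(demand)

def purchase_cost(name, qty):
    base, thresh, disc = suppliers[name]
    price = disc if qty >= thresh else base    # piecewise-linear discount
    return price * qty

@lru_cache(maxsize=None)
def V(t, inv):
    """Minimum cost-to-go from stage t with inventory inv, plus best action."""
    if t == T:
        return 0.0, None
    best, best_act = float("inf"), None
    for name in suppliers:
        for qty in range(MAX_ORDER + 1):
            nxt = inv + qty - demand[t]
            if not (0 <= nxt <= MAX_INV):
                continue
            cost = purchase_cost(name, qty) + 2.0 * abs(nxt - ref[t]) + V(t + 1, nxt)[0]
            if cost < best:
                best, best_act = cost, (name, qty)
    return best, best_act

# Recover the optimal plan by following the stored actions forward.
inv, plan = 2, []
for t in range(T):
    _, (name, qty) = V(t, inv)
    plan.append((name, qty))
    inv = inv + qty - demand[t]
print(plan, V(0, 2)[0])
```

The tracking penalty 2.0*|inventory − reference| plays the role of the reference-trajectory term in the abstract; a real instance would carry holding and shortage costs and several products.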
Forecasting of dissolved oxygen in the Guanting reservoir using an optimized NGBM (1,1) model.
An, Yan; Zou, Zhihong; Zhao, Yanfei
2015-03-01
An optimized nonlinear grey Bernoulli model was proposed by using a particle swarm optimization algorithm to solve the parameter optimization problem. In addition, each item in the first-order accumulated generating sequence was set in turn as an initial condition to determine which alternative would yield the highest forecasting accuracy. To test the forecasting performance, the optimized models with different initial conditions were then used to simulate dissolved oxygen concentrations in the Guanting reservoir inlet and outlet (China). The empirical results show that the optimized model can remarkably improve forecasting accuracy, and the particle swarm optimization technique is a good tool to solve parameter optimization problems. What's more, the optimized model with an initial condition that performs well in in-sample simulation may not do as well in out-of-sample forecasting. Copyright © 2015. Published by Elsevier B.V.
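Particle swarm optimization, used in this abstract to tune the NGBM(1,1) parameter, can be written in a few dozen lines. The sketch below is a generic PSO minimizer applied to a stand-in objective (a one-parameter exponential fit), not the NGBM error function itself.

```python
import math, random

def pso(f, bounds, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimize f over box 'bounds' with a basic particle swarm."""
    random.seed(seed)
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal bests
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Stand-in objective: squared error of a one-parameter exponential model
# against synthetic "observations" generated with rate 0.3.
ys = [1.0, 1.35, 1.82, 2.46, 3.32]
def objective(p):
    return sum((math.exp(p[0] * k) - y) ** 2 for k, y in enumerate(ys))

best, err = pso(objective, [(0.0, 1.0)])
print(best, err)
```

In the paper's setting, `objective` would be the simulation error of the NGBM(1,1) model as a function of its power parameter.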
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jorgensen, S.
Testing the behavior of metals in extreme environments is not always feasible, so material scientists use models to try to predict the behavior. To achieve accurate results it is necessary to use the appropriate model and material-specific parameters. This research evaluated the performance of six material models available in the MIDAS database [1] to determine at which temperatures and strain rates they perform best, and to determine to which experimental data their parameters were optimized. Additionally, parameters were optimized for the Johnson-Cook model using experimental data from Lassila et al. [2].
Optimizing Tsunami Forecast Model Accuracy
NASA Astrophysics Data System (ADS)
Whitmore, P.; Nyland, D. L.; Huang, P. Y.
2015-12-01
Recent tsunamis provide a means to determine the accuracy that can be expected of real-time tsunami forecast models. Forecast accuracy using two different tsunami forecast models are compared for seven events since 2006 based on both real-time application and optimized, after-the-fact "forecasts". Lessons learned by comparing the forecast accuracy determined during an event to modified applications of the models after-the-fact provide improved methods for real-time forecasting for future events. Variables such as source definition, data assimilation, and model scaling factors are examined to optimize forecast accuracy. Forecast accuracy is also compared for direct forward modeling based on earthquake source parameters versus accuracy obtained by assimilating sea level data into the forecast model. Results show that including assimilated sea level data into the models increases accuracy by approximately 15% for the events examined.
A two-stage stochastic rule-based model to determine pre-assembly buffer content
NASA Astrophysics Data System (ADS)
Gunay, Elif Elcin; Kula, Ufuk
2018-01-01
This study considers the instant decision-making needs of automobile manufacturers for resequencing vehicles before final assembly (FA). We propose a rule-based two-stage stochastic model to determine the number of spare vehicles that should be kept in the pre-assembly buffer to restore the sequence altered by paint defects and upstream department constraints. The first stage of the model decides the spare vehicle quantities, while the second stage recovers the scrambled sequence with respect to pre-defined rules. The problem is solved by the sample average approximation (SAA) algorithm. We conduct a numerical study to compare the solutions of the heuristic model with optimal ones and provide the following insights: (i) as the mismatch between the paint entrance and scheduled sequences decreases, the rule-based heuristic model recovers the scrambled sequence as well as the optimal resequencing model, (ii) the rule-based model is more sensitive to the mismatch between the paint entrance and scheduled sequences when recovering the scrambled sequence, (iii) as the defect rate increases, the difference in recovery effectiveness between the rule-based heuristic and optimal solutions increases, (iv) as buffer capacity increases, the recovery effectiveness of the optimization model outperforms the heuristic model, and (v) as expected, the rule-based model holds more inventory than the optimization model.
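The sample average approximation idea used in this abstract, replacing an expectation over random defects with an average over sampled scenarios, can be sketched on a simplified buffer-sizing problem. The newsvendor-like cost structure, defect rate and batch size below are illustrative assumptions, not the paper's model.

```python
import random

random.seed(42)
BATCH, DEFECT_P = 50, 0.08          # vehicles per batch, paint-defect probability
HOLD_COST, SHORT_COST = 2.0, 15.0   # cost per spare held / per unrecovered defect
N_SCENARIOS = 2000

# Sample defect scenarios once: number of out-of-sequence vehicles per batch.
scenarios = [sum(random.random() < DEFECT_P for _ in range(BATCH))
             for _ in range(N_SCENARIOS)]

def saa_cost(n_spares):
    """First-stage holding cost plus sampled average of second-stage shortfall."""
    short = [max(d - n_spares, 0) for d in scenarios]
    return HOLD_COST * n_spares + SHORT_COST * sum(short) / N_SCENARIOS

# First-stage decision: number of spare vehicles in the pre-assembly buffer.
best_n = min(range(BATCH + 1), key=saa_cost)
print(best_n, round(saa_cost(best_n), 2))
```

In the paper, the second stage is a rule-based resequencing model rather than a simple shortfall count, but the SAA outer loop, optimizing a first-stage decision against averaged second-stage recourse costs, has this shape.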
The analytical representation of viscoelastic material properties using optimization techniques
NASA Technical Reports Server (NTRS)
Hill, S. A.
1993-01-01
This report presents a technique to model viscoelastic material properties with a function of the form of the Prony series. Generally, the method employed to determine the function constants requires assuming values for the exponential constants of the function and then resolving the remaining constants through linear least-squares techniques. The technique presented here allows all the constants to be analytically determined through optimization techniques. This technique is employed in a computer program named PRONY and makes use of a commercially available optimization tool developed by VMA Engineering, Inc. The PRONY program was utilized to compare the technique against previously determined models for solid rocket motor TP-H1148 propellant and V747-75 Viton fluoroelastomer. In both cases, the optimization technique generated functions that modeled the test data with at least an order of magnitude better correlation. This technique has demonstrated the capability to use small or large data sets and to use data sets that have uniformly or nonuniformly spaced data pairs. The reduction of experimental data to accurate mathematical models is a vital part of most scientific and engineering research. This technique of regression through optimization can be applied to other mathematical models that are difficult to fit to experimental data through traditional regression techniques.
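For context, the classical step the report improves on looks like this: with the exponential time constants tau_i fixed, the Prony-series coefficients follow from linear least squares (here via the normal equations). The report's contribution is to let an optimizer determine the tau_i as well; that outer loop is omitted, and the synthetic data below are invented for illustration.

```python
import math

taus = [0.5, 5.0]                       # assumed (fixed) relaxation times
ts = [0.1 * i for i in range(1, 60)]
# Synthetic "relaxation modulus": G(t) = 1 + 2*exp(-t/0.5) + 3*exp(-t/5)
data = [1.0 + 2.0 * math.exp(-t / 0.5) + 3.0 * math.exp(-t / 5.0) for t in ts]

def basis(t):
    """Prony basis: constant term plus one exponential per relaxation time."""
    return [1.0] + [math.exp(-t / tau) for tau in taus]

n = len(taus) + 1
# Normal equations  (A^T A) c = A^T y.
ATA = [[sum(basis(t)[i] * basis(t)[j] for t in ts) for j in range(n)] for i in range(n)]
ATy = [sum(basis(t)[i] * y for t, y in zip(ts, data)) for i in range(n)]

def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting for a small dense system."""
    A = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(n):
            if r != col:
                f = A[r][col] / A[col][col]
                A[r] = [a - f * p for a, p in zip(A[r], A[col])]
    return [A[i][n] / A[i][i] for i in range(n)]

coeffs = solve(ATA, ATy)                # [G_inf, g1, g2]
print([round(c, 4) for c in coeffs])
```

Because the synthetic data were generated with the same taus, the linear solve recovers the coefficients essentially exactly; with measured data and unknown taus, the outer nonlinear optimization described in the report takes over.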
Optimal Path Determination for Flying Vehicle to Search an Object
NASA Astrophysics Data System (ADS)
Heru Tjahjana, R.; Heri Soelistyo U, R.; Ratnasari, L.; Irawanto, B.
2018-01-01
In this paper, a method to determine the optimal path for a flying vehicle searching for an object is proposed. The background of the paper is the control of an air vehicle searching for an object. Optimal path determination is one of the most popular problems in optimization. This paper describes a control design model for a flying vehicle searching for an object and focuses on the optimal path used in the search. An optimal control model is used to make the vehicle move along an optimal path; if the vehicle moves along an optimal path, then the path to reach the searched object is also optimal. The cost functional is one of the most important elements of optimal control design; in this paper the cost functional makes the air vehicle reach the object as quickly as possible. The axis reference of the flying vehicle uses the N-E-D (North-East-Down) coordinate system. The results of this paper are theorems, proved analytically, stating that the cost functional makes the control optimal and makes the vehicle move along an optimal path. The paper also shows that the cost functional used is convex; this convexity guarantees the existence of an optimal control. Some simulations are presented to show an optimal path for a flying vehicle searching for an object. The optimization method used to find the optimal control and the optimal vehicle path is the Pontryagin Minimum Principle.
Optimal Geoid Modelling to determine the Mean Ocean Circulation - Project Overview and early Results
NASA Astrophysics Data System (ADS)
Fecher, Thomas; Knudsen, Per; Bettadpur, Srinivas; Gruber, Thomas; Maximenko, Nikolai; Pie, Nadege; Siegismund, Frank; Stammer, Detlef
2017-04-01
The ESA project GOCE-OGMOC (Optimal Geoid Modelling based on GOCE and GRACE third-party mission data and merging with altimetric sea surface data to optimally determine Ocean Circulation) examines the influence of the satellite missions GRACE and in particular GOCE in ocean modelling applications. The project goal is an improved processing of satellite and ground data for the preparation and combination of gravity and altimetry data on the way to an optimal MDT solution. Explicitly, the two main objectives are (i) to enhance the GRACE error modelling and optimally combine GOCE and GRACE [and optionally terrestrial/altimetric data] and (ii) to integrate the optimal Earth gravity field model with MSS and drifter information to derive a state-of-the-art MDT including an error assessment. The main work packages referring to (i) are the characterization of geoid model errors, the identification of GRACE error sources, the revision of GRACE error models, the optimization of weighting schemes for the participating data sets and finally the estimation of an optimally combined gravity field model. In this context, the leakage of terrestrial data into coastal regions shall also be investigated, as leakage is not only a problem for the gravity field model itself, but is also mirrored in a derived MDT solution. Related to (ii), the tasks are the revision of MSS error covariances, the assessment of the mean circulation using drifter data sets and the computation of an optimal geodetic MDT as well as a so-called state-of-the-art MDT, which combines the geodetic MDT with drifter mean circulation data. This paper presents an overview of the project results with a focus on the geodetic results.
Wang, Yongjiang; Pang, Li; Liu, Xinyu; Wang, Yuansheng; Zhou, Kexun; Luo, Fei
2016-04-01
A comprehensive model of thermal balance and degradation kinetics was developed to determine the optimal reactor volume and insulation material. Biological heat production and five channels of heat loss were considered in the thermal balance model for a representative reactor. Degradation kinetics was developed to make the model applicable to different types of substrates. Simulation of the model showed that the internal energy accumulation of compost was the most significant heat loss channel, followed by heat loss through the reactor wall and the latent heat of water evaporation. A lower proportion of heat loss occurred through the reactor wall when the reactor volume was larger. Insulating materials with low densities and low conductive coefficients were more desirable for building small reactor systems. The model developed could be used to determine the optimal reactor volume and insulation material needed before the fabrication of a lab-scale composting system. Copyright © 2016 Elsevier Ltd. All rights reserved.
Multi-objective trajectory optimization for the space exploration vehicle
NASA Astrophysics Data System (ADS)
Qin, Xiaoli; Xiao, Zhen
2016-07-01
The research determines a temperature-constrained optimal trajectory for the space exploration vehicle by developing an optimal control formulation and solving it using a variable-order quadrature collocation method with a Non-linear Programming (NLP) solver. The vehicle is assumed to be a space reconnaissance aircraft that has specified takeoff/landing locations, specified no-fly zones, and specified targets for sensor data collection. A three-degree-of-freedom aircraft model is adapted from previous work and includes flight dynamics and thermal constraints. Vehicle control is accomplished by controlling angle of attack, roll angle, and propellant mass flow rate. This model is incorporated into an optimal control formulation that includes constraints on both the vehicle and mission parameters, such as avoidance of no-fly zones and exploration of space targets. In addition, the vehicle models include the environmental models (gravity and atmosphere). How these models are appropriately employed is key to gaining confidence in the results and conclusions of the research. Optimal trajectories are developed using several performance costs in the optimal control formulation: minimum time, minimum time with control penalties, and maximum distance. The resulting analysis demonstrates that optimal trajectories that meet specified mission parameters and constraints can be quickly determined and used for large-scale space exploration.
Process modeling for carbon-phenolic nozzle materials
NASA Technical Reports Server (NTRS)
Letson, Mischell A.; Bunker, Robert C.; Remus, Walter M., III; Clinton, R. G.
1989-01-01
A thermochemical model based on the SINDA heat transfer program is developed for carbon-phenolic nozzle material processes. The model can be used to optimize cure cycles and to predict material properties based on the types of materials and the process by which these materials are used to make nozzle components. Chemical kinetic constants for Fiberite MX4926 were determined so that optimization of cure cycles for the current Space Shuttle Solid Rocket Motor nozzle rings can be determined.
Use of multilevel modeling for determining optimal parameters of heat supply systems
NASA Astrophysics Data System (ADS)
Stennikov, V. A.; Barakhtenko, E. A.; Sokolov, D. V.
2017-07-01
The problem of finding optimal parameters of a heat-supply system (HSS) is in ensuring the required throughput capacity of a heat network by determining pipeline diameters and characteristics and location of pumping stations. Effective methods for solving this problem, i.e., the method of stepwise optimization based on the concept of dynamic programming and the method of multicircuit optimization, were proposed in the context of the hydraulic circuit theory developed at Melentiev Energy Systems Institute (Siberian Branch, Russian Academy of Sciences). These methods enable us to determine optimal parameters of various types of piping systems due to flexible adaptability of the calculation procedure to intricate nonlinear mathematical models describing features of used equipment items and methods of their construction and operation. The new and most significant results achieved in developing methodological support and software for finding optimal parameters of complex heat supply systems are presented: a new procedure for solving the problem based on multilevel decomposition of a heat network model that makes it possible to proceed from the initial problem to a set of interrelated, less cumbersome subproblems with reduced dimensionality; a new algorithm implementing the method of multicircuit optimization and focused on the calculation of a hierarchical model of a heat supply system; the SOSNA software system for determining optimum parameters of intricate heat-supply systems and implementing the developed methodological foundation. The proposed procedure and algorithm enable us to solve engineering problems of finding the optimal parameters of multicircuit heat supply systems having large (real) dimensionality, and are applied in solving urgent problems related to the optimal development and reconstruction of these systems. 
The developed methodological foundation and software can be used for designing heat supply systems in the Central and the Admiralty regions in St. Petersburg, the city of Bratsk, and the Magistral'nyi settlement.
Reactive flow model development for PBXW-126 using modern nonlinear optimization methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Murphy, M.J.; Simpson, R.L.; Urtiew, P.A.
1995-08-01
The initiation and detonation behavior of PBXW-126 has been characterized and is described. PBXW-126 is a composite explosive consisting of approximately equal amounts of RDX, AP, AL, and NTO with a polyurethane binder. The three term ignition and growth of reaction model parameters (ignition + two growth terms) have been found using nonlinear optimization methods to determine the "best" set of model parameters. The ignition term treats the initiation of up to 0.5% of the RDX. The first growth term in the model treats the RDX growth of reaction up to 20% reacted. The second growth term treats the subsequent growth of reaction of the remaining AP/AL/NTO. The unreacted equation of state (EOS) was determined from the wave profiles of embedded gauge tests while the JWL product EOS was determined from cylinder expansion test results. The nonlinear optimization code, NLQPEB/GLO, was used to determine the "best" set of coefficients for the three term Lee-Tarver ignition and growth of reaction model.
Optimal segmentation and packaging process
Kostelnik, Kevin M.; Meservey, Richard H.; Landon, Mark D.
1999-01-01
A process for improving packaging efficiency uses three-dimensional, computer-simulated models with various optimization algorithms to determine the optimal segmentation process and packaging configurations based on constraints including container limitations. The present invention is applied to a process for decontaminating, decommissioning (D&D), and remediating a nuclear facility involving the segmentation and packaging of contaminated items in waste containers in order to minimize the number of cuts, maximize packaging density, and reduce worker radiation exposure. A three-dimensional, computer-simulated facility model of the contaminated items is created. The contaminated items are differentiated. The optimal location, orientation and sequence of the segmentation and packaging of the contaminated items is determined using the simulated model, the algorithms, and various constraints including container limitations. The cut locations and orientations are transposed to the simulated model. The contaminated items are actually segmented and packaged. The segmentation and packaging may be simulated beforehand. In addition, the contaminated items may be cataloged and recorded.
Klamt, Steffen; Müller, Stefan; Regensburger, Georg; Zanghellini, Jürgen
2018-05-01
The optimization of metabolic rates (as linear objective functions) represents the methodical core of flux-balance analysis techniques which have become a standard tool for the study of genome-scale metabolic models. Besides (growth and synthesis) rates, metabolic yields are key parameters for the characterization of biochemical transformation processes, especially in the context of biotechnological applications. However, yields are ratios of rates, and hence the optimization of yields (as nonlinear objective functions) under arbitrary linear constraints is not possible with current flux-balance analysis techniques. Despite the fundamental importance of yields in constraint-based modeling, a comprehensive mathematical framework for yield optimization is still missing. We present a mathematical theory that allows one to systematically compute and analyze yield-optimal solutions of metabolic models under arbitrary linear constraints. In particular, we formulate yield optimization as a linear-fractional program. For practical computations, we transform the linear-fractional yield optimization problem to a (higher-dimensional) linear problem. Its solutions determine the solutions of the original problem and can be used to predict yield-optimal flux distributions in genome-scale metabolic models. For the theoretical analysis, we consider the linear-fractional problem directly. Most importantly, we show that the yield-optimal solution set (like the rate-optimal solution set) is determined by (yield-optimal) elementary flux vectors of the underlying metabolic model. However, yield- and rate-optimal solutions may differ from each other, and hence optimal (biomass or product) yields are not necessarily obtained at solutions with optimal (growth or synthesis) rates. Moreover, we discuss phase planes/production envelopes and yield spaces, in particular, we prove that yield spaces are convex and provide algorithms for their computation. 
We illustrate our findings by a small example and demonstrate their relevance for metabolic engineering with realistic models of E. coli. We develop a comprehensive mathematical framework for yield optimization in metabolic models. Our theory is particularly useful for the study and rational modification of cell factories designed under given yield and/or rate requirements. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
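The paper's central observation, that rate-optimal and yield-optimal flux distributions can differ, is easy to see on a toy network. The two-pathway "model" and all bounds below are invented for illustration, and brute-force enumeration stands in for the linear-fractional programming machinery the paper develops.

```python
# Pathway 1: 2 substrate -> 1 product (fast, capacity v1 <= 5)
# Pathway 2: 1 substrate -> 1 product (efficient, capacity v2 <= 2)
# Shared substrate uptake limit: 2*v1 + v2 <= 10
# Production rate = v1 + v2;  yield = (v1 + v2) / (2*v1 + v2)

def feasible():
    """Enumerate feasible flux pairs on a coarse grid (excluding zero uptake)."""
    for i in range(11):              # v1 in 0 .. 5, step 0.5
        for j in range(5):           # v2 in 0 .. 2, step 0.5
            v1, v2 = 0.5 * i, 0.5 * j
            if 2 * v1 + v2 <= 10 and (v1 + v2) > 0:
                yield v1, v2

rate_opt = max(feasible(), key=lambda v: v[0] + v[1])
yield_opt = max(feasible(), key=lambda v: (v[0] + v[1]) / (2 * v[0] + v[1]))
print("rate-optimal fluxes:", rate_opt)     # fills the fast, wasteful pathway
print("yield-optimal fluxes:", yield_opt)   # uses only the efficient pathway
```

Here the rate-optimal solution saturates the substrate uptake with the wasteful pathway (rate 6, yield 0.6), while the yield-optimal solution uses only the efficient pathway (yield 1.0 at a much lower rate), mirroring the paper's point that optimal yields are not necessarily obtained at solutions with optimal rates.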
A review of distributed parameter groundwater management modeling methods
Gorelick, Steven M.
1983-01-01
Models which solve the governing groundwater flow or solute transport equations in conjunction with optimization techniques, such as linear and quadratic programing, are powerful aquifer management tools. Groundwater management models fall in two general categories: hydraulics or policy evaluation and water allocation. Groundwater hydraulic management models enable the determination of optimal locations and pumping rates of numerous wells under a variety of restrictions placed upon local drawdown, hydraulic gradients, and water production targets. Groundwater policy evaluation and allocation models can be used to study the influence upon regional groundwater use of institutional policies such as taxes and quotas. Furthermore, fairly complex groundwater-surface water allocation problems can be handled using system decomposition and multilevel optimization. Experience from the few real world applications of groundwater optimization-management techniques is summarized. Classified separately are methods for groundwater quality management aimed at optimal waste disposal in the subsurface. This classification is composed of steady state and transient management models that determine disposal patterns in such a way that water quality is protected at supply locations. Classes of research missing from the literature are groundwater quality management models involving nonlinear constraints, models which join groundwater hydraulic and quality simulations with political-economic management considerations, and management models that include parameter uncertainty.
Conceptual design and multidisciplinary optimization of in-plane morphing wing structures
NASA Astrophysics Data System (ADS)
Inoyama, Daisaku; Sanders, Brian P.; Joo, James J.
2006-03-01
In this paper, the topology optimization methodology for the synthesis of a distributed actuation system with specific applications to the morphing air vehicle is discussed. The main emphasis is placed on the topology optimization problem formulations and the development of computational modeling concepts. For demonstration purposes, the in-plane morphing wing model is presented. The analysis model is developed to meet several important criteria: it must allow large rigid-body displacements, as well as variation in planform area, with minimum strain on structural members while retaining acceptable numerical stability for finite element analysis. Preliminary work has indicated that the addressed modeling concept meets these criteria and may be suitable for the purpose. Topology optimization is performed on the ground structure based on this modeling concept, with design variables that control the system configuration. In other words, the state of each element in the model is a design variable, to be determined through the optimization process. In effect, the optimization process assigns morphing members as 'soft' elements, non-morphing load-bearing members as 'stiff' elements, and non-existent members as 'voids.' In addition, the optimization process determines the location and relative force intensities of distributed actuators, represented computationally as equal and opposite nodal forces with soft axial stiffness. Several different optimization problem formulations are investigated to understand their potential benefits in solution quality, as well as the meaningfulness of the formulation itself. Sample in-plane morphing problems are solved to demonstrate the potential capability of the methodology introduced in this paper.
Operations research investigations of satellite power stations
NASA Technical Reports Server (NTRS)
Cole, J. W.; Ballard, J. L.
1976-01-01
A systems model reflecting the design concepts of Satellite Power Stations (SPS) was developed. The model is of sufficient scope to include the interrelationships of the following major design parameters: the transportation to and between orbits; assembly of the SPS; and maintenance of the SPS. The systems model is composed of a set of equations that are nonlinear with respect to the system parameters and decision variables. The model determines a figure of merit from which alternative concepts concerning transportation, assembly, and maintenance of satellite power stations are studied. A hybrid optimization model was developed to optimize the system's decision variables. The optimization model consists of a random search procedure and the optimal-steepest descent method. A FORTRAN computer program was developed to enable the user to optimize nonlinear functions using the model. Specifically, the computer program was used to optimize Satellite Power Station system components.
NASA Astrophysics Data System (ADS)
Sutrisno; Widowati; Sunarsih; Kartono
2018-01-01
In this paper, a mathematical model in quadratic programming with fuzzy parameters is proposed to determine the optimal strategy for an integrated inventory control and supplier selection problem with fuzzy demand. To solve the corresponding optimization problem, we use expected-value-based fuzzy programming. Numerical examples are performed to evaluate the model. From the results, the optimal amount of each product to be purchased from each supplier in each time period and the optimal amount of each product to be stored in the inventory in each time period were determined with minimum total cost, and the inventory level was sufficiently close to the reference level.
Optimization of vehicle deceleration to reduce occupant injury risks in frontal impact.
Mizuno, Koji; Itakura, Takuya; Hirabayashi, Satoko; Tanaka, Eiichi; Ito, Daisuke
2014-01-01
In vehicle frontal impacts, vehicle acceleration has a large effect on occupant loadings and injury risks. In this research, an optimal vehicle crash pulse was determined systematically to reduce injury measures of rear seat occupants by using mathematical simulations. The vehicle crash pulse was optimized based on a vehicle deceleration-deformation diagram under the conditions that the initial velocity and the maximum vehicle deformation were constant. Initially, a spring-mass model was used to understand the fundamental parameters for optimization. In order to investigate the optimization under a more realistic situation, the vehicle crash pulse was also optimized using a multibody model of a Hybrid III dummy seated in the rear seat for the objective functions of chest acceleration and chest deflection. A sled test using a Hybrid III dummy was carried out to confirm the simulation results. Finally, the optimal crash pulses determined from the multibody simulation were applied to a human finite element (FE) model. The optimized crash pulse to minimize the occupant deceleration had a concave shape: a high deceleration in the initial phase, low in the middle phase, and high again in the final phase. This crash pulse shape depended on the occupant restraint stiffness. The optimized crash pulse determined from the multibody simulation was comparable to that from the spring-mass model. From the sled test, it was demonstrated that the optimized crash pulse was effective for the reduction of chest acceleration. The crash pulse was also optimized for the objective function of chest deflection. The optimized crash pulse in the final phase was lower than that obtained for the minimization of chest acceleration. In the FE analysis of the human FE model, the optimized pulse for the objective function of the Hybrid III chest deflection was effective in reducing rib fracture risks. 
The optimized crash pulse has a concave shape and is dependent on the occupant restraint stiffness and maximum vehicle deformation. The shapes of the optimized crash pulse in the final phase were different for the objective functions of chest acceleration and chest deflection due to the inertial forces of the head and upper extremities. From the human FE model analysis it was found that the optimized crash pulse for the Hybrid III chest deflection can substantially reduce the risk of rib cage fractures. Supplemental materials are available for this article. Go to the publisher's online edition of Traffic Injury Prevention to view the supplemental file.
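The spring-mass idealization used as the study's first step can be sketched as below. The occupant mass, restraint stiffness, and pulse magnitudes are illustrative assumptions, not the paper's values; both pulses remove the same 24 m/s over the same 120 ms window, and in this toy setup the concave high-low-high shape loads the occupant less than a constant pulse, echoing the paper's finding.

```python
import numpy as np

def peak_occupant_decel(pulse, m=75.0, k=50e3, dt=1e-5, t_end=0.12):
    """Peak occupant deceleration (m/s^2) over the crash window.

    In the decelerating vehicle's frame, the occupant's excursion s into
    the restraint spring obeys  s'' = a_v(t) - (k/m) s,  and the
    occupant's absolute deceleration is (k/m) s.  pulse(t) returns the
    vehicle deceleration a_v(t).  Semi-implicit Euler integration.
    """
    w2 = k / m
    s = ds = peak = 0.0
    for t in np.arange(0.0, t_end, dt):
        ds += (pulse(t) - w2 * s) * dt
        s += ds * dt
        peak = max(peak, w2 * s)
    return peak

# Two pulses with identical velocity change (24 m/s over 120 ms):
square = lambda t: 200.0                                   # constant pulse
concave = lambda t: 320.0 if t < 0.02 or t >= 0.08 else 80.0

p_square = peak_occupant_decel(square)    # near 2x dynamic amplification
p_concave = peak_occupant_decel(concave)  # lower peak within the window
```

The undamped step response amplifies the square pulse by nearly a factor of two, while the concave pulse's soft middle phase limits how far the occupant is driven into the restraint before the final ride-down.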
NASA Astrophysics Data System (ADS)
Paasche, H.; Tronicke, J.
2012-04-01
In many near surface geophysical applications, multiple tomographic data sets are routinely acquired to explore subsurface structures and parameters. Linking the model generation process of multi-method geophysical data sets can significantly reduce ambiguities in geophysical data analysis and model interpretation. Most geophysical inversion approaches rely on local search optimization methods used to find an optimal model in the vicinity of a user-given starting model. The final solution may critically depend on the initial model. Alternatively, global optimization (GO) methods have been used to invert geophysical data. They explore the solution space in more detail and determine the optimal model independently of the starting model. Additionally, they can be used to find sets of optimal models, allowing a further analysis of model parameter uncertainties. Here we employ particle swarm optimization (PSO) to realize the global optimization of tomographic data. PSO is an emergent method based on swarm intelligence, characterized by fast and robust convergence towards optimal solutions. The fundamental principle of PSO is inspired by nature, since the algorithm mimics the behavior of a flock of birds searching for food. In PSO, a number of particles cruise a multi-dimensional solution space striving to find optimal model solutions explaining the acquired data. The particles communicate their positions and success and direct their movement according to the position of the currently most successful particle of the swarm. The success of a particle, i.e. the quality of the model currently found by a particle, must be uniquely quantifiable to identify the swarm leader. When jointly inverting disparate data sets, the optimization solution has to satisfy multiple optimization objectives, at least one for each data set. Unique determination of the most successful particle currently leading the swarm is then not possible.
Instead, only statements about the Pareto optimality of the found solutions can be made. Identification of the leading particle traditionally requires a costly combination of ranking and niching techniques. In our approach, we use a decision rule under uncertainty to identify the currently leading particle of the swarm. In doing so, we consider the different objectives of our optimization problem as competing agents with partially conflicting interests. Analysis of the maximin fitness function allows for robust and cheap identification of the currently leading particle. The final optimization result comprises a set of possible models spread along the Pareto front. For convex Pareto fronts, solution density is expected to be maximal in the region ideally compromising all objectives, i.e. the region of highest curvature.
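The maximin fitness rule described above is cheap to state and compute. A minimal sketch for minimization objectives (the three-particle data are made up for illustration): a particle's maximin value is negative exactly when no other particle dominates it, and the smallest value marks the swarm leader.

```python
import numpy as np

def maximin_fitness(F):
    """Maximin fitness of each particle.

    F: (n_particles, n_objectives) array of objective values (minimization).
    For particle i:  fit_i = max_{j != i} min_k (F[i,k] - F[j,k]).
    fit_i < 0 iff particle i is non-dominated; the particle with the
    smallest value is taken as the swarm leader.  Illustrative sketch only.
    """
    n = len(F)
    fit = np.empty(n)
    for i in range(n):
        diffs = F[i] - np.delete(F, i, axis=0)   # (n-1, n_objectives)
        fit[i] = diffs.min(axis=1).max()
    return fit

# Three particles, two objectives: the first two are Pareto-optimal,
# the third is dominated by the first.
F = np.array([[1.0, 4.0], [3.0, 2.0], [2.0, 5.0]])
fit = maximin_fitness(F)
leader = int(np.argmin(fit))
```

This replaces the costly ranking-and-niching step with one pass of pairwise differences, which is the robustness/cheapness trade-off the abstract highlights.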
Design of Experiments for the Thermal Characterization of Metallic Foam
NASA Technical Reports Server (NTRS)
Crittenden, Paul E.; Cole, Kevin D.
2003-01-01
Metallic foams are being investigated for possible use in the thermal protection systems of reusable launch vehicles. As a result, the performance of these materials needs to be characterized over a wide range of temperatures and pressures. In this paper a radiation/conduction model is presented for heat transfer in metallic foams. Candidates for the optimal transient experiment to determine the intrinsic properties of the model are found by two methods. First, an optimality criterion is used to find an experiment to find all of the parameters using one heating event. Second, a pair of heating events is used to determine the parameters in which one heating event is optimal for finding the parameters related to conduction, while the other heating event is optimal for finding the parameters associated with radiation. Simulated data containing random noise was analyzed to determine the parameters using both methods. In all cases the parameter estimates could be improved by analyzing a larger data record than suggested by the optimality criterion.
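A common optimality criterion for ranking candidate heating/sampling schedules, assumed here for illustration, is D-optimality: maximize det(JᵀJ) for the parameter-sensitivity matrix J. The two-parameter response below (a conduction-like linear term and a radiation-like saturating term) is a made-up stand-in for the paper's foam model.

```python
import numpy as np

def d_criterion(t, sens):
    """D-optimality: det(J^T J), where the columns of J are the
    sensitivities dT/dp_k evaluated at the sample times t (sketch)."""
    J = np.column_stack([s(t) for s in sens])
    return np.linalg.det(J.T @ J)

# Toy two-parameter response: conduction-like and radiation-like terms.
sens = [lambda t: t, lambda t: 1.0 - np.exp(-t)]
schedules = {"short": np.linspace(0.1, 1.0, 10),
             "long":  np.linspace(0.1, 5.0, 10)}
best = max(schedules, key=lambda k: d_criterion(schedules[k], sens))
```

The longer record separates the two sensitivity shapes far better, which mirrors the paper's observation that analyzing a larger data record than the criterion suggests improves the parameter estimates.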
Modeling and Error Analysis of a Superconducting Gravity Gradiometer.
1979-08-01
fundamental limit to instrument sensitivity is the thermal noise of the sensor. For the gradiometer design outlined above, the best sensitivity... Mapoles at Stanford. Chapter IV determines the relation between dynamic range, the sensor Q, and the thermal noise of the cryogenic accelerometer. An... C.1 Accelerometer Optimization: (1) development and optimization of the loaded diaphragm sensor; (2) determination of the optimal values of the
NASA Astrophysics Data System (ADS)
Klesh, Andrew T.
This dissertation studies optimal exploration, defined as the collection of information about given objects of interest by a mobile agent (the explorer) using imperfect sensors. The key aspects of exploration are kinematics (which determine how the explorer moves in response to steering commands), energetics (which determine how much energy is consumed by motion and maneuvers), informatics (which determine the rate at which information is collected) and estimation (which determines the states of the objects). These aspects are coupled by the steering decisions of the explorer. We seek to improve exploration by finding trade-offs amongst these couplings and the components of exploration: the Mission, the Path and the Agent. A comprehensive model of exploration is presented that, on one hand, accounts for these couplings and, on the other hand, is simple enough to allow analysis. This model is utilized to pose and solve several exploration problems in which an objective function is to be minimized. Specific functions considered are the mission duration and the total energy. These exploration problems are formulated as optimal control problems, and necessary conditions for optimality are obtained in the form of two-point boundary value problems. An analysis of these problems reveals characteristics of optimal exploration paths. Several regimes are identified for the optimal paths, including the Watchtower, Solar and Drag regimes, and several non-dimensional parameters are derived that determine the appropriate regime of travel. The so-called Power Ratio is shown to predict the qualitative features of the optimal paths, provide a metric to evaluate an aircraft's design and determine an aircraft's capability for flying perpetually. Optimal exploration system drivers are identified that provide perspective on the importance of these various regimes of flight.
A bank-to-turn solar-powered aircraft flying at constant altitude on Mars is used as a specific platform for analysis using the coupled model. Flight-paths found with this platform are presented that display the optimal exploration problem characteristics. These characteristics are used to form heuristics, such as a Generalized Traveling Salesman Problem solver, to simplify the exploration problem. These heuristics are used to empirically show the successful completion of an exploration mission by a physical explorer.
Crash pulse optimization for occupant protection at various impact velocities.
Ito, Daisuke; Yokoi, Yusuke; Mizuno, Koji
2015-01-01
Vehicle deceleration has a large influence on occupant kinematic behavior and injury risks in crashes, and the optimization of the vehicle crash pulse that mitigates occupant loadings has been the subject of substantial research. These optimization efforts have focused only on high-velocity impacts in regulatory or new car assessment programs, though vehicle collisions occur over a wide range of velocities. In this study, the vehicle crash pulse was optimized for various velocities with a genetic algorithm. Vehicle deceleration was optimized in a full-frontal rigid barrier crash with a simple spring-mass model that represents the vehicle-occupant interaction and a Hybrid III 50th percentile male multibody model. To examine whether the vehicle crash pulse optimized at a high impact velocity is useful for reducing occupant loading at all impact velocities below the optimized velocity, the occupant deceleration was calculated at various velocities for the optimized crash pulse determined at high speed. The optimized vehicle deceleration-deformation characteristics that are effective for various velocities were investigated with 2 approaches. The optimized vehicle crash pulse at a single impact velocity consists of a high initial impulse followed by zero deceleration and then constant deceleration in the final stage. The vehicle deceleration optimized with the Hybrid III model was comparable to that determined from the spring-mass model. The optimized vehicle deceleration-deformation characteristics determined at high speed did not necessarily lead to an occupant deceleration reduction at a lower velocity. The maximum occupant deceleration at each velocity was normalized by the maximum deceleration determined in the single impact velocity optimization. The resulting vehicle deceleration-deformation characteristic was a square crash pulse.
The objective function was defined as the number of injuries, which was the product of the number of collisions at the velocity and the probability of occupant injury. The optimized vehicle deceleration consisted of a high deceleration in the initial phase, a small deceleration in the middle phase, and then a high deceleration in the final phase. The optimized vehicle crash pulse at a single impact velocity is effective for reducing occupant deceleration in a crash at the specific impact velocity. However, the crash pulse does not necessarily lead to occupant deceleration reduction at a lower velocity. The optimized vehicle deceleration-deformation characteristics, which are effective for all impact velocities, depend on the weighting of the occupant injury measures at each impact velocity.
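The paper's multi-velocity objective, the expected number of injuries, is a weighted sum over impact velocities. It can be sketched with a hypothetical crash-frequency distribution, a hypothetical linear loading map, and a logistic injury-risk curve; none of these are the study's data.

```python
import numpy as np

def expected_injuries(delta_vs, crash_counts, peak_decel, injury_risk):
    """Number of injuries = sum over impact velocities of
    (collision count) x (probability of occupant injury).

    peak_decel maps an impact velocity to the occupant's peak deceleration
    under the candidate crash pulse; injury_risk maps that loading to a
    probability.  Both mappings below are hypothetical placeholders.
    """
    return sum(n * injury_risk(peak_decel(dv))
               for dv, n in zip(delta_vs, crash_counts))

peak = lambda dv: 12.0 * dv                             # m/s -> m/s^2 (assumed)
risk = lambda a: 1.0 / (1.0 + np.exp(-(a - 300.0) / 40.0))  # logistic curve

delta_vs = [8.0, 12.0, 16.0]   # impact velocities (m/s)
counts = [100, 40, 10]         # assumed collision counts at each velocity
obj = expected_injuries(delta_vs, counts, peak, risk)
```

Because the counts weight low-velocity crashes heavily, a pulse tuned only for the highest velocity can score worse on this objective, which is the trade-off the abstract describes.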
NASA Astrophysics Data System (ADS)
Sutrisno; Widowati; Solikhin
2016-06-01
In this paper, we propose a mathematical model in stochastic dynamic optimization form to determine the optimal strategy for an integrated single product inventory control problem and supplier selection problem where the demand and purchasing cost parameters are random. For each time period, by using the proposed model, we decide the optimal supplier and calculate the optimal product volume purchased from the optimal supplier so that the inventory level will be located at some point as close as possible to the reference point with minimal cost. We use stochastic dynamic programming to solve this problem and give several numerical experiments to evaluate the model. From the results, for each time period, the proposed model generated the optimal supplier, and the inventory level tracked the reference point well.
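The backward-recursion structure of such a stochastic dynamic program can be sketched with discrete inventory states. The quadratic tracking penalty toward a reference level mirrors the paper's objective, while the costs, demand distribution, and supplier names below are invented for illustration.

```python
def solve_inventory(T, demand_pmf, suppliers, ref, h, max_inv, max_order):
    """Backward DP for joint supplier selection and order sizing (sketch).

    demand_pmf: per-period dict {demand: probability}.
    suppliers:  dict {name: unit purchase cost}.
    Stage cost = purchase cost + h * (next_inventory - ref)^2;
    unmet demand is lost.  Returns the expected cost from (t=0, inv=0)
    and the first-period decision (supplier, quantity) for inv=0.
    """
    V = {s: 0.0 for s in range(max_inv + 1)}        # terminal values
    first_decision = None
    for t in reversed(range(T)):
        newV, pol = {}, {}
        for inv in range(max_inv + 1):
            best = None
            for name, c in suppliers.items():
                for q in range(min(max_order, max_inv - inv) + 1):
                    exp_cost = c * q
                    for d, p in demand_pmf[t].items():
                        nxt = max(inv + q - d, 0)
                        exp_cost += p * (h * (nxt - ref) ** 2 + V[nxt])
                    if best is None or exp_cost < best[0]:
                        best = (exp_cost, name, q)
            newV[inv] = best[0]
            pol[inv] = (best[1], best[2])
        V, first_decision = newV, pol[0]
    return V[0], first_decision

pmf = [{1: 0.5, 2: 0.5}, {1: 0.5, 2: 0.5}]          # two periods
cost0, (supplier, qty) = solve_inventory(
    T=2, demand_pmf=pmf, suppliers={"A": 2.0, "B": 3.0},
    ref=1, h=1.0, max_inv=4, max_order=3)
```

With identical suppliers except for price, the recursion selects the cheaper one and sizes the order to keep the post-demand inventory near the reference level.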
Sootblowing optimization for improved boiler performance
James, John Robert; McDermott, John; Piche, Stephen; Pickard, Fred; Parikh, Neel J.
2012-12-25
A sootblowing control system that uses predictive models to bridge the gap between sootblower operation and boiler performance goals. The system uses predictive modeling and heuristics (rules) associated with different zones in a boiler to determine an optimal sequence of sootblower operations and achieve boiler performance targets. The system performs the sootblower optimization while observing any operational constraints placed on the sootblowers.
Sootblowing optimization for improved boiler performance
James, John Robert; McDermott, John; Piche, Stephen; Pickard, Fred; Parikh, Neel J
2013-07-30
A sootblowing control system that uses predictive models to bridge the gap between sootblower operation and boiler performance goals. The system uses predictive modeling and heuristics (rules) associated with different zones in a boiler to determine an optimal sequence of sootblower operations and achieve boiler performance targets. The system performs the sootblower optimization while observing any operational constraints placed on the sootblowers.
Open pit mining profit maximization considering selling stage and waste rehabilitation cost
NASA Astrophysics Data System (ADS)
Muttaqin, B. I. A.; Rosyidi, C. N.
2017-11-01
In open pit mining activities, determination of the cut-off grade is crucial for the company, since the cut-off grade affects how much profit will be earned by the mining company. In this study, we developed a cut-off grade determination model for the open pit mining industry considering the cost of mining, waste removal (rehabilitation) cost, processing cost, fixed cost, and selling stage cost. The main goal of this study is to develop a model of cut-off grade determination that yields the maximum total profit. Secondly, this study also examines the sensitivity of the model to changes in the cost components. The optimization results show that the models can help mining company managers determine the optimal cut-off grade and also estimate how much profit can be earned by the mining company. To illustrate the application of the models, a numerical example and a set of sensitivity analyses are presented. From the results of the sensitivity analysis, we conclude that changes in the sales price greatly affect the optimal cut-off value and the total profit.
Gain optimization with non-linear controls
NASA Technical Reports Server (NTRS)
Slater, G. L.; Kandadai, R. D.
1984-01-01
An algorithm has been developed for the analysis and design of controls for non-linear systems. The technical approach is to use statistical linearization to model the non-linear dynamics of a system by a quasi-Gaussian model. A covariance analysis is performed to determine the behavior of the dynamical system and a quadratic cost function. Expressions for the cost function and its derivatives are determined so that numerical optimization techniques can be applied to determine optimal feedback laws. The primary application for this paper is centered on the design of controls for nominally linear systems where the controls are saturated or limited by fixed constraints. The analysis is general, however, and numerical computation requires only that the specific non-linearity be considered in the analysis.
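Statistical linearization replaces a memoryless nonlinearity by the gain that best matches its output for a Gaussian input. For the saturation nonlinearities mentioned above, a Monte Carlo estimate of that equivalent gain can be checked against the known closed form erf(L/(√2·σ)); this sketch is illustrative and is not the paper's algorithm.

```python
import math
import random

def sat(x, L=1.0):
    # Unit-slope saturation with limits at +/- L.
    return max(-L, min(L, x))

def equivalent_gain(sigma, L=1.0, n=200000, seed=0):
    """Statistical-linearization gain of a saturation for a zero-mean
    Gaussian input:  k = E[x * sat(x)] / E[x^2],  estimated by Monte
    Carlo.  The closed form is erf(L / (sqrt(2) * sigma))."""
    rng = random.Random(seed)
    num = den = 0.0
    for _ in range(n):
        x = rng.gauss(0.0, sigma)
        num += x * sat(x, L)
        den += x * x
    return num / den

k = equivalent_gain(sigma=1.0)   # closed form: erf(1/sqrt(2)) ~ 0.683
```

The equivalent gain falls as the input standard deviation grows past the limit L, which is exactly how the covariance analysis sees the saturation soften the loop.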
Method for determining gene knockouts
Maranas, Costas D [Port Matilda, PA; Burgard, Anthony R [State College, PA; Pharkya, Priti [State College, PA
2011-09-27
A method for determining candidates for gene deletions and additions using a model of a metabolic network associated with an organism, the model includes a plurality of metabolic reactions defining metabolite relationships, the method includes selecting a bioengineering objective for the organism, selecting at least one cellular objective, forming an optimization problem that couples the at least one cellular objective with the bioengineering objective, and solving the optimization problem to yield at least one candidate.
Method for determining gene knockouts
Maranas, Costa D; Burgard, Anthony R; Pharkya, Priti
2013-06-04
A method for determining candidates for gene deletions and additions using a model of a metabolic network associated with an organism, the model includes a plurality of metabolic reactions defining metabolite relationships, the method includes selecting a bioengineering objective for the organism, selecting at least one cellular objective, forming an optimization problem that couples the at least one cellular objective with the bioengineering objective, and solving the optimization problem to yield at least one candidate.
Reactive flow model development for PBXW-126 using modern nonlinear optimization methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Murphy, M.J.; Simpson, R.L.; Urtiew, P.A.
1996-05-01
The initiation and detonation behavior of PBXW-126 has been characterized and is described. PBXW-126 is a composite explosive consisting of approximately equal amounts of RDX, AP, Al, and NTO with a polyurethane binder. The three-term ignition and growth of reaction model parameters (ignition + two growth terms) have been found using nonlinear optimization methods to determine the "best" set of model parameters. The ignition term treats the initiation of up to 0.5% of the RDX. The first growth term in the model treats the RDX growth of reaction up to 20% reacted. The second growth term treats the subsequent growth of reaction of the remaining AP/Al/NTO. The unreacted equation of state (EOS) was determined from the wave profiles of embedded gauge tests, while the JWL product EOS was determined from cylinder expansion test results. The nonlinear optimization code, NLQPEB/GLO, was used to determine the "best" set of coefficients for the three-term Lee-Tarver ignition and growth of reaction model. © 1996 American Institute of Physics.
Static vs stochastic optimization: A case study of FTSE Bursa Malaysia sectorial indices
NASA Astrophysics Data System (ADS)
Mamat, Nur Jumaadzan Zaleha; Jaaman, Saiful Hafizah; Ahmad, Rokiah@Rozita
2014-06-01
Traditional portfolio optimization methods, such as Markowitz's mean-variance model and the semi-variance model, utilize static expected return and volatility risk from historical data to generate an optimal portfolio. The optimal portfolio may not truly be optimal in reality because maximum and minimum values in the data may heavily influence the expected return and volatility risk values. This paper considers the distributions of assets' returns and volatility risk to determine a more realistic optimized portfolio. For illustration purposes, sectorial indices data from FTSE Bursa Malaysia are employed. The results show that stochastic optimization provides a more stable information ratio.
A New Model for a Carpool Matching Service.
Xia, Jizhe; Curtin, Kevin M; Li, Weihong; Zhao, Yonglong
2015-01-01
Carpooling is an effective means of reducing traffic. A carpool team shares a vehicle for their commute, which reduces the number of vehicles on the road during rush hour periods. Carpooling is officially sanctioned by most governments, and is supported by the construction of high-occupancy vehicle lanes. A number of carpooling services have been designed in order to match commuters into carpool teams, but it is known that the determination of optimal carpool teams is a combinatorially complex problem, and therefore technological solutions are difficult to achieve. In this paper, a model for carpool matching services is proposed, and both optimal and heuristic approaches are tested to find solutions for that model. The results show that different solution approaches are preferred over different ranges of problem instances. Most importantly, it is demonstrated that a new formulation and associated solution procedures can permit the determination of optimal carpool teams and routes. An instantiation of the model is presented (using the street network of Guangzhou city, China) to demonstrate how carpool teams can be determined.
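The combinatorial difficulty noted above is easy to exhibit even at toy scale. Below, an exhaustive search over perfect matchings (optimal, but exponential in the number of commuters) is compared with a nearest-partner greedy heuristic on four hypothetical commuters; the distances are invented for illustration.

```python
def pairing_cost(pairs, dist):
    # Total travel cost of a set of two-person carpool teams.
    return sum(dist[a][b] for a, b in pairs)

def best_pairing(people, dist):
    """Exhaustive search over all perfect matchings (even team count
    assumed).  Optimal, but the number of matchings grows factorially."""
    if not people:
        return [], 0.0
    first, rest = people[0], people[1:]
    best = None
    for i, partner in enumerate(rest):
        sub, c = best_pairing(rest[:i] + rest[i + 1:], dist)
        total = c + dist[first][partner]
        if best is None or total < best[1]:
            best = ([(first, partner)] + sub, total)
    return best

def greedy_pairing(people, dist):
    """Heuristic: repeatedly pair the closest remaining commuters."""
    left, pairs = list(people), []
    while left:
        a = left.pop(0)
        b = min(left, key=lambda p: dist[a][p])
        left.remove(b)
        pairs.append((a, b))
    return pairs, pairing_cost(pairs, dist)

D = {"A": {"B": 1, "C": 2, "D": 6},
     "B": {"A": 1, "C": 6, "D": 2},
     "C": {"A": 2, "B": 6, "D": 10},
     "D": {"A": 6, "B": 2, "C": 10}}
pairs_opt, cost_opt = best_pairing(["A", "B", "C", "D"], D)
pairs_gr, cost_gr = greedy_pairing(["A", "B", "C", "D"], D)
```

Greedy locks in the close pair (A, B) and is then forced into the expensive pair (C, D), while the exhaustive search finds the cheaper cross-pairing; this is the gap between the heuristic and optimal approaches that the paper evaluates at realistic scale.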
The meaning of death: some simulations of a model of healthy and unhealthy consumption.
Forster, M
2001-07-01
Simulations of a model of healthy and unhealthy consumption are used to investigate the impact of various terminal conditions on life-span, pathways of health-related consumption and health. A model in which life-span and the 'death' stock of health are fixed is compared to versions in which (i) the 'death' stock of health is freely chosen; (ii) life-span is freely chosen; (iii) both the 'death' stock of health and life-span are freely chosen. The choice of terminal conditions has a striking impact on optimal plans. Results are discussed with reference to the existing demand for health literature and illustrate the application of iterative processes to determine optimal life-span, the role played by the marginal value of health capital in determining optimal plans, and the importance of checking the second-order conditions for the optimal choice of life-span.
NASA Astrophysics Data System (ADS)
Mishra, Vinod Kumar
2017-09-01
In this paper we develop an inventory model to determine the optimal ordering quantities for a set of two substitutable deteriorating items. In this inventory model, the inventory level of both items is depleted due to demand and deterioration, and when an item is out of stock, its demand is partially fulfilled by the other item; all unsatisfied demand is lost. Each substituted item incurs a cost of substitution, and the demand and deterioration rates are considered deterministic and constant. Items are ordered jointly in each ordering cycle to take advantage of joint replenishment. The problem is formulated and a solution procedure is developed to determine the optimal ordering quantities that minimize the total inventory cost. We provide an extensive numerical and sensitivity analysis to illustrate the effect of the different parameters on the model. The key observation from the numerical analysis is that there is a substantial improvement in the optimal total cost of the inventory model with substitution over the one without substitution.
Optimal segmentation and packaging process
Kostelnik, K.M.; Meservey, R.H.; Landon, M.D.
1999-08-10
A process for improving packaging efficiency uses three dimensional, computer simulated models with various optimization algorithms to determine the optimal segmentation process and packaging configurations based on constraints including container limitations. The present invention is applied to a process for decontaminating, decommissioning (D and D), and remediating a nuclear facility involving the segmentation and packaging of contaminated items in waste containers in order to minimize the number of cuts, maximize packaging density, and reduce worker radiation exposure. A three-dimensional, computer simulated facility model of the contaminated items is created. The contaminated items are differentiated. The optimal location, orientation and sequence of the segmentation and packaging of the contaminated items is determined using the simulated model, the algorithms, and various constraints including container limitations. The cut locations and orientations are transposed to the simulated model. The contaminated items are actually segmented and packaged. The segmentation and packaging may be simulated beforehand. In addition, the contaminated items may be cataloged and recorded. 3 figs.
NASA Astrophysics Data System (ADS)
Khalilpourazari, Soheyl; Khalilpourazary, Saman
2017-05-01
In this article a multi-objective mathematical model is developed to minimize total time and cost while maximizing the production rate and surface finish quality in the grinding process. The model aims to determine optimal values of the decision variables considering process constraints. A lexicographic weighted Tchebycheff approach is developed to obtain efficient Pareto-optimal solutions of the problem in both rough and finished conditions. Utilizing a polyhedral branch-and-cut algorithm, the lexicographic weighted Tchebycheff model of the proposed multi-objective model is solved using GAMS software. The Pareto-optimal solutions provide a proper trade-off between the conflicting objective functions, which helps the decision maker to select the best values for the decision variables. Sensitivity analyses are performed to determine the effect of changes in grain size, grinding ratio, feed rate, labour cost per hour, workpiece length, wheel diameter and downfeed on each objective function value.
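The weighted Tchebycheff scalarization at the core of the approach can be sketched on a toy bi-objective problem (the grinding model itself is far richer). The augmented variant below adds a small penalty term where the lexicographic variant would instead run a second-stage solve; sweeping the weights traces points along the Pareto front.

```python
import numpy as np

def tchebycheff_point(w, f1, f2, xs, rho=1e-4):
    """Augmented weighted Tchebycheff scalarization on a grid (sketch).

    z* is the ideal point (component-wise minima); rho weights the
    standard augmentation term that rules out weakly efficient points.
    """
    F1, F2 = f1(xs), f2(xs)
    z1, z2 = F1.min(), F2.min()
    scal = (np.maximum(w[0] * (F1 - z1), w[1] * (F2 - z2))
            + rho * ((F1 - z1) + (F2 - z2)))
    return xs[np.argmin(scal)]

# Toy conflicting objectives: f1 pulls toward x=0, f2 toward x=2.
f1 = lambda x: x ** 2
f2 = lambda x: (x - 2.0) ** 2
xs = np.linspace(-1.0, 3.0, 4001)

# Sweeping the weights traces the Pareto front between x=0 and x=2.
front = [tchebycheff_point((w, 1.0 - w), f1, f2, xs) for w in (0.1, 0.5, 0.9)]
```

A heavier weight on an objective penalizes its deviation from the ideal point more, so the solution slides toward that objective's minimizer; equal weights land at the balanced compromise x = 1.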
OPTIMIZATION OF INTEGRATED URBAN WET-WEATHER CONTROL STRATEGIES
An optimization method for urban wet weather control (WWC) strategies is presented. The developed optimization model can be used to determine the most cost-effective strategies for the combination of centralized storage-release systems and distributed on-site WWC alternatives. T...
Optimal control of CPR procedure using hemodynamic circulation model
Lenhart, Suzanne M.; Protopopescu, Vladimir A.; Jung, Eunok
2007-12-25
A method for determining a chest pressure profile for cardiopulmonary resuscitation (CPR) includes the steps of representing a hemodynamic circulation model based on a plurality of difference equations for a patient, applying an optimal control (OC) algorithm to the circulation model, and determining a chest pressure profile. The chest pressure profile defines a timing pattern of externally applied pressure to a chest of the patient to maximize blood flow through the patient. A CPR device includes a chest compressor, a controller communicably connected to the chest compressor, and a computer communicably connected to the controller. The computer determines the chest pressure profile by applying an OC algorithm to a hemodynamic circulation model based on the plurality of difference equations.
HOMER: The hybrid optimization model for electric renewable
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lilienthal, P.; Flowers, L.; Rossmann, C.
1995-12-31
Hybrid renewable systems are often more cost-effective than grid extensions or isolated diesel generators for providing power to remote villages. There are a wide variety of hybrid systems being developed for village applications that have differing combinations of wind, photovoltaics, batteries, and diesel generators. Due to variations in loads and resources, determining the most appropriate combination of these components for a particular village is a difficult modelling task. To address this design problem the National Renewable Energy Laboratory has developed the Hybrid Optimization Model for Electric Renewables (HOMER). Existing models are either too detailed for screening analysis or too simple for reliable estimation of performance. HOMER is a design optimization model that determines the configuration, dispatch, and load management strategy that minimizes life-cycle costs for a particular site and application. This paper describes the HOMER methodology and presents representative results.
A Model for Determining School District Cash Flow Needs.
ERIC Educational Resources Information Center
Dembowski, Frederick L.
This paper discusses a model to optimize cash management in school districts. A brief discussion of the cash flow pattern of school districts is followed by an analysis of the constraints faced by the school districts in their investment planning process. A linear programming model used to optimize net interest earnings on investments is developed…
Methods of increasing efficiency and maintainability of pipeline systems
NASA Astrophysics Data System (ADS)
Ivanov, V. A.; Sokolov, S. M.; Ogudova, E. V.
2018-05-01
This study is dedicated to the issue of pipeline transportation system maintenance. The article identifies two classes of technical-and-economic indices, which are used to select an optimal pipeline transportation system structure. Further, the article describes various system maintenance strategies and strategy selection criteria. These maintenance strategies turn out to be insufficiently effective due to non-optimal maintenance intervals. This problem can be solved by running an adaptive maintenance system, which includes a pipeline transportation system reliability improvement algorithm and, in particular, an equipment degradation computer model. In conclusion, three model-building approaches for determining the optimal duration of technical system verification inspections are considered.
Urine sampling and collection system optimization and testing
NASA Technical Reports Server (NTRS)
Fogal, G. L.; Geating, J. A.; Koesterer, M. G.
1975-01-01
A Urine Sampling and Collection System (USCS) engineering model was developed to provide for the automatic collection, volume sensing and sampling of urine from each micturition. The purpose of the engineering model was to demonstrate verification of the system concept. The objective of the optimization and testing program was to update the engineering model, to provide additional performance features and to conduct system testing to determine operational problems. Optimization tasks were defined as modifications to minimize system fluid residual and addition of thermoelectric cooling.
Trajectory Design Employing Convex Optimization for Landing on Irregularly Shaped Asteroids
NASA Technical Reports Server (NTRS)
Pinson, Robin M.; Lu, Ping
2016-01-01
Mission proposals that land spacecraft on asteroids are becoming increasingly popular. However, in order to have a successful mission the spacecraft must reliably and softly land at the intended landing site with pinpoint precision. The problem under investigation is how to design a propellant optimal powered descent trajectory that can be quickly computed onboard the spacecraft, without interaction from the ground control. The propellant optimal control problem in this work is to determine the optimal finite thrust vector to land the spacecraft at a specified location, in the presence of a highly nonlinear gravity field, subject to various mission and operational constraints. The proposed solution uses convex optimization, a gravity model with higher fidelity than Newtonian, and an iterative solution process for a fixed final time problem. In addition, a second optimization method is wrapped around the convex optimization problem to determine the optimal flight time that yields the lowest propellant usage over all flight times. Gravity models designed for irregularly shaped asteroids are investigated. Success of the algorithm is demonstrated by designing powered descent trajectories for the elongated binary asteroid Castalia.
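The outer optimization wrapped around the convex solver is one-dimensional (flight time), so a derivative-free bracketing method suffices. A minimal sketch, with a hypothetical quadratic `propellant_for_flight_time` standing in for the inner convex solve and assuming the propellant-vs-time curve is unimodal:

```python
import math

def golden_section_min(f, a, b, tol=1e-4):
    """Minimize a unimodal function f on [a, b] by golden-section search."""
    phi = (math.sqrt(5) - 1) / 2
    c, d = b - phi * (b - a), a + phi * (b - a)
    while b - a > tol:
        if f(c) < f(d):
            b, d = d, c
            c = b - phi * (b - a)
        else:
            a, c = c, d
            d = a + phi * (b - a)
    return (a + b) / 2

# Hypothetical stand-in for the inner fixed-final-time convex solve:
# propellant used as a function of flight time, assumed unimodal.
def propellant_for_flight_time(tf):
    return 2.0 + (tf - 60.0) ** 2 / 100.0

best_tf = golden_section_min(propellant_for_flight_time, 30.0, 120.0)
```

In the paper the inner evaluation is itself an iteratively solved convex program; here it is a closed-form surrogate so the outer search over flight time is the only moving part.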
NASA Astrophysics Data System (ADS)
Moghaddam, Kamran S.; Usher, John S.
2011-07-01
In this article, a new multi-objective optimization model is developed to determine the optimal preventive maintenance and replacement schedules in a repairable and maintainable multi-component system. In this model, the planning horizon is divided into discrete and equally-sized periods in which three possible actions must be planned for each component, namely maintenance, replacement, or do nothing. The objective is to determine a plan of actions for each component in the system that simultaneously minimizes the total cost and maximizes overall system reliability over the planning horizon. Because of the complex, combinatorial and highly nonlinear structure of the mathematical model, two metaheuristic solution methods, a generational genetic algorithm and simulated annealing, are applied to tackle the problem. The solution approach yields Pareto optimal solutions that provide good tradeoffs between the total cost and the overall reliability of the system. Such a modeling approach should be useful for maintenance planners and engineers tasked with developing recommended maintenance plans for complex systems of components.
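The Pareto set the metaheuristics search for can be characterized by a simple dominance test: a (cost, reliability) pair is kept only if no other plan is at least as cheap and at least as reliable. A small illustrative filter (the candidate values below are made up):

```python
def pareto_front(solutions):
    """Keep non-dominated (cost, reliability) pairs: lower cost and
    higher reliability are both preferred."""
    front = []
    for cost, rel in solutions:
        dominated = any(c <= cost and r >= rel and (c, r) != (cost, rel)
                        for c, r in solutions)
        if not dominated:
            front.append((cost, rel))
    return sorted(front)

# Hypothetical maintenance plans evaluated as (total cost, reliability):
candidates = [(100, 0.90), (120, 0.95), (110, 0.88), (100, 0.92), (130, 0.95)]
front = pareto_front(candidates)   # -> [(100, 0.92), (120, 0.95)]
```

The GA and simulated annealing in the article generate the candidate plans; a filter like this extracts the trade-off curve presented to the planner.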
2011-03-09
task stability, technology application certainty, risk, and transaction-specific investments impact the selection of the optimal mode of governance. Our model views...U.S. Defense Industry. The 1990s were a perfect storm of technological change, consolidation, budget downturns, environmental uncertainty, and the...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Huaiguang
This work proposes an approach for distribution system load forecasting, which aims to provide highly accurate short-term load forecasting with high resolution utilizing a support vector regression (SVR) based forecaster and a two-step hybrid parameters optimization method. Specifically, because the load profiles in distribution systems contain abrupt deviations, a data normalization is designed as the pretreatment for the collected historical load data. Then an SVR model is trained by the load data to forecast the future load. For better performance of SVR, a two-step hybrid optimization algorithm is proposed to determine the best parameters. In the first step of the hybrid optimization algorithm, a designed grid traverse algorithm (GTA) is used to narrow the parameters searching area from a global to local space. In the second step, based on the result of the GTA, particle swarm optimization (PSO) is used to determine the best parameters in the local parameter space. After the best parameters are determined, the SVR model is used to forecast the short-term load deviation in the distribution system.
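The two-step idea can be sketched with a synthetic stand-in for the SVR cross-validation error: a coarse grid scan (GTA-like) narrows the (C, gamma) search box, then a local stochastic search (standing in here for PSO) refines within it. The loss surface below is hypothetical, not from the paper:

```python
import random

def validation_loss(log_c, log_g):
    # Hypothetical smooth stand-in for the SVR cross-validation error,
    # minimized at (log C, log gamma) = (1.2, -2.3).
    return (log_c - 1.2) ** 2 + 0.5 * (log_g + 2.3) ** 2

def grid_traverse(loss, c_range, g_range, steps=9):
    """Step 1 (GTA-like): coarse grid scan to locate a promising region."""
    best = None
    for i in range(steps):
        for j in range(steps):
            c = c_range[0] + i * (c_range[1] - c_range[0]) / (steps - 1)
            g = g_range[0] + j * (g_range[1] - g_range[0]) / (steps - 1)
            if best is None or loss(c, g) < best[0]:
                best = (loss(c, g), c, g)
    return best[1], best[2]

def local_refine(loss, c0, g0, radius=1.0, iters=200, seed=1):
    """Step 2: random local search standing in for PSO in the narrowed box."""
    rng = random.Random(seed)
    best_c, best_g, best_l = c0, g0, loss(c0, g0)
    for _ in range(iters):
        c = best_c + rng.uniform(-radius, radius)
        g = best_g + rng.uniform(-radius, radius)
        if loss(c, g) < best_l:
            best_c, best_g, best_l = c, g, loss(c, g)
    return best_c, best_g

c0, g0 = grid_traverse(validation_loss, (-3, 5), (-6, 2))
c1, g1 = local_refine(validation_loss, c0, g0)
```

The coarse pass trades resolution for coverage; the refinement only ever accepts improvements, so the final point cannot be worse than the grid optimum.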
A hydroeconomic modeling framework for optimal integrated management of forest and water
NASA Astrophysics Data System (ADS)
Garcia-Prats, Alberto; del Campo, Antonio D.; Pulido-Velazquez, Manuel
2016-10-01
Forests play a determinant role in the hydrologic cycle, with water being the most important ecosystem service they provide in semiarid regions. However, this contribution is usually neither quantified nor explicitly valued. The aim of this study is to develop a novel hydroeconomic modeling framework for assessing and designing the optimal integrated forest and water management for forested catchments. The optimization model explicitly integrates changes in water yield in the stands (increase in groundwater recharge) induced by forest management and the value of the additional water provided to the system. The model determines the optimal schedule of silvicultural interventions in the stands of the catchment in order to maximize the total net benefit in the system. Canopy cover and biomass evolution over time were simulated using growth and yield allometric equations specific for the species in Mediterranean conditions. Silvicultural operation costs according to stand density and canopy cover were modeled using local cost databases. Groundwater recharge was simulated using HYDRUS, calibrated and validated with data from the experimental plots. In order to illustrate the presented modeling framework, a case study was carried out in a planted pine forest (Pinus halepensis Mill.) located in south-western Valencia province (Spain). The optimized scenario increased groundwater recharge. This novel modeling framework can be used in the design of a "payment for environmental services" scheme in which water beneficiaries could contribute to fund and promote efficient forest management operations.
Static vs stochastic optimization: A case study of FTSE Bursa Malaysia sectorial indices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mamat, Nur Jumaadzan Zaleha; Jaaman, Saiful Hafizah; Ahmad, Rokiah Rozita
2014-06-19
Traditional portfolio optimization methods in the likes of Markowitz' mean-variance model and semi-variance model utilize static expected return and volatility risk from historical data to generate an optimal portfolio. The optimal portfolio may not truly be optimal in reality because maximum and minimum values in the data may largely influence the expected return and volatility risk values. This paper considers distributions of assets' return and volatility risk to determine a more realistic optimized portfolio. For illustration purposes, sectorial indices data from FTSE Bursa Malaysia are employed. The results show that stochastic optimization provides a more stable information ratio.
NASA Astrophysics Data System (ADS)
Sue-Ann, Goh; Ponnambalam, S. G.
This paper focuses on the operational issues of a Two-echelon Single-Vendor-Multiple-Buyers Supply chain (TSVMBSC) under the vendor managed inventory (VMI) mode of operation. A mathematical model is formulated to determine the optimal sales quantity for each buyer in the TSVMBSC. From the optimal sales quantity, the optimal sales price can be obtained, which in turn determines the optimal channel profit and the contract price between the vendor and buyers. All of these parameters depend on the understanding of the revenue sharing between the vendor and buyers. A particle swarm optimization (PSO) algorithm is proposed for this problem. Solutions obtained from PSO are compared with the best known results reported in the literature.
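A global-best PSO of the kind proposed here fits in a few dozen lines. The sketch below maximizes a hypothetical concave channel-profit function of two buyers' sales quantities; the swarm parameters (w = 0.7, c1 = c2 = 1.5) are common defaults, not values from the paper:

```python
import random

def pso_maximize(f, bounds, n_particles=20, iters=100, seed=7):
    """Minimal particle swarm optimizer with a global-best topology."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w, c1, c2 = 0.7, 1.5, 1.5           # inertia and acceleration weights
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val > pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val > gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Hypothetical channel profit over two buyers' sales quantities,
# maximized at q = (10, 10) with profit 100.
profit = lambda q: sum(10 * x - 0.5 * x * x for x in q)
best_q, best_profit = pso_maximize(profit, [(0, 30), (0, 30)])
```

Each particle is pulled toward its own best point and the swarm's best point, which is what lets PSO handle the non-convex profit surfaces that arise in revenue-sharing models.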
Reynolds, Penny S; Tamariz, Francisco J; Barbee, Robert Wayne
2010-04-01
Exploratory pilot studies are crucial to best practice in research but are frequently conducted without a systematic method for maximizing the amount and quality of information obtained. We describe the use of response surface regression models and simultaneous optimization methods to develop a rat model of hemorrhagic shock in the context of chronic hypertension, a clinically relevant comorbidity. Response surface regression was applied to determine optimal levels of two inputs--dietary NaCl concentration (0.49%, 4%, and 8%) and time on the diet (4, 6, 8 weeks)--to achieve clinically realistic and stable target measures of systolic blood pressure while simultaneously maximizing critical oxygen delivery (a measure of vulnerability to hemorrhagic shock) and body mass M. Simultaneous optimization of the three response variables was performed through a dimensionality reduction strategy involving calculation of a single aggregate measure, the "desirability" function. Optimal conditions for inducing systolic blood pressure of 208 mmHg, critical oxygen delivery of 4.03 mL/min, and M of 290 g were determined to be 4% [NaCl] for 5 weeks. Rats on the 8% diet did not survive past 7 weeks. Response surface regression and simultaneous optimization techniques are commonly used in process engineering but have found little application to date in animal pilot studies. These methods will ensure both the scientific and ethical integrity of experimental trials involving animals and provide powerful tools for the development of novel models of clinically interacting comorbidities with shock.
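The desirability aggregation can be illustrated directly: each response is mapped to [0, 1] by a Derringer-Suich target-is-best function, and the geometric mean is maximized over the design grid. The response surfaces and desirability limits below are invented for illustration; only the reported optimum (4% NaCl, 5 weeks) is echoed by the toy setup:

```python
def desirability_target(y, lo, target, hi):
    """Derringer-Suich target-is-best desirability in [0, 1]."""
    if y <= lo or y >= hi:
        return 0.0
    if y <= target:
        return (y - lo) / (target - lo)
    return (hi - y) / (hi - target)

def overall_desirability(ds):
    """Geometric mean of the individual desirabilities."""
    prod = 1.0
    for d in ds:
        prod *= d
    return prod ** (1.0 / len(ds))

# Hypothetical response surfaces in NaCl % (x1) and weeks on diet (x2):
sbp = lambda x1, x2: 150 + 12 * x1 + 2 * x2        # systolic BP, mmHg
do2 = lambda x1, x2: 5.0 - 0.2 * x1 - 0.05 * x2    # critical O2 delivery

best = None
for x1 in [0.49, 2, 4, 6, 8]:
    for x2 in [4, 5, 6, 7, 8]:
        d = overall_desirability([
            desirability_target(sbp(x1, x2), 180, 208, 230),
            desirability_target(do2(x1, x2), 3.0, 4.0, 5.0),
        ])
        if best is None or d > best[0]:
            best = (d, x1, x2)
```

The geometric mean is deliberate: any response that is unacceptable (desirability 0) zeroes the aggregate, so no single response can be sacrificed entirely.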
Liu, Hao; Shao, Qi; Fang, Xuelin
2017-02-01
For the class-E amplifier in a wireless power transfer (WPT) system, the design parameters are usually determined from the nominal model. However, this model neglects the conduction loss and voltage stress of the MOSFET and cannot guarantee the highest efficiency in a WPT system for biomedical implants. To solve this problem, this paper proposes a novel circuit model of the subnominal class-E amplifier. The proposed model was validated on a WPT platform for a capsule endoscope, and the relationship between the amplifier's design parameters and its characteristics was analyzed. At a given duty ratio, the design parameters with the highest efficiency and safe voltage stress are derived; this condition is called the 'optimal subnominal condition.' The amplifier's efficiency reaches a maximum of 99.3% at a 0.097 duty ratio. Furthermore, at a 0.5 duty ratio, the measured efficiency of the optimal subnominal condition reaches 90.8%, which is 15.2% higher than that of the nominal condition. A WPT experiment with a receiving unit was then carried out to validate the feasibility of the optimized amplifier. In general, the design parameters of a class-E amplifier in a WPT system for biomedical implants can be determined with the optimization method proposed in this paper.
Optimal cycling time trial position models: aerodynamics versus power output and metabolic energy.
Fintelman, D M; Sterling, M; Hemida, H; Li, F-X
2014-06-03
The aerodynamic drag of a cyclist in time trial (TT) position is strongly influenced by the torso angle. While decreasing the torso angle reduces the drag, it limits the physiological functioning of the cyclist. Therefore the aims of this study were to predict the optimal TT cycling position as a function of the cycling speed and to determine at which speed the aerodynamic power losses start to dominate. Two models were developed to determine the optimal torso angle: a 'Metabolic Energy Model' and a 'Power Output Model'. The Metabolic Energy Model minimised the required cycling energy expenditure, while the Power Output Model maximised the cyclists' power output. The input parameters were experimentally collected from 19 TT cyclists at different torso angle positions (0-24°). The results showed that for both models, the optimal torso angle depends strongly on the cycling speed, with decreasing torso angles at increasing speeds. The aerodynamic losses outweigh the power losses at cycling speeds above 46 km/h. However, a fully horizontal torso is not optimal. For speeds below 30 km/h, it is beneficial to ride in a more upright TT position. The two model outputs were not completely similar, due to the different model approaches. The Metabolic Energy Model could be applied for endurance events, while the Power Output Model is more suitable in sprinting or in variable conditions (wind, undulating course, etc.). It is suggested that despite some limitations, the models give valuable information about improving the cycling performance by optimising the TT cycling position. Copyright © 2014 Elsevier Ltd. All rights reserved.
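The trade-off both models capture can be sketched as a power balance: sustainable power rises with torso angle while drag area falls with a flatter torso, and the attainable speed solves P(θ) = ½ρ·CdA(θ)·v³ + Crr·m·g·v. The linear CdA fit and the power curve below are hypothetical, not the fits obtained from the 19 cyclists:

```python
RHO, CRR, MASS, G = 1.2, 0.004, 85.0, 9.81   # air density, rolling coeff, kg, m/s^2

# Hypothetical fits: drag area (m^2) grows and sustainable power (W)
# grows with torso angle theta (deg, 0 = horizontal).
cda = lambda th: 0.22 + 0.004 * th
p_max = lambda th: 400.0 - 0.5 * (24.0 - th) ** 2

def speed_at(th):
    """Solve p_max(th) = 0.5*rho*CdA*v^3 + Crr*m*g*v for v by bisection."""
    demand = lambda v: 0.5 * RHO * cda(th) * v ** 3 + CRR * MASS * G * v
    lo, hi = 0.0, 30.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if demand(mid) < p_max(th):
            lo = mid
        else:
            hi = mid
    return lo

best_th = max(range(0, 25), key=speed_at)    # optimal torso angle on a 1-deg grid
```

With these assumed curves the optimum lands strictly between horizontal and fully upright, mirroring the paper's finding that neither extreme is best.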
Chenel, Marylore; Bouzom, François; Aarons, Leon; Ogungbenro, Kayode
2008-12-01
To determine the optimal sampling time design of a drug-drug interaction (DDI) study for the estimation of apparent clearances (CL/F) of two co-administered drugs (SX, a phase I compound, potentially a CYP3A4 inhibitor, and MDZ, a reference CYP3A4 substrate) without any in vivo data using physiologically based pharmacokinetic (PBPK) predictions, population PK modelling and multiresponse optimal design. PBPK models were developed with AcslXtreme using only in vitro data to simulate PK profiles of both drugs when they were co-administered. Then, using simulated data, population PK models were developed with NONMEM and optimal sampling times were determined by optimizing the determinant of the population Fisher information matrix with PopDes using either two uniresponse designs (UD) or a multiresponse design (MD) with joint sampling times for both drugs. Finally, the D-optimal sampling time designs were evaluated by simulation and re-estimation with NONMEM by computing the relative root mean squared error (RMSE) and empirical relative standard errors (RSE) of CL/F. There were four and five optimal sampling times (=nine different sampling times) in the UDs for SX and MDZ, respectively, whereas there were only five sampling times in the MD. Whatever design and compound, CL/F was well estimated (RSE < 20% for MDZ and <25% for SX) and expected RSEs from PopDes were in the same range as empirical RSEs. Moreover, there was no bias in CL/F estimation. Since MD required only five sampling times compared to the two UDs, D-optimal sampling times of the MD were included into a full empirical design for the proposed clinical trial. A joint paper compares the designs with real data. This global approach including PBPK simulations, population PK modelling and multiresponse optimal design allowed, without any in vivo data, the design of a clinical trial, using sparse sampling, capable of estimating CL/F of the CYP3A4 substrate and potential inhibitor when co-administered together.
NASA Astrophysics Data System (ADS)
Jayaweera, H. M. P. C.; Muhtaroğlu, Ali
2016-11-01
A novel model-based methodology is presented to determine optimal device parameters for a fully integrated ultra-low-voltage DC-DC converter for energy harvesting applications. The proposed model makes it feasible to determine the most efficient number of charge pump stages that fulfills the voltage requirement of the energy harvester application. The proposed DC-DC converter power consumption model enables the analytical derivation of the charge pump efficiency when utilized simultaneously with the known LC tank oscillator behavior under resonant conditions and the voltage step-up characteristics of the cross-coupled charge pump topology. The model has been verified using a circuit simulator. The system optimized through the established model achieves more than 40% maximum efficiency, yielding a 0.45 V output with a single stage, 0.75 V with two stages, and 0.9 V with three stages for 2.5 kΩ, 3.5 kΩ and 5 kΩ loads, respectively, from a 0.2 V input.
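The stage-count question can be sketched as: increase the number of charge-pump stages until the ideal (n + 1)·Vin step-up, derated by an assumed per-stage efficiency, meets the load's voltage requirement. This is a deliberate simplification of the paper's power-consumption model, and the 0.95 per-stage derating is an assumption:

```python
def stages_needed(v_in, v_target, stage_eff=0.95):
    """Minimum number of cross-coupled charge-pump stages so that the
    ideal (n + 1) * v_in step-up, derated per stage, reaches v_target.
    Simplified model; the paper instead derives efficiency from a joint
    LC-oscillator / charge-pump power model."""
    n = 1
    while (n + 1) * v_in * stage_eff ** n < v_target:
        n += 1
        if n > 20:
            raise ValueError("target not reachable with this model")
    return n

n = stages_needed(0.2, 0.7)   # stages for a 0.7 V target from 0.2 V input
```

The point the abstract makes survives the simplification: each extra stage raises the output voltage but compounds the per-stage loss, so the "maximum efficient" stage count is the smallest one that meets the voltage requirement.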
NASA Astrophysics Data System (ADS)
Mao, Zhiyi; Shan, Ruifeng; Wang, Jiajun; Cai, Wensheng; Shao, Xueguang
2014-07-01
Polyphenols in plant samples have been extensively studied because phenolic compounds are ubiquitous in plants and can be used as antioxidants in promoting human health. A method for rapid determination of three phenolic compounds (chlorogenic acid, scopoletin and rutin) in plant samples using near-infrared diffuse reflectance spectroscopy (NIRDRS) is studied in this work. Partial least squares (PLS) regression was used for building the calibration models, and the effects of spectral preprocessing and variable selection on the models are investigated for optimization of the models. The results show that individual spectral preprocessing and variable selection has no or slight influence on the models, but the combination of the techniques can significantly improve the models. The combination of continuous wavelet transform (CWT) for removing the variant background, multiplicative scatter correction (MSC) for correcting the scattering effect and randomization test (RT) for selecting the informative variables was found to be the best way for building the optimal models. For validation of the models, the polyphenol contents in an independent sample set were predicted. The correlation coefficients between the predicted values and the contents determined by high performance liquid chromatography (HPLC) analysis are as high as 0.964, 0.948 and 0.934 for chlorogenic acid, scopoletin and rutin, respectively.
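Of the preprocessing steps combined here, multiplicative scatter correction is the easiest to sketch: each spectrum is regressed on the mean spectrum and the fitted offset and slope are removed. A minimal version (the five-point "spectra" are synthetic, not NIRDRS data):

```python
def msc(spectra):
    """Multiplicative scatter correction: regress each spectrum on the
    mean spectrum and remove the fitted additive and multiplicative terms."""
    n, m = len(spectra), len(spectra[0])
    ref = [sum(s[j] for s in spectra) / n for j in range(m)]   # mean spectrum
    ref_mean = sum(ref) / m
    corrected = []
    for s in spectra:
        s_mean = sum(s) / m
        cov = sum((ref[j] - ref_mean) * (s[j] - s_mean) for j in range(m))
        var = sum((ref[j] - ref_mean) ** 2 for j in range(m))
        b = cov / var                  # multiplicative (scatter) term
        a = s_mean - b * ref_mean      # additive (baseline) term
        corrected.append([(s[j] - a) / b for j in range(m)])
    return corrected

# Two copies of the same underlying spectrum with different scatter:
base = [0.1, 0.4, 0.9, 0.4, 0.1]
spectra = [[1.2 * x + 0.05 for x in base], [0.8 * x - 0.02 for x in base]]
out = msc(spectra)                     # both rows collapse to the same shape
```

Because the two inputs are exact affine transforms of the same shape, MSC maps them onto identical corrected spectra, which is precisely the scattering effect the paper removes before PLS.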
Optimal design of clinical trials with biologics using dose-time-response models.
Lange, Markus R; Schmidli, Heinz
2014-12-30
Biologics, in particular monoclonal antibodies, are important therapies in serious diseases such as cancer, psoriasis, multiple sclerosis, or rheumatoid arthritis. While most conventional drugs are given daily, the effect of monoclonal antibodies often lasts for months, and hence, these biologics require less frequent dosing. A good understanding of the time-changing effect of the biologic for different doses is needed to determine both an adequate dose and an appropriate time-interval between doses. Clinical trials provide data to estimate the dose-time-response relationship with semi-mechanistic nonlinear regression models. We investigate how to best choose the doses and corresponding sample size allocations in such clinical trials, so that the nonlinear dose-time-response model can be precisely estimated. We consider both local and conservative Bayesian D-optimality criteria for the design of clinical trials with biologics. For determining the optimal designs, computer-intensive numerical methods are needed, and we focus here on the particle swarm optimization algorithm. This metaheuristic optimizer has been successfully used in various areas but has only recently been applied in the optimal design context. The equivalence theorem is used to verify the optimality of the designs. The methodology is illustrated based on results from a clinical study in patients with gout, treated by a monoclonal antibody. Copyright © 2014 John Wiley & Sons, Ltd.
Optimal blood glucose level control using dynamic programming based on minimal Bergman model
NASA Astrophysics Data System (ADS)
Rettian Anggita Sari, Maria; Hartono
2018-03-01
The purpose of this article is to simulate the glucose dynamics and insulin kinetics of a diabetic patient. The model used in this research is the non-linear minimal Bergman model. Optimal control theory is then applied to formulate the problem in order to determine the optimal dose of insulin in the treatment of diabetes mellitus such that the glucose level stays in the normal range over a specific time interval. The optimization problem is solved using dynamic programming. The result shows that dynamic programming is quite reliable in representing the interaction between glucose and insulin levels in a diabetes mellitus patient.
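The glucose-insulin interaction can be sketched by Euler-integrating the minimal Bergman model under a constant infusion and searching a small discrete dose grid, a much-simplified stand-in for the article's dynamic-programming solution. All parameter values below are illustrative, not the patient parameters used in the article:

```python
def simulate_bergman(u, hours=5.0, dt=0.1):
    """Euler simulation of the minimal Bergman model under a constant
    insulin infusion u; returns the final glucose level (mg/dL)."""
    p1, p2, p3, n = 0.003, 0.025, 1.3e-5, 0.09   # illustrative per-minute rates
    gb, ib = 80.0, 7.0                           # basal glucose and insulin
    g, x, i = 250.0, 0.0, ib                     # hyperglycaemic initial state
    for _ in range(int(hours * 60 / dt)):
        dg = -p1 * (g - gb) - x * g              # glucose dynamics
        dx = -p2 * x + p3 * (i - ib)             # remote insulin action
        di = -n * (i - ib) + u                   # plasma insulin kinetics
        g, x, i = g + dt * dg, x + dt * dx, i + dt * di
    return g

# Exhaustive search over a discrete dose grid: a crude stand-in for the
# article's dynamic-programming policy, with a quadratic tracking cost.
doses = [0.0, 0.5, 1.0, 2.0, 4.0]
cost = lambda u: (simulate_bergman(u) - 80.0) ** 2 + 0.1 * u
best_u = min(doses, key=cost)
```

Dynamic programming would choose a dose per period rather than one constant infusion, but the building blocks, forward simulation plus a stage cost penalizing glucose deviation and insulin use, are the same.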
Optimal ordering and production policy for a recoverable item inventory system with learning effect
NASA Astrophysics Data System (ADS)
Tsai, Deng-Maw
2012-02-01
This article presents two models for determining an optimal integrated economic order quantity and economic production quantity policy in a recoverable manufacturing environment. The models assume that the unit production time of the recovery process decreases with the increase in total units produced as a result of learning. A fixed proportion of used products are collected from customers and then recovered for reuse. The recovered products are assumed to be in good condition and acceptable to customers. Constant demand can be satisfied by utilising both newly purchased products and recovered products. The aim of this article is to show how to minimise total inventory-related cost. The total cost functions of the two models are derived and two simple search procedures are proposed to determine optimal policy parameters. Numerical examples are provided to illustrate the proposed models. In addition, sensitivity analyses have also been performed and are discussed.
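The learning effect enters the cost model through a unit production time t(n) = t1·n^(−b); the optimal lot size then balances setup and holding charges against learning-reduced production cost. A hedged sketch with invented cost figures and a plain grid search in place of the article's search procedures:

```python
def total_cost(q, demand=1200.0, setup=50.0, hold=2.0,
               t1=1.0, learn_b=0.3, labour=0.5):
    """Annual cost for production lot size q when the unit production
    time follows a learning curve t(n) = t1 * n**(-learn_b) within a lot.
    All figures are illustrative, not from the article."""
    lots = demand / q
    # time to produce one lot of q units under the learning curve
    lot_time = sum(t1 * n ** (-learn_b) for n in range(1, int(q) + 1))
    production = lots * labour * lot_time
    return lots * setup + hold * q / 2.0 + production

# One-dimensional grid search standing in for the proposed procedures:
best_q = min(range(10, 601, 10), key=total_cost)
```

Larger lots amortize setups and exploit the learning curve deeper into each run, but raise holding cost, so the optimum is interior rather than at either extreme.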
de Koning, Jos J; van der Zweep, Cees-Jan; Cornelissen, Jesper; Kuiper, Bouke
2013-03-01
Optimal pacing strategy was determined for breaking the world speed record on a human-powered vehicle (HPV) using an energy-flow model in which the rider's physical capacities, the vehicle's properties, and the environmental conditions were included. Power data from world-record attempts were compared with data from the model, and race protocols were adjusted to the results from the model. HPV performance can be improved by using an energy-flow model for optimizing race strategy. A biphased in-run followed by a sprint gave best results.
Review: Optimization methods for groundwater modeling and management
NASA Astrophysics Data System (ADS)
Yeh, William W.-G.
2015-09-01
Optimization methods have been used in groundwater modeling as well as for the planning and management of groundwater systems. This paper reviews and evaluates the various optimization methods that have been used for solving the inverse problem of parameter identification (estimation), experimental design, and groundwater planning and management. Various model selection criteria are discussed, as well as criteria used for model discrimination. The inverse problem of parameter identification concerns the optimal determination of model parameters using water-level observations. In general, the optimal experimental design seeks to find sampling strategies for the purpose of estimating the unknown model parameters. A typical objective of optimal conjunctive-use planning of surface water and groundwater is to minimize the operational costs of meeting water demand. The optimization methods include mathematical programming techniques such as linear programming, quadratic programming, dynamic programming, stochastic programming, nonlinear programming, and the global search algorithms such as genetic algorithms, simulated annealing, and tabu search. Emphasis is placed on groundwater flow problems as opposed to contaminant transport problems. A typical two-dimensional groundwater flow problem is used to explain the basic formulations and algorithms that have been used to solve the formulated optimization problems.
NASA Astrophysics Data System (ADS)
Muratore-Ginanneschi, Paolo
2005-05-01
Investment strategies in multiplicative Markovian market models with transaction costs are defined using growth optimal criteria. The optimal strategy is shown to consist in holding the amount of capital invested in stocks within an interval around an ideal optimal investment. The size of the holding interval is determined by the intensity of the transaction costs and the time horizon. The inclusion of financial derivatives in the models is also considered. All the results presented in this contribution were previously derived in collaboration with E. Aurell.
NASA Astrophysics Data System (ADS)
Imvitthaya, Chomchid; Honda, Kiyoshi; Lertlum, Surat; Tangtham, Nipon
2011-01-01
In this paper, we present the results of net primary production (NPP) modeling of teak (Tectona grandis Lin F.), an important species in tropical deciduous forests. The biome-biogeochemical cycles (Biome-BGC) model was calibrated to estimate NPP through an inverse modeling approach. A genetic algorithm (GA) was linked with Biome-BGC to determine the optimal ecophysiological model parameters. The Biome-BGC was calibrated by adjusting the ecophysiological model parameters to fit the simulated LAI to the satellite LAI (SPOT-Vegetation), and the best fitness confirmed the high accuracy of the ecophysiological parameters generated by the GA. The modeled NPP, using the optimized parameters from the GA as input data, was evaluated against daily NPP derived from the MODIS satellite and annual field data in northern Thailand. The results showed that NPP obtained using the optimized ecophysiological parameters was more accurate than that obtained using default literature parameterization. This improvement occurred mainly because the optimized parameters reduced systematic underestimation in the model. These Biome-BGC results can be effectively applied to teak forests in tropical areas. The study proposes a more effective method of using a GA to determine ecophysiological parameters at the site level and represents a first step toward the analysis of the carbon budget of teak plantations at the regional scale.
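The GA-based calibration loop can be illustrated with a one-parameter toy stand-in for Biome-BGC: simulate LAI, score it against "satellite" LAI, and evolve the parameter. The forcing series and GA settings below are assumptions, not values from the study:

```python
import random

def ga_minimize(loss, lo, hi, pop_size=30, gens=40, seed=3):
    """Tiny real-coded GA: tournament selection, blend crossover, mutation."""
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    best = min(pop, key=loss)
    for _ in range(gens):
        def tournament():
            a, b = rng.choice(pop), rng.choice(pop)
            return a if loss(a) < loss(b) else b
        nxt = []
        for _ in range(pop_size):
            p1, p2 = tournament(), tournament()
            child = p1 + rng.uniform(-0.5, 1.5) * (p2 - p1)   # blend crossover
            if rng.random() < 0.1:
                child += rng.gauss(0.0, 0.05 * (hi - lo))     # mutation
            nxt.append(min(hi, max(lo, child)))
        pop = nxt
        best = min(pop + [best], key=loss)                    # keep best-so-far
    return best

# Hypothetical one-parameter stand-in for Biome-BGC: simulated LAI is
# proportional to a single ecophysiological parameter eps.
forcing = [1.0, 2.5, 4.0, 5.0, 4.5, 3.0]
sat_lai = [0.9 * f for f in forcing]       # "satellite" LAI; true eps = 0.9
loss = lambda eps: sum((eps * f - o) ** 2 for f, o in zip(forcing, sat_lai))
best_eps = ga_minimize(loss, 0.0, 2.0)
```

In the paper the "loss" is the misfit between Biome-BGC LAI and SPOT-Vegetation LAI over many ecophysiological parameters at once; the loop structure is the same.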
Applying complex models to poultry production in the future--economics and biology.
Talpaz, H; Cohen, M; Fancher, B; Halley, J
2013-09-01
The ability to determine the optimal broiler feed nutrient density that maximizes margin over feeding cost (MOFC) has obvious economic value. To determine optimal feed nutrient density, one must consider ingredient prices, meat values, the product mix being marketed, and the projected biological performance. A series of 8 feeding trials was conducted to estimate biological responses to changes in ME and amino acid (AA) density. Eight different genotypes of sex-separate reared broilers were fed diets varying in ME (2,723-3,386 kcal of ME/kg) and AA (0.89-1.65% digestible lysine, with all essential AAs indexed to lysine) levels. Broilers were processed to determine carcass component yield at many different BW (1.09-4.70 kg). The trial data generated were used in a model constructed to discover the dietary levels of ME and AA that maximize MOFC on a per-broiler or per-broiler annualized basis (bird × number of cycles/year). The model was designed to estimate the effects of dietary nutrient concentration on broiler live weight, feed conversion, mortality, and carcass component yield. Estimated coefficients from the step-wise regression process are subsequently used to predict the optimal ME and AA concentrations that maximize MOFC. The effects of changing feed or meat prices across a wide spectrum on optimal ME and AA levels can be evaluated via parametric analysis. The model can rapidly compare both the biological and economic implications of changing from current practice to the simulated optimal solution, and can be exploited to enhance decision making under volatile market conditions.
NASA Astrophysics Data System (ADS)
He, L.; Chen, J. M.; Liu, J.; Mo, G.; Zhen, T.; Chen, B.; Wang, R.; Arain, M.
2013-12-01
Terrestrial ecosystem models have been widely used to simulate carbon, water and energy fluxes and climate-ecosystem interactions. In these models, some vegetation and soil parameters are determined from a limited number of studies in the literature without consideration of their seasonal variations. Data assimilation (DA) provides an effective way to optimize these parameters at different time scales. In this study, an ensemble Kalman filter (EnKF) is developed and applied to optimize two key parameters of an ecosystem model, namely the Boreal Ecosystem Productivity Simulator (BEPS): (1) the maximum photosynthetic carboxylation rate (Vcmax) at 25 °C, and (2) the soil water stress factor (fw) for the stomatal conductance formulation. These parameters are optimized through assimilating observations of gross primary productivity (GPP) and latent heat (LE) fluxes measured in a 74-year-old pine forest, which is part of the Turkey Point Flux Station's age-sequence sites. Vcmax is related to leaf nitrogen concentration and varies slowly over the season and from year to year. In contrast, fw varies rapidly in response to soil moisture dynamics in the root zone. Earlier studies suggested that DA of vegetation parameters at daily time steps leads to Vcmax values that are unrealistic. To overcome this problem, we developed a three-step scheme to optimize Vcmax and fw. First, the EnKF is applied daily to obtain precursor estimates of Vcmax and fw. Then Vcmax is optimized at different time scales, assuming fw is unchanged from the first step. The best temporal period, or window size, is then determined by analyzing the magnitude of the minimized cost function and the coefficient of determination (R2) and root-mean-square deviation (RMSE) of GPP and LE between simulations and observations. Finally, the daily fw value is optimized for rain-free days corresponding to the Vcmax curve from the best window size. The optimized fw is then used to model its relationship with soil moisture.
We found that the optimized fw correlates best, linearly, with soil water content at 5 to 10 cm depth. We also found that both the temporal scale, or window size, and the a priori uncertainty of Vcmax (given as its standard deviation) are important in determining the seasonal trajectory of Vcmax. During the leaf expansion stage, an appropriate window size leads to a reasonable estimate of Vcmax. In the summer, the fluctuation of the optimized Vcmax is mainly caused by the uncertainties in Vcmax rather than the window size. Our study suggests that a smooth Vcmax curve optimized from an optimal time window size is close to reality even though the RMSE of GPP at this window is not the minimum. It also suggests that, for the accurate optimization of Vcmax, it is necessary to set appropriate levels of uncertainty of Vcmax in the spring and summer because the rate of change of leaf nitrogen concentration differs over the season. Parameter optimizations for more sites and multiple years are in progress.
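The daily analysis step of the EnKF used in the first stage can be sketched for a single parameter: the ensemble covariance between Vcmax and the predicted observation sets the Kalman gain, and each member is nudged by a perturbed observation. The linear GPP "model" below is a hypothetical stand-in for BEPS:

```python
import random

def enkf_update(ensemble, obs, obs_sd, h, seed=11):
    """Scalar ensemble Kalman filter analysis step for one parameter.
    h maps a parameter value to the predicted observation (e.g. GPP)."""
    rng = random.Random(seed)
    n = len(ensemble)
    hx = [h(x) for x in ensemble]
    xm, hm = sum(ensemble) / n, sum(hx) / n
    cov_xh = sum((x - xm) * (y - hm) for x, y in zip(ensemble, hx)) / (n - 1)
    var_h = sum((y - hm) ** 2 for y in hx) / (n - 1)
    k = cov_xh / (var_h + obs_sd ** 2)              # Kalman gain
    # perturbed-observation update of every ensemble member
    return [x + k * (obs + rng.gauss(0, obs_sd) - y)
            for x, y in zip(ensemble, hx)]

# Hypothetical linear "model": GPP = 0.3 * Vcmax, with truth Vcmax = 60,
# observed GPP = 18, and a prior ensemble centred near 50.
rng = random.Random(5)
prior = [50.0 + rng.gauss(0, 8) for _ in range(50)]
post = enkf_update(prior, obs=18.0, obs_sd=1.0, h=lambda v: 0.3 * v)
```

One such update per day yields the "precursor" Vcmax estimates; the paper's contribution is then choosing the time window over which these daily estimates are aggregated.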
Flight-Test Validation and Flying Qualities Evaluation of a Rotorcraft UAV Flight Control System
NASA Technical Reports Server (NTRS)
Mettler, Bernard; Tuschler, Mark B.; Kanade, Takeo
2000-01-01
This paper presents a process of design, flight-test validation and flying qualities evaluation of a flight control system for a rotorcraft-based unmanned aerial vehicle (RUAV). The keystone of this process is an accurate flight-dynamic model of the aircraft, derived by using system identification modeling. The model captures the most relevant dynamic features of our unmanned rotorcraft, and explicitly accounts for the presence of a stabilizer bar. Using the identified model we were able to determine the performance margins of our original control system and identify limiting factors. The performance limitations were addressed and the attitude control system was optimized for three different performance levels: slow, medium, fast. The optimized control laws will be implemented in our RUAV. We will first determine the validity of our control design approach by flight-test validating our optimized controllers. Subsequently, we will fly a series of maneuvers with the three optimized controllers to determine the level of flying qualities that can be attained. The outcome will enable us to draw important conclusions on the flying qualities requirements for small-scale RUAVs.
Liu, Guo-hai; Jiang, Hui; Xiao, Xia-hong; Zhang, Dong-juan; Mei, Cong-li; Ding, Yu-han
2012-04-01
Fourier transform near-infrared (FT-NIR) spectroscopy was investigated for determining pH, one of the key process parameters in solid-state fermentation of crop straws. First, near-infrared spectra of 140 solid-state fermented product samples were obtained with a near-infrared spectroscopy system in the wavenumber range of 10 000-4 000 cm(-1), and reference measurements of pH were made with a pH meter. Thereafter, an extreme learning machine (ELM) was employed to calibrate the model. In the calibration model, the optimal number of PCs and the optimal number of hidden-layer nodes of the ELM network were determined by cross-validation. Experimental results showed that the optimal ELM model, with a 10-40-1 topology, achieved R(p) = 0.9618 and RMSEP = 0.1044 in the prediction set. This research could provide a technological basis for on-line measurement of process parameters in solid-state fermentation.
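An ELM of the kind calibrated here is simple to sketch: hidden-layer weights are drawn at random and only the output weights are fitted, by a ridge-regularized least-squares solve. The toy one-dimensional target below stands in for the pH calibration, and the 10-node topology is arbitrary, not the paper's cross-validated optimum:

```python
import math, random

def solve(a, b):
    """Gaussian elimination with partial pivoting for small linear systems."""
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (m[i][n] - sum(m[i][j] * x[j] for j in range(i + 1, n))) / m[i][i]
    return x

def elm_fit(xs, ys, hidden=10, ridge=1e-6, seed=2):
    """Extreme learning machine: random tanh hidden layer, output weights
    from a ridge-regularized least-squares solve (normal equations)."""
    rng = random.Random(seed)
    w = [(rng.uniform(-2, 2), rng.uniform(-1, 1)) for _ in range(hidden)]
    H = [[math.tanh(a * x + b) for a, b in w] for x in xs]
    hth = [[sum(H[r][i] * H[r][j] for r in range(len(xs)))
            + (ridge if i == j else 0.0) for j in range(hidden)]
           for i in range(hidden)]
    hty = [sum(H[r][i] * ys[r] for r in range(len(xs))) for i in range(hidden)]
    beta = solve(hth, hty)
    return lambda x: sum(bb * math.tanh(a * x + b)
                         for bb, (a, b) in zip(beta, w))

xs = [i / 20.0 for i in range(21)]
ys = [x * x for x in xs]          # toy target standing in for measured pH
model = elm_fit(xs, ys)
```

Because only the output layer is trained, fitting reduces to one linear solve, which is why ELMs calibrate quickly compared with backpropagated networks.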
Impact of a Flexible Evaluation System on Effort and Timing of Study
ERIC Educational Resources Information Center
Pacharn, Parunchana; Bay, Darlene; Felton, Sandra
2012-01-01
This paper examines results of a flexible grading system that allows each student to influence the weight allocated to each performance measure. We construct a stylized model to determine students' optimal responses. Our analytical model predicts different optimal strategies for students with varying academic abilities: a frontloading strategy for…
NASA Astrophysics Data System (ADS)
Asoodeh, Mojtaba; Bagheripour, Parisa; Gholami, Amin
2015-06-01
Free fluid porosity and rock permeability, undoubtedly the most critical parameters of a hydrocarbon reservoir, can be obtained by processing the nuclear magnetic resonance (NMR) log. Unlike conventional well logs (CWLs), NMR logging is very expensive and time-consuming. Therefore, the idea of synthesizing the NMR log from CWLs holds great appeal for reservoir engineers. For this purpose, three optimization strategies are followed. First, an artificial neural network (ANN) is optimized by virtue of a hybrid genetic algorithm-pattern search (GA-PS) technique; then fuzzy logic (FL) is optimized by means of GA-PS; and eventually an alternating conditional expectation (ACE) model is constructed using the concept of a committee machine to combine the outputs of the optimized and non-optimized FL and ANN models. Results indicated that optimizing the traditional ANN and FL models using the GA-PS technique significantly enhances their performance. Furthermore, the ACE committee of the aforementioned models produces more accurate and reliable results than any single model alone.
A Decision-making Model for a Two-stage Production-delivery System in SCM Environment
NASA Astrophysics Data System (ADS)
Feng, Ding-Zhong; Yamashiro, Mitsuo
A decision-making model is developed for an optimal production policy in a two-stage production-delivery system that incorporates a fixed-quantity supply of finished goods to a buyer at a fixed interval of time. First, a general cost model is formulated considering both the supplier (of raw materials) and buyer (of finished products) sides. Then an optimal solution to the problem is derived on the basis of the cost model. Using the proposed model and its optimal solution, one can determine the optimal production lot size for each stage, the optimal number of transportations of semi-finished goods, and the optimal quantity of semi-finished goods transported each time to meet the lumpy demand of consumers. We also examine the sensitivity of raw materials ordering and production lot size to changes in ordering cost, transportation cost and manufacturing setup cost. A pragmatic computation approach for operational situations is proposed to obtain integer approximations of the solution. Finally, we give some numerical examples.
Recovering metabolic pathways via optimization.
Beasley, John E; Planes, Francisco J
2007-01-01
A metabolic pathway is a coherent set of enzyme catalysed biochemical reactions by which a living organism transforms an initial (source) compound into a final (target) compound. Some of the different metabolic pathways adopted within organisms have been experimentally determined. In this paper, we show that a number of experimentally determined metabolic pathways can be recovered by a mathematical optimization model.
Modeling joint restoration strategies for interdependent infrastructure systems.
Zhang, Chao; Kong, Jingjing; Simonovic, Slobodan P
2018-01-01
Life in the modern world depends on multiple critical services provided by infrastructure systems that are interdependent at multiple levels. To effectively respond to infrastructure failures, this paper proposes a model for developing an optimal joint restoration strategy for interdependent infrastructure systems following a disruptive event. First, models for (i) describing the structure of interdependent infrastructure systems and (ii) their interaction process are presented. Both models consider failure types, infrastructure operating rules and interdependencies among systems. Second, an optimization model is proposed for determining an optimal joint restoration strategy at the infrastructure component level by minimizing the economic loss from the infrastructure failures. The utility of the model is illustrated using a case study of electric-water systems. Results show that a small number of failed infrastructure components can trigger high-level failures in interdependent systems, and that the optimal joint restoration strategy varies with failure occurrence time. The proposed models can help decision makers understand the mechanisms of infrastructure interactions and search for an optimal joint restoration strategy, which can significantly enhance the safety of infrastructure systems.
NASA Astrophysics Data System (ADS)
Crnomarkovic, Nenad; Belosevic, Srdjan; Tomanovic, Ivan; Milicevic, Aleksandar
2017-12-01
The effects of the number of significant figures (NSF) in the interpolation polynomial coefficients (IPCs) of the weighted sum of gray gases model (WSGM) on the results of numerical investigations and on WSGM optimization were investigated. The investigation was conducted using numerical simulations of the processes inside a pulverized coal-fired furnace. The radiative properties of the gas phase were determined using the simple gray gas model (SG), a two-term WSGM (W2), and a three-term WSGM (W3). Ten sets of IPCs with the same NSF were formed for every weighting coefficient in both W2 and W3. The average and maximal relative differences of the flame temperatures, wall temperatures, and wall heat fluxes were determined. The investigation showed that the results of numerical investigations were affected by the NSF unless it exceeded a certain value. An increase in the NSF did not necessarily lead to WSGM optimization; the right combination of the NSF (CNSF) was a necessary requirement for WSGM optimization.
Jin, Junchen
2016-01-01
The shunting schedule of electric multiple units depot (SSED) is one of the essential plans for high-speed train maintenance activities. This paper presents a 0-1 programming model to address the problem of determining an optimal SSED through automatic computing. The objective of the model is to minimize the number of shunting movements, and the constraints include track occupation conflicts, shunting route conflicts, time durations of maintenance processes, and shunting running time. An enhanced particle swarm optimization (EPSO) algorithm is proposed to solve the optimization problem. Finally, an empirical study from Shanghai South EMU Depot is carried out to illustrate the model and the EPSO algorithm. The optimization results indicate that the proposed method is valid for the SSED problem and that the EPSO algorithm outperforms the traditional PSO algorithm in terms of optimality. PMID:27436998
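The particle swarm mechanics underlying methods like the EPSO above (velocities pulled toward each particle's personal best and the swarm's global best) can be illustrated with a minimal continuous-variable version. This is a generic textbook PSO on a toy function; the paper's enhanced variant and its 0-1 scheduling encoding are not reproduced here, and all parameter values are assumptions.

```python
import numpy as np

# Bare-bones particle swarm optimization (PSO) minimizing f over R^dim.
rng = np.random.default_rng(1)

def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    x = rng.uniform(-5.0, 5.0, size=(n_particles, dim))  # positions
    v = np.zeros_like(x)                                 # velocities
    pbest = x.copy()                                     # personal bests
    pbest_val = np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pbest_val)].copy()               # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # inertia + cognitive pull (pbest) + social pull (g)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        val = np.apply_along_axis(f, 1, x)
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, f(g)

best, best_val = pso(lambda z: np.sum(z ** 2), dim=3)
```

For a scheduling problem such as the SSED, positions would instead encode discrete assignment decisions, which is where enhancements like the paper's EPSO come in.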
NASA Astrophysics Data System (ADS)
Pando, V.; García-Laguna, J.; San-José, L. A.
2012-11-01
In this article, we integrate a non-linear holding cost with a stock-dependent demand rate in a maximising profit per unit time model, extending several inventory models studied by other authors. After giving the mathematical formulation of the inventory system, we prove the existence and uniqueness of the optimal policy. Relying on this result, we can obtain the optimal solution using different numerical algorithms. Moreover, we provide a necessary and sufficient condition to determine whether a system is profitable, and we establish a rule to check when a given order quantity is the optimal lot size of the inventory model. The results are illustrated through numerical examples and the sensitivity of the optimal solution with respect to changes in some values of the parameters is assessed.
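The "existence and uniqueness, then numerical search" pattern in the abstract above can be illustrated with a simple lot-size search. The profit-rate function below uses classic constant-demand, EOQ-style terms as a stand-in; it is not the article's stock-dependent, non-linear-holding-cost model, and all parameter values are assumptions.

```python
import numpy as np

# Illustrative numeric search for a profit-maximizing lot size Q.
# Profit per unit time = margin - ordering cost rate - holding cost rate.
p, c, K, h, d = 10.0, 6.0, 50.0, 0.5, 100.0  # price, cost, order, holding, demand

def profit_rate(Q):
    """Profit per unit time for lot size Q under constant demand d."""
    return (p - c) * d - K * d / Q - h * Q / 2.0

Q_grid = np.linspace(1.0, 500.0, 5000)
Q_star = Q_grid[np.argmax(profit_rate(Q_grid))]  # near the EOQ value sqrt(2Kd/h)
```

Because this stand-in profit rate is concave in Q, the grid maximum lands next to the closed-form optimum; the article's point is that once uniqueness is proved for the richer model, any such one-dimensional numerical routine can be trusted to find the true optimal lot size.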
Ebben, Matthew R; Narizhnaya, Mariya; Krieger, Ana C
2017-05-01
Numerous mathematical formulas have been developed to determine continuous positive airway pressure (CPAP) without an in-laboratory titration study. Recent studies have shown that style of CPAP mask can affect the optimal pressure requirement. However, none of the current models take mask style into account. Therefore, the goal of this study was to develop new predictive models of CPAP that take into account the style of mask interface. Data from 200 subjects with attended CPAP titrations during overnight polysomnograms using nasal masks and 132 subjects using oronasal masks were randomized and split into either a model development or validation group. Predictive models were then created in each model development group and the accuracy of the models was then tested in the model validation groups. The correlation between our new oronasal model and laboratory determined optimal CPAP was significant, r = 0.61, p < 0.001. Our nasal formula was also significantly related to laboratory determined optimal CPAP, r = 0.35, p < 0.001. The oronasal model created in our study significantly outperformed the original CPAP predictive model developed by Miljeteig and Hoffstein, z = 1.99, p < 0.05. The predictive performance of our new nasal model did not differ significantly from Miljeteig and Hoffstein's original model, z = -0.16, p < 0.90. The best predictors for the nasal mask group were AHI, lowest SaO2, and neck size, whereas the top predictors in the oronasal group were AHI and lowest SaO2. Our data show that predictive models of CPAP that take into account mask style can significantly improve the formula's accuracy. Most of the past models likely focused on model development with nasal masks (mask style used for model development was not typically reported in previous investigations) and are not well suited for patients using an oronasal interface. 
Our new oronasal CPAP prediction equation produced significantly improved performance compared to the well-known Miljeteig and Hoffstein formula in patients titrated on CPAP with an oronasal mask and was also significantly related to laboratory determined optimal CPAP.
ConvAn: a convergence analyzing tool for optimization of biochemical networks.
Kostromins, Andrejs; Mozga, Ivars; Stalidzans, Egils
2012-01-01
Dynamic models of biochemical networks are usually described as systems of nonlinear differential equations. When models are optimized for parameter estimation or for the design of new properties, mainly numerical methods are used. This causes problems of optimization predictability, as most numerical optimization methods have stochastic properties and the convergence of the objective function to the global optimum is hardly predictable. Determining a suitable optimization method and the necessary duration of optimization becomes critical when evaluating a high number of combinations of adjustable parameters or when working with large dynamic models. This task is complex due to the variety of optimization methods and software tools and the nonlinearity features of models in different parameter spaces. The software tool ConvAn was developed to analyze the statistical properties of convergence dynamics for optimization runs with a particular optimization method, model, software tool, set of optimization method parameters, and number of adjustable parameters of the model. The convergence curves can be normalized automatically to enable comparison of different methods and models on the same scale. With the help of ConvAn's biochemistry-adapted graphical user interface, it is possible to compare different optimization methods in terms of their ability to find the global optimum, or values close to it, as well as the computational time necessary to reach them. It is also possible to estimate optimization performance for different numbers of adjustable parameters. The functionality of ConvAn enables statistical assessment of the necessary optimization time depending on the required optimization accuracy. Optimization methods that are not suitable for a particular optimization task can be rejected if they have poor repeatability or convergence properties. The software ConvAn is freely available at www.biosystems.lv/convan. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
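The convergence-repeatability idea behind a tool like ConvAn can be sketched by repeating a stochastic optimizer several times, recording each run's best-so-far curve, and summarizing the spread across runs. A plain random search stands in for any stochastic method here; this is a generic sketch of the analysis, not ConvAn itself.

```python
import numpy as np

# Repeat a stochastic optimizer and collect best-so-far convergence curves.
rng = np.random.default_rng(4)

def random_search_curve(f, dim, iters=300):
    """One run of random search; returns the best objective value so far."""
    best, curve = np.inf, []
    for _ in range(iters):
        best = min(best, f(rng.uniform(-5.0, 5.0, dim)))
        curve.append(best)
    return np.array(curve)

# Ten independent runs on the same objective.
curves = np.array([random_search_curve(lambda z: np.sum(z ** 2), 2)
                   for _ in range(10)])
median_curve = np.median(curves, axis=0)  # typical convergence behaviour
spread = curves[:, -1].std()              # repeatability of the final objective
```

A large `spread` relative to the median improvement is exactly the "poor repeatability" criterion the abstract uses to reject an optimization method for a given task.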
Predictive modelling of flow in a two-dimensional intermediate-scale, heterogeneous porous media
Barth, Gilbert R.; Hill, M.C.; Illangasekare, T.H.; Rajaram, H.
2000-01-01
To better understand the role of sedimentary structures in flow through porous media, and to determine how small-scale laboratory-measured values of hydraulic conductivity relate to in situ values this work deterministically examines flow through simple, artificial structures constructed for a series of intermediate-scale (10 m long), two-dimensional, heterogeneous, laboratory experiments. Nonlinear regression was used to determine optimal values of in situ hydraulic conductivity, which were compared to laboratory-measured values. Despite explicit numerical representation of the heterogeneity, the optimized values were generally greater than the laboratory-measured values. Discrepancies between measured and optimal values varied depending on the sand sieve size, but their contribution to error in the predicted flow was fairly consistent for all sands. Results indicate that, even under these controlled circumstances, laboratory-measured values of hydraulic conductivity need to be applied to models cautiously.
Protein construct storage: Bayesian variable selection and prediction with mixtures.
Clyde, M A; Parmigiani, G
1998-07-01
Determining optimal conditions for protein storage while maintaining a high level of protein activity is an important question in pharmaceutical research. A designed experiment based on a space-filling design was conducted to understand the effects of factors affecting protein storage and to establish optimal storage conditions. Different model-selection strategies to identify important factors may lead to very different answers about optimal conditions. Uncertainty about which factors are important, or model uncertainty, can be a critical issue in decision-making. We use Bayesian variable selection methods for linear models to identify important variables in the protein storage data, while accounting for model uncertainty. We also use the Bayesian framework to build predictions based on a large family of models, rather than an individual model, and to evaluate the probability that certain candidate storage conditions are optimal.
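The model-averaging idea above (base conclusions on a family of models weighted by their posterior probabilities, rather than a single selected model) can be sketched by enumerating all predictor subsets and weighting each linear model by its BIC, a common approximation to posterior model probability. This is a schematic stand-in with synthetic data, not the authors' mixture-prior method or the protein storage experiment.

```python
import itertools
import numpy as np

# Toy Bayesian-style variable selection via BIC-weighted model averaging.
rng = np.random.default_rng(2)
n, p = 80, 4
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] - 1.5 * X[:, 2] + rng.normal(scale=0.5, size=n)  # x0, x2 matter

def bic(subset):
    """BIC of the linear model using the given predictor subset (plus intercept)."""
    Xs = np.column_stack([np.ones(n)] + [X[:, j] for j in subset])
    beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    rss = np.sum((y - Xs @ beta) ** 2)
    return n * np.log(rss / n) + Xs.shape[1] * np.log(n)

models = [s for k in range(p + 1) for s in itertools.combinations(range(p), k)]
scores = np.array([bic(s) for s in models])
weights = np.exp(-0.5 * (scores - scores.min()))
weights /= weights.sum()                      # approximate model probabilities
# Posterior inclusion probability of each predictor, averaged over all models.
incl = np.array([sum(w for s, w in zip(models, weights) if j in s)
                 for j in range(p)])
```

Predictions made by averaging over `models` with `weights`, rather than using only the single best subset, account for the model uncertainty the abstract warns about.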
Hadrich, Bilel; Akremi, Ismahen; Dammak, Mouna; Barkallah, Mohamed; Fendri, Imen; Abdelkafi, Slim
2018-04-17
Three steps are very important in order to produce microalgal lipids: (1) controlling microalgae cultivation via experimental and modeling investigations, (2) optimizing culture conditions to maximize lipids production and to determine the fatty acid profile most appropriate for biodiesel synthesis, and (3) optimizing the extraction of the lipids accumulated in the microalgal cells. First, three kinetic models, namely logistic, logistic-with-lag and modified Gompertz, were tested to fit the experimental kinetics of the Chlorella sp. microalga culture established under standard conditions. Second, response-surface methodology was used for two optimizations in this study. The first optimization was established for lipids production from the Chlorella sp. culture under different culture conditions. In fact, different levels of nitrate concentration, salinity and light intensity were applied to the culture medium in order to study their influence on lipids production and determine the fatty acid profile. The second optimization concerned the lipids extraction factors: ultrasonic time and temperature, and chloroform-methanol solvent ratio. All models (logistic, logistic-with-lag and modified Gompertz) applied to the experimental kinetics of Chlorella sp. showed very good fitting quality. The logistic model was chosen to describe the Chlorella sp. kinetics, since it yielded the best statistical criteria: a coefficient of determination of 94.36%, an adjusted coefficient of determination of 93.79% and a root mean square error of 3.685 cells·ml(-1). Nitrate concentration and the two interactions involving light intensity (nitrate concentration × light intensity, and salinity × light intensity) showed a very significant influence on lipids production in the first optimization (p < 0.05).
Yet, only the quadratic term of chloroform-methanol solvent ratio showed a significant influence on lipids extraction relative to the second step of optimization (p < 0.05). The two most abundant fatty acid methyl esters (≈72%) derived from the Chlorella sp. microalga cultured in the determined optimal conditions are: palmitic acid (C16:0) and oleic acid (C18:1) with the corresponding yields of 51.69% and 20.55% of total fatty acids, respectively. Only the nitrate deficiency and the high intensity of light can influence the microalgal lipids production. The corresponding fatty acid methyl esters composition is very suitable for biodiesel production. Lipids extraction is efficient only over long periods of time when using a solvent with a 2/1 chloroform/methanol ratio.
Application of optimal control strategies to HIV-malaria co-infection dynamics
NASA Astrophysics Data System (ADS)
Fatmawati; Windarto; Hanif, Lathifah
2018-03-01
This paper presents a mathematical model of HIV and malaria co-infection transmission dynamics. Optimal control strategies such as malaria prevention, anti-malaria and antiretroviral (ARV) treatments are incorporated into the model to reduce the co-infection. First, we studied the existence and stability of the equilibria of the presented model without control variables. The model has four equilibria, namely the disease-free equilibrium, the HIV endemic equilibrium, the malaria endemic equilibrium, and the co-infection equilibrium. We also obtain two basic reproduction ratios corresponding to the diseases. It was found that the disease-free equilibrium is locally asymptotically stable whenever the respective basic reproduction numbers are less than one. We also conducted a sensitivity analysis to determine the dominant factor controlling the transmission. Then, the optimal control theory for the model was derived analytically by using Pontryagin's Maximum Principle. Numerical simulations of the optimal control strategies are also performed to illustrate the results. From the numerical results, we conclude that the best strategy is to combine the malaria prevention and ARV treatments in order to reduce the malaria and HIV co-infection populations.
NASA Astrophysics Data System (ADS)
Idris, M. A.; Jami, M. S.; Hammed, A. M.
2017-05-01
This paper presents a statistical optimization study of the disinfection inactivation parameters of defatted Moringa oleifera seed extract on Pseudomonas aeruginosa bacterial cells. A three-level factorial design was used to estimate the optimum range, and the kinetics of the inactivation process was also determined. The inactivation process was compared against the Chick-Watson, Collins-Selleck and Hom disinfection models. The analysis of variance (ANOVA) of the statistical optimization process revealed that only contact time was significant. The optimum disinfection conditions for the seed extract were 125 mg/L, 30 minutes and 120 rpm agitation. At the optimum dose, the inactivation kinetics followed the Collins-Selleck model with a coefficient of determination (R2) of 0.6320. This study is the first of its kind to determine the inactivation kinetics of Pseudomonas aeruginosa using the defatted seed extract.
Mathematical modeling of a thermovoltaic cell
NASA Technical Reports Server (NTRS)
White, Ralph E.; Kawanami, Makoto
1992-01-01
A new type of battery named 'Vaporvolt' cell is in the early stage of its development. A mathematical model of a CuO/Cu 'Vaporvolt' cell is presented that can be used to predict the potential and the transport behavior of the cell during discharge. A sensitivity analysis of the various transport and electrokinetic parameters indicates which parameters have the most influence on the predicted energy and power density of the 'Vaporvolt' cell. This information can be used to decide which parameters should be optimized or determined more accurately through further modeling or experimental studies. The optimal thicknesses of electrodes and separator, the concentration of the electrolyte, and the current density are determined by maximizing the power density. These parameter sensitivities and optimal design parameter values will help in the development of a better CuO/Cu 'Vaporvolt' cell.
Renewable Energy Resources Portfolio Optimization in the Presence of Demand Response
DOE Office of Scientific and Technical Information (OSTI.GOV)
Behboodi, Sahand; Chassin, David P.; Crawford, Curran
In this paper we introduce a simple cost model of renewable integration and demand response that can be used to determine the optimal mix of generation and demand response resources. The model includes production cost, demand elasticity, uncertainty costs, capacity expansion costs, retirement and mothballing costs, and wind variability impacts to determine the hourly cost and revenue of electricity delivery. The model is tested on the 2024 planning case for British Columbia and we find that cost is minimized with about 31% renewable generation. We also find that demand response does not have a significant impact on cost at the hourly level. The results suggest that the optimal level of renewable resource is not sensitive to a carbon tax or demand elasticity, but it is highly sensitive to the renewable resource installation cost.
Shen, L; Levine, S H; Catchen, G L
1987-07-01
This paper describes an optimization method for determining the beta dose distribution in tissue, and it describes the associated testing and verification. The method uses electron transport theory and optimization techniques to analyze the responses of a three-element thermoluminescent dosimeter (TLD) system. Specifically, the method determines the effective beta energy distribution incident on the dosimeter system, and thus the system performs as a beta spectrometer. Electron transport theory provides the mathematical model for performing the optimization calculation. In this calculation, parameters are determined that produce calculated doses for each of the chip/absorber components in the three-element TLD system. The resulting optimized parameters describe an effective incident beta distribution. This method can be used to determine the beta dose specifically at 7 mg·cm(-2) or at any depth of interest. The doses at 7 mg·cm(-2) in tissue determined by this method are compared to those experimentally determined using an extrapolation chamber. For a great variety of pure beta sources having different incident beta energy distributions, good agreement is found. The results are also compared to those produced by a commonly used empirical algorithm. Although the optimization method produces somewhat better results, its advantage is that its performance is not sensitive to the specific method of calibration.
Dieter, Cheryl A.; Fleck, William B.
2008-01-01
Potentiometric surfaces in the Piney Point-Nanjemoy, Aquia, and Upper Patapsco aquifers have declined from 1950 through 2000 throughout southern Maryland. In the vicinity of Lexington Park, Maryland, the potentiometric surface in the Aquia aquifer in 2000 was as much as 170 feet below sea level, approximately 150 feet lower than estimated pre-pumping levels before 1940. At the present rate, the water levels will have declined to the regulatory allowable maximum of 80 percent of available drawdown in the Aquia aquifer by about 2050. The effect of the withdrawals from these aquifers by the Naval Air Station Patuxent River and surrounding users on the declining potentiometric surface has raised concern for future availability of ground water. Growth at Naval Air Station Patuxent River may increase withdrawals, resulting in further drawdown. A ground-water-flow model, combined with optimization modeling, was used to develop withdrawal scenarios that minimize the effects (drawdown) of hypothetical future withdrawals. A three-dimensional finite-difference ground-water-flow model was developed to simulate the ground-water-flow system in the Piney Point-Nanjemoy, Aquia, and Upper Patapsco aquifers beneath the Naval Air Station Patuxent River. Transient and steady-state conditions were simulated to give water-resource managers additional tools to manage the ground-water resources. The transient simulation, representing 1900 through 2002, showed that the magnitude of withdrawal has increased over that time, causing ground-water flow to change direction in some areas. The steady-state simulation was linked to an optimization model to determine optimal solutions to hypothetical water-management scenarios. Two optimization scenarios were evaluated. 
The first scenario was designed to determine the optimal pumping rates for wells screened in the Aquia aquifer within three supply groups to meet a 25-percent increase in withdrawal demands, while minimizing the drawdown at a control location. The resulting optimal solution showed that pumping six wells above the rate required for maintenance produced the least amount of drawdown in the local potentiometric surface. The second hypothetical scenario was designed to determine the optimal location for an additional well in the Aquia aquifer in the northeastern part of the main air station. The additional well was needed to meet an increase in withdrawal of 43,000 cubic feet per day. The optimization model determined the optimal location for the new well, out of a possible 10 locations, while minimizing drawdown at control nodes located outside the western boundary of the main air station. The optimal location is about 1,500 feet to the east-northeast of the existing well.
Zhu, Xiaoning
2014-01-01
Rail mounted gantry crane (RMGC) scheduling is important in reducing the makespan of handling operations and improving container handling efficiency. In this paper, we present an RMGC scheduling optimization model whose objective is to determine an optimal handling sequence that minimizes RMGC idle load time in handling tasks. An ant colony optimization algorithm is proposed to obtain near-optimal solutions. Computational experiments on a specific railway container terminal are conducted to illustrate the proposed model and solution algorithm. The results show that the proposed method is effective in reducing the idle load time of the RMGC. PMID:25538768
Probability distribution functions for unit hydrographs with optimization using genetic algorithm
NASA Astrophysics Data System (ADS)
Ghorbani, Mohammad Ali; Singh, Vijay P.; Sivakumar, Bellie; H. Kashani, Mahsa; Atre, Atul Arvind; Asadi, Hakimeh
2017-05-01
A unit hydrograph (UH) of a watershed may be viewed as the unit pulse response function of a linear system. In recent years, the use of probability distribution functions (pdfs) for determining a UH has received much attention. In this study, a nonlinear optimization model is developed to transmute a UH into a pdf. The potential of six popular pdfs, namely the two-parameter gamma, two-parameter Gumbel, two-parameter log-normal, two-parameter normal, three-parameter Pearson, and two-parameter Weibull distributions, is tested on data from the Lighvan catchment in Iran. The probability distribution parameters are determined using the nonlinear least squares optimization method in two ways: (1) optimization by programming in Mathematica; and (2) optimization by applying a genetic algorithm. The results are compared with those obtained by the traditional linear least squares method. The results show comparable capability and performance of the two nonlinear methods. The gamma and Pearson distributions are the most successful models in preserving the rising and recession limbs of the unit hydrographs. The log-normal distribution has a high ability in predicting both the peak flow and time to peak of the unit hydrograph. The nonlinear optimization method does not outperform the linear least squares method in determining the UH (especially for excess rainfall of one pulse), but is comparable.
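The core step above, representing a UH by a two-parameter gamma pdf and fitting the parameters by nonlinear least squares, can be sketched as follows. The synthetic "observed" ordinates and the grid search (used here in place of Mathematica's solver or a genetic algorithm) are illustrative assumptions.

```python
import numpy as np
from math import gamma

# Two-parameter gamma pdf as a unit hydrograph shape: shape n, scale k.
def gamma_uh(t, n, k):
    """Gamma pdf: t^(n-1) * exp(-t/k) / (k^n * Gamma(n))."""
    return t ** (n - 1) * np.exp(-t / k) / (k ** n * gamma(n))

t = np.linspace(0.1, 24.0, 100)          # time ordinates (hours, assumed)
observed = gamma_uh(t, 3.0, 2.0)         # synthetic "observed" UH ordinates

# Least-squares fit of (n, k) by exhaustive grid search.
best = min(((n, k) for n in np.arange(1.5, 5.0, 0.1)
            for k in np.arange(0.5, 4.0, 0.1)),
           key=lambda params: np.sum((gamma_uh(t, *params) - observed) ** 2))
```

Because the gamma pdf integrates to one, the fitted curve automatically conserves unit runoff volume, which is one reason the gamma and Pearson shapes preserve the rising and recession limbs well.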
Extreme Learning Machine and Particle Swarm Optimization in optimizing CNC turning operation
NASA Astrophysics Data System (ADS)
Janahiraman, Tiagrajah V.; Ahmad, Nooraziah; Hani Nordin, Farah
2018-04-01
The CNC machine is controlled by manipulating cutting parameters that directly influence process performance. Many optimization methods have been applied to obtain the optimal cutting parameters for a desired performance function. Nonetheless, industry still uses traditional techniques to obtain those values; lack of knowledge of optimization techniques is the main reason this issue persists. Therefore, the simple yet easy-to-implement Optimal Cutting Parameters Selection System is introduced to help manufacturers easily understand and determine the best optimal parameters for their turning operations. This new system consists of two stages: modelling and optimization. For modelling of input-output and in-process parameters, a hybrid of the Extreme Learning Machine and Particle Swarm Optimization is applied. This modelling technique tends to converge faster than other artificial intelligence techniques and gives accurate results. For the optimization stage, Particle Swarm Optimization is again used to obtain the optimal cutting parameters based on the performance function preferred by the manufacturer. Overall, the system can reduce the gap between academia and industry by introducing a simple yet easy-to-implement optimization technique that gives accurate results and converges quickly.
NASA Astrophysics Data System (ADS)
Widhiarso, Wahyu; Rosyidi, Cucuk Nur
2018-02-01
Minimizing production cost in a manufacturing company will increase the company's profit. The cutting parameters affect total processing time, which in turn affects the production cost of the machining process. Besides affecting the production cost and processing time, the cutting parameters also affect the environment. An optimization model is needed to determine the optimum cutting parameters. In this paper, we develop a multi-objective optimization model to minimize the production cost and the environmental impact in a CNC turning process. Cutting speed and feed rate serve as the decision variables. Constraints considered are cutting speed, feed rate, cutting force, output power, and surface roughness. The environmental impact is converted from the environmental burden by using Eco-indicator 99. A numerical example is given to show the implementation of the model, solved using OptQuest of the Oracle Crystal Ball software. The optimization results indicate that the model can be used to optimize the cutting parameters to minimize both the production cost and the environmental impact.
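One common way to handle two objectives like cost and environmental impact is weighted-sum scalarization over the feasible region of the decision variables. The sketch below does this over cutting speed `v` and feed rate `f`; the cost and impact expressions, bounds, and the single power-style constraint are all illustrative assumptions, not the paper's model or its Eco-indicator 99 conversion.

```python
import numpy as np

# Hypothetical weighted-sum scalarization of two machining objectives.
def cost(v, f):
    """Stand-in production cost: machining time falls with v*f, wear grows with v."""
    return 100.0 / (v * f) + 0.002 * v ** 1.5

def impact(v, f):
    """Stand-in environmental impact: energy grows with speed and machining time."""
    return 0.01 * v + 50.0 / (v * f)

w = 0.5                                   # weight between the two objectives
V, F = np.meshgrid(np.linspace(50.0, 300.0, 200),   # cutting speed grid
                   np.linspace(0.05, 0.5, 200))     # feed rate grid
feasible = V * F < 100.0                  # stand-in machine-power constraint
score = np.where(feasible, w * cost(V, F) + (1.0 - w) * impact(V, F), np.inf)
i, j = np.unravel_index(np.argmin(score), score.shape)
v_opt, f_opt = V[i, j], F[i, j]
```

Sweeping `w` between 0 and 1 traces out trade-off solutions, which is how a decision maker would weigh cost against environmental impact in practice.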
Hannan, M A; Akhtar, Mahmuda; Begum, R A; Basri, H; Hussain, A; Scavino, Edgar
2018-01-01
Waste collection depends heavily on route optimization, which involves large expenditures in terms of capital, labor, and variable operational costs. Thus, the more the waste collection route is optimized, the greater the reduction in these costs and in environmental effects. This study proposes a modified particle swarm optimization (PSO) algorithm in a capacitated vehicle-routing problem (CVRP) model to determine the best waste collection and route optimization solutions. Threshold waste level (TWL) and scheduling concepts are applied in the PSO-based CVRP model under different datasets. The results from the different datasets show that the proposed algorithmic CVRP model provides the best waste collection and route optimization in terms of travel distance, total waste, waste collection efficiency, and tightness at 70-75% of TWL. The results for one-week scheduling show that a 70% TWL performs better than considering all nodes in terms of collected waste, distance, tightness, efficiency, fuel consumption, and cost. The proposed optimized model can serve as a valuable tool for waste collection and route optimization toward reducing socioeconomic and environmental impacts. Copyright © 2017 Elsevier Ltd. All rights reserved.
Complex optimization for big computational and experimental neutron datasets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bao, Feng; Oak Ridge National Lab.; Archibald, Richard; ...
2016-11-07
Here, we present a framework to use high performance computing to determine accurate solutions to the inverse optimization problem of big experimental data against computational models. We demonstrate how image processing, mathematical regularization, and hierarchical modeling can be used to solve complex optimization problems on big data. We also demonstrate how both model and data information can be used to further increase solution accuracy of optimization by providing confidence regions for the processing and regularization algorithms. Finally, we use the framework in conjunction with the software package SIMPHONIES to analyze results from neutron scattering experiments on silicon single crystals, and refine first principles calculations to better describe the experimental data.
Surveillance versus Reconnaissance: An Entropy Based Model
2012-03-22
…sensor detection since no new information is received (Berry, Pontecorvo, & Fogg, Optimal Search, Location and Tracking of Surface Maritime Targets, July 2003)… The model by Berry, Pontecorvo and Fogg (July 2003) facilitates the optimal solutions to dynamically determining the allocation and… (Berry, Pontecorvo, & Fogg, July 2003). Phase II: Locate. During the locate phase, the objective was to determine the location of the targets…
An optimal control model approach to the design of compensators for simulator delay
NASA Technical Reports Server (NTRS)
Baron, S.; Lancraft, R.; Caglayan, A.
1982-01-01
The effects of display delay on pilot performance and workload and of the design of the filters to ameliorate these effects were investigated. The optimal control model for pilot/vehicle analysis was used both to determine the potential delay effects and to design the compensators. The model was applied to a simple roll tracking task and to a complex hover task. The results confirm that even small delays can degrade performance and impose a workload penalty. A time-domain compensator designed by using the optimal control model directly appears capable of providing extensive compensation for these effects even in multi-input, multi-output problems.
Mixture optimization for mixed gas Joule-Thomson cycle
NASA Astrophysics Data System (ADS)
Detlor, J.; Pfotenhauer, J.; Nellis, G.
2017-12-01
An appropriate gas mixture can provide lower temperatures and higher cooling power when used in a Joule-Thomson (JT) cycle than is possible with a pure fluid. However, selecting gas mixtures to meet specific cooling loads and cycle parameters is a challenging design problem. This study focuses on the development of a computational tool to optimize gas mixture compositions for specific operating parameters. This study expands on prior research by exploring higher heat rejection temperatures and lower pressure ratios. A mixture optimization model has been developed which determines an optimal three-component mixture by maximizing the minimum value of the isothermal enthalpy change, Δh_T, that occurs over the temperature range. This allows optimal mixture compositions to be determined for a mixed gas JT system with load temperatures down to 110 K and supply temperatures above room temperature for pressure ratios as small as 3:1. The mixture optimization model has been paired with a separate evaluation of the fraction of the heat exchanger that operates in the two-phase region in order to begin the process of selecting a mixture for experimental investigation.
NASA Astrophysics Data System (ADS)
Kristiana, S. P. D.
2017-12-01
The corporate chain store is one type of retail company that is growing rapidly in Indonesia. Competition between retail companies is very tight, so retailers should evaluate their performance continuously in order to survive. The selling price of a product is one of its essential attributes, receives attention from many consumers, and is often used to evaluate the performance of the industry. This research aimed to determine the optimal selling price of a product while considering three cost factors: the purchase price of the product from the supplier, holding costs, and transportation costs. A fuzzy logic approach implemented in MATLAB is used for data processing; fuzzy logic was selected because it can handle complex interacting factors. The result is a model for determining the optimal selling price that takes the three cost factors as inputs. MAPE and the model's prediction ability for several products were used for validation and verification, with average values of 0.0525 for MAPE and 94.75% for prediction ability. The conclusion is that the model can predict the selling price with an accuracy of up to 94.75%, so it can be used as a tool for corporate chain stores to determine the optimal selling prices of their products.
NASA Technical Reports Server (NTRS)
Esparza, V.
1975-01-01
Experimental aerodynamic investigations were conducted in the Arnold Engineering Development Center (AEDC) Von Karman Facility Tunnel A on a scale model of the space shuttle orbiter. The objectives of this test were: (1) determine supersonic differential elevon/aileron lateral control optimization, (2) determine supersonic elevon hinge moments, (3) determine the supersonic effects of the new baseline 6-inch elevon/elevon and elevon/fuselage gaps, and (4) determine the supersonic effects of the new short (VL70-008410) OMS pods. Six-component aerodynamic force, moment, and elevon hinge moment data were recorded.
NASA Astrophysics Data System (ADS)
Di, Zhenhua; Duan, Qingyun; Wang, Chen; Ye, Aizhong; Miao, Chiyuan; Gong, Wei
2018-03-01
Forecasting skills of complex weather and climate models have been improved by tuning the sensitive parameters that exert the greatest impact on simulated results using effective optimization methods. However, whether the optimal parameter values still work when the model simulation conditions vary remains a scientific question deserving of study. In this study, a highly effective optimization method, adaptive surrogate model-based optimization (ASMO), was first used to tune nine sensitive parameters from four physical parameterization schemes of the Weather Research and Forecasting (WRF) model to obtain better summer precipitation forecasting over the Greater Beijing Area in China. Then, to assess the applicability of the optimal parameter values, simulation results from the WRF model with default and optimal parameter values were compared across precipitation events, boundary conditions, spatial scales, and physical processes in the Greater Beijing Area. Summer precipitation events from 6 years were used to calibrate and evaluate the optimal parameter values of the WRF model. Three boundary datasets and two spatial resolutions were adopted to evaluate the superiority of the calibrated optimal parameters over the default parameters under WRF simulations with different boundary conditions and spatial resolutions. Physical interpretations of the optimal parameters, indicating how they improve precipitation simulation results, were also examined. All the results showed that the optimal parameters obtained by ASMO are superior to the default parameters for predicting summer precipitation in the Greater Beijing Area because the optimal parameters are not constrained by specific precipitation events, boundary conditions, or spatial resolutions.
The optimal values of the nine parameters were determined from only 127 parameter samples, showing that the ASMO method is highly efficient for optimizing WRF model parameters.
NASA Astrophysics Data System (ADS)
Koziel, Slawomir; Bekasiewicz, Adrian
2018-02-01
In this article, a simple yet efficient and reliable technique for fully automated multi-objective design optimization of antenna structures using sequential domain patching (SDP) is discussed. The optimization procedure according to SDP is a two-step process: (i) obtaining the initial set of Pareto-optimal designs representing the best possible trade-offs between considered conflicting objectives, and (ii) Pareto set refinement for yielding the optimal designs at the high-fidelity electromagnetic (EM) simulation model level. For the sake of computational efficiency, the first step is realized at the level of a low-fidelity (coarse-discretization) EM model by sequential construction and relocation of small design space segments (patches) in order to create a path connecting the extreme Pareto front designs obtained beforehand. The second stage involves response correction techniques and local response surface approximation models constructed by reusing EM simulation data acquired in the first step. A major contribution of this work is an automated procedure for determining the patch dimensions. It allows for appropriate selection of the number of patches for each geometry variable so as to ensure reliability of the optimization process while maintaining its low cost. The importance of this procedure is demonstrated by comparing it with uniform patch dimensions.
Eggimann, Sven; Truffer, Bernhard; Maurer, Max
2015-11-01
The strong reliance of most utility services on centralised network infrastructures is becoming increasingly challenged by new technological advances in decentralised alternatives. However, not enough effort has been made to develop planning tools designed to address the implications of these new opportunities and to determine the optimal degree of centralisation of these infrastructures. We introduce a planning tool for sustainable network infrastructure planning (SNIP), a two-step techno-economic heuristic modelling approach based on shortest path-finding and hierarchical-agglomerative clustering algorithms to determine the optimal degree of centralisation in the field of wastewater management. This SNIP model optimises the distribution of wastewater treatment plants and the sewer network layout relative to several cost and sewer-design parameters. Moreover, it allows us to construct alternative optimal wastewater system designs taking into account topography, economies of scale as well as the full size range of wastewater treatment plants. We quantify and confirm that the optimal degree of centralisation decreases with increasing terrain complexity and settlement dispersion while showing that the effect of the latter exceeds that of topography. Case study results for a Swiss community indicate that the calculated optimal degree of centralisation is substantially lower than the current level. Copyright © 2015 Elsevier Ltd. All rights reserved.
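The shortest path-finding step of such a heuristic is not specified in detail in the abstract; purely as an illustration, a standard Dijkstra search over a weighted graph (node names and edge costs invented, with weights standing in for pipe-construction costs) could look like:

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path costs from `source` in a weighted directed graph given
    as {node: [(neighbour, edge_cost), ...]}."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry, already found a shorter path
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Toy sewer network rooted at a hypothetical treatment plant
net = {
    "plant": [("a", 4.0), ("b", 1.0)],
    "b": [("a", 2.0), ("c", 5.0)],
    "a": [("c", 1.0)],
}
costs = dijkstra(net, "plant")   # cheapest route to "c" goes plant -> b -> a -> c
```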
NASA Astrophysics Data System (ADS)
Ryzhikov, I. S.; Semenkin, E. S.; Akhmedova, Sh A.
2017-02-01
A novel order reduction method for linear time-invariant systems is described. The method reduces the initial problem to an optimization problem, using the proposed model representation, and solves it with an efficient optimization algorithm. The proposed representation allows all parameters of the lower-order model to be identified and, by construction, provides the model with the required steady state. As a powerful optimization tool, the meta-heuristic Co-Operation of Biology-Related Algorithms was used. Experimental results showed that the proposed approach outperforms other approaches and that the reduced-order model achieves a high level of accuracy.
Enhanced index tracking modelling in portfolio optimization
NASA Astrophysics Data System (ADS)
Lam, W. S.; Hj. Jaaman, Saiful Hafizah; Ismail, Hamizun bin
2013-09-01
Enhanced index tracking is a popular form of passive fund management in the stock market. It is a dual-objective optimization problem: a trade-off between maximizing the mean return and minimizing the risk. Enhanced index tracking aims to generate excess return over that achieved by the index, without purchasing all of the stocks that make up the index, by establishing an optimal portfolio. The objective of this study is to determine the optimal portfolio composition and performance using a weighted model in enhanced index tracking. The weighted model focuses on the trade-off between the excess return and the risk. The results show that the optimal portfolio for the weighted model is able to outperform the Malaysian market index, the Kuala Lumpur Composite Index, with a higher mean return and lower risk, without purchasing all the stocks in the market index.
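A weighted model of this kind scalarizes the two objectives into a single score, λ·(portfolio mean) − (1 − λ)·(portfolio variance). A minimal two-asset sketch, with invented return and covariance numbers rather than the study's Malaysian market data:

```python
def weighted_objective(w, lam, mu, cov):
    """lam * portfolio mean - (1 - lam) * portfolio variance for weights w."""
    mean = sum(wi * mi for wi, mi in zip(w, mu))
    var = sum(w[i] * w[j] * cov[i][j]
              for i in range(len(w)) for j in range(len(w)))
    return lam * mean - (1 - lam) * var

mu = [0.10, 0.05]                    # hypothetical expected returns
cov = [[0.04, 0.01], [0.01, 0.01]]   # hypothetical covariance matrix
lam = 0.5                            # equal weight on return and risk

# Grid search over the weight of asset 1 (weights sum to 1, no short selling);
# for these numbers the analytic optimum is w = 5/6
best_w = max((i / 100 for i in range(101)),
             key=lambda a: weighted_objective([a, 1 - a], lam, mu, cov))
```

With more assets the same objective would be handed to a quadratic programming solver rather than a grid search.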
A rational eating model of binges, diets and obesity.
Dragone, Davide
2009-07-01
This paper addresses the rapid diffusion of obesity and the existence of different individual patterns of food consumption between non-dieters and chronic dieters. I propose a rational eating model where a forward-looking agent optimizes the intertemporal satisfaction from eating, taking into account the cost of changing consumption habits and the negative health consequences of having a non-optimal body weight. Consistent with the evidence, I show that the intertemporal maximization problem leads to a condition of overweightness, and that heterogeneity in the individual relevance of habits in consumption can determine the observed differences in the individual intertemporal patterns of food consumption and body weight. Sufficient conditions for determining when the convergence to the steady state implies oscillations or is monotonic are given. In the former case, the agent optimally alternates diets and binges until the steady state is reached, in the latter a regular intertemporal pattern of food consumption is optimal.
optBINS: Optimal Binning for histograms
NASA Astrophysics Data System (ADS)
Knuth, Kevin H.
2018-03-01
optBINS (optimal binning) determines the optimal number of bins in a uniform bin-width histogram by deriving the posterior probability for the number of bins in a piecewise-constant density model after assigning a multinomial likelihood and a non-informative prior. The maximum of the posterior probability occurs at a point where the prior probability and the joint likelihood are balanced. The interplay between these opposing factors effectively implements Occam's razor by selecting the simplest model that best describes the data.
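The posterior described above has a known closed form up to an additive constant for M equal-width bins with counts n_k: log p ∝ N log M + lnΓ(M/2) − M lnΓ(1/2) − lnΓ(N + M/2) + Σ_k lnΓ(n_k + 1/2). A small sketch following that formula:

```python
import math
import random

def log_posterior(data, m):
    """Relative log posterior for m equal-width bins over the data range
    (piecewise-constant density, multinomial likelihood, Jeffreys-type prior)."""
    n = len(data)
    lo, hi = min(data), max(data)
    counts = [0] * m
    for x in data:
        k = min(int((x - lo) / (hi - lo) * m), m - 1)  # clamp x == hi into last bin
        counts[k] += 1
    return (n * math.log(m)
            + math.lgamma(m / 2) - m * math.lgamma(0.5)
            - math.lgamma(n + m / 2)
            + sum(math.lgamma(c + 0.5) for c in counts))

rng = random.Random(1)
sample = [rng.gauss(0.0, 1.0) for _ in range(500)]
m_best = max(range(1, 31), key=lambda m: log_posterior(sample, m))
```

The maximizing `m_best` is the optimal bin count; scanning a modest range of M is cheap because each evaluation is a single histogram pass.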
Optimal inventories for overhaul of repairable redundant systems - A Markov decision model
NASA Technical Reports Server (NTRS)
Schaefer, M. K.
1984-01-01
A Markovian decision model was developed to calculate the optimal inventory of repairable spare parts for an avionics control system for commercial aircraft. Total expected shortage costs, repair costs, and holding costs are minimized for a machine containing a single system of redundant parts. Transition probabilities are calculated for each repair state and repair rate, and optimal spare parts inventory and repair strategies are determined through linear programming. The linear programming solutions are given in a table.
Solving the optimal attention allocation problem in manual control
NASA Technical Reports Server (NTRS)
Kleinman, D. L.
1976-01-01
Within the context of the optimal control model of human response, analytic expressions for the gradients of closed-loop performance metrics with respect to human operator attention allocation are derived. These derivatives serve as the basis for a gradient algorithm that determines the optimal attention that a human should allocate among several display indicators in a steady-state manual control task. The human modeling techniques are applied to study the hover control task for a CH-46 VTOL flight tested by NASA.
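The paper's gradients come from the optimal control model itself, but the general idea of gradient-based attention allocation can be sketched with a toy cost: attention fractions f on a simplex and an assumed performance cost J(f) = Σ c_i / f_i (cost rises as an indicator gets less attention), which has the closed-form optimum f_i ∝ √c_i. The sketch enforces the simplex constraint with a softmax parametrization and uses finite-difference gradients:

```python
import math

def allocate_attention(c, lr=0.01, iters=5000, eps=1e-6):
    """Gradient descent on softmax-parametrized attention fractions,
    minimizing the toy cost J(f) = sum(c_i / f_i) subject to sum(f) = 1."""
    theta = [0.0] * len(c)

    def fractions(th):
        mx = max(th)
        e = [math.exp(t - mx) for t in th]
        s = sum(e)
        return [x / s for x in e]

    def cost(th):
        return sum(ci / fi for ci, fi in zip(c, fractions(th)))

    for _ in range(iters):
        grad = []
        for j in range(len(theta)):          # central finite differences
            theta[j] += eps
            up = cost(theta)
            theta[j] -= 2 * eps
            down = cost(theta)
            theta[j] += eps
            grad.append((up - down) / (2 * eps))
        theta = [t - lr * g for t, g in zip(theta, grad)]
    return fractions(theta)

f = allocate_attention([1.0, 4.0, 9.0])
# closed-form optimum: f proportional to sqrt(c), i.e. [1/6, 2/6, 3/6]
```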
Using Biomechanical Optimization To Interpret Dancers’ Pose Selection For A Partnered Spin
2009-05-06
…optimized performance of a straight-arm backward longswing on the still rings in men's artistic gymnastics… Because gymnasts lose points for excessive swing at… an actual performance and used that as the basis for their search. Yeadon determined that with timing within 15 ms, gymnasts can minimize their excess… are moving in an optimal way. 2.5 Body Modeling; 2.5.1 Building the Body: in his study involving gymnasts on the rings, Yeadon developed a body model com…
NASA Astrophysics Data System (ADS)
Vasil'ev, E. N.
2017-09-01
A mathematical model has been proposed for analyzing and optimizing thermoelectric cooling regimes for heat-loaded elements of engineering and electronic devices. The model based on analytic relations employs the working characteristics of thermoelectric modules as the initial data and makes it possible to determine the temperature regime and the optimal values of the feed current for the modules taking into account the thermal resistance of the heat-spreading system.
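The working characteristics used as inputs are not reproduced in the abstract, but the standard thermoelectric-module heat balance they rest on is well known: cooling power Q_c(I) = αT_cI − ½I²R − KΔT, maximized at I_opt = αT_c/R. A sketch with invented module parameters:

```python
def cooling_power(I, alpha=0.05, R=2.0, K=0.5, Tc=280.0, Th=300.0):
    """Net cooling power (W) of a thermoelectric module at feed current I:
    Peltier cooling minus half the Joule heating minus back-conduction.
    All parameter values here are hypothetical."""
    return alpha * Tc * I - 0.5 * I**2 * R - K * (Th - Tc)

# Numeric sweep over feed current; the analytic optimum is
# I_opt = alpha * Tc / R = 0.05 * 280 / 2 = 7.0 A
currents = [i / 100 for i in range(0, 1201)]
I_best = max(currents, key=cooling_power)
Q_best = cooling_power(I_best)
```

The sweep form generalizes directly when the thermal resistance of the heat-spreading system makes T_h itself a function of the rejected heat, which is where a closed-form optimum no longer suffices.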
Kazmier, Kelli; Alexander, Nathan S.; Meiler, Jens; Mchaourab, Hassane S.
2010-01-01
A hybrid protein structure determination approach combining sparse Electron Paramagnetic Resonance (EPR) distance restraints and Rosetta de novo protein folding has been previously demonstrated to yield high quality models (Alexander et al., 2008). However, widespread application of this methodology to proteins of unknown structures is hindered by the lack of a general strategy to place spin label pairs in the primary sequence. In this work, we report the development of an algorithm that optimally selects spin labeling positions for the purpose of distance measurements by EPR. For the α-helical subdomain of T4 lysozyme (T4L), simulated restraints that maximize sequence separation between the two spin labels while simultaneously ensuring pairwise connectivity of secondary structure elements yielded vastly improved models by Rosetta folding. 50% of all these models have the correct fold compared to only 21% and 8% correctly folded models when randomly placed restraints or no restraints are used, respectively. Moreover, the improvements in model quality require a limited number of optimized restraints, the number of which is determined by the pairwise connectivities of T4L α-helices. The predicted improvement in Rosetta model quality was verified by experimental determination of distances between spin labels pairs selected by the algorithm. Overall, our results reinforce the rationale for the combined use of sparse EPR distance restraints and de novo folding. By alleviating the experimental bottleneck associated with restraint selection, this algorithm sets the stage for extending computational structure determination to larger, traditionally elusive protein topologies of critical structural and biochemical importance. PMID:21074624
NASA Astrophysics Data System (ADS)
Yang, Huanhuan; Gunzburger, Max
2017-06-01
Simulation-based optimization of acoustic liner design in a turbofan engine nacelle for noise reduction purposes can dramatically reduce the cost and time needed for experimental designs. Because uncertainties are inevitable in the design process, a stochastic optimization algorithm is posed based on the conditional value-at-risk measure so that an ideal acoustic liner impedance is determined that is robust in the presence of uncertainties. A parallel reduced-order modeling framework is developed that dramatically improves the computational efficiency of the stochastic optimization solver for a realistic nacelle geometry. The reduced stochastic optimization solver takes less than 500 seconds to execute. In addition, well-posedness and finite element error analyses of the state system and optimization problem are provided.
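The conditional value-at-risk measure used here for robustness can be computed for an empirical sample via the Rockafellar–Uryasev form CVaR_α = min_t { t + E[(L − t)₊]/(1 − α) }; a minimal generic sketch, not tied to the acoustic model:

```python
def cvar(losses, alpha):
    """Empirical conditional value-at-risk via the Rockafellar-Uryasev
    minimization; for a finite sample the minimum is attained at a
    sample point, so it suffices to scan the sorted losses."""
    n = len(losses)

    def objective(t):
        return t + sum(max(l - t, 0.0) for l in losses) / ((1 - alpha) * n)

    return min(objective(t) for t in sorted(losses))

sample = [float(i) for i in range(1, 11)]   # toy losses 1..10
# At alpha = 0.9 the tail is the single worst loss, so CVaR = 10
c90 = cvar(sample, 0.9)
```

In a stochastic optimization loop, this quantity (evaluated over sampled uncertain parameters) replaces the plain expected objective.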
Ruys, Andrew J.
2018-01-01
Electrospun fibres have gained broad interest in biomedical applications, including tissue engineering scaffolds, due to their potential in mimicking extracellular matrix and producing structures favourable for cell and tissue growth. The development of scaffolds often involves multivariate production parameters and multiple output characteristics to define product quality. In this study on electrospinning of polycaprolactone (PCL), response surface methodology (RSM) was applied to investigate the determining parameters and find optimal settings to achieve the desired properties of fibrous scaffold for acetabular labrum implant. The results showed that solution concentration influenced fibre diameter, while elastic modulus was determined by solution concentration, flow rate, temperature, collector rotation speed, and interaction between concentration and temperature. Relationships between these variables and outputs were modelled, followed by an optimization procedure. Using the optimized setting (solution concentration of 10% w/v, flow rate of 4.5 mL/h, temperature of 45 °C, and collector rotation speed of 1500 RPM), a target elastic modulus of 25 MPa could be achieved at a minimum possible fibre diameter (1.39 ± 0.20 µm). This work demonstrated that multivariate factors of production parameters and multiple responses can be investigated, modelled, and optimized using RSM. PMID:29562614
Moghri, Mehdi; Omidi, Mostafa; Farahnakian, Masoud
2014-01-01
During the past decade, polymer nanocomposites attracted considerable investment in research and development worldwide. One of the key factors that affect the quality of polymer nanocomposite products in machining is surface roughness. To obtain high-quality products and reduce machining costs, it is very important to determine the optimal machining conditions so as to achieve enhanced machining performance. The objective of this paper is to develop a predictive model using a combined design-of-experiments and artificial intelligence approach for optimization of surface roughness in milling of polyamide-6 (PA-6) nanocomposites. A surface roughness predictive model was developed in terms of the milling parameters (spindle speed and feed rate) and nanoclay (NC) content using an artificial neural network (ANN). As the present study deals with a relatively small number of data points obtained from a full factorial design, applying a genetic algorithm (GA) for ANN training is an appropriate approach for developing an accurate and robust ANN model. In the optimization phase, a GA is used in conjunction with the explicit nonlinear function derived from the ANN to determine the optimal milling parameters that minimize surface roughness for each PA-6 nanocomposite. PMID:24578636
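The GA-over-surrogate optimization loop can be sketched generically. Here a made-up quadratic stands in for the trained ANN's prediction of roughness Ra; the real surrogate would be the network's output function:

```python
import random

def genetic_minimize(f, bounds, pop_size=30, gens=80, seed=3):
    """Minimize f over box `bounds` with a small elitist genetic algorithm."""
    rng = random.Random(seed)

    def rand_ind():
        return [rng.uniform(lo, hi) for lo, hi in bounds]

    pop = [rand_ind() for _ in range(pop_size)]
    for g in range(gens):
        scored = sorted(pop, key=f)
        elite = scored[: pop_size // 4]          # elitism: keep the best quarter
        children = list(elite)
        while len(children) < pop_size:
            a, b = rng.sample(elite, 2)          # parents drawn from the elite
            child = [(x + y) / 2 for x, y in zip(a, b)]      # blend crossover
            for d, (lo, hi) in enumerate(bounds):            # shrinking mutation
                child[d] += rng.gauss(0, 0.1 * (hi - lo) / (1 + g / 10))
                child[d] = min(max(child[d], lo), hi)
            children.append(child)
        pop = children
    return min(pop, key=f)

# Hypothetical roughness surrogate: minimum Ra = 0.5 at (3000 rpm, 0.1 mm/rev)
ra = lambda p: (0.5 + 0.3 * ((p[0] - 3000) / 1000) ** 2
                    + 0.2 * ((p[1] - 0.1) / 0.05) ** 2)
best = genetic_minimize(ra, [(1000, 5000), (0.02, 0.3)])
```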
The optimal inventory policy for EPQ model under trade credit
NASA Astrophysics Data System (ADS)
Chung, Kun-Jen
2010-09-01
Huang and Huang [(2008), 'Optimal Inventory Replenishment Policy for the EPQ Model Under Trade Credit without Derivatives', International Journal of Systems Science, 39, 539-546] use the algebraic method to determine the optimal inventory replenishment policy for the retailer in the extended model under trade credit. However, the algebraic method has limited applicability, such that the validity of the proofs of Theorems 1-4 in Huang and Huang (2008) is questionable. The main purpose of this article is not only to point out these shortcomings but also to present accurate proofs for Huang and Huang (2008).
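For context, the classic EPQ lot size that these trade-credit extensions build on minimizes setup plus holding cost per unit time, giving Q* = √(2DS / (H(1 − D/p))); the trade-credit terms modify the objective but not this basic structure. A quick sketch with invented parameters and no trade-credit term:

```python
import math

def epq(D, S, H, p):
    """Classic economic production quantity: demand rate D, setup cost S,
    holding cost H per unit per period, finite production rate p."""
    return math.sqrt(2 * D * S / (H * (1 - D / p)))

def annual_cost(Q, D, S, H, p):
    """Setup cost plus average inventory holding cost per period."""
    return D / Q * S + 0.5 * H * (1 - D / p) * Q

# Hypothetical data: 1000 units/yr demand, $100 setup, $5/unit/yr holding,
# production rate 4000 units/yr  ->  Q* ~ 230.94 units
Q_star = epq(D=1000, S=100, H=5, p=4000)
```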
Bifurcation Analysis and Optimal Harvesting of a Delayed Predator-Prey Model
NASA Astrophysics Data System (ADS)
Tchinda Mouofo, P.; Djidjou Demasse, R.; Tewa, J. J.; Aziz-Alaoui, M. A.
A delayed predator-prey model is formulated with continuous threshold prey harvesting and a Holling type III response function. Global qualitative and bifurcation analyses are combined to determine the global dynamics of the model. The positive invariance of the non-negative orthant is proved, as is the uniform boundedness of the trajectories. Stability of equilibria is investigated and the existence of local bifurcations is established: a saddle-node bifurcation and a Hopf bifurcation. We use optimal control theory to provide the correct approach to natural resource management. Results are also obtained for optimal harvesting. Numerical simulations are given to illustrate the results.
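The paper's exact equations are not reproduced in the abstract; purely as an illustration of the model class, a delayed predator–prey system with a Holling type III response (x²/(1 + x²)) and no harvesting term can be integrated by a simple Euler scheme with a history buffer for the delayed term (all parameter values invented):

```python
def simulate(tau=1.0, d=0.3, c=1.0, dt=0.01, T=50.0, x0=0.5, y0=0.5):
    """Euler integration of an illustrative delayed predator-prey system:
       x' = x(1 - x) - x^2 y / (1 + x^2)                        (prey)
       y' = y(-d + c x(t-tau)^2 / (1 + x(t-tau)^2))  (predator, delayed)"""
    lag = round(tau / dt)
    steps = round(T / dt)
    xs, ys = [x0], [y0]
    for n in range(steps):
        x, y = xs[-1], ys[-1]
        x_del = xs[n - lag] if n >= lag else x0   # constant history before t = 0
        holling = x * x / (1 + x * x)
        dx = x * (1 - x) - holling * y
        dy = y * (-d + c * x_del * x_del / (1 + x_del * x_del))
        xs.append(x + dt * dx)
        ys.append(y + dt * dy)
    return xs, ys

xs, ys = simulate()
```

For serious work a dedicated delay-differential-equation integrator would replace the fixed-step Euler scheme.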
NASA Astrophysics Data System (ADS)
Pradanti, Paskalia; Hartono
2018-03-01
Determination of the insulin injection dose in diabetes mellitus treatment can be considered an optimal control problem. This article aims to simulate optimal blood glucose control for a patient with diabetes mellitus. The blood glucose regulation of the diabetic patient is represented by Ackerman's linear model. The problem is then solved using a dynamic programming method; the desired blood glucose level is obtained by minimizing a performance index in Lagrange form. The results show that dynamic programming based on Ackerman's linear model solves the problem quite well.
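For a linear model with a quadratic performance index, the dynamic programming solution reduces to a backward Riccati recursion. A scalar sketch, with toy coefficients rather than Ackerman's actual parameter values, for a glucose-deviation state x and insulin input u:

```python
def lqr_gains(a, b, q, r, horizon):
    """Backward Riccati recursion for x_{k+1} = a x_k + b u_k with stage
    cost q x^2 + r u^2; returns feedback gains K_k so u_k = -K_k x_k."""
    P = q                               # terminal cost weight
    gains = []
    for _ in range(horizon):
        K = (a * b * P) / (r + b * b * P)
        P = q + a * a * P - a * b * P * K
        gains.append(K)
    gains.reverse()                     # gains in forward-time order
    return gains

a, b = 0.95, 0.1    # hypothetical discrete-time glucose-deviation dynamics
gains = lqr_gains(a, b, q=1.0, r=0.1, horizon=50)

x = 1.0             # initial glucose deviation from the desired level
for K in gains:
    x = a * x - b * K * x              # closed-loop step drives x toward 0
```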
Determination of optimal self-drive tourism route using the orienteering problem method
NASA Astrophysics Data System (ADS)
Hashim, Zakiah; Ismail, Wan Rosmanira; Ahmad, Norfaieqah
2013-04-01
This study was conducted to determine optimal travel routes for self-drive tourism based on allocations of time and expense, by maximizing the total attraction score of the cities involved. Self-drive tourism is a type of tourism in which tourists hire a vehicle or travel in their own, and it only involves tourist destinations that can be linked by a road network. Normally, the traveling salesman problem (TSP) and multiple traveling salesman problem (MTSP) methods are used for minimization problems, such as determining the shortest travel time or distance. This paper takes an alternative, maximization approach: maximizing the attraction scores, tested on tourism data for ten cities in Kedah. A set of priority scores is used to set the attraction score of each city. The classical orienteering problem is used to determine the optimal travel route. This approach is then extended to the team orienteering problem, and the two methods are compared. The two models were solved using LINGO 12.0 software. The results indicate that the team orienteering problem model provides a more appropriate solution than the orienteering problem model.
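At small scale the orienteering problem can be solved exactly by enumeration: choose the subset and order of cities that maximizes total attraction score while keeping the tour (starting and ending at the depot) within the travel budget. A toy sketch with invented cities placed along a single road, not the Kedah data:

```python
from itertools import permutations

def best_route(scores, pos, budget):
    """Exhaustive orienteering on a line: maximize total score of visited
    cities; travel time is distance along the road, tour starts and ends
    at the depot (position 0)."""
    cities = list(scores)
    best = (0, ())
    for r in range(1, len(cities) + 1):
        for route in permutations(cities, r):
            t = abs(pos[route[0]])                               # depot -> first
            t += sum(abs(pos[a] - pos[b]) for a, b in zip(route, route[1:]))
            t += abs(pos[route[-1]])                             # last -> depot
            if t <= budget:
                s = sum(scores[c] for c in route)
                if s > best[0]:
                    best = (s, route)
    return best

scores = {"A": 10, "B": 8, "C": 5, "D": 3}   # hypothetical attraction scores
pos = {"A": 1, "B": 2, "C": 3, "D": 6}       # positions along the road
best_score, route = best_route(scores, pos, budget=8)
# "D" is too far to reach within the budget, so the best tour visits A, B, C
```

Real instances use integer programming (as with LINGO here) because the enumeration grows factorially with the number of cities.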
Bouillon-Pichault, Marion; Jullien, Vincent; Bazzoli, Caroline; Pons, Gérard; Tod, Michel
2011-02-01
The aim of this work was to determine whether optimizing the study design in terms of ages and sampling times for a drug eliminated solely via cytochrome P450 3A4 (CYP3A4) would allow us to accurately estimate the pharmacokinetic parameters throughout the entire childhood timespan, while taking into account age- and weight-related changes. A linear monocompartmental model with first-order absorption was used successively with three different residual error models and previously published pharmacokinetic parameters ("true values"). The optimal ages were established by D-optimization using the CYP3A4 maturation function to create "optimized demographic databases." The post-dose times for each previously selected age were determined by D-optimization using the pharmacokinetic model to create "optimized sparse sampling databases." We simulated concentrations by applying the population pharmacokinetic model to the optimized sparse sampling databases to create optimized concentration databases. The latter were modeled to estimate population pharmacokinetic parameters. We then compared true and estimated parameter values. The established optimal design comprised four age ranges: 0.008 years old (i.e., around 3 days), 0.192 years old (i.e., around 2 months), 1.325 years old, and adults, with the same number of subjects per group and three or four samples per subject, in accordance with the error model. The population pharmacokinetic parameters that we estimated with this design were precise and unbiased (root mean square error [RMSE] and mean prediction error [MPE] less than 11% for clearance and distribution volume and less than 18% for k(a)), whereas the maturation parameters were unbiased but less precise (MPE < 6% and RMSE < 37%). Based on our results, taking growth and maturation into account a priori in a pediatric pharmacokinetic study is theoretically feasible. 
However, it requires that very early ages be included in studies, which may present an obstacle to the use of this approach. First-pass effects, alternative elimination routes, and combined elimination pathways should also be investigated.
Musci, Marilena; Yao, Shicong
2017-12-01
Pu-erh tea is a post-fermented tea that has recently gained popularity worldwide, due to potential health benefits related to the antioxidant activity resulting from its high polyphenolic content. The Folin-Ciocalteu method is a simple, rapid, and inexpensive assay widely applied for the determination of total polyphenol content. Over the past years, it has been subjected to many modifications, often without any systematic optimization or validation. In our study, we sought to optimize the Folin-Ciocalteu method, evaluate quality parameters including linearity, precision and stability, and then apply the optimized model to determine the total polyphenol content of 57 Chinese teas, including green tea, aged and ripened Pu-erh tea. Our optimized Folin-Ciocalteu method reduced analysis time, allowed for the analysis of a large number of samples, to discriminate among the different teas, and to assess the effect of the post-fermentation process on polyphenol content.
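Linearity in a Folin-Ciocalteu assay is typically judged from a least-squares calibration line against a standard (commonly gallic acid; the standard and all numbers below are invented for illustration, not the study's measurements):

```python
def linear_fit(xs, ys):
    """Ordinary least-squares slope, intercept, and R^2."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return slope, intercept, 1 - ss_res / ss_tot

conc = [0, 50, 100, 150, 200]                  # standard, mg/L (invented)
absorb = [0.012, 0.210, 0.411, 0.615, 0.808]   # absorbance (invented)
slope, intercept, r2 = linear_fit(conc, absorb)

# Total polyphenol content of an unknown sample from its measured absorbance
sample_conc = (0.50 - intercept) / slope
```

An R² close to 1 over the working range is the usual acceptance criterion for the linearity quality parameter mentioned above.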
Optimal Charging of Nickel-Hydrogen Batteries for Life Extension
NASA Technical Reports Server (NTRS)
Hartley, Tom T.; Lorenzo, Carl F.
2002-01-01
We are exploring the possibility of extending the cycle life of battery systems by using a charging profile that minimizes cell damage. Only nickel-hydrogen cells are discussed at this time, but applications to lithium-ion cells are being considered. The process first requires the development of a fractional calculus based nonlinear dynamic model of the specific cells being used. The parameters of this model are determined from the cell transient responses. To extend cell cycle life, an instantaneous damage rate model is developed. The model is based on cycle life data and is highly dependent on cell voltage. Once both the cell dynamic model and the instantaneous damage rate model have been determined, the charging profile for a specific cell is determined by numerical optimization. Results concerning the percentage life extension for different charging strategies are presented. The overall procedure is readily adaptable to real-time implementations where the charging profile can maintain its minimum damage nature as the specific cell ages.
Pricing policy for declining demand using item preservation technology.
Khedlekar, Uttam Kumar; Shukla, Diwakar; Namdeo, Anubhav
2016-01-01
We have designed an inventory model for seasonal products in which deterioration can be controlled by investment in item preservation technology. Demand for the product is considered price sensitive and decreases linearly. This study has shown that the profit is a concave function of the optimal selling price, replenishment time and preservation cost parameter. We simultaneously determined the optimal selling price of the product, the replenishment cycle and the cost of item preservation technology. Additionally, this study has shown that there exists an optimal selling price and optimal preservation investment to maximize the profit for every business set-up. Finally, the model is illustrated by numerical examples and a sensitivity analysis of the optimal solution with respect to major parameters.
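A minimal numerical sketch of the pricing decision under the stated assumptions (linearly declining, price-sensitive demand and a concave profit function): because profit is concave in price, a one-dimensional golden-section search suffices. The demand and cost parameters here are hypothetical, not the article's.

```python
def demand(p, a=100.0, b=2.0):
    """Linearly declining, price-sensitive demand D(p) = a - b*p."""
    return max(a - b * p, 0.0)

def profit(p, unit_cost=10.0):
    """Per-cycle profit (p - c) * D(p); concave in p on the feasible range."""
    return (p - unit_cost) * demand(p)

def golden_section_max(f, lo, hi, tol=1e-8):
    """Maximize a concave function on [lo, hi] by golden-section search."""
    phi = (5 ** 0.5 - 1) / 2
    while hi - lo > tol:
        x1 = hi - phi * (hi - lo)
        x2 = lo + phi * (hi - lo)
        if f(x1) < f(x2):
            lo = x1
        else:
            hi = x2
    return (lo + hi) / 2

p_star = golden_section_max(profit, 10.0, 50.0)
# Analytically p* = (a/b + c)/2 = (50 + 10)/2 = 30 for these numbers
```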
Eddy, Sean R.
2008-01-01
Sequence database searches require accurate estimation of the statistical significance of scores. Optimal local sequence alignment scores follow Gumbel distributions, but determining an important parameter of the distribution (λ) requires time-consuming computational simulation. Moreover, optimal alignment scores are less powerful than probabilistic scores that integrate over alignment uncertainty (“Forward” scores), but the expected distribution of Forward scores remains unknown. Here, I conjecture that both expected score distributions have simple, predictable forms when full probabilistic modeling methods are used. For a probabilistic model of local sequence alignment, optimal alignment bit scores (“Viterbi” scores) are Gumbel-distributed with constant λ = log 2, and the high scoring tail of Forward scores is exponential with the same constant λ. Simulation studies support these conjectures over a wide range of profile/sequence comparisons, using 9,318 profile-hidden Markov models from the Pfam database. This enables efficient and accurate determination of expectation values (E-values) for both Viterbi and Forward scores for probabilistic local alignments. PMID:18516236
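The conjectured Gumbel form with fixed lambda makes E-value computation trivial once the location parameter mu is known; a sketch follows, with mu treated as a placeholder that would normally be fitted per profile.

```python
import math

# Under the conjecture, Viterbi bit scores are Gumbel-distributed with a
# constant slope lambda = ln 2, so only the location parameter mu is fitted.
LAMBDA = math.log(2)

def gumbel_pvalue(score, mu, lam=LAMBDA):
    """P(S > score) for a Gumbel null distribution with location mu."""
    return 1.0 - math.exp(-math.exp(-lam * (score - mu)))

def evalue(score, mu, n_comparisons, lam=LAMBDA):
    """Expected number of hits scoring >= score in n null comparisons."""
    return n_comparisons * gumbel_pvalue(score, mu, lam)
```

For high scores the tail behaves like 2^-(score - mu), which is what makes bit scores with lambda = ln 2 convenient.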
NASA Astrophysics Data System (ADS)
Chiadamrong, N.; Piyathanavong, V.
2017-12-01
Models that aim to optimize the design of supply chain networks have gained increasing interest in the supply chain literature. Mixed-integer linear programming and discrete-event simulation are widely used for such optimization problems. We present a hybrid approach to support decisions for supply chain network design using a combination of analytical and discrete-event simulation models. The proposed approach iterates until the difference between subsequent solutions satisfies pre-determined termination criteria. The effectiveness of the proposed approach is illustrated by an example, which yields results closer to the optimum, with much faster solving times, than those obtained from a conventional simulation-based optimization model. The efficacy of this hybrid approach is promising, and it can be applied as a powerful tool in designing a real supply chain network. It also provides the possibility to model and solve more realistic problems, which incorporate dynamism and uncertainty.
Xu, Ke; Butlin, Mark; Avolio, Alberto P
2012-01-01
Timing of biventricular pacing devices employed in cardiac resynchronization therapy (CRT) is a critical determinant of efficacy of the procedure. Optimization is done by maximizing function in terms of arterial pressure (BP) or cardiac output (CO). However, BP and CO are also determined by the hemodynamic load of the pulmonary and systemic vasculature. This study aims to use a lumped parameter circulatory model to assess the influence of the arterial load on the atrio-ventricular (AV) and inter-ventricular (VV) delay for optimal CRT performance.
NASA Technical Reports Server (NTRS)
Hotchkiss, G. B.; Burmeister, L. C.; Bishop, K. A.
1980-01-01
A discrete-gradient optimization algorithm is used to identify the parameters in a one-node and a two-node capacitance model of a flat-plate collector. Collector parameters are first obtained by a linear-least-squares fit to steady state data. These parameters, together with the collector heat capacitances, are then determined from unsteady data by use of the discrete-gradient optimization algorithm with less than 10 percent deviation from the steady state determination. All data were obtained in the indoor solar simulator at the NASA Lewis Research Center.
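The steady-state step described above is a linear least-squares problem: collector efficiency follows the Hottel-Whillier-Bliss relation eta = F_R(tau*alpha) - F_R*U_L*(Ti - Ta)/G, which is linear in the reduced temperature x = (Ti - Ta)/G. A sketch with illustrative (not NASA Lewis) data:

```python
def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

x = [0.01, 0.02, 0.03, 0.04]    # reduced temperature (Ti - Ta)/G, m^2*K/W
eta = [0.70, 0.65, 0.60, 0.55]  # measured steady-state efficiency

slope, intercept = linear_fit(x, eta)
FRta = intercept   # optical efficiency F_R(tau*alpha)
FRUL = -slope      # heat-loss coefficient F_R*U_L, W/(m^2*K)
```

The unsteady heat-capacitance terms would then be identified separately, as the abstract describes, by nonlinear optimization against transient data.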
Modeling joint restoration strategies for interdependent infrastructure systems
Simonovic, Slobodan P.
2018-01-01
Life in the modern world depends on multiple critical services provided by infrastructure systems that are interdependent at multiple levels. To effectively respond to infrastructure failures, this paper proposes a model for developing an optimal joint restoration strategy for interdependent infrastructure systems following a disruptive event. First, models for (i) describing the structure of interdependent infrastructure systems and (ii) their interaction process are presented. Both models consider failure types, infrastructure operating rules and interdependencies among systems. Second, an optimization model is proposed for determining an optimal joint restoration strategy at the infrastructure component level by minimizing the economic loss from infrastructure failures. The utility of the model is illustrated using a case study of electric-water systems. Results show that a small number of failed infrastructure components can trigger high-level failures in interdependent systems, and that the optimal joint restoration strategy varies with failure occurrence time. The proposed models can help decision makers understand the mechanisms of infrastructure interactions and search for an optimal joint restoration strategy, which can significantly enhance the safety of infrastructure systems. PMID:29649300
An intelligent emissions controller for fuel lean gas reburn in coal-fired power plants.
Reifman, J; Feldman, E E; Wei, T Y; Glickert, R W
2000-02-01
The application of artificial intelligence techniques for performance optimization of the fuel lean gas reburn (FLGR) system is investigated. A multilayer, feedforward artificial neural network is applied to model static nonlinear relationships between the distribution of injected natural gas into the upper region of the furnace of a coal-fired boiler and the corresponding oxides of nitrogen (NOx) emissions exiting the furnace. Based on this model, optimal distributions of injected gas are determined such that the largest NOx reduction is achieved for each value of total injected gas. This optimization is accomplished through the development of a new optimization method based on neural networks. This new optimal control algorithm, which can be used as an alternative generic tool for solving multidimensional nonlinear constrained optimization problems, is described and its results are successfully validated against an off-the-shelf tool for solving mathematical programming problems. Encouraging results obtained using plant data from one of Commonwealth Edison's coal-fired electric power plants demonstrate the feasibility of the overall approach. Preliminary results show that the use of this intelligent controller will also enable the determination of the most cost-effective operating conditions of the FLGR system by considering, along with the optimal distribution of the injected gas, the cost differential between natural gas and coal and the open-market price of NOx emission credits. Further study, however, is necessary, including the construction of a more comprehensive database, needed to develop high-fidelity process models and to add carbon monoxide (CO) emissions to the model of the gas reburn system.
An integrated 3D log processing optimization system for small sawmills in central Appalachia
Wenshu Lin; Jingxin Wang
2013-01-01
An integrated 3D log processing optimization system was developed to perform 3D log generation, opening face determination, headrig log sawing simulation, flitch edging and trimming simulation, cant resawing, and lumber grading. A circular cross-section model, together with 3D modeling techniques, was used to reconstruct 3D virtual logs. Internal log defects (knots)...
Maureen C. Kennedy; E. David Ford; Thomas M. Hinckley
2009-01-01
Many hypotheses have been advanced about factors that control tree longevity. We use a simulation model with multi-criteria optimization and Pareto optimality to determine branch morphologies in the Pinaceae that minimize the effect of growth limitations due to water stress while simultaneously maximizing carbohydrate gain. Two distinct branch morphologies in the...
Singh, Kunwar P; Rai, Premanjali; Pandey, Priyanka; Sinha, Sarita
2012-01-01
The present research aims to investigate the individual and interactive effects of chlorine dose/dissolved organic carbon ratio, pH, temperature, bromide concentration, and reaction time on trihalomethanes (THMs) formation in surface water (a drinking water source) during disinfection by chlorination in a prototype laboratory-scale simulation, and to develop a model for the prediction and optimization of THMs levels in chlorinated water for their effective control. A five-factor Box-Behnken experimental design combined with response surface and optimization modeling was used for predicting the THMs levels in chlorinated water. The adequacy of the selected model and the statistical significance of the regression coefficients, independent variables, and their interactions were tested by analysis of variance and t test statistics. The THMs levels predicted by the model were very close to the experimental values (R(2) = 0.95). The maximum THMs formation (highest risk) level in water during chlorination predicted by optimization modeling (192 μg/l) was very close to the experimental value (186.8 ± 1.72 μg/l) determined in laboratory experiments. The pH of water, followed by reaction time and temperature, were the most significant factors affecting THMs formation during chlorination. The developed model can be used to determine the optimum characteristics of raw water and chlorination conditions for maintaining THMs levels within the safe limit.
Bódalo, A; Gómez, J L.; Gómez, E; Bastida, J; Máximo, M F.; Montiel, M C.
2001-03-08
In this paper, the possibility of continuous resolution of DL-phenylalanine, catalyzed by L-aminoacylase in an ultrafiltration membrane reactor (UFMR), is presented. A simple design model, based on previous kinetic studies, has been demonstrated to be capable of describing the behavior of the experimental system. The model has been used to determine the optimal experimental conditions to carry out the asymmetrical hydrolysis of N-acetyl-DL-phenylalanine.
Optimal Time-Resource Allocation for Energy-Efficient Physical Activity Detection
Thatte, Gautam; Li, Ming; Lee, Sangwon; Emken, B. Adar; Annavaram, Murali; Narayanan, Shrikanth; Spruijt-Metz, Donna; Mitra, Urbashi
2011-01-01
The optimal allocation of samples for physical activity detection in a wireless body area network for health-monitoring is considered. The number of biometric samples collected at the mobile device fusion center, from both device-internal and external Bluetooth heterogeneous sensors, is optimized to minimize the transmission power for a fixed number of samples, and to meet a performance requirement defined using the probability of misclassification between multiple hypotheses. A filter-based feature selection method determines an optimal feature set for classification, and a correlated Gaussian model is considered. Using experimental data from overweight adolescent subjects, it is found that allocating a greater proportion of samples to sensors which better discriminate between certain activity levels can result in either a lower probability of error or energy-savings ranging from 18% to 22%, in comparison to equal allocation of samples. The current activity of the subjects and the performance requirements do not significantly affect the optimal allocation, but employing personalized models results in improved energy-efficiency. As the number of samples is an integer, an exhaustive search to determine the optimal allocation is typical, but computationally expensive. To this end, an alternate, continuous-valued vector optimization is derived which yields approximately optimal allocations and can be implemented on the mobile fusion center due to its significantly lower complexity. PMID:21796237
Barlow, P.M.; Wagner, B.J.; Belitz, K.
1996-01-01
The simulation-optimization approach is used to identify ground-water pumping strategies for control of the shallow water table in the western San Joaquin Valley, California, where shallow ground water threatens continued agricultural productivity. The approach combines the use of ground-water flow simulation with optimization techniques to build on and refine pumping strategies identified in previous research that used flow simulation alone. Use of the combined simulation-optimization model resulted in a 20 percent reduction in the area subject to a shallow water table over that identified by use of the simulation model alone. The simulation-optimization model identifies increasingly more effective pumping strategies for control of the water table as the complexity of the problem increases; that is, as the number of subareas in which pumping is to be managed increases, the simulation-optimization model is better able to discriminate areally among subareas to determine optimal pumping locations. The simulation-optimization approach provides an improved understanding of controls on the ground-water flow system and management alternatives that can be implemented in the valley. In particular, results of the simulation-optimization model indicate that optimal pumping strategies are constrained by the existing distribution of wells between the semiconfined and confined zones of the aquifer, by the distribution of sediment types (and associated hydraulic conductivities) in the western valley, and by the historical distribution of pumping throughout the western valley.
Model averaging, optimal inference, and habit formation
FitzGerald, Thomas H. B.; Dolan, Raymond J.; Friston, Karl J.
2014-01-01
Postulating that the brain performs approximate Bayesian inference generates principled and empirically testable models of neuronal function—the subject of much current interest in neuroscience and related disciplines. Current formulations address inference and learning under some assumed and particular model. In reality, organisms are often faced with an additional challenge—that of determining which model or models of their environment are the best for guiding behavior. Bayesian model averaging—which says that an agent should weight the predictions of different models according to their evidence—provides a principled way to solve this problem. Importantly, because model evidence is determined by both the accuracy and complexity of the model, optimal inference requires that these be traded off against one another. This means an agent's behavior should show an equivalent balance. We hypothesize that Bayesian model averaging plays an important role in cognition, given that it is both optimal and realizable within a plausible neuronal architecture. We outline model averaging and how it might be implemented, and then explore a number of implications for brain and behavior. In particular, we propose that model averaging can explain a number of apparently suboptimal phenomena within the framework of approximate (bounded) Bayesian inference, focusing particularly upon the relationship between goal-directed and habitual behavior. PMID:25018724
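The core computation of Bayesian model averaging is simple enough to sketch: model weights are posterior probabilities obtained by normalizing the (log-)evidences, and the averaged prediction weights each model's prediction accordingly. This is a generic illustration, not the authors' neuronal implementation.

```python
import math

def bma_weights(log_evidences):
    """Posterior model probabilities from log model evidences (a softmax)."""
    m = max(log_evidences)                               # for numerical stability
    unnorm = [math.exp(le - m) for le in log_evidences]
    z = sum(unnorm)
    return [u / z for u in unnorm]

def bma_predict(log_evidences, predictions):
    """Evidence-weighted average of per-model predictions."""
    w = bma_weights(log_evidences)
    return sum(wi * p for wi, p in zip(w, predictions))
```

Because evidence already penalizes complexity, the weights embody the accuracy-complexity trade-off the abstract refers to: a flexible model only dominates the average if its extra complexity buys enough accuracy.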
Optimal short-range trajectories for helicopters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Slater, G.L.; Erzberger, H.
1982-12-01
An optimal flight path algorithm using a simplified altitude state model and an a priori climb-cruise-descent flight profile was developed and applied to determine minimum-fuel and minimum-cost trajectories for a helicopter flying a fixed-range trajectory. In addition, a method was developed for obtaining a performance model in simplified form which is based on standard flight manual data and which is applicable to the computation of optimal trajectories. The entire performance optimization algorithm is simple enough that on-line trajectory optimization is feasible with a relatively small computer. The helicopter model used is the Sikorsky S-61N. The results show that for this vehicle the optimal flight path and optimal cruise altitude can represent a 10% fuel saving on a minimum-fuel trajectory. The optimal trajectories show considerable variability because of helicopter weight, ambient winds, and the relative cost trade-off between time and fuel. In general, reasonable variations from the optimal velocities and cruise altitudes do not significantly degrade the optimal cost. For fuel-optimal trajectories, the optimum cruise altitude varies from the maximum (12,000 ft) to the minimum (0 ft) depending on helicopter weight.
Optimal investment in a portfolio of HIV prevention programs.
Zaric, G S; Brandeau, M L
2001-01-01
In this article, the authors determine the optimal allocation of HIV prevention funds and investigate the impact of different allocation methods on health outcomes. The authors present a resource allocation model that can be used to determine the allocation of HIV prevention funds that maximizes quality-adjusted life years (or life years) gained or HIV infections averted in a population over a specified time horizon. They apply the model to determine the allocation of a limited budget among 3 types of HIV prevention programs in a population of injection drug users and nonusers: needle exchange programs, methadone maintenance treatment, and condom availability programs. For each prevention program, the authors estimate a production function that relates the amount invested to the associated change in risky behavior. The authors determine the optimal allocation of funds for both objective functions for a high-prevalence population and a low-prevalence population. They also consider the allocation of funds under several common rules of thumb that are used to allocate HIV prevention resources. It is shown that simpler allocation methods (e.g., allocation based on HIV incidence or notions of equity among population groups) may lead to allocations that do not yield the maximum health benefit. The optimal allocation of HIV prevention funds in a population depends on HIV prevalence and incidence, the objective function, the production functions for the prevention programs, and other factors. Consideration of cost, equity, and social and political norms may be important when allocating HIV prevention funds. The model presented in this article can help decision makers determine the health consequences of different allocations of funds.
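A hedged sketch of the allocation idea: with concave production functions linking dollars invested to benefit, a greedy scheme that gives each small budget increment to the program with the highest marginal return approximates the optimal allocation. The square-root benefit functions and scales below are hypothetical stand-ins, not the article's estimated production functions.

```python
def make_sqrt_benefit(scale):
    """Hypothetical concave production function: benefit = scale * sqrt($)."""
    return lambda x: scale * (x ** 0.5)

programs = {
    "needle_exchange": make_sqrt_benefit(3.0),
    "methadone": make_sqrt_benefit(2.0),
    "condoms": make_sqrt_benefit(1.0),
}

def allocate(budget, programs, step=1.0):
    """Greedy incremental allocation of a budget across concave programs."""
    alloc = {name: 0.0 for name in programs}
    for _ in range(int(budget / step)):
        # give the next increment to the program with the best marginal gain
        best = max(programs, key=lambda n: programs[n](alloc[n] + step)
                                           - programs[n](alloc[n]))
        alloc[best] += step
    return alloc

alloc = allocate(140.0, programs, step=1.0)
```

For separable concave benefits this incremental rule is a standard way to equalize marginal returns across programs, which is exactly why rules of thumb such as allocating proportionally to incidence can fall short.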
Launch Vehicle Propulsion Parameter Design Multiple Selection Criteria
NASA Technical Reports Server (NTRS)
Shelton, Joey Dewayne
2004-01-01
The optimization tool described herein addresses and emphasizes the use of computer tools to model a system and focuses on a concept development approach for a liquid hydrogen/liquid oxygen single-stage-to-orbit system, but more particularly the development of the optimized system using new techniques. This methodology uses new and innovative tools to run Monte Carlo simulations, genetic algorithm solvers, and statistical models in order to optimize a design concept. The concept launch vehicle and propulsion system were modeled and optimized to determine the best design for weight and cost by varying design and technology parameters. Uncertainty levels were applied using Monte Carlo simulations, and the model output was compared to the National Aeronautics and Space Administration Space Shuttle Main Engine. Several key conclusions from the model results are summarized here. First, the Gross Liftoff Weight and Dry Weight were 67% higher for the design case minimizing Design, Development, Test and Evaluation cost when compared to the weights determined by the Gross Liftoff Weight minimization case. In turn, the Design, Development, Test and Evaluation cost was 53% higher for the optimized Gross Liftoff Weight case when compared to the cost determined by the case minimizing Design, Development, Test and Evaluation cost. Therefore, a 53% increase in Design, Development, Test and Evaluation cost results in a 67% reduction in Gross Liftoff Weight. Secondly, the tool outputs define the sensitivity of propulsion parameters, technology and cost factors and how these parameters differ when cost and weight are optimized separately. A key finding was that for a Space Shuttle Main Engine thrust level, an oxidizer/fuel ratio of 6.6 resulted in the lowest Gross Liftoff Weight, rather than 5.2 for the maximum specific impulse, demonstrating the relationships between specific impulse, engine weight, tank volume and tank weight.
Lastly, the optimum chamber pressure for Gross Liftoff Weight minimization was 2713 pounds per square inch as compared to 3162 for the Design, Development, Test and Evaluation cost optimization case. This chamber pressure range is close to 3000 pounds per square inch for the Space Shuttle Main Engine.
Optimal manpower allocation in aircraft line maintenance (Case in GMF AeroAsia)
NASA Astrophysics Data System (ADS)
Puteri, V. E.; Yuniaristanto, Hisjam, M.
2017-11-01
This paper presents a mathematical model to find the optimal manpower allocation in aircraft line maintenance. This research focuses on assigning the number and type of manpower allocated to each service. This study considers licensed workers, holding an Aircraft Maintenance Engineer Licence (AMEL), and non-licensed workers, Aircraft Maintenance Technicians (AMT). In this paper, we also consider the relationship of each station in terms of the possibility of transferring manpower among them. The optimization model considers the amount of manpower needed for each service and the requirement for AMEL workers. This paper aims to determine the optimal manpower allocation using mathematical modeling. The objective function of the model is to minimize employee expenses. The model was solved using the ILOG CPLEX software. The results show that the manpower allocation can meet the manpower need and all the load can be served.
Optimization of wastewater treatment plant operation for greenhouse gas mitigation.
Kim, Dongwook; Bowen, James D; Ozelkan, Ertunga C
2015-11-01
This study deals with the determination of optimal operation of a wastewater treatment system for minimizing greenhouse gas emissions, operating costs, and pollution loads in the effluent. To do this, an integrated performance index that includes three objectives was established to assess system performance. The ASMN_G model was used to perform system optimization aimed at determining a set of operational parameters that can satisfy three different objectives. The complex nonlinear optimization problem was simulated using the Nelder-Mead Simplex optimization algorithm. A sensitivity analysis was performed to identify influential operational parameters on system performance. The results obtained from the optimization simulations for six scenarios demonstrated that there are apparent trade-offs among the three conflicting objectives. The best optimized system simultaneously reduced greenhouse gas emissions by 31%, reduced operating cost by 11%, and improved effluent quality by 2% compared to the base case operation. Copyright © 2015 Elsevier Ltd. All rights reserved.
Integrated controls design optimization
Lou, Xinsheng; Neuschaefer, Carl H.
2015-09-01
A control system (207) for optimizing a chemical looping process of a power plant includes an optimizer (420), an income algorithm (230), a cost algorithm (225) and chemical looping process models. The process models are used to predict the process outputs from process input variables. Some of the process input and output variables are related to the income of the plant, and others are related to the cost of the plant operations. The income algorithm (230) provides an income input to the optimizer (420) based on a plurality of input parameters (215) of the power plant. The cost algorithm (225) provides a cost input to the optimizer (420) based on a plurality of output parameters (220) of the power plant. The optimizer (420) determines an optimized operating parameter solution based on at least one of the income input and the cost input, and supplies the optimized operating parameter solution to the power plant.
Method of experimental and calculation determination of dissipative properties of carbon
NASA Astrophysics Data System (ADS)
Kazakova, Olga I.; Smolin, Igor Yu.; Bezmozgiy, Iosif M.
2017-12-01
This paper describes the process of definition of relations between the damping ratio and strain/state levels in a material. For these purposes, the experimental-calculation approach was applied. The experimental research was performed on plane composite specimens. The tests were accompanied by finite element modeling using the ANSYS software. Optimization was used as a tool for FEM property setting and for finding the above-mentioned relations. A difference between the calculation and experimental results was accepted as objective functions of this optimization. The optimization cycle was implemented using the pSeven DATADVANCE software platform. The developed approach makes it possible to determine the relations between the damping ratio and strain/state levels in the material, which can be used for computer modeling of the structure response under dynamic loading.
Cost-Based Optimization of a Papermaking Wastewater Regeneration Recycling System
NASA Astrophysics Data System (ADS)
Huang, Long; Feng, Xiao; Chu, Khim H.
2010-11-01
Wastewater can be regenerated for recycling in an industrial process to reduce freshwater consumption and wastewater discharge. Such an environment friendly approach will also lead to cost savings that accrue due to reduced freshwater usage and wastewater discharge. However, the resulting cost savings are offset to varying degrees by the costs incurred for the regeneration of wastewater for recycling. Therefore, systematic procedures should be used to determine the true economic benefits for any water-using system involving wastewater regeneration recycling. In this paper, a total cost accounting procedure is employed to construct a comprehensive cost model for a paper mill. The resulting cost model is optimized by means of mathematical programming to determine the optimal regeneration flowrate and regeneration efficiency that will yield the minimum total cost.
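The cost trade-off can be illustrated with a toy model (not the paper's mill-specific cost accounting): recycling more regenerated wastewater cuts freshwater and discharge costs but incurs a rising regeneration cost, so the total cost is minimized at an interior flowrate. All coefficients below are hypothetical.

```python
FRESH_PRICE = 1.0      # $/t of freshwater purchased
DISCHARGE_PRICE = 0.8  # $/t of wastewater discharged
REGEN_PRICE = 0.5      # $/t regenerated, linear component
REGEN_QUAD = 0.002     # $/t^2, rising marginal regeneration cost
DEMAND = 500.0         # t/h total water demand

def total_cost(recycle):
    """Total hourly water cost as a function of recycled flowrate (t/h)."""
    fresh = DEMAND - recycle           # freshwater makes up the balance
    return (FRESH_PRICE * fresh
            + DISCHARGE_PRICE * fresh  # unrecycled water is discharged
            + REGEN_PRICE * recycle
            + REGEN_QUAD * recycle ** 2)

# grid search over feasible recycle flowrates (a stand-in for the paper's
# mathematical programming step)
min_cost, best_recycle = min((total_cost(r), r) for r in range(0, 501))
```

With these numbers the quadratic regeneration term guarantees an interior optimum; in the paper, the same logic is carried out with a comprehensive cost model and an additional regeneration-efficiency decision variable.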
Optimization of cell seeding in a 2D bio-scaffold system using computational models.
Ho, Nicholas; Chua, Matthew; Chui, Chee-Kong
2017-05-01
The cell expansion process is a crucial part of generating cells on a large-scale level in a bioreactor system. Hence, it is important to set operating conditions (e.g. initial cell seeding distribution, culture medium flow rate) to an optimal level. Often, the initial cell seeding distribution factor is neglected and/or overlooked in the design of a bioreactor using conventional seeding distribution methods. This paper proposes a novel seeding distribution method that aims to maximize cell growth and minimize production time/cost. The proposed method utilizes two computational models; the first model represents cell growth patterns whereas the second model determines optimal initial cell seeding positions for adherent cell expansions. Cell growth simulation from the first model demonstrates that the model can be a representation of various cell types with known probabilities. The second model involves a combination of combinatorial optimization, Monte Carlo and concepts of the first model, and is used to design a multi-layer 2D bio-scaffold system that increases cell production efficiency in bioreactor applications. Simulation results have shown that the recommended input configurations obtained from the proposed optimization method are the most optimal configurations. The results have also illustrated the effectiveness of the proposed optimization method. The potential of the proposed seeding distribution method as a useful tool to optimize the cell expansion process in modern bioreactor system applications is highlighted. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Knypiński, Łukasz
2017-12-01
In this paper, an algorithm for the optimization of the excitation system of line-start permanent magnet synchronous motors is presented. Based on this algorithm, software was developed in the Borland Delphi environment. The software consists of two independent modules: an optimization solver, and a module including the mathematical model of a synchronous motor with self-start ability. The optimization module contains the bat algorithm procedure. The mathematical model of the motor was developed in the Ansys Maxwell environment. In order to determine the functional parameters of the motor, additional scripts in the Visual Basic language were developed. Selected results of the optimization calculation are presented and compared with results from the particle swarm optimization algorithm.
Evaluating data worth for ground-water management under uncertainty
Wagner, B.J.
1999-01-01
A decision framework is presented for assessing the value of ground-water sampling within the context of ground-water management under uncertainty. The framework couples two optimization models - a chance-constrained ground-water management model and an integer-programming sampling network design model - to identify optimal pumping and sampling strategies. The methodology consists of four steps: (1) The optimal ground-water management strategy for the present level of model uncertainty is determined using the chance-constrained management model; (2) for a specified data collection budget, the monitoring network design model identifies, prior to data collection, the sampling strategy that will minimize model uncertainty; (3) the optimal ground-water management strategy is recalculated on the basis of the projected model uncertainty after sampling; and (4) the worth of the monitoring strategy is assessed by comparing the value of the sample information - i.e., the projected reduction in management costs - with the cost of data collection. Steps 2-4 are repeated for a series of data collection budgets, producing a suite of management/monitoring alternatives, from which the best alternative can be selected. A hypothetical example demonstrates the methodology's ability to identify the ground-water sampling strategy with greatest net economic benefit for ground-water management.
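Step 4 of the methodology reduces to a simple data-worth comparison, sketched below with invented cost figures: for each candidate sampling budget, the worth of monitoring is the projected reduction in management cost minus the cost of data collection, and the best alternative maximizes net benefit.

```python
def net_benefit(cost_no_data, cost_with_data, sampling_cost):
    """Value of sample information minus the cost of collecting it."""
    return (cost_no_data - cost_with_data) - sampling_cost

# Illustrative (sampling budget, projected management cost after sampling)
# pairs, as would be produced by repeating steps 2-3 over several budgets.
baseline_cost = 1000.0
alternatives = [(0.0, 1000.0), (50.0, 900.0), (100.0, 870.0), (200.0, 860.0)]

best_budget, best_net = max(
    ((b, net_benefit(baseline_cost, c, b)) for b, c in alternatives),
    key=lambda t: t[1])
```

Note the diminishing returns in the example: beyond some budget, extra sampling reduces uncertainty (and management cost) by less than it costs to collect.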
Li, Zhe-Xuan; Huang, Lei-Lei; Liu, Cong; Formichella, Luca; Zhang, Yang; Wang, Yu-Mei; Zhang, Lian; Ma, Jun-Ling; Liu, Wei-Dong; Ulm, Kurt; Wang, Jian-Xi; Zhang, Lei; Bajbouj, Monther; Li, Ming; Vieth, Michael; Quante, Michael; Zhou, Tong; Wang, Le-Hua; Suchanek, Stepan; Soutschek, Erwin; Schmid, Roland; Classen, Meinhard; You, Wei-Cheng; Gerhard, Markus; Pan, Kai-Feng
2017-05-18
The performance of diagnostic tests in intervention trials of Helicobacter pylori (H. pylori) eradication is crucial, since even minor inaccuracies can have a major impact. To determine the cut-off point for the 13C-urea breath test (13C-UBT) and to assess whether it can be further optimized by serologic testing, mathematical modeling, histopathology and serologic validation were applied. A finite mixture model (FMM) was developed in 21,857 subjects, and an independent validation by modified Giemsa staining was conducted in 300 selected subjects. H. pylori status was determined using the recomLine H. pylori assay in 2,113 subjects with borderline 13C-UBT results. The delta-over-baseline value (DOB) of 3.8 was the optimal cut-off point by the FMM in the modeling dataset, and it was further validated as the most appropriate cut-off point by Giemsa staining (sensitivity = 94.53%, specificity = 92.93%). In the borderline population, 1,468 subjects were determined as H. pylori positive by recomLine (69.5%). A significant correlation between the number of positive H. pylori serum responses and the DOB value was found (rs = 0.217, P < 0.001). A mathematical approach such as the FMM might be an alternative measure for optimizing the cut-off point for the 13C-UBT in community-based studies, and a second method to determine H. pylori status for subjects with borderline 13C-UBT values is necessary and recommended.
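The idea of deriving a cut-off from a finite mixture model can be sketched with a plain two-component Gaussian EM fit: estimate the negative and positive subpopulations, then place the cut-off where the two weighted densities cross. This is a generic sketch on synthetic data, not the paper's FMM (the true component distributions and the DOB scale are not reproduced here):

```python
import math

def em_two_gaussians(x, iters=200):
    """Fit a two-component 1-D Gaussian mixture by EM (plain-Python sketch)."""
    x = list(x)
    mu = [min(x), max(x)]          # crude initialization at the data extremes
    var = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: posterior responsibility of component 1 for each point
        r = []
        for xi in x:
            p = [w[k] * math.exp(-(xi - mu[k]) ** 2 / (2 * var[k]))
                 / math.sqrt(2 * math.pi * var[k]) for k in (0, 1)]
            r.append(p[1] / (p[0] + p[1]))
        # M-step: re-estimate means, variances and weights
        n1 = sum(r)
        n0 = len(x) - n1
        mu[0] = sum((1 - ri) * xi for ri, xi in zip(r, x)) / n0
        mu[1] = sum(ri * xi for ri, xi in zip(r, x)) / n1
        var[0] = max(sum((1 - ri) * (xi - mu[0]) ** 2 for ri, xi in zip(r, x)) / n0, 1e-6)
        var[1] = max(sum(ri * (xi - mu[1]) ** 2 for ri, xi in zip(r, x)) / n1, 1e-6)
        w = [n0 / len(x), n1 / len(x)]
    return mu, var, w

def crossing_cutoff(mu, var, w, lo, hi, steps=10000):
    """Cut-off where the two weighted component densities cross (grid search)."""
    def diff(t):
        d = [w[k] * math.exp(-(t - mu[k]) ** 2 / (2 * var[k]))
             / math.sqrt(2 * math.pi * var[k]) for k in (0, 1)]
        return d[1] - d[0]
    ts = [lo + (hi - lo) * i / steps for i in range(steps + 1)]
    return min(ts, key=lambda t: abs(diff(t)))
```

On well-separated synthetic "negative" and "positive" clusters the EM fit recovers the two component means, and the crossing point lands between them.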
Multicomponent Therapeutics of Berberine Alkaloids
Luo, Jiaoyang; Yan, Dan; Yang, Meihua; Dong, Xiaoping; Xiao, Xiaohe
2013-01-01
Although berberine alkaloids (BAs) are reported to have broad-spectrum antibacterial and antiviral activities, the interactions among BAs have not been elucidated. In the present study, methicillin-resistant Staphylococcus aureus (MRSA) was chosen as a model organism, and modified broth microdilution was applied to determine the fluorescence absorption values used to calculate the anti-MRSA activity of BAs. We took four steps to seek the optimal combination of BAs: (1) determining the anti-MRSA activity of each single BA, (2) investigating two-component combinations to clarify the interactions among BAs by checkerboard assay, (3) investigating multicomponent combinations to determine the optimal ratio by quadratic rotation-orthogonal combination design, and (4) validating the optimal combination in vivo and in vitro. The results showed that the interactions among BAs are related to their concentrations. The synergetic combinations included “berberine and epiberberine,” “jatrorrhizine and palmatine” and “jatrorrhizine and coptisine”; the antagonistic combinations included “coptisine and epiberberine.” The optimal combination was berberine : coptisine : jatrorrhizine : palmatine : epiberberine = 0.702 : 0.863 : 1 : 0.491 : 0.526, and the potency of the optimal combination in a cyclophosphamide-immunocompromised mouse model was better than that of the natural combinations of herbs containing BAs. PMID:23634170
Jiang, Huaiguang; Zhang, Yingchen; Muljadi, Eduard; ...
2016-01-01
This paper proposes an approach for distribution system load forecasting, which aims to provide highly accurate short-term load forecasting with high resolution utilizing a support vector regression (SVR) based forecaster and a two-step hybrid parameters optimization method. Specifically, because the load profiles in distribution systems contain abrupt deviations, a data normalization is designed as the pretreatment for the collected historical load data. Then an SVR model is trained by the load data to forecast the future load. For better performance of SVR, a two-step hybrid optimization algorithm is proposed to determine the best parameters. In the first step of the hybrid optimization algorithm, a designed grid traverse algorithm (GTA) is used to narrow the parameters searching area from a global to local space. In the second step, based on the result of the GTA, particle swarm optimization (PSO) is used to determine the best parameters in the local parameter space. After the best parameters are determined, the SVR model is used to forecast the short-term load deviation in the distribution system. The performance of the proposed approach is compared to some classic methods in later sections of the paper.
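The two-step grid-then-PSO parameter search described above can be sketched as follows. The SVR cross-validation error is replaced by a toy smooth objective, and all swarm constants (inertia 0.7, acceleration 1.5) are conventional choices, not values from the paper:

```python
import random

def coarse_grid(f, bounds, n=5):
    """Step 1 (GTA-like): coarse grid traverse to locate a promising region."""
    (xlo, xhi), (ylo, yhi) = bounds
    pts = [(xlo + (xhi - xlo) * i / (n - 1), ylo + (yhi - ylo) * j / (n - 1))
           for i in range(n) for j in range(n)]
    return min(pts, key=lambda p: f(*p))

def pso_refine(f, center, radius, n_particles=20, iters=60, seed=0):
    """Step 2: plain PSO restricted to the local box around the grid winner."""
    rng = random.Random(seed)
    lo = [c - radius for c in center]
    hi = [c + radius for c in center]
    X = [[rng.uniform(lo[d], hi[d]) for d in range(2)] for _ in range(n_particles)]
    V = [[0.0, 0.0] for _ in range(n_particles)]
    P = [x[:] for x in X]                    # personal bests
    Pf = [f(*x) for x in X]
    G = P[min(range(n_particles), key=lambda i: Pf[i])][:]   # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(2):
                V[i][d] = (0.7 * V[i][d]
                           + 1.5 * rng.random() * (P[i][d] - X[i][d])
                           + 1.5 * rng.random() * (G[d] - X[i][d]))
                X[i][d] = min(max(X[i][d] + V[i][d], lo[d]), hi[d])
            fi = f(*X[i])
            if fi < Pf[i]:
                P[i], Pf[i] = X[i][:], fi
                if fi < f(*G):
                    G = X[i][:]
    return G
```

On a quadratic stand-in objective, the grid step isolates the right cell and PSO then refines well past grid resolution.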
NASA Astrophysics Data System (ADS)
Wu, Shanhua; Yang, Zhongzhen
2018-07-01
This paper aims to optimize the locations of manufacturing industries in the context of economic globalization by proposing a bi-level programming model which integrates the location optimization model with the traffic assignment model. In the model, the transport network is divided into the subnetworks of raw materials and products respectively. The upper-level model is used to determine the location of industries and the OD matrices of raw materials and products. The lower-level model is used to calculate the attributes of traffic flow under given OD matrices. To solve the model, the genetic algorithm is designed. The proposed method is tested using the Chinese steel industry as an example. The result indicates that the proposed method could help the decision-makers to implement the location decisions for the manufacturing industries effectively.
A flexible, interactive software tool for fitting the parameters of neuronal models.
Friedrich, Péter; Vella, Michael; Gulyás, Attila I; Freund, Tamás F; Káli, Szabolcs
2014-01-01
The construction of biologically relevant neuronal models as well as model-based analysis of experimental data often requires the simultaneous fitting of multiple model parameters, so that the behavior of the model in a certain paradigm matches (as closely as possible) the corresponding output of a real neuron according to some predefined criterion. Although the task of model optimization is often computationally hard, and the quality of the results depends heavily on technical issues such as the appropriate choice (and implementation) of cost functions and optimization algorithms, no existing program provides access to the best available methods while also guiding the user through the process effectively. Our software, called Optimizer, implements a modular and extensible framework for the optimization of neuronal models, and also features a graphical interface which makes it easy for even non-expert users to handle many commonly occurring scenarios. Meanwhile, educated users can extend the capabilities of the program and customize it according to their needs with relatively little effort. Optimizer has been developed in Python, takes advantage of open-source Python modules for nonlinear optimization, and interfaces directly with the NEURON simulator to run the models. Other simulators are supported through an external interface. We have tested the program on several different types of problems of varying complexity, using different model classes. As targets, we used simulated traces from the same or a more complex model class, as well as experimental data. We successfully used Optimizer to determine passive parameters and conductance densities in compartmental models, and to fit simple (adaptive exponential integrate-and-fire) neuronal models to complex biological data. Our detailed comparisons show that Optimizer can handle a wider range of problems, and delivers equally good or better performance than any other existing neuronal model fitting tool.
Demand side management in recycling and electricity retail pricing
NASA Astrophysics Data System (ADS)
Kazan, Osman
This dissertation addresses several problems from the recycling industry and the electricity retail market. The first paper addresses a real-life scheduling problem faced by a national industrial recycling company. Based on the company's practices, a scheduling problem is defined, modeled, analyzed, and a solution is approximated efficiently. The recommended application is tested on real-life data and randomly generated data. The scheduling improvements and the financial benefits are presented. The second problem is from the electricity retail market. There are well-known hourly patterns in daily electricity usage. These patterns change in shape and magnitude by season and day of the week. Generation costs are several times higher during the peak hours of the day, yet most consumers purchase electricity at flat rates. This work explores analytic pricing tools to reduce peak-load electricity demand for retailers. For that purpose, a nonlinear model that determines optimal hourly prices is established based on two major components: unit generation costs and consumers' utility. Both are analyzed and estimated empirically in the third paper. A pricing model is introduced to maximize the electric retailer's profit. As a result, a closed-form expression for the optimal price vector is obtained. Possible scenarios are evaluated for the consumers' utility distribution. For the general case, we provide a numerical solution methodology to obtain the optimal pricing scheme. The recommended models are tested under various scenarios that consider consumer segmentation and multiple pricing policies. The recommended model reduces the peak load significantly in most cases. Several utility companies offer hourly pricing to their customers, determining prices from historical data of unit electricity cost over time. In this dissertation we develop a nonlinear model that determines optimal hourly prices with parameter estimation.
The last paper includes a regression analysis of the unit generation cost function obtained from Independent Service Operators. A consumer experiment is established to replicate the peak load behavior. As a result, consumers' utility function is estimated and optimal retail electricity prices are computed.
NASA Astrophysics Data System (ADS)
Kurosu, Keita; Takashina, Masaaki; Koizumi, Masahiko; Das, Indra J.; Moskvin, Vadim P.
2014-10-01
Although three general-purpose Monte Carlo (MC) simulation tools - Geant4, FLUKA and PHITS - have been used extensively, differences in calculation results have been reported. The major causes are the implementation of the physical models, the preset value of the ionization potential, and the definition of the maximum step size. In order to achieve artifact-free MC simulation, an optimized parameter list for each simulation system is required. Several authors have already proposed optimized lists, but those studies were performed with a simple system, such as a water phantom only. Since particle beams undergo transport, interaction and electromagnetic processes during beam delivery, establishing an optimized parameter list for the whole beam delivery system is therefore of major importance. The purpose of this study was to determine the optimized parameter lists for GATE and PHITS using a computational model of a proton treatment nozzle. The simulation was performed with a broad scanning proton beam. The influences of the customizable parameters on the percentage depth dose (PDD) profile and the proton range were investigated by comparison with the results of FLUKA, and the optimal parameters were then determined. The PDD profile and the proton range obtained from our optimized parameter list showed different characteristics from the results obtained with the simple system. This leads to the conclusion that the physical model, particle transport mechanics and different geometry-based descriptions need accurate customization when planning computational experiments for artifact-free MC simulation.
A risk-based multi-objective model for optimal placement of sensors in water distribution system
NASA Astrophysics Data System (ADS)
Naserizade, Sareh S.; Nikoo, Mohammad Reza; Montaseri, Hossein
2018-02-01
In this study, a new stochastic model based on Conditional Value at Risk (CVaR) and multi-objective optimization methods is developed for optimal placement of sensors in a water distribution system (WDS). This model minimizes the risk caused by simultaneous multi-point contamination injection in the WDS using the CVaR approach. The CVaR considers uncertainties of contamination injection in the form of a probability distribution function and captures low-probability extreme events, which occur at the tail of the loss distribution function. A four-objective optimization model based on the NSGA-II algorithm is developed to minimize the losses of contamination injection (through the CVaR of the affected population and detection time) and also to minimize the two other main criteria of optimal sensor placement: the probability of undetected events and cost. Finally, to determine the best solution, the Preference Ranking Organization METHod for Enrichment Evaluation (PROMETHEE), a Multi Criteria Decision Making (MCDM) approach, is utilized to rank the alternatives on the trade-off curve among the objective functions. A sensitivity analysis is also performed to investigate the importance of each criterion on the PROMETHEE results under three relative weighting scenarios. The effectiveness of the proposed methodology is examined by applying it to the Lamerd WDS in the southwestern part of Iran. PROMETHEE suggests 6 sensors with a suitable distribution that covers approximately all regions of the WDS. The optimal values of the CVaR of the affected population, the detection time and the probability of undetected events for the best solution are 17,055 persons, 31 min and 0.045%, respectively.
The obtained results of the proposed methodology in Lamerd WDS show applicability of CVaR-based multi-objective simulation-optimization model for incorporating the main uncertainties of contamination injection in order to evaluate extreme value of losses in WDS.
Geoffrey H. Donovan
2006-01-01
Federal land management agencies in the United States are increasingly relying on contract crews as opposed to agency fire crews. Despite this increasing reliance on contractors, there have been no studies to determine what the optimal mix of contract and agency fire crews should be. A mathematical model is presented to address this question and is applied to a case...
Optimization of Regression Models of Experimental Data Using Confirmation Points
NASA Technical Reports Server (NTRS)
Ulbrich, N.
2010-01-01
A new search metric is discussed that may be used to better assess the predictive capability of different math term combinations during the optimization of a regression model of experimental data. The new search metric can be determined for each tested math term combination if the given experimental data set is split into two subsets. The first subset consists of data points that are only used to determine the coefficients of the regression model. The second subset consists of confirmation points that are exclusively used to test the regression model. The new search metric value is assigned after comparing two values that describe the quality of the fit of each subset. The first value is the standard deviation of the PRESS residuals of the data points. The second value is the standard deviation of the response residuals of the confirmation points. The greater of the two values is used as the new search metric value. This choice guarantees that both standard deviations are always less than or equal to the value that is used during the optimization. Experimental data from the calibration of a wind tunnel strain-gage balance is used to illustrate the application of the new search metric. The new search metric ultimately generates an optimized regression model that has already been tested at regression-model-independent confirmation points before it is ever used to predict an unknown response from a set of regressors.
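The metric itself - the greater of the two subset standard deviations - is simple to state in code. The candidate-selection helper and its inputs are illustrative; real PRESS residuals would come from the regression fit:

```python
import math

def stdev(residuals):
    """Population standard deviation of a residual list."""
    m = sum(residuals) / len(residuals)
    return math.sqrt(sum((r - m) ** 2 for r in residuals) / len(residuals))

def search_metric(press_residuals, confirmation_residuals):
    """Greater of the two standard deviations, so neither subset's fit
    quality can be overstated during the optimization."""
    return max(stdev(press_residuals), stdev(confirmation_residuals))

def best_combination(candidates):
    """Pick the math term combination with the smallest search metric.
    `candidates` maps a name to (PRESS residuals, confirmation residuals)."""
    return min(candidates, key=lambda k: search_metric(*candidates[k]))
```

A combination that fits its own data well but misses the confirmation points is penalized by the larger confirmation-residual standard deviation.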
Optimal control in a model of malaria with differential susceptibility
NASA Astrophysics Data System (ADS)
Hincapié, Doracelly; Ospina, Juan
2014-06-01
A malaria model with differential susceptibility is analyzed using the optimal control technique. In the model the human population is classified as susceptible, infected and recovered. Susceptibility is assumed to depend on genetic, physiological, or social characteristics that vary between individuals. The model is described by a system of differential equations that relate the human and vector populations, so that the infection is transmitted to humans by vectors, and the infection is transmitted to vectors by humans. The model is analyzed using the optimal control method, where the control consists of using insecticide-treated nets and educational campaigns, and the optimality criterion is to minimize the number of infected humans while keeping the cost as low as possible. The first goal is to determine the effects of differential susceptibility on the proposed control mechanism, and the second goal is to determine the algebraic form of the basic reproductive number of the model. All computations are performed using computer algebra, specifically Maple. It is claimed that the analytical results obtained are important for the design and implementation of control measures for malaria. Some future investigations are suggested, such as the application of the method to other vector-borne diseases such as dengue or yellow fever, and the possible application of free computer algebra software like Maxima.
Kuu, Wei Y; Nail, Steven L
2009-09-01
Computer programs in FORTRAN were developed to rapidly determine the optimal shelf temperature, T(f), and chamber pressure, P(c), to achieve the shortest primary drying time. The constraint for the optimization is to ensure that the product temperature profile, T(b), remains below the target temperature, T(target). Five percent mannitol was chosen as the model formulation. After obtaining the optimal sets of T(f) and P(c), each cycle was assigned a cycle rank number in terms of the length of drying time. Further optimization was achieved by dividing the drying time into a series of ramping steps for T(f), in a cascading manner (termed the cascading T(f) cycle), to further shorten the cycle time. To demonstrate the validity of the optimized T(f) and P(c), four cycles with different predicted lengths of drying time, along with the cascading T(f) cycle, were chosen for experimental runs. Tunable diode laser absorption spectroscopy (TDLAS) was used to continuously measure the sublimation rate. As predicted, maximum product temperatures were controlled slightly below the target temperature of -25 degrees C, and the cascading T(f)-ramping cycle was the most efficient cycle design. In addition, the experimental cycle rank order closely matches that determined by modeling.
Evaluating and minimizing noise impact due to aircraft flyover
NASA Technical Reports Server (NTRS)
Jacobson, I. D.; Cook, G.
1979-01-01
Existing techniques were used to assess the noise impact on a community due to aircraft operation and to optimize the flight paths of an approaching aircraft with respect to the annoyance produced. Major achievements are: (1) the development of a population model suitable for determining the noise impact, (2) generation of a numerical computer code which uses this population model along with the steepest descent algorithm to optimize approach/landing trajectories, (3) implementation of this optimization code in several fictitious cases as well as for the community surrounding Patrick Henry International Airport, Virginia.
NASA Astrophysics Data System (ADS)
Ghulam Saber, Md; Arif Shahriar, Kh; Ahmed, Ashik; Hasan Sagor, Rakibul
2016-10-01
Particle swarm optimization (PSO) and invasive weed optimization (IWO) algorithms are used for extracting the modeling parameters of materials useful to the optics and photonics research community. To the best of our knowledge, these two bio-inspired algorithms are used here for the first time in this particular field. The algorithms are used for modeling graphene oxide and their performances are compared. Two objective functions are used for different boundary values. The root mean square (RMS) deviation is determined and compared.
Optimal control on bladder cancer growth model with BCG immunotherapy and chemotherapy
NASA Astrophysics Data System (ADS)
Dewi, C.; Trisilowati
2015-03-01
In this paper, an optimal control model of the growth of bladder cancer with BCG (Bacillus Calmette-Guerin) immunotherapy and chemotherapy is discussed. The purpose of this optimal control is to determine the amount of BCG vaccine and drug that should be given during treatment such that the growth of bladder cancer cells can be suppressed. The optimal control is obtained by applying Pontryagin's principle. Furthermore, the optimal control problem is solved numerically using the Forward-Backward Sweep method. Numerical simulations show the effectiveness of the vaccine and drug in controlling the growth of cancer cells. Hence, the treatment can reduce the number of cancer cells not infected with BCG as well as minimize the cost of the treatment.
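The Forward-Backward Sweep method can be illustrated on a small stand-in problem rather than the bladder-cancer model itself: minimize the integral of (x^2 + u^2) over [0, 1] subject to x' = u, x(0) = 1. Pontryagin's principle gives the costate equation lambda' = -2x with lambda(1) = 0 and the optimality condition u = -lambda/2; the sweep alternates a forward state integration, a backward costate integration, and a relaxed control update. This stand-in has the known solution u(0) = -tanh(1):

```python
def forward_backward_sweep(T=1.0, n=1000, iters=200, relax=0.5):
    """Forward-Backward Sweep for min ∫(x^2 + u^2) dt, x' = u, x(0) = 1.
    A stand-in LQ problem; explicit Euler in both sweeps."""
    h = T / n
    u = [0.0] * (n + 1)                       # initial control guess
    for _ in range(iters):
        # forward sweep: state equation x' = u
        x = [1.0] * (n + 1)
        for i in range(n):
            x[i + 1] = x[i] + h * u[i]
        # backward sweep: costate equation lambda' = -2x, lambda(T) = 0
        lam = [0.0] * (n + 1)
        for i in range(n, 0, -1):
            lam[i - 1] = lam[i] + h * 2.0 * x[i]
        # optimality condition u* = -lambda/2, applied with relaxation
        u = [(1 - relax) * ui + relax * (-li / 2.0) for ui, li in zip(u, lam)]
    return x, u
```

The relaxation factor damps the control update; without it the sweep can oscillate on longer horizons.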
Optimization of hydrometric monitoring network in urban drainage systems using information theory.
Yazdi, J
2017-10-01
Regular and continuous monitoring of urban runoff, in both quality and quantity aspects, is of great importance for controlling and managing surface runoff. Due to the considerable costs of establishing new gauges, optimization of the monitoring network is essential. This research proposes an approach for site selection of new discharge stations in urban areas, based on entropy theory in conjunction with multi-objective optimization tools and numerical models. The modeling framework provides an optimal trade-off between the maximum possible information content and the minimum shared information among stations. This approach was applied to the main surface-water collection system in Tehran to determine new optimal monitoring points under cost considerations. Experimental results on this drainage network show that the obtained cost-effective designs noticeably outperform the consulting engineers' proposal in terms of both information content and shared information. The research also identified the most frequently selected sites on the Pareto front, which may be important for decision makers in prioritizing gauge installation at those locations of the network.
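The entropy trade-off - maximize information content, minimize shared information - can be sketched with discretized records and a greedy selector. This is a generic sketch, not the paper's multi-objective formulation (which uses a Pareto front rather than a single greedy score):

```python
import math
from collections import Counter

def entropy(series):
    """Shannon entropy (bits) of a discretized record at one candidate site."""
    n = len(series)
    return -sum((c / n) * math.log2(c / n) for c in Counter(series).values())

def mutual_info(a, b):
    """Shared information between two sites: H(a) + H(b) - H(a, b)."""
    return entropy(a) + entropy(b) - entropy(list(zip(a, b)))

def greedy_select(sites, k):
    """Pick k sites maximizing a net-information score: each site's entropy
    minus the information it already shares with previously chosen sites."""
    chosen = []
    while len(chosen) < k:
        best = max((s for s in sites if s not in chosen),
                   key=lambda s: entropy(sites[s])
                   - sum(mutual_info(sites[s], sites[c]) for c in chosen))
        chosen.append(best)
    return chosen
```

A site whose record duplicates an already-chosen gauge scores zero net information and is passed over in favor of an independent site.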
A guided search genetic algorithm using mined rules for optimal affective product design
NASA Astrophysics Data System (ADS)
Fung, Chris K. Y.; Kwong, C. K.; Chan, Kit Yan; Jiang, H.
2014-08-01
Affective design is an important aspect of new product development, especially for consumer products, to achieve a competitive edge in the marketplace. It can help companies to develop new products that can better satisfy the emotional needs of customers. However, product designers usually encounter difficulties in determining the optimal settings of the design attributes for affective design. In this article, a novel guided search genetic algorithm (GA) approach is proposed to determine the optimal design attribute settings for affective design. The optimization model formulated based on the proposed approach applied constraints and guided search operators, which were formulated based on mined rules, to guide the GA search and to achieve desirable solutions. A case study on the affective design of mobile phones was conducted to illustrate the proposed approach and validate its effectiveness. Validation tests were conducted, and the results show that the guided search GA approach outperforms the GA approach without the guided search strategy in terms of GA convergence and computational time. In addition, the guided search optimization model is capable of improving GA to generate good solutions for affective design.
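The guided-search idea - a GA whose children are pushed back into the region suggested by mined rules - can be sketched with a plain bit-string GA and a repair operator standing in for the paper's constraint and guided-search operators. The fitness function and the mined rule below are invented toys:

```python
import random

def guided_ga(fitness, repair, n_bits=8, pop=30, gens=60, seed=1):
    """Bit-string GA with a guided-search repair step: after crossover and
    mutation, each child is repaired to respect the mined rules (sketch)."""
    rng = random.Random(seed)
    P = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=fitness, reverse=True)
        elite = P[: pop // 2]                  # truncation selection
        children = []
        while len(elite) + len(children) < pop:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n_bits)
            child = a[:cut] + b[cut:]          # one-point crossover
            child[rng.randrange(n_bits)] ^= 1  # one-bit mutation
            children.append(repair(child))     # guided-search repair step
        P = elite + children
    return max(P, key=fitness)
```

A repair operator encoding a rule such as "if attribute 0 is set, attribute 2 should be set" keeps the search inside the region the mined rules consider desirable, which is the mechanism the abstract credits for faster convergence.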
NASA Astrophysics Data System (ADS)
Shorikov, A. F.; Butsenko, E. V.
2017-10-01
This paper discusses the problem of multicriterial adaptive optimization of investment project control in the presence of several technologies. On the basis of network modeling, a new economic and mathematical model and a method are proposed for solving this problem. Network economic and mathematical modeling makes it possible to determine the optimal time and calendar schedule for the implementation of an investment project and serves as an instrument to increase the economic potential and competitiveness of the enterprise. Using a practical example, the processes of forming network models are shown, including the definition of the sequence of actions of a particular investment planning process, and network-based work schedules are constructed. The parameters of the network models are calculated. Optimal (critical) paths are formed and the optimal time for implementing the chosen technologies of the investment project is calculated. The selection of the optimal technology from a set of possible technologies for project implementation is also shown, taking into account the time and cost of the work. The proposed model and method for solving the problem of managing investment projects can serve as a basis for the development, creation and application of appropriate computer information systems to support managerial decision making.
Optimizing separate phase light hydrocarbon recovery from contaminated unconfined aquifers
NASA Astrophysics Data System (ADS)
Cooper, Grant S.; Peralta, Richard C.; Kaluarachchi, Jagath J.
A modeling approach is presented that optimizes separate phase recovery of light non-aqueous phase liquids (LNAPL) for a single dual-extraction well in a homogeneous, isotropic unconfined aquifer. A simulation/regression/optimization (S/R/O) model is developed to predict, analyze, and optimize the oil recovery process. The approach combines detailed simulation, nonlinear regression, and optimization. The S/R/O model utilizes nonlinear regression equations describing system response to time-varying water pumping and oil skimming. Regression equations are developed for residual oil volume and free oil volume. The S/R/O model determines optimized time-varying (stepwise) pumping rates which minimize residual oil volume and maximize free oil recovery while causing free oil volume to decrease a specified amount. This S/R/O modeling approach implicitly immobilizes the free product plume by reversing the water table gradient while achieving containment. Application to a simple representative problem illustrates the S/R/O model utility for problem analysis and remediation design. When compared with the best steady pumping strategies, the optimal stepwise pumping strategy improves free oil recovery by 11.5% and reduces the amount of residual oil left in the system due to pumping by 15%. The S/R/O model approach offers promise for enhancing the design of free phase LNAPL recovery systems and to help in making cost-effective operation and management decisions for hydrogeologists, engineers, and regulators.
Liu, Jie; Zhang, Fu-Dong; Teng, Fei; Li, Jun; Wang, Zhi-Hong
2014-10-01
In order to detect the oil yield of oil shale in situ, based on portable near-infrared spectroscopy analytical technology, modeling and analysis methods for in-situ detection were researched with 66 rock core samples from well drilling No. 2 of the Fuyu oil shale base in Jilin. With the developed portable spectrometer, spectra in 3 data formats (reflectance, absorbance and K-M function) were acquired. With 4 different modeling data optimization methods - principal component analysis-Mahalanobis distance (PCA-MD) for eliminating abnormal samples, uninformative variables elimination (UVE) for wavelength selection, and their combinations PCA-MD + UVE and UVE + PCA-MD - 2 modeling methods - partial least squares (PLS) and back propagation artificial neural network (BPANN) - and the same data pre-processing, modeling and analysis experiments were performed to determine the optimal analysis model and method. The results show that the data format, the modeling data optimization method and the modeling method all affect the analysis precision of the model. Whether or not an optimization method is used, reflectance or K-M function is the proper spectrum format of the modeling database for both modeling methods. Using the two modeling methods and the four data optimization methods, the model precisions of the same modeling database differ. For the PLS modeling method, the PCA-MD and UVE + PCA-MD data optimization methods can improve the modeling precision of a database using the K-M function spectrum data format. For the BPANN modeling method, the UVE, UVE + PCA-MD and PCA-MD + UVE data optimization methods can improve the modeling precision of a database using any of the 3 spectrum data formats. Except for the case using reflectance spectra and the PCA-MD data optimization method, the modeling precision of the BPANN method is better than that of the PLS method. Modeling with reflectance spectra, the UVE optimization method and the BPANN modeling method gives the highest analysis precision: the correlation coefficient (Rp) is 0.92 and the standard error of prediction (SEP) is 0.69%.
A system-level cost-of-energy wind farm layout optimization with landowner modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Le; MacDonald, Erin
This work applies an enhanced levelized wind farm cost model, including landowner remittance fees, to determine optimal turbine placements under three landowner participation scenarios and two land-plot shapes. Instead of assuming a continuous piece of land is available for the wind farm construction, as in most layout optimizations, the problem formulation represents landowner participation scenarios as a binary string variable, along with the number of turbines. The cost parameters and model are a combination of models from the National Renewable Energy Laboratory (NREL), Lawrence Berkeley National Laboratory, and Windustry. The system-level cost-of-energy (COE) optimization model is also tested under two land-plot shapes: equally-sized square land plots and unequal rectangle land plots. The optimal COE results are compared to actual COE data and found to be realistic. The results show that landowner remittances account for approximately 10% of farm operating costs across all cases. Irregular land-plot shapes are easily handled by the model. We find that larger land plots do not necessarily receive higher remittance fees. The model can help site developers identify the most crucial land plots for project success and the optimal positions of turbines, with realistic estimates of costs and profitability.
NASA Technical Reports Server (NTRS)
Frenklach, Michael; Wang, Hai; Rabinowitz, Martin J.
1992-01-01
A method of systematic optimization, solution mapping, as applied to a large-scale dynamic model is presented. The basis of the technique is parameterization of model responses in terms of model parameters by simple algebraic expressions. These expressions are obtained by computer experiments arranged in a factorial design. The developed parameterized responses are then used in a joint multiparameter multidata-set optimization. A brief review of the mathematical background of the technique is given. The concept of active parameters is discussed. The technique is applied to determine an optimum set of parameters for a methane combustion mechanism. Five independent responses - comprising ignition delay times, pre-ignition methyl radical concentration profiles, and laminar premixed flame velocities - were optimized with respect to thirteen reaction rate parameters. The numerical predictions of the optimized model are compared to those computed with several recent literature mechanisms. The utility of the solution mapping technique in situations where the optimum is not unique is also demonstrated.
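The core of solution mapping, fitting a simple algebraic surrogate to responses computed at factorial design points, can be sketched as follows (a 2^2 design with illustrative response values, not the methane-mechanism data):

```python
# Two-level (2^2) factorial design in coded variables x1, x2 = +/-1, with a
# simulated model response at each corner (illustrative numbers only).
design = [(-1, -1), (+1, -1), (-1, +1), (+1, +1)]
y      = [10.0,     14.0,     11.0,     19.0]   # computer-experiment responses

n = len(design)
b0  = sum(y) / n                                               # mean
b1  = sum(x1 * yi for (x1, _), yi in zip(design, y)) / n       # main effect of x1
b2  = sum(x2 * yi for (_, x2), yi in zip(design, y)) / n       # main effect of x2
b12 = sum(x1 * x2 * yi for (x1, x2), yi in zip(design, y)) / n # interaction

def surrogate(x1, x2):
    """Cheap algebraic stand-in for the expensive kinetic simulation."""
    return b0 + b1 * x1 + b2 * x2 + b12 * x1 * x2

# Four points, four coefficients: the surrogate reproduces the design exactly
print(surrogate(+1, +1))   # 19.0
```

In the joint optimization, such surrogates for every response replace the dynamic model, so the multiparameter fit becomes an ordinary algebraic least-squares problem.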
NASA Astrophysics Data System (ADS)
Mahata, Puspita; Mahata, Gour Chandra; Kumar De, Sujit
2018-03-01
Traditional supply chain inventory models with trade credit usually assume only that upstream suppliers offer downstream retailers a fixed credit period. In practice, however, retailers also provide a credit period to customers to promote market competition. In this paper, we formulate an optimal supply chain inventory model under a two-level trade credit policy with default risk consideration. Here, the demand is assumed to be credit-sensitive and an increasing function of time. The major objective is to determine the retailer's optimal credit period and cycle time such that the total profit per unit time is maximized. The existence and uniqueness of the optimal solution to the presented model are examined, and an easy method is shown for finding the optimal inventory policies of the considered problem. Finally, numerical examples and a sensitivity analysis are presented to illustrate the developed model and to provide some managerial insights.
NASA Astrophysics Data System (ADS)
Feyen, Luc; Gorelick, Steven M.
2005-03-01
We propose a framework that combines simulation optimization with Bayesian decision analysis to evaluate the worth of hydraulic conductivity data for optimal groundwater resources management in ecologically sensitive areas. A stochastic simulation optimization management model is employed to plan regionally distributed groundwater pumping while preserving the hydroecological balance in wetland areas. Because predictions made by an aquifer model are uncertain, groundwater supply systems operate below maximum yield. Collecting data from the groundwater system can potentially reduce predictive uncertainty and increase safe water production. The price paid for improvement in water management is the cost of collecting the additional data. Efficient data collection using Bayesian decision analysis proceeds in three stages: (1) The prior analysis determines the optimal pumping scheme and profit from water sales on the basis of known information. (2) The preposterior analysis estimates the optimal measurement locations and evaluates whether each sequential measurement will be cost-effective before it is taken. (3) The posterior analysis then revises the prior optimal pumping scheme and consequent profit, given the new information. Stochastic simulation optimization employing a multiple-realization approach is used to determine the optimal pumping scheme in each of the three stages. The cost of new data must not exceed the expected increase in benefit obtained in optimal groundwater exploitation. An example based on groundwater management practices in Florida aimed at wetland protection showed that the cost of data collection more than paid for itself by enabling a safe and reliable increase in production.
Analysis of EnergyPlus for use in residential building energy optimization
NASA Astrophysics Data System (ADS)
Spencer, Justin S.
This work explored the utility of EnergyPlus as a simulation engine for doing residential building energy optimization, with the objective of finding the modeling areas that require further development in EnergyPlus for residential optimization applications. This work was conducted primarily during 2006-2007, with publication occurring later in 2010. The assessments and recommendations apply to the simulation tool versions available in 2007. During this work, an EnergyPlus v2.0 (2007) input file generator was developed for use in BEopt 0.8.0.4 (2007). BEopt 0.8.0.4 is a residential Building Energy optimization program developed at the National Renewable Energy Laboratory in Golden, Colorado. Residential modeling capabilities of EnergyPlus v2.0 were scrutinized and tested. Modeling deficiencies were identified in a number of areas. These deficiencies were compared to deficiencies in the DOE2.2 V44E4(2007)/TRNSYS simulation engines. The highest priority gaps in EnergyPlus v2.0's residential modeling capability are in infiltration, duct leakage, and foundation modeling. Optimization results from DOE2.2 V44E4 and EnergyPlus v2.0 were analyzed to search for modeling differences that have a significant impact on optimization results. Optimal buildings at different energy savings levels were compared to look for biases. It was discovered that the EnergyPlus v2.0 optimizations consistently chose higher wall insulation levels than the DOE2.2 V44E4 optimizations. The points composing the optimal paths chosen by DOE2.2 V44E4 and EnergyPlus v2.0 were compared to look for points chosen by one optimization that were significantly different from the other optimal path. These outliers were compared to consensus optimal points to determine the simulation differences that cause disparities in the optimization results. The differences were primarily caused by modeling of window radiation exchange and HVAC autosizing.
Is there a trade-off between longevity and quality of life in Grossman's pure investment model?
Eisenring, C
2000-12-01
The question is posed whether an individual maximizes lifetime or trades off longevity for quality of life in Grossman's pure investment (PI)-model. It is shown that the answer critically hinges on the assumed production function for healthy time. If the production function for healthy time produces a trade-off between life-span and quality of life, one has to solve a sequence of fixed time problems. The one offering maximal intertemporal utility determines optimal longevity. Comparative static results of optimal longevity for a simplified version of the PI-model are derived. The obtained results predict that higher initial endowments of wealth and health, a rise in the wage rate, or improvements in the technology of producing healthy time, all increase the optimal length of life. On the other hand, optimal longevity is decreasing in the depreciation and interest rate. From a technical point of view, the paper illustrates that a discrete time equivalent to the transversality condition for optimal longevity employed in continuous optimal control models does not exist. Copyright 2000 John Wiley & Sons, Ltd.
Adaptive optimal stochastic state feedback control of resistive wall modes in tokamaks
NASA Astrophysics Data System (ADS)
Sun, Z.; Sen, A. K.; Longman, R. W.
2006-01-01
An adaptive optimal stochastic state feedback control is developed to stabilize the resistive wall mode (RWM) instability in tokamaks. The extended least-square method with exponential forgetting factor and covariance resetting is used to identify (experimentally determine) the time-varying stochastic system model. A Kalman filter is used to estimate the system states. The estimated system states are passed on to an optimal state feedback controller to construct control inputs. The Kalman filter and the optimal state feedback controller are periodically redesigned online based on the identified system model. This adaptive controller can stabilize the time-dependent RWM in a slowly evolving tokamak discharge. This is accomplished within a time delay of roughly four times the inverse of the growth rate for the time-invariant model used.
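A scalar sketch of the identification step, recursive least squares with an exponential forgetting factor (the actual RWM system is multivariable and the paper also uses covariance resetting; all values here are illustrative):

```python
import random

random.seed(0)

# Toy time-varying plant y_k = theta * u_k + noise; forgetting lets the
# estimate track theta when it drifts mid-run.
lam = 0.95                    # forgetting factor: data of age j weighted lam**j
theta_hat, P = 0.0, 100.0     # initial estimate and (scalar) covariance

true_theta = 2.0
for k in range(200):
    if k == 100:
        true_theta = 3.0                  # plant parameter drifts
    u = random.uniform(-1, 1)             # probing input
    y = true_theta * u + random.gauss(0, 0.01)
    K = P * u / (lam + u * P * u)         # RLS gain
    theta_hat += K * (y - theta_hat * u)  # estimate update
    P = (P - K * u * P) / lam             # covariance update with forgetting

print(theta_hat)   # tracks close to 3.0 after the change
```

The identified model is what the Kalman filter and optimal state feedback controller are periodically redesigned against in the adaptive scheme.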
NASA Astrophysics Data System (ADS)
Shah, Rahul H.
Production costs account for the largest share of the overall cost of manufacturing facilities. With the U.S. industrial sector becoming more and more competitive, manufacturers are looking for more cost and resource efficient working practices. Operations management and production planning have shown their capability to dramatically reduce manufacturing costs and increase system robustness. When implementing operations related decision making and planning, two fields that have shown to be most effective are maintenance and energy. Unfortunately, the current research that integrates both is limited. Additionally, these studies fail to consider parameter domains and optimization on joint energy and maintenance driven production planning. Accordingly, production planning methodology that considers maintenance and energy is investigated. Two models are presented to achieve a well-rounded operating strategy. The first is a joint energy and maintenance production scheduling model. The second is a cost per part model considering maintenance, energy, and production. The proposed methodology involves a Time-of-Use electricity demand response program, buffer and holding capacity, station reliability, production rate, station rated power, and more. In practice, the scheduling problem can be used to determine a joint energy, maintenance, and production schedule. Meanwhile, the cost per part model can be used to: (1) test the sensitivity of the obtained optimal production schedule and its corresponding savings by varying key production system parameters; and (2) determine optimal system parameter combinations when using the joint energy, maintenance, and production planning model. Additionally, a factor analysis on the system parameters is conducted, and the corresponding performance of the production schedule under variable parameter conditions is evaluated.
Also, parameter optimization guidelines that incorporate maintenance and energy parameter decision making in the production planning framework are discussed. A modified Particle Swarm Optimization solution technique is adopted to solve the proposed scheduling problem. The algorithm is described in detail and compared to a Genetic Algorithm. Case studies are presented to illustrate the benefits of using the proposed model and the effectiveness of the Particle Swarm Optimization approach. Numerical experiments are implemented and analyzed to test the effectiveness of the proposed model. The proposed scheduling strategy can achieve savings of around 19 to 27% in cost per part when compared to the baseline scheduling scenarios. By optimizing key production system parameters from the cost per part model, the baseline scenarios can obtain around 20 to 35% in savings for the cost per part. These savings further increase by 42 to 55% when system parameter optimization is integrated with the proposed scheduling problem. Using this method, the most influential parameters on the cost per part are the rated power from production, the production rate, and the initial machine reliabilities. The modified Particle Swarm Optimization algorithm allows greater diversity and exploration than the Genetic Algorithm for the proposed joint model, which makes it more computationally efficient in determining the optimal schedule: the Genetic Algorithm achieved a solution quality of 2,279.63 at an expense of 2,300 seconds of computational effort, whereas the proposed Particle Swarm Optimization algorithm achieved a solution quality of 2,167.26 in less than half that computational effort.
TH-E-BRF-06: Kinetic Modeling of Tumor Response to Fractionated Radiotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhong, H; Gordon, J; Chetty, I
2014-06-15
Purpose: Accurate calibration of radiobiological parameters is crucial to predicting radiation treatment response. Modeling differences may have a significant impact on calibrated parameters. In this study, we have integrated two existing models with kinetic differential equations to formulate a new tumor regression model for calibrating radiobiological parameters for individual patients. Methods: A system of differential equations that characterizes the birth-and-death process of tumor cells in radiation treatment was analytically solved. The solution of this system was used to construct an iterative model (Z-model). The model consists of three parameters: tumor doubling time Td, half-life of dying cells Tr and cell survival fraction SFD under dose D. The Jacobian determinant of this model was proposed as a constraint to optimize the three parameters for six head and neck cancer patients. The derived parameters were compared with those generated from the two existing models, the Chvetsov model (C-model) and the Lim model (L-model). The C-model and L-model were optimized with the parameter Td fixed. Results: With the Jacobian-constrained Z-model, the mean of the optimized cell survival fractions is 0.43±0.08, and the half-life of dying cells averaged over the six patients is 17.5±3.2 days. The parameters Tr and SFD optimized with the Z-model differ by 1.2% and 20.3% from those optimized with the Td-fixed C-model, and by 32.1% and 112.3% from those optimized with the Td-fixed L-model, respectively. Conclusion: The Z-model was analytically constructed from the cell-population differential equations to describe changes in the number of different tumor cells during the course of fractionated radiation treatment. The Jacobian constraints were proposed to optimize the three radiobiological parameters. The developed modeling and optimization methods may help develop high-quality treatment regimens for individual patients.
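A hypothetical discrete-time reading of such a birth-and-death model (not the paper's analytical Z-model; parameter values are only loosely inspired by the reported means) might look like:

```python
# Viable cells survive each daily fraction with probability SF_D and regrow
# with doubling time Td; killed cells clear with half-life Tr. Imaging sees
# the total of both pools. All parameter values are illustrative.
Td, Tr, SF_D = 40.0, 17.5, 0.43   # days, days, survival per fraction

viable, dying = 1.0, 0.0          # normalized cell populations
dt = 1.0                          # one treatment day per step
for day in range(30):
    grow   = 2.0 ** (dt / Td)     # exponential regrowth of survivors
    decay  = 0.5 ** (dt / Tr)     # exponential clearance of dying cells
    killed = viable * (1.0 - SF_D)
    viable = (viable - killed) * grow
    dying  = dying * decay + killed
total = viable + dying            # what a regression curve would measure

print(total)   # relative tumor volume after 30 daily fractions
```

Calibration would run such a forward model inside an optimizer, adjusting Td, Tr and SF_D until the predicted regression curve matches the patient's measured volumes.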
An optimal repartitioning decision policy
NASA Technical Reports Server (NTRS)
Nicol, D. M.; Reynolds, P. F., Jr.
1986-01-01
A central problem to parallel processing is the determination of an effective partitioning of workload to processors. The effectiveness of any given partition is dependent on the stochastic nature of the workload. The problem of determining when and if the stochastic behavior of the workload has changed enough to warrant the calculation of a new partition is treated. The problem is modeled as a Markov decision process, and an optimal decision policy is derived. Quantification of this policy is usually intractable. A heuristic policy which performs nearly optimally is investigated empirically. The results suggest that the detection of change is the predominant issue in this problem.
Duarte, Belmiro P.M.; Wong, Weng Kee; Atkinson, Anthony C.
2016-01-01
T-optimum designs for model discrimination are notoriously difficult to find because of the computational difficulty involved in solving an optimization problem that involves two layers of optimization. Only a handful of analytical T-optimal designs are available for the simplest problems; the rest in the literature are found using specialized numerical procedures for a specific problem. We propose a potentially more systematic and general way for finding T-optimal designs using a Semi-Infinite Programming (SIP) approach. The strategy requires that we first reformulate the original minimax or maximin optimization problem into an equivalent semi-infinite program and solve it using an exchange-based method where lower and upper bounds produced by solving the outer and the inner programs, are iterated to convergence. A global Nonlinear Programming (NLP) solver is used to handle the subproblems, thus finding the optimal design and the least favorable parametric configuration that minimizes the residual sum of squares from the alternative or test models. We also use a nonlinear program to check the global optimality of the SIP-generated design and automate the construction of globally optimal designs. The algorithm is successfully used to produce results that coincide with several T-optimal designs reported in the literature for various types of model discrimination problems with normally distributed errors. However, our method is more general, merely requiring that the parameters of the model be estimated by a numerical optimization. PMID:27330230
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thompson, Stephen R., E-mail: stephen.thompson@sesiahs.health.nsw.gov.au; Department of Radiation Oncology, Prince of Wales Hospital, Sydney; University of New South Wales, Sydney
Purpose: We aimed to estimate the optimal proportion of all gynecological cancers that should be treated with brachytherapy (BT)-the optimal brachytherapy utilization rate (BTU)-to compare this with actual gynecological BTU and to assess the effects of nonmedical factors on access to BT. Methods and Materials: The previously constructed inter/multinational guideline-based peer-reviewed models of optimal BTU for cancers of the uterine cervix, uterine corpus, and vagina were combined to estimate optimal BTU for all gynecological cancers. The robustness of the model was tested by univariate and multivariate sensitivity analyses. The resulting model was applied to New South Wales (NSW), the United States, and Western Europe. Actual BTU was determined for NSW by a retrospective patterns-of-care study of BT; for Western Europe from published reports; and for the United States from Surveillance, Epidemiology, and End Results data. Differences between optimal and actual BTU were assessed. The effect of nonmedical factors on access to BT in NSW were analyzed. Results: Gynecological BTU was as follows: NSW 28% optimal (95% confidence interval [CI] 26%-33%) compared with 14% actual; United States 30% optimal (95% CI 26%-34%) and 10% actual; and Western Europe 27% optimal (95% CI 25%-32%) and 16% actual. On multivariate analysis, NSW patients were more likely to undergo gynecological BT if residing in an Area Health Service equipped with BT (odds ratio 1.76, P=.008) and if residing in socioeconomically disadvantaged postcodes (odds ratio 1.12, P=.05), but remoteness of residence was not significant. Conclusions: Gynecological BT is underutilized in NSW, Western Europe, and the United States given evidence-based guidelines. Access to BT equipment in NSW was significantly associated with higher utilization rates. Causes of underutilization elsewhere were undetermined.
Our model of optimal BTU can be used as a quality assurance tool, providing an evidence-based benchmark against which actual patterns of practice can be measured. It can also be used to assist in determining the adequacy of BT resource allocation.
NASA Astrophysics Data System (ADS)
Mansor, S. B.; Pormanafi, S.; Mahmud, A. R. B.; Pirasteh, S.
2012-08-01
In this study, a geospatial model for land use allocation was developed from the view of simulating the biological autonomous adaptability to the environment and the infrastructural preference. The model was developed based on a multi-agent genetic algorithm. The model was customized to accommodate the constraints set for the study area, namely resource saving and environmental friendliness. The model was then applied to solve the practical multi-objective spatial optimization allocation problems of land use in the core region of the Menderjan Basin in Iran. The first task was to study the dominant crops and the economic suitability evaluation of land. The second task was to determine the fitness function for the genetic algorithm. The third objective was to optimize the land use map using economic benefits. The results indicated that the proposed model performs much better on complex multi-objective spatial optimization allocation problems and is a promising method for generating land use alternatives for further consideration in spatial decision-making.
Ahn, Yongjun; Yeo, Hwasoo
2015-01-01
The charging infrastructure location problem is becoming more significant due to the extensive adoption of electric vehicles. Efficient charging station planning can solve deeply rooted problems, such as driving-range anxiety and the stagnation of new electric vehicle consumers. In the initial stage of introducing electric vehicles, the allocation of charging stations is difficult to determine due to the uncertainty of candidate sites and unidentified charging demands, which are determined by diverse variables. This paper introduces the Estimating the Required Density of EV Charging (ERDEC) stations model, which is an analytical approach to estimating the optimal density of charging stations for certain urban areas, which are subsequently aggregated to city level planning. The optimal charging station's density is derived to minimize the total cost. A numerical study is conducted to obtain the correlations among the various parameters in the proposed model, such as regional parameters, technological parameters and coefficient factors. To investigate the effect of technological advances, the corresponding changes in the optimal density and total cost are also examined by various combinations of technological parameters. Daejeon city in South Korea is selected for the case study to examine the applicability of the model to real-world problems. With real taxi trajectory data, the optimal density map of charging stations is generated. These results can provide the optimal number of chargers for driving without driving-range anxiety. In the initial planning phase of installing charging infrastructure, the proposed model can be applied to a relatively extensive area to encourage the usage of electric vehicles, especially areas that lack information, such as exact candidate sites for charging stations and other data related with electric vehicles. 
The methods and results of this paper can serve as a planning guideline to facilitate the extensive adoption of electric vehicles.
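A stylized version of the density trade-off behind such a model (not the ERDEC formulation itself): if station cost grows linearly with density d while drivers' expected access cost scales like 1/sqrt(d), the cost-minimizing density has a closed form.

```python
# Total cost per unit area: C(d) = a*d + b/sqrt(d), where a is the annualized
# cost per unit of station density and b the access-cost coefficient.
# Both coefficients below are hypothetical.
a = 50.0
b = 200.0

# Setting dC/dd = a - b / (2 d^{3/2}) = 0 gives d* = (b / (2a))^(2/3)
d_star = (b / (2.0 * a)) ** (2.0 / 3.0)
print(d_star)   # stations per km^2 minimizing total cost

# Sanity check against a coarse numeric scan of the same cost function
best = min((a * d + b / d ** 0.5, d) for d in [x / 1000.0 for x in range(1, 10000)])
```

The square-root access-cost scaling comes from mean nearest-station distance in a uniformly served area; a city-level plan would evaluate d_star region by region with local coefficients.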
Planning a Target Renewable Portfolio using Atmospheric Modeling and Stochastic Optimization
NASA Astrophysics Data System (ADS)
Hart, E.; Jacobson, M. Z.
2009-12-01
A number of organizations have suggested that an 80% reduction in carbon emissions by 2050 is a necessary step to mitigate climate change and that decarbonization of the electricity sector is a crucial component of any strategy to meet this target. Integration of large renewable and intermittent generators poses many new problems in power system planning. In this study, we attempt to determine an optimal portfolio of renewable resources to meet best the fluctuating California load while also meeting an 80% carbon emissions reduction requirement. A stochastic optimization scheme is proposed that is based on a simplified model of the California electricity grid. In this single-busbar power system model, the load is met with generation from wind, solar thermal, photovoltaic, hydroelectric, geothermal, and natural gas plants. Wind speeds and insolation are calculated using GATOR-GCMOM, a global-through-urban climate-weather-air pollution model. Fields were produced for California and Nevada at 21km SN by 14 km WE spatial resolution every 15 minutes for the year 2006. Load data for 2006 were obtained from the California ISO OASIS database. Maximum installed capacities for wind and solar thermal generation were determined using a GIS analysis of potential development sites throughout the state. The stochastic optimization scheme requires that power balance be achieved in a number of meteorological and load scenarios that deviate from the forecasted (or modeled) data. By adjusting the error distributions of the forecasts, the model describes how improvements in wind speed and insolation forecasting may affect the optimal renewable portfolio. Using a simple model, we describe the diversity, size, and sensitivities of a renewable portfolio that is best suited to the resources and needs of California and that contributes significantly to reduction of the state’s carbon emissions.
Hogiri, Tomoharu; Tamashima, Hiroshi; Nishizawa, Akitoshi; Okamoto, Masahiro
2018-02-01
To optimize monoclonal antibody (mAb) production in Chinese hamster ovary cell cultures, culture pH should be temporally controlled with high resolution. In this study, we propose a new pH-dependent dynamic model represented by simultaneous differential equations including a minimum of six system components, depending on the pH value. All kinetic parameters in the dynamic model were estimated using an evolutionary numerical optimization (real-coded genetic algorithm) method based on experimental time-course data obtained at different pH values ranging from 6.6 to 7.2. We determined an optimal pH-shift schedule theoretically. We validated this optimal pH-shift schedule experimentally, and mAb production increased by approximately 40% with this schedule. Throughout this study, it was suggested that the culture pH-shift optimization strategy using a pH-dependent dynamic model is suitable to optimize any pH-shift schedule for CHO cell lines used in mAb production projects. Copyright © 2017 The Society for Biotechnology, Japan. Published by Elsevier B.V. All rights reserved.
Simulation based optimized beam velocity in additive manufacturing
NASA Astrophysics Data System (ADS)
Vignat, Frédéric; Béraud, Nicolas; Villeneuve, François
2017-08-01
Manufacturing good parts with additive technologies relies on melt pool dimensions and temperature, which are controlled by manufacturing strategies often decided on the machine side. Strategies are built on the beam path and a variable energy input. Beam paths are often a mix of contour and hatching strategies filling the contours at each slice. The energy input depends on beam intensity and speed and is determined from simple thermal models so as to control melt pool dimensions and temperature and ensure porosity-free material. These models take into account variations in the thermal environment, such as overhanging surfaces or back-and-forth hatching paths. However, not all situations are correctly handled, and precision is limited. This paper proposes a new method to determine the energy input from a full build-chamber 3D thermal simulation. Using the results of the simulation, the energy is modified to keep the melt pool temperature in a predetermined range. The paper first presents an experimental method to determine the optimal temperature range. In a second part, the method to optimize the beam speed from the simulation results is presented. Finally, the optimized beam path is tested in the EBM machine, and the resulting parts are compared with parts built with an ordinary beam path.
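Assuming a lumped relation in which melt pool temperature scales like T ~ k*P/v at fixed power P (a crude stand-in for the 3D thermal simulation; all values here are hypothetical), the speed-correction step can be sketched as:

```python
# Allowed melt pool temperature window and the target in its middle (values
# hypothetical, not the experimentally determined range from the paper).
T_min, T_max = 1700.0, 1900.0   # K
T_target = 0.5 * (T_min + T_max)

def adjust_speed(v, T_sim):
    """Rescale beam speed so the temperature predicted at speed v is driven
    to T_target. With T proportional to 1/v at fixed power, a hotter-than-
    target prediction calls for a proportionally faster beam."""
    return v * T_sim / T_target

v = 400.0        # mm/s, current beam speed on this path segment
T_sim = 2000.0   # K, melt pool temperature predicted by the simulation
v = adjust_speed(v, T_sim)
print(round(v, 1))   # speed up to cool the melt pool
```

A real implementation would iterate this correction segment by segment against the full 3D simulation rather than a single proportional relation.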
Dissolved oxygen content prediction in crab culture using a hybrid intelligent method
Yu, Huihui; Chen, Yingyi; Hassan, ShahbazGul; Li, Daoliang
2016-01-01
A precise predictive model is needed to obtain a clear understanding of the changing dissolved oxygen content in outdoor crab ponds, to assess how to reduce risk and to optimize water quality management. The uncertainties in the data from multiple sensors are a significant factor when building a dissolved oxygen content prediction model. To increase prediction accuracy, a new hybrid dissolved oxygen content forecasting model based on the radial basis function neural networks (RBFNN) data fusion method and a least squares support vector machine (LSSVM) with an optimal improved particle swarm optimization(IPSO) is developed. In the modelling process, the RBFNN data fusion method is used to improve information accuracy and provide more trustworthy training samples for the IPSO-LSSVM prediction model. The LSSVM is a powerful tool for achieving nonlinear dissolved oxygen content forecasting. In addition, an improved particle swarm optimization algorithm is developed to determine the optimal parameters for the LSSVM with high accuracy and generalizability. In this study, the comparison of the prediction results of different traditional models validates the effectiveness and accuracy of the proposed hybrid RBFNN-IPSO-LSSVM model for dissolved oxygen content prediction in outdoor crab ponds. PMID:27270206
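The hyperparameter-search component can be illustrated with a plain particle swarm optimization minimizing a stand-in objective (a real pipeline would evaluate LSSVM cross-validation error here; the paper's improved PSO variant is not reproduced):

```python
import random

random.seed(1)

def cv_error(gamma, sigma):
    """Stand-in for the LSSVM cross-validation error surface over its two
    hyperparameters; minimum placed at (2, 0.5) for illustration."""
    return (gamma - 2.0) ** 2 + (sigma - 0.5) ** 2

# Standard PSO: inertia w, cognitive c1 and social c2 coefficients
n, iters, w, c1, c2 = 20, 60, 0.7, 1.5, 1.5
pos = [[random.uniform(0, 5), random.uniform(0, 5)] for _ in range(n)]
vel = [[0.0, 0.0] for _ in range(n)]
pbest = [p[:] for p in pos]
gbest = min(pbest, key=lambda p: cv_error(*p))[:]

for _ in range(iters):
    for i in range(n):
        for d in range(2):
            vel[i][d] = (w * vel[i][d]
                         + c1 * random.random() * (pbest[i][d] - pos[i][d])
                         + c2 * random.random() * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if cv_error(*pos[i]) < cv_error(*pbest[i]):
            pbest[i] = pos[i][:]
            if cv_error(*pbest[i]) < cv_error(*gbest):
                gbest = pbest[i][:]

print(gbest)   # converges near the optimum (2, 0.5)
```

Replacing cv_error with an actual LSSVM training-and-validation run turns this into the parameter search the hybrid model performs.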
Lightweight structure design for supporting plate of primary mirror
NASA Astrophysics Data System (ADS)
Wang, Xiao; Wang, Wei; Liu, Bei; Qu, Yan Jun; Li, Xu Peng
2017-10-01
A topological optimization design for the lightweight technology of the supporting plate of the primary mirror is presented in this paper. The supporting plate of the primary mirror is topologically optimized under conditions of determined shape, loads and environment, and the optimal structure is obtained. The diameter of the primary mirror in this paper is 450 mm, and its material is SiC. SiC/Al is selected as the supporting material. Six points of axial relative displacement are used as constraints in the optimization. The supporting plate model is established and its parameters are set. After analyzing the force of the primary mirror on the supporting plate, force and constraints are applied to the model. Modal analysis and static analysis of the supporting plate are calculated. The continuum structure topological optimization mathematical model is created with the variable-density method. The maximum deformation of the plate surface under the gravity of the mirror and the first mode frequency are assigned as response variables, and the entire volume of the supporting structure is taken as the objective function. The structures before and after optimization are analyzed using the finite element method. Results show that the fundamental frequency of the optimized structure increases by 29.85 Hz and its displacement is smaller than that of the traditional structure.
Finding the optimal lengths for three branches at a junction.
Woldenberg, M J; Horsfield, K
1983-09-21
This paper presents an exact analytical solution to the problem of locating the junction point between three branches so that the sum of the total costs of the branches is minimized. When the cost per unit length of each branch is known the angles between each pair of branches can be deduced following reasoning first introduced to biology by Murray. Assuming the outer ends of each branch are fixed, the location of the junction and the length of each branch are then deduced using plane geometry and trigonometry. The model has applications in determining the optimal cost of a branch or branches at a junction. Comparing the optimal to the actual cost of a junction is a new way to compare cost models for goodness of fit to actual junction geometry. It is an unambiguous measure and is superior to comparing observed and optimal angles between each daughter and the parent branch. We present data for 199 junctions in the pulmonary arteries of two human lungs. For the branches at each junction we calculated the best fitting value of x from the relationship that flow ∝ (radius)^x. We found that the value of x determined whether a junction was best fitted by a surface, volume, drag or power minimization model. While economy of explanation casts doubt that four models operate simultaneously, we found that optimality may still operate, since the angle to the major daughter is less than the angle to the minor daughter. Perhaps optimality combined with a space filling branching pattern governs the branching geometry of the pulmonary artery.
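The junction location can also be found numerically. The sketch below minimizes the cost-weighted sum of branch lengths by gradient descent; it is an illustrative counterpart to, not a reproduction of, the paper's exact trigonometric solution, and the endpoints and costs are made up.

```python
import math

def optimal_junction(ends, costs, iters=2000, step=0.01):
    """Locate the junction J minimizing sum_i c_i * |J - P_i| over fixed
    endpoints P_i with per-unit-length costs c_i, by gradient descent."""
    x = sum(p[0] for p in ends) / 3.0   # start at the centroid
    y = sum(p[1] for p in ends) / 3.0
    for _ in range(iters):
        gx = gy = 0.0
        for (px, py), c in zip(ends, costs):
            d = math.hypot(x - px, y - py) or 1e-12  # avoid division by zero
            gx += c * (x - px) / d                   # gradient of c * |J - P|
            gy += c * (y - py) / d
        x -= step * gx
        y -= step * gy
    return x, y

# Equal costs and a symmetric triangle: the optimum is the Fermat point,
# where all three branch angles are 120 degrees.
jx, jy = optimal_junction([(-1.0, 0.0), (1.0, 0.0), (0.0, 2.0)], (1.0, 1.0, 1.0))
```

With unequal costs the same routine reproduces the cost-weighted angle conditions the paper derives in closed form.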
Optimizing nursing human resource planning in British Columbia.
Lavieri, Mariel S; Puterman, Martin L
2009-06-01
This paper describes a linear programming hierarchical planning model that determines the optimal number of nurses to train, promote to management and recruit over a 20 year planning horizon to achieve specified workforce levels. Age dynamics and attrition rates of the nursing workforce are key model components. The model was developed to help policy makers plan a sustainable nursing workforce for British Columbia, Canada. An easy to use interface and considerable flexibility make it ideal for scenario and "What-If?" analyses.
Optimization of investment portfolio weight of stocks affected by market index
NASA Astrophysics Data System (ADS)
Azizah, E.; Rusyaman, E.; Supian, S.
2017-01-01
Stock price assessment, selection of an optimal combination, and measurement of the risk of a portfolio investment are important issues for investors. In this paper, a single index model is used for stock price assessment, and an optimization model is formulated using the Lagrange multiplier technique to determine the proportion of assets to be invested. The level of risk is estimated using the variance. These models are applied to stock price data from Lippo Bank and Bumi Putera.
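One standard Lagrange-multiplier formulation of this kind is the fully invested minimum-variance portfolio, whose closed form is w = C⁻¹1 / (1ᵀC⁻¹1) for covariance matrix C. The two-asset sketch below uses made-up numbers, not the Lippo Bank or Bumi Putera data.

```python
def min_variance_weights(cov):
    """Fully invested minimum-variance weights for two assets:
    w = C^{-1} 1 / (1' C^{-1} 1), obtained by a Lagrange multiplier
    on the budget constraint sum(w) = 1."""
    (a, b), (c, d) = cov
    det = a * d - b * c
    inv = [[d / det, -b / det], [-c / det, a / det]]   # C^{-1}
    u = [inv[0][0] + inv[0][1], inv[1][0] + inv[1][1]]  # C^{-1} 1
    s = u[0] + u[1]                                     # 1' C^{-1} 1
    return [u[0] / s, u[1] / s]

# Hypothetical covariance matrix: asset 0 has the lower variance,
# so it receives the larger weight.
w = min_variance_weights([[0.04, 0.01], [0.01, 0.09]])
```

The portfolio variance of the result is wᵀC w, the quantity the abstract estimates as risk.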
Optimization of Thermal Object Nonlinear Control Systems by Energy Efficiency Criterion.
NASA Astrophysics Data System (ADS)
Velichkin, Vladimir A.; Zavyalov, Vladimir A.
2018-03-01
This article presents the results of thermal object functioning control analysis (heat exchanger, dryer, heat treatment chamber, etc.). The results were used to determine a mathematical model of the generalized thermal control object. The appropriate optimality criterion was chosen to make the control more energy-efficient. The mathematical programming task was formulated based on the chosen optimality criterion, control object mathematical model and technological constraints. The “maximum energy efficiency” criterion helped avoid solving a system of nonlinear differential equations and solve the formulated problem of mathematical programming in an analytical way. It should be noted that in the case under review the search for optimal control and optimal trajectory reduces to solving an algebraic system of equations. In addition, it is shown that the optimal trajectory does not depend on the dynamic characteristics of the control object.
NASA Astrophysics Data System (ADS)
Pinson, Robin Marie
Mission proposals that land spacecraft on asteroids are becoming increasingly popular. However, in order to have a successful mission the spacecraft must reliably and softly land at the intended landing site with pinpoint precision. The problem under investigation is how to design a propellant (fuel) optimal powered descent trajectory that can be quickly computed onboard the spacecraft, without interaction from ground control. The goal is to autonomously design the optimal powered descent trajectory onboard the spacecraft immediately prior to the descent burn for use during the burn. Compared to a planetary powered landing problem, the challenges that arise from designing an asteroid powered descent trajectory include complicated nonlinear gravity fields, small rotating bodies, and low thrust vehicles. The nonlinear gravity fields cannot be represented by a constant gravity model nor a Newtonian model. The trajectory design algorithm needs to be robust and efficient to guarantee a designed trajectory and complete the calculations in a reasonable time frame. This research investigates the following questions: Can convex optimization be used to design the minimum propellant powered descent trajectory for a soft landing on an asteroid? Is this method robust and reliable to allow autonomy onboard the spacecraft without interaction from ground control? This research designed a convex optimization based method that rapidly generates the propellant optimal asteroid powered descent trajectory. The solution to the convex optimization problem is the thrust magnitude and direction, which designs and determines the trajectory. The propellant optimal problem was formulated as a second order cone program, a subset of convex optimization, through relaxation techniques by including a slack variable, change of variables, and incorporation of the successive solution method. 
Convex optimization solvers, especially second order cone programs, are robust, reliable, and are guaranteed to find the global minimum provided one exists. In addition, an outer optimization loop using Brent's method determines the optimal flight time corresponding to the minimum propellant usage over all flight times. Inclusion of additional trajectory constraints, solely vertical motion near the landing site and glide slope, were evaluated. Through a theoretical proof involving the Minimum Principle from Optimal Control Theory and the Karush-Kuhn-Tucker conditions it was shown that the relaxed problem is identical to the original problem at the minimum point. Therefore, the optimal solution of the relaxed problem is an optimal solution of the original problem, referred to as lossless convexification. A key finding is that this holds for all levels of gravity model fidelity. The designed thrust magnitude profiles were the bang-bang predicted by Optimal Control Theory. The first high fidelity gravity model employed was the 2x2 spherical harmonics model assuming a perfect triaxial ellipsoid and placement of the coordinate frame at the asteroid's center of mass and aligned with the semi-major axes. The spherical harmonics model is not valid inside the Brillouin sphere and this becomes relevant for irregularly shaped asteroids. Then, a higher fidelity model was implemented combining the 4x4 spherical harmonics gravity model with the interior spherical Bessel gravity model. All gravitational terms in the equations of motion are evaluated with the position vector from the previous iteration, creating the successive solution method. Methodology success was shown by applying the algorithm to three triaxial ellipsoidal asteroids with four different rotation speeds using the 2x2 gravity model. Finally, the algorithm was tested using the irregularly shaped asteroid, Castalia.
Comparison of DNQ/novolac resists for e-beam exposure
NASA Astrophysics Data System (ADS)
Fedynyshyn, Theodore H.; Doran, Scott P.; Lind, Michele L.; Lyszczarz, Theodore M.; DiNatale, William F.; Lennon, Donna; Sauer, Charles A.; Meute, Jeff
1999-12-01
We have surveyed the commercial resist market with the dual purpose of identifying diazoquinone/novolac based resists that have potential for use as e-beam mask making resists and baselining these resists for comparison against future mask making resist candidates. For completeness, this survey would require that each resist be compared with an optimized developer and development process. To accomplish this task in an acceptable time period, e-beam lithography modeling was employed to quickly identify the resist and developer combinations that lead to superior resist performance. We describe the verification of a method to quickly screen commercial i-line resists with different developers, by determining modeling parameters for i-line resists from e-beam exposures, modeling the resist performance, and comparing predicted performance versus actual performance. We determined the lithographic performance of several DNQ/novolac resists whose modeled performance suggests that sensitivities of less than 40 μC/cm² coupled with less than 10-nm CD change per percent change in dose are possible for target 600-nm features. This was accomplished by performing a series of statistically designed experiments on the leading resist candidates to optimize processing variables, followed by comparing experimentally determined resist sensitivities, latitudes, and profiles of the DNQ/novolac resists at their optimized process.
Xu, Liyuan; Gao, Haoshi; Li, Liangxing; Li, Yinnong; Wang, Liuyun; Gao, Chongkai; Li, Ning
2016-12-23
The effective permeability coefficient is of theoretical and practical importance in evaluating the bioavailability of drug candidates. However, most methods currently used to measure this coefficient are expensive and time-consuming. In this paper, we addressed these problems by proposing a new measurement method based on microemulsion liquid chromatography. First, the parallel artificial membrane permeability assay (PAMPA) model was used to determine the effective permeability of drugs so that quantitative retention-activity relationships could be established, which were used to optimize the microemulsion liquid chromatography. The most effective microemulsion system used a mobile phase of 6.0% (w/w) Brij35, 6.6% (w/w) butanol, 0.8% (w/w) octanol, and 86.6% (w/w) phosphate buffer (pH 7.4). Next, support vector machine and back-propagation neural networks were employed to develop a quantitative retention-activity relationship model associated with the optimal microemulsion system and to improve the prediction ability. Finally, an adequate correlation between experimental and predicted values was computed to verify the performance of the optimal model. The results indicate that microemulsion liquid chromatography can serve as a possible alternative to the PAMPA method for determination of high-throughput permeability and simulation of biological processes. Copyright © 2016. Published by Elsevier B.V.
Oil Formation Volume Factor Determination Through a Fused Intelligence
NASA Astrophysics Data System (ADS)
Gholami, Amin
2016-12-01
Volume change of oil between reservoir condition and standard surface condition is called oil formation volume factor (FVF), which is time-consuming, costly and labor-intensive to determine. This study proposes an accurate, rapid and cost-effective approach for determining FVF from reservoir temperature, dissolved gas oil ratio, and specific gravity of both oil and dissolved gas. Firstly, the structural risk minimization (SRM) principle of support vector regression (SVR) was employed to construct a robust model for estimating FVF from the aforementioned inputs. Subsequently, an alternating conditional expectation (ACE) was used for approximating optimal transformations of input/output data to more highly correlated data and consequently developing a sophisticated model between the transformed data. Eventually, a committee machine with SVR and ACE was constructed through the use of a hybrid genetic algorithm-pattern search (GA-PS). The committee machine integrates the ACE and SVR models in an optimal linear combination such that it benefits from both methods. A group of 342 data points was used for model development and a group of 219 data points was used for blind testing the constructed model. Results indicated that the committee machine performed better than the individual models.
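The optimal linear combination of two predictors can be illustrated with ordinary least squares. The paper tunes its combination with a hybrid GA-pattern search, so the closed-form sketch below, run on synthetic predictions, is only a simplified stand-in for that committee machine.

```python
def combine_predictors(p1, p2, y):
    """Least-squares weights (a, b) minimizing sum((a*p1 + b*p2 - y)^2),
    via the 2x2 normal equations (no intercept, for brevity)."""
    s11 = sum(x * x for x in p1)
    s22 = sum(x * x for x in p2)
    s12 = sum(u * v for u, v in zip(p1, p2))
    s1y = sum(u * v for u, v in zip(p1, y))
    s2y = sum(u * v for u, v in zip(p2, y))
    det = s11 * s22 - s12 * s12
    a = (s22 * s1y - s12 * s2y) / det
    b = (s11 * s2y - s12 * s1y) / det
    return a, b

# Synthetic example: the target is an exact 0.3/0.7 blend of the two
# model outputs, so the recovered weights should match.
p1 = [1.0, 2.0, 3.0, 4.0]
p2 = [2.0, 1.0, 4.0, 3.0]
y = [0.3 * u + 0.7 * v for u, v in zip(p1, p2)]
a, b = combine_predictors(p1, p2, y)
```

A GA or pattern search becomes worthwhile when the combination is constrained (e.g. nonnegative weights) or the loss is not quadratic.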
NASA Astrophysics Data System (ADS)
Chen, Hua-cai; Chen, Xing-dan; Lu, Yong-jun; Cao, Zhi-qiang
2006-01-01
Near infrared (NIR) reflectance spectroscopy was used to develop a fast determination method for total ginsenosides in ginseng (Panax ginseng) powder. The spectra were analyzed with the multiplicative signal correction (MSC) correlation method. The spectral regions best correlated with the total ginsenosides content were 1660-1880 nm and 2230-2380 nm. NIR calibration models for ginsenosides were built with multiple linear regression (MLR), principal component regression (PCR) and partial least squares (PLS) regression, respectively. The results showed that the calibration model built with PLS combined with MSC and the optimal spectral region was the best one. The correlation coefficient and the root mean square error of calibration (RMSEC) of the best calibration model were 0.98 and 0.15%, respectively. The optimal spectral region for calibration was 1204-2014 nm. The results suggest that using NIR to rapidly determine the total ginsenosides content in ginseng powder is feasible.
Merlé, Y; Mentré, F
1995-02-01
In this paper three criteria for designing experiments for Bayesian estimation of the parameters of models that are nonlinear in their parameters, when a prior distribution is available, are presented: the determinant of the Bayesian information matrix, the determinant of the pre-posterior covariance matrix, and the expected information provided by an experiment. A procedure to simplify the computation of these criteria is proposed in the case of continuous prior distributions and is compared with the criterion obtained from a linearization of the model about the mean of the prior distribution for the parameters. This procedure is applied to two models commonly encountered in the area of pharmacokinetics and pharmacodynamics: the one-compartment open model with bolus intravenous single-dose injection and the Emax model. They both involve two parameters. Additive as well as multiplicative gaussian measurement errors are considered with normal prior distributions. Various combinations of the variances of the prior distribution and of the measurement error are studied. Our attention is restricted to designs with limited numbers of measurements (1 or 2 measurements). This situation often occurs in practice when Bayesian estimation is performed. The optimal Bayesian designs that result vary with the variances of the parameter distribution and with the measurement error. The two-point optimal designs sometimes differ from the D-optimal designs for the mean of the prior distribution and may consist of replicating measurements. For the studied cases, the determinant of the Bayesian information matrix and its linearized form lead to the same optimal designs. In some cases, the pre-posterior covariance matrix can be far from its lower bound, namely, the inverse of the Bayesian information matrix, especially for the Emax model and a multiplicative measurement error.
The expected information provided by the experiment and the determinant of the pre-posterior covariance matrix generally lead to the same designs except for the Emax model and the multiplicative measurement error. Results show that these criteria can be easily computed and that they could be incorporated in modules for designing experiments.
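For a sense of how such designs are computed, the sketch below brute-forces a two-point D-optimal (non-Bayesian) design for the one-compartment bolus model C(t) = (Dose/V)·e^(−kt) with additive error, maximizing the determinant of the Fisher information over a time grid. Parameter values are illustrative, not the paper's.

```python
import math

def fim_det(times, dose=100.0, V=10.0, k=0.5):
    """Determinant of the 2x2 Fisher information matrix for
    C(t) = (dose/V) * exp(-k*t) with additive error; the rows of the
    design matrix are the sensitivities (dC/dV, dC/dk) at each time."""
    m = [[0.0, 0.0], [0.0, 0.0]]
    for t in times:
        e = math.exp(-k * t)
        s = [-(dose / V ** 2) * e, -(dose / V) * t * e]  # dC/dV, dC/dk
        for i in range(2):
            for j in range(2):
                m[i][j] += s[i] * s[j]
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

# Exhaustive search for the D-optimal two-point design on a 0.1 h grid:
# the optimum pairs the earliest allowed sample with one near t1 + 1/k.
grid = [round(0.1 * i, 1) for i in range(1, 101)]
best = max(((t1, t2) for t1 in grid for t2 in grid if t1 < t2),
           key=lambda ts: fim_det(ts))
```

Replacing this determinant with its expectation under the prior would give the Bayesian information criterion the abstract describes.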
Biologically Inspired, Anisoptropic Flexible Wing for Optimal Flapping Flight
2013-01-31
Under grant FA9550-07-1-0547, the project investigated anisotropic structural flexibility; conducted coordinated experimental and computational modeling to determine the roles of aerodynamic loading, wing inertia, and structural flexibility and elasticity; and developed surrogate tools for flapping wing MAV design and optimization.
Ozdemir, Utkan; Ozbay, Bilge; Ozbay, Ismail; Veli, Sevil
2014-09-01
In this work, a Taguchi L32 experimental design was applied to optimize biosorption of Cu(2+) ions by an easily available biosorbent, Sphagnum moss. With this aim, batch biosorption tests were performed to achieve the targeted experimental design with five factors (concentration, pH, biosorbent dosage, temperature and agitation time) at two different levels. Optimal experimental conditions were determined from the calculated signal-to-noise ratios. The "higher is better" approach was followed to calculate signal-to-noise ratios, as the aim was to obtain high metal removal efficiencies. The impact ratios of the factors were determined by the model. Within the study, Cu(2+) biosorption efficiencies were also predicted using the Taguchi method. Results of the model showed that experimental and predicted values were close to each other, demonstrating the success of the Taguchi approach. Furthermore, thermodynamic, isotherm and kinetic studies were performed to explain the biosorption mechanism. Calculated thermodynamic parameters were in good accordance with the results of the Taguchi model. Copyright © 2014 Elsevier Inc. All rights reserved.
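The "higher is better" signal-to-noise ratio used in such Taguchi analyses is S/N = −10·log₁₀((1/n)·Σ 1/yᵢ²). A short sketch, using hypothetical removal efficiencies rather than the paper's measurements:

```python
import math

def sn_larger_is_better(ys):
    """Taguchi 'higher is better' signal-to-noise ratio:
    S/N = -10 * log10( (1/n) * sum(1/y_i^2) ),
    where y_i are the replicate responses (here, removal efficiencies)."""
    n = len(ys)
    return -10.0 * math.log10(sum(1.0 / y ** 2 for y in ys) / n)

# Hypothetical replicate Cu(2+) removal efficiencies (%) for one run.
sn = sn_larger_is_better([85.0, 88.0, 90.0])
```

The factor level combination maximizing this ratio across runs is taken as the optimal condition; for a single constant response y the ratio reduces to 20·log₁₀(y).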
Jamsen, Kris M; Duffull, Stephen B; Tarning, Joel; Lindegardh, Niklas; White, Nicholas J; Simpson, Julie A
2012-07-11
Artemisinin-based combination therapy (ACT) is currently recommended as first-line treatment for uncomplicated malaria, but of concern, it has been observed that the effectiveness of the main artemisinin derivative, artesunate, has been diminished due to parasite resistance. This reduction in effect highlights the importance of the partner drugs in ACT and provides motivation to gain more knowledge of their pharmacokinetic (PK) properties via population PK studies. Optimal design methodology has been developed for population PK studies, which analytically determines a sampling schedule that is clinically feasible and yields precise estimation of model parameters. In this work, optimal design methodology was used to determine sampling designs for typical future population PK studies of the partner drugs (mefloquine, lumefantrine, piperaquine and amodiaquine) co-administered with artemisinin derivatives. The optimal designs were determined using freely available software and were based on structural PK models from the literature and the key specifications of 100 patients with five samples per patient, with one sample taken on the seventh day of treatment. The derived optimal designs were then evaluated via a simulation-estimation procedure. For all partner drugs, designs consisting of two sampling schedules (50 patients per schedule) with five samples per patient resulted in acceptable precision of the model parameter estimates. The sampling schedules proposed in this paper should be considered in future population pharmacokinetic studies where intensive sampling over many days or weeks of follow-up is not possible due to either ethical, logistic or economical reasons.
NASA Astrophysics Data System (ADS)
Teoh, Lay Eng; Khoo, Hooi Ling
2013-09-01
This study deals with two major aspects of airlines, i.e. supply and demand management. The supply aspect focuses on the mathematical formulation of an optimal fleet management model to maximize the operational profit of the airline, while the demand aspect focuses on the incorporation of mode choice modeling as part of the developed model. The proposed methodology is outlined in two stages: the Fuzzy Analytic Hierarchy Process is first adopted to capture mode choice modeling in order to quantify the probability of probable phenomena (for the aircraft acquisition/leasing decision). Then, an optimization model is developed as a probabilistic dynamic programming model to determine the optimal number and types of aircraft to be acquired and/or leased in order to meet stochastic demand during the planning horizon. The findings of an illustrative case study show that the proposed methodology is viable. The results demonstrate that the incorporation of mode choice modeling could affect the operational profit and fleet management decision of the airline at varying degrees.
NASA Astrophysics Data System (ADS)
Masternak, Tadeusz J.
This research determines temperature-constrained optimal trajectories for a scramjet-based hypersonic reconnaissance vehicle by developing an optimal control formulation and solving it using a variable order Gauss-Radau quadrature collocation method with a Non-Linear Programming (NLP) solver. The vehicle is assumed to be an air-breathing reconnaissance aircraft that has specified takeoff/landing locations, airborne refueling constraints, specified no-fly zones, and specified targets for sensor data collections. A three degree of freedom scramjet aircraft model is adapted from previous work and includes flight dynamics, aerodynamics, and thermal constraints. Vehicle control is accomplished by controlling angle of attack, roll angle, and propellant mass flow rate. This model is incorporated into an optimal control formulation that includes constraints on both the vehicle and mission parameters, such as avoidance of no-fly zones and coverage of high-value targets. To solve the optimal control formulation, a MATLAB-based package called General Pseudospectral Optimal Control Software (GPOPS-II) is used, which transcribes continuous time optimal control problems into an NLP problem. In addition, since a mission profile can have varying vehicle dynamics and en-route imposed constraints, the optimal control problem formulation can be broken up into several "phases" with differing dynamics and/or varying initial/final constraints. Optimal trajectories are developed using several different performance costs in the optimal control formulation: minimum time, minimum time with control penalties, and maximum range. The resulting analysis demonstrates that optimal trajectories that meet specified mission parameters and constraints can be quickly determined and used for larger-scale operational and campaign planning and execution.
NASA Technical Reports Server (NTRS)
Pavarini, C.
1974-01-01
Work in two somewhat distinct areas is presented. First, the optimal system design problem for a Mars-roving vehicle is attacked by creating static system models and a system evaluation function and optimizing via nonlinear programming techniques. The second area concerns the problem of perturbed-optimal solutions. Given an initial perturbation in an element of the solution to a nonlinear programming problem, a linear method is determined to approximate the optimal readjustments of the other elements of the solution. Then, the sensitivity of the Mars rover designs is described by application of this method.
NASA Technical Reports Server (NTRS)
Unal, Resit
1999-01-01
Multidisciplinary design optimization (MDO) is an important step in the design and evaluation of launch vehicles, since it has a significant impact on performance and lifecycle cost. The objective in MDO is to search the design space to determine the values of design parameters that optimize the performance characteristics subject to system constraints. The Vehicle Analysis Branch (VAB) at NASA Langley Research Center has computerized analysis tools in many of the disciplines required for the design and analysis of launch vehicles. Vehicle performance characteristics can be determined by the use of these computerized analysis tools. The next step is to optimize the system performance characteristics subject to multidisciplinary constraints. However, most of the complex sizing and performance evaluation codes used for launch vehicle design are stand-alone tools, operated by disciplinary experts. They are, in general, difficult to integrate and use directly for MDO. An alternative has been to utilize response surface methodology (RSM) to obtain polynomial models that approximate the functional relationships between performance characteristics and design variables. These approximation models, called response surface models, are then used to integrate the disciplines using mathematical programming methods for efficient system level design analysis, MDO and fast sensitivity simulations. A second-order response surface model has been commonly used in RSM since in many cases it can provide an adequate approximation, especially if the region of interest is sufficiently limited.
Optimal design and use of retry in fault tolerant real-time computer systems
NASA Technical Reports Server (NTRS)
Lee, Y. H.; Shin, K. G.
1983-01-01
A new method to determine an optimal retry policy and to use retry for fault characterization is presented. An optimal retry policy for a given fault characteristic, which determines the maximum allowable retry durations to minimize the total task completion time, was derived. Combined fault characterization and retry decision, in which the characteristics of the fault are estimated simultaneously with the determination of the optimal retry policy, was carried out. Two solution approaches were developed, one based on point estimation and the other on Bayes sequential decision. Maximum likelihood estimators are used for the first approach, and backward induction for testing hypotheses in the second. Numerical examples are presented in which all the durations associated with faults have monotone hazard functions, e.g., exponential, Weibull and gamma distributions, which are commonly used for modeling faults.
Optimizing Classroom Acoustics Using Computer Model Studies.
ERIC Educational Resources Information Center
Reich, Rebecca; Bradley, John
1998-01-01
Investigates conditions relating to the maximum useful-to-detrimental sound ratios present in classrooms and determines the optimum conditions for speech intelligibility. Reveals that speech intelligibility is more strongly influenced by ambient noise levels and that the optimal location for sound absorbing material is on a classroom's upper…
Experimental test of an online ion-optics optimizer
NASA Astrophysics Data System (ADS)
Amthor, A. M.; Schillaci, Z. M.; Morrissey, D. J.; Portillo, M.; Schwarz, S.; Steiner, M.; Sumithrarachchi, Ch.
2018-07-01
A technique has been developed and tested to automatically adjust multiple electrostatic or magnetic multipoles on an ion optical beam line - according to a defined optimization algorithm - until an optimal tune is found. This approach simplifies the process of determining high-performance optical tunes, satisfying a given set of optical properties, for an ion optical system. The optimization approach is based on the particle swarm method and is entirely model independent, thus the success of the optimization does not depend on the accuracy of an extant ion optical model of the system to be optimized. Initial test runs of a first order optimization of a low-energy (<60 keV) all-electrostatic beamline at the NSCL show reliable convergence of nine quadrupole degrees of freedom to well-performing tunes within a reasonable number of trial solutions, roughly 500, with full beam optimization run times of roughly two hours. Improved tunes were found both for quasi-local optimizations and for quasi-global optimizations, indicating a good ability of the optimizer to find a solution with or without a well defined set of initial multipole settings.
NASA Astrophysics Data System (ADS)
Alizadeh, Mohammad Reza; Nikoo, Mohammad Reza; Rakhshandehroo, Gholam Reza
2017-08-01
Sustainable management of water resources necessitates close attention to social, economic and environmental aspects such as water quality and quantity concerns and potential conflicts. This study presents a new fuzzy-based multi-objective compromise methodology to determine the socio-optimal and sustainable policies for hydro-environmental management of groundwater resources, which simultaneously considers the conflicts and negotiation of involved stakeholders, uncertainties in decision makers' preferences, existing uncertainties in the groundwater parameters and groundwater quality and quantity issues. The fuzzy multi-objective simulation-optimization model is developed based on qualitative and quantitative groundwater simulation model (MODFLOW and MT3D), multi-objective optimization model (NSGA-II), Monte Carlo analysis and Fuzzy Transformation Method (FTM). Best compromise solutions (best management policies) on trade-off curves are determined using four different Fuzzy Social Choice (FSC) methods. Finally, a unanimity fallback bargaining method is utilized to suggest the most preferred FSC method. Kavar-Maharloo aquifer system in Fars, Iran, as a typical multi-stakeholder multi-objective real-world problem is considered to verify the proposed methodology. Results showed an effective performance of the framework for determining the most sustainable allocation policy in groundwater resource management.
Dynamics of hepatitis C under optimal therapy and sampling based analysis
NASA Astrophysics Data System (ADS)
Pachpute, Gaurav; Chakrabarty, Siddhartha P.
2013-08-01
We examine two models for hepatitis C viral (HCV) dynamics, one for monotherapy with interferon (IFN) and the other for combination therapy with IFN and ribavirin. Optimal therapy for both the models is determined using the steepest gradient method, by defining an objective functional which minimizes infected hepatocyte levels, virion population and side-effects of the drug(s). The optimal therapies for both the models show an initial period of high efficacy, followed by a gradual decline. The period of high efficacy coincides with a significant decrease in the viral load, whereas the efficacy drops after hepatocyte levels are restored. We use the Latin hypercube sampling technique to randomly generate a large number of patient scenarios and study the dynamics of each set under the optimal therapy already determined. Results show an increase in the percentage of responders (indicated by drop in viral load below detection levels) in case of combination therapy (72%) as compared to monotherapy (57%). Statistical tests performed to study correlations between sample parameters and time required for the viral load to fall below detection level, show a strong monotonic correlation with the death rate of infected hepatocytes, identifying it to be an important factor in deciding individual drug regimens.
Determination of effective thoracic mass.
DOT National Transportation Integrated Search
1996-02-01
Effective thoracic mass is a critical parameter in specifying mathematical and mechanical models (such as crash dummies) of humans exposed to impact conditions. A method is developed using a numerical optimizer to determine effective thoracic mass (a...
Valls, Joan; Castellà, Gerard; Dyba, Tadeusz; Clèries, Ramon
2015-06-01
Predicting the future burden of cancer is a key issue for health services planning, where a method for selecting the predictive model and the prediction base is a challenge. A method, named here Goodness-of-Fit optimal (GoF-optimal), is presented to determine the minimum prediction base of historical data to perform 5-year predictions of the number of new cancer cases or deaths. An empirical ex-post evaluation exercise for cancer mortality data in Spain and cancer incidence in Finland using simple linear and log-linear Poisson models was performed. Prediction bases were considered within the time periods 1951-2006 in Spain and 1975-2007 in Finland, and then predictions were made for 37 and 33 single years in these periods, respectively. The performance of three fixed prediction bases (last 5, 10, and 20 years of historical data) was compared to that of the prediction base determined by the GoF-optimal method. The coverage (COV) of the 95% prediction interval and the discrepancy ratio (DR) were calculated to assess the success of the prediction. The results showed that (i) models using the prediction base selected through the GoF-optimal method reached the highest COV and the lowest DR, and (ii) the best alternative to the GoF-optimal strategy was the one using a prediction base of 5 years. The GoF-optimal approach can be used as a selection criterion in order to find an adequate base of prediction. Copyright © 2015 Elsevier Ltd. All rights reserved.
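A toy version of base-of-prediction selection can be sketched as follows: fit a linear trend to each candidate base length, score it by residual RMS (a simple stand-in for the paper's goodness-of-fit criterion), and forecast from the best-scoring base. The series here is synthetic, with a deliberate trend change that penalizes the longer bases.

```python
def linear_forecast(ys):
    """OLS line through (0..n-1, ys); returns (fit RMS, one-step forecast)."""
    n = len(ys)
    xm = (n - 1) / 2.0
    ym = sum(ys) / n
    sxx = sum((i - xm) ** 2 for i in range(n))
    sxy = sum((i - xm) * (y - ym) for i, y in enumerate(ys))
    b = sxy / sxx
    a = ym - b * xm
    rms = (sum((a + b * i - y) ** 2 for i, y in enumerate(ys)) / n) ** 0.5
    return rms, a + b * n

def gof_optimal_base(series, candidates=(5, 10, 15, 20)):
    """Pick the base length whose linear fit scores best (lowest RMS here),
    then forecast the next value from that base."""
    best_m = min(candidates, key=lambda m: linear_forecast(series[-m:])[0])
    return best_m, linear_forecast(series[-best_m:])[1]

# Synthetic annual counts: flat for 10 years, then rising by 1 per year.
series = [5.0] * 10 + [5.0 + i for i in range(1, 11)]
base, pred = gof_optimal_base(series)
```

Bases that reach back into the flat regime fit the recent trend poorly, so the selection settles on a short base and extrapolates the current slope.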
An Optimization Model for the Selection of Bus-Only Lanes in a City.
Chen, Qun
2015-01-01
The planning of urban bus-only lane networks is an important measure to improve bus service and bus priority. To determine an effective arrangement of bus-only lanes, a bi-level programming model for urban bus lane layout is developed in this study that considers accessibility and budget constraints. The goal of the upper-level model is to minimize total travel time, and the lower-level model is a capacity-constrained traffic assignment model that describes the passenger flow assignment on bus lines, in which the priority sequence of transfer times is reflected in the passengers' route-choice behaviors. Using the proposed bi-level programming model, optimal bus lines are selected from a set of candidate bus lines; the corresponding bus lane network on which the selected bus lines run is thus determined. A solution method using a genetic algorithm is developed for the bi-level programming model, and two numerical examples are investigated to demonstrate the efficacy of the proposed model.
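A heavily simplified sketch of the upper-level selection step with a binary genetic algorithm. The capacity-constrained lower-level assignment is replaced here by a fixed travel-time saving per candidate line, so this is a budgeted-selection toy under stated assumptions, not the paper's bi-level model:

```python
import random

def ga_select_lines(savings, costs, budget, pop=60, gens=120, seed=3):
    """Binary GA: each bit says whether a candidate bus line gets a bus-only
    lane; fitness is total travel-time saving, infeasible sets are penalized."""
    rng = random.Random(seed)
    n = len(savings)

    def fitness(bits):
        cost = sum(c for b, c in zip(bits, costs) if b)
        if cost > budget:
            return -cost                      # penalize budget violations
        return sum(s for b, s in zip(bits, savings) if b)

    population = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop)]
    best = max(population, key=fitness)
    for _ in range(gens):
        nxt = []
        for _ in range(pop):
            p1 = max(rng.sample(population, 2), key=fitness)  # tournament
            p2 = max(rng.sample(population, 2), key=fitness)
            cut = rng.randrange(1, n)                         # 1-point crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.2:                            # bit-flip mutation
                child[rng.randrange(n)] ^= 1
            nxt.append(child)
        population = nxt
        cand = max(population, key=fitness)
        if fitness(cand) > fitness(best):
            best = cand
    return best, fitness(best)

# Four candidate lines with invented savings and construction costs.
best, saving = ga_select_lines([10, 6, 5, 4], [5, 3, 3, 2], budget=8)
```

In the paper's formulation, the fixed `savings` values would be replaced by running the lower-level traffic assignment for each candidate chromosome.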
Optimization techniques for integrating spatial data
Herzfeld, U.C.; Merriam, D.F.
1995-01-01
Two optimization techniques to predict a spatial variable from any number of related spatial variables are presented. The applicability of the two different methods for petroleum-resource assessment is tested in a mature oil province of the Midcontinent (USA). The information on petroleum productivity, usually not directly accessible, is related indirectly to geological, geophysical, petrographical, and other observable data. This paper presents two approaches based on construction of a multivariate spatial model from the available data to determine a relationship for prediction. In the first approach, the variables are combined into a spatial model by an algebraic map-comparison/integration technique. Optimal weights for the map-comparison function are determined by the Nelder-Mead downhill simplex algorithm in multidimensions. Geologic knowledge is necessary to provide a first guess of weights to start the automation, because the solution is not unique. In the second approach, active set optimization for linear prediction of the target under positivity constraints is applied. Here, the procedure seems to select one variable from each data type (structural, isopachous, and petrophysical), eliminating data redundancy. Automating the determination of optimum combinations of different variables by applying optimization techniques is a valuable extension of the algebraic map-comparison/integration approach to analyzing spatial data. Because of the capability of handling multivariate data sets and partial retention of geographical information, the approaches can be useful in mineral-resource exploration. © 1995 International Association for Mathematical Geology.
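A minimal sketch of the first approach. The Nelder-Mead downhill simplex optimizer is the one the paper names; the gridded maps, the quadratic map-comparison measure, and the weights are synthetic stand-ins:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
# Stand-ins for gridded structural, isopachous and petrophysical maps.
maps = rng.random((3, 20, 20))
true_w = np.array([0.5, 0.3, 0.2])
target = np.tensordot(true_w, maps, axes=1)    # synthetic "productivity" map

def misfit(w):
    # Quadratic map-comparison measure between the weighted combination
    # of predictor maps and the target map.
    return np.sum((np.tensordot(w, maps, axes=1) - target) ** 2)

# A sensible first guess matters: in general the solution is not unique.
res = minimize(misfit, x0=[1.0, 1.0, 1.0], method="Nelder-Mead")
```

The derivative-free simplex search suits this setting because the real map-comparison function need not be differentiable in the weights.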
NASA Technical Reports Server (NTRS)
Seldner, K.
1977-01-01
An algorithm was developed to optimally control the traffic signals at each intersection using a discrete time traffic model applicable to heavy or peak traffic. Off line optimization procedures were applied to compute the cycle splits required to minimize the lengths of the vehicle queues and delay at each intersection. The method was applied to an extensive traffic network in Toledo, Ohio. Results obtained with the derived optimal settings are compared with the control settings presently in use.
Riches, S F; Payne, G S; Morgan, V A; Dearnaley, D; Morgan, S; Partridge, M; Livni, N; Ogden, C; deSouza, N M
2015-05-01
The objectives were to determine the optimal combination of MR parameters for discriminating tumour within the prostate using linear discriminant analysis (LDA) and to compare model accuracy with that of an experienced radiologist. Multiparametric MRI was acquired in 24 patients before prostatectomy. Tumour outlines from whole-mount histology, the T2-defined peripheral zone (PZ), and the central gland (CG) were superimposed onto slice-matched parametric maps. T2, apparent diffusion coefficient, initial area under the gadolinium curve, vascular parameters (K(trans), Kep, Ve), and (choline+polyamines+creatine)/citrate were compared between tumour and non-tumour tissues. Receiver operating characteristic (ROC) curves determined sensitivity and specificity at spectroscopic voxel resolution and per lesion, and LDA determined the optimal multiparametric model for identifying tumours. Accuracy was compared with an expert observer. Tumours were significantly different from PZ and CG for all parameters (all p < 0.001). The area under the ROC curve for discriminating tumour from non-tumour was significantly greater (p < 0.001) for the multiparametric model than for individual parameters; at 90% specificity, sensitivity was 41% (MRSI voxel resolution) and 59% per lesion. At this specificity, an expert observer achieved 28% and 49% sensitivity, respectively. The model was more accurate when parameters from all techniques were included and performed better than an expert observer evaluating these data. • The combined model increases diagnostic accuracy in prostate cancer compared with individual parameters • The optimal combined model includes parameters from diffusion, spectroscopy, perfusion, and anatomical MRI • The computed model improves tumour detection compared to an expert viewing parametric maps.
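The discriminant step can be illustrated with a two-class Fisher LDA in plain NumPy. The feature names and the Gaussian voxel values below are synthetic stand-ins, not the study's data:

```python
import numpy as np

def fisher_lda(X0, X1):
    """Two-class Fisher discriminant: the weight vector maximizing
    between-class separation over within-class scatter."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    # Pooled within-class scatter matrix (covariance times n-1 per class).
    Sw = np.cov(X0.T) * (len(X0) - 1) + np.cov(X1.T) * (len(X1) - 1)
    return np.linalg.solve(Sw, m1 - m0)

rng = np.random.default_rng(1)
# Stand-ins for e.g. (T2, ADC, Ktrans) in non-tumour vs tumour voxels.
non_tumour = rng.normal([100.0, 1.6, 0.1], 0.3, size=(200, 3))
tumour = rng.normal([80.0, 1.0, 0.3], 0.3, size=(200, 3))
w = fisher_lda(non_tumour, tumour)
scores_nt, scores_t = non_tumour @ w, tumour @ w
```

Thresholding the projected scores then yields the sensitivity/specificity pairs that trace out the ROC curve.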
Hydrologic Process-oriented Optimization of Electrical Resistivity Tomography
NASA Astrophysics Data System (ADS)
Hinnell, A.; Bechtold, M.; Ferre, T. A.; van der Kruk, J.
2010-12-01
Electrical resistivity tomography (ERT) is commonly used in hydrologic investigations. Advances in joint and coupled hydrogeophysical inversion have enhanced the quantitative use of ERT to construct and condition hydrologic models (i.e. identify hydrologic structure and estimate hydrologic parameters). However the selection of which electrical resistivity data to collect and use is often determined by a combination of data requirements for geophysical analysis, intuition on the part of the hydrogeophysicist and logistical constraints of the laboratory or field site. One of the advantages of coupled hydrogeophysical inversion is the direct link between the hydrologic model and the individual geophysical data used to condition the model. That is, there is no requirement to collect geophysical data suitable for independent geophysical inversion. The geophysical measurements collected can be optimized for estimation of hydrologic model parameters rather than to develop a geophysical model. Using a synthetic model of drip irrigation we evaluate the value of individual resistivity measurements to describe the soil hydraulic properties and then use this information to build a data set optimized for characterizing hydrologic processes. We then compare the information content in the optimized data set with the information content in a data set optimized using a Jacobian sensitivity analysis.
Filgueira, Ramon; Grant, Jon; Strand, Øivind
2014-06-01
Shellfish carrying capacity is determined by the interaction of a cultured species with its ecosystem, which is strongly influenced by hydrodynamics. Water circulation controls the exchange of matter between farms and the adjacent areas, which in turn establishes the nutrient supply that supports phytoplankton populations. The complexity of water circulation makes it necessary to use hydrodynamic models with detailed spatial resolution in carrying capacity estimations. This detailed spatial resolution also allows for the study of processes that depend on specific spatial arrangements, e.g., the most suitable location to place farms, which is crucial for marine spatial planning and consequently for decision support systems. In the present study, a fully spatial physical-biogeochemical model has been combined with scenario building and optimization techniques as a proof of concept of the use of ecosystem modeling as an objective tool to inform marine spatial planning. The objective of this exercise was to generate objective knowledge based on an ecosystem approach to establish new mussel aquaculture areas in a Norwegian fjord. Scenario building was used to determine the best location for a pump that can be used to bring nutrient-rich deep waters to the euphotic layer, increasing primary production and, consequently, carrying capacity for mussel cultivation. In addition, an optimization tool, parameter estimation (PEST), was applied to determine the optimal location and mussel standing stock biomass that maximize production, according to a preestablished carrying capacity criterion. Optimization tools allow us to make rational and transparent decisions to solve a well-defined question, decisions that are essential for policy makers. The outcomes of combining ecosystem models with scenario building and optimization facilitate planning based on an ecosystem approach, highlighting the capabilities of ecosystem modeling as a tool for marine spatial planning.
Optimization of multi-environment trials for genomic selection based on crop models.
Rincent, R; Kuhn, E; Monod, H; Oury, F-X; Rousset, M; Allard, V; Le Gouis, J
2017-08-01
We propose a statistical criterion to optimize multi-environment trials to predict genotype × environment interactions more efficiently, by combining crop growth models and genomic selection models. Genotype × environment interactions (GEI) are common in plant multi-environment trials (METs). In this context, models developed for genomic selection (GS), which refers to the use of genome-wide information for predicting the breeding values of selection candidates, need to be adapted. One promising way to increase prediction accuracy in various environments is to combine ecophysiological and genetic modelling thanks to crop growth models (CGM) incorporating genetic parameters. The efficiency of this approach relies on the quality of the parameter estimates, which depends on the environments composing the MET used for calibration. The objective of this study was to determine a method to optimize the set of environments composing the MET for estimating genetic parameters in this context. A criterion called OptiMET was defined to this aim and was evaluated on simulated and real data, with the example of wheat phenology. The MET defined with OptiMET allowed the genetic parameters to be estimated with lower error, leading to higher QTL detection power and higher prediction accuracies. A MET defined with OptiMET was on average more efficient than a random MET composed of twice as many environments, in terms of quality of the parameter estimates. OptiMET is thus a valuable tool to determine optimal experimental conditions to best exploit METs and the phenotyping tools that are currently developed.
Functional and Structural Optimality in Plant Growth: A Crop Modelling Case Study
NASA Astrophysics Data System (ADS)
Caldararu, S.; Purves, D. W.; Smith, M. J.
2014-12-01
Simple mechanistic models of vegetation processes are essential both to our understanding of plant behaviour and to our ability to predict future changes in vegetation. One concept that can take us closer to such models is that of plant optimality, the hypothesis that plants aim to achieve an optimal state. Conceptually, plant optimality can be either structural or functional optimality. A structural constraint would mean that plants aim to achieve a certain structural characteristic such as an allometric relationship or nutrient content that allows optimal function. A functional condition refers to plants achieving optimal functionality, in most cases by maximising carbon gain. Functional optimality conditions are applied on shorter time scales and lead to higher plasticity, making plants more adaptable to changes in their environment. In contrast, structural constraints are optimal given the specific environmental conditions that plants are adapted to and offer less flexibility. We exemplify these concepts using a simple model of crop growth. The model represents annual cycles of growth from sowing date to harvest, including both vegetative and reproductive growth and phenology. Structural constraints to growth are represented as an optimal C:N ratio in all plant organs, which drives allocation throughout the vegetative growing stage. Reproductive phenology - i.e. the onset of flowering and grain filling - is determined by a functional optimality condition in the form of maximising final seed mass, so that vegetative growth stops when the plant reaches maximum nitrogen or carbon uptake. We investigate the plants' response to variations in environmental conditions within these two optimality constraints and show that final yield is most affected by changes during vegetative growth which affect the structural constraint.
Determination of full piezoelectric complex parameters using gradient-based optimization algorithm
NASA Astrophysics Data System (ADS)
Kiyono, C. Y.; Pérez, N.; Silva, E. C. N.
2016-02-01
At present, numerical techniques allow the precise simulation of mechanical structures, but the results are limited by the knowledge of the material properties. In the case of piezoelectric ceramics, the full model determination in the linear range involves five elastic, three piezoelectric, and two dielectric complex parameters. A successful solution to obtaining piezoceramic properties consists of comparing the experimental measurement of the impedance curve and the results of a numerical model by using the finite element method (FEM). In the present work, a new systematic optimization method is proposed to adjust the full piezoelectric complex parameters in the FEM model. Once implemented, the method only requires the experimental data (impedance modulus and phase data acquired by an impedometer), material density, geometry, and initial values for the properties. This method combines a FEM routine implemented using an 8-noded axisymmetric element with a gradient-based optimization routine based on the method of moving asymptotes (MMA). The main objective of the optimization procedure is minimizing the quadratic difference between the experimental and numerical electrical conductance and resistance curves (to consider resonance and antiresonance frequencies). To assure the convergence of the optimization procedure, this work proposes restarting the optimization loop whenever the procedure ends in an undesired or an unfeasible solution. Two experimental examples using PZ27 and APC850 samples are presented to test the precision of the method and to check the dependency of the frequency range used, respectively.
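Fitting ten complex material parameters through a FEM model is well beyond a snippet, but the shape of the objective (a quadratic difference between measured and modelled conductance, minimized by a gradient-based routine) can be sketched with a single-peak Lorentzian stand-in. The model form, parameter values, and the parameter rescaling are illustrative assumptions, not the paper's method:

```python
import numpy as np
from scipy.optimize import minimize

f = np.linspace(180e3, 220e3, 400)          # frequency sweep, Hz
true = np.array([1.0, 200e3, 1.5e3])        # peak height, resonance, half-width

def conductance(p, f):
    a, f0, gamma = p
    return a * gamma ** 2 / ((f - f0) ** 2 + gamma ** 2)

g_exp = conductance(true, f)                # stands in for impedometer data

scale = np.array([1.0, 1e5, 1e3])           # rescale parameters to O(1)

def objective(q):
    # Quadratic difference between "experimental" and model conductance.
    return np.sum((conductance(q * scale, f) - g_exp) ** 2)

res = minimize(objective, x0=np.array([0.5, 195e3, 2e3]) / scale, method="BFGS")
fit = res.x * scale
```

Rescaling the parameters to comparable magnitudes matters for gradient-based methods; the real problem has parameters spanning many orders of magnitude, which is one reason a carefully conditioned optimizer such as MMA is used.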
Optimizing Chemical Reactions with Deep Reinforcement Learning.
Zhou, Zhenpeng; Li, Xiaocheng; Zare, Richard N
2017-12-27
Deep reinforcement learning was employed to optimize chemical reactions. Our model iteratively records the results of a chemical reaction and chooses new experimental conditions to improve the reaction outcome. This model outperformed a state-of-the-art blackbox optimization algorithm by using 71% fewer steps on both simulations and real reactions. Furthermore, we introduced an efficient exploration strategy by drawing the reaction conditions from certain probability distributions, which resulted in an improvement on regret from 0.062 to 0.039 compared with a deterministic policy. Combining the efficient exploration policy with accelerated microdroplet reactions, optimal reaction conditions were determined in 30 min for the four reactions considered, and a better understanding of the factors that control microdroplet reactions was reached. Moreover, our model showed a better performance after training on reactions with similar or even dissimilar underlying mechanisms, which demonstrates its learning ability.
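The deep-RL machinery itself is beyond a snippet, but the flavor of drawing reaction conditions from a probability distribution (rather than following a deterministic policy) can be shown with a cross-entropy-style loop on a toy yield surface. The condition names, the surface, and its optimum are invented for illustration:

```python
import random
import statistics

def yield_fn(temp, flow):
    # Toy unimodal "reaction yield" surface with its optimum at (80, 1.2).
    return 1.0 - 0.001 * (temp - 80.0) ** 2 - 0.5 * (flow - 1.2) ** 2

def cem_optimize(iters=40, batch=30, elite=6, seed=7):
    """Sample conditions from a Gaussian, keep the elite fraction, and
    refit the distribution to the elites: stochastic-policy exploration."""
    rng = random.Random(seed)
    mu = [50.0, 0.5]        # initial guess for (temperature, flow rate)
    sigma = [20.0, 0.5]
    for _ in range(iters):
        pts = [[rng.gauss(m, s) for m, s in zip(mu, sigma)]
               for _ in range(batch)]
        pts.sort(key=lambda p: yield_fn(*p), reverse=True)
        top = pts[:elite]
        mu = [statistics.mean(c) for c in zip(*top)]
        sigma = [statistics.stdev(c) + 1e-3 for c in zip(*top)]
    return mu

best = cem_optimize()
```

As in the paper's stochastic policy, sampling keeps the search exploring around the current best guess instead of committing to a single deterministic trajectory.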
NASA Astrophysics Data System (ADS)
Lee, Jeong-Eun; Gen, Mitsuo; Rhee, Kyong-Gu; Lee, Hee-Hyol
This paper deals with building a reusable reverse logistics model that considers the decision between a backorder and the next arrival of goods. An optimization method is proposed to minimize the transportation cost and the volume of backorders or next arrivals of goods caused by the just-in-time delivery of the final delivery stage between the manufacturer and the processing center. Through optimization algorithms using a priority-based genetic algorithm and a hybrid genetic algorithm, the sub-optimal delivery routes are determined. Based on the case study of a distilling and sales company in Busan, Korea, the new model of the reusable reverse logistics of empty bottles is built and the effectiveness of the proposed method is verified.
Lin, Yang-Cheng; Yeh, Chung-Hsing; Wang, Chen-Cheng; Wei, Chun-Chun
2012-01-01
How to design highly reputable and hot-selling products is an essential issue in product design. Whether consumers choose a product depends largely on their perception of the product image. A consumer-oriented design approach presented in this paper helps product designers incorporate consumers' perceptions of product forms in the design process. The consumer-oriented design approach uses quantification theory type I, grey prediction (the linear modeling technique), and neural networks (the nonlinear modeling technique) to determine the optimal form combination of product design for matching a given product image. An experimental study based on the concept of Kansei Engineering is conducted to collect numerical data for examining the relationship between consumers' perception of product image and product form elements of personal digital assistants (PDAs). The result of performance comparison shows that the QTTI model is good enough to help product designers determine the optimal form combination of product design. Although the PDA form design is used as a case study, the approach is applicable to other consumer products with various design elements and product images. The approach provides an effective mechanism for facilitating the consumer-oriented product design process.
Ledzewicz, Urszula; Schättler, Heinz
2017-08-10
Metronomic chemotherapy refers to the frequent administration of chemotherapy at relatively low, minimally toxic doses without prolonged treatment interruptions. Different from conventional or maximum-tolerated-dose chemotherapy which aims at an eradication of all malignant cells, in a metronomic dosing the goal often lies in the long-term management of the disease when eradication proves elusive. Mathematical modeling and subsequent analysis (theoretical as well as numerical) have become an increasingly more valuable tool (in silico) both for determining conditions under which specific treatment strategies should be preferred and for numerically optimizing treatment regimens. While elaborate, computationally-driven patient specific schemes that would optimize the timing and drug dose levels are still a part of the future, such procedures may become instrumental in making chemotherapy effective in situations where it currently fails. Ideally, mathematical modeling and analysis will develop into an additional decision making tool in the complicated process that is the determination of efficient chemotherapy regimens. In this article, we review some of the results that have been obtained about metronomic chemotherapy from mathematical models and what they infer about the structure of optimal treatment regimens. Copyright © 2017 Elsevier B.V. All rights reserved.
Liu, Xue-song; Sun, Fen-fang; Jin, Ye; Wu, Yong-jiang; Gu, Zhi-xin; Zhu, Li; Yan, Dong-lan
2015-12-01
A novel method was developed for the rapid determination of multi-indicators in corni fructus by means of near infrared (NIR) spectroscopy. Particle swarm optimization (PSO) based least squares support vector machine was investigated to increase the levels of quality control. The calibration models of moisture, extractum, morroniside and loganin were established using the PSO-LS-SVM algorithm. The performance of PSO-LS-SVM models was compared with partial least squares regression (PLSR) and back propagation artificial neural network (BP-ANN). The calibration and validation results of PSO-LS-SVM were superior to both PLS and BP-ANN. For PSO-LS-SVM models, the correlation coefficients (r) of calibrations were all above 0.942. The optimal prediction results were also achieved by PSO-LS-SVM models with the RMSEP (root mean square error of prediction) and RSEP (relative standard errors of prediction) less than 1.176 and 15.5% respectively. The results suggest that PSO-LS-SVM algorithm has a good model performance and high prediction accuracy. NIR has a potential value for rapid determination of multi-indicators in Corni Fructus.
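A minimal particle swarm optimizer (pure Python) showing the mechanics PSO uses to tune model hyperparameters. The error surface below is a toy stand-in for the LS-SVM cross-validation error over (gamma, sigma²), not the paper's model:

```python
import random

def pso_minimize(f, bounds, n_particles=20, iters=100, seed=5):
    """Minimal PSO: each particle remembers its own best position, and the
    swarm's global best attracts all particles."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    w, c1, c2 = 0.7, 1.5, 1.5          # inertia, cognitive, social weights
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            fi = f(pos[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], fi
                if fi < gbest_f:
                    gbest, gbest_f = pos[i][:], fi
    return gbest, gbest_f

# Toy stand-in for the LS-SVM cross-validation error over (gamma, sigma2).
err = lambda p: (p[0] - 10.0) ** 2 / 100 + (p[1] - 0.5) ** 2
best, best_err = pso_minimize(err, [(0.0, 100.0), (0.01, 5.0)])
```

In the paper's setting, `err` would be replaced by the cross-validated prediction error of an LS-SVM retrained at each candidate hyperparameter pair.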
Impact of topographic mask models on scanner matching solutions
NASA Astrophysics Data System (ADS)
Tyminski, Jacek K.; Pomplun, Jan; Renwick, Stephen P.
2014-03-01
Of keen interest to the IC industry are advanced computational lithography applications such as Optical Proximity Correction of IC layouts (OPC), scanner matching by optical proximity effect matching (OPEM), and Source Optimization (SO) and Source-Mask Optimization (SMO) used as advanced reticle enhancement techniques. The success of these tasks is strongly dependent on the integrity of the lithographic simulators used in computational lithography (CL) optimizers. The lithographic mask models used by these simulators are key drivers of the accuracy of the image predictions and, as a consequence, determine the validity of these CL solutions. Much CL work involves Kirchhoff mask models, a.k.a. the thin-mask approximation, which simplify the treatment of the mask near-field images. On the other hand, imaging models for hyper-NA scanners require that the interactions of the illumination fields with the mask topography be rigorously accounted for by numerically solving Maxwell's equations. The simulators used to predict image formation in hyper-NA scanners must rigorously treat the mask topography and its interaction with the scanner illuminators. Such imaging models come at a high computational cost and pose challenging accuracy vs. compute-time tradeoffs. An additional complication comes from the fact that the performance metrics used in computational lithography tasks show highly nonlinear responses to the optimization parameters. Finally, the number of patterns used for tasks such as OPC, OPEM, SO, or SMO ranges from tens to hundreds. These requirements determine the complexity and the workload of the lithography optimization tasks. The tools to build rigorous imaging optimizers based on the first principles governing imaging in scanners are available, but the quantifiable benefits they might provide are not well understood.
To quantify the performance of OPE matching solutions, we compared the results of various imaging optimization trials obtained with Kirchhoff mask models to those obtained with rigorous models involving solutions of Maxwell's equations. In both sets of trials, we used large numbers of patterns, with specifications representative of CL tasks commonly encountered in hyper-NA imaging. In this report we present OPEM solutions based on various mask models and discuss the models' impact on hyper-NA scanner matching accuracy. We draw conclusions on the accuracy of results obtained with thin-mask models vs. the topographic OPEM solutions. We present various examples representative of scanner image matching for patterns representative of the current generation of IC designs.
Adaptive model-based control systems and methods for controlling a gas turbine
NASA Technical Reports Server (NTRS)
Brunell, Brent Jerome (Inventor); Mathews, Jr., Harry Kirk (Inventor); Kumar, Aditya (Inventor)
2004-01-01
Adaptive model-based control systems and methods are described so that performance and/or operability of a gas turbine in an aircraft engine, power plant, marine propulsion, or industrial application can be optimized under normal, deteriorated, faulted, failed and/or damaged operation. First, a model of each relevant system or component is created, and the model is adapted to the engine. Then, if/when deterioration, a fault, a failure or some kind of damage to an engine component or system is detected, that information is input to the model-based control as changes to the model, constraints, objective function, or other control parameters. With all the information about the engine condition, and state and directives on the control goals in terms of an objective function and constraints, the control then solves an optimization so the optimal control action can be determined and taken. This model and control may be updated in real-time to account for engine-to-engine variation, deterioration, damage, faults and/or failures using optimal corrective control action command(s).
Chande, Ruchi D; Wayne, Jennifer S
2017-09-01
Computational models of diarthrodial joints serve to inform the biomechanical function of these structures, and as such, must be supplied appropriate inputs for performance that is representative of actual joint function. Inputs for these models are sourced from both imaging modalities as well as literature. The latter is often the source of mechanical properties for soft tissues, like ligament stiffnesses; however, such data are not always available for all the soft tissues nor is it known for patient-specific work. In the current research, a method to improve the ligament stiffness definition for a computational foot/ankle model was sought with the greater goal of improving the predictive ability of the computational model. Specifically, the stiffness values were optimized using artificial neural networks (ANNs); both feedforward and radial basis function networks (RBFNs) were considered. Optimal networks of each type were determined and subsequently used to predict stiffnesses for the foot/ankle model. Ultimately, the predicted stiffnesses were considered reasonable and resulted in enhanced performance of the computational model, suggesting that artificial neural networks can be used to optimize stiffness inputs.
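The radial basis function network variant can be sketched compactly in NumPy: Gaussian features around fixed centers plus a linear readout fitted by least squares. The input features and the stiffness target below are hypothetical, for illustration only:

```python
import numpy as np

def rbf_features(X, centers, width):
    # Gaussian activations of each input around each fixed center,
    # plus a bias column for the linear readout.
    phi = np.exp(-np.sum((X[:, None, :] - centers[None, :, :]) ** 2, axis=2)
                 / (2 * width ** 2))
    return np.hstack([phi, np.ones((len(X), 1))])

def fit_rbfn(X, y, centers, width):
    """RBFN training reduces to linear least squares once centers are fixed."""
    w, *_ = np.linalg.lstsq(rbf_features(X, centers, width), y, rcond=None)
    return w

rng = np.random.default_rng(2)
# Hypothetical mapping from two joint-motion features to ligament stiffness.
X = rng.uniform(0, 1, size=(200, 2))
y = 30.0 + 40.0 * X[:, 0] * X[:, 1]            # invented stiffness, N/mm
centers = rng.uniform(0, 1, size=(25, 2))
w = fit_rbfn(X, y, centers, width=0.3)
pred = rbf_features(X, centers, 0.3) @ w
```

Because training is a single linear solve once the centers are chosen, RBFNs are often cheaper to fit than backprop-trained feedforward networks on small datasets like this.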
Liquid disinfection using power impulse laser
NASA Astrophysics Data System (ADS)
Gribin, S.; Assaoul, Viktor; Markova, Elena; Gromova, Ludmila P.; Spesivtsev, Boris; Bazanov, V.
1996-05-01
The presented method is based on the bactericidal effect of a micro-blast induced by various sources (laser breakdown, electrohydraulic effect, etc.). Using an elaborated conception of the physical phenomena providing liquid disinfection, it is possible to determine optimal conditions for water treatment. The optimization problem is solved using methods of mathematical modeling and special experiments.
Optimization Techniques for College Financial Aid Managers
ERIC Educational Resources Information Center
Bosshardt, Donald I.; Lichtenstein, Larry; Palumbo, George; Zaporowski, Mark P.
2010-01-01
In the context of a theoretical model of expected profit maximization, this paper shows how historic institutional data can be used to assist enrollment managers in determining the level of financial aid for students with varying demographic and quality characteristics. Optimal tuition pricing in conjunction with empirical estimation of…
Simulating and Optimizing Preparative Protein Chromatography with ChromX
ERIC Educational Resources Information Center
Hahn, Tobias; Huuk, Thiemo; Heuveline, Vincent; Hubbuch, Ju¨rgen
2015-01-01
Industrial purification of biomolecules is commonly based on a sequence of chromatographic processes, which are adapted slightly to new target components, as the time to market is crucial. To improve time and material efficiency, modeling is increasingly used to determine optimal operating conditions, thus providing new challenges for current and…
Pandey, Rupesh Kumar; Panda, Sudhansu Sekhar
2014-11-01
Drilling of bone is a common procedure in orthopedic surgery to produce holes for screw insertion to fixate fracture devices and implants. The increase in temperature during such a procedure increases the chances of thermal invasion of bone, which can cause thermal osteonecrosis resulting in an increase in healing time or a reduction in the stability and strength of the fixation. Therefore, drilling of bone with minimum temperature is a major challenge for orthopedic fracture treatment. This investigation discusses the use of fuzzy logic and Taguchi methodology for predicting and minimizing the temperature produced during bone drilling. The drilling experiments have been conducted on bovine bone using Taguchi's L25 experimental design. A fuzzy model is developed for predicting the temperature during orthopedic drilling as a function of the drilling process parameters (point angle, helix angle, feed rate and cutting speed). Optimum bone drilling process parameters for minimizing the temperature are determined using the Taguchi method. The effect of individual cutting parameters on the temperature produced is evaluated using analysis of variance. The fuzzy model using triangular and trapezoidal membership predicts the temperature within a maximum error of ±7%. Taguchi analysis of the obtained results determined the optimal drilling conditions for minimizing the temperature as A3B5C1. The developed system will simplify the tedious task of modeling and determination of the optimal process parameters to minimize the bone drilling temperature. It will reduce the risk of thermal osteonecrosis and can be very effective for the online condition monitoring of the process. © IMechE 2014.
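The Taguchi level-selection step works by averaging a smaller-the-better signal-to-noise ratio over the runs at each factor level and keeping the level with the highest mean S/N. A toy sketch with a 2-factor, 2-level design and invented temperatures (the paper uses an L25 array with four factors at five levels):

```python
import math

def sn_smaller_is_better(values):
    """Taguchi smaller-the-better signal-to-noise ratio, in dB."""
    return -10.0 * math.log10(sum(v * v for v in values) / len(values))

def best_levels(design, temps, n_levels):
    """For each factor column of the design, average S/N over the runs at
    each level and keep the level with the highest mean (lowest temperature)."""
    best = []
    for factor in range(len(design[0])):
        means = []
        for lvl in range(1, n_levels + 1):
            sn = [sn_smaller_is_better([t]) for row, t in zip(design, temps)
                  if row[factor] == lvl]
            means.append(sum(sn) / len(sn))
        best.append(1 + max(range(n_levels), key=lambda i: means[i]))
    return best

design = [[1, 1], [1, 2], [2, 1], [2, 2]]   # toy full-factorial design
temps = [40.0, 50.0, 45.0, 60.0]            # invented drilling temperatures
levels = best_levels(design, temps, n_levels=2)
```

A designation like A3B5C1 is simply this per-factor level list written in Taguchi's factor/level notation.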
NASA Astrophysics Data System (ADS)
Raei, Ehsan; Nikoo, Mohammad Reza; Pourshahabi, Shokoufeh
2017-08-01
In the present study, a BIOPLUME III simulation model is coupled with a non-dominated sorting genetic algorithm (NSGA-II)-based model for the optimal design of an in situ groundwater bioremediation system, considering the preferences of stakeholders. The Ministry of Energy (MOE), the Department of Environment (DOE), and the National Disaster Management Organization (NDMO) are three stakeholders in the groundwater bioremediation problem in Iran. Based on the preferences of these stakeholders, the multi-objective optimization model tries to minimize: (1) cost; (2) the sum of contaminant concentrations that violate the standard; (3) contaminant plume fragmentation. The NSGA-II multi-objective optimization method gives Pareto-optimal solutions. A compromise solution is determined using fallback bargaining with impasse to achieve a consensus among the stakeholders. In this study, two different approaches are investigated and compared based on two different domains for the locations of injection and extraction wells. In the first approach, a limited number of predefined locations is considered according to previous similar studies. In the second approach, all possible points in the study area are investigated to find the optimal locations, arrangement, and flow rates of injection and extraction wells. Involvement of the stakeholders, investigation of all possible points instead of a limited number of well locations, and minimization of contaminant plume fragmentation during bioremediation are the main innovations of this research. In addition, the simulation period is divided into smaller time intervals for more efficient optimization. The Image Processing Toolbox in MATLAB® software is utilized for calculation of the third objective function. In comparison with previous studies, cost is reduced using the proposed methodology. Dispersion of the contaminant plume is reduced in both presented approaches using the third objective function.
Considering all possible points in the study area for determining the optimal locations of the wells in the second approach leads to more desirable results, i.e. decreasing the contaminant concentrations to a standard level and 20% to 40% cost reduction.
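The Pareto-optimal solutions mentioned above come from NSGA-II's core operation, non-dominated sorting. A minimal sketch of that step is shown below on a toy two-objective trade-off (cost versus residual contamination); the candidate solutions and objective values are illustrative, not from the study.

```python
# Minimal non-dominated sorting sketch (the core of NSGA-II).
# Both objectives are minimized.

def dominates(a, b):
    """True if solution a is at least as good as b in every objective
    and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_front(points):
    """Return the Pareto-optimal subset of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# (cost in arbitrary units, residual contaminant concentration) - illustrative
candidates = [(10, 8), (12, 5), (15, 3), (11, 9), (14, 4), (20, 3)]
front = non_dominated_front(candidates)
print(sorted(front))
```

The full algorithm repeats this sorting to rank successive fronts and adds a crowding-distance measure to preserve diversity; the sketch above only extracts the first front.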
Battery Energy Storage State-of-Charge Forecasting: Models, Optimization, and Accuracy
Rosewater, David; Ferreira, Summer; Schoenwald, David; ...
2018-01-25
Battery energy storage systems (BESS) are a critical technology for integrating high penetration renewable power on an intelligent electrical grid. As limited energy restricts the steady-state operational state-of-charge (SoC) of storage systems, SoC forecasting models are used to determine feasible charge and discharge schedules that supply grid services. Smart grid controllers use SoC forecasts to optimize BESS schedules to make grid operation more efficient and resilient. This study presents three advances in BESS state-of-charge forecasting. First, two forecasting models are reformulated to be conducive to parameter optimization. Second, a new method for selecting optimal parameter values based on operational data is presented. Last, a new framework for quantifying model accuracy is developed that enables a comparison between models, systems, and parameter selection methods. The accuracies achieved by both models, on two example battery systems, with each method of parameter selection are then compared in detail. The results of this analysis suggest variation in the suitability of these models for different battery types and applications. Finally, the proposed model formulations, optimization methods, and accuracy assessment framework can be used to improve the accuracy of SoC forecasts enabling better control over BESS charge/discharge schedules.
NASA Astrophysics Data System (ADS)
Agarwal, R. K.; Zhang, Z.; Zhu, C.
2013-12-01
For optimization of CO2 storage and reduced CO2 plume migration in saline aquifers, a genetic algorithm (GA) based optimizer has been developed and combined with the DOE multi-phase flow and heat transfer numerical simulation code TOUGH2. Designated GA-TOUGH2, this combined solver/optimizer has been verified by performing optimization studies on a number of model problems and comparing the results with brute-force optimization, which requires a large number of simulations. Using GA-TOUGH2, an innovative reservoir engineering technique known as water-alternating-gas (WAG) injection has been investigated to determine the optimal WAG operation for enhanced CO2 storage capacity. The topmost layer (layer #9) of the Utsira formation at the Sleipner Project, Norway, is considered as a case study. A cylindrical domain was used that shares the characteristics of the detailed 3D Utsira Layer #9 model except for the absence of 3D topography. Topographical details are known to be important in determining the CO2 migration at Sleipner and are considered in our companion model for the history match of the CO2 plume migration at Sleipner. However, simplifying the topography here, without compromising accuracy, is necessary to analyze the effectiveness of WAG operation on CO2 migration without incurring excessive computational cost; the selected WAG operation can then be simulated with full topographic detail later. We consider a cylindrical domain with a thickness of 35 m and a horizontal, flat caprock. All hydrogeological properties are retained from the detailed 3D Utsira Layer #9 model, the most important being the horizontal-to-vertical permeability ratio of 10. Constant gas injection (CGI) operation with a nine-year average CO2 injection rate of 2.7 kg/s is considered as the baseline case for comparison. WAG cycle durations of 30, 15, and 5 days are considered for the WAG optimization design.
Our computations show that for the simplified Utsira Layer #9 model, the WAG operation with the 5-day cycle leads to the most noticeable reduction in plume migration. For the 5-day WAG cycle, the values of the design variables corresponding to optimal WAG operation are an optimal CO2 injection rate ICO2,optimal = 11.56 kg/s and an optimal water injection rate Iwater,optimal = 7.62 kg/s. The durations of CO2 and water injection in one WAG cycle are 11 and 19 days, respectively. Identical WAG cycles are repeated 20 times to complete a two-year operation. A significant reduction (22%) in CO2 migration is achieved compared with CGI operation after only two years of WAG operation. In addition, CO2 dissolution is significantly enhanced, from about 9% to 22% of the total injected CO2. The results obtained from this and other optimization studies suggest that over 50% reduction of the in situ CO2 footprint, greatly enhanced CO2 dissolution, and significantly improved well injectivity can be achieved by employing GA-TOUGH2. The optimization code has also been employed to determine the optimal well placement in a multi-well injection operation. GA-TOUGH2 appears to hold great promise for a host of other optimization problems related to carbon storage.
NASA Astrophysics Data System (ADS)
Mousavi, Monireh Sadat; Ashrafi, Khosro; Motlagh, Majid Shafie Pour; Niksokhan, Mohhamad Hosein; Vosoughifar, HamidReza
2018-02-01
In this study, a coupled method is presented that combines simulation of the flow pattern, based on computational fluid dynamics (CFD), with an optimization technique using genetic algorithms, in order to determine the optimal location and number of sensors in an enclosed residential complex parking facility in Tehran. The main objectives are cost reduction and maximum coverage with regard to the distribution of concentrations in the different scenarios. Simulating the pollution distribution for all possible scenarios with CFD is challenging, owing to the extent of the parking facility and the number of cars present. To address this problem, a subset of scenarios was selected at random, and the maximum concentrations of these scenarios were used for the optimization. The CFD simulation outputs serve as input to the genetic-algorithm optimization model. The results give the optimal number and locations of the sensors.
NASA Technical Reports Server (NTRS)
Nobbs, Steven G.
1995-01-01
This paper gives an overview of the performance seeking control (PSC) algorithm and details its important components: the onboard propulsion system models, the linear programming optimization, and the engine control interface. The PSC algorithm receives input from various computers on the aircraft, including the digital flight computer, digital engine control, and electronic inlet control. The PSC algorithm contains compact models of the propulsion system, including the inlet, engine, and nozzle. The models compute propulsion system parameters, such as inlet drag and fan stall margin, which are not directly measurable in flight. The compact models also compute sensitivities of the propulsion system parameters to changes in the control variables. The engine model consists of a linear steady state variable model (SSVM) and a nonlinear model. The SSVM is updated with efficiency factors calculated in the engine model update logic, or Kalman filter; the efficiency factors adjust the SSVM to match the actual engine. The propulsion system models are mathematically integrated to form an overall propulsion system model, which is then optimized using a linear programming scheme. The goal of the optimization is determined by the selected PSC mode of operation. The resulting trims are used to compute a new operating point about which the optimization process is repeated, and this process continues until an overall (global) optimum is reached before the trims are applied to the controllers.
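The linear-programming step described above can be sketched as follows: with a locally linear propulsion model, the optimum of a small trim problem lies at a vertex of the feasible region. All coefficients below are illustrative stand-ins, not PSC flight data.

```python
# Toy two-trim linear program, solved by vertex enumeration.
# Maximize predicted thrust gain  f(x) = 3*x1 + 2*x2
# subject to a stall-margin use   x1 + 2*x2 <= 4
# and trim authority limits       0 <= x1 <= 3, 0 <= x2 <= 3.

def objective(x1, x2):
    # Linear sensitivity of thrust to the two trims (illustrative numbers).
    return 3 * x1 + 2 * x2

# Corners of the feasible polygon, found by intersecting active constraints:
# the origin, each trim limit, and the stall-margin line.
vertices = [(0, 0), (3, 0), (0, 2), (3, 0.5)]

best = max(vertices, key=lambda v: objective(*v))
print(best, objective(*best))
```

For two variables, checking the polygon's corners suffices because a linear objective attains its maximum at a vertex; the flight code uses a general linear-programming solver for the higher-dimensional problem.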
Shallow Turbulence in Rivers and Estuaries
2012-09-30
objectives are to: 1. Determine spatial patterns of shallow turbulence from in-situ and remote sensing data and investigate the effects and...production through a model parameter study, and determine the optimal model configuration that statistically reproduces the shallow turbulence...more probable cause. According to Nezu et al. (1993), longitudinal vorticity streets would cause alternating upwelling (boils) and downwelling
Hong, Haoyuan; Tsangaratos, Paraskevas; Ilia, Ioanna; Liu, Junzhi; Zhu, A-Xing; Xu, Chong
2018-07-15
The main objective of the present study was to use Genetic Algorithms (GA) to obtain the optimal combination of forest-fire-related variables and to apply data mining methods to construct a forest fire susceptibility map. In the proposed approach, a Random Forest (RF) and a Support Vector Machine (SVM) were used to produce a forest fire susceptibility map for Dayu County, located in the southwest of Jiangxi Province, China. For this purpose, historic forest fires and thirteen forest-fire-related variables were analyzed, namely: elevation, slope angle, aspect, curvature, land use, soil cover, heat load index, normalized difference vegetation index, mean annual temperature, mean annual wind speed, mean annual rainfall, distance to river network, and distance to road network. The Natural Break and Certainty Factor methods were used to classify and weight the thirteen variables, while a multicollinearity analysis was performed to determine the correlation among the variables and decide on their usability. The optimal set of variables determined by the GA limited the number of variables to eight, excluding aspect, land use, heat load index, distance to river network, and mean annual rainfall from the analysis. The performance of the forest fire models was evaluated using the area under the Receiver Operating Characteristic curve (ROC-AUC) on the validation dataset. Overall, the RF models gave higher AUC values, and the optimized models outperformed the original models. Specifically, the optimized RF model gave the best results (0.8495), followed by the original RF (0.8169), while the optimized SVM gave lower values (0.7456) than the RF, though higher than the original SVM (0.7148). The study highlights the significance of feature selection techniques in forest fire susceptibility modeling, and data mining methods can be considered a valid approach for forest fire susceptibility modeling.
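A GA-based feature-subset search of the kind described above can be sketched as follows. The "fitness" here is a synthetic stand-in for a validation score such as AUC, with hypothetical per-feature weights; the study's actual fitness came from RF/SVM evaluation.

```python
import random

random.seed(42)

N_FEATURES = 13
# Hypothetical per-feature usefulness; negatives mimic noisy predictors.
WEIGHT = [0.9, 0.7, 0.6, 0.5, 0.5, 0.4, 0.4, 0.3,
          -0.2, -0.1, -0.3, -0.2, -0.1]

def fitness(mask):
    # Reward useful features, minus a small complexity penalty per feature kept.
    return sum(w for w, keep in zip(WEIGHT, mask) if keep) - 0.05 * sum(mask)

def evolve(pop_size=30, generations=40, p_mut=0.05):
    pop = [[random.randint(0, 1) for _ in range(N_FEATURES)]
           for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        new = [best[:]]                       # elitism: keep the best subset
        while len(new) < pop_size:
            # Tournament selection of two parents, one-point crossover.
            a, b = (max(random.sample(pop, 3), key=fitness) for _ in range(2))
            cut = random.randrange(1, N_FEATURES)
            child = a[:cut] + b[cut:]
            # Bit-flip mutation.
            child = [g ^ (random.random() < p_mut) for g in child]
            new.append(child)
        pop = new
        best = max(pop, key=fitness)
    return best

best = evolve()
print(best, round(fitness(best), 2))
```

With elitism, the best fitness is non-decreasing across generations; the search should converge toward keeping only the positively weighted features.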
DOE Office of Scientific and Technical Information (OSTI.GOV)
Starling, K.E.; Mallinson, R.G.; Li, M.H.
The objective of this research is to examine the relationship between the calorimetric properties of coal liquids and their molecular functional group composition. Coal liquid samples which have had their calorimetric properties measured are characterized using proton NMR, ir and elemental analysis. These characterizations are then used in a chemical structural model to determine the composition of the coal liquid in terms of the important molecular functional groups. These functional groups are particularly important in determining the intramolecular based properties of a fluid, such as ideal gas heat capacities. Correlational frameworks for heat capacities will then be examined within an existing equation of state methodology to determine an optimal correlation. Also, the optimal recipe for obtaining the characterization/chemical structure information and the sensitivity of the correlation to the characterization and structural model will be examined and determined. 7 refs.
Mizuno, Kana; Dong, Min; Fukuda, Tsuyoshi; Chandra, Sharat; Mehta, Parinda A; McConnell, Scott; Anaissie, Elias J; Vinks, Alexander A
2018-05-01
High-dose melphalan is an important component of conditioning regimens for patients undergoing hematopoietic stem cell transplantation. The current dosing strategy based on body surface area results in a high incidence of oral mucositis and gastrointestinal and liver toxicity. Pharmacokinetically guided dosing will individualize exposure and help minimize overexposure-related toxicity. The purpose of this study was to develop a population pharmacokinetic model and optimal sampling strategy. A population pharmacokinetic model was developed with NONMEM using 98 observations collected from 15 adult patients given the standard dose of 140 or 200 mg/m² by intravenous infusion. The determinant-optimal sampling strategy was explored with PopED software. Individual area under the curve estimates were generated by Bayesian estimation using full and the proposed sparse sampling data. The predictive performance of the optimal sampling strategy was evaluated based on bias and precision estimates. The feasibility of the optimal sampling strategy was tested using pharmacokinetic data from five pediatric patients. A two-compartment model best described the data. The final model included body weight and creatinine clearance as predictors of clearance. The determinant-optimal sampling strategies (and windows) were identified at 0.08 (0.08-0.19), 0.61 (0.33-0.90), 2.0 (1.3-2.7), and 4.0 (3.6-4.0) h post-infusion. An excellent correlation was observed between area under the curve estimates obtained with the full and the proposed four-sample strategy (R² = 0.98; p < 0.01) with a mean bias of -2.2% and precision of 9.4%. A similar relationship was observed in children (R² = 0.99; p < 0.01). The developed pharmacokinetic model-based sparse sampling strategy promises to achieve the target area under the curve as part of precision dosing.
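The bias and precision figures quoted above can be computed from paired AUC estimates. The sketch below uses one common definition, mean percentage error for bias and root-mean-square percentage error for precision; the numbers are illustrative, not the study's data, and the paper's exact definitions may differ.

```python
# Bias/precision summary for sparse-sampling AUC estimates versus the
# full-sampling reference.

def bias_precision(reference, estimate):
    """Return (mean % error, RMS % error) of estimates vs. reference values."""
    pe = [100.0 * (e - r) / r for r, e in zip(reference, estimate)]
    bias = sum(pe) / len(pe)
    precision = (sum(p * p for p in pe) / len(pe)) ** 0.5
    return bias, precision

auc_full   = [100.0, 80.0, 120.0, 90.0]   # AUCs from rich sampling (illustrative)
auc_sparse = [ 98.0, 82.0, 114.0, 90.0]   # AUCs from a 4-sample design

bias, precision = bias_precision(auc_full, auc_sparse)
print(round(bias, 2), round(precision, 2))
```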
Health benefit modelling and optimization of vehicular pollution control strategies
NASA Astrophysics Data System (ADS)
Sonawane, Nayan V.; Patil, Rashmi S.; Sethi, Virendra
2012-12-01
This study asserts that the evaluation of pollution reduction strategies should be approached on the basis of health benefits. The framework presented could be used for decision making on the basis of cost effectiveness when the strategies are applied concurrently. Several vehicular pollution control strategies have been proposed in the literature for effective management of urban air pollution. The effectiveness of these strategies has mostly been studied with a one-at-a-time approach, on the basis of change in pollution concentration. The adequacy and practicality of such an approach are examined in the present work, and the respective benefits of these strategies when implemented simultaneously are assessed. An integrated model has been developed which can be used as a tool for optimal prioritization of various pollution management strategies. The model estimates health benefits associated with specific control strategies. ISC-AERMOD View has been used to provide the cause-effect relation between control options and change in ambient air quality. BenMAP, developed by the U.S. EPA, has been applied for estimation of the health and economic benefits associated with the various management strategies. Valuation of health benefits has been done for the impact indicators of premature mortality, hospital admissions and respiratory syndrome. An optimization model has been developed to maximize overall social benefits by determining optimized percentage implementations for multiple strategies. The model has been applied to the vehicular sector in a suburban region of Mumbai. Several control scenarios have been considered, such as revised emission standards and electric, CNG, LPG and hybrid vehicles. Reductions in concentration and the resulting health benefits for the pollutants CO, NOx and particulate matter are estimated for the different control scenarios.
Finally, the optimization model has been applied to determine the optimized percentage implementation of specific control strategies, maximizing social benefits when these strategies are applied simultaneously.
Adaptive treatment-length optimization in spatiobiologically integrated radiotherapy
NASA Astrophysics Data System (ADS)
Ajdari, Ali; Ghate, Archis; Kim, Minsun
2018-04-01
Recent theoretical research on spatiobiologically integrated radiotherapy has focused on optimization models that adapt fluence-maps to the evolution of tumor state, for example, cell densities, as observed in quantitative functional images acquired over the treatment course. We propose an optimization model that adapts the length of the treatment course as well as the fluence-maps to such imaged tumor state. Specifically, after observing the tumor cell densities at the beginning of a session, the treatment planner solves a group of convex optimization problems to determine an optimal number of remaining treatment sessions, and a corresponding optimal fluence-map for each of these sessions. The objective is to minimize the total number of tumor cells remaining (TNTCR) at the end of this proposed treatment course, subject to upper limits on the biologically effective dose delivered to the organs-at-risk. This fluence-map is administered in future sessions until the next image is available, and then the number of sessions and the fluence-map are re-optimized based on the latest cell density information. We demonstrate via computer simulations on five head-and-neck test cases that such adaptive treatment-length and fluence-map planning reduces the TNTCR and increases the biological effect on the tumor while employing shorter treatment courses, as compared to only adapting fluence-maps and using a pre-determined treatment course length based on one-size-fits-all guidelines.
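The treatment-length search described above can be caricatured with the linear-quadratic (LQ) model: for each candidate number of sessions, take the largest per-session dose that keeps the organ-at-risk biologically effective dose (BED) under its limit, and pick the session count minimizing surviving tumor cells. All parameter values below are illustrative, and the repopulation term is an assumption added here so that a finite optimal course length exists; the paper solves full fluence-map problems instead.

```python
import math

ALPHA, BETA = 0.35, 0.035   # tumor LQ parameters (1/Gy, 1/Gy^2), illustrative
AB_OAR = 3.0                # organ-at-risk alpha/beta ratio (Gy)
BED_LIMIT = 100.0           # cap on organ-at-risk BED (Gy)
OAR_FRAC = 0.5              # fraction of the tumor dose reaching the OAR
REPOP = 0.2                 # per-session tumor repopulation term (assumed)

def max_dose_per_session(n):
    # Largest tumor dose per session such that the OAR BED meets its limit:
    # solve n * d_oar * (1 + d_oar / AB_OAR) = BED_LIMIT (positive root).
    a, b, c = 1.0 / AB_OAR, 1.0, -BED_LIMIT / n
    d_oar = (-b + math.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
    return d_oar / OAR_FRAC

def cells_remaining(n, d):
    # LQ surviving fraction with a simple repopulation penalty per session.
    return math.exp(-n * (ALPHA * d + BETA * d * d) + REPOP * n)

best_n = min(range(5, 41),
             key=lambda n: cells_remaining(n, max_dose_per_session(n)))
print(best_n)
```

With these numbers, more fractions spare the low-alpha/beta organ at risk while repopulation penalizes longer courses, so the minimum lands at an interior session count rather than at either end of the search range.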
Optimizing Real-Time Vaccine Allocation in a Stochastic SIR Model
Nguyen, Chantal; Carlson, Jean M.
2016-01-01
Real-time vaccination following an outbreak can effectively mitigate the damage caused by an infectious disease. However, in many cases, available resources are insufficient to vaccinate the entire at-risk population, logistics result in delayed vaccine deployment, and the interaction between members of different cities facilitates a wide spatial spread of infection. Limited vaccine, time delays, and interaction (or coupling) of cities lead to tradeoffs that impact the overall magnitude of the epidemic. These tradeoffs mandate investigation of optimal strategies that minimize the severity of the epidemic by prioritizing allocation of vaccine to specific subpopulations. We use an SIR model to describe the disease dynamics of an epidemic which breaks out in one city and spreads to another. We solve a master equation to determine the resulting probability distribution of the final epidemic size. We then identify tradeoffs between vaccine, time delay, and coupling, and we determine the optimal vaccination protocols resulting from these tradeoffs. PMID:27043931
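Where the study solves the master equation for the exact final-size distribution, the same quantity can be approximated by stochastic simulation. Below is a minimal Gillespie-style sketch of a single-city stochastic SIR outbreak; population size and rates are illustrative.

```python
import random

random.seed(1)

def final_size(N=100, I0=2, beta=0.3, gamma=0.1):
    """Simulate one stochastic SIR outbreak; return the total ever infected."""
    S, I = N - I0, I0
    while I > 0:
        infect_rate = beta * S * I / N   # rate of S -> I events
        recover_rate = gamma * I         # rate of I -> R events
        # Only the embedded jump chain matters for the final size,
        # so we draw which event happens next, not when.
        if random.random() < infect_rate / (infect_rate + recover_rate):
            S -= 1; I += 1
        else:
            I -= 1
    return N - S

sizes = [final_size() for _ in range(500)]
mean_size = sum(sizes) / len(sizes)
print(round(mean_size, 1))
```

With R0 = beta/gamma = 3, the resulting distribution is bimodal: a small cluster of early-extinction runs and a large cluster of major outbreaks, which is why the study works with the full distribution rather than just its mean.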
NASA Astrophysics Data System (ADS)
Widodo, Edy; Kariyam
2017-03-01
Response Surface Methodology (RSM) is used to determine the input variable settings that produce the optimal compromise in the response variables. There are three primary steps in an RSM problem, namely data collection, modelling, and optimization. This study focuses on the establishment of response surface models, under the assumption that the collected data are correct. Usually the response surface model parameters are estimated by OLS; however, this method is highly sensitive to outliers. Outliers can generate substantial residuals and often distort the estimated models. The resulting estimators can be biased and can lead to errors in determining the optimal point, so that the main purpose of RSM is not achieved. Meanwhile, in practice, the collected data often contain several response variables together with a set of independent variables. Treating each response separately and applying single-response procedures can result in wrong interpretations, so a model for the multi-response case is needed; in particular, a multivariate response surface model that is resistant to outliers. As an alternative, this study discusses M-estimation as a parameter estimator for multivariate response surface models containing outliers. As an illustration, a case study is presented on experimental results for the enhancement of the surface layer of an aluminium alloy by shot peening.
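The robustness argument above can be illustrated with a univariate sketch: M-estimation with Huber weights, computed by iteratively reweighted least squares (IRLS), for a straight-line response-surface fit. The data contain one deliberate outlier; all values are illustrative, and the study's multivariate formulation is more general.

```python
# Huber M-estimation for a line fit via IRLS, compared with plain OLS.

def weighted_line_fit(x, y, w):
    """Weighted least-squares fit y = a + b*x; returns (a, b)."""
    sw = sum(w)
    mx = sum(wi * xi for wi, xi in zip(w, x)) / sw
    my = sum(wi * yi for wi, yi in zip(w, y)) / sw
    b = (sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, x, y))
         / sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, x)))
    return my - b * mx, b

def huber_fit(x, y, c=1.345, iters=20):
    w = [1.0] * len(x)
    for _ in range(iters):
        a, b = weighted_line_fit(x, y, w)
        r = [yi - (a + b * xi) for xi, yi in zip(x, y)]
        # Robust scale estimate from the median absolute residual.
        s = sorted(abs(ri) for ri in r)[len(r) // 2] / 0.6745 or 1.0
        # Huber weights: full weight for small residuals, downweight large ones.
        w = [1.0 if abs(ri) <= c * s else c * s / abs(ri) for ri in r]
    return a, b

x = [0, 1, 2, 3, 4, 5]
y = [0.1, 1.0, 12.0, 2.9, 4.2, 5.1]   # true slope ~1; y[2] is an outlier
a_ols, b_ols = weighted_line_fit(x, y, [1.0] * len(x))
a_rob, b_rob = huber_fit(x, y)
print(round(b_ols, 2), round(b_rob, 2))  # OLS slope is pulled off; Huber is not
```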
NASA Astrophysics Data System (ADS)
Satti, S.; Zaitchik, B. F.; Siddiqui, S.; Badr, H. S.; Shukla, S.; Peters-Lidard, C. D.
2015-12-01
The unpredictable nature of precipitation within the East African (EA) region makes it one of the most vulnerable, food insecure regions in the world. There is a vital need for forecasts to inform decision makers, both local and regional, and to help formulate the region's climate change adaptation strategies. Here, we present a suite of different seasonal forecast models, both statistical and dynamical, for the EA region. Objective regionalization is performed for EA on the basis of interannual variability in precipitation in both observations and models. This regionalization is applied as the basis for calculating a number of standard skill scores to evaluate each model's forecast accuracy. A dynamically linked Land Surface Model (LSM) is then applied to determine forecasted flows, which drive the Sudanese Hydroeconomic Optimization Model (SHOM). SHOM combines hydrologic, agronomic and economic inputs to determine the optimal decisions that maximize economic benefits along the Sudanese Blue Nile. This modeling sequence is designed to derive the potential added value of information of each forecasting model to agriculture and hydropower management. A rank of each model's forecasting skill score along with its added value of information is analyzed in order compare the performance of each forecast. This research aims to improve understanding of how characteristics of accuracy, lead time, and uncertainty of seasonal forecasts influence their utility to water resources decision makers who utilize them.
An atomic model of brome mosaic virus using direct electron detection and real-space optimization.
Wang, Zhao; Hryc, Corey F; Bammes, Benjamin; Afonine, Pavel V; Jakana, Joanita; Chen, Dong-Hua; Liu, Xiangan; Baker, Matthew L; Kao, Cheng; Ludtke, Steven J; Schmid, Michael F; Adams, Paul D; Chiu, Wah
2014-09-04
Advances in electron cryo-microscopy have enabled structure determination of macromolecules at near-atomic resolution. However, structure determination, even using de novo methods, remains susceptible to model bias and overfitting. Here we describe a complete workflow for data acquisition, image processing, all-atom modelling and validation of brome mosaic virus, an RNA virus. Data were collected with a direct electron detector in integrating mode and an exposure beyond the traditional radiation damage limit. The final density map has a resolution of 3.8 Å as assessed by two independent data sets and maps. We used the map to derive an all-atom model with a newly implemented real-space optimization protocol. The validity of the model was verified by its match with the density map and a previous model from X-ray crystallography, as well as the internal consistency of models from independent maps. This study demonstrates a practical approach to obtain a rigorously validated atomic resolution electron cryo-microscopy structure.
An automatic and effective parameter optimization method for model tuning
NASA Astrophysics Data System (ADS)
Zhang, T.; Li, L.; Lin, Y.; Xue, W.; Xie, F.; Xu, H.; Huang, X.
2015-05-01
Physical parameterizations in General Circulation Models (GCMs), having various uncertain parameters, greatly impact model performance and model climate sensitivity. Traditional manual and empirical tuning of these parameters is time consuming and ineffective. In this study, a "three-step" methodology is proposed to automatically and effectively obtain the optimum combination of some key parameters in cloud and convective parameterizations according to a comprehensive objective evaluation metrics. Different from the traditional optimization methods, two extra steps, one determines parameter sensitivity and the other chooses the optimum initial value of sensitive parameters, are introduced before the downhill simplex method to reduce the computational cost and improve the tuning performance. Atmospheric GCM simulation results show that the optimum combination of these parameters determined using this method is able to improve the model's overall performance by 9%. The proposed methodology and software framework can be easily applied to other GCMs to speed up the model development process, especially regarding unavoidable comprehensive parameters tuning during the model development stage.
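The "three-step" flow above can be sketched on a toy objective: (1) screen parameter sensitivity by one-at-a-time perturbation, (2) pick a good starting value for the sensitive parameters from a coarse grid, (3) refine locally. A simple pattern search stands in here for the downhill simplex, and the "model skill" objective is synthetic, not a GCM metric.

```python
def skill_error(p):
    # Synthetic objective: p[0] matters most, p[1] a little, p[2] not at all.
    return (p[0] - 0.7) ** 2 + 0.1 * (p[1] - 0.3) ** 2 + 0.0 * p[2]

base = [0.5, 0.5, 0.5]

# Step 1: sensitivity screening by +/-10% perturbations around the base values.
sens = []
for i in range(3):
    hi = base[:]; hi[i] *= 1.1
    lo = base[:]; lo[i] *= 0.9
    sens.append(abs(skill_error(hi) - skill_error(lo)))
sensitive = [i for i in range(3) if sens[i] > 1e-6]

# Step 2: coarse grid search over the sensitive parameters only.
p = base[:]
for i in sensitive:
    grid = [0.1 * k for k in range(11)]
    p[i] = min(grid, key=lambda v: skill_error(p[:i] + [v] + p[i + 1:]))

# Step 3: local refinement with shrinking steps (stand-in for downhill simplex).
step = 0.05
while step > 1e-4:
    improved = False
    for i in sensitive:
        for delta in (step, -step):
            trial = p[:]; trial[i] += delta
            if skill_error(trial) < skill_error(p):
                p = trial; improved = True
    if not improved:
        step /= 2
print([round(v, 3) for v in p])
```

The insensitive parameter is never touched, which is the point of the extra screening step: the expensive local search runs in a reduced space.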
Determination of the optimal area of waste incineration in a rotary kiln using a simulation model.
Bujak, J
2015-08-01
The article presents a mathematical model to determine the flux of incinerated waste in terms of its calorific values. The model is applicable in waste incineration systems equipped with rotary kilns. It is based on known and proven energy flux balances and on equations that describe the specific losses of energy flux while considering the specificity of waste incineration systems. The model is universal in that it can be used for the analysis and testing of systems burning different types of waste (municipal, medical, animal, etc.) and allows the use of any kind of additional fuel. The types of waste incinerated and the additional fuel are characterized by their elemental composition. The computational model has been verified in three existing industrial-scale plants. Each system incinerated a different type of waste, and each waste type was selected for a different calorific value, allowing full verification of the model. The model can therefore be used to optimize the operation of a waste incineration system both at the design stage and during its lifetime.
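As the waste types are characterized by elemental composition, a heating value can be estimated from that composition. The sketch below uses the classic Dulong correlation as an approximation; the coefficients vary slightly between sources, the composition is illustrative, and the paper's own model is built on energy-flux balances rather than this formula.

```python
def dulong_hhv(C, H, O, S):
    """Higher heating value (MJ/kg) from mass fractions of C, H, O, S,
    via the Dulong correlation (oxygen assumed bound to hydrogen)."""
    return 33.8 * C + 144.2 * (H - O / 8.0) + 9.4 * S

# Illustrative municipal-waste-like composition (mass fractions, dry basis).
hhv = dulong_hhv(C=0.45, H=0.06, O=0.30, S=0.002)
print(round(hhv, 2), "MJ/kg")
```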
NASA Astrophysics Data System (ADS)
Sun, Fengxin; Wang, Jufeng; Cheng, Rongjun; Ge, Hongxia
2018-02-01
The optimal driving speeds of different vehicles may differ for the same headway. In the optimal velocity function of the optimal velocity (OV) model, the maximum speed vmax is an important parameter determining the optimal driving speed: a vehicle with a higher maximum speed is more willing to drive faster than one with a lower maximum speed in a similar situation. By incorporating the anticipation driving behavior of relative velocity and mixed maximum speeds of different percentages into the optimal velocity function, an extended heterogeneous car-following model is presented in this paper. The analytical linear stability condition for this extended heterogeneous traffic model is obtained using linear stability theory. Numerical simulations are carried out to explore the complex phenomena resulting from the interplay between anticipation driving behavior and heterogeneous maximum speeds in the optimal velocity function. The analytical and numerical results both demonstrate that strengthening the driver's anticipation effect can improve the stability of heterogeneous traffic flow, and that increasing the lowest value among the mixed maximum speeds results in more instability, while increasing the value or proportion of the vehicles that already have a higher maximum speed affects stability differently at high and low traffic densities.
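A minimal sketch of the OV function and one car-following acceleration update is given below, using a commonly used hyperbolic-tangent form of the optimal velocity function. The parameter values are illustrative, and the anticipation term on relative velocity is a simplified stand-in for the extended model's formulation.

```python
import math

def optimal_velocity(headway, v_max, h_c=4.0):
    """OV function: desired speed for a given headway (safety distance h_c)."""
    return (v_max / 2.0) * (math.tanh(headway - h_c) + math.tanh(h_c))

def acceleration(v, headway, v_max, kappa=0.85, lam=0.4, dv=0.0):
    """OV-model acceleration: relax toward the optimal velocity, plus an
    anticipation term proportional to the relative velocity dv."""
    return kappa * (optimal_velocity(headway, v_max) - v) + lam * dv

# Heterogeneity in v_max: at the same headway and speed, the vehicle with the
# higher maximum speed accelerates harder.
a_slow = acceleration(v=5.0, headway=6.0, v_max=15.0)
a_fast = acceleration(v=5.0, headway=6.0, v_max=20.0)
print(round(a_slow, 3), round(a_fast, 3))
```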
Optimization of radial-type superconducting magnetic bearing using the Taguchi method
NASA Astrophysics Data System (ADS)
Ai, Liwang; Zhang, Guomin; Li, Wanjie; Liu, Guole; Liu, Qi
2018-07-01
It is important and complicated to model and optimize the levitation behavior of a superconducting magnetic bearing (SMB). The difficulty stems from the nonlinear constitutive relationships of the superconductor and ferromagnetic materials, the relative movement between the superconducting stator and the PM rotor, and the multiple parameters (e.g., air gap, critical current density, and remanent flux density) affecting the levitation behavior. In this paper, we present a theoretical calculation and optimization method for the levitation behavior of a radial-type SMB. A simplified model of the levitation force is established using a 2D finite element method with the H-formulation. In the model, the boundary condition at the superconducting stator is imposed through harmonic series expressions that describe the traveling magnetic field generated by the moving PM rotor. Experimental measurements of the levitation force are also performed and validate the modeling approach. A statistical technique, the Taguchi method, is adopted to optimize the load capacity of the SMB. The effects of six optimization parameters on the target characteristics are then discussed, and the optimum parameter combination is finally determined. The results show that the levitation behavior of the SMB is greatly improved and that the Taguchi method is suitable for optimizing the SMB.
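The Taguchi analysis mentioned above reduces to main-effects calculations on an orthogonal array. The sketch below uses the smallest such array, L4(2^3), with three two-level factors; the response values (standing in for, e.g., levitation force) are synthetic, and the study's six-parameter design would use a larger array.

```python
# Taguchi-style main-effects analysis on an L4 orthogonal array.

L4 = [  # rows: runs; columns: levels (0/1) of factors A, B, C
    (0, 0, 0),
    (0, 1, 1),
    (1, 0, 1),
    (1, 1, 0),
]
response = [42.0, 48.0, 55.0, 61.0]  # larger-is-better target, illustrative

def main_effect(factor):
    """Mean response at level 1 minus mean response at level 0."""
    lo = [r for row, r in zip(L4, response) if row[factor] == 0]
    hi = [r for row, r in zip(L4, response) if row[factor] == 1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

effects = [main_effect(f) for f in range(3)]
best_levels = [1 if e > 0 else 0 for e in effects]
print(effects, best_levels)
```

Because each factor level appears equally often against every level of the other factors, the level means isolate each factor's effect without running all 2^3 combinations, which is the economy the Taguchi method exploits.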
Pinto Mariano, Adriano; Bastos Borba Costa, Caliane; de Franceschi de Angelis, Dejanira; Maugeri Filho, Francisco; Pires Atala, Daniel Ibraim; Wolf Maciel, Maria Regina; Maciel Filho, Rubens
2009-11-01
In this work, the mathematical optimization of a continuous flash fermentation process for the production of biobutanol was studied. The process consists of three interconnected units: a fermentor, a cell-retention system (tangential microfiltration), and a vacuum flash vessel (responsible for the continuous recovery of butanol from the broth). The objective of the optimization was to maximize butanol productivity for a desired substrate conversion. Two strategies were compared for the optimization of the process. In one, the process was represented by a deterministic model with kinetic parameters determined experimentally; in the other, by a statistical model obtained using the factorial design technique combined with simulation. For both strategies, the problem was formulated as a nonlinear programming problem and solved with the sequential quadratic programming technique. The results showed that the two strategies yield very similar solutions. However, the problems encountered with the deterministic-model strategy, such as lack of convergence and high computational time, make the statistical-model strategy, which proved robust and fast, more suitable for the flash fermentation process and recommended for real-time applications coupling optimization and control.
Analysis and Evaluation of Parameters Determining Maximum Efficiency of Fish Protection
NASA Astrophysics Data System (ADS)
Khetsuriani, E. D.; Kostyukov, V. P.; Khetsuriani, T. E.
2017-11-01
The article is concerned with experimental research findings. The efficiency of protecting fish fry from entering water inlets is the main criterion of any fish protection facility or device. The research aimed to determine an adequate mathematical model E = f(PCT, Vp, α), where PCT, Vp, and α are controlled factors influencing the process of fish fry protection. Processing the experimental data yielded an adequate regression model. We determined the maximum fish protection efficiency Emax = 94.21 and the minimum of the optimization function Emin = 44.41. Statistical processing of the experimental data also yielded adequate dependences for determining the optimal rotational speed of the tip and the fish protection efficiency. Analysis of the fish protection efficiency dependence E% = f(PCT, Vp, α) allowed the authors to recommend the following optimized operating modes: the maximum fish protection efficiency is achieved at a process pressure PCT = 3 atm, stream velocity Vp = 0.42 m/s, and nozzle inclination angle α = 47°49'. The stream velocity Vp has the most critical influence on fish protection efficiency. The maximum efficiency of fish protection is obtained at a tip rotational speed of 70.92 rpm.
Esteghamati, Alireza; Ashraf, Haleh; Khalilzadeh, Omid; Zandieh, Ali; Nakhjavani, Manouchehr; Rashidi, Armin; Haghazali, Mehrdad; Asgari, Fereshteh
2010-04-07
We have recently determined the optimal cut-off of the homeostatic model assessment of insulin resistance (HOMA-IR) for the diagnosis of insulin resistance (IR) and metabolic syndrome (MetS) in non-diabetic residents of Tehran, the capital of Iran. The aim of the present study is to establish the optimal cut-off at the national level in the Iranian population with and without diabetes. Data from the third National Surveillance of Risk Factors of Non-Communicable Diseases, available for 3,071 adult Iranian individuals aged 25-64 years, were analyzed. MetS was defined according to the Adult Treatment Panel III (ATPIII) and International Diabetes Federation (IDF) criteria. HOMA-IR cut-offs from the 50th to the 95th percentile were calculated, and sensitivity, specificity, and positive likelihood ratio for MetS diagnosis were determined. The receiver operating characteristic (ROC) curves of HOMA-IR for MetS diagnosis were plotted, and the optimal cut-offs were determined by two different methods: the Youden index, and the shortest distance from the top left corner of the curve. The area under the curve (AUC) (95% CI) was 0.650 (0.631-0.670) for IDF-defined MetS and 0.683 (0.664-0.703) with the ATPIII definition. The optimal HOMA-IR cut-off for the diagnosis of IDF- and ATPIII-defined MetS in non-diabetic individuals was 1.775 (sensitivity: 57.3%, specificity: 65.3%, with ATPIII; sensitivity: 55.9%, specificity: 64.7%, with IDF). The optimal cut-offs in diabetic individuals were 3.875 (sensitivity: 49.7%, specificity: 69.6%) and 4.325 (sensitivity: 45.4%, specificity: 69.0%) for ATPIII- and IDF-defined MetS, respectively. We determined the optimal HOMA-IR cut-off points for the diagnosis of MetS in the Iranian population with and without diabetes.
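The Youden index method used above selects the cut-off maximizing sensitivity + specificity - 1 along the ROC curve. A minimal sketch follows; the HOMA-IR scores and MetS labels are made-up toy data, not the study's measurements.

```python
def youden_optimal_cutoff(scores, labels):
    """Pick the threshold maximizing Youden's J = sensitivity + specificity - 1.
    A case is predicted positive when its score >= threshold."""
    best_cut, best_j = None, -1.0
    for cut in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s >= cut and y == 1)
        fn = sum(1 for s, y in zip(scores, labels) if s < cut and y == 1)
        tn = sum(1 for s, y in zip(scores, labels) if s < cut and y == 0)
        fp = sum(1 for s, y in zip(scores, labels) if s >= cut and y == 0)
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        j = sens + spec - 1.0
        if j > best_j:
            best_cut, best_j = cut, j
    return best_cut, best_j

# Hypothetical HOMA-IR scores; MetS cases (label 1) tend to score higher
homa = [0.9, 1.2, 1.5, 1.8, 2.1, 2.6, 3.0, 4.1]
mets = [0,   0,   0,   1,   0,   1,   1,   1]
cutoff, j = youden_optimal_cutoff(homa, mets)
```

The alternative criterion in the study, the shortest distance to the ROC corner, would replace the J statistic with sqrt((1 - sens)^2 + (1 - spec)^2) and take the minimum.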
Topology synthesis and size optimization of morphing wing structures
NASA Astrophysics Data System (ADS)
Inoyama, Daisaku
This research demonstrates a novel topology and size optimization methodology for synthesis of distributed actuation systems with specific applications to morphing air vehicle structures. The main emphasis is placed on the topology and size optimization problem formulations and the development of computational modeling concepts. The analysis model is developed to meet several important criteria: It must allow a rigid-body displacement, as well as a variation in planform area, with minimum strain on structural members while retaining acceptable numerical stability for finite element analysis. Topology optimization is performed on a semi-ground structure with design variables that control the system configuration. In effect, the optimization process assigns morphing members as "soft" elements, non-morphing load-bearing members as "stiff" elements, and non-existent members as "voids." The optimization process also determines the optimum actuator placement, where each actuator is represented computationally by equal and opposite nodal forces with soft axial stiffness. In addition, the configuration of attachments that connect the morphing structure to a non-morphing structure is determined simultaneously. Several different optimization problem formulations are investigated to understand their potential benefits in solution quality, as well as meaningfulness of the formulations. Extensions and enhancements to the initial concept and problem formulations are made to accommodate multiple-configuration definitions. In addition, the principal issues on the external-load dependency and the reversibility of a design, as well as the appropriate selection of a reference configuration, are addressed in the research. The methodology to control actuator distributions and concentrations is also discussed.
Finally, the strategy to transfer the topology solution to the sizing optimization is developed and cross-sectional areas of existent structural members are optimized under applied aerodynamic loads. That is, the optimization process is implemented in sequential order: The actuation system layout is first determined through multi-disciplinary topology optimization process, and then the thickness or cross-sectional area of each existent member is optimized under given constraints and boundary conditions. Sample problems are solved to demonstrate the potential capabilities of the presented methodology. The research demonstrates an innovative structural design procedure from a computational perspective and opens new insights into the potential design requirements and characteristics of morphing structures.
NASA Astrophysics Data System (ADS)
Octarina, Sisca; Radiana, Mutia; Bangun, Putra B. J.
2018-01-01
The two-dimensional cutting stock problem (CSP) is the problem of determining cutting patterns from a set of stock sheets of standard length and width to fulfill the demand for items. Cutting patterns were determined so as to minimize stock usage. This research implemented a pattern generation algorithm to formulate the Gilmore and Gomory model of the two-dimensional CSP. The constraints of the Gilmore and Gomory model ensure that the strips cut in the first stage are used in the second stage. The Branch and Cut method was used to obtain the optimal solution. The results show that many pattern combinations arise when the optimal first-stage cutting patterns are combined with those of the second stage.
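Pattern generation for Gilmore-Gomory formulations enumerates the feasible ways items fit into a stock piece. Below is a minimal one-stage sketch that enumerates maximal patterns along a single stock dimension; the stock width and item sizes are illustrative assumptions, and the real two-stage model adds strip-linking constraints on top of such patterns.

```python
def cutting_patterns(stock_len, item_lens):
    """Enumerate all maximal cutting patterns: tuples of item counts that
    fit within stock_len and leave no room for any further item."""
    patterns = []

    def extend(i, counts, remaining):
        if i == len(item_lens):
            if all(remaining < l for l in item_lens):  # maximal: nothing else fits
                patterns.append(tuple(counts))
            return
        for c in range(remaining // item_lens[i], -1, -1):
            extend(i + 1, counts + [c], remaining - c * item_lens[i])

    extend(0, [], stock_len)
    return patterns

# Hypothetical first stage: stock of width 10, strip widths 3 and 4
pats = cutting_patterns(10, [3, 4])
```

Each tuple gives the count of each strip width in one pattern; the integer program then chooses how many times to apply each pattern to meet demand with minimum stock.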
Analysis of an inventory model for both linearly decreasing demand and holding cost
NASA Astrophysics Data System (ADS)
Malik, A. K.; Singh, Parth Raj; Tomar, Ajay; Kumar, Satish; Yadav, S. K.
2016-03-01
This study analyzes an inventory model with linearly decreasing demand and holding cost for non-instantaneously deteriorating items. The model focuses on commodities with linearly decreasing demand and no shortages. The holding cost does not remain uniform over time, owing to variations in the time value of money; here we consider a holding cost that decreases with respect to time. The optimal time interval for the total profit and the optimal order quantity are determined. The developed inventory model is illustrated through a numerical example, and a sensitivity analysis is included.
Boccaccio, Antonio; Uva, Antonio Emmanuele; Fiorentino, Michele; Mori, Giorgio; Monno, Giuseppe
2016-01-01
Functionally Graded Scaffolds (FGSs) are porous biomaterials in which porosity changes in space with a specific gradient. In spite of their wide use in bone tissue engineering, models that relate the scaffold gradient to the mechanical and biological requirements for the regeneration of bony tissue are currently missing. In this study we attempt to bridge the gap by developing a mechanobiology-based optimization algorithm aimed at determining the optimal graded porosity distribution in FGSs. The algorithm combines a parametric finite element model of a FGS, a computational mechano-regulation model, and a numerical optimization routine. For assigned boundary and loading conditions, the algorithm iteratively builds scaffold geometry configurations with different porosity distributions until the best microstructure geometry is reached, i.e., the geometry that maximizes the amount of bone formation. We tested different porosity distribution laws, loading conditions, and scaffold Young's modulus values. For each combination of these variables, we determined the explicit equation of the porosity distribution law, i.e., the law that describes the pore dimensions as a function of the spatial coordinates, that allows the highest amounts of bone to be generated. The results show that the loading conditions significantly affect the optimal porosity distribution. For pure compression loading, the pore dimensions are almost constant throughout the entire scaffold, and using a FGS allows the formation of amounts of bone only slightly larger than those obtainable with a homogeneous porosity scaffold. For pure shear loading, instead, FGSs significantly increase bone formation compared to homogeneous porosity scaffolds.
Although experimental data are still necessary to properly relate the mechanical/biological environment to the scaffold microstructure, this model represents an important step towards optimizing the geometry of functionally graded scaffolds based on mechanobiological criteria.
Ritchie, Marylyn D; White, Bill C; Parker, Joel S; Hahn, Lance W; Moore, Jason H
2003-01-01
Background Appropriate definition of neural network architecture prior to data analysis is crucial for successful data mining. This can be challenging when the underlying model of the data is unknown. The goal of this study was to determine whether optimizing neural network architecture using genetic programming as a machine learning strategy would improve the ability of neural networks to model and detect nonlinear interactions among genes in studies of common human diseases. Results Using simulated data, we show that a genetic programming optimized neural network approach is able to model gene-gene interactions as well as a traditional back propagation neural network. Furthermore, the genetic programming optimized neural network is better than the traditional back propagation neural network approach in terms of predictive ability and power to detect gene-gene interactions when non-functional polymorphisms are present. Conclusion This study suggests that a machine learning strategy for optimizing neural network architecture may be preferable to traditional trial-and-error approaches for the identification and characterization of gene-gene interactions in common, complex human diseases. PMID:12846935
NASA Astrophysics Data System (ADS)
Sharqawy, Mostafa H.
2016-12-01
Pore network models (PNMs) of Berea and Fontainebleau sandstones were constructed using nonlinear programming (NLP) and optimization methods. The constructed PNMs are considered a digital representation of the rock samples, based on matching the macroscopic properties of the porous media, and were used to conduct fluid transport simulations including single- and two-phase flow. The PNMs consisted of cubic networks of randomly distributed pore and throat sizes with various connectivity levels. The networks were optimized such that the upper and lower bounds of the pore sizes are determined using the capillary tube bundle model and the Nelder-Mead method instead of being guessed, which reduces the optimization computational time significantly. An open-source PNM framework was employed to conduct transport and percolation simulations such as invasion percolation and Darcian flow. The PNM was subsequently used to compute the macroscopic properties: porosity, absolute permeability, specific surface area, breakthrough capillary pressure, and primary drainage curve. The pore networks were optimized so that the simulated macroscopic properties are in excellent agreement with the experimental measurements. This study demonstrates that nonlinear programming and optimization methods provide a promising approach to pore network modeling when computed tomography imaging is not readily available.
A new enhanced index tracking model in portfolio optimization with sum weighted approach
NASA Astrophysics Data System (ADS)
Siew, Lam Weng; Jaaman, Saiful Hafizah; Hoe, Lam Weng
2017-04-01
Index tracking is a portfolio management approach that aims to construct an optimal portfolio achieving a return similar to the benchmark index return at minimum tracking error, without purchasing all the stocks that make up the index. Enhanced index tracking is an improved approach that aims to generate a higher portfolio return than the benchmark index return while minimizing the tracking error. The objective of this paper is to propose a new enhanced index tracking model with a sum weighted approach to improve the existing index tracking model for tracking the benchmark Technology Index in Malaysia. The optimal portfolio composition and performance of both models are determined and compared in terms of portfolio mean return, tracking error, and information ratio. The results of this study show that the optimal portfolio of the proposed model is able to generate a higher mean return than the benchmark index at minimum tracking error, and that the proposed model outperforms the existing model in tracking the benchmark index. The significance of this study is to propose a new enhanced index tracking model with a sum weighted approach, which contributes a 67% improvement in portfolio mean return compared to the existing model.
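The three comparison metrics named above (portfolio mean return, tracking error, information ratio) can be computed directly. In this sketch the stock and index returns are toy numbers, and a grid search over fully-invested two-stock weights merely stands in for the paper's optimization model.

```python
import math

def tracking_stats(weights, stock_rets, index_rets):
    """Portfolio mean return, tracking error (RMS of active return),
    and information ratio versus the benchmark."""
    port = [sum(w * r for w, r in zip(weights, row)) for row in stock_rets]
    active = [p - b for p, b in zip(port, index_rets)]
    te = math.sqrt(sum(a * a for a in active) / len(active))
    mean_ret = sum(port) / len(port)
    mean_active = sum(active) / len(active)
    ir = mean_active / te if te > 0 else float("inf")
    return mean_ret, te, ir

# Hypothetical monthly returns for two stocks and the benchmark index
stock_rets = [(0.02, 0.01), (-0.01, 0.00), (0.03, 0.02), (0.01, 0.01)]
index_rets = [0.015, -0.005, 0.025, 0.01]

# Grid search over fully-invested weights (w, 1 - w), minimizing tracking error
weights_grid = [(w / 20, 1 - w / 20) for w in range(21)]
best = min(weights_grid, key=lambda ws: tracking_stats(ws, stock_rets, index_rets)[1])
```

An enhanced-index-tracking variant would instead trade off tracking error against excess mean return, e.g. by maximizing the information ratio subject to weight constraints.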
A kriging metamodel-assisted robust optimization method based on a reverse model
NASA Astrophysics Data System (ADS)
Zhou, Hui; Zhou, Qi; Liu, Congwei; Zhou, Taotao
2018-02-01
The goal of robust optimization methods is to obtain a solution that is both optimum and relatively insensitive to uncertainty factors. Most existing robust optimization approaches use outer-inner nested optimization structures where a large amount of computational effort is required because the robustness of each candidate solution delivered from the outer level should be evaluated in the inner level. In this article, a kriging metamodel-assisted robust optimization method based on a reverse model (K-RMRO) is first proposed, in which the nested optimization structure is reduced into a single-loop optimization structure to ease the computational burden. Ignoring the interpolation uncertainties from kriging, K-RMRO may yield non-robust optima. Hence, an improved kriging-assisted robust optimization method based on a reverse model (IK-RMRO) is presented to take the interpolation uncertainty of kriging metamodel into consideration. In IK-RMRO, an objective switching criterion is introduced to determine whether the inner level robust optimization or the kriging metamodel replacement should be used to evaluate the robustness of design alternatives. The proposed criterion is developed according to whether or not the robust status of the individual can be changed because of the interpolation uncertainties from the kriging metamodel. Numerical and engineering cases are used to demonstrate the applicability and efficiency of the proposed approach.
Vector boson fusion in the inert doublet model
NASA Astrophysics Data System (ADS)
Dutta, Bhaskar; Palacio, Guillermo; Restrepo, Diego; Ruiz-Álvarez, José D.
2018-03-01
In this paper we probe the inert Higgs doublet model at the LHC using a vector boson fusion (VBF) search strategy. We optimize the selection cuts, investigate the parameter space of the model, and show that the VBF search has a better reach than the monojet searches. We also investigate Drell-Yan type cuts and show that they can be important for smaller charged Higgs masses. We determine the 3σ reach for the parameter space using these optimized cuts for a luminosity of 3000 fb-1.
Šumić, Zdravko; Vakula, Anita; Tepić, Aleksandra; Čakarević, Jelena; Vitas, Jasmina; Pavlić, Branimir
2016-07-15
Fresh red currants were dried by a vacuum drying process under different drying conditions. A Box-Behnken experimental design with response surface methodology was used to optimize the drying process in terms of the physical (moisture content, water activity, total color change, firmness, and rehydration power) and chemical (total phenols, total flavonoids, monomeric anthocyanins, ascorbic acid content, and antioxidant activity) properties of the dried samples. Temperature (48-78 °C), pressure (30-330 mbar), and drying time (8-16 h) were investigated as independent variables. Experimental results were fitted to a second-order polynomial model, with regression analysis and analysis of variance used to determine model fitness and optimal drying conditions. The optimal conditions for the simultaneously optimized responses were a temperature of 70.2 °C, a pressure of 39 mbar, and a drying time of 8 h. It can be concluded that vacuum drying provides samples with good physico-chemical properties, similar to the lyophilized sample and better than the conventionally dried sample.
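Fitting a second-order polynomial and reading off its stationary point is the core of this response-surface step. A one-variable sketch follows, with hypothetical temperature/retention data (not the paper's measurements) and a hand-rolled normal-equations solver using Cramer's rule.

```python
def fit_quadratic(xs, ys):
    """Least-squares fit of y = a + b*x + c*x^2 by solving the 3x3 normal
    equations with Cramer's rule. Returns [a, b, c]."""
    n = len(xs)
    s = lambda p: sum(x ** p for x in xs)
    sy = lambda p: sum((x ** p) * y for x, y in zip(xs, ys))
    A = [[n, s(1), s(2)], [s(1), s(2), s(3)], [s(2), s(3), s(4)]]
    rhs = [sy(0), sy(1), sy(2)]
    det = lambda m: (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                     - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                     + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    D = det(A)
    coeffs = []
    for j in range(3):
        M = [row[:] for row in A]
        for i in range(3):
            M[i][j] = rhs[i]
        coeffs.append(det(M) / D)
    return coeffs

# Hypothetical retention (%) of a quality attribute vs. drying temperature (°C)
temps = [48.0, 55.0, 63.0, 70.0, 78.0]
retention = [70.0, 78.5, 82.0, 80.5, 72.0]
a, b_, c = fit_quadratic(temps, retention)
t_opt = -b_ / (2 * c)   # stationary point of the fitted parabola
```

The multi-factor Box-Behnken case works the same way with cross-terms added and the stationary point found by solving the gradient system.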
Parameter optimization for surface flux transport models
NASA Astrophysics Data System (ADS)
Whitbread, T.; Yeates, A. R.; Muñoz-Jaramillo, A.; Petrie, G. J. D.
2017-11-01
Accurate prediction of solar activity calls for precise calibration of solar cycle models. Consequently we aim to find optimal parameters for models which describe the physical processes on the solar surface, which in turn act as proxies for what occurs in the interior and provide source terms for coronal models. We use a genetic algorithm to optimize surface flux transport models using National Solar Observatory (NSO) magnetogram data for Solar Cycle 23. This is applied to both a 1D model that inserts new magnetic flux in the form of idealized bipolar magnetic regions, and also to a 2D model that assimilates specific shapes of real active regions. The genetic algorithm searches for parameter sets (meridional flow speed and profile, supergranular diffusivity, initial magnetic field, and radial decay time) that produce the best fit between observed and simulated butterfly diagrams, weighted by a latitude-dependent error structure which reflects uncertainty in observations. Due to the easily adaptable nature of the 2D model, the optimization process is repeated for Cycles 21, 22, and 24 in order to analyse cycle-to-cycle variation of the optimal solution. We find that the ranges and optimal solutions for the various regimes are in reasonable agreement with results from the literature, both theoretical and observational. The optimal meridional flow profiles for each regime are almost entirely within observational bounds determined by magnetic feature tracking, with the 2D model being able to accommodate the mean observed profile more successfully. Differences between models appear to be important in deciding values for the diffusive and decay terms. In like fashion, differences in the behaviours of different solar cycles lead to contrasts in parameters defining the meridional flow and initial field strength.
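A genetic algorithm of the kind used above evolves parameter sets toward the best fit between observed and simulated data. The sketch below is a minimal real-coded GA on a toy calibration loss; the loss function, bounds, and operator choices (elitist truncation selection, blend crossover, Gaussian mutation) are illustrative assumptions, not the authors' implementation.

```python
import random

def genetic_minimize(loss, bounds, pop_size=30, generations=60, seed=1):
    """Minimal real-coded genetic algorithm: elitist truncation selection,
    blend (average) crossover, Gaussian mutation. Returns the best vector."""
    rng = random.Random(seed)
    dim = len(bounds)

    def clip(v, i):
        lo, hi = bounds[i]
        return min(max(v, lo), hi)

    pop = [[rng.uniform(*bounds[i]) for i in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        elite = sorted(pop, key=loss)[: max(2, pop_size // 5)]
        children = [ind[:] for ind in elite]          # elitism: keep the best
        while len(children) < pop_size:
            p1, p2 = rng.sample(elite, 2)
            children.append([
                clip(0.5 * (x1 + x2)
                     + rng.gauss(0.0, 0.1 * (bounds[i][1] - bounds[i][0])), i)
                for i, (x1, x2) in enumerate(zip(p1, p2))
            ])
        pop = children
    return min(pop, key=loss)

# Toy calibration: recover a flow-speed-like and a decay-like parameter
def loss(params):
    u, d = params
    return (u - 11.0) ** 2 + (d - 500.0) ** 2 / 1e4

best = genetic_minimize(loss, bounds=[(0.0, 25.0), (100.0, 1000.0)])
```

In the paper's setting, the loss would be the weighted misfit between observed and simulated butterfly diagrams rather than this stand-in quadratic.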
Evans, M. D. R.; Kelley, Paul; Kelley, Jonathan
2017-01-01
University days generally start at fixed times in the morning, often early morning, without regard to optimal functioning times for students with different chronotypes. Research has shown that later starting times are crucial to high school students' sleep, health, and performance. Shifting the focus to university, this study used two new approaches to determine ranges of start times that optimize cognitive functioning for undergraduates. The first is a survey-based, empirical model (SM), and the second a neuroscience-based, theoretical model (NM). The SM focused on students' self-reported chronotype and times they feel at their best. Using this approach, data from 190 mostly first and second year university students were collected and analyzed to determine optimal times when cognitive performance can be expected to be at its peak. The NM synthesized research in sleep, circadian neuroscience, sleep deprivation's impact on cognition, and practical considerations to create a generalized solution to determine the best learning hours. Strikingly the SM and NM results align with each other and confirm other recent research in indicating later start times. They add several important points: (1) They extend our understanding by showing that much later starting times (after 11 a.m. or 12 noon) are optimal; (2) Every single start time disadvantages one or more chronotypes; and (3) The best practical model may involve three alternative starting times with one afternoon shared session. The implications are briefly considered. PMID:28469566
Hydraulic containment: analytical and semi-analytical models for capture zone curve delineation
NASA Astrophysics Data System (ADS)
Christ, John A.; Goltz, Mark N.
2002-05-01
We present an efficient semi-analytical algorithm that uses complex potential theory and superposition to delineate the capture zone curves of extraction wells. This algorithm is more flexible than previously published techniques and allows the user to determine the capture zone for a number of arbitrarily positioned extraction wells pumping at different rates. The algorithm is applied to determine the capture zones and optimal well spacing of two wells pumping at different flow rates and positioned at various orientations to the direction of regional groundwater flow. The algorithm is also applied to determine capture zones for non-colinear three-well configurations as well as to determine optimal well spacing for up to six wells pumping at the same rate. We show that the optimal well spacing is found by minimizing the difference in the stream function evaluated at the stagnation points.
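The complex-potential superposition described above can be sketched for uniform flow plus extraction wells. The aquifer is treated as 2D with unit thickness, the values of U and Q are hypothetical, and the single-well identities used in the checks (stagnation point a distance Q/(2πU) downgradient of the well, far-upstream capture width Q/U) are standard analytic results.

```python
import cmath
import math

def complex_potential(z, U, wells):
    """Complex potential for uniform flow of speed U in the +x direction
    plus extraction wells: w(z) = -U*z + sum(Q/(2*pi) * ln(z - z_w))."""
    w = -U * z
    for zw, Q in wells:
        w += Q / (2 * math.pi) * cmath.log(z - zw)
    return w

def velocity(z, U, wells):
    """Derivative dw/dz (complex-conjugate velocity); its zeros are the
    stagnation points that anchor the capture zone curve."""
    v = -U + 0j
    for zw, Q in wells:
        v += Q / (2 * math.pi) / (z - zw)
    return v

# Single extraction well at the origin (hypothetical regional flux and rate)
U, Q = 1.0e-5, 3.0e-3
z_stag = Q / (2 * math.pi * U)   # stagnation point on the downstream axis
```

For multiple arbitrarily placed wells, the same superposition applies; the capture zone curve is then traced as the streamline (constant imaginary part of w) passing through each stagnation point.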
Optimizing Storage and Renewable Energy Systems with REopt
DOE Office of Scientific and Technical Information (OSTI.GOV)
Elgqvist, Emma M.; Anderson, Katherine H.; Cutler, Dylan S.
Under the right conditions, behind the meter (BTM) storage combined with renewable energy (RE) technologies can provide both cost savings and resiliency. Storage economics depend not only on technology costs and avoided utility rates, but also on how the technology is operated. REopt, a model developed at NREL, can be used to determine the optimal size and dispatch strategy for BTM or off-grid applications. This poster gives an overview of three applications of REopt: Optimizing BTM Storage and RE to Extend Probability of Surviving Outage, Optimizing Off-Grid Energy System Operation, and Optimizing Residential BTM Solar 'Plus'.
Gravity inversion of a fault by Particle swarm optimization (PSO).
Toushmalani, Reza
2013-01-01
Particle swarm optimization (PSO) is a heuristic global optimization algorithm based on swarm intelligence, inspired by research on the movement behavior of bird flocks and fish schools. In this paper we introduce and apply this method to the gravity inverse problem. We discuss the solution of the inverse problem of determining the shape of a fault whose gravity anomaly is known. Application of the proposed algorithm to this problem has proven its capability to deal with difficult optimization problems. The technique proved to work efficiently when tested on a number of models.
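A minimal global-best PSO of the kind applied here can be sketched in a few lines. The "fault" misfit below is a stand-in quadratic on hypothetical depth/offset parameters, not the actual gravity forward model.

```python
import random

def pso_minimize(f, bounds, n_particles=20, iters=80,
                 w=0.7, c1=1.5, c2=1.5, seed=3):
    """Minimal particle swarm optimizer with a global-best topology."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(*bounds[d]) for d in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d],
                                    bounds[d][0]), bounds[d][1])
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy inverse problem: recover a fault "depth" and "offset" that reproduce
# a synthetic observation (parameters illustrative, not the paper's model)
true_depth, true_offset = 2.5, 40.0
misfit = lambda p: (p[0] - true_depth) ** 2 + ((p[1] - true_offset) / 10) ** 2
best, best_val = pso_minimize(misfit, bounds=[(0.1, 10.0), (1.0, 100.0)])
```

In a real gravity inversion, `misfit` would compare the anomaly predicted by the candidate fault geometry against the observed anomaly profile.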
Research on connection structure of aluminumbody bus using multi-objective topology optimization
NASA Astrophysics Data System (ADS)
Peng, Q.; Ni, X.; Han, F.; Rhaman, K.; Ulianov, C.; Fang, X.
2018-01-01
Failure frequently occurs at the joints connecting the aluminum alloy components of an aluminum bus body, so a new aluminum alloy connection structure was designed using a multi-objective topology optimization method. The shape of the outer contour of the connection structure was determined with topography optimization, a topology optimization model of the connection was established based on the SIMP density interpolation method, multi-objective topology optimization was carried out, and the design of the connecting piece was improved according to the optimization results. The results show that the mass of the aluminum alloy connector after topology optimization is reduced by 18%, the first six natural frequencies are improved, and the strength and stiffness performance are markedly enhanced.
Fast machine-learning online optimization of ultra-cold-atom experiments.
Wigley, P B; Everitt, P J; van den Hengel, A; Bastian, J W; Sooriyabandara, M A; McDonald, G D; Hardman, K S; Quinlivan, C D; Manju, P; Kuhn, C C N; Petersen, I R; Luiten, A N; Hope, J J; Robins, N P; Hush, M R
2016-05-16
We apply an online optimization process based on machine learning to the production of Bose-Einstein condensates (BEC). BEC is typically created with an exponential evaporation ramp that is optimal for ergodic dynamics with two-body s-wave interactions and no other loss rates, but likely sub-optimal for real experiments. Through repeated machine-controlled scientific experimentation and observations our 'learner' discovers an optimal evaporation ramp for BEC production. In contrast to previous work, our learner uses a Gaussian process to develop a statistical model of the relationship between the parameters it controls and the quality of the BEC produced. We demonstrate that the Gaussian process machine learner is able to discover a ramp that produces high quality BECs in 10 times fewer iterations than a previously used online optimization technique. Furthermore, we show the internal model developed can be used to determine which parameters are essential in BEC creation and which are unimportant, providing insight into the optimization process of the system.
Optimal design of geodesically stiffened composite cylindrical shells
NASA Technical Reports Server (NTRS)
Gendron, G.; Guerdal, Z.
1992-01-01
An optimization system based on the finite element code Computational Structural Mechanics (CSM) Testbed and the optimization program Automated Design Synthesis (ADS) is described. The optimization system can be used to obtain minimum-weight designs of composite stiffened structures. Ply thicknesses, ply orientations, and stiffener heights can be used as design variables. Buckling, displacement, and material failure constraints can be imposed on the design. The system is used to conduct a design study of geodesically stiffened shells. For comparison purposes, optimal designs of unstiffened shells and shells stiffened by rings and stringers are also obtained. Trends in the design of geodesically stiffened shells are identified. An approach to include local stress concentrations during the design optimization process is then presented. The method is based on a global/local analysis technique. It employs spline interpolation functions to determine displacements and rotations from a global model, which are used as 'boundary conditions' for the local model. The organization of the strategy in the context of an optimization process is described. The method is validated with an example.
Tsai, Kuo-Ming; Wang, He-Yi
2014-08-20
This study focuses on determining the injection molding process window for obtaining optimal imaging optical properties (astigmatism, coma, and spherical aberration) of plastic lenses. The Taguchi experimental method was first used to identify the optimized combination of parameters and the significant factors affecting the imaging optical properties of the lens. Full factorial experiments were then implemented based on the significant factors to build the response surface models. The injection molding process windows for lenses with optimized optical properties were determined from the surface models, and confirmation experiments were performed to verify their validity. The results indicated that the significant factors affecting the optical properties of the lenses are mold temperature, melt temperature, and cooling time. According to the experimental data for the significant factors, the oblique ovals for the different optical properties on the injection molding process windows based on melt temperature and cooling time can be obtained using a curve fitting approach. The confirmation experiments revealed that the average errors for astigmatism, coma, and spherical aberration are 3.44%, 5.62%, and 5.69%, respectively. These results indicate that the proposed process windows are highly reliable.
Li, Shuangyan; Li, Xialian; Zhang, Dezhi; Zhou, Lingyun
2017-01-01
This study develops an optimization model to integrate facility location and inventory control for a three-level distribution network consisting of a supplier, multiple distribution centers (DCs), and multiple retailers. The integrated model addressed in this study simultaneously determines three types of decisions: (1) facility location (optimal number, location, and size of DCs); (2) allocation (assignment of suppliers to located DCs and retailers to located DCs, and corresponding optimal transport mode choices); and (3) inventory control decisions on order quantities, reorder points, and amount of safety stock at each retailer and opened DC. A mixed-integer programming model is presented, which considers the carbon emission taxes, multiple transport modes, stochastic demand, and replenishment lead time. The goal is to minimize the total cost, which covers the fixed costs of logistics facilities, inventory, transportation, and CO2 emission tax charges. The aforementioned optimal model was solved using commercial software LINGO 11. A numerical example is provided to illustrate the applications of the proposed model. The findings show that carbon emission taxes can significantly affect the supply chain structure, inventory level, and carbon emission reduction levels. The delay rate directly affects the replenishment decision of a retailer.
Development of a Platform for Simulating and Optimizing Thermoelectric Energy Systems
NASA Astrophysics Data System (ADS)
Kreuder, John J.
Thermoelectrics are solid state devices that convert thermal energy directly into electrical energy. They have historically been used only in niche applications because of their relatively low efficiencies. With the advent of nanotechnology and improved manufacturing processes, thermoelectric materials have become less costly and more efficient. As next-generation thermoelectric materials become available, industries need to quickly and cost-effectively seek out feasible applications for thermoelectric heat recovery platforms. Determining the technical and economic feasibility of such systems requires a model that predicts performance at the system level. Current models focus on specific system applications or neglect the rest of the system altogether, addressing only module design rather than an entire energy system. To assist in screening and optimizing entire energy systems using thermoelectrics, a novel software tool, the Thermoelectric Power System Simulator (TEPSS), is developed for system-level simulation and optimization of heat recovery systems. The platform is designed for use with a generic energy system so that most types of thermoelectric heat recovery applications can be modeled. TEPSS is based on object-oriented programming in MATLAB®. A modular, shell-based architecture is developed to carry out concept generation, system simulation, and optimization. Systems are defined according to the components and interconnectivity specified by the user. An iterative solution process based on Newton's method is employed to determine the system's steady state so that an objective function representing the cost of the system can be evaluated at the operating point. An optimization algorithm from MATLAB's Optimization Toolbox uses sequential quadratic programming to minimize this objective function with respect to a set of user-specified design variables and constraints.
During this iterative process many independent system simulations are executed and the optimal operating condition of the system is determined. A comprehensive guide to using the software platform is included. TEPSS is intended to be expandable so that users can add new types of components and implement component models with an adequate degree of complexity for a required application. Special steps are taken to ensure that the system of nonlinear algebraic equations presented in the system engineering model is square and that all equations are independent. In addition, the third party program FluidProp is leveraged to allow for simulations of systems with a range of fluids. Sequential unconstrained minimization techniques are used to prevent physical variables like pressure and temperature from trending to infinity during optimization. Two case studies are performed to verify and demonstrate the simulation and optimization routines employed by TEPSS. The first is of a simple combined cycle in which the size of the heat exchanger and fuel rate are optimized. The second case study is the optimization of geometric parameters of a thermoelectric heat recovery platform in a regenerative Brayton Cycle. A basic package of components and interconnections are verified and provided as well.
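The iterative steady-state solution described above can be illustrated with a small sketch. This is not TEPSS code (TEPSS is MATLAB-based); it is a minimal Python analogue of Newton's method with a finite-difference Jacobian, applied to a made-up two-equation system standing in for a system engineering model.

```python
import numpy as np

def newton_solve(residual, x0, tol=1e-10, max_iter=50, h=1e-7):
    """Newton's method for a square nonlinear system residual(x) = 0,
    using a forward-difference Jacobian."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = residual(x)
        if np.linalg.norm(r) < tol:
            break
        n = x.size
        J = np.empty((n, n))
        for j in range(n):                 # finite-difference Jacobian column
            xp = x.copy()
            xp[j] += h
            J[:, j] = (residual(xp) - r) / h
        x = x - np.linalg.solve(J, r)      # Newton update
    return x

# Hypothetical two-equation "energy balance" standing in for a system model:
# f1 = T^2 + m - 37 = 0,  f2 = T - m = 0.
sol = newton_solve(lambda v: np.array([v[0]**2 + v[1] - 37.0, v[0] - v[1]]),
                   [1.0, 1.0])
```

In a TEPSS-like setting, `residual` would encode the component energy balances, and the converged state would then feed the cost objective handed to the SQP optimizer.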
Can Subjects be Guided to Optimal Decisions? The Use of a Real-Time Training Intervention Model
2016-06-01
execution of the task and may then be analyzed to determine if there is correlation between designated factors (scores, proportion of time in each...state with their decision performance in real time could allow training systems to be designed to tailor training to the individual decision maker...release; distribution is unlimited CAN SUBJECTS BE GUIDED TO OPTIMAL DECISIONS? THE USE OF A REAL- TIME TRAINING INTERVENTION MODEL by Travis D
Vastrad, B. M.; Neelagund, S. E.
2014-01-01
Neomycin production of Streptomyces fradiae NCIM 2418 was optimized by using response surface methodology (RSM), which is a powerful mathematical approach comprehensively applied in the optimization of solid state fermentation processes. In the first step of optimization, with a Plackett-Burman design, ammonium chloride, sodium nitrate, L-histidine, and ammonium nitrate were established to be the crucial nutritional factors affecting neomycin production significantly. In the second step, a 2⁴ full factorial central composite design and RSM were applied to determine the optimal concentrations of the significant variables. A second-order polynomial was determined by multiple regression analysis of the experimental data. The optimum values of the important nutrients for maximum neomycin production were obtained as follows: ammonium chloride 2.00%, sodium nitrate 1.50%, L-histidine 0.250%, and ammonium nitrate 0.250%, with a predicted maximum neomycin production of 20,000 g kg−1 dry coconut oil cake. Under the optimal conditions, the actual neomycin production was 19,642 g kg−1 dry coconut oil cake. The determination coefficient (R²) was 0.9232, which ensures an acceptable adequacy of the model. PMID:25009746
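The second optimization step above (central composite design plus a second-order polynomial) can be sketched in miniature. The example below is a hypothetical single-factor illustration with made-up coded levels and yields, not the paper's four-factor data: fit a quadratic response surface and locate its stationary point.

```python
import numpy as np

# Made-up coded CCD levels and illustrative yields for one factor.
levels = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])        # coded factor levels
response = np.array([12.0, 17.5, 20.0, 18.0, 11.5])   # illustrative yields

# Second-order (quadratic) RSM fit: y = b2*x^2 + b1*x + b0.
b2, b1, b0 = np.polyfit(levels, response, 2)
x_opt = -b1 / (2.0 * b2)                              # stationary (optimum) point
fitted = np.polyval([b2, b1, b0], levels)
r2 = 1.0 - ((fitted - response) ** 2).sum() / ((response - response.mean()) ** 2).sum()
```

With four factors, the same idea extends to a full second-order polynomial with interaction terms, and the optimum is found by solving the stationarity conditions of the fitted surface.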
Adaptive non-linear control for cancer therapy through a Fokker-Planck observer.
Shakeri, Ehsan; Latif-Shabgahi, Gholamreza; Esmaeili Abharian, Amir
2018-04-01
In recent years, many efforts have been made to present optimal strategies for cancer therapy through the mathematical modelling of tumour-cell population dynamics and optimal control theory. In many cases, therapy effect is included in the drift term of the stochastic Gompertz model. By fitting the model with empirical data, the parameters of therapy function are estimated. The reported research works have not presented any algorithm to determine the optimal parameters of therapy function. In this study, a logarithmic therapy function is entered in the drift term of the Gompertz model. Using the proposed control algorithm, the therapy function parameters are predicted and adaptively adjusted. To control the growth of tumour-cell population, its moments must be manipulated. This study employs the probability density function (PDF) control approach because of its ability to control all the process moments. A Fokker-Planck-based non-linear stochastic observer will be used to determine the PDF of the process. A cost function based on the difference between a predefined desired PDF and PDF of tumour-cell population is defined. Using the proposed algorithm, the therapy function parameters are adjusted in such a manner that the cost function is minimised. The existence of an optimal therapy function is also proved. The numerical results are finally given to demonstrate the effectiveness of the proposed method.
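The stochastic Gompertz dynamics underlying the approach above can be simulated directly. The sketch below is an Euler-Maruyama ensemble simulation of a common stochastic Gompertz form, dX = X(a − b ln X) dt + σX dW, with made-up parameter values; the paper's logarithmic therapy term and Fokker-Planck observer are not implemented here.

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, sigma = 1.0, 0.5, 0.1      # hypothetical growth, decay and noise rates
dt, n_steps, n_paths = 1e-3, 5000, 200
x = np.full(n_paths, 0.5)        # initial tumour-cell population (arbitrary units)

for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), n_paths)
    # Euler-Maruyama step for dX = X (a - b ln X) dt + sigma X dW
    x = x + x * (a - b * np.log(x)) * dt + sigma * x * dW
    x = np.maximum(x, 1e-12)     # keep the state positive

mean_x = x.mean()                # empirical first moment after t = 5
```

A histogram of `x` approximates the population PDF that the paper's Fokker-Planck observer estimates, and that the proposed controller shapes toward a desired density by adjusting the therapy-function parameters.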
Using genetic algorithm to solve a new multi-period stochastic optimization model
NASA Astrophysics Data System (ADS)
Zhang, Xin-Li; Zhang, Ke-Cun
2009-09-01
This paper presents a new asset allocation model based on the CVaR risk measure and transaction costs. Institutional investors manage their strategic asset mix over time to achieve favorable returns subject to various uncertainties, policy and legal constraints, and other requirements. One may use a multi-period portfolio optimization model in order to determine an optimal asset mix. Recently, an alternative stochastic programming model with simulated paths was proposed by Hibiki [N. Hibiki, A hybrid simulation/tree multi-period stochastic programming model for optimal asset allocation, in: H. Takahashi, (Ed.) The Japanese Association of Financial Econometrics and Engineering, JAFFE Journal (2001) 89-119 (in Japanese); N. Hibiki, A hybrid simulation/tree stochastic optimization model for dynamic asset allocation, in: B. Scherer (Ed.), Asset and Liability Management Tools: A Handbook for Best Practice, Risk Books, 2003, pp. 269-294], which was called a hybrid model. However, transaction costs were not considered in that paper. In this paper, we improve Hibiki's model in the following aspects: (1) the CVaR risk measure is introduced to control the wealth loss risk while maximizing the expected utility; (2) typical market imperfections such as short-sale constraints and proportional transaction costs are considered simultaneously; (3) the application of a genetic algorithm to solve the resulting model is discussed in detail. Numerical results show the suitability and feasibility of our methodology.
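The CVaR risk measure used above has a simple empirical form: the mean loss in the worst (1 − α) tail of the loss distribution. The sketch below computes it on simulated losses; the standard-normal loss model is made up for illustration and is not the paper's path-simulation setup.

```python
import numpy as np

def cvar(losses, alpha=0.95):
    """Empirical CVaR: mean loss in the worst (1 - alpha) tail."""
    losses = np.sort(np.asarray(losses, dtype=float))
    var = np.quantile(losses, alpha)        # Value-at-Risk cutoff
    return losses[losses >= var].mean()     # average of tail losses

# Simulated one-period portfolio losses (standard normal, made up);
# for N(0, 1) the 95% CVaR is about 2.06.
rng = np.random.default_rng(1)
losses = rng.normal(0.0, 1.0, 100_000)
c = cvar(losses, alpha=0.95)
```

In a multi-period model, a constraint of the form CVaR ≤ limit is evaluated over the simulated wealth paths inside the genetic algorithm's fitness function.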
NASA Technical Reports Server (NTRS)
Lan, C. Edward; Ge, Fuying
1989-01-01
Control system design for general nonlinear flight dynamic models is considered through numerical simulation. The design is accomplished through a numerical optimizer coupled with analysis of the flight dynamic equations. The general flight dynamic equations are numerically integrated and dynamic characteristics are then identified from the dynamic response. The design variables are determined iteratively by the optimizer to optimize a prescribed objective function which is related to desired dynamic characteristics. The generality of the method allows nonlinear aerodynamic effects and dynamic coupling to be considered in the design process. To demonstrate the method, nonlinear simulation models for F-5A and F-16 configurations are used to design dampers to satisfy specifications on flying qualities and control systems to prevent departure. The results indicate that the present method is simple in formulation and effective in satisfying the design objectives.
Optimizing Chemical Reactions with Deep Reinforcement Learning
2017-01-01
Deep reinforcement learning was employed to optimize chemical reactions. Our model iteratively records the results of a chemical reaction and chooses new experimental conditions to improve the reaction outcome. This model outperformed a state-of-the-art blackbox optimization algorithm by using 71% fewer steps on both simulations and real reactions. Furthermore, we introduced an efficient exploration strategy by drawing the reaction conditions from certain probability distributions, which resulted in an improvement on regret from 0.062 to 0.039 compared with a deterministic policy. Combining the efficient exploration policy with accelerated microdroplet reactions, optimal reaction conditions were determined in 30 min for the four reactions considered, and a better understanding of the factors that control microdroplet reactions was reached. Moreover, our model showed a better performance after training on reactions with similar or even dissimilar underlying mechanisms, which demonstrates its learning ability. PMID:29296675
Schubert, M; Fey, A; Ihssen, J; Civardi, C; Schwarze, F W M R; Mourad, S
2015-01-10
An artificial neural network (ANN) and genetic algorithm (GA) were applied to improve the laccase-mediated oxidation of iodide (I(-)) to elemental iodine (I2). Biosynthesis of iodine (I2) was studied with a 5-level-4-factor central composite design (CCD). The generated ANN network was mathematically evaluated by several statistical indices and revealed better results than a classical quadratic response surface (RS) model. Determination of the relative significance of model input parameters, ranking the process parameters in order of importance (pH>laccase>mediator>iodide), was performed by sensitivity analysis. ANN-GA methodology was used to optimize the input space of the neural network model to find optimal settings for the laccase-mediated synthesis of iodine. ANN-GA optimized parameters resulted in a 9.9% increase in the conversion rate. Copyright © 2014 Elsevier B.V. All rights reserved.
Optimal experimental designs for fMRI when the model matrix is uncertain.
Kao, Ming-Hung; Zhou, Lin
2017-07-15
This study concerns optimal designs for functional magnetic resonance imaging (fMRI) experiments when the model matrix of the statistical model depends on both the selected stimulus sequence (fMRI design) and the subject's uncertain feedback (e.g. answer) to each mental stimulus (e.g. question) presented to her/him. While practically important, this design issue is challenging, mainly because the information matrix cannot be fully determined at the design stage, making it difficult to evaluate the quality of the selected designs. To tackle this challenging issue, we propose an easy-to-use optimality criterion for evaluating the quality of designs, and an efficient approach for obtaining designs optimizing this criterion. Compared with a previously proposed method, our approach requires much less computing time to achieve designs with high statistical efficiencies. Copyright © 2017 Elsevier Inc. All rights reserved.
Determination of stresses in RC eccentrically compressed members using optimization methods
NASA Astrophysics Data System (ADS)
Lechman, Marek; Stachurski, Andrzej
2018-01-01
The paper presents an optimization method for determining the strains and stresses in reinforced concrete (RC) members subjected to eccentric compression. The governing equations for strains in rectangular cross-sections are derived by integrating the equilibrium equations of the cross-sections, taking account of the effect of concrete softening in the plastic range and the mean compressive strength of concrete. The stress-strain relationship for concrete in compression under short-term uniaxial loading is assumed according to Eurocode 2 for nonlinear analysis. For the reinforcing steel, a linear-elastic model with hardening in the plastic range is applied. The task consists in solving the set of derived equations subject to box constraints. The resulting problem was solved by means of the fmincon function from Matlab's Optimization Toolbox. Numerical experiments have shown the existence of many points satisfying the equations with very good accuracy. Therefore, some global optimization techniques were included: starting fmincon from many points, and clustering. The model is verified on a set of data encountered in engineering practice.
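The multistart-plus-clustering strategy described above can be sketched generically. The example below is not the paper's RC cross-section system; Himmelblau's function (four global minima in a box) stands in for a problem with many solution points, a bounded local solver plays the role of fmincon, and coarse rounding plays the role of clustering.

```python
import numpy as np
from scipy.optimize import minimize

# Himmelblau's function: four global minima (f = 0) inside [-5, 5]^2,
# mimicking a system of equations with multiple solutions.
def f(v):
    x, y = v
    return (x**2 + y - 11.0)**2 + (x + y**2 - 7.0)**2

rng = np.random.default_rng(2)
starts = rng.uniform(-5.0, 5.0, size=(30, 2))   # multistart initial points
results = [minimize(f, s, method="L-BFGS-B", bounds=[(-5.0, 5.0)] * 2)
           for s in starts]

# Keep well-converged runs and cluster them by coarse rounding
# to count the distinct solutions found.
minima = [r.x for r in results if r.fun < 1e-6]
distinct = {tuple(np.round(m, 1)) for m in minima}
```

The same pattern, with the RC equilibrium residual as the objective and box constraints on the strain variables, reproduces the paper's workflow of launching the solver from many points and grouping the converged solutions.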
An Analytical Approach to Salary Evaluation for Educational Personnel
ERIC Educational Resources Information Center
Bruno, James Edward
1969-01-01
In this study a linear programming model for determining an 'optimal' salary schedule was derived, then applied to an educational salary structure. The validity of the model and the effectiveness of the approach were established. (Author)
NASA Astrophysics Data System (ADS)
Kozioł, Michał
2017-10-01
The article presents a parametric model describing the registered spectra of optical radiation emitted by electrical discharges generated in three systems: needle-needle, needle-plate, and a system for surface discharges. Generation of electrical discharges and registration of the emitted radiation were carried out in three different electrical insulating oils: new, in-service (used), and in-service with air bubbles. A high-resolution spectrophotometer was used for registration of the optical spectra in the ultraviolet, visible and near-infrared ranges. The proposed mathematical model was developed in a regression procedure using a Gauss-sigmoid type function. The dependent variable was the intensity of the recorded optical signals. In order to estimate the optimal parameters of the model, an evolutionary algorithm was used. The optimization procedure was performed in the Matlab environment. The determination coefficient R² was applied to assess the quality of fit of the theoretical regression function to the empirical data.
NASA Astrophysics Data System (ADS)
Ren, Tao; Modest, Michael F.; Fateev, Alexander; Clausen, Sønnik
2015-01-01
In this study, we present an inverse calculation model based on the Levenberg-Marquardt optimization method to reconstruct temperature and species concentration from measured line-of-sight spectral transmissivity data for homogeneous gaseous media. The high temperature gas property database HITEMP 2010 (Rothman et al. (2010) [1]), which contains line-by-line (LBL) information for several combustion gas species, such as CO2 and H2O, was used to predict gas spectral transmissivities. The model was validated by retrieving temperatures and species concentrations from experimental CO2 and H2O transmissivity measurements. Optimal wavenumber ranges for CO2 and H2O transmissivity measured across a wide range of temperatures and concentrations were determined according to the performance of inverse calculations. Results indicate that the inverse radiation model shows good feasibility for measurements of temperature and gas concentration.
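The Levenberg-Marquardt inversion above can be illustrated on synthetic data. The sketch below does not use HITEMP; it assumes a made-up absorption model κ(ν, T) purely for illustration, generates noise-free "measured" transmissivities, and then retrieves temperature and a path-concentration parameter by nonlinear least squares.

```python
import numpy as np
from scipy.optimize import least_squares

# Wavenumber grid (arbitrary units) and a made-up absorption coefficient model:
# tau(nu) = exp(-x * kappa(nu, T)),  kappa = 0.8 * exp(-0.3 * nu / T).
nu = np.linspace(1.0, 10.0, 50)

def transmissivity(T, x):
    kappa = 0.8 * np.exp(-0.3 * nu / T)
    return np.exp(-x * kappa)

true_T, true_x = 5.0, 2.0
measured = transmissivity(true_T, true_x)   # synthetic line-of-sight data

def residuals(p):
    return transmissivity(p[0], p[1]) - measured

# Levenberg-Marquardt retrieval from a deliberately poor initial guess.
fit = least_squares(residuals, x0=[2.0, 1.0], method="lm")
T_fit, x_fit = fit.x
```

In the paper's setting, `transmissivity` would be replaced by an LBL forward model built from HITEMP 2010, and the unknowns would be temperature and species mole fractions.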
NASA Astrophysics Data System (ADS)
Aksoy, A.; Lee, J. H.; Kitanidis, P. K.
2016-12-01
Heterogeneity in hydraulic conductivity (K) impacts the transport and fate of contaminants in the subsurface as well as the design and operation of managed aquifer recharge (MAR) systems. Recently, improvements in computational resources and the availability of big data through electrical resistivity tomography (ERT) and remote sensing have provided opportunities to better characterize the subsurface. Yet, there is a need to improve prediction and evaluation methods in order to extract information from field measurements for better field characterization. In this study, genetic algorithm optimization, which has been widely used in optimal aquifer remediation designs, was used to determine the spatial distribution of K. A hypothetical 2 km by 2 km aquifer was considered. A genetic algorithm library, PGAPack, was linked with a fast Fourier transform based random field generator as well as a groundwater flow and contaminant transport simulation model (BIO2D-KE). The objective of the optimization model was to minimize the total squared error between measured and predicted field values. It was assumed that measured K values were available through ERT. The performance of the genetic algorithm in predicting the distribution of K was tested for different cases. In the first, it was assumed that observed K values were evaluated using the random field generator only as the forward model. In the second case, in addition to the K values obtained through ERT, measured head values were incorporated into the evaluation, with BIO2D-KE and the random field generator used as the forward models. Lastly, tracer concentrations were used as additional information in the optimization model. Initial results indicated enhanced performance when the random field generator and BIO2D-KE were used in combination in predicting the spatial distribution of K.
NASA Astrophysics Data System (ADS)
Salmin, Vadim V.
2017-01-01
Flight mechanics with low thrust is a new chapter of space flight mechanics, covering the full set of problems of trajectory optimization, motion control laws, and spacecraft design parameters. Tasks associated with taking additional factors into account in mathematical models of spacecraft motion become increasingly important, as do additional restrictions on the possibilities of thrust vector control. The complication of the mathematical models of controlled motion leads to difficulties in solving optimization problems. The author proposes methods of finding approximately optimal controls and evaluating their optimality based on analytical solutions. These methods are based on the principle of extending the class of admissible states and controls and on sufficient conditions for the absolute minimum. The developed estimation procedures make it possible to determine how close the found solutions are to the optimum and to indicate ways to improve them. The author describes estimation procedures for approximately optimal control laws in space flight mechanics problems, in particular for optimizing low-thrust flights between non-coplanar circular orbits, optimizing the control angle and trajectory of the spacecraft during interorbital flights, and optimizing low-thrust flights between arbitrary elliptical Earth satellite orbits.
Optimization for Service Routes of Pallet Service Center Based on the Pallet Pool Mode
He, Shiwei; Song, Rui
2016-01-01
Service route optimization (SRO) for a pallet service center should first meet customers' demand and then, through reasonable route organization, minimize total vehicle travel distance. The route optimization of a pallet service center is similar to the vehicle routing problem (VRP) and the Chinese postman problem (CPP), but it has its own characteristics. Based on relevant research results, the conditions for determining the number of vehicles, one-way routes, loading constraints, and time windows are fully considered, and a chance-constrained programming model with stochastic constraints is constructed, taking the shortest path of all vehicles for a delivery (recycling) operation as the objective. Given the characteristics of the model, a hybrid intelligent algorithm combining stochastic simulation, a neural network, and an immune clonal algorithm is designed to solve it. Finally, the validity and rationality of the optimization model and algorithm are verified by a case study. PMID:27528865
Error propagation of partial least squares for parameters optimization in NIR modeling.
Du, Chenzhao; Dai, Shengyun; Qiao, Yanjiang; Wu, Zhisheng
2018-03-05
A novel methodology is proposed to determine the error propagation of partial least squares (PLS) for parameter optimization in near-infrared (NIR) modeling. The parameters include spectral pretreatment, latent variables and variable selection. In this paper, an open source dataset (corn) and a complicated dataset (Gardenia) were used to establish PLS models under different modeling parameters. The error propagation of modeling parameters for water content in corn and geniposide content in Gardenia was presented in terms of both type I and type II errors. For example, when the variable importance in projection (VIP), interval partial least squares (iPLS) and backward interval partial least squares (BiPLS) variable selection algorithms were used for geniposide in Gardenia, compared with synergy interval partial least squares (SiPLS), the error weight varied from 5% to 65%, 55% and 15%, respectively. The results demonstrated how, and to what extent, different modeling parameters affect error propagation of PLS for parameter optimization in NIR modeling. The larger the error weight, the worse the model. Finally, our trials completed a powerful process for developing robust PLS models for corn and Gardenia under the optimal modeling parameters. Furthermore, this could provide significant guidance for the selection of modeling parameters in other multivariate calibration models. Copyright © 2017. Published by Elsevier B.V.
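The PLS modeling at the core of the study above can be sketched with a minimal NIPALS PLS1 implementation. The calibration data below are made up (random "spectra" with a response driven by three variables), not the corn or Gardenia datasets; the coefficient formula B = W(PᵀW)⁻¹q is the standard PLS1 form.

```python
import numpy as np

def pls1_fit(X, y, n_comp):
    """Minimal NIPALS PLS1 (single response): returns regression
    coefficients for centered data plus the centering offsets."""
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xd, yd = X - x_mean, y - y_mean
    W, P, q = [], [], []
    for _ in range(n_comp):
        w = Xd.T @ yd
        w = w / np.linalg.norm(w)       # weight vector
        t = Xd @ w                      # score vector
        tt = t @ t
        p = Xd.T @ t / tt               # X loading
        qk = (yd @ t) / tt              # y loading
        Xd = Xd - np.outer(t, p)        # deflate X
        yd = yd - qk * t                # deflate y
        W.append(w); P.append(p); q.append(qk)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    B = W @ np.linalg.solve(P.T @ W, q)  # standard PLS1 coefficients
    return B, x_mean, y_mean

# Made-up calibration set: 60 samples, 10 variables, response driven
# by the first three variables plus small noise.
rng = np.random.default_rng(3)
X = rng.normal(size=(60, 10))
true_b = np.zeros(10)
true_b[:3] = [1.5, -2.0, 0.7]
y = X @ true_b + 0.01 * rng.normal(size=60)

B, xm, ym = pls1_fit(X, y, n_comp=3)
pred = (X - xm) @ B + ym
```

Refitting with different numbers of latent variables, pretreatments, or variable subsets, and comparing the resulting prediction errors, is the kind of parameter sweep whose error propagation the paper quantifies.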
Honing process optimization algorithms
NASA Astrophysics Data System (ADS)
Kadyrov, Ramil R.; Charikov, Pavel N.; Pryanichnikova, Valeria V.
2018-03-01
This article considers the relevance of honing processes for creating high-quality mechanical engineering products. The features of the honing process are described, and key concepts are emphasized: the optimization task for honing operations, the optimal structure of honing working cycles, stepped and stepless honing cycles, and the simulation of processing and its purpose. It is noted that the reliability of the mathematical model determines the quality of honing process control. An algorithm for continuous control of the honing process is proposed. The process model reliably describes the machining of a workpiece over a sufficiently wide region and can be used to operate the CC743 CNC machine.
NASA Astrophysics Data System (ADS)
Lohani, S.; Heilman, P.; deSteiguer, J. E.; Guertin, D. P.; Wissler, C.; McClaran, M. P.
2014-12-01
Quantifying ecosystem services is a crucial topic for land management decision making. However, market prices are usually not able to capture all the ecosystem services and disservices. Ecosystem services from rangelands, that cover 70% of the world's land area, are even less well-understood since knowledge of rangelands is limited. This study generated a management framework for rangelands that uses remote sensing to generate state and transition models (STMs) for a large area and a linear programming (LP) model that uses ecosystem services to evaluate natural and/or management induced transitions as described in the STM. The LP optimization model determines the best management plan for a plot of semi-arid land in the Empire Ranch in southeastern Arizona. The model allocated land among management activities (do nothing, grazing, fire, and brush removal) to optimize net benefits and determined the impact of monetizing environmental services and disservices on net benefits, acreage allocation and production output. The ecosystem services under study were forage production (AUM/ac/yr), sediment (lbs/ac/yr), water runoff (inches/yr), soil loss (lbs/ac/yr) and recreation (thousands of number of visitors/ac/yr). The optimization model was run for three different scenarios - private rancher, public rancher including environmental services and excluding disservices, and public rancher including both services and disservices. The net benefit was the highest for the public rancher excluding the disservices. A result from the study is a constrained optimization model that incorporates ecosystem services to analyze investments on conservation and management activities. Rangeland managers can use this model to understand and explain, not prescribe, the tradeoffs of management investments.
Arefi-Oskoui, Samira; Khataee, Alireza; Vatanpour, Vahid
2017-07-10
In this research, MgAl-CO₃²⁻ nanolayered double hydroxide (NLDH) was synthesized through a facile coprecipitation method, followed by a hydrothermal treatment. The prepared NLDHs were used as a hydrophilic nanofiller for improving the performance of PVDF-based ultrafiltration membranes. The main objective of this research was to obtain the optimized formula of the NLDH/PVDF nanocomposite membrane presenting the best performance, using computational techniques as a cost-effective method. To this end, an artificial neural network (ANN) model was developed for modeling and expressing the relationship between the performance of the nanocomposite membrane (pure water flux, protein flux and flux recovery ratio) and the affecting parameters, including the NLDH, PVP 29000 and polymer concentrations. The effects of the mentioned parameters and the interactions between them were investigated using the contour plots predicted with the developed model. Scanning electron microscopy (SEM), atomic force microscopy (AFM), and water contact angle techniques were applied to characterize the nanocomposite membranes and to interpret the predictions of the ANN model. The developed ANN model was introduced to a genetic algorithm (GA) as a bioinspired optimizer to determine the optimum values of input parameters leading to high pure water flux, protein flux, and flux recovery ratio. The optimum values for the NLDH, PVP 29000 and PVDF concentrations were determined to be 0.54, 1, and 18 wt %, respectively. The performance of the nanocomposite membrane prepared using the optimum values proposed by the GA was investigated experimentally, and the results were in good agreement with the values predicted by the ANN model, with error lower than 6%. This good agreement confirmed that the nanocomposite membrane performance could be successfully modeled and optimized by the ANN-GA system.
Sluiter, Amie; Sluiter, Justin; Wolfrum, Ed; ...
2016-05-20
Accurate and precise chemical characterization of biomass feedstocks and process intermediates is a requirement for successful technical and economic evaluation of biofuel conversion technologies. The uncertainty in primary measurements of the fraction insoluble solid (FIS) content of dilute acid pretreated corn stover slurry is the major contributor to uncertainty in yield calculations for enzymatic hydrolysis of cellulose to glucose. This uncertainty is propagated through process models and impacts modeled fuel costs. The challenge in measuring FIS is obtaining an accurate measurement of insoluble matter in the pretreated materials, while appropriately accounting for all biomass derived components. Three methods were tested to improve this measurement. One used physical separation of liquid and solid phases, and two utilized direct determination of dry matter content in two fractions. We offer a comparison of drying methods. Lastly, our results show utilizing a microwave dryer to directly determine dry matter content is the optimal method for determining FIS, based on the low time requirements and the method optimization done using model slurries.
Optimizing snake locomotion on an inclined plane
NASA Astrophysics Data System (ADS)
Wang, Xiaolin; Osborne, Matthew T.; Alben, Silas
2014-01-01
We develop a model to study the locomotion of snakes on inclined planes. We determine numerically which snake motions are optimal for two retrograde traveling-wave body shapes, triangular and sinusoidal waves, across a wide range of frictional parameters and incline angles. In the regime of large transverse friction coefficients, we find power-law scalings for the optimal wave amplitudes and corresponding costs of locomotion. We give an asymptotic analysis to show that the optimal snake motions are traveling waves with amplitudes given by the same scaling laws found in the numerics.
NASA Astrophysics Data System (ADS)
Tang, F. R.; Zhang, Rong; Li, Huichao; Li, C. N.; Liu, Wei; Bai, Long
2018-05-01
The trade-off criterion is used to systematically investigate the performance features of two chemical engine models (the low-dissipation model and the endoreversible model). The optimal efficiencies, the dissipation ratios, and the corresponding ratios of the dissipation rates for the two models are analytically determined. Furthermore, the performance properties of the two kinds of chemical engines are precisely compared and analyzed, and some interesting physics is revealed. Our investigations show that a certain universal equivalence between the two models holds within the framework of linear irreversible thermodynamics, and that their differences are rooted in their different physical contexts. Our results can contribute to a precise understanding of the general features of chemical engines.
Process Parameter Optimization for Wobbling Laser Spot Welding of Ti6Al4V Alloy
NASA Astrophysics Data System (ADS)
Vakili-Farahani, F.; Lungershausen, J.; Wasmer, K.
Laser beam welding (LBW) coupled with the "wobble effect" (fast oscillation of the laser beam) is very promising for the high-precision micro-joining industry. For this process, as in conventional LBW, the laser welding process parameters play a very significant role in determining the quality of a weld joint. Consequently, four process parameters (laser power, wobble frequency, number of rotations within a single laser pulse, and focus position) and five responses (penetration depth, weld width, fusion zone area, heat-affected zone (HAZ) area, and hardness) were investigated for spot welding of Ti6Al4V alloy (grade 5) using a design of experiments (DoE) approach. This paper presents experimental results showing the effects of varying the most important process parameters on the spot weld quality of Ti6Al4V alloy. Semi-empirical mathematical models were developed to correlate the laser welding parameters to each of the measured weld responses. The adequacy of the models was then examined by various methods such as ANOVA. These models not only allow a better understanding of the wobble laser welding process and prediction of the process performance, but also determine optimal process parameters. Therefore, the optimal combination of process parameters was determined according to certain quality criteria.
Optimization of a hardware implementation for pulse coupled neural networks for image applications
NASA Astrophysics Data System (ADS)
Gimeno Sarciada, Jesús; Lamela Rivera, Horacio; Warde, Cardinal
2010-04-01
Pulse-coupled neural networks (PCNNs) are a very useful tool for image processing and visual applications, since they have the advantage of being invariant to image changes such as rotation, scaling, or certain distortions. Among other characteristics, a PCNN changes a given image input into a temporal representation which can easily be analyzed later for pattern recognition. The structure of a PCNN, though, makes it necessary to determine all of its parameters very carefully in order for it to function optimally, so that the responses to the kinds of inputs to which it will be subjected are clearly discriminated, allowing for easy and fast post-processing yielding useful results. This tweaking of the system is a taxing process. In this paper we analyze and compare two methods for modeling PCNNs. A purely mathematical model is programmed, and a similar circuit model is also designed. Both are then used to determine the optimal values of the several parameters of a PCNN: gain, threshold, and the time constants for feed-in, threshold and linking, leading to an optimal design for image recognition. The results are compared for usefulness, accuracy and speed, as well as performance and time requirements for fast and easy design, thus providing a tool for future ease of management of a PCNN for different tasks.
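The parameters named above (gain, threshold, time constants for feeding, linking and threshold) appear directly in the common discrete PCNN update equations. The sketch below is a minimal numerical PCNN with arbitrary illustrative parameter values, not the paper's optimized design; it returns the firing-count time series that serves as the temporal signature of an image.

```python
import numpy as np
from scipy.signal import convolve2d

def pcnn_signature(img, steps=15, beta=0.3, aF=0.3, aL=0.3, aT=0.3,
                   VF=0.1, VL=0.2, VT=15.0):
    """Minimal discrete PCNN; returns the number of firing neurons per
    iteration (the temporal signature used for pattern recognition)."""
    S = img.astype(float)
    K = np.array([[0.5, 1.0, 0.5], [1.0, 0.0, 1.0], [0.5, 1.0, 0.5]])
    F = np.zeros_like(S); L = np.zeros_like(S)
    Y = np.zeros_like(S); T = np.zeros_like(S)
    signature = []
    for _ in range(steps):
        F = np.exp(-aF) * F + VF * convolve2d(Y, K, mode="same") + S  # feeding
        L = np.exp(-aL) * L + VL * convolve2d(Y, K, mode="same")      # linking
        U = F * (1.0 + beta * L)                                      # internal activity
        Y = (U > T).astype(float)                                     # pulse output
        T = np.exp(-aT) * T + VT * Y                                  # dynamic threshold
        signature.append(Y.sum())
    return np.array(signature)

img = np.zeros((16, 16))
img[4:12, 4:12] = 1.0            # bright square on a dark field
sig = pcnn_signature(img)
```

Tuning the decay constants and the linking strength β changes when and how synchronously groups of neurons refire, which is exactly the parameter-selection problem the paper attacks with its mathematical and circuit models.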
Optimized dispatch in a first-principles concentrating solar power production model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wagner, Michael J.; Newman, Alexandra M.; Hamilton, William T.
Concentrating solar power towers, which include a steam-Rankine cycle with molten salt thermal energy storage, are an emerging technology whose maximum effectiveness relies on an optimal operational and dispatch policy. Given parameters such as start-up and shut-down penalties, expected electricity price profiles, solar availability, and system interoperability requirements, this paper seeks a profit-maximizing solution that determines start-up and shut-down times for the power cycle and solar receiver, and the times at which to dispatch stored and instantaneous quantities of energy over a 48-h horizon at hourly fidelity. The mixed-integer linear program (MIP) is subject to constraints including: (i) minimum and maximum rates of start-up and shut-down, (ii) energy balance, including the energetic state of the system as a whole and of its components, (iii) logical rules governing the operational modes of the power cycle and solar receiver, and (iv) operational consistency between time periods. The novelty in this work lies in the successful integration of a dispatch optimization model into a detailed techno-economic analysis tool, specifically, the National Renewable Energy Laboratory's System Advisor Model (SAM). The MIP produces an optimized operating strategy, historically determined via a heuristic. Using several market electricity pricing profiles, we present comparative results for a system with and without dispatch optimization, indicating that dispatch optimization can improve plant profitability by 5-20% and thereby alter the economics of concentrating solar power technology. While we examine a molten salt power tower system, this analysis is equally applicable to the more mature concentrating solar parabolic trough system with thermal energy storage.
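The dispatch trade-off described above can be illustrated in miniature. The sketch below brute-forces on/off schedules for a hypothetical 6-hour horizon with a start-up penalty, a storage energy balance, and invented prices (none of these numbers come from SAM or the paper); a real implementation would hand the same structure to a MIP solver over the full 48-h horizon.

```python
from itertools import product

# Hypothetical 6-hour horizon: hourly prices, solar energy collected into storage,
# storage capacity, cycle draw per on-hour, and a start-up penalty (all illustrative).
prices = [20, 25, 90, 100, 95, 30]   # $/MWh
solar  = [40, 50, 30, 10, 0, 0]      # MWht charged into storage each hour
CAP, DRAW, START_COST = 100.0, 35.0, 500.0

def evaluate(schedule):
    """Profit of one on/off schedule; None if infeasible (storage runs empty)."""
    tes, profit, prev_on = 0.0, 0.0, False
    for t, on in enumerate(schedule):
        tes = min(CAP, tes + solar[t])       # receiver charges storage; spill above CAP
        if on:
            if tes < DRAW:
                return None                  # not enough stored energy to run the cycle
            tes -= DRAW
            profit += DRAW * prices[t]
            if not prev_on:
                profit -= START_COST         # penalty on each cycle start-up
        prev_on = on
    return profit

best = max((s for s in product([0, 1], repeat=6) if evaluate(s) is not None),
           key=evaluate)
print(best, evaluate(best))
```

Even in this toy, the optimizer concentrates generation in the high-price hours and avoids a second start-up, which is exactly the behavior a heuristic dispatch rule can miss.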
Optimization of pressure gauge locations for water distribution systems using entropy theory.
Yoo, Do Guen; Chang, Dong Eil; Jun, Hwandon; Kim, Joong Hoon
2012-12-01
It is essential to select the optimal pressure gauge location for effective management and maintenance of water distribution systems. This study proposes an objective and quantified standard for selecting the optimal pressure gauge location by defining, using entropy theory, the pressure change at other nodes that results from a demand change at a specific node. Two demand-change cases are considered: one in which demand at all nodes is at peak load (applying a peak factor), and one in which demand at each node varies according to a normal distribution whose mean is the base demand. The actual pressure change pattern is determined by using the emitter function of EPANET to reflect the pressure that changes in practice at each node. The optimal pressure gauge location is determined by prioritizing the node that processes the largest amount of information it gives to (giving entropy) and receives from (receiving entropy) the whole system according to the entropy standard. The suggested model is applied to one virtual and one real pipe network, and the optimal pressure gauge location combination is calculated by implementing a sensitivity analysis based on the study results. These analysis results support the following two conclusions. Firstly, the installation priority of pressure gauges in water distribution networks can be determined with a more objective standard through entropy theory. Secondly, the model can be used as an efficient decision-making guide for gauge installation in water distribution systems.
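The entropy ranking can be sketched as follows, assuming a small hypothetical matrix of pressure changes (the values are invented, not from the paper): giving entropy is computed from a node's row, receiving entropy from its column, and the gauge goes to the node with the largest total.

```python
import math

# Hypothetical |dP| matrix: entry [i][j] is the pressure change observed at
# node j when demand changes at node i (illustrative, symmetric in this toy case).
dP = [
    [0.9, 0.4, 0.3, 0.1],
    [0.4, 0.8, 0.5, 0.2],
    [0.3, 0.5, 0.7, 0.6],
    [0.1, 0.2, 0.6, 0.5],
]

def entropy(values):
    """Shannon entropy (bits) of the distribution obtained by normalizing values."""
    total = sum(values)
    return -sum(v / total * math.log2(v / total) for v in values if v > 0)

n = len(dP)
giving    = [entropy(dP[i]) for i in range(n)]                           # info node i sends out
receiving = [entropy([dP[i][j] for i in range(n)]) for j in range(n)]    # info node j takes in
total_info = [g + r for g, r in zip(giving, receiving)]

best = max(range(n), key=lambda j: total_info[j])
print("gauge at node", best, "total entropy =", round(total_info[best], 3))
```

The node whose pressure interactions are spread most evenly across the network carries the most information, so it is ranked first for gauge installation.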
Optimizing the Distribution of Leg Muscles for Vertical Jumping
Wong, Jeremy D.; Bobbert, Maarten F.; van Soest, Arthur J.; Gribble, Paul L.; Kistemaker, Dinant A.
2016-01-01
A goal of biomechanics and motor control is to understand the design of the human musculoskeletal system. Here we investigated human functional morphology by making predictions about the muscle volume distribution that is optimal for a specific motor task. We examined a well-studied and relatively simple human movement, vertical jumping. We investigated how high a human could jump if muscle volume were optimized for jumping, and determined how the optimal parameters improve performance. We used a four-link inverted pendulum model of human vertical jumping actuated by Hill-type muscles, which well approximates skilled human performance. We optimized muscle volume by allowing the cross-sectional area and muscle fiber optimum length to be changed for each muscle, while maintaining constant total muscle volume. We observed, perhaps surprisingly, that the reference model, based on human anthropometric data, is relatively good for vertical jumping; it achieves 90% of the jump height predicted by a model with muscles designed specifically for jumping. Alteration of cross-sectional areas (which determine the maximum force deliverable by the muscles) constitutes the majority of the improvement in jump height. The optimal distribution results in large vastus, gastrocnemius and hamstrings muscles that deliver more work, while producing a kinematic pattern essentially identical to that of the reference model. Work output is increased by removing muscle from the rectus femoris, which cannot do work on the skeleton given its moment arm at the hip and the joint excursions during push-off. The gluteus composes a disproportionate amount of muscle volume, and jump height is improved by moving it to other muscles. This approach represents a way to test hypotheses about optimal human functional morphology.
Future studies may extend this approach to address other morphological questions in ethological tasks such as locomotion, and feature other sets of parameters such as properties of the skeletal segments. PMID:26919645
Error assessment of biogeochemical models by lower bound methods (NOMMA-1.0)
NASA Astrophysics Data System (ADS)
Sauerland, Volkmar; Löptien, Ulrike; Leonhard, Claudine; Oschlies, Andreas; Srivastav, Anand
2018-03-01
Biogeochemical models, capturing the major feedbacks of the pelagic ecosystem of the world ocean, are today often embedded into Earth system models which are increasingly used for decision making regarding climate policies. These models contain poorly constrained parameters (e.g., maximum phytoplankton growth rate), which are typically adjusted until the model shows reasonable behavior. Systematic approaches determine these parameters by minimizing the misfit between the model and observational data. In most common model approaches, however, the underlying functions mimicking the biogeochemical processes are nonlinear and non-convex. Thus, systematic optimization algorithms are likely to get trapped in local minima and might lead to non-optimal results. To judge the quality of an obtained parameter estimate, we propose determining a preferably large lower bound for the global optimum that is relatively easy to obtain and that helps to assess the quality of an optimum generated by an optimization algorithm. Due to the unavoidable noise component in all observations, such a lower bound is typically larger than zero. We suggest deriving such lower bounds based on typical properties of biogeochemical models (e.g., a limited number of extremes and a bounded time derivative). We illustrate the applicability of the method with two real-world examples. The first example uses real-world observations of the Baltic Sea in a box model setup. The second example considers a three-dimensional coupled ocean circulation model in combination with satellite chlorophyll a.
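The point that noise keeps the attainable misfit bounded away from zero can be illustrated with a toy sketch (the signal, the noise level sigma, and the sample size are all illustrative assumptions, not from the paper):

```python
import math
import random

random.seed(3)

# Even a structurally perfect model cannot fit the noise in the observations:
# the expected RMS misfit of the *true* model equals the noise level sigma,
# so sigma provides a simple nonzero lower bound on any optimizer's misfit.
sigma, n = 0.3, 500
truth = [math.sin(0.01 * k) for k in range(n)]          # "true" model output
obs = [t + random.gauss(0.0, sigma) for t in truth]     # noisy observations
noise_floor = math.sqrt(sum((o - t) ** 2 for o, t in zip(obs, truth)) / n)
print("RMS misfit of the true model:", round(noise_floor, 3))
```

Any parameter search whose misfit approaches this floor is effectively optimal; the paper's contribution is deriving tighter, model-specific bounds of this kind.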
DeSmitt, Holly J; Domire, Zachary J
2016-12-01
Biomechanical models are sensitive to the choice of model parameters. Therefore, determination of accurate subject specific model parameters is important. One approach to generate these parameters is to optimize the values such that the model output will match experimentally measured strength curves. This approach is attractive as it is inexpensive and should provide an excellent match to experimentally measured strength. However, given the problem of muscle redundancy, it is not clear that this approach generates accurate individual muscle forces. The purpose of this investigation is to evaluate this approach using simulated data to enable a direct comparison. It is hypothesized that the optimization approach will be able to recreate accurate muscle model parameters when information from measurable parameters is given. A model of isometric knee extension was developed to simulate a strength curve across a range of knee angles. In order to realistically recreate experimentally measured strength, random noise was added to the modeled strength. Parameters were solved for using a genetic search algorithm. When noise was added to the measurements the strength curve was reasonably recreated. However, the individual muscle model parameters and force curves were far less accurate. Based upon this examination, it is clear that very different sets of model parameters can recreate similar strength curves. Therefore, experimental variation in strength measurements has a significant influence on the results. Given the difficulty in accurately recreating individual muscle parameters, it may be more appropriate to perform simulations with lumped actuators representing similar muscles.
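The muscle-redundancy point (that quite different parameter sets reproduce nearly the same strength curve) can be demonstrated with a hypothetical two-muscle torque model; the gains, moment arms, and force-angle shapes below are invented for illustration and are not the study's model.

```python
import math

# Toy two-muscle knee model: total extension torque at angle q is
#   T(q) = F1 * r1 * g(q; c1) + F2 * r2 * g(q; c2)
# with overlapping bell-shaped force-angle curves, so the two muscles are redundant.
def torque(q, F1, F2):
    g = lambda q, c: math.exp(-((q - c) ** 2) / 0.5)   # bell-shaped force-angle curve
    return F1 * 0.04 * g(q, 1.2) + F2 * 0.05 * g(q, 1.0)

angles = [0.6 + 0.1 * k for k in range(10)]            # knee angles, rad
setA = (1000.0, 800.0)                                 # one candidate parameter set
setB = (1400.0, 470.0)                                 # a very different set, similar curve

rms = math.sqrt(sum((torque(q, *setA) - torque(q, *setB)) ** 2 for q in angles) / len(angles))
peak = max(torque(q, *setA) for q in angles)
print("RMS curve difference:", round(rms, 2), "Nm; peak torque:", round(peak, 1), "Nm")
```

The two parameter sets differ by hundreds of newtons per muscle yet their strength curves differ by only a few percent of peak torque, which is why fitting to a strength curve alone cannot pin down individual muscle forces.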
NASA Astrophysics Data System (ADS)
Abdeh-Kolahchi, A.; Satish, M.; Datta, B.
2004-05-01
A state-of-the-art groundwater monitoring network design method is introduced. The method combines groundwater flow and transport results with Genetic Algorithm (GA) optimization to identify optimal monitoring well locations. Optimization theory uses different techniques to find a set of parameter values that minimize or maximize objective functions. The suggested optimal groundwater monitoring network design is based on the objective of maximizing the probability of tracking a transient contamination plume by determining sequential monitoring locations. The MODFLOW and MT3DMS models, included as separate modules within the Groundwater Modeling System (GMS), are used to develop a three-dimensional groundwater flow and contaminant transport simulation. The flow and transport simulation results are introduced as input to the optimization model, which uses a GA to identify the optimal monitoring network design from several candidate monitoring locations. The monitoring network design model uses a GA with binary variables representing potential monitoring locations. As the number of decision variables and constraints increases, the nonlinearity of the objective function also increases, which makes it difficult to obtain optimal solutions. The genetic algorithm is an evolutionary global optimization technique capable of finding the optimal solution for many complex problems. In this study, the ability of the GA approach to find the global optimal solution to a groundwater monitoring network design problem involving 18.4 × 10^18 feasible solutions will be discussed. However, to ensure the efficiency of the solution process and the global optimality of the solution obtained using a GA, appropriate GA parameter values must be specified.
The sensitivity analysis of genetic algorithm parameters such as the random number seed, crossover probability, mutation probability, and elitism is discussed for the solution of the monitoring network design problem.
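A minimal binary GA of the kind described, with truncation selection, one-point crossover, bit-flip mutation, and elitism, might look as follows on a stand-in placement objective (the weights, budget, and penalty are illustrative assumptions, not the study's objective function):

```python
import random

random.seed(0)

# Stand-in placement objective: choose monitoring locations (bits) to maximize a
# detection score under a well budget. Weights and budget are illustrative.
WEIGHTS = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]
BUDGET = 4

def fitness(bits):
    score = sum(w for w, b in zip(WEIGHTS, bits) if b)
    return score - 100 * max(0, sum(bits) - BUDGET)   # heavy penalty above the budget

def ga(pop_size=30, gens=60, p_cross=0.8, p_mut=0.02, elite=2):
    pop = [[random.randint(0, 1) for _ in WEIGHTS] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        nxt = [p[:] for p in pop[:elite]]                 # elitism keeps the best intact
        while len(nxt) < pop_size:
            a, b = random.sample(pop[:pop_size // 2], 2)  # truncation selection
            if random.random() < p_cross:                 # one-point crossover
                cut = random.randrange(1, len(a))
                a = a[:cut] + b[cut:]
            nxt.append([1 - g if random.random() < p_mut else g for g in a])
        pop = nxt
    return max(pop, key=fitness)

best = ga()
print(best, fitness(best))
```

Changing the seed, crossover probability, mutation probability, or elite count and re-running is exactly the kind of parameter sensitivity analysis the abstract refers to.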
Pricing and inventory policies for Hi-tech products under replacement warranty
NASA Astrophysics Data System (ADS)
Tsao, Yu-Chung; Teng, Wei-Guang; Chen, Ruey-Shii; Chou, Wang-Ying
2014-06-01
Companies, especially in the Hi-tech (high-technology) industry (such as computer, communication and consumer electronics products), often provide a replacement warranty period for purchased items. In practice, simultaneously determining the price and inventory decisions under a warranty policy is an important issue. The objective of this paper is to develop a joint pricing and inventory model for Hi-tech products under a replacement warranty policy. In the first model, we consider a Hi-tech product feature in which the selling price declines over time. We determine the optimal inventory level for each period and the retail price for the first period while maximising total profit. In the second model, we further determine the optimal retail price and inventory level for each period in a dynamic demand market. This study develops solution approaches to solve the problems described above. A numerical analysis discusses the influence of the system parameters on the company's decisions and behaviour. The results of this study could serve as a reference for business managers or administrators.
Fire modeling in a nonventilated corridor
NASA Astrophysics Data System (ADS)
Lulea, Marius Dorin; Iordache, Vlad; Năstase, Ilinca
2018-02-01
The main objective of this study was to determine the effect of fire in a nonventilated corridor. A real-scale model of a corridor was built in the Fire Dynamics Simulator (FDS) in order to determine the evolution of indoor temperatures, visibility and oxygen levels during a fire. The activation time of a sprinkler was also determined. The use of sprinklers in buildings has become a necessity and a requirement imposed by technical norms. The provision of this type of installation has become a common feature in buildings with a high fire risk, with two main effects: fire extinction and protection of structural and partition elements from high temperatures.
Ahn, Yongjun; Yeo, Hwasoo
2015-01-01
The charging infrastructure location problem is becoming more significant due to the extensive adoption of electric vehicles. Efficient charging station planning can solve deeply rooted problems, such as driving-range anxiety and the stagnation of new electric vehicle consumers. In the initial stage of introducing electric vehicles, the allocation of charging stations is difficult to determine due to the uncertainty of candidate sites and unidentified charging demands, which are determined by diverse variables. This paper introduces the Estimating the Required Density of EV Charging (ERDEC) stations model, which is an analytical approach to estimating the optimal density of charging stations for certain urban areas, which are subsequently aggregated to city level planning. The optimal charging station’s density is derived to minimize the total cost. A numerical study is conducted to obtain the correlations among the various parameters in the proposed model, such as regional parameters, technological parameters and coefficient factors. To investigate the effect of technological advances, the corresponding changes in the optimal density and total cost are also examined by various combinations of technological parameters. Daejeon city in South Korea is selected for the case study to examine the applicability of the model to real-world problems. With real taxi trajectory data, the optimal density map of charging stations is generated. These results can provide the optimal number of chargers for driving without driving-range anxiety. In the initial planning phase of installing charging infrastructure, the proposed model can be applied to a relatively extensive area to encourage the usage of electric vehicles, especially areas that lack information, such as exact candidate sites for charging stations and other data related with electric vehicles. 
The methods and results of this paper can serve as a planning guideline to facilitate the extensive adoption of electric vehicles. PMID:26575845
Fan, Sanhong; Hu, Yanan; Li, Chen; Liu, Yanrong
2014-01-01
Protein isolates of pumpkin (Cucurbita pepo L.) seeds were hydrolyzed with acid protease to prepare antioxidative peptides. The hydrolysis conditions were optimized through a Box-Behnken experimental design combined with response surface methodology (RSM). The second-order model, developed for the DPPH radical scavenging activity of the pumpkin seed hydrolysates, showed a good fit to the experimental data, with a high coefficient of determination (0.9918). The optimal hydrolysis conditions were determined as follows: hydrolysis temperature 50°C, pH 2.5, enzyme amount 6000 U/g, substrate concentration 0.05 g/ml and hydrolysis time 5 h. Under these conditions, the DPPH radical scavenging activity was as high as 92.82%.
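The RSM fitting step can be sketched generically: fit a full second-order model by least squares, report the coefficient of determination, and locate the stationary point of the fitted surface. The data below are synthetic stand-ins for two coded factors, not the pumpkin-seed measurements.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in: response y as a quadratic function of two coded factors, plus noise.
X = rng.uniform(-1, 1, size=(15, 2))
surface = lambda x1, x2: 80 + 5*x1 - 4*x2 - 6*x1**2 - 3*x2**2 + 2*x1*x2
y = surface(X[:, 0], X[:, 1]) + rng.normal(0, 0.5, 15)

# Design matrix of the full second-order (RSM) model:
# y = b0 + b1 x1 + b2 x2 + b11 x1^2 + b22 x2^2 + b12 x1 x2
A = np.column_stack([np.ones(15), X[:, 0], X[:, 1],
                     X[:, 0]**2, X[:, 1]**2, X[:, 0]*X[:, 1]])
b, *_ = np.linalg.lstsq(A, y, rcond=None)

pred = A @ b
r2 = 1 - np.sum((y - pred)**2) / np.sum((y - y.mean())**2)

# Stationary point of the fitted quadratic surface (candidate optimum): solve grad = 0
H = np.array([[2*b[3], b[5]], [b[5], 2*b[4]]])
x_opt = np.linalg.solve(H, [-b[1], -b[2]])
print("R^2 =", round(float(r2), 3), "optimum (coded units):", x_opt.round(2))
```

Converting the stationary point from coded units back to physical units (temperature, pH, and so on) gives the reported optimal conditions.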
Chen, Yi; Huang, Weina; Peng, Bei
2014-01-01
Because of the demands for sustainable and renewable energy, fuel cells have become increasingly popular, particularly the polymer electrolyte fuel cell (PEFC). Among the various components, the cathode plays a key role in the operation of a PEFC. In this study, a quantitative dual-layer cathode model was proposed for determining the optimal parameters that minimize the over-potential difference η and improve the efficiency using a newly developed bat swarm algorithm with a variable population embedded in the computational intelligence-aided design. The simulation results were in agreement with previously reported results, suggesting that the proposed technique has potential applications for automating and optimizing the design of PEFCs.
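A bare-bones bat-style search (without the paper's variable-population improvement) might be sketched as follows, minimizing a smooth stand-in for the over-potential cost; every constant here is an illustrative assumption, not the paper's cathode model.

```python
import random

random.seed(42)

def eta(x):
    """Smooth stand-in for the over-potential cost surface (not the paper's model)."""
    return sum(xi * xi for xi in x)

def bat_search(dim=2, n_bats=25, iters=300, fmin=0.0, fmax=2.0, A=0.9, r=0.4):
    x = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_bats)]
    v = [[0.0] * dim for _ in range(n_bats)]
    best = min(x, key=eta)[:]
    for _ in range(iters):
        for i in range(n_bats):
            f = fmin + (fmax - fmin) * random.random()         # random frequency
            v[i] = [vj + (xj - bj) * f for vj, xj, bj in zip(v[i], x[i], best)]
            cand = [xj + vj for xj, vj in zip(x[i], v[i])]
            if random.random() > r:                            # local walk around the best bat
                cand = [bj + 0.01 * random.gauss(0, 1) for bj in best]
            if eta(cand) < eta(x[i]) and random.random() < A:  # loudness-gated acceptance
                x[i] = cand
            if eta(cand) < eta(best):                          # greedy global best
                best = cand[:]
    return best

best = bat_search()
print([round(b, 3) for b in best], eta(best))
```

The frequency-driven pull toward the global best plus the small random walk around it is the core of the bat heuristic; the paper's variable-population variant adds and removes bats during the run.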
Protofit: A program for determining surface protonation constants from titration data
NASA Astrophysics Data System (ADS)
Turner, Benjamin F.; Fein, Jeremy B.
2006-11-01
Determining the surface protonation behavior of natural adsorbents is essential to understand how they interact with their environments. ProtoFit is a tool for analysis of acid-base titration data and optimization of surface protonation models. The program offers a number of useful features including: (1) enables visualization of adsorbent buffering behavior; (2) uses an optimization approach independent of starting titration conditions or initial surface charge; (3) does not require an initial surface charge to be defined or to be treated as an optimizable parameter; (4) includes an error analysis intrinsically as part of the computational methods; and (5) generates simulated titration curves for comparison with observation. ProtoFit will typically be run through ProtoFit-GUI, a graphical user interface providing user-friendly control of model optimization, simulation, and data visualization. ProtoFit calculates an adsorbent proton buffering value as a function of pH from raw titration data (including pH and volume of acid or base added). The data is reduced to a form where the protons required to change the pH of the solution are subtracted out, leaving protons exchanged between solution and surface per unit mass of adsorbent as a function of pH. The buffering intensity function Qads* is calculated as the instantaneous slope of this reduced titration curve. Parameters for a surface complexation model are obtained by minimizing the sum of squares between the modeled (i.e. simulated) buffering intensity curve and the experimental data. The variance in the slope estimate, intrinsically produced as part of the Qads* calculation, can be used to weight the sum of squares calculation between the measured buffering intensity and a simulated curve. Effects of analytical error on data visualization and model optimization are discussed. Examples are provided of using ProtoFit for data visualization, model optimization, and model evaluation.
NASA Astrophysics Data System (ADS)
Zhang, Kun; Ma, Jinzhu; Zhu, Gaofeng; Ma, Ting; Han, Tuo; Feng, Li Li
2017-01-01
Global and regional estimates of daily evapotranspiration are essential to our understanding of the hydrologic cycle and climate change. In this study, we selected the radiation-based Priestley-Taylor Jet Propulsion Laboratory (PT-JPL) model and assessed it at a daily time scale using 44 flux towers. These towers are distributed across a wide range of ecosystems: croplands, deciduous broadleaf forest, evergreen broadleaf forest, evergreen needleleaf forest, grasslands, mixed forests, savannas, and shrublands. A regional land surface evapotranspiration model with a relatively simple structure, the PT-JPL model largely uses ecophysiologically based formulations and parameters to relate potential evapotranspiration to actual evapotranspiration. The results using the original model indicate that the model consistently overestimates evapotranspiration in arid regions. This likely results from the misrepresentation of water limitation and energy partitioning in the model. By analyzing the physiological processes and determining the sensitive parameters, we identified a series of parameter sets that increase model performance. The model with optimized parameters showed better performance (R2 = 0.2-0.87; Nash-Sutcliffe efficiency (NSE) = 0.1-0.87) at each site than the original model (R2 = 0.19-0.87; NSE = -12.14-0.85). The results of the optimization indicated that the parameter β (water control of soil evaporation) was much lower in arid regions than in relatively humid regions. Furthermore, the optimized value of the parameter m1 (plant control of canopy transpiration) was mostly between 1 and 1.3, slightly lower than the original value. Also, the optimized parameter Topt correlated well with the actual environmental temperature at each site. We suggest that using optimized parameters with the PT-JPL model could provide an efficient way to improve model performance.
NASA Astrophysics Data System (ADS)
Hao, Qichen; Shao, Jingli; Cui, Yali; Zhang, Qiulan; Huang, Linxian
2018-05-01
An optimization approach is used for the operation of groundwater artificial recharge systems in an alluvial fan in Beijing, China. The optimization model incorporates a transient groundwater flow model, which allows for simulation of the groundwater response to artificial recharge. The facilities' operation with regard to recharge rates is formulated as a nonlinear programming problem to maximize the volume of surface water recharged into the aquifers under specific constraints. This optimization problem is solved by the parallel genetic algorithm (PGA) based on OpenMP, which could substantially reduce the computation time. To solve the PGA with constraints, the multiplicative penalty method is applied. In addition, the facilities' locations are implicitly determined on the basis of the results of the recharge-rate optimizations. Two scenarios are optimized and the optimal results indicate that the amount of water recharged into the aquifers will increase without exceeding the upper limits of the groundwater levels. Optimal operation of this artificial recharge system can also contribute to the more effective recovery of the groundwater storage capacity.
NASA Astrophysics Data System (ADS)
Yu, Wan-Ting; Yu, Hong-yi; Du, Jian-Ping; Wang, Ding
2018-04-01
The Direct Position Determination (DPD) algorithm has been demonstrated to achieve better accuracy when the signal waveforms are known. However, the signal waveform is difficult to know completely in the actual positioning process. To solve this problem, we propose a DPD method for digital modulation signals based on an improved particle swarm optimization algorithm. First, a DPD model is established for known modulation signals and a cost function is obtained for symbol estimation. Second, as the optimization of the cost function is a nonlinear integer optimization problem, an improved Particle Swarm Optimization (PSO) algorithm is considered for the optimal symbol search. Simulations are carried out to show the higher positioning accuracy of the proposed DPD method and the convergence of the fitness function under different inertia weights and population sizes. On the one hand, the proposed algorithm can take full advantage of the signal features to improve the positioning accuracy. On the other hand, the improved PSO algorithm can improve the efficiency of the symbol search by nearly one hundred times while achieving a globally optimal solution.
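A sketch of PSO over integer symbol indices is shown below on a toy noiseless decoding cost. The QPSK constellation, the channel gain, and the final symbol-wise polish are my illustrative assumptions and stand in for, rather than reproduce, the paper's improved algorithm.

```python
import random

random.seed(7)

QPSK = [1 + 0j, 1j, -1 + 0j, -1j]
TRUE = [0, 3, 1, 2, 0, 1, 3, 2]            # hypothetical transmitted symbol indices
RX = [QPSK[s] * 0.8 for s in TRUE]         # toy noiseless received samples

def cost(idx):
    """Mismatch between received samples and a candidate symbol sequence."""
    return sum(abs(r - QPSK[i] * 0.8) ** 2 for r, i in zip(RX, idx))

def improved_pso(dim=8, n=15, iters=60, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.uniform(0, 3) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=lambda p: cost([round(x) for x in p]))[:]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(3.0, max(0.0, pos[i][d] + vel[i][d]))  # clamp to symbol range
            if cost([round(x) for x in pos[i]]) < cost([round(x) for x in pbest[i]]):
                pbest[i] = pos[i][:]
                if cost([round(x) for x in pbest[i]]) < cost([round(x) for x in gbest]):
                    gbest = pbest[i][:]
    # symbol-wise polish of the swarm's best (my stand-in for the paper's "improvement")
    est = [round(x) for x in gbest]
    for d in range(dim):
        est[d] = min(range(4), key=lambda s: cost(est[:d] + [s] + est[d + 1:]))
    return est

est = improved_pso()
print(est, cost(est))
```

Particles move in a continuous box and are rounded to symbol indices for evaluation; the closing coordinate-wise polish guarantees the exact sequence here only because this toy cost is separable.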
Hu, Wenfa; He, Xinhua
2014-01-01
Time, quality, and cost are three important but conflicting objectives in a building construction project. It is a tough challenge for project managers to optimize them, since they are different parameters. This paper presents a time-cost-quality optimization model that enables managers to optimize multiple objectives. The model is built on the project breakdown structure method, in which the task resources in a construction project are divided into a series of activities and further into construction labor, materials, equipment, and administration. The resources utilized in a construction activity ultimately determine its construction time, cost, and quality, and a complex time-cost-quality trade-off model is finally generated based on the correlations between construction activities. A genetic algorithm tool is applied in the model to solve the comprehensive nonlinear time-cost-quality problems. The construction of a three-storey house is used as an example to illustrate the implementation of the model, demonstrate its advantages in optimizing the trade-off between construction time, cost, and quality, and help make winning decisions in construction practice. The computed time-cost-quality curves, presented in visual graphics from the case study, show traditional cost-time assumptions to be reasonable and demonstrate the sophistication of this time-cost-quality trade-off model.
Multi-Objective Programming for Lot-Sizing with Quantity Discount
NASA Astrophysics Data System (ADS)
Kang, He-Yau; Lee, Amy H. I.; Lai, Chun-Mei; Kang, Mei-Sung
2011-11-01
Multi-objective programming (MOP) is one of the popular methods for decision making in a complex environment. In MOP, decision makers try to optimize two or more objectives simultaneously under various constraints. A complete optimal solution seldom exists, and a Pareto-optimal solution is usually used. Some methods, such as the weighting method, which assigns priorities to the objectives and sets aspiration levels for them, are used to derive a compromise solution. The ɛ-constraint method is a modified weighting method: one of the objective functions is optimized while the other objective functions are treated as constraints and incorporated in the constraint part of the model. This research considers a stochastic lot-sizing problem with multiple suppliers and quantity discounts. The model is then transformed into a mixed-integer programming (MIP) model based on the ɛ-constraint method. An illustrative example demonstrates the practicality of the proposed model. The results show that the model is an effective and accurate tool for determining the replenishment of a manufacturer from multiple suppliers over multiple periods.
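The ɛ-constraint mechanics can be sketched on a toy two-objective replenishment problem with a quantity discount (all costs and service levels below are invented): optimize cost while sweeping a service-level constraint, and the constrained optima trace out the Pareto front.

```python
from itertools import product

# Toy two-supplier lot-sizing stand-in: choose quantities (q1, q2);
# f1 = total cost (minimize), f2 = service level (maximize). Values illustrative.
options = [(q1, q2) for q1, q2 in product(range(0, 7), repeat=2) if q1 + q2 >= 4]

def cost(q1, q2):      # supplier 2 offers a quantity discount above 3 units
    c2 = 8 if q2 <= 3 else 6
    return 10 * q1 + c2 * q2

def service(q1, q2):   # more stock, better service (crude proxy, capped at 100)
    return min(100, 60 + 5 * (q1 + q2))

pareto = []
for eps in range(60, 101, 5):                      # epsilon sweep on f2
    feas = [o for o in options if service(*o) >= eps]
    if feas:
        best = min(feas, key=lambda o: cost(*o))   # optimize f1 under the f2-constraint
        point = (cost(*best), service(*best))
        if point not in pareto:
            pareto.append(point)
print(pareto)
```

Each ɛ value turns the service objective into a constraint and leaves a single-objective problem, which is exactly how the paper's MIP reformulation handles the second objective.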
An automatic and effective parameter optimization method for model tuning
NASA Astrophysics Data System (ADS)
Zhang, T.; Li, L.; Lin, Y.; Xue, W.; Xie, F.; Xu, H.; Huang, X.
2015-11-01
Physical parameterizations in general circulation models (GCMs), having various uncertain parameters, greatly impact model performance and model climate sensitivity. Traditional manual and empirical tuning of these parameters is time-consuming and ineffective. In this study, a "three-step" methodology is proposed to automatically and effectively obtain the optimum combination of some key parameters in cloud and convective parameterizations according to a comprehensive objective evaluation metric. Different from traditional optimization methods, two extra steps, one determining the model's sensitivity to the parameters and the other choosing the optimum initial values for the sensitive parameters, are introduced before the downhill simplex method. This new method reduces the number of parameters to be tuned and accelerates the convergence of the downhill simplex method. Atmospheric GCM simulation results show that the optimum combination of these parameters determined using this method is able to improve the model's overall performance by 9%. The proposed methodology and software framework can be easily applied to other GCMs to speed up the model development process, especially regarding the unavoidable comprehensive parameter tuning during the model development stage.
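The three-step idea (screen for sensitive parameters, pick a good starting point, then run the downhill simplex) can be sketched on a toy evaluation metric. The surface, the sensitivity threshold, and the compact Nelder-Mead (reflection/expansion/contraction only, no shrink step) are all illustrative assumptions, not the paper's implementation.

```python
import itertools

# Toy stand-in for a model-skill metric: parameters a and b matter, c barely does.
def metric(a, b, c):
    return (a - 1.2) ** 2 + 2 * (b + 0.5) ** 2 + 1e-6 * c ** 2

# Step 1: sensitivity screen -- perturb each parameter and keep the influential ones.
base = {"a": 0.0, "b": 0.0, "c": 0.0}
sens = {k: abs(metric(**dict(base, **{k: 1.0})) - metric(**dict(base, **{k: -1.0})))
        for k in base}
tuned = [k for k in base if sens[k] > 1e-3]        # c is screened out

# Step 2: coarse grid search for a good starting point of the sensitive parameters.
grid = [-2, -1, 0, 1, 2]
start = min(itertools.product(grid, repeat=len(tuned)),
            key=lambda v: metric(**dict(base, **dict(zip(tuned, v)))))

# Step 3: downhill simplex (compact Nelder-Mead) on the sensitive parameters only.
def f(v):
    return metric(**dict(base, **dict(zip(tuned, v))))

def nelder_mead(f, x0, step=0.5, iters=200):
    n = len(x0)
    simplex = [list(x0)] + [[x0[j] + (step if j == i else 0) for j in range(n)]
                            for i in range(n)]
    for _ in range(iters):
        simplex.sort(key=f)
        centroid = [sum(p[j] for p in simplex[:-1]) / n for j in range(n)]
        worst = simplex[-1]
        refl = [2 * centroid[j] - worst[j] for j in range(n)]       # reflection
        if f(refl) < f(simplex[0]):
            exp = [3 * centroid[j] - 2 * worst[j] for j in range(n)]  # expansion
            simplex[-1] = exp if f(exp) < f(refl) else refl
        elif f(refl) < f(simplex[-2]):
            simplex[-1] = refl
        else:                                                       # inside contraction
            simplex[-1] = [(centroid[j] + worst[j]) / 2 for j in range(n)]
    return min(simplex, key=f)

opt = nelder_mead(f, list(start))
print("tuned:", tuned, "start:", start, "optimum:", [round(v, 3) for v in opt])
```

Screening drops the insensitive parameter before the simplex ever runs, which is the source of the method's speed-up: the simplex searches a smaller space from a better starting point.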
Optimal input shaping for Fisher identifiability of control-oriented lithium-ion battery models
NASA Astrophysics Data System (ADS)
Rothenberger, Michael J.
This dissertation examines the fundamental challenge of optimally shaping input trajectories to maximize parameter identifiability of control-oriented lithium-ion battery models. Identifiability is a property from information theory that determines the solvability of parameter estimation for mathematical models using input-output measurements. This dissertation creates a framework that exploits the Fisher information metric to quantify the level of battery parameter identifiability, optimizes this metric through input shaping, and facilitates faster and more accurate estimation. The popularity of lithium-ion batteries is growing significantly in the energy storage domain, especially for stationary and transportation applications. While these cells have excellent power and energy densities, they are plagued with safety and lifespan concerns. These concerns are often resolved in the industry through conservative current and voltage operating limits, which reduce the overall performance and still lack robustness in detecting catastrophic failure modes. New advances in automotive battery management systems mitigate these challenges through the incorporation of model-based control to increase performance, safety, and lifespan. To achieve these goals, model-based control requires accurate parameterization of the battery model. While many groups in the literature study a variety of methods to perform battery parameter estimation, a fundamental issue of poor parameter identifiability remains apparent for lithium-ion battery models. This fundamental challenge of battery identifiability is studied extensively in the literature, and some groups are even approaching the problem of improving the ability to estimate the model parameters. The first approach is to add additional sensors to the battery to gain more information that is used for estimation. 
The other main approach is to shape the input trajectories to increase the amount of information that can be gained from input-output measurements, and is the approach used in this dissertation. Research in the literature studies optimal current input shaping for high-order electrochemical battery models and focuses on offline laboratory cycling. While this body of research highlights improvements in identifiability through optimal input shaping, each optimal input is a function of nominal parameters, which creates a tautology. The parameter values must be known a priori to determine the optimal input for maximizing estimation speed and accuracy. The system identification literature presents multiple studies containing methods that avoid the challenges of this tautology, but these methods are absent from the battery parameter estimation domain. The gaps in the above literature are addressed in this dissertation through the following five novel and unique contributions. First, this dissertation optimizes the parameter identifiability of a thermal battery model, which Sergio Mendoza experimentally validates through a close collaboration with this dissertation's author. Second, this dissertation extends input-shaping optimization to a linear and nonlinear equivalent-circuit battery model and illustrates the substantial improvements in Fisher identifiability for a periodic optimal signal when compared against automotive benchmark cycles. Third, this dissertation presents an experimental validation study of the simulation work in the previous contribution. The estimation study shows that the automotive benchmark cycles either converge slower than the optimized cycle, or not at all for certain parameters. Fourth, this dissertation examines how automotive battery packs with additional power electronic components that dynamically route current to individual cells/modules can be used for parameter identifiability optimization. 
While the user and vehicle supervisory controller dictate the current demand for these packs, the optimized internal allocation of current still improves identifiability. Finally, this dissertation presents a robust Bayesian sequential input shaping optimization study to maximize the conditional Fisher information of the battery model parameters without prior knowledge of the nominal parameter set. This iterative algorithm only requires knowledge of the prior parameter distributions to converge to the optimal input trajectory.
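The Fisher-information input-shaping idea described above can be illustrated with a deliberately simplified sketch. The two-parameter model below (V_k = OCV − R·i_k − (1/Q)·Σ i_j·dt), the two current profiles, and the noise level are illustrative assumptions, not the dissertation's electrochemical model; the sketch only shows how the determinant of the Fisher information matrix distinguishes candidate inputs:

```python
def fisher_det(current, dt=1.0, sigma=0.01):
    """Determinant of the 2x2 Fisher information matrix for (R, 1/Q),
    assuming additive Gaussian voltage noise with std sigma."""
    charge = 0.0
    f11 = f12 = f22 = 0.0
    for i in current:
        charge += i * dt
        s_r = -i         # sensitivity of voltage to resistance R
        s_q = -charge    # sensitivity of voltage to inverse capacity 1/Q
        f11 += s_r * s_r
        f12 += s_r * s_q
        f22 += s_q * s_q
    return (f11 * f22 - f12 * f12) / sigma ** 4

constant = [1.0] * 100                                               # steady discharge
square = [1.0 if (k // 10) % 2 == 0 else -1.0 for k in range(100)]   # pulsed profile
print(fisher_det(constant), fisher_det(square))
```

Maximizing such a scalar summary of the information matrix over admissible current profiles is the essence of the input-shaping optimization; the dissertation does this for far richer models and constraints.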
Robust Flight Path Determination for Mars Precision Landing Using Genetic Algorithms
NASA Technical Reports Server (NTRS)
Bayard, David S.; Kohen, Hamid
1997-01-01
This paper documents the application of genetic algorithms (GAs) to the problem of robust flight path determination for Mars precision landing. The robust flight path problem is defined here as the determination of the flight path which delivers a low-lift open-loop controlled vehicle to its desired final landing location while minimizing the effect of perturbations due to uncertainty in the atmospheric model and entry conditions. The genetic algorithm was capable of finding solutions which reduced the landing error from 111 km RMS radial (open-loop optimal) to 43 km RMS radial (optimized with respect to perturbations) using 200 hours of computation on an Ultra-SPARC workstation. Further reduction in the landing error is possible by going to closed-loop control which can utilize the GA optimized paths as nominal trajectories for linearization.
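A minimal sketch of the robust-fitness idea behind the GA approach: each candidate open-loop control is scored by its RMS miss distance averaged over sampled perturbations, and a simple elitist GA minimizes that score. The toy landing model, the wind samples, and the GA settings below are all illustrative assumptions, not the paper's entry dynamics:

```python
import random

random.seed(1)
WINDS = [random.gauss(0.0, 1.0) for _ in range(50)]  # sampled atmospheric perturbations

def rms_miss(genes):
    # toy landing model: miss = open-loop aim error plus a wind-sensitivity term
    return (sum(((genes[0] - 2.0) + genes[1] * w) ** 2 for w in WINDS)
            / len(WINDS)) ** 0.5

def evolve(pop_size=40, gens=60):
    pop = [[random.uniform(-5.0, 5.0), random.uniform(-5.0, 5.0)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=rms_miss)           # rank by robust fitness
        elite = pop[: pop_size // 2]     # keep the better half unchanged
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            # blend crossover plus Gaussian mutation
            children.append([(x + y) / 2.0 + random.gauss(0.0, 0.1)
                             for x, y in zip(a, b)])
        pop = elite + children
    return min(pop, key=rms_miss)

best = evolve()
print(best, rms_miss(best))
```

The optimizer drives the wind-sensitivity gene toward zero, which is the "robust" part: the best trajectory is the one whose landing error is least affected by the perturbation ensemble.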
Optimization of Progressive Freeze Concentration on Apple Juice via Response Surface Methodology
NASA Astrophysics Data System (ADS)
Samsuri, S.; Amran, N. A.; Jusoh, M.
2018-05-01
In this work, a progressive freeze concentration (PFC) system was developed to concentrate apple juice and was optimized by response surface methodology (RSM). The effects of operating conditions such as coolant temperature, circulation flowrate, circulation time and shaking speed on the effective partition constant (K) were investigated. A five-level central composite design (CCD) was employed to search for the optimal conditions for concentrating the apple juice. A full quadratic model for K was established using the method of least squares. The coefficient of determination (R2) of this model was found to be 0.7792. The optimum conditions were found to be coolant temperature = -10.59 °C, circulation flowrate = 3030.23 mL/min, circulation time = 67.35 minutes and shaking speed = 30.96 ohm. A validation experiment was performed to evaluate the accuracy of the optimization procedure, and the best K value of 0.17 was achieved under the optimized conditions.
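The RSM step of fitting a quadratic response model by least squares and locating its stationary point can be sketched in one variable. The "K versus coolant temperature" points below are invented for illustration only; they are not the study's measurements:

```python
def solve(A, b):
    # Gaussian elimination with partial pivoting for a small dense system
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def fit_quadratic(xs, ys):
    # normal equations for the least-squares fit y ~ a + b*x + c*x^2
    A = [[sum(x ** (i + j) for x in xs) for j in range(3)] for i in range(3)]
    rhs = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(3)]
    return solve(A, rhs)

# invented response points with a minimum near -10 degC (NOT the paper's data)
xs = [-16.0, -13.0, -10.0, -7.0, -4.0]
ys = [0.30, 0.21, 0.17, 0.22, 0.31]
a, b, c = fit_quadratic(xs, ys)
x_opt = -b / (2.0 * c)  # stationary point of the fitted quadratic
print(round(x_opt, 2))
```

With several factors, the same idea applies with cross-terms in the design matrix, and the CCD chooses where the xs are placed so that all quadratic coefficients are estimable.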
NASA Astrophysics Data System (ADS)
Powell, Keith B.; Vaitheeswaran, Vidhya
2010-07-01
The MMT observatory has recently implemented and tested an optimal wavefront controller for the NGS adaptive optics system. Open-loop atmospheric data collected at the telescope are used as the input to a MATLAB-based analytical model. The model uses nonlinear constrained minimization to determine controller gains and optimize system performance. The real-time controller performing the adaptive optics closed-loop operation is implemented on a dedicated high-performance PC-based quad-core server. The controller algorithm is written in C and uses the GNU Scientific Library for linear algebra. Tests at the MMT confirmed that the optimal controller significantly reduced the residual RMS wavefront error compared with the previous controller. Significant reductions in image FWHM and increased peak intensities were obtained in the J, H and K bands. The optimal PID controller is now operating as the baseline wavefront controller for the MMT NGS-AO system.
Marin, Pricila; Borba, Carlos Eduardo; Módenes, Aparecido Nivaldo; Espinoza-Quiñones, Fernando R; de Oliveira, Silvia Priscila Dias; Kroumov, Alexander Dimitrov
2014-01-01
Reactive blue 5G dye removal in a fixed-bed column packed with Dowex Optipore SD-2 adsorbent was modelled. Three mathematical models were tested in order to determine the limiting step of the mass transfer in the dye adsorption process onto the adsorbent. The mass transfer resistance was used as the criterion distinguishing the models, which considered the external, internal, or surface adsorption limiting step. In the model development procedure, two hypotheses were applied to describe the internal mass transfer resistance: first, the mass transfer coefficient was assumed to be constant; second, the mass transfer coefficient was considered to be a function of the dye concentration in the adsorbent. The experimental breakthrough curves were obtained for different particle diameters of the adsorbent, flow rates, and feed dye concentrations in order to evaluate the predictive power of the models. The mass transfer parameters of the mathematical models were estimated using the downhill simplex optimization method. The results showed that the model considering internal resistance with a variable mass transfer coefficient was more flexible than the others and better described the dynamics of the dye adsorption process in the fixed-bed column. Hence, this model can be used for optimization and column design purposes for the investigated systems and similar ones.
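A compact sketch of the estimation procedure named in the abstract: the downhill simplex (Nelder-Mead) method minimizing the squared error between a model curve and data. The logistic "breakthrough curve" and its parameters below are illustrative stand-ins, not the paper's mass-transfer models:

```python
import math

def model(t, k, t50):
    # logistic breakthrough curve C/C0, guarded against exp overflow
    z = k * (t - t50)
    if z > 50.0:
        return 1.0
    if z < -50.0:
        return 0.0
    return 1.0 / (1.0 + math.exp(-z))

times = list(range(0, 41, 2))
data = [model(t, 0.5, 20.0) for t in times]   # synthetic, noise-free "data"

def sse(params):
    k, t50 = params
    return sum((model(t, k, t50) - y) ** 2 for t, y in zip(times, data))

def nelder_mead(f, x0, steps, iters=300):
    # simplified downhill simplex: reflection, expansion, contraction, shrink
    n = len(x0)
    simplex = [list(x0)] + [[x0[j] + (steps[i] if j == i else 0.0)
                             for j in range(n)] for i in range(n)]
    for _ in range(iters):
        simplex.sort(key=f)
        best, worst = simplex[0], simplex[-1]
        centroid = [sum(p[j] for p in simplex[:-1]) / n for j in range(n)]
        refl = [2.0 * centroid[j] - worst[j] for j in range(n)]
        if f(refl) < f(best):
            expd = [3.0 * centroid[j] - 2.0 * worst[j] for j in range(n)]
            simplex[-1] = expd if f(expd) < f(refl) else refl
        elif f(refl) < f(simplex[-2]):
            simplex[-1] = refl
        else:
            contr = [0.5 * (centroid[j] + worst[j]) for j in range(n)]
            if f(contr) < f(worst):
                simplex[-1] = contr
            else:  # shrink the whole simplex toward the best vertex
                simplex = [best] + [[0.5 * (p[j] + best[j]) for j in range(n)]
                                    for p in simplex[1:]]
    return min(simplex, key=f)

k_fit, t50_fit = nelder_mead(sse, [0.2, 10.0], steps=[0.2, 5.0])
print(round(k_fit, 3), round(t50_fit, 2))
```

The method needs no derivatives, which is why it suits model curves defined only through a numerical simulation of the column.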
Yuan, Liming; Smith, Alex C
In this study, computational fluid dynamics (CFD) modeling was conducted to optimize gas sampling locations for the early detection of spontaneous heating in longwall gob areas. Initial simulations were carried out to predict carbon monoxide (CO) concentrations at various regulators in the gob using a bleeder ventilation system. Measured CO concentration values at these regulators were then used to calibrate the CFD model. The calibrated CFD model was used to simulate CO concentrations at eight sampling locations in the gob using a bleederless ventilation system to determine the optimal sampling locations for early detection of spontaneous combustion.
NASA Astrophysics Data System (ADS)
Feng, X.; Sheng, Y.; Condon, A. J.; Paramygin, V. A.; Hall, T.
2012-12-01
A cost-effective method, JPM-OS (Joint Probability Method with Optimal Sampling), for determining storm response and inundation return frequencies was developed and applied to quantify the hazard of hurricane storm surge and inundation along the Southwest Florida (US) coast (Condon and Sheng 2012). The JPM-OS uses piecewise multivariate regression splines coupled with dimension-adaptive sparse grids to enable the generation of a base flood elevation (BFE) map. Storms are characterized by their landfall characteristics (pressure deficit, radius to maximum winds, forward speed, heading, and landfall location), and a sparse grid algorithm determines the optimal set of storm parameter combinations so that the inundation from any other storm parameter combination can be determined. The end result is a sample of a few hundred (197 for SW FL) optimal storms, which are simulated using CH3D-SSMS, a dynamically coupled storm surge/wave modeling system (Sheng et al. 2010). The limited historical climatology (1940-2009) is explored to develop probabilistic characterizations of the five storm parameters. The probability distributions are discretized, and the inundation response of all parameter combinations is determined by interpolation in the five-dimensional space of the optimal storms. The surge response and the associated joint probability of each parameter combination are used to determine the flood elevation with a 1% annual probability of occurrence. The limited historical data constrain the accuracy of the PDFs of the hurricane characteristics, which in turn affect the accuracy of the calculated BFE maps. To offset the deficiency of the limited historical dataset, this study presents a different method for producing coastal inundation maps. Instead of using the historical storm data, here we adopt 33,731 tracks that represent the storm climatology in the North Atlantic basin and along the SW Florida coast.
This large quantity of hurricane tracks is generated from a new statistical model which had been used for Western North Pacific (WNP) tropical cyclone (TC) genesis (Hall 2011) as well as North Atlantic tropical cyclone genesis (Hall and Jewson 2007). The introduction of these tracks compensates for the shortage of historical samples and allows for the more reliable PDFs required for implementation of JPM-OS. Using the 33,731 tracks and JPM-OS, an optimal storm ensemble is determined. This approach results in different storms/winds for storm surge and inundation modeling, and produces different Base Flood Elevation maps for coastal regions. Coastal inundation maps produced by the two different methods will be discussed in detail in the poster paper.
Evaluation of the chondral modeling theory using FE-simulation and numeric shape optimization
Plochocki, Jeffrey H; Ward, Carol V; Smith, Douglas E
2009-01-01
The chondral modeling theory proposes that hydrostatic pressure within articular cartilage regulates joint size, shape, and congruence through regional variations in rates of tissue proliferation. The purpose of this study is to develop a computational model using a nonlinear two-dimensional finite element analysis in conjunction with numeric shape optimization to evaluate the chondral modeling theory. The model employed in this analysis is generated from an MR image of the medial portion of the tibiofemoral joint in a subadult male. Stress-regulated morphological changes are simulated until skeletal maturity and evaluated against the chondral modeling theory. The computed results are found to support the chondral modeling theory. The shape-optimized model exhibits increased joint congruence, broader stress distributions in articular cartilage, and a relative decrease in joint diameter. The results for the computational model correspond well with experimental data and provide valuable insights into the mechanical determinants of joint growth. The model also provides a crucial first step toward developing a comprehensive model that can be employed to test the influence of mechanical variables on joint conformation. PMID:19438771
NASA Astrophysics Data System (ADS)
Kitagawa, Yuta; Tanabe, Katsuaki
2018-05-01
Mg is promising as a new light-weight and low-cost hydrogen-storage material. We construct a numerical model to represent the hydrogen dynamics on Mg, comprising dissociative adsorption, desorption, bulk diffusion, and chemical reaction. Our calculation shows a good agreement with experimental data for hydrogen absorption and desorption on Mg. Our model clarifies the evolution of the rate-determining processes as absorption and desorption proceed. Furthermore, we investigate the optimal condition and materials design for efficient hydrogen storage in Mg. By properly understanding the rate-determining processes using our model, one can determine the design principle for high-performance hydrogen-storage systems.
A comparison of dynamic and static economic models of uneven-aged stand management
Robert G. Haight
1985-01-01
Numerical techniques have been used to compute the discrete-time sequence of residual diameter distributions that maximize the present net worth (PNW) of harvestable volume from an uneven-aged stand. Results contradicted optimal steady-state diameter distributions determined with static analysis. In this paper, optimality conditions for solutions to dynamic and static...
ERIC Educational Resources Information Center
Simen, Patrick; Contreras, David; Buck, Cara; Hu, Peter; Holmes, Philip; Cohen, Jonathan D.
2009-01-01
The drift-diffusion model (DDM) implements an optimal decision procedure for stationary, 2-alternative forced-choice tasks. The height of a decision threshold applied to accumulating information on each trial determines a speed-accuracy tradeoff (SAT) for the DDM, thereby accounting for a ubiquitous feature of human performance in speeded response…
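The threshold-controlled speed-accuracy tradeoff in the DDM can be reproduced with a short random-walk simulation; the drift, noise, and threshold values below are arbitrary illustrative choices, not fitted parameters from the article:

```python
import random

def simulate_ddm(threshold, drift=0.1, noise=1.0, trials=2000, seed=7):
    """Discrete-time DDM: accumulate noisy evidence until |x| hits the threshold.
    Returns (accuracy, mean response time in steps)."""
    rng = random.Random(seed)
    correct = 0
    total_rt = 0
    for _ in range(trials):
        x, t = 0.0, 0
        while abs(x) < threshold and t < 100000:
            x += drift + rng.gauss(0.0, noise)   # drift toward the correct bound
            t += 1
        correct += x >= threshold                # upper bound = correct response
        total_rt += t
    return correct / trials, total_rt / trials

acc_lo, rt_lo = simulate_ddm(threshold=5.0)      # low threshold: fast, error-prone
acc_hi, rt_hi = simulate_ddm(threshold=15.0)     # high threshold: slow, accurate
print(acc_lo, rt_lo, acc_hi, rt_hi)
```

Raising the threshold trades speed for accuracy, which is exactly the SAT the abstract describes.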
Model for Sucker-Rod Pumping Unit Operating Modes Analysis Based on SimMechanics Library
NASA Astrophysics Data System (ADS)
Zyuzev, A. M.; Bubnov, M. V.
2018-01-01
The article provides basic information about the development of a sucker-rod pumping unit (SRPU) model by means of the SimMechanics library in the MATLAB Simulink environment. The model is designed for developing optimal pump-productivity management algorithms; sensorless diagnostics of the plunger pump and pumpjack; acquisition of the dynamometer card and determination of the dynamic fluid level in the well; normalization of faulty unit operation before troubleshooting is performed by staff; and determination of the equilibrium ratio from energy indicators, with output of manual balancing recommendations to achieve optimal power consumption efficiency. Particular attention is given to the application of various blocks from the SimMechanics library to account for the principal characteristics of the pumpjack construction and to obtain an adequate model. The article explains in depth the features of the developed tools for collecting and analyzing simulated mechanism data. Conclusions are drawn about the practical applicability of the SRPU modelling results and areas for further investigation.
Zeković, Zoran; Vladić, Jelena; Vidović, Senka; Adamović, Dušan; Pavlić, Branimir
2016-10-01
Microwave-assisted extraction (MAE) of polyphenols from coriander seeds was optimized by simultaneous maximization of total phenolic (TP) and total flavonoid (TF) yields, as well as maximized antioxidant activity determined by 1,1-diphenyl-2-picrylhydrazyl and reducing power assays. Box-Behnken experimental design with response surface methodology (RSM) was used for optimization of MAE. Extraction time (X1 , 15-35 min), ethanol concentration (X2 , 50-90% w/w) and irradiation power (X3 , 400-800 W) were investigated as independent variables. Experimentally obtained values of investigated responses were fitted to a second-order polynomial model, and multiple regression analysis and analysis of variance were used to determine fitness of the model and optimal conditions. The optimal MAE conditions for simultaneous maximization of polyphenol yield and increased antioxidant activity were an extraction time of 19 min, an ethanol concentration of 63% and an irradiation power of 570 W, while predicted values of TP, TF, IC50 and EC50 at optimal MAE conditions were 311.23 mg gallic acid equivalent per 100 g dry weight (DW), 213.66 mg catechin equivalent per 100 g DW, 0.0315 mg mL(-1) and 0.1311 mg mL(-1) respectively. RSM was successfully used for multi-response optimization of coriander seed polyphenols. Comparison of optimized MAE with conventional extraction techniques confirmed that MAE provides significantly higher polyphenol yields and extracts with increased antioxidant activity. © 2016 Society of Chemical Industry.
Optimism predicts positive health in repatriated prisoners of war.
Segovia, Francine; Moore, Jeffrey L; Linnville, Steven E; Hoyt, Robert E
2015-05-01
"Positive health," defined as a state beyond the mere absence of disease, was used as a model to examine factors for enhancing health despite extreme trauma. The study examined the United States' longest detained American prisoners of war, those held in Vietnam in the 1960s through early 1970s. Positive health was measured using a physical and a psychological composite score for each individual, based on 9 physical and 9 psychological variables. Physical and psychological health was correlated with optimism obtained postrepatriation (circa 1973). Linear regressions were employed to determine which variables contributed most to health ratings. Optimism was the strongest predictor of physical health (β = -.33, t = -2.73, p = .008), followed by fewer sleep complaints (β = -.29, t = -2.52, p = .01). This model accounted for 25% of the variance. Optimism was also the strongest predictor of psychological health (β = -.41, t = -2.87, p = .006), followed by Minnesota Multiphasic Personality Inventory-Psychopathic Deviate (MMPI-PD; McKinley & Hathaway, 1944) scores (β = -.23, t = -1.88, p = .07). This model strongly suggests that optimism is a significant predictor of positive physical and psychological health, and optimism also provides long-term protective benefits. These findings and the utility of this model suggest a promising area for future research and intervention. (c) 2015 APA, all rights reserved.
Takahashi, Fumihiro; Morita, Satoshi
2018-02-08
Phase II clinical trials are conducted to determine the optimal dose of the study drug for use in Phase III clinical trials while also balancing efficacy and safety. In conducting these trials, it may be important to consider subpopulations of patients grouped by background factors such as drug metabolism and kidney and liver function. Determining the optimal dose, as well as maximizing the effectiveness of the study drug by analyzing patient subpopulations, requires a complex decision-making process. In extreme cases, drug development has to be terminated due to inadequate efficacy or severe toxicity. Such a decision may be based on a particular subpopulation. We propose a Bayesian utility approach (BUART) to randomized Phase II clinical trials which uses a first-order bivariate normal dynamic linear model for efficacy and safety in order to determine the optimal dose and study population in a subsequent Phase III clinical trial. We carried out a simulation study under a wide range of clinical scenarios to evaluate the performance of the proposed method in comparison with a conventional method separately analyzing efficacy and safety in each patient population. The proposed method showed more favorable operating characteristics in determining the optimal population and dose.
Fetisova, Z G
2004-01-01
In accordance with our concept of rigorous optimization of the photosynthetic machinery by a functional criterion, this series of papers continues the purposeful search in natural photosynthetic units (PSUs) for the basic principles of organization that we predicted theoretically for optimal model light-harvesting systems. This approach allowed us to determine the basic principles for the organization of a PSU of any fixed size. This series of papers deals with the problem of structural optimization of a light-harvesting antenna of variable size, controlled in vivo by the light intensity during the growth of organisms; this accentuates the problem of antenna structure optimization because the optimization requirements become more stringent as the PSU increases in size. In this work, using mathematical modeling of the functioning of natural PSUs, we have shown that the aggregation of pigments in a model light-harvesting antenna, being one of the universal optimizing factors, furthermore allows the antenna efficiency to be controlled if the extent of pigment aggregation is a variable parameter. In this case, the efficiency of the antenna increases with the size of the elementary antenna aggregate, thus ensuring high efficiency of the PSU irrespective of its size; i.e., variation in the extent of pigment aggregation controlled by the size of the light-harvesting antenna is biologically expedient.
NASA Technical Reports Server (NTRS)
Madavan, Nateri K.
2004-01-01
Differential Evolution (DE) is a simple, fast, and robust evolutionary algorithm that has proven effective in determining the global optimum for several difficult single-objective optimization problems. The DE algorithm has recently been extended to multiobjective optimization problems by using a Pareto-based approach. In this paper, a Pareto DE algorithm is applied to multiobjective aerodynamic shape optimization problems that are characterized by computationally expensive objective function evaluations. To reduce the computational expense, the algorithm is coupled with generalized response surface meta-models based on artificial neural networks. Results are presented for some test optimization problems from the literature to demonstrate the capabilities of the method.
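A minimal single-objective DE/rand/1/bin sketch of the base algorithm described above; the Pareto extension and the neural-network surrogates are beyond this snippet, and all settings are conventional textbook values, not the paper's:

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9, gens=100, seed=3):
    """DE/rand/1/bin: mutate with a scaled difference vector, binomial crossover,
    greedy one-to-one selection."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            jrand = rng.randrange(dim)   # guarantee at least one mutated gene
            trial = [a[j] + F * (b[j] - c[j])
                     if (rng.random() < CR or j == jrand) else pop[i][j]
                     for j in range(dim)]
            if f(trial) <= f(pop[i]):    # greedy replacement
                pop[i] = trial
    return min(pop, key=f)

def sphere(x):
    return sum(v * v for v in x)

best = differential_evolution(sphere, [(-5.0, 5.0)] * 3)
print(best)
```

In the multiobjective version, the greedy replacement is replaced by Pareto-dominance ranking, and the expensive objective f is queried through a trained surrogate most of the time.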
Sitting biomechanics, part II: optimal car driver's seat and optimal driver's spinal model.
Harrison, D D; Harrison, S O; Croft, A C; Harrison, D E; Troyanovich, S J
2000-01-01
Driving has been associated with signs and symptoms caused by vibrations. Sitting causes the pelvis to rotate backwards and the lumbar lordosis to reduce. Lumbar support and armrests reduce disc pressure and electromyographically recorded values. However, the ideal driver's seat and an optimal seated spinal model have not been described. To determine an optimal automobile seat and an ideal spinal model of a driver. Information was obtained from peer-reviewed scientific journals and texts, automotive engineering reports, and the National Library of Medicine. Driving predisposes vehicle operators to low-back pain and degeneration. The optimal seat would have an adjustable seat back incline of 100 degrees from horizontal, a changeable depth of seat back to front edge of seat bottom, adjustable height, an adjustable seat bottom incline, firm (dense) foam in the seat bottom cushion, horizontally and vertically adjustable lumbar support, adjustable bilateral arm rests, adjustable head restraint with lordosis pad, seat shock absorbers to dampen frequencies in the 1 to 20 Hz range, and linear front-back travel of the seat enabling drivers of all sizes to reach the pedals. The lumbar support should be pulsating in depth to reduce static load. The seat back should be damped to reduce rebounding of the torso in rear-end impacts. The optimal driver's spinal model would be the average Harrison model in a 10 degrees posterior inclining seat back angle.
Wong, Karen; Delaney, Geoff P; Barton, Michael B
2016-04-01
The recently updated optimal radiotherapy utilisation model estimated that 48.3% of all cancer patients should receive external beam radiotherapy at least once during their disease course. Adapting this model, we constructed an evidence-based model to estimate the optimal number of fractions for notifiable cancers in Australia to determine equipment and workload implications. The optimal number of fractions was calculated based on the frequency of specific clinical conditions where radiotherapy is indicated and the evidence-based recommended number of fractions for each condition. Sensitivity analysis was performed to assess the impact of variables on the model. Of the 27 cancer sites, the optimal number of fractions for the first course of radiotherapy ranged from 0 to 23.3 per cancer patient, and 1.5 to 29.1 per treatment course. Brain, prostate and head and neck cancers had the highest average number of fractions per course. Overall, the optimal number of fractions was 9.4 per cancer patient (range 8.7-10.0) and 19.4 per course (range 18.0-20.7). These results provide valuable data for radiotherapy services planning and comparison with actual practice. The model can be easily adapted by inserting population-specific epidemiological data thus making it applicable to other jurisdictions. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
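Per the abstract, the model's arithmetic is a frequency-weighted sum of evidence-based fraction numbers. A sketch with invented numbers for a single hypothetical cancer site (not the paper's epidemiological data):

```python
# (indication, proportion of ALL patients with this cancer who have the
#  indication, evidence-based number of fractions) -- illustrative values only
indications = [
    ("palliative", 0.15, 1),       # e.g. single-fraction palliation
    ("post-operative", 0.20, 25),
    ("definitive", 0.13, 33),
]

treated = sum(p for _, p, _ in indications)           # radiotherapy utilisation rate
per_patient = sum(p * n for _, p, n in indications)   # fractions per cancer patient
per_course = per_patient / treated                    # fractions per treated course
print(round(treated, 2), round(per_patient, 2), round(per_course, 1))
```

Multiplying the per-patient figure by incidence then gives the fraction workload used for equipment planning; swapping in another jurisdiction's epidemiology changes only the proportions.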
Verifiable Adaptive Control with Analytical Stability Margins by Optimal Control Modification
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.
2010-01-01
This paper presents a verifiable model-reference adaptive control method based on an optimal control formulation for linear uncertain systems. A predictor model is formulated to enable a parameter estimation of the system parametric uncertainty. The adaptation is based on both the tracking error and predictor error. Using a singular perturbation argument, it can be shown that the closed-loop system tends to a linear time invariant model asymptotically under an assumption of fast adaptation. A stability margin analysis is given to estimate a lower bound of the time delay margin using a matrix measure method. Using this analytical method, the free design parameter n of the optimal control modification adaptive law can be determined to meet a specification of stability margin for verification purposes.
Optimal laser wavelength for efficient laser power converter operation over temperature
DOE Office of Scientific and Technical Information (OSTI.GOV)
Höhn, O., E-mail: oliver.hoehn@ise.fraunhofer.de; Walker, A. W.; Bett, A. W.
2016-06-13
A temperature dependent modeling study is conducted on a GaAs laser power converter to identify the optimal incident laser wavelength for optical power transmission. Furthermore, the respective temperature dependent maximal conversion efficiencies in the radiative limit as well as in a practically achievable limit are presented. The model is based on the transfer matrix method coupled to a two-diode model, and is calibrated to experimental data of a GaAs photovoltaic device over laser irradiance and temperature. Since the laser wavelength does not strongly influence the open circuit voltage of the laser power converter, the optimal laser wavelength is determined to be in the range where the external quantum efficiency is maximal, weighted by the photon flux of the laser.
Composite panel development at JPL
NASA Technical Reports Server (NTRS)
Mcelroy, Paul; Helms, Rich
1988-01-01
Parametric computer studies can be used in a cost-effective manner to determine optimized composite mirror panel designs. An InterDisciplinary computer Model (IDM) was created to aid in the development of high-precision reflector panels for LDR. The materials properties, thermal responses, structural geometries, and radio/optical precision are synergistically analyzed for specific panel designs. Promising panel designs are fabricated and tested so that comparison with panel test results can be used to verify performance prediction models and accommodate design refinement. The iterative approach of computer design and model refinement with performance testing and materials optimization has shown good results for LDR panels.
Razavi, Sonia M; Gonzalez, Marcial; Cuitiño, Alberto M
2015-04-30
We propose a general framework for determining optimal relationships for tensile strength of doubly convex tablets under diametrical compression. This approach is based on the observation that tensile strength is directly proportional to the breaking force and inversely proportional to a non-linear function of geometric parameters and materials properties. This generalization reduces to the analytical expression commonly used for flat faced tablets, i.e., Hertz solution, and to the empirical relationship currently used in the pharmaceutical industry for convex-faced tablets, i.e., Pitt's equation. Under proper parametrization, optimal tensile strength relationship can be determined from experimental results by minimizing a figure of merit of choice. This optimization is performed under the first-order approximation that a flat faced tablet and a doubly curved tablet have the same tensile strength if they have the same relative density and are made of the same powder, under equivalent manufacturing conditions. Furthermore, we provide a set of recommendations and best practices for assessing the performance of optimal tensile strength relationships in general. Based on these guidelines, we identify two new models, namely the general and mechanistic models, which are effective and predictive alternatives to the tensile strength relationship currently used in the pharmaceutical industry. Copyright © 2015 Elsevier B.V. All rights reserved.
Li, Mingjie; Zhou, Ping; Wang, Hong; ...
2017-09-19
As one of the most important units in the papermaking industry, the high consistency (HC) refining system is confronted with challenges such as improving pulp quality, energy saving, and emissions reduction in its operation processes. Here in this correspondence, an optimal operation of the HC refining system is presented using nonlinear multiobjective model predictive control strategies that aim at a set-point tracking objective for pulp quality, an economic objective, and a specific energy (SE) consumption objective, respectively. First, a set of input and output data at different times are employed to construct the subprocess model of the state process model for the HC refining system, and then the Wiener-type model is obtained by combining the mechanism model of Canadian Standard Freeness with the state process model, whose structures are determined based on the Akaike information criterion. Second, a multiobjective optimization strategy that simultaneously optimizes both the set-point tracking objective of pulp quality and SE consumption is proposed, using the NSGA-II approach to obtain the Pareto optimal set. Furthermore, targeting the set-point tracking objective of pulp quality, the economic objective, and the SE consumption objective, the sequential quadratic programming method is utilized to produce the optimal predictive controllers. The simulation results demonstrate that the proposed methods enable the HC refining system to provide better set-point tracking of pulp quality when these predictive controllers are employed. In addition, when the optimal predictive controllers are oriented toward the comprehensive economic objective and the SE consumption objective, they significantly reduce energy consumption.
Cheng, Xianfu; Lin, Yuqun
2014-01-01
The performance of the suspension system is one of the most important factors in vehicle design. For the double wishbone suspension system, conventional deterministic optimization does not consider deviations of the design parameters, so design sensitivity analysis and robust optimization design are proposed. In this study, the design parameters of the robust optimization are the positions of the key points, and the random factors are the uncertainties in manufacturing. A simplified model of the double wishbone suspension is established in the software ADAMS. Sensitivity analysis is used to determine the main design variables. Then the simulation experiment is arranged, and Latin hypercube design is adopted to find the initial points. The Kriging model is employed to fit the mean and variance of the quality characteristics according to the simulation results. Further, a particle swarm optimization method based on simple PSO is applied, and the tradeoff between the mean and deviation of performance is made to solve the robust optimization problem of the double wishbone suspension system.
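A sketch of the final step: simple PSO minimizing a robust objective that trades off the mean and spread of a quality characteristic under manufacturing noise. The toy objective and the fixed noise samples below are illustrative assumptions; the paper instead optimizes over a Kriging surrogate of an ADAMS model:

```python
import random

_r = random.Random(5)
NOISE = [(_r.gauss(0.0, 0.05), _r.gauss(0.0, 0.05)) for _ in range(30)]

def robust_cost(x):
    # mean plus spread of a toy quality characteristic under parameter noise
    vals = [(x[0] + dx - 1.0) ** 2 + (x[1] + dy) ** 2 for dx, dy in NOISE]
    mean = sum(vals) / len(vals)
    std = (sum((v - mean) ** 2 for v in vals) / len(vals)) ** 0.5
    return mean + std

def pso(f, bounds, n=20, iters=80, w=0.7, c1=1.5, c2=1.5, seed=9):
    """Simple PSO: inertia + cognitive + social velocity update."""
    r = random.Random(seed)
    dim = len(bounds)
    pos = [[r.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=f)[:]
    for _ in range(iters):
        for i in range(n):
            for j in range(dim):
                vel[i][j] = (w * vel[i][j]
                             + c1 * r.random() * (pbest[i][j] - pos[i][j])
                             + c2 * r.random() * (gbest[j] - pos[i][j]))
                pos[i][j] += vel[i][j]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pos[i]) < f(gbest):
                    gbest = pos[i][:]
    return gbest

gbest = pso(robust_cost, [(-5.0, 5.0)] * 2)
print(gbest, robust_cost(gbest))
```

Weighting the standard deviation in the cost is what pushes the swarm toward designs that are insensitive to manufacturing scatter, not merely good on average.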
NASA Astrophysics Data System (ADS)
Ushijima, Timothy T.; Yeh, William W.-G.
2013-10-01
An optimal experimental design algorithm is developed to select locations for a network of observation wells that provide maximum information about unknown groundwater pumping in a confined, anisotropic aquifer. The design uses a maximal information criterion that chooses, among competing designs, the design that maximizes the sum of squared sensitivities while conforming to specified design constraints. The formulated optimization problem is non-convex and contains integer variables necessitating a combinatorial search. Given a realistic large-scale model, the size of the combinatorial search required can make the problem difficult, if not impossible, to solve using traditional mathematical programming techniques. Genetic algorithms (GAs) can be used to perform the global search; however, because a GA requires a large number of calls to a groundwater model, the formulated optimization problem still may be infeasible to solve. As a result, proper orthogonal decomposition (POD) is applied to the groundwater model to reduce its dimensionality. Then, the information matrix in the full model space can be searched without solving the full model. Results from a small-scale test case show identical optimal solutions among the GA, integer programming, and exhaustive search methods. This demonstrates the GA's ability to determine the optimal solution. In addition, the results show that a GA with POD model reduction is several orders of magnitude faster in finding the optimal solution than a GA using the full model. The proposed experimental design algorithm is applied to a realistic, two-dimensional, large-scale groundwater problem. The GA converged to a solution for this large-scale problem.
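The POD step, extracting a low-dimensional basis from model snapshots so the GA can search without solving the full model, can be sketched with the SVD. The snapshot matrix below is synthetic (a few dominant modes plus noise), standing in for outputs of the groundwater model; dimensions and the energy threshold are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical snapshot matrix: 200 model states x 50 snapshots, built from
# 3 dominant modes, mimicking a groundwater model's low effective rank.
modes = rng.standard_normal((200, 3))
coeffs = rng.standard_normal((3, 50))
snapshots = modes @ coeffs + 1e-6 * rng.standard_normal((200, 50))

# POD: the left singular vectors of the snapshot matrix form the reduced basis.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s ** 2) / np.sum(s ** 2)
r = int(np.searchsorted(energy, 0.9999)) + 1   # modes for 99.99% of the energy
Phi = U[:, :r]

# A full state projects to r coefficients; the GA then searches in this
# r-dimensional space instead of the 200-dimensional full model space.
x_full = snapshots[:, 0]
x_red = Phi.T @ x_full
x_rec = Phi @ x_red
err = np.linalg.norm(x_full - x_rec) / np.linalg.norm(x_full)
print(r, "modes, relative reconstruction error", err)
```

Because the synthetic snapshots have only three underlying modes, the truncation recovers them almost exactly, which is the property that makes the reduced search orders of magnitude cheaper.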
Meng, Qing-chun; Rong, Xiao-xia; Zhang, Yi-min; Wan, Xiao-le; Liu, Yuan-yuan; Wang, Yu-zhi
2016-01-01
CO2 emission influences not only global climate change but also international economic and political situations. Thus, reducing the emission of CO2, a major greenhouse gas, has become a major issue in China and around the world as regards preserving the environmental ecology. Energy consumption from coal, oil, and natural gas is primarily responsible for the production of greenhouse gases and air pollutants such as SO2 and NOX, which are the main air pollutants in China. In this study, a mathematical multi-objective optimization method was adopted to analyze the collaborative emission reduction of three kinds of gases on the basis of their common restraints in different ways of energy consumption to develop an economic, clean, and efficient scheme for energy distribution. The first part introduces the background research, the collaborative emission reduction for three kinds of gases, the multi-objective optimization, the main mathematical modeling, and the optimization method. The second part discusses the four mathematical tools utilized in this study, which include the Granger causality test to analyze the causality between air quality and pollutant emission, a function analysis to determine the quantitative relation between energy consumption and pollutant emission, a multi-objective optimization to set up the collaborative optimization model that considers energy consumption, and an optimality condition analysis for the multi-objective optimization model to design the optimal-pole algorithm and obtain an efficient collaborative reduction scheme. In the empirical analysis, the data of pollutant emission and final consumption of energies of Tianjin in 1996-2012 was employed to verify the effectiveness of the model and analyze the efficient solution and the corresponding dominant set. In the last part, several suggestions for collaborative reduction are recommended and the drawn conclusions are stated.
Machine Learning Methods for Analysis of Metabolic Data and Metabolic Pathway Modeling
Cuperlovic-Culf, Miroslava
2018-01-01
Machine learning uses experimental data to optimize clustering or classification of samples or features, or to develop, augment or verify models that can be used to predict behavior or properties of systems. It is expected that machine learning will help provide actionable knowledge from a variety of big data including metabolomics data, as well as results of metabolism models. A variety of machine learning methods has been applied in bioinformatics and metabolism analyses including self-organizing maps, support vector machines, the kernel machine, Bayesian networks or fuzzy logic. To a lesser extent, machine learning has also been utilized to take advantage of the increasing availability of genomics and metabolomics data for the optimization of metabolic network models and their analysis. In this context, machine learning has aided the development of metabolic networks, the calculation of parameters for stoichiometric and kinetic models, as well as the analysis of major features in the model for the optimal application of bioreactors. Examples of this very interesting, albeit highly complex, application of machine learning for metabolism modeling will be the primary focus of this review presenting several different types of applications for model optimization, parameter determination or system analysis using models, as well as the utilization of several different types of machine learning technologies. PMID:29324649
Theory of inhomogeneous quantum systems. III. Variational wave functions for Fermi fluids
NASA Astrophysics Data System (ADS)
Krotscheck, E.
1985-04-01
We develop a general variational theory for inhomogeneous Fermi systems such as the electron gas in a metal surface, the surface of liquid 3He, or simple models of heavy nuclei. The ground-state wave function is expressed in terms of two-body correlations, a one-body attenuation factor, and a model-system Slater determinant. Massive partial summations of cluster expansions are performed by means of Born-Green-Yvon and hypernetted-chain techniques. An optimal single-particle basis is generated by a generalized Hartree-Fock equation in which the two-body correlations screen the bare interparticle interaction. The optimization of the pair correlations leads to a state-averaged random-phase-approximation equation and a strictly microscopic determination of the particle-hole interaction.
Microgravity vibration isolation: Optimal preview and feedback control
NASA Technical Reports Server (NTRS)
Hampton, R. D.; Knospe, C. R.; Grodsinsky, C. M.; Allaire, P. E.; Lewis, D. W.
1992-01-01
In order to achieve adequate low-frequency vibration isolation for certain space experiments, an active control is needed due to inherent passive-isolator limitations. Proposed here are five possible state-space models for a one-dimensional vibration isolation system with a quadratic performance index. The five models are subsets of a general set of nonhomogeneous state-space equations which includes disturbance terms. An optimal control is determined, using a differential equations approach, for this class of problems. This control is expressed in terms of constant Linear Quadratic Regulator (LQR) feedback gains and constant feedforward (preview) gains. The gains can easily be determined numerically. They result in a robust controller and offer substantial improvements over a control that uses standard LQR feedback alone.
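The constant LQR feedback gains mentioned above can be computed numerically by iterating the discrete Riccati recursion. This is a minimal sketch: the plant matrices are hypothetical one-dimensional isolator numbers, not the paper's models, and the preview/feedforward gains are omitted for brevity.

```python
import numpy as np

# One-dimensional isolator sketch (illustrative numbers): state x = [pos, vel],
# control u = actuator force, discretized with step dt.
dt = 0.01
A = np.array([[1.0, dt], [-0.5 * dt, 1.0 - 0.02 * dt]])   # toy plant
B = np.array([[0.0], [dt]])
Q = np.diag([100.0, 1.0])   # penalize position strongly
R = np.array([[0.1]])       # control effort weight

# Value-iterate the discrete algebraic Riccati equation to convergence.
P = Q.copy()
for _ in range(5000):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P_next = Q + A.T @ P @ (A - B @ K)
    if np.max(np.abs(P_next - P)) < 1e-10:
        P = P_next
        break
    P = P_next
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # constant feedback gain

# LQR guarantees the closed loop A - B K is stable (spectral radius < 1).
rho = max(abs(np.linalg.eigvals(A - B @ K)))
print("feedback gain", K, "closed-loop spectral radius", rho)
```

The feedforward (preview) gains of the paper would be obtained from the same Riccati solution together with a model of the disturbance terms.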
Evaluating the effects of real power losses in optimal power flow based storage integration
Castillo, Anya; Gayme, Dennice
2017-03-27
This study proposes a DC optimal power flow (DCOPF) with losses formulation (the ℓ-DCOPF+S problem) and uses it to investigate the role of real power losses in OPF-based grid-scale storage integration. We derive the ℓ-DCOPF+S problem by augmenting a standard DCOPF with storage (DCOPF+S) problem to include quadratic real power loss approximations. This procedure leads to a multi-period nonconvex quadratically constrained quadratic program, which we prove can be solved to optimality using either a semidefinite or second order cone relaxation. Our approach has some important benefits over existing models. It is more computationally tractable than ACOPF with storage (ACOPF+S) formulations, and the provably exact convex relaxations guarantee that an optimal solution can be attained for a feasible problem. Adding loss approximations to a DCOPF+S model leads to a more accurate representation of locational marginal prices, which have been shown to be critical to determining optimal storage dispatch and siting in prior ACOPF+S based studies. Case studies demonstrate the improved accuracy of the ℓ-DCOPF+S model over a DCOPF+S model and the computational advantages over an ACOPF+S formulation.
NASA Astrophysics Data System (ADS)
Kuhn, A. M.; Fennel, K.; Bianucci, L.
2016-02-01
A key feature of the North Atlantic Ocean's biological dynamics is the annual phytoplankton spring bloom. In the region comprising the continental shelf and adjacent deep ocean of the northwest North Atlantic, we identified two patterns of bloom development: 1) locations with cold temperatures and deep winter mixed layers, where the spring bloom peaks around April and the annual chlorophyll cycle has a large amplitude, and 2) locations with warmer temperatures and shallow winter mixed layers, where the spring bloom peaks earlier in the year, sometimes indiscernible from the fall bloom. These patterns result from a combination of limiting environmental factors and interactions among planktonic groups with different optimal requirements. Simple models that represent the ecosystem with a single phytoplankton (P) and a single zooplankton (Z) group are challenged to reproduce these ecological interactions. Here we investigate the effect that added complexity has on determining spatio-temporal chlorophyll. We compare two ecosystem models, one that contains one P and one Z group, and one with two P and three Z groups. We consider three types of changes in complexity: 1) added dependencies among variables (e.g., temperature dependent rates), 2) modified structural pathways, and 3) added pathways. Subsets of the most sensitive parameters are optimized in each model to replicate observations in the region. For computational efficiency, the parameter optimization is performed using 1D surrogates of a 3D model. We evaluate how model complexity affects model skill, and whether the optimized parameter sets found for each model modify the interpretation of ecosystem functioning. Spatial differences in the parameter sets that best represent different areas hint at the existence of different ecological communities or at physical-biological interactions that are not represented in the simplest model. 
Our methodology emphasizes the combined use of observations, 1D models to help identify patterns, and 3D models able to simulate the environment more realistically, as a means to acquire predictive understanding of the ocean's ecology.
Results and Error Estimates from GRACE Forward Modeling over Greenland, Canada, and Alaska
NASA Astrophysics Data System (ADS)
Bonin, J. A.; Chambers, D. P.
2012-12-01
Forward modeling using a weighted least squares technique allows GRACE information to be projected onto a pre-determined collection of local basins. This decreases the impact of spatial leakage, allowing estimates of mass change to be better localized. The technique is especially valuable where models of current-day mass change are poor, such as over Greenland and Antarctica. However, the accuracy of the forward model technique has not been determined, nor is it known how the distribution of the local basins affects the results. We use a "truth" model composed of hydrology and ice-melt slopes as an example case, to estimate the uncertainties of this forward modeling method and expose those design parameters which may result in an incorrect high-resolution mass distribution. We then apply these optimal parameters in a forward model estimate created from RL05 GRACE data. We compare the resulting mass slopes with the expected systematic errors from the simulation, as well as GIA and basic trend-fitting uncertainties. We also consider whether specific regions (such as Ellesmere Island and Baffin Island) can be estimated reliably using our optimal basin layout.
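The weighted least squares projection onto predetermined basins can be sketched as follows. This is a synthetic stand-in, not GRACE processing: the basin response patterns, weights, and noise level are illustrative assumptions, but the estimator has the standard weighted-normal-equations form.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy setup: each of 4 "basins" has a spatial response pattern over 30 grid
# cells; the observed field is the patterns times the true basin masses plus noise.
n_cells, n_basins = 30, 4
G = np.abs(rng.standard_normal((n_cells, n_basins)))   # basin response patterns
true_mass = np.array([5.0, -2.0, 0.5, 3.0])            # e.g. cm water equivalent
W = np.diag(rng.uniform(0.5, 2.0, n_cells))            # per-cell weights
obs = G @ true_mass + 0.01 * rng.standard_normal(n_cells)

# Weighted least squares: solve (G^T W G) m = G^T W d for the basin masses m.
m = np.linalg.solve(G.T @ W @ G, G.T @ W @ obs)
print("estimated basin masses", m)
```

The forward-model errors the abstract discusses arise when the assumed patterns in G do not match the true mass distribution; with a correct G, the estimate recovers the basin masses up to the observation noise.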
NASA Astrophysics Data System (ADS)
Roslindar Yaziz, Siti; Zakaria, Roslinazairimah; Hura Ahmad, Maizah
2017-09-01
The Box-Jenkins-GARCH model has been shown to be a promising tool for forecasting highly volatile time series. In this study, a framework for determining the optimal sample size using the Box-Jenkins model with GARCH is proposed for practical application in analysing and forecasting highly volatile data. The proposed framework is applied to the daily world gold price series from 1971 to 2013. The data are divided into 12 different sample sizes (from 30 to 10200). Each sample is tested using different combinations of the hybrid Box-Jenkins-GARCH model. Our study shows that the optimal sample size for forecasting the gold price using the hybrid-model framework is 1250 observations, a 5-year sample. Hence, the empirical results of the model selection criteria and 1-step-ahead forecasting evaluations suggest that the latest 12.25% (5 years) of the 10200 observations is sufficient for the Box-Jenkins-GARCH model, with forecasting performance similar to that obtained using the full 41-year data.
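The sample-size comparison can be illustrated with a much simpler stand-in: fitting an AR(1) by OLS on trailing windows of different lengths and comparing 1-step-ahead accuracy. The actual study fits full Box-Jenkins models with GARCH; the synthetic series, AR(1) surrogate, and window lengths below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic AR(1) series standing in for the volatile price data.
n, phi = 3000, 0.6
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi * y[t - 1] + rng.standard_normal()

def one_step_rmse(series, window):
    """Refit AR(1) by OLS on each trailing window, forecast 1 step ahead,
    and report the out-of-sample RMSE over the whole series."""
    errs = []
    for t in range(window, len(series) - 1):
        x = series[t - window:t]
        z = series[t - window + 1:t + 1]
        phi_hat = (x @ z) / (x @ x)          # OLS slope through the origin
        errs.append(series[t + 1] - phi_hat * series[t])
    return np.sqrt(np.mean(np.square(errs)))

for w in (30, 250, 1250):
    print(w, round(one_step_rmse(y, w), 3))
```

Since the innovation standard deviation is 1, the RMSE for an adequate window should sit close to 1; comparing RMSE across windows is the same kind of evidence the study uses to pick its 1250-observation sample.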
Behavioral Modeling of Adversaries with Multiple Objectives in Counterterrorism.
Mazicioglu, Dogucan; Merrick, Jason R W
2018-05-01
Attacker/defender models have primarily assumed that each decisionmaker optimizes the cost of the damage inflicted and its economic repercussions from their own perspective. Two streams of recent research have sought to extend such models. One stream suggests that it is more realistic to consider attackers with multiple objectives, but this research has not included the adaption of the terrorist with multiple objectives to defender actions. The other stream builds off experimental studies that show that decisionmakers deviate from optimal rational behavior. In this article, we extend attacker/defender models to incorporate multiple objectives that a terrorist might consider in planning an attack. This includes the tradeoffs that a terrorist might consider and their adaption to defender actions. However, we must also consider experimental evidence of deviations from the rationality assumed in the commonly used expected utility model in determining such adaption. Thus, we model the attacker's behavior using multiattribute prospect theory to account for the attacker's multiple objectives and deviations from rationality. We evaluate our approach by considering an attacker with multiple objectives who wishes to smuggle radioactive material into the United States and a defender who has the option to implement a screening process to hinder the attacker. We discuss the problems with implementing such an approach, but argue that research in this area must continue to avoid misrepresenting terrorist behavior in determining optimal defensive actions. © 2017 Society for Risk Analysis.
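The prospect-theory value function underlying such a multiattribute model can be sketched with the standard Tversky-Kahneman form. This is a hedged illustration: the attributes, weights, and additive combination below are assumptions for the sketch, not the article's calibrated model.

```python
# Prospect-theory value function (Tversky-Kahneman form); the parameter
# values alpha, beta, lambda are the commonly cited estimates, used here
# purely for illustration.
def pt_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Gains are valued concavely; losses convexly and with loss aversion."""
    return x ** alpha if x >= 0 else -lam * ((-x) ** beta)

def multiattribute_value(outcomes, weights):
    """Weighted sum of per-attribute prospect values (additive assumption)."""
    return sum(w * pt_value(x) for w, x in zip(weights, outcomes))

# Hypothetical attacker comparing two plans on (damage achieved, resources lost):
plan_a = multiattribute_value((10.0, -3.0), weights=(0.7, 0.3))
plan_b = multiattribute_value((6.0, -1.0), weights=(0.7, 0.3))
print(plan_a, plan_b)
```

The loss-aversion coefficient is what makes a behavioral attacker weigh resource losses more heavily than an expected-utility attacker would, which in turn changes the defender's optimal screening decision.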
Simulation based analysis of laser beam brazing
NASA Astrophysics Data System (ADS)
Dobler, Michael; Wiethop, Philipp; Schmid, Daniel; Schmidt, Michael
2016-03-01
Laser beam brazing is a well-established joining technology in car body manufacturing, with main applications in the joining of divided tailgates and the joining of roof and side panels. A key advantage of laser brazed joints is the seam's visual quality, which satisfies the highest requirements. However, the laser beam brazing process is very complex and its process dynamics are only partially understood. In order to gain deeper knowledge of the laser beam brazing process, to determine optimal process parameters, and to test process variants, a transient three-dimensional simulation model of laser beam brazing is developed. This model takes into account the energy input, heat transfer, and the fluid and wetting dynamics that lead to the formation of the brazing seam. The simulation model is validated by metallographic analysis and thermocouple measurements for different parameter sets of the brazing process. These results show that the multi-physical simulation model not only can be used to gain insight into the laser brazing process but also offers the possibility of process optimization in industrial applications. The model's capability of determining optimal process parameters is demonstrated using the laser power as an example. Small deviations in the energy input can affect the brazing results significantly; therefore, the simulation model is used to analyze the effect of the lateral laser beam position on the energy input and the resulting brazing seam.
Bidirectional optimization of the melting spinning process.
Liang, Xiao; Ding, Yongsheng; Wang, Zidong; Hao, Kuangrong; Hone, Kate; Wang, Huaping
2014-02-01
A bidirectional optimizing approach for the melting spinning process based on an immune-enhanced neural network is proposed. The proposed bidirectional model can not only reveal the internal nonlinear relationship between the process configuration and the quality indices of the fibers as the final product, but also provide a tool for engineers to develop new fiber products with expected quality specifications. A neural network is taken as the basis for the bidirectional model, and an immune component is introduced to enlarge the search scope of the solution field, so that the neural network has a larger possibility of finding an appropriate and reasonable solution and the prediction error can therefore be eliminated. The proposed intelligent model can also help determine what kind of process configuration should be used in order to produce satisfactory fiber products. To make the proposed model practical for manufacturing, a software platform is developed. Simulation results show that the proposed model can eliminate the approximation error raised by the neural network-based optimizing model, owing to the extension of the search scope by the artificial immune mechanism. Meanwhile, the proposed model with the corresponding software can conduct optimization in two directions, namely process optimization and category development, and the corresponding results outperform those of an ordinary neural network-based intelligent model. It is also shown that the proposed model has the potential to act as a valuable tool from which the engineers and decision makers of the spinning process could benefit.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Starling, K.E.; Mallinson, R.G.; Li, M.H.
The objective of this research is to examine the relationship between the calorimetric properties of coal fluids and their molecular functional group composition. Coal fluid samples which have had their calorimetric properties measured are characterized using proton NMR, IR, and elemental analysis. These characterizations are then used in a chemical structural model to determine the composition of the coal fluid in terms of the important molecular functional groups. These functional groups are particularly important in determining the intramolecular based properties of a fluid, such as ideal gas heat capacities. Correlational frameworks for ideal gas heat capacities are then examined within an existing equation of state methodology to determine an optimal correlation. The optimal correlation for obtaining the characterization/chemical structure information and the sensitivity of the correlation to the characterization and structural model is examined. 8 refs.
Apparatus and method for controlling autotroph cultivation
Fuxman, Adrian M; Tixier, Sebastien; Stewart, Gregory E; Haran, Frank M; Backstrom, Johan U; Gerbrandt, Kelsey
2013-07-02
A method includes receiving at least one measurement of the dissolved carbon dioxide concentration of a fluid mixture containing an autotrophic organism. The method also includes determining an adjustment to one or more manipulated variables using the at least one measurement. The method further includes generating one or more signals to modify the one or more manipulated variables based on the determined adjustment. The one or more manipulated variables could include a carbon dioxide flow rate, an air flow rate, a water temperature, and an agitation level for the mixture. At least one model relates the dissolved carbon dioxide concentration to the one or more manipulated variables, and the adjustment could be determined by using the at least one model to drive the dissolved carbon dioxide concentration to at least one target that optimizes a goal function. The goal function could be to optimize biomass growth rate, nutrient removal, and/or lipid production.
Utility of coupling nonlinear optimization methods with numerical modeling software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Murphy, M.J.
1996-08-05
Results of using GLO (Global Local Optimizer), a general purpose nonlinear optimization software package for investigating multi-parameter problems in science and engineering, are discussed. The package consists of the modular optimization control system (GLO), a graphical user interface (GLO-GUI), a pre-processor (GLO-PUT), a post-processor (GLO-GET), and the nonlinear optimization software modules GLOBAL and LOCAL. GLO is designed for controlling, and coupling easily to, any scientific software application. GLO runs the optimization module and the scientific software application in an iterative loop. At each iteration, the optimization module defines new values for the set of parameters being optimized. GLO-PUT inserts the new parameter values into the input file of the scientific application. GLO runs the application with the new parameter values. GLO-GET determines the value of the objective function by extracting the results of the analysis and comparing them to the desired result. GLO continues to run the scientific application until it finds the "best" set of parameters by minimizing (or maximizing) the objective function. An example problem showing the optimization of a material model is presented (Taylor cylinder impact test).
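The optimize-simulate loop that GLO implements can be sketched generically. Everything here is hypothetical: the "scientific application" is a two-parameter toy model, the comparison step stands in for GLO-GET, and a simple shrinking random search stands in for the GLOBAL/LOCAL modules.

```python
import numpy as np

rng = np.random.default_rng(4)

# Stand-in "scientific application" (hypothetical): a material response that
# depends on two parameters; the desired result plays the role of test data.
def run_simulation(params):
    a, b = params
    t = np.linspace(0.0, 1.0, 20)
    return a * np.exp(-b * t)

desired = run_simulation((2.0, 3.0))   # pretend this is the experimental record

def objective(params):
    """GLO-GET step: compare simulation output to the desired result."""
    return float(np.sum((run_simulation(params) - desired) ** 2))

# GLO-style loop: the optimizer proposes parameters, the application is rerun,
# and the best set is kept. Here: a (1+1) random search with a shrinking step.
best = np.array([1.0, 1.0])
best_f = objective(best)
step = 1.0
for _ in range(400):
    trial = best + step * rng.standard_normal(2)
    f = objective(trial)
    if f < best_f:
        best, best_f = trial, f
    step *= 0.99
print("best parameters", best, "objective", best_f)
```

The structure is the point: any optimizer can be swapped into the propose step, and any simulator into `run_simulation`, which is exactly the decoupling GLO's pre- and post-processors provide.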
NASA Astrophysics Data System (ADS)
Li, J. C.; Gong, B.; Wang, H. G.
2016-08-01
Optimal development of shale gas fields involves designing the most productive fracturing network for hydraulic stimulation and operating wells appropriately throughout the production period. A hydraulic fracturing network design (well placement, number of fracturing stages, and fracture lengths) is defined by specifying a set of integer ordered blocks in which to drill wells and create fractures in a discrete shale gas reservoir model. The well control variables, such as bottom hole pressures or production rates, are real valued. Shale gas development problems can therefore be mathematically formulated as mixed-integer optimization models. A shale gas reservoir simulator is used to evaluate the production performance of a hydraulic fracturing and well control plan. Finding the optimal fracturing design and well operation is challenging because the problem is a mixed-integer optimization problem and entails computationally expensive reservoir simulation. A dynamic simplex interpolation-based alternate subspace (DSIAS) search method is applied to the mixed-integer optimization problems associated with shale gas development projects. The optimization performance is demonstrated on the example case of the development of the Barnett Shale field. The optimization results of DSIAS are compared with those of a pattern search algorithm.
Alternatives for jet engine control
NASA Technical Reports Server (NTRS)
Leake, R. J.; Sain, M. K.
1978-01-01
General goals of the research were classified into two categories. The first category involves the use of modern multivariable frequency domain methods for control of engine models in the neighborhood of a quiescent point. The second category involves the use of nonlinear modelling and optimization techniques for control of engine models over a more extensive part of the flight envelope. In the frequency domain category, works were published in the areas of low-interaction design, polynomial design, and multiple setpoint studies. A number of these ideas progressed to the point at which they are starting to attract practical interest. In the nonlinear category, advances were made both in engine modelling and in the details associated with software for the determination of time optimal controls. Nonlinear models for a two-spool turbofan engine were expanded and refined, and a promising new approach to automatic model generation was placed under study. A two-time-scale scheme was developed for two-dimensional dynamic programming, and an outward spiral sweep technique greatly sped up convergence times in time optimal calculations.
A search game model of the scatter hoarder's problem
Alpern, Steve; Fokkink, Robbert; Lidbetter, Thomas; Clayton, Nicola S.
2012-01-01
Scatter hoarders are animals (e.g. squirrels) who cache food (nuts) over a number of sites for later collection. A certain minimum amount of food must be recovered, possibly after pilfering by another animal, in order to survive the winter. An optimal caching strategy is one that maximizes the survival probability, given worst case behaviour of the pilferer. We modify certain ‘accumulation games’ studied by Kikuta & Ruckle (2000 J. Optim. Theory Appl.) and Kikuta & Ruckle (2001 Naval Res. Logist.), which modelled the problem of optimal diversification of resources against catastrophic loss, to include the depth at which the food is hidden at each caching site. Optimal caching strategies can then be determined as equilibria in a new ‘caching game’. We show how the distribution of food over sites and the site-depths of the optimal caching varies with the animal's survival requirements and the amount of pilfering. We show that in some cases, ‘decoy nuts’ are required to be placed above other nuts that are buried further down at the same site. Methods from the field of search games are used. Some empirically observed behaviour can be shown to be optimal in our model. PMID:22012971
Multi-objective optimization to predict muscle tensions in a pinch function using genetic algorithm
NASA Astrophysics Data System (ADS)
Bensghaier, Amani; Romdhane, Lotfi; Benouezdou, Fethi
2012-03-01
This work is focused on the determination of the thumb and the index finger muscle tensions in a tip pinch task. A biomechanical model of the musculoskeletal system of the thumb and the index finger is developed. Due to the assumptions made in carrying out the biomechanical model, the formulated force analysis problem is indeterminate leading to an infinite number of solutions. Thus, constrained single and multi-objective optimization methodologies are used in order to explore the muscular redundancy and to predict optimal muscle tension distributions. Various models are investigated using the optimization process. The basic criteria to minimize are the sum of the muscle stresses, the sum of individual muscle tensions and the maximum muscle stress. The multi-objective optimization is solved using a Pareto genetic algorithm to obtain non-dominated solutions, defined as the set of optimal distributions of muscle tensions. The results show the advantage of the multi-objective formulation over the single objective one. The obtained solutions are compared to those available in the literature demonstrating the effectiveness of our approach in the analysis of the fingers musculoskeletal systems when predicting muscle tensions.
A Fast Procedure for Optimizing Thermal Protection Systems of Re-Entry Vehicles
NASA Astrophysics Data System (ADS)
Ferraiuolo, M.; Riccio, A.; Tescione, D.; Gigliotti, M.
The aim of the present work is to introduce a fast procedure to optimize thermal protection systems for re-entry vehicles subjected to high thermal loads. A simplified one-dimensional optimization process, performed in order to find the optimum design variables (lengths, sections etc.), is the first step of the proposed design procedure. Simultaneously, the most suitable materials able to sustain high temperatures and meeting the weight requirements are selected and positioned within the design layout. In this stage of the design procedure, simplified (generalized plane strain) FEM models are used when boundary and geometrical conditions allow the reduction of the degrees of freedom. Those simplified local FEM models can be useful because they are time-saving and very simple to build; they are essentially one dimensional and can be used for optimization processes in order to determine the optimum configuration with regard to weight, temperature and stresses. A triple-layer and a double-layer body, subjected to the same aero-thermal loads, have been optimized to minimize the overall weight. Full two and three-dimensional analyses are performed in order to validate those simplified models. Thermal-structural analyses and optimizations are executed by adopting the Ansys FEM code.
A Goal Programming Optimization Model for The Allocation of Liquid Steel Production
NASA Astrophysics Data System (ADS)
Hapsari, S. N.; Rosyidi, C. N.
2018-03-01
This research was conducted in one of the largest steel companies in Indonesia, which has several production units and produces a wide range of steel products. One of the company's important products is billet steel. The company has four Electric Arc Furnaces (EAFs) which produce liquid steel that must be processed further into billet steel. The billet steel plant needs to make its production process more efficient to increase productivity. The management has five goals to be achieved, and hence the optimal allocation of liquid steel production is needed to achieve those goals. In this paper, a goal programming optimization model is developed to determine the optimal allocation of liquid steel production in each EAF, satisfying demand in three periods and the company goals, namely maximizing the volume of production, minimizing the cost of raw materials, minimizing maintenance costs, maximizing sales revenues, and maximizing production capacity. From the results of the optimization, only the goal of maximizing production capacity cannot achieve its target. However, the model developed in this paper can allocate liquid steel optimally, so that the allocation of production does not exceed the maximum capacity of machine working hours and the maximum production capacity.
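As an illustration of the goal programming idea, the sketch below minimizes a weighted sum of undesirable deviations from a demand goal and a cost goal by brute-force enumeration over discrete allocations. All furnace data, targets and weights are hypothetical, and a real model of this size would use an LP/goal-programming solver rather than enumeration:

```python
import itertools

# Hypothetical data: four EAFs, capacity (tonnes) and unit raw-material cost.
capacity = [60, 50, 40, 30]
unit_cost = [9.0, 8.5, 10.0, 9.5]
demand = 120          # tonnes of liquid steel required this period
cost_goal = 1100.0    # target raw-material spend

def goal_deviation(alloc):
    """Weighted sum of the undesirable deviations from the two goals."""
    total = sum(alloc)
    cost = sum(a * c for a, c in zip(alloc, unit_cost))
    under_demand = max(0, demand - total)         # d1-: unmet demand
    over_cost = max(0.0, cost - cost_goal)        # d2+: cost overrun
    return 10.0 * under_demand + 1.0 * over_cost  # demand goal weighted higher

# Enumerate allocations in 10-tonne steps, respecting furnace capacities.
grids = [range(0, c + 1, 10) for c in capacity]
best = min(itertools.product(*grids), key=goal_deviation)
print(best, goal_deviation(best))
```

With these toy numbers both goals can be met simultaneously, so the minimal deviation is zero; in the paper's case one goal (production capacity) remains unattained, which is exactly what the deviation variables expose.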
LED light design method for high contrast and uniform illumination imaging in machine vision.
Wu, Xiaojun; Gao, Guangming
2018-03-01
In machine vision, illumination is critical in determining the complexity of the inspection algorithms. Proper lighting yields clear and sharp images with the highest contrast and low noise between the object of interest and the background, which helps the target to be located, measured, or inspected. In contrast to the empirical trial-and-error convention of selecting off-the-shelf LED lights in machine vision, an optimization algorithm for LED light design is proposed in this paper. It is composed of a contrast optimization model and a uniform illumination technology for non-normal incidence (UINI). The contrast optimization model is built from the surface reflection characteristics, e.g., the roughness, the refractive index, and the light direction, to maximize the contrast between the features of interest and the background. The UINI preserves the uniformity of the lighting optimized by the contrast model. The simulation and experimental results demonstrate that the optimization algorithm is effective and suitable for producing images with the highest contrast and uniformity, which offers useful guidance for the design of LED illumination systems in machine vision.
Two retailer-supplier supply chain models with default risk under trade credit policy.
Wu, Chengfeng; Zhao, Qiuhong
2016-01-01
The purpose of this paper is to formulate two uncooperative replenishment models in which demand and default risk are functions of the trade credit period, i.e., a Nash equilibrium model and a supplier-Stackelberg model. Firstly, we present the optimal results of decentralized decision and centralized decision without trade credit. Secondly, we derive the existence and uniqueness conditions of the optimal solutions under the two games, respectively. Moreover, we present a set of theorems and a corollary to determine the optimal solutions. Finally, we provide an example and a sensitivity analysis to illustrate the proposed strategy and optimal solutions. The sensitivity analysis reveals that the total profits of the supply chain under the two games are both better than the results under the centralized decision, provided the optimal trade credit period is not too short. It also reveals that the size of the trade credit period, demand, the retailer's profit and the supplier's profit are strongly related to the increasing demand coefficient, wholesale price, default risk coefficient and production cost. The major contribution of the paper is a comprehensive comparison between the results of decentralized and centralized decisions without trade credit and the Nash equilibrium and supplier-Stackelberg models with trade credit, yielding some interesting managerial insights and practical implications.
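A Nash equilibrium of the kind analyzed here can be located numerically as a fixed point of the players' best-response maps. The sketch below uses a generic Cournot-style quantity game with a closed-form best response, purely to show the mechanics; it is not the paper's trade-credit model, and all numbers are illustrative:

```python
# Best-response iteration for a two-player quantity game (Cournot duopoly).
# Inverse demand P = a - (q1 + q2), common unit cost c: illustrative values.
a, c = 100.0, 10.0

def best_response(q_other):
    # argmax over q of (a - q - q_other - c) * q  =>  q = (a - c - q_other) / 2
    return max(0.0, (a - c - q_other) / 2.0)

q1 = q2 = 0.0
for _ in range(100):              # iterate until the pair stops moving
    q1, q2 = best_response(q2), best_response(q1)

print(round(q1, 4), round(q2, 4))  # analytic equilibrium: (a - c) / 3 = 30
```

The same fixed-point logic applies whenever each player's problem has a unique optimum, which is what the paper's existence and uniqueness conditions guarantee.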
Li, Shuangyan; Li, Xialian; Zhang, Dezhi; Zhou, Lingyun
2017-01-01
This study develops an optimization model to integrate facility location and inventory control for a three-level distribution network consisting of a supplier, multiple distribution centers (DCs), and multiple retailers. The integrated model addressed in this study simultaneously determines three types of decisions: (1) facility location (optimal number, location, and size of DCs); (2) allocation (assignment of suppliers to located DCs and retailers to located DCs, and corresponding optimal transport mode choices); and (3) inventory control decisions on order quantities, reorder points, and amount of safety stock at each retailer and opened DC. A mixed-integer programming model is presented, which considers the carbon emission taxes, multiple transport modes, stochastic demand, and replenishment lead time. The goal is to minimize the total cost, which covers the fixed costs of logistics facilities, inventory, transportation, and CO2 emission tax charges. The aforementioned optimal model was solved using commercial software LINGO 11. A numerical example is provided to illustrate the applications of the proposed model. The findings show that carbon emission taxes can significantly affect the supply chain structure, inventory level, and carbon emission reduction levels. The delay rate directly affects the replenishment decision of a retailer. PMID:28103246
An Energy Integrated Dispatching Strategy of Multi- energy Based on Energy Internet
NASA Astrophysics Data System (ADS)
Jin, Weixia; Han, Jun
2018-01-01
The energy internet is a new paradigm of energy use that achieves energy efficiency and low cost by scheduling a variety of different forms of energy. Particle swarm optimization (PSO) is an advanced algorithm with few parameters, high computational precision and fast convergence; by tuning the parameters ω, c1 and c2, its convergence speed and calculation accuracy can be improved further. The objective of the optimization model is the lowest fuel cost while meeting the electricity, heating and cooling loads after all available renewable energy has been absorbed. Because the energy structure and prices differ between regions, the optimization strategy must be determined according to both the algorithm and the model.
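A minimal PSO with the inertia weight ω and acceleration coefficients c1, c2 can be sketched as follows. The sphere function stands in for the fuel-cost objective, whose actual form depends on the regional energy structure and prices:

```python
import random

def pso(f, dim=2, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer: w is the inertia weight, c1/c2 the
    cognitive and social acceleration coefficients."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = pbest[pbest_val.index(min(pbest_val))][:]   # global best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (g[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:                  # update personal best
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < f(g):                      # update global best
                    g = pos[i][:]
    return g, f(g)

# Sphere function as a stand-in for the (region-specific) fuel-cost objective.
best, best_val = pso(lambda x: sum(v * v for v in x))
print(best_val)
```

Increasing ω favours exploration while larger c1/c2 accelerate convergence toward the personal and global bests, which is the tuning trade-off the abstract refers to.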
Properties of nucleon resonances by means of a genetic algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fernandez-Ramirez, C.; Moya de Guerra, E.; Instituto de Estructura de la Materia, CSIC, Serrano 123, E-28006 Madrid
2008-06-15
We present an optimization scheme that employs a genetic algorithm (GA) to determine the properties of low-lying nucleon excitations within a realistic photo-pion production model based upon an effective Lagrangian. We show that with this modern optimization technique it is possible to reliably assess the parameters of the resonances and the associated error bars, as well as to identify weaknesses in the models. To illustrate the problems the optimization process may encounter, we provide results obtained for the nucleon resonances Δ(1230) and Δ(1700). The former can be easily isolated and thus has been studied in depth, while the latter is not as well known experimentally.
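The parameter-fitting role of the GA can be sketched with a toy χ² minimization: a small real-coded GA recovering the two parameters of a synthetic linear "model" from noise-free data. The operators and settings (truncation selection, mean crossover, Gaussian mutation) are illustrative, not those of the paper:

```python
import random

rng = random.Random(42)
xs = [0.1 * i for i in range(20)]
ys = [2.0 * x + 1.0 for x in xs]        # synthetic "data", true params (2, 1)

def chi2(p):
    """Sum of squared residuals between data and the candidate model."""
    a, b = p
    return sum((y - (a * x + b)) ** 2 for x, y in zip(xs, ys))

pop = [[rng.uniform(-5, 5), rng.uniform(-5, 5)] for _ in range(40)]
for gen in range(120):
    pop.sort(key=chi2)
    elite = pop[:10]                     # truncation selection keeps the best
    children = []
    while len(children) < 30:
        p1, p2 = rng.sample(elite, 2)    # mean crossover + Gaussian mutation
        children.append([(g1 + g2) / 2 + rng.gauss(0, 0.1)
                         for g1, g2 in zip(p1, p2)])
    pop = elite + children
best = min(pop, key=chi2)
print([round(g, 2) for g in best], chi2(best))
```

In the paper the same machinery drives a far more expensive objective (the photo-pion production model), and the spread of the final population is one handle on the parameter error bars.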
A Framework for Modeling Emerging Diseases to Inform Management
Katz, Rachel A.; Richgels, Katherine L.D.; Walsh, Daniel P.; Grant, Evan H.C.
2017-01-01
The rapid emergence and reemergence of zoonotic diseases requires the ability to rapidly evaluate and implement optimal management decisions. Actions to control or mitigate the effects of emerging pathogens are commonly delayed because of uncertainty in the estimates and the predicted outcomes of the control tactics. The development of models that describe the best-known information regarding the disease system at the early stages of disease emergence is an essential step for optimal decision-making. Models can predict the potential effects of the pathogen, provide guidance for assessing the likelihood of success of different proposed management actions, quantify the uncertainty surrounding the choice of the optimal decision, and highlight critical areas for immediate research. We demonstrate how to develop models that can be used as a part of a decision-making framework to determine the likelihood of success of different management actions given current knowledge. PMID:27983501
A multimodal logistics service network design with time windows and environmental concerns
Zhang, Dezhi; He, Runzhong; Wang, Zhongwei
2017-01-01
The design of a multimodal logistics service network with customer service time windows and environmental costs is an important and challenging issue. Accordingly, this work establishes a model to minimize the total cost of multimodal logistics service network design with time windows and environmental concerns. The proposed model incorporates CO2 emission costs to determine the optimal transportation mode combinations and investment selections for transfer nodes, considering transport cost, transport time, carbon emissions, and logistics service time window constraints. Furthermore, genetic and heuristic algorithms are proposed to solve the above model. A numerical example is provided to validate the model and the two algorithms, and their performance is compared. Finally, this work investigates the effects of the logistics service time windows and CO2 emission taxes on the optimal solution. Several important management insights are obtained. PMID:28934272
Research on Bidding Decision-making of International Public-Private Partnership Projects
NASA Astrophysics Data System (ADS)
Hu, Zhen Yu; Zhang, Shui Bo; Liu, Xin Yan
2018-06-01
In order to select the optimal quasi-bidding project for an investment enterprise, a bidding decision-making model for international PPP projects was established in this paper. Firstly, the literature frequency statistics method was adopted to screen out the bidding decision-making indexes, and accordingly the bidding decision-making index system for international PPP projects was constructed. Then, the group decision-making characteristic root method, the entropy weight method, and an optimization model based on the least-squares method were used to set the decision-making index weights. The optimal quasi-bidding project was then determined by calculating the consistent effect measure of each decision-making index value and the comprehensive effect measure of each quasi-bidding project. Finally, the bidding decision-making model for international PPP projects was further illustrated by a hypothetical case. This model can effectively serve as a theoretical foundation and technical support for the bidding decision-making of international PPP projects.
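The entropy weight method referred to above can be sketched directly: indexes whose values differ more across the alternatives carry more information (lower entropy) and receive larger weights. The decision matrix below is hypothetical:

```python
import math

# Hypothetical decision matrix: rows = candidate projects, cols = indexes.
X = [[0.7, 0.5, 0.9],
     [0.6, 0.8, 0.4],
     [0.9, 0.6, 0.5]]
m, n = len(X), len(X[0])

# 1. Normalize each column so its entries sum to one.
col_sums = [sum(row[j] for row in X) for j in range(n)]
P = [[X[i][j] / col_sums[j] for j in range(n)] for i in range(m)]

# 2. Information entropy of each index, with k = 1 / ln(m).
k = 1.0 / math.log(m)
E = [-k * sum(P[i][j] * math.log(P[i][j]) for i in range(m)) for j in range(n)]

# 3. Entropy weights: indexes with lower entropy (more contrast) weigh more.
d = [1.0 - e for e in E]
w = [di / sum(d) for di in d]
print([round(wi, 3) for wi in w])
```

Here the third index has the most spread across the three projects, so it receives the largest weight; in the paper these objective weights are combined with subjective (group decision-making) weights.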
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boutilier, J; Chan, T; Lee, T
2014-06-15
Purpose: To develop a statistical model that predicts optimization objective function weights from patient geometry for intensity-modulated radiotherapy (IMRT) of prostate cancer. Methods: A previously developed inverse optimization method (IOM) is applied retrospectively to determine optimal weights for 51 treated patients. We use an overlap volume ratio (OVR) of bladder and rectum for different PTV expansions in order to quantify patient geometry in explanatory variables. Using the optimal weights as ground truth, we develop and train a logistic regression (LR) model to predict the rectum weight and thus the bladder weight. Post hoc, we fix the weights of the left femoral head, right femoral head, and an artificial structure that encourages conformity to the population average, while normalizing the bladder and rectum weights accordingly. The population average of objective function weights is used for comparison. Results: The OVR at 0.7 cm was found to be the most predictive of the rectum weights. The LR model performance is statistically significant when compared to the population average over a range of clinical metrics including bladder/rectum V53Gy, bladder/rectum V70Gy, and mean voxel dose to the bladder, rectum, CTV, and PTV. On average, the LR model predicted bladder and rectum weights that are both 63% closer to the optimal weights than the population average. The treatment plans resulting from the LR weights have, on average, a rectum V70Gy that is 35% closer to the clinical plan and a bladder V70Gy that is 43% closer. Similar results are seen for bladder V54Gy and rectum V54Gy. Conclusion: Statistical modeling from patient anatomy can be used to determine objective function weights in IMRT for prostate cancer. Our method allows treatment planners to begin the personalization process from an informed starting point, which may lead to more consistent clinical plans and reduced overall planning time.
Toward a preoperative planning tool for brain tumor resection therapies.
Coffey, Aaron M; Miga, Michael I; Chen, Ishita; Thompson, Reid C
2013-01-01
Neurosurgical procedures involving tumor resection require surgical planning such that the surgical path to the tumor is determined to minimize the impact on healthy tissue and brain function. This work demonstrates a predictive tool to aid neurosurgeons in planning tumor resection therapies by finding an optimal model-selected patient orientation that minimizes lateral brain shift in the field of view. Such orientations may facilitate tumor access and removal, possibly reduce the need for retraction, and could minimize the impact of brain shift on image-guided procedures. In this study, preoperative magnetic resonance images were utilized in conjunction with pre- and post-resection laser range scans of the craniotomy and cortical surface to produce patient-specific finite element models of intraoperative shift for 6 cases. These cases were used to calibrate a model (i.e., provide general rules for the application of patient positioning parameters) as well as determine the current model-based framework predictive capabilities. Finally, an objective function is proposed that minimizes shift subject to patient position parameters. Patient positioning parameters were then optimized and compared to our neurosurgeon as a preliminary study. The proposed model-driven brain shift minimization objective function suggests an overall reduction of brain shift by 23 % over experiential methods. This work recasts surgical simulation from a trial-and-error process to one where options are presented to the surgeon arising from an optimization of surgical goals. To our knowledge, this is the first realization of an evaluative tool for surgical planning that attempts to optimize surgical approach by means of shift minimization in this manner.
Optimal Concentrations in Transport Networks
NASA Astrophysics Data System (ADS)
Jensen, Kaare; Savage, Jessica; Kim, Wonjung; Bush, John; Holbrook, N. Michele
2013-03-01
Biological and man-made systems rely on effective transport networks for distribution of material and energy. Mass flow in these networks is determined by the flow rate and the concentration of material. While the most concentrated solution offers the greatest potential for mass flow, impedance grows with concentration and thus makes it the most difficult to transport. The concentration at which mass flow is optimal depends on specific physical and physiological properties of the system. We derive a simple model which is able to predict optimal concentrations observed in blood flows, sugar transport in plants, and nectar-feeding animals. Our model predicts that the viscosity at the optimal concentration, μ_opt = 2^n μ_0, is an integer power of two times the viscosity of the pure carrier medium μ_0. We show how the observed powers 1 ≤ n ≤ 6 agree well with theory and discuss how n depends on biological constraints imposed on the transport process. The model provides a universal framework for studying flows impeded by concentration and provides hints of how to optimize engineered flow systems, such as congestion in traffic flows.
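The trade-off can be reproduced numerically under a simple assumed viscosity law, μ(c) = μ0·e^(αc), which is an illustration rather than the paper's exact constitutive model: mass flow rises with concentration but is throttled by the growing viscosity, so an interior optimum appears.

```python
import math

# Assumed exponential viscosity law; mass flow at fixed driving pressure
# then scales as J(c) ~ c / mu(c): dilute flows carry little material,
# concentrated ones are too viscous to move.
mu0, alpha = 1.0, 2.0

def mass_flow(c):
    return c / (mu0 * math.exp(alpha * c))

# Scan concentrations to locate the optimum. Analytically c_opt = 1/alpha,
# so mu(c_opt) = e * mu0: the optimal viscosity is a fixed multiple of mu0,
# in the same spirit as the paper's mu_opt = 2^n * mu0 prediction.
cs = [i / 10000.0 for i in range(1, 20000)]
c_opt = max(cs, key=mass_flow)
print(c_opt, round(math.exp(alpha * c_opt), 4))
```

Changing the flow geometry or the viscosity law changes the multiple of μ0 at the optimum, which is how the different powers n arise in the paper's framework.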
Zu, Xianghuan; Yang, Chuanlei; Wang, Hechun; Wang, Yinyan
2018-01-01
Exhaust gas recirculation (EGR) is one of the main methods of reducing NOX emissions and has been widely used in marine diesel engines. This paper proposes an optimized comprehensive assessment method based on multi-objective grey situation decision theory, grey relation theory and grey entropy analysis to evaluate the performance and optimize rate determination of EGR, which currently lack clear theoretical guidance. First, multi-objective grey situation decision theory is used to establish the initial decision-making model according to the main EGR parameters. The optimal compromise between diesel engine combustion and emission performance is transformed into a decision-making target weight problem. After establishing the initial model and considering the characteristics of EGR under different conditions, an optimized target weight algorithm based on grey relation theory and grey entropy analysis is applied to generate the comprehensive evaluation and decision-making model. Finally, the proposed method is successfully applied to a TBD234V12 turbocharged diesel engine, and the results clearly illustrate the feasibility of the proposed method for providing theoretical support and a reference for further EGR optimization.
Launch Vehicle Propulsion Design with Multiple Selection Criteria
NASA Technical Reports Server (NTRS)
Shelton, Joey D.; Frederick, Robert A.; Wilhite, Alan W.
2005-01-01
The approach and techniques described herein define an optimization and evaluation approach for a liquid hydrogen/liquid oxygen single-stage-to-orbit system. The method uses Monte Carlo simulations, genetic algorithm solvers, a propulsion thermo-chemical code, power series regression curves for historical data, and statistical models in order to optimize a vehicle system. The system, including parameters for engine chamber pressure, area ratio, and oxidizer/fuel ratio, was modeled and optimized to determine the best design for seven separate design weight and cost cases by varying design and technology parameters. Significant model results show that a 53% increase in Design, Development, Test and Evaluation cost results in a 67% reduction in Gross Liftoff Weight. Other key findings show the sensitivity of propulsion parameters, technology factors, and cost factors and how these parameters differ when cost and weight are optimized separately. Each of the three key propulsion parameters; chamber pressure, area ratio, and oxidizer/fuel ratio, are optimized in the seven design cases and results are plotted to show impacts to engine mass and overall vehicle mass.
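The Monte Carlo side of such a study can be sketched by propagating uncertainty in a single propulsion parameter through the ideal rocket equation; the delta-v budget and Isp spread below are illustrative assumptions, not the paper's values:

```python
import math
import random

# Monte Carlo sketch: for a fixed delta-v, the ideal rocket equation gives
# the required propellant mass fraction; sampling an uncertain vacuum Isp
# yields a distribution of that fraction (illustrative numbers only).
rng = random.Random(1)
dv, g0 = 9000.0, 9.80665          # m/s to orbit incl. losses; standard gravity

samples = []
for _ in range(10000):
    isp = rng.gauss(430.0, 10.0)              # assumed hydrolox Isp spread, s
    ratio = math.exp(dv / (g0 * isp))         # m0 / mf from the rocket equation
    samples.append(1.0 - 1.0 / ratio)         # propellant mass fraction

mean_pmf = sum(samples) / len(samples)
print(round(mean_pmf, 4))
```

A full vehicle study layers cost and weight regressions on top of such samples, which is where the genetic algorithm solver in the abstract comes in.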
Box-Behnken design for investigation of microwave-assisted extraction of patchouli oil
NASA Astrophysics Data System (ADS)
Kusuma, Heri Septya; Mahfud, Mahfud
2015-12-01
Microwave-assisted extraction (MAE) was employed to extract the essential oil from patchouli (Pogostemon cablin). The optimal conditions for microwave-assisted extraction of patchouli oil were determined by response surface methodology. A Box-Behnken design (BBD) was applied to evaluate the effects of three independent variables (microwave power (A: 400-800 W), plant material to solvent ratio (B: 0.10-0.20 g mL-1) and extraction time (C: 20-60 min)) on the extraction yield of patchouli oil. The correlation analysis of the mathematical regression model indicated that a quadratic polynomial model could be employed to optimize the microwave extraction of patchouli oil. The optimal extraction conditions were a microwave power of 634.024 W, a plant material to solvent ratio of 0.147648 g mL-1 and an extraction time of 51.6174 min, under which the maximum patchouli oil yield was 2.80516%. The experimental values agreed with the predicted results by analysis of variance, indicating the high fitness of the model used and the success of response surface methodology in optimizing the extraction conditions.
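A Box-Behnken design places each pair of factors at the corners of a 2² factorial while holding the remaining factors at their center level, plus replicated center points. A sketch in coded units, for the three-factor case matching the study's A, B and C:

```python
import itertools

def box_behnken(n_factors, n_center=1):
    """Generate a Box-Behnken design in coded units (-1, 0, +1): every pair
    of factors takes the 2^2 factorial corners while the rest sit at 0."""
    runs = []
    for i, j in itertools.combinations(range(n_factors), 2):
        for a, b in itertools.product((-1, 1), repeat=2):
            run = [0] * n_factors
            run[i], run[j] = a, b
            runs.append(run)
    runs.extend([[0] * n_factors for _ in range(n_center)])
    return runs

design = box_behnken(3, n_center=3)
print(len(design))   # 3 pairs x 4 corners + 3 center points = 15 runs
```

The yields measured at these 15 runs are what the quadratic polynomial of the RSM is fitted to; the coded levels map back to the physical ranges (e.g. -1/0/+1 → 400/600/800 W for factor A).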
Rantner, Lukas J; Vadakkumpadan, Fijoy; Spevak, Philip J; Crosson, Jane E; Trayanova, Natalia A
2013-01-01
There is currently no reliable way of predicting the optimal implantable cardioverter-defibrillator (ICD) placement in paediatric and congenital heart defect (CHD) patients. This study aimed to: (1) develop a new image processing pipeline for constructing patient-specific heart–torso models from clinical magnetic resonance images (MRIs); (2) use the pipeline to determine the optimal ICD configuration in a paediatric tricuspid valve atresia patient; (3) establish whether the widely used criterion of shock-induced extracellular potential (Φe) gradients ≥5 V cm−1 in ≥95% of ventricular volume predicts defibrillation success. A biophysically detailed heart–torso model was generated from patient MRIs. Because transvenous access was impossible, three subcutaneous and three epicardial lead placement sites were identified along with five ICD scan locations. Ventricular fibrillation was induced, and defibrillation shocks were applied from 11 ICD configurations to determine defibrillation thresholds (DFTs). Two configurations with epicardial leads resulted in the lowest DFTs overall and were thus considered optimal. Three configurations shared the lowest DFT among subcutaneous lead ICDs. The Φe gradient criterion was an inadequate predictor of defibrillation success, as defibrillation failed in numerous instances even when 100% of the myocardium experienced such gradients. In conclusion, we have developed a new image processing pipeline and applied it to a CHD patient to construct the first active heart–torso model from clinical MRIs. PMID:23798492
A Bayesian model averaging method for the derivation of reservoir operating rules
NASA Astrophysics Data System (ADS)
Zhang, Jingwen; Liu, Pan; Wang, Hao; Lei, Xiaohui; Zhou, Yanlai
2015-09-01
Because the intrinsic dynamics among optimal decision making, inflow processes and reservoir characteristics are complex, the functional forms of reservoir operating rules are usually determined subjectively. As a result, the uncertainty involved in selecting the form and/or model of reservoir operating rules must be analyzed and evaluated. In this study, we analyze the uncertainty of reservoir operating rules using the Bayesian model averaging (BMA) model. Three popular operating rules, namely piecewise linear regression, surface fitting and a least-squares support vector machine, are established based on the optimal deterministic reservoir operation. These individual models provide three member decisions for the BMA combination, enabling the 90% release interval to be estimated by Markov chain Monte Carlo simulation. A case study of China's Baise reservoir shows that: (1) the optimal deterministic reservoir operation, superior to any reservoir operating rule, provides the samples from which the rules are derived; (2) the least-squares support vector machine model is more effective than both piecewise linear regression and surface fitting; (3) BMA outperforms any individual model of operating rules based on the optimal trajectories. The results show that the proposed model can reduce the uncertainty of operating rules, which is of great potential benefit in evaluating the confidence intervals of decisions.
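The BMA combination step can be sketched as likelihood-weighted averaging of the member rules' releases. The numbers below are toy values, with a Gaussian likelihood and a common assumed variance, not the study's calibrated posterior:

```python
import math

# Toy BMA combination of three operating-rule models: weights proportional
# to each member's likelihood on training releases (Gaussian, shared sigma2).
observed = [100.0, 120.0, 90.0]                 # optimal releases (training)
preds = {"piecewise": [98.0, 118.0, 95.0],
         "surface":   [105.0, 125.0, 85.0],
         "lssvm":     [101.0, 119.0, 91.0]}
sigma2 = 25.0                                   # assumed error variance

def log_like(p):
    return sum(-(o - y) ** 2 / (2 * sigma2) for o, y in zip(observed, p))

lls = {k: log_like(p) for k, p in preds.items()}
mx = max(lls.values())
w = {k: math.exp(v - mx) for k, v in lls.items()}   # numerically stabilized
z = sum(w.values())
w = {k: v / z for k, v in w.items()}

# BMA point forecast for a new period: weighted average of member forecasts.
new = {"piecewise": 110.0, "surface": 118.0, "lssvm": 112.0}
bma = sum(w[k] * new[k] for k in new)
print({k: round(v, 3) for k, v in w.items()}, round(bma, 2))
```

With these toy residuals the support vector machine member earns the largest weight, mirroring the study's finding; the full method also samples the member variances via MCMC to produce the 90% release interval.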
A Boussinesq-scaled, pressure-Poisson water wave model
NASA Astrophysics Data System (ADS)
Donahue, Aaron S.; Zhang, Yao; Kennedy, Andrew B.; Westerink, Joannes J.; Panda, Nishant; Dawson, Clint
2015-02-01
Through the use of Boussinesq scaling we develop and test a model for resolving non-hydrostatic pressure profiles in nonlinear wave systems over varying bathymetry. A Green-Naghdi type polynomial expansion is used to resolve the pressure profile along the vertical axis; this is then inserted into the pressure-Poisson equation, retaining terms up to a prescribed order, and solved using a weighted residual approach. The model shows rapid convergence with increasing order of polynomial expansion, which can be greatly improved through the application of asymptotic rearrangement. Models of Boussinesq scaling of the fully nonlinear O(μ^2) and weakly nonlinear O(μ^N) are presented, and the analytical and numerical properties of the O(μ^2) and O(μ^4) models are discussed. Optimal basis functions in the Green-Naghdi expansion are determined through manipulation of the free parameters which arise from the Boussinesq scaling. The optimal O(μ^2) model has dispersion accuracy equivalent to a Padé [2,2] approximation with one extra free parameter. The optimal O(μ^4) model obtains dispersion accuracy equivalent to a Padé [4,4] approximation with two free parameters, which can be used to optimize shoaling or nonlinear properties. In comparison to experimental results, the O(μ^4) model shows excellent agreement with experimental data.
Efficiency bounds of molecular motors under a trade-off figure of merit
NASA Astrophysics Data System (ADS)
Zhang, Yanchao; Huang, Chuankun; Lin, Guoxing; Chen, Jincan
2017-05-01
On the basis of the theory of irreversible thermodynamics and an elementary model of molecular motors converting chemical energy from ATP hydrolysis into mechanical work exerted against an external force, the efficiencies of the molecular motors at two different optimization configurations are calculated for a trade-off figure of merit, which represents the best compromise between the useful energy and the lost energy. The upper and lower bounds of the efficiency at the two optimization configurations are determined. It is found that the optimal efficiencies at the two different optimization configurations are always larger than 1/2.
Fan, Sanhong; Hu, Yanan; Li, Chen; Liu, Yanrong
2014-01-01
Protein isolates of pumpkin (Cucurbita pepo L.) seeds were hydrolyzed by acid protease to prepare antioxidative peptides. The hydrolysis conditions were optimized through a Box-Behnken experimental design combined with response surface methodology (RSM). The second-order model, developed for the DPPH radical scavenging activity of pumpkin seed hydrolysates, showed good fit with the experimental data, with a high coefficient of determination (0.9918). The optimal hydrolysis conditions were determined as follows: hydrolysis temperature 50°C, pH 2.5, enzyme amount 6000 U/g, substrate concentration 0.05 g/ml and hydrolysis time 5 h. Under these conditions, the DPPH radical scavenging activity was as high as 92.82%. PMID:24637721
Optimal systems of geoscience surveying: A preliminary discussion
NASA Astrophysics Data System (ADS)
Shoji, Tetsuya
2006-10-01
In any geoscience survey, each survey technique must be applied effectively, and many techniques are often combined optimally. An important task is to obtain the necessary and sufficient information to meet the requirements of the survey. A prize-penalty function quantifies the effectiveness of the survey, and hence can be used to determine the best survey technique. On the other hand, an information-cost function can be used to determine the optimal combination of survey techniques on the basis of the geoinformation obtained. Entropy can be used to evaluate geoinformation. A simple model suggests that low-resolvability techniques are generally applied at early stages of a survey, and that higher-resolvability techniques should alternate with lower-resolvability ones as the survey progresses.
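The entropy evaluation of geoinformation mentioned above can be sketched numerically. This is a generic Shannon-entropy illustration, not the study's model; the four-interpretation prior and posterior distributions are hypothetical:

```python
import math

def entropy(probs):
    """Shannon entropy (bits) of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical example: four equally likely geological interpretations.
prior = [0.25, 0.25, 0.25, 0.25]
# After a (hypothetical) low-resolvability survey, belief concentrates.
posterior = [0.55, 0.25, 0.15, 0.05]

# Information gained by the survey = reduction in entropy.
info_gain = entropy(prior) - entropy(posterior)
print(f"prior H = {entropy(prior):.3f} bits")  # prior H = 2.000 bits
print(f"information gained = {info_gain:.3f} bits")
```

Comparing this gain against the cost of the survey stage is the essence of an information-cost function.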
Chen, Yi; Huang, Weina; Peng, Bei
2014-01-01
Because of the demand for sustainable and renewable energy, fuel cells have become increasingly popular, particularly the polymer electrolyte fuel cell (PEFC). Among the various components, the cathode plays a key role in the operation of a PEFC. In this study, a quantitative dual-layer cathode model was proposed for determining the optimal parameters that minimize the over-potential difference and improve the efficiency, using a newly developed bat swarm algorithm with a variable population embedded in computational intelligence-aided design. The simulation results were in agreement with previously reported results, suggesting that the proposed technique has potential applications for automating and optimizing the design of PEFCs. PMID:25490761
Heliostat cost optimization study
NASA Astrophysics Data System (ADS)
von Reeken, Finn; Weinrebe, Gerhard; Keck, Thomas; Balz, Markus
2016-05-01
This paper presents a methodology for a heliostat cost optimization study. First, different variants of small, medium-sized and large heliostats are designed. Then the respective costs, tracking and optical quality are determined. For the calculation of optical quality, a structural model of the heliostat is programmed and analyzed using finite element software. The costs are determined based on inquiries and from experience with similar structures. Eventually the levelised cost of electricity (LCOE) for a reference power tower plant is calculated. Before each annual simulation run the heliostat field is optimized. Calculated LCOEs are then used to identify the most suitable option(s). Finally, the conclusions and findings of this extensive cost study are used to define the concept of a new cost-efficient heliostat called 'Stellio'.
The Mathematics of Navigating the Solar System
NASA Technical Reports Server (NTRS)
Hintz, Gerald
2000-01-01
In navigating spacecraft throughout the solar system, the space navigator relies on three academic disciplines - optimization, estimation, and control - that work on mathematical models of the real world. Thus, the navigator determines the flight path that will consume propellant and other resources in an efficient manner, determines where the craft is and predicts where it will go, and transfers it onto the optimal trajectory that meets operational and mission constraints. Mission requirements, for example, demand that observational measurements be made with sufficient precision that relativity must be modeled in collecting and fitting (the estimation process) the data, and propagating the trajectory. Thousands of parameters are now determined in near real-time to model the gravitational forces acting on a spacecraft in the vicinity of an irregularly shaped body. Completing these tasks requires mathematical models, analyses, and processing techniques. Newton, Gauss, Lambert, Legendre, and others are justly famous for their contributions to the mathematics of these tasks. More recently, graduate students participated in research to update the gravity model of the Saturnian system, including higher order gravity harmonics, tidal effects, and the influence of the rings. This investigation was conducted for the Cassini project to incorporate new trajectory modeling features in the navigation software. The resulting trajectory model will be used in navigating the 4-year tour of the Saturnian satellites. Also, undergraduate students are determining the ephemerides (locations versus time) of asteroids that will be used as reference objects in navigating the New Millennium's Deep Space 1 spacecraft autonomously.
Optimal Level of Expenditure to Control the Southern Pine Beetle
Joseph E. de Steiguer; Roy L. Hedden; John M. Pye
1987-01-01
Optimal level of expenditure to control damage to commercial timber stands by the southern pine beetle was determined by models that simulated and analyzed beetle attacks during a typical season for 11 Southern States. At a real discount rate of 4 percent, maximized net benefits for the Southern region are estimated at about $50 million; at 10 percent, more than $30...
NASA Astrophysics Data System (ADS)
Almukhametova, E. M.; Gizetdinov, I. A.
2018-05-01
Development of most deposits in Russia is accompanied by a high level of crude water cut. More than 70% of the operating well stock of the Barsukovskoye deposit produces water; about 12% of the wells are characterized by a saturated water cut; and many wells with high water cut are idle. To optimize the current FPM system of the Barsukovskoye deposit, a calculation method based on a hydrodynamic model was applied, with further analysis of the hydrodynamic connectivity between wells. A plot was selected containing several wells whose water cut was running ahead of the reserve recovery rate, and the injection wells exerting the most influence on the selected producer wells were determined. Then several variants were considered for transformation of the FPM system of this plot. The possible cases were analyzed with the hydrodynamic model, with further determination of the economic effect of each of them.
An interval programming model for continuous improvement in micro-manufacturing
NASA Astrophysics Data System (ADS)
Ouyang, Linhan; Ma, Yizhong; Wang, Jianjun; Tu, Yiliu; Byun, Jai-Hyun
2018-03-01
Continuous quality improvement in micro-manufacturing processes relies on optimization strategies that relate an output performance to a set of machining parameters. However, when determining the optimal machining parameters in a micro-manufacturing process, the economics of continuous quality improvement and decision makers' preference information are typically neglected. This article proposes an economic continuous improvement strategy based on an interval programming model. The proposed strategy differs from previous studies in two ways. First, an interval programming model is proposed to measure the quality level, where decision makers' preference information is considered in order to determine the weight of location and dispersion effects. Second, the proposed strategy is a more flexible approach since it considers the trade-off between the quality level and the associated costs, and leaves engineers a larger decision space through adjusting the quality level. The proposed strategy is compared with its conventional counterparts using an Nd:YLF laser beam micro-drilling process.
NASA Astrophysics Data System (ADS)
Tofighi, Elham; Mahdizadeh, Amin
2016-09-01
This paper addresses the problem of automatic tuning of weighting coefficients for the nonlinear model predictive control (NMPC) of wind turbines. The choice of weighting coefficients in NMPC is critical due to their explicit impact on the efficiency of wind turbine control. Classically, these weights are selected based on an intuitive understanding of the system dynamics and control objectives. Such empirical methods, however, may not yield optimal solutions, especially as the number of parameters to be tuned and the nonlinearity of the system increase. In this paper, the problem of determining weighting coefficients for the cost function of the NMPC controller is formulated as a two-level optimization process in which an upper-level PSO-based optimization computes the weighting coefficients for the lower-level NMPC controller, which generates control signals for the wind turbine. The proposed method is implemented to tune the weighting coefficients of an NMPC controller driving the NREL 5-MW wind turbine. The results are compared with similar simulations for a manually tuned NMPC controller. The comparison verifies the improved performance of the controller for weights computed with the PSO-based technique.
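The upper-level PSO step described above can be sketched as follows. This is a generic particle swarm optimizer, not the authors' implementation; the quadratic surrogate cost (with a hypothetical optimum at weights (3, 7)) stands in for the closed-loop NMPC cost, which would require simulating the turbine:

```python
import random

def pso(cost, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5,
        lo=0.0, hi=10.0):
    """Minimal particle swarm optimizer over the box [lo, hi]^dim."""
    pos = [[random.uniform(lo, hi) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal best positions
    pbest_cost = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # inertia + cognitive pull + social pull
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost

# Hypothetical surrogate for the closed-loop NMPC cost as a function of
# two weighting coefficients.
surrogate = lambda q: (q[0] - 3.0) ** 2 + (q[1] - 7.0) ** 2
weights, best_cost = pso(surrogate, dim=2)
print(weights, best_cost)  # weights near [3, 7]
```

In the two-level scheme, each call to `cost` would run the lower-level NMPC simulation and return its performance measure.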
Troncossi, Marco; Borghi, Corrado; Chiossi, Marco; Davalli, Angelo; Parenti-Castelli, Vincenzo
2009-05-01
The application of a design methodology for the determination of the optimal prosthesis architecture for a given upper limb amputee is presented in this paper, along with a discussion of its results. In particular, a novel procedure was used to provide the main guidelines for the design of an actuated shoulder articulation for externally powered prostheses. The topology and the geometry of the new articulation were determined as the optimal compromise between wearability (for ease of use and the patient's comfort) and functionality of the device (in terms of mobility, velocity, payload, etc.). This choice was based on kinematic and kinetostatic analyses of different upper limb prosthesis models and on purpose-built indices that were set up to evaluate the models from different viewpoints. Only 12 of the 31 simulated prostheses showed a sufficient level of functionality: among these, the optimal solution was an articulation having two actuated revolute joints with orthogonal axes for the elevation of the upper arm in any vertical plane and a frictional joint for the passive adjustment of humeral intra-extra rotation. A prototype of the mechanism is at the clinical test stage.
NASA Astrophysics Data System (ADS)
Saouane, I.; Chaker, A.; Zaidi, B.; Shekhar, C.
2017-03-01
This paper describes the mathematical model used to determine the amount of solar radiation received on an inclined solar photovoltaic panel. The optimum slope angles for each month, season, and year have also been calculated for a solar photovoltaic panel. The optimization procedure to maximize the solar energy collected by the panel by varying the tilt angle is also presented. As a first step, the global solar radiation on the horizontal surface of a thermal photovoltaic panel under clear sky is estimated. Thereafter, the Muneer model, which provides the most accurate estimation of the total solar radiation at a given geographical point, has been used to determine the optimum collector slope. Also, the Ant Colony Optimization (ACO) algorithm was applied to obtain the optimum tilt angle settings for the PV collector to improve its efficiency. The calculated and predicted results show good agreement. Additionally, this paper presents studies carried out on polycrystalline silicon solar panels for electrical energy generation in the city of Ghardaia. The electrical energy generation has been studied as a function of the amount of irradiation received and the angle of optimum orientation of the solar panels.
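The idea of sweeping the tilt angle to maximize collected radiation can be shown with a toy model. This brute-force sketch is a stand-in for the Muneer-model/ACO procedure of the study; the beam-only irradiance formula, zenith angle, and DNI value below are all hypothetical illustrations:

```python
import math

def beam_on_tilt(tilt_deg, zenith_deg, dni=900.0):
    """Toy clear-sky model: beam irradiance (W/m^2) on a south-facing
    panel when the sun is due south at the given zenith angle. This is
    a stand-in for the full Muneer model used in the study."""
    incidence = math.radians(zenith_deg - tilt_deg)
    return max(0.0, dni * math.cos(incidence))

zenith = 35.0  # hypothetical midday solar zenith angle, degrees
best_tilt = max(range(0, 91), key=lambda b: beam_on_tilt(b, zenith))
print(best_tilt)  # 35 -- the beam-only optimum points the panel at the sun
```

A realistic model adds diffuse and ground-reflected components and integrates over the day, month, or year, which is why the optimal tilt no longer simply tracks the zenith angle.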
Integrating NOE and RDC using sum-of-squares relaxation for protein structure determination.
Khoo, Y; Singer, A; Cowburn, D
2017-07-01
We revisit the problem of protein structure determination from geometrical restraints from NMR, using convex optimization. It is well known that the NP-hard distance geometry problem of determining atomic positions from pairwise distance restraints can be relaxed into a convex semidefinite program (SDP). However, the NOE distance restraints are often too imprecise and sparse for accurate structure determination. Residual dipolar coupling (RDC) measurements provide additional geometric information on the angles between atom-pair directions and the axes of the principal axis frame. The optimization problem involving RDC is highly non-convex and requires a good initialization, even within the simulated annealing framework. In this paper, we model the protein backbone as an articulated structure composed of rigid units. Determining the rotation of each rigid unit gives the full protein structure. We propose solving the non-convex optimization problems using the sum-of-squares (SOS) hierarchy, a hierarchy of convex relaxations with increasing complexity and approximation power. Unlike classical global optimization approaches, SOS optimization returns a certificate of optimality if the global optimum is found. Based on the SOS method, we propose two algorithms, RDC-SOS and RDC-NOE-SOS, that have polynomial time complexity in the number of amino-acid residues and run efficiently on a standard desktop. In many instances, the proposed methods exactly recover the solution to the original non-convex optimization problem. To the best of our knowledge, this is the first time SOS relaxation has been introduced to solve non-convex optimization problems in structural biology. We further introduce a statistical tool, the Cramér-Rao bound (CRB), to provide an information-theoretic bound on the highest resolution one can hope to achieve when determining protein structure from noisy measurements using any unbiased estimator.
Our simulation results show that when the RDC measurements are corrupted by Gaussian noise of realistic variance, both SOS based algorithms attain the CRB. We successfully apply our method in a divide-and-conquer fashion to determine the structure of ubiquitin from experimental NOE and RDC measurements obtained in two alignment media, achieving more accurate and faster reconstructions compared to the current state of the art.
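The Cramér-Rao bound idea invoked above can be illustrated on the simplest possible case, estimating a Gaussian mean. This is a generic sketch, not the protein-structure CRB of the paper; it shows the bound being attained by an efficient estimator, analogous to the SOS algorithms attaining the CRB in the study:

```python
import random
import statistics

def crb_gaussian_mean(sigma, n):
    """Cramér-Rao bound on the variance of any unbiased estimator of
    the mean of N(mu, sigma^2) from n i.i.d. samples: sigma^2 / n
    (the Fisher information is n / sigma^2)."""
    return sigma ** 2 / n

def mean_estimator_variance(mu, sigma, n, trials=5000):
    """Empirical variance of the sample mean, an efficient estimator
    that attains the bound."""
    ests = [statistics.fmean(random.gauss(mu, sigma) for _ in range(n))
            for _ in range(trials)]
    return statistics.pvariance(ests)

sigma, n = 2.0, 25
print(crb_gaussian_mean(sigma, n))             # 0.16
print(mean_estimator_variance(0.0, sigma, n))  # close to 0.16
```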
Ares I Scale Model Acoustic Test Above Deck Water Sound Suppression Results
NASA Technical Reports Server (NTRS)
Counter, Douglas D.; Houston, Janice D.
2011-01-01
The Ares I Scale Model Acoustic Test (ASMAT) program test matrix was designed to determine the acoustic reduction for the Liftoff acoustics (LOA) environment with an above deck water sound suppression system. The scale model test can be used to quantify the effectiveness of the water suppression system as well as optimize the systems necessary for the LOA noise reduction. Several water flow rates were tested to determine which rate provides the greatest acoustic reductions. Preliminary results are presented.
Quantum Parameter Estimation: From Experimental Design to Constructive Algorithm
NASA Astrophysics Data System (ADS)
Yang, Le; Chen, Xi; Zhang, Ming; Dai, Hong-Yi
2017-11-01
In this paper we design the following two-step scheme to estimate the model parameter ω0 of a quantum system: first, we utilize the Fisher information with respect to an intermediate variable v = cos(ω0 t) to determine an optimal initial state and to seek optimal parameters of the POVM measurement operators; second, we explore how to estimate ω0 from v by choosing t when a priori knowledge of ω0 is available. Our optimal initial state can achieve the maximum quantum Fisher information. The formulation of the optimal time t is obtained and the complete algorithm for parameter estimation is presented. We further explore how the lower bound of the estimation deviation depends on the a priori information about the model. Supported by the National Natural Science Foundation of China under Grant Nos. 61273202, 61673389, and 61134008
Prakash Vincent, Samuel Gnana
2014-01-01
Production of fibrinolytic enzyme by a newly isolated Paenibacillus sp. IND8 was optimized using wheat bran in solid state fermentation. A 2⁵ full factorial design (first-order model) was applied to elucidate the key factors: moisture, pH, sucrose, yeast extract, and sodium dihydrogen phosphate. Statistical analysis of the results showed that moisture, sucrose, and sodium dihydrogen phosphate have the most significant effects on fibrinolytic enzyme production (P < 0.05). Central composite design (CCD) was used to determine the optimal concentrations of these three components, and the experimental results were fitted with a second-order polynomial model at the 95% level (P < 0.05). Overall, a 4.5-fold increase in fibrinolytic enzyme production was achieved in the optimized medium as compared with the unoptimized medium. PMID:24523635
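A 2⁵ full factorial screening design like the one above simply enumerates every combination of two coded levels across five factors. A minimal sketch (the factor names follow the abstract; the -1/+1 coding is the standard convention, not data from the study):

```python
from itertools import product

factors = ["moisture", "pH", "sucrose", "yeast_extract", "NaH2PO4"]

# Coded levels: -1 = low, +1 = high. The 2^5 full factorial enumerates
# every combination, giving the 32 runs of the first-order screening model.
design = list(product([-1, +1], repeat=len(factors)))
print(len(design))  # 32 runs
print(design[0])    # (-1, -1, -1, -1, -1)

# Each run maps coded levels onto actual factor settings:
first_run = dict(zip(factors, design[0]))
print(first_run)
```

The follow-up CCD then augments such a factorial core with axial and center points so a second-order (curvature) model can be fitted.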
Nazir, Sadaf; Wani, Idrees Ahmed; Masoodi, Farooq Ahmad
2017-05-01
Aqueous extraction of basil seed mucilage was optimized using response surface methodology. A Central Composite Rotatable Design (CCRD) was used to model three independent variables: temperature (40-91 °C), extraction time (1.6-3.3 h), and water/seed ratio (18:1-77:1), and to study the response for yield. Experimental values for extraction yield ranged from 7.86 to 20.5 g/100 g. Extraction yield was significantly (P < 0.05) affected by all the variables. Temperature and water/seed ratio had pronounced effects, while extraction time had only minor effects. Graphical optimization determined the optimal conditions for the extraction of mucilage, predicting an extraction yield of 20.49 g/100 g at 56.7 °C, 1.6 h, and a water/seed ratio of 66.84:1. Results indicated that water/seed ratio was the most significant parameter, followed by temperature and time.
PubChem3D: Conformer generation
2011-01-01
Background PubChem, an open archive for the biological activities of small molecules, provides search and analysis tools to assist users in locating desired information. Many of these tools focus on the notion of chemical structure similarity at some level. PubChem3D enables similarity of chemical structure 3-D conformers to augment the existing similarity of 2-D chemical structure graphs. It is also desirable to relate theoretical 3-D descriptions of chemical structures to experimental biological activity. As such, it is important to be assured that the theoretical conformer models can reproduce experimentally determined bioactive conformations. In the present study, we investigate the effects of three primary conformer generation parameters (the fragment sampling rate, the energy window size, and the force field variant) upon the accuracy of theoretical conformer models, and determine optimal settings for PubChem3D conformer model generation and conformer sampling. Results Using the software package OMEGA from OpenEye Scientific Software, Inc., theoretical 3-D conformer models were generated for 25,972 small-molecule ligands, whose 3-D structures were experimentally determined. Different values for primary conformer generation parameters were systematically tested to find optimal settings. Employing a greater fragment sampling rate than the default did not improve the accuracy of the theoretical conformer model ensembles. An ever-increasing energy window did increase the overall average accuracy, with rapid convergence observed at 10 kcal/mol and 15 kcal/mol for model building and torsion search, respectively; however, subsequent study showed that an energy threshold of 25 kcal/mol for torsion search resulted in slightly improved results for larger and more flexible structures. Exclusion of Coulomb terms from the 94s variant of the Merck molecular force field (MMFF94s) in the torsion search stage gave more accurate conformer models at lower energy windows.
Overall average accuracy of reproduction of bioactive conformations was remarkably linear with respect to both non-hydrogen atom count ("size") and effective rotor count ("flexibility"). Using these as independent variables, a regression equation was developed to predict the RMSD accuracy of a theoretical ensemble to reproduce bioactive conformations. The equation was modified to give a minimum RMSD conformer sampling value to help ensure that 90% of the sampled theoretical models should contain at least one conformer within the RMSD sampling value to a "bioactive" conformation. Conclusion Optimal parameters for conformer generation using OMEGA were explored and determined. An equation was developed that provides an RMSD sampling value to use that is based on the relative accuracy to reproduce bioactive conformations. The optimal conformer generation parameters and RMSD sampling values determined are used by the PubChem3D project to generate theoretical conformer models. PMID:21272340
Mechanical design optimization of bioabsorbable fixation devices for bone fractures.
Lovald, Scott T; Khraishi, Tariq; Wagner, Jon; Baack, Bret
2009-03-01
Bioabsorbable bone plates can eliminate the necessity for a permanent implant when used to fixate fractures of the human mandible. They are currently not in widespread use because of the low strength of the materials and the requisite large volume of the resulting bone plate. The aim of the current study was to discover a minimally invasive bioabsorbable bone plate design that can provide the same mechanical stability as a standard titanium bone plate. A finite element model of a mandible with a fracture in the body region is subjected to bite loads that are common to patients postsurgery. The model is used first to determine benchmark stress and strain values for a titanium plate. These values are then set as the limits within which the bioabsorbable bone plate must comply. The model is then modified to consider a bone plate made of the polymer poly-L/DL-lactide 70/30. An optimization routine is run to determine the smallest volume of bioabsorbable bone plate that can perform as well as a titanium bone plate when fixating fractures of the considered type. Two design parameters are varied for the bone plate design during the optimization analysis. The analysis determined that a strut-style poly-L-lactide-co-DL-lactide plate of 690 mm² can provide as much mechanical stability as a similar titanium design structure of 172 mm². The model has determined a bioabsorbable bone plate design that is as strong as a titanium plate when fixating fractures of the load-bearing mandible. This is an intriguing outcome, considering that the polymer material has only 6% of the stiffness of titanium.
Modelling and Optimal Control of Typhoid Fever Disease with Cost-Effective Strategies.
Tilahun, Getachew Teshome; Makinde, Oluwole Daniel; Malonza, David
2017-01-01
We propose and analyze a compartmental nonlinear deterministic mathematical model for the typhoid fever outbreak and optimal control strategies in a community with varying population. The model is studied qualitatively using the stability theory of differential equations, and the basic reproduction number, which serves as the epidemic indicator, is obtained from the largest eigenvalue of the next-generation matrix. Both local and global asymptotic stability conditions for disease-free and endemic equilibria are determined. The model exhibits a forward transcritical bifurcation, and a sensitivity analysis is performed. The optimal control problem is designed by applying Pontryagin's maximum principle with three control strategies, namely, the prevention strategy through sanitation, proper hygiene, and vaccination; the treatment strategy through the application of appropriate medicine; and the screening of carriers. The cost functional accounts for the cost involved in prevention, screening, and treatment, together with the total number of infected persons averted. Numerical results for the typhoid outbreak dynamics and its optimal control revealed that a combination of prevention and treatment is the best cost-effective strategy to eradicate the disease.
New optimization model for routing and spectrum assignment with nodes insecurity
NASA Astrophysics Data System (ADS)
Xuan, Hejun; Wang, Yuping; Xu, Zhanqi; Hao, Shanshan; Wang, Xiaoli
2017-04-01
By adopting the orthogonal frequency division multiplexing technology, elastic optical networks can provide flexible and variable bandwidth allocation to each connection request and achieve higher spectrum utilization. The routing and spectrum assignment problem in elastic optical networks is a well-known NP-hard problem. In addition, information security has received worldwide attention. We combine these two problems to investigate the routing and spectrum assignment problem with guaranteed security in elastic optical networks, and establish a new optimization model minimizing the maximum index of the used frequency slots, which is used to determine an optimal routing and spectrum assignment scheme. To solve the model effectively, a hybrid genetic algorithm framework integrating a heuristic algorithm into a genetic algorithm is proposed. The heuristic algorithm is first used to sort the connection requests, and then the genetic algorithm is designed to look for an optimal routing and spectrum assignment scheme. In the genetic algorithm, tailor-made crossover, mutation and local search operators are designed. Moreover, simulation experiments are conducted with three heuristic strategies, and the experimental results indicate the effectiveness of the proposed model and algorithm framework.
Optimization and Simulation of SLM Process for High Density H13 Tool Steel Parts
NASA Astrophysics Data System (ADS)
Laakso, Petri; Riipinen, Tuomas; Laukkanen, Anssi; Andersson, Tom; Jokinen, Antero; Revuelta, Alejandro; Ruusuvuori, Kimmo
This paper demonstrates the successful printing and optimization of processing parameters of high-strength H13 tool steel by Selective Laser Melting (SLM). A D-Optimal Design of Experiments (DOE) approach is used for parameter optimization of laser power, scanning speed and hatch width. With 50 test samples (1 × 1 × 1 cm) we establish parameter windows for these three parameters in relation to part density. The calculated numerical model is found to be in good agreement with the density data obtained from the samples using image analysis. A thermomechanical finite element simulation model is constructed of the SLM process and validated by comparing the calculated densities retrieved from the model with the experimentally determined densities. With the simulation tool one can explore the effect of different parameters on density before making any printed samples. Establishing a parameter window provides the user with freedom for parameter selection, such as choosing parameters that result in the fastest print speed.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-17
... biodiesel and renewable diesel. Regulated categories include: NAICS \\1\\ Examples of potentially regulated... and Research Institute international models as maintained by the Center for Agricultural and Rural... in the model results. For example, since FASOM is a long-term dynamic optimization model, short-term...
Optimizing Blasting’s Air Overpressure Prediction Model using Swarm Intelligence
NASA Astrophysics Data System (ADS)
Nur Asmawisham Alel, Mohd; Ruben Anak Upom, Mark; Asnida Abdullah, Rini; Hazreek Zainal Abidin, Mohd
2018-04-01
Air overpressure (AOp) resulting from blasting can cause damage and nuisance to nearby civilians. Thus, it is important to be able to predict AOp accurately. In this study, 8 different Artificial Neural Network (ANN) models were developed for the prediction of AOp. The ANN models were trained using different variants of the Particle Swarm Optimization (PSO) algorithm. AOp predictions were also made using an empirical equation suggested by the United States Bureau of Mines (USBM) to serve as a benchmark. In order to develop the models, 76 blasting operations in Hulu Langat were investigated. All the ANN models were found to outperform the USBM equation in three performance metrics: root mean square error (RMSE), mean absolute percentage error (MAPE) and coefficient of determination (R²). Using a performance ranking method, MSO-Rand-Mut was determined to be the best prediction model for AOp, with RMSE = 2.18, MAPE = 1.73% and R² = 0.97. The result shows that ANN models trained using PSO are capable of predicting AOp with great accuracy.
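The three performance metrics used to rank the models have standard definitions that can be computed directly. A minimal sketch; the measured and predicted AOp values below are made-up illustrations, not data from the study:

```python
import math

def rmse(y, yhat):
    """Root mean square error."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y, yhat)) / len(y))

def mape(y, yhat):
    """Mean absolute percentage error (%)."""
    return 100.0 / len(y) * sum(abs((a - b) / a) for a, b in zip(y, yhat))

def r2(y, yhat):
    """Coefficient of determination."""
    ybar = sum(y) / len(y)
    ss_res = sum((a - b) ** 2 for a, b in zip(y, yhat))
    ss_tot = sum((a - ybar) ** 2 for a in y)
    return 1.0 - ss_res / ss_tot

# Hypothetical measured vs. predicted AOp values (dB), for illustration only.
measured = [120.0, 115.0, 130.0, 125.0]
predicted = [121.0, 114.0, 129.0, 126.0]
print(rmse(measured, predicted))  # 1.0
print(r2(measured, predicted))
```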
A trust region-based approach to optimize triple response systems
NASA Astrophysics Data System (ADS)
Fan, Shu-Kai S.; Fan, Chihhao; Huang, Chia-Fen
2014-05-01
This article presents a new computing procedure for the global optimization of the triple response system (TRS), where the response functions are non-convex quadratics and the input factors satisfy a radial constrained region of interest. The TRS arising from response surface modelling can be approximated using a nonlinear mathematical program that considers one primary objective function and two secondary constraint functions. An optimization algorithm named the triple response surface algorithm (TRSALG) is proposed to determine the global optimum for the non-degenerate TRS. In TRSALG, the Lagrange multipliers of the secondary functions are determined using the Hooke-Jeeves search method, and the Lagrange multiplier of the radial constraint is located using the trust region method within the global optimality space. The proposed algorithm is illustrated with three examples from the quality-control literature. Results of TRSALG are also compared with those of a gradient-based method.
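The Hooke-Jeeves search used inside TRSALG is a derivative-free pattern search. A minimal sketch of the method, applied here to a simple quadratic stand-in rather than the actual TRS subproblem:

```python
def hooke_jeeves(f, x0, step=0.5, shrink=0.5, tol=1e-6, max_iter=10000):
    """Minimal Hooke-Jeeves pattern search (derivative-free minimization)."""
    def explore(base, s):
        # Probe each coordinate by +/- s, keeping improving moves.
        x = list(base)
        for i in range(len(x)):
            for d in (+s, -s):
                trial = x[:]
                trial[i] += d
                if f(trial) < f(x):
                    x = trial
                    break
        return x

    x = list(x0)
    while step > tol and max_iter > 0:
        max_iter -= 1
        new = explore(x, step)
        if f(new) < f(x):
            # Pattern move: extrapolate along the improving direction,
            # then explore around the extrapolated point.
            x, new = new, [2 * n - o for n, o in zip(new, x)]
            cand = explore(new, step)
            if f(cand) < f(x):
                x = cand
        else:
            step *= shrink  # no improvement: refine the mesh
    return x

# Hedged example on a smooth quadratic (a stand-in objective with its
# minimum at (1, -2), not a TRS instance).
xmin = hooke_jeeves(lambda v: (v[0] - 1.0) ** 2 + (v[1] + 2.0) ** 2,
                    [0.0, 0.0])
print(xmin)  # near [1.0, -2.0]
```

Because it needs only function values, the same routine can search over Lagrange multipliers where gradients of the dual function are unavailable.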
Willan, Andrew R; Eckermann, Simon
2012-10-01
Previous applications of value of information methods for determining optimal sample size in randomized clinical trials have assumed no between-study variation in mean incremental net benefit. By adopting a hierarchical model, we provide a solution for determining optimal sample size with this assumption relaxed. The solution is illustrated with two examples from the literature. Expected net gain increases with increasing between-study variation, reflecting the increased uncertainty in incremental net benefit and the reduced extent to which data are borrowed from previous evidence. Hence, a trial can become optimal even where current evidence would be considered sufficient under the assumption of no between-study variation. However, despite the expected net gain increasing, the optimal sample size in the illustrated examples is relatively insensitive to the amount of between-study variation. Further percentage losses in expected net gain were small even when choosing sample sizes that reflected widely different between-study variation. Copyright © 2011 John Wiley & Sons, Ltd.
Model selection for integrated pest management with stochasticity.
Akman, Olcay; Comar, Timothy D; Hrozencik, Daniel
2018-04-07
In Song and Xiang (2006), an integrated pest management model with periodically varying climatic conditions was introduced. In order to address a wider range of environmental effects, the authors here have embarked upon a series of studies resulting in a more flexible modeling approach. In Akman et al. (2013), the impact of randomly changing environmental conditions is examined by incorporating stochasticity into the birth pulse of the prey species. In Akman et al. (2014), the authors introduce a class of models via a mixture of two birth-pulse terms and determined conditions for the global and local asymptotic stability of the pest eradication solution. With this work, the authors unify the stochastic and mixture model components to create further flexibility in modeling the impacts of random environmental changes on an integrated pest management system. In particular, we first determine the conditions under which solutions of our deterministic mixture model are permanent. We then analyze the stochastic model to find the optimal value of the mixing parameter that minimizes the variance in the efficacy of the pesticide. Additionally, we perform a sensitivity analysis to show that the corresponding pesticide efficacy determined by this optimization technique is indeed robust. Through numerical simulations we show that permanence can be preserved in our stochastic model. Our study of the stochastic version of the model indicates that our results on the deterministic model provide informative conclusions about the behavior of the stochastic model. Copyright © 2017 Elsevier Ltd. All rights reserved.
Dühring, Sybille; Ewald, Jan; Germerodt, Sebastian; Kaleta, Christoph; Dandekar, Thomas; Schuster, Stefan
2017-07-01
The release of fungal cells following macrophage phagocytosis, called non-lytic expulsion, is reported for several fungal pathogens. On one hand, non-lytic expulsion may benefit the fungus in escaping the microbicidal environment of the phagosome. On the other hand, the macrophage could profit in terms of avoiding its own lysis and being able to undergo proliferation. To analyse the causes of non-lytic expulsion and the relevance of macrophage proliferation in the macrophage-Candida albicans interaction, we employ Evolutionary Game Theory and dynamic optimization in a sequential manner. We establish a game-theoretical model describing the different strategies of the two players after phagocytosis. Depending on the parameter values, we find four different Nash equilibria and determine the influence of the host's systemic state upon the game. As our Nash equilibria are a direct consequence of the model parameterization, we can depict several biological scenarios. A parameter region, where the host response is robust against the fungal infection, is determined. We further apply dynamic optimization to analyse whether macrophage mitosis is relevant in the host-pathogen interaction of macrophages and C. albicans. For this, we study the population dynamics of the macrophage-C. albicans interactions and the corresponding optimal controls for the macrophages, indicating the best macrophage strategy of switching from proliferation to attacking fungal cells. © 2017 The Author(s).
The anesthetic action of some polyhalogenated ethers-Monte Carlo method based QSAR study.
Golubović, Mlađan; Lazarević, Milan; Zlatanović, Dragan; Krtinić, Dane; Stoičkov, Viktor; Mladenović, Bojan; Milić, Dragan J; Sokolović, Dušan; Veselinović, Aleksandar M
2018-04-13
To date, there has been an ongoing debate about the mode of action of general anesthetics, with many biological sites postulated as targets for their action. However, postoperative nausea and vomiting are common problems in whose development inhalational agents may play a role. When a mode of action is unknown, QSAR modelling is essential in drug development. To investigate aspects of their anesthetic action, QSAR models based on the Monte Carlo method were developed for a set of polyhalogenated ethers. Until now, their anesthetic action has not been completely defined, although some hypotheses have been suggested. Therefore, a QSAR model should be developed on molecular fragments that contribute to anesthetic action. QSAR models were built on the basis of optimal molecular descriptors derived from the SMILES notation and local graph invariants, and the Monte Carlo optimization method with three random splits into training and test sets was applied for model development. Different methods, including the novel index of ideality of correlation, were applied to determine the robustness of the model and its predictive potential. The Monte Carlo optimization process proved to be an efficient in silico tool for building a robust model of good statistical quality. Molecular fragments with both positive and negative influence on anesthetic action were determined. The presented study can be useful in the search for novel anesthetics. Copyright © 2018 Elsevier Ltd. All rights reserved.
Tian, Kuang-da; Qiu, Kai-xian; Li, Zu-hong; Lü, Ya-qiong; Zhang, Qiu-ju; Xiong, Yan-mei; Min, Shun-geng
2014-12-01
The purpose of the present paper is to determine calcium and magnesium in tobacco using NIR combined with least squares-support vector machine (LS-SVM). Five hundred ground and dried tobacco samples from Qujing city, Yunnan province, China, were surveyed by a MATRIX-I spectrometer (Bruker Optics, Bremen, Germany). At the beginning of data processing, outlier samples were eliminated for stability of the model. The remaining 487 samples were divided into several calibration sets and validation sets according to a hybrid modeling strategy. Monte Carlo cross validation was used to choose the best spectral preprocessing method from multiplicative scatter correction (MSC), standard normal variate transformation (SNV), S-G smoothing, 1st derivative, etc., and their combinations. To optimize the parameters of the LS-SVM model, a multilayer grid search and 10-fold cross validation were applied. The final LS-SVM models with the optimized parameters were trained on the calibration set and assessed with 287 validation samples selected by the Kennard-Stone method. For the quantitative model of calcium in tobacco, Savitzky-Golay FIR smoothing with frame size 21 showed the best performance. The regularization parameter λ of the LS-SVM was e^16.11, while the bandwidth of the RBF kernel σ² was e^8.42. The determination coefficient for calibration (Rc²) was 0.9755 and the determination coefficient for prediction (Rp²) was 0.9422, better than the performance of the PLS model (Rc² = 0.9593, Rp² = 0.9344). For the quantitative analysis of magnesium, SNV made the regression model more precise than the other preprocessing methods. The optimized λ was e^15.25 and σ² was e^6.32. Rc² and Rp² were 0.9961 and 0.9301, respectively, better than the PLS model (Rc² = 0.9716, Rp² = 0.8924). After modeling, the whole process of NIR scanning and data analysis for one sample took only tens of seconds.
The overall results show that NIR spectroscopy combined with LS-SVM can be efficiently utilized for rapid and accurate analysis of calcium and magnesium in tobacco.
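The LS-SVM regression used for these calibration models can be sketched in a few lines. The 1-D toy data, kernel bandwidth, and regularization value below are illustrative stand-ins (not the paper's spectra or tuned hyperparameters); the linear system is the standard LS-SVM dual:

```python
import math

def rbf(x, z, sigma2):
    # Gaussian (RBF) kernel.
    return math.exp(-(x - z) ** 2 / (2 * sigma2))

def solve(A, rhs):
    # Gaussian elimination with partial pivoting for a small dense system.
    n = len(A)
    M = [row[:] + [rhs[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Toy 1-D training data (stand-in for the NIR spectra).
xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [math.sin(v) for v in xs]
gamma, sigma2 = 1e4, 1.0          # regularization and RBF bandwidth (assumed)

# LS-SVM dual system: [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y].
n = len(xs)
A = [[0.0] + [1.0] * n]
for i in range(n):
    A.append([1.0] + [rbf(xs[i], xs[j], sigma2) + (1.0 / gamma if i == j else 0.0)
                      for j in range(n)])
sol = solve(A, [0.0] + ys)
bias, alpha = sol[0], sol[1:]

def predict(x):
    return bias + sum(alpha[j] * rbf(x, xs[j], sigma2) for j in range(n))
```

With a large γ the model nearly interpolates the training data; in practice γ and σ² would be tuned by the multilayer grid search and 10-fold cross validation described above.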
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scioletti, Michael S.; Newman, Alexandra M.; Goodman, Johanna K.
Renewable energy technologies, specifically, solar photovoltaic cells, combined with battery storage and diesel generators, form a hybrid system capable of independently powering remote locations, i.e., those isolated from larger grids. If sized correctly, hybrid systems reduce fuel consumption compared to diesel generator-only alternatives. We present an optimization model for establishing a hybrid power design and dispatch strategy for remote locations, such as a military forward operating base, that models the acquisition of different power technologies as integer variables and their operation using nonlinear expressions. Our cost-minimizing, nonconvex, mixed-integer, nonlinear program contains a detailed battery model. Due to its complexities, we present linearizations, which include exact and convex under-estimation techniques, and a heuristic, which determines an initial feasible solution to serve as a “warm start” for the solver. We determine, in a few hours at most, solutions within 5% of optimality for a candidate set of technologies; these solutions closely resemble those from the nonlinear model. Lastly, our instances contain real data spanning a yearly horizon at hour fidelity and demonstrate that a hybrid system could reduce fuel consumption by as much as 50% compared to a generator-only solution.
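The dispatch side of such a hybrid system can be illustrated with a deliberately crude greedy rule (solar first, then battery, then generator). All profiles and parameters below are invented for the sketch, and the rule ignores the nonlinear battery behavior and acquisition decisions the paper's MINLP actually optimizes:

```python
import math

# Invented 24-hour profiles: sinusoidal demand (kW) and a daytime solar bump.
demand = [30 + 10 * math.sin(2 * math.pi * h / 24) for h in range(24)]
solar = [max(0.0, 40 * math.sin(math.pi * (h - 6) / 12)) for h in range(24)]
cap, soc, eta = 60.0, 30.0, 0.9    # battery capacity, state of charge (kWh), efficiency

fuel_kwh = 0.0
for h in range(24):
    net = demand[h] - solar[h]
    if net <= 0:                   # surplus solar charges the battery
        soc = min(cap, soc + eta * (-net))
    else:                          # discharge battery first, generator covers the rest
        from_batt = min(net, soc)
        soc -= from_batt
        fuel_kwh += net - from_batt

baseline = sum(demand)             # generator-only energy for the same day
savings = 1 - fuel_kwh / baseline
```

Even this naive rule cuts generator energy substantially on the invented profile; the paper's optimization additionally sizes the components and respects detailed battery physics.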
Running with horizontal pulling forces: the benefits of towing.
Grabowski, Alena M; Kram, Rodger
2008-10-01
Towing, or running with a horizontal pulling force, is a common technique used by adventure racing teams. During an adventure race, the slowest person on a team determines the team's overall performance. To improve overall performance, a faster runner tows a slower runner with an elastic cord attached to their waists. Our purpose was to create and validate a model that predicts the optimal towing force needed by two runners to achieve their best overall performance. We modeled the effects of towing forces between two runners that differ in solo 10-km performance time and/or body mass. We calculated the overall time that could be saved with towing for running distances of 10, 20, and 42.2 km based on equations from previous research. Then, we empirically tested our 10-km model on 15 runners. Towing improved overall running performance considerably and our model accurately predicted this performance improvement. For example, if two runners (a 70-kg runner with a 35-min solo 10-km time and a 70-kg runner with a 50-min solo 10-km time) maintain an optimal towing force throughout a 10-km race, they can improve overall performance by 15%, saving almost 8 min. Ultimately, the race performance time and body mass of each runner determine the optimal towing force.
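A minimal sketch of the idea, assuming each runner's sustainable pace shifts linearly with the cord force (the slopes k_tow and k_towed are invented values, not the paper's fitted parameters):

```python
# Linear speed-force model (an assumption for this sketch only).
dist = 10_000.0                    # race distance, m
v_fast = dist / (35 * 60)          # faster runner's solo pace, m/s
v_slow = dist / (50 * 60)          # slower runner's solo pace, m/s
k_tow, k_towed = 0.01, 0.01        # assumed speed change per newton, (m/s)/N

def shared_speed(force):
    # Tied by the cord, the pair moves at the slower of the two paces.
    return min(v_fast - k_tow * force, v_slow + k_towed * force)

# The optimum equalizes the two runners' paces.
f_opt = (v_fast - v_slow) / (k_tow + k_towed)
team_time = dist / shared_speed(f_opt)
time_saved = dist / v_slow - team_time
```

With these assumed slopes the optimum lands near 71 N and saves several minutes over 10 km, the same order as the improvement reported above.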
Operations research applications in nuclear energy
NASA Astrophysics Data System (ADS)
Johnson, Benjamin Lloyd
This dissertation consists of three papers; the first is published in Annals of Operations Research, the second is nearing submission to INFORMS Journal on Computing, and the third is the predecessor of a paper nearing submission to Progress in Nuclear Energy. We apply operations research techniques to nuclear waste disposal and nuclear safeguards. Although these fields are different, they allow us to showcase some benefits of using operations research techniques to enhance nuclear energy applications. The first paper, "Optimizing High-Level Nuclear Waste Disposal within a Deep Geologic Repository," presents a mixed-integer programming model that determines where to place high-level nuclear waste packages in a deep geologic repository to minimize heat load concentration. We develop a heuristic that increases the size of solvable model instances. The second paper, "Optimally Configuring a Measurement System to Detect Diversions from a Nuclear Fuel Cycle," introduces a simulation-optimization algorithm and an integer-programming model to find the best, or near-best, resource-limited nuclear fuel cycle measurement system with a high degree of confidence. Given location-dependent measurement method precisions, we (i) optimize the configuration of n methods at n locations of a hypothetical nuclear fuel cycle facility, (ii) find the most important location at which to improve method precision, and (iii) determine the effect of measurement frequency on near-optimal configurations and objective values. Our results correspond to existing outcomes but we obtain them at least an order of magnitude faster. The third paper, "Optimizing Nuclear Material Control and Accountability Measurement Systems," extends the integer program from the second paper to locate measurement methods in a larger, hypothetical nuclear fuel cycle scenario given fixed purchase and utilization budgets. 
This paper also presents two mixed-integer quadratic programming models to increase the precision of existing methods given a fixed improvement budget and to reduce the measurement uncertainty in the system while limiting improvement costs. We quickly obtain similar or better solutions compared to several intuitive analyses that take much longer to perform.
Boccaccio, Antonio; Uva, Antonio Emmanuele; Fiorentino, Michele; Mori, Giorgio; Monno, Giuseppe
2016-01-01
Functionally Graded Scaffolds (FGSs) are porous biomaterials where porosity changes in space with a specific gradient. In spite of their wide use in bone tissue engineering, possible models that relate the scaffold gradient to the mechanical and biological requirements for the regeneration of the bony tissue are currently missing. In this study we attempt to bridge the gap by developing a mechanobiology-based optimization algorithm aimed at determining the optimal graded porosity distribution in FGSs. The algorithm combines the parametric finite element model of a FGS, a computational mechano-regulation model and a numerical optimization routine. For assigned boundary and loading conditions, the algorithm iteratively builds different scaffold geometry configurations with different porosity distributions until the best microstructure geometry is reached, i.e. the geometry that maximizes the amount of bone formation. We tested different porosity distribution laws, loading conditions and scaffold Young's modulus values. For each combination of these variables, we determined the explicit equation of the porosity distribution law, i.e. the law that describes the pore dimensions as a function of the spatial coordinates, that allows the highest amounts of bone to be generated. The results show that the loading conditions significantly affect the optimal porosity distribution. For a pure compression loading, the pore dimensions are almost constant throughout the entire scaffold, and using a FGS allows the formation of amounts of bone only slightly larger than those obtainable with a homogeneous porosity scaffold. For a pure shear loading, instead, FGSs significantly increase bone formation compared to homogeneous porosity scaffolds.
Although experimental data is still necessary to properly relate the mechanical/biological environment to the scaffold microstructure, this model represents an important step towards optimizing geometry of functionally graded scaffolds based on mechanobiological criteria. PMID:26771746
Mathematical models for optimization of the centrifugal stage of a refrigerating compressor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nuzhdin, A.S.
1987-09-01
The authors describe a general approach to the creation of mathematical models of energy and head losses in the flow part of a centrifugal compressor. The mathematical model of the pressure head and efficiency of a two-section stage proposed in this paper is meant for determining its characteristics for the assigned geometric dimensions and for optimization by variance calculations. Characteristic points on the plot of velocity distribution over the margin of the vanes of the impeller and the diffuser of the centrifugal stage with a combined diffuser are presented. To assess the reliability of the mathematical model, the authors compared some calculated data with experimental data.
Planning Models for Tuberculosis Control Programs
Chorba, Ronald W.; Sanders, J. L.
1971-01-01
A discrete-state, discrete-time simulation model of tuberculosis is presented, with submodels of preventive interventions. The model allows prediction of the prevalence of the disease over the simulation period. Preventive and control programs and their optimal budgets may be planned by using the model for cost-benefit analysis: costs are assigned to the program components and disease outcomes to determine the ratio of program expenditures to future savings on medical and socioeconomic costs of tuberculosis. Optimization is achieved by allocating funds in successive increments to alternative program components in simulation and identifying those components that lead to the greatest reduction in prevalence for the given level of expenditure. The method is applied to four hypothetical disease prevalence situations. PMID:4999448
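The incremental allocation idea can be sketched as a greedy loop: repeatedly give the next budget increment to whichever program component buys the largest additional reduction in prevalence. The three components and their diminishing-returns curves below are invented for illustration, not taken from the paper:

```python
import math

# Prevalence reduction (cases averted) per component as a function of dollars
# allocated; the concave curves and dollar scales are made up for this sketch.
programs = {
    "case_finding": lambda x: 120 * (1 - math.exp(-x / 40_000)),
    "chemoprophylaxis": lambda x: 90 * (1 - math.exp(-x / 25_000)),
    "bcg_vaccination": lambda x: 60 * (1 - math.exp(-x / 60_000)),
}

budget, step = 100_000, 5_000
alloc = {name: 0 for name in programs}
for _ in range(budget // step):
    # Give the next increment to the component with the largest marginal gain.
    best = max(programs,
               key=lambda p: programs[p](alloc[p] + step) - programs[p](alloc[p]))
    alloc[best] += step

total_reduction = sum(f(alloc[p]) for p, f in programs.items())
```

Because the benefit curves are concave, this greedy increment rule is optimal among step-sized allocations, which mirrors the successive-increment procedure described in the abstract.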
NASA Astrophysics Data System (ADS)
Shi, Jin-Xing; Ohmura, Keiichiro; Shimoda, Masatoshi; Lei, Xiao-Wen
2018-07-01
In recent years, shape design of graphene sheets (GSs) by introducing topological defects to enhance their mechanical behaviors has attracted the attention of scholars. In the present work, we propose a consistent methodology for the optimal shape design of GSs using a combination of the molecular mechanics (MM) method, a non-parametric shape optimization method, the phase field crystal (PFC) method, Voronoi tessellation, and molecular dynamics (MD) simulation to maximize their fundamental frequencies. First, we model GSs as continuum frame models using a link between the MM method and continuum mechanics. Then, we carry out optimal shape design of GSs for the fundamental frequency maximization problem based on a shape optimization method developed for frames. However, the obtained optimal shapes of GSs, consisting only of hexagonal carbon rings, are unstable and do not satisfy the principle of least action, so we relocate carbon atoms on the optimal shapes by introducing topological defects using the PFC method and Voronoi tessellation. Finally, we perform structural relaxation through MD simulation to determine the final optimal shapes of GSs. We design two examples of GSs, and the results show that the fundamental frequencies of GSs can be significantly enhanced by the proposed optimal shape design methodology.
Dynamic optimization of metabolic networks coupled with gene expression.
Waldherr, Steffen; Oyarzún, Diego A; Bockmayr, Alexander
2015-01-21
The regulation of metabolic activity by tuning enzyme expression levels is crucial to sustain cellular growth in changing environments. Metabolic networks are often studied at steady state using constraint-based models and optimization techniques. However, metabolic adaptations driven by changes in gene expression cannot be analyzed by steady state models, as these do not account for temporal changes in biomass composition. Here we present a dynamic optimization framework that integrates the metabolic network with the dynamics of biomass production and composition. An approximation by a timescale separation leads to a coupled model of quasi-steady state constraints on the metabolic reactions, and differential equations for the substrate concentrations and biomass composition. We propose a dynamic optimization approach to determine reaction fluxes for this model, explicitly taking into account enzyme production costs and enzymatic capacity. In contrast to the established dynamic flux balance analysis, our approach allows predicting dynamic changes in both the metabolic fluxes and the biomass composition during metabolic adaptations. Discretization of the optimization problems leads to a linear program that can be efficiently solved. We applied our algorithm in two case studies: a minimal nutrient uptake network, and an abstraction of core metabolic processes in bacteria. In the minimal model, we show that the optimized uptake rates reproduce the empirical Monod growth for bacterial cultures. For the network of core metabolic processes, the dynamic optimization algorithm predicted commonly observed metabolic adaptations, such as a diauxic switch with a preference ranking for different nutrients, re-utilization of waste products after depletion of the original substrate, and metabolic adaptation to an impending nutrient depletion. These examples illustrate how dynamic adaptations of enzyme expression can be predicted solely from an optimization principle. 
Copyright © 2014 Elsevier Ltd. All rights reserved.
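The structure of the discretized problem can be conveyed with a stripped-down sketch in which the inner flux optimization is replaced by a closed-form Monod uptake rule, so no LP solver is needed. All parameter values are illustrative, and the real method optimizes fluxes and enzyme costs rather than assuming them:

```python
# Illustrative parameters: substrate S (mmol), biomass X (g), Monod uptake.
dt, T = 0.1, 10.0
S, X = 10.0, 0.1
v_max, Km, yield_ = 2.0, 0.5, 0.4

trajectory = []
t = 0.0
while t < T:
    v = v_max * S / (Km + S)            # closed-form stand-in for the inner LP
    if X > 0:
        v = min(v, S / (X * dt))        # cannot consume more than remains
    consumed = v * X * dt
    S = max(0.0, S - consumed)          # quasi-steady-state substrate balance
    X = X + yield_ * consumed           # biomass growth from consumed substrate
    trajectory.append((t, S, X))
    t += dt

final_S, final_X = S, X
```

The sketch reproduces the qualitative behavior the abstract describes: substrate is drawn down while biomass accumulates, with mass conserved through the yield coefficient.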
NASA Astrophysics Data System (ADS)
Fontchastagner, Julien; Lubin, Thierry; Mezani, Smaïl; Takorabet, Noureddine
2018-03-01
This paper presents a design optimization of an axial-flux eddy-current magnetic coupling. The design procedure is based on a torque formula derived from a 3D analytical model and a population algorithm method. The main objective of this paper is to determine the best design in terms of magnets volume in order to transmit a torque between two movers, while ensuring a low slip speed and a good efficiency. The torque formula is very accurate and computationally efficient, and is valid for any slip speed values. Nevertheless, in order to solve more realistic problems, and then, take into account the thermal effects on the torque value, a thermal model based on convection heat transfer coefficients is also established and used in the design optimization procedure. Results show the effectiveness of the proposed methodology.
MATLAB/Simulink Pulse-Echo Ultrasound System Simulator Based on Experimentally Validated Models.
Kim, Taehoon; Shin, Sangmin; Lee, Hyongmin; Lee, Hyunsook; Kim, Heewon; Shin, Eunhee; Kim, Suhwan
2016-02-01
A flexible clinical ultrasound system must operate with different transducers, which have characteristic impulse responses and widely varying impedances. The impulse response determines the shape of the high-voltage pulse that is transmitted and the specifications of the front-end electronics that receive the echo; the impedance determines the specification of the matching network through which the transducer is connected. System-level optimization of these subsystems requires accurate modeling of pulse-echo (two-way) response, which in turn demands a unified simulation of the ultrasonics and electronics. In this paper, this is realized by combining MATLAB/Simulink models of the high-voltage transmitter, the transmission interface, the acoustic subsystem which includes wave propagation and reflection, the receiving interface, and the front-end receiver. To demonstrate the effectiveness of our simulator, the models are experimentally validated by comparing the simulation results with the measured data from a commercial ultrasound system. This simulator could be used to quickly provide system-level feedback for an optimized tuning of electronic design parameters.
Two-phase strategy of controlling motor coordination determined by task performance optimality.
Shimansky, Yury P; Rand, Miya K
2013-02-01
A quantitative model of optimal coordination between hand transport and grip aperture has been derived in our previous studies of reach-to-grasp movements without utilizing explicit knowledge of the optimality criterion or motor plant dynamics. The model's utility for experimental data analysis has been demonstrated. Here we show how to generalize this model for a broad class of reaching-type, goal-directed movements. The model allows for measuring the variability of motor coordination and studying its dependence on movement phase. The experimentally found characteristics of that dependence imply that execution noise is low and does not affect motor coordination significantly. From those characteristics it is inferred that the cost of neural computations required for information acquisition and processing is included in the criterion of task performance optimality as a function of precision demand for state estimation and decision making. The precision demand is an additional optimized control variable that regulates the amount of neurocomputational resources activated dynamically. It is shown that an optimal control strategy in this case comprises two different phases. During the initial phase, the cost of neural computations is significantly reduced at the expense of reducing the demand for their precision, which results in speed-accuracy tradeoff violation and significant inter-trial variability of motor coordination. During the final phase, neural computations and thus motor coordination are considerably more precise to reduce the cost of errors in making a contact with the target object. The generality of the optimal coordination model and the two-phase control strategy is illustrated on several diverse examples.
Hybrid ABC Optimized MARS-Based Modeling of the Milling Tool Wear from Milling Run Experimental Data
García Nieto, Paulino José; García-Gonzalo, Esperanza; Ordóñez Galán, Celestino; Bernardo Sánchez, Antonio
2016-01-01
Milling cutters are important cutting tools used in milling machines to perform milling operations, which are prone to wear and subsequent failure. In this paper, a practical new hybrid model to predict the milling tool wear in a regular cut, as well as entry cut and exit cut, of a milling tool is proposed. The model was based on the optimization tool termed artificial bee colony (ABC) in combination with multivariate adaptive regression splines (MARS) technique. This optimization mechanism involved the parameter setting in the MARS training procedure, which significantly influences the regression accuracy. Therefore, an ABC–MARS-based model was successfully used here to predict the milling tool flank wear (output variable) as a function of the following input variables: the time duration of experiment, depth of cut, feed, type of material, etc. Regression with optimal hyperparameters was performed and a determination coefficient of 0.94 was obtained. The ABC–MARS-based model's goodness of fit to experimental data confirmed the good performance of this model. This new model also allowed us to ascertain the most influential parameters on the milling tool flank wear with a view to proposing milling machine's improvements. Finally, conclusions of this study are exposed. PMID:28787882
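A bare-bones version of the ABC search loop (employed, onlooker, and scout phases) is shown below, minimizing a 2-D sphere test function as a stand-in for the MARS cross-validation error that the paper actually optimizes:

```python
import random

random.seed(42)

def objective(x):
    # Stand-in objective: 2-D sphere function (minimum 0 at the origin).
    return sum(v * v for v in x)

dim, n_food, limit, iters = 2, 10, 20, 200
lo, hi = -5.0, 5.0
foods = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_food)]
fits = [objective(f) for f in foods]
trials = [0] * n_food
initial_best = min(fits)
best_ever = initial_best

def neighbor(i):
    # Perturb one coordinate of source i relative to a random other source.
    k = random.choice([j for j in range(n_food) if j != i])
    d = random.randrange(dim)
    cand = foods[i][:]
    cand[d] += random.uniform(-1.0, 1.0) * (foods[i][d] - foods[k][d])
    cand[d] = min(hi, max(lo, cand[d]))
    return cand

def try_improve(i):
    cand = neighbor(i)
    f = objective(cand)
    if f < fits[i]:
        foods[i], fits[i], trials[i] = cand, f, 0
    else:
        trials[i] += 1

for _ in range(iters):
    for i in range(n_food):                       # employed bees
        try_improve(i)
    weights = [1.0 / (1.0 + f) for f in fits]
    for _ in range(n_food):                       # onlookers, fitness-biased
        try_improve(random.choices(range(n_food), weights=weights)[0])
    best_ever = min(best_ever, min(fits))
    for i in range(n_food):                       # scouts abandon stale sources
        if trials[i] > limit:
            foods[i] = [random.uniform(lo, hi) for _ in range(dim)]
            fits[i], trials[i] = objective(foods[i]), 0
```

In the paper's setting the candidate vectors would encode MARS hyperparameters and the objective would be the regression error rather than this toy function.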
2014-01-01
Time, quality, and cost are three important but conflicting objectives in a building construction project. Optimizing all three is a tough challenge for project managers, since the objectives are measured in different units and trade off against one another. This paper presents a time-cost-quality optimization model that enables managers to optimize multiple objectives. The model derives from the project breakdown structure method, in which the task resources of a construction project are divided into a series of activities and further into construction labor, materials, equipment, and administration. The resources utilized in a construction activity ultimately determine its construction time, cost, and quality, and a complex time-cost-quality trade-off model is finally generated from the correlations between construction activities. A genetic algorithm is applied in the model to solve the comprehensive nonlinear time-cost-quality problem. The construction of a three-storey house serves as an example to illustrate the implementation of the model, demonstrate its advantages in optimizing the trade-off among construction time, cost, and quality, and help make winning decisions in construction practice. The computed time-cost-quality curves, presented as visual graphics in the case study, confirm that traditional cost-time assumptions are reasonable and demonstrate the sophistication of this time-cost-quality trade-off model. PMID:24672351
Global optimization framework for solar building design
NASA Astrophysics Data System (ADS)
Silva, N.; Alves, N.; Pascoal-Faria, P.
2017-07-01
The generative modeling paradigm is a shift from static models to flexible models. It describes a modeling process using functions, methods and operators. The result is an algorithmic description of the construction process. Each evaluation of such an algorithm creates a model instance, which depends on its input parameters (width, height, volume, roof angle, orientation, location). These values are normally chosen according to aesthetic aspects and style. In this study, the model's parameters are automatically generated according to an objective function. A generative model can be optimized according to its parameters; in this way, the best solution for a constrained problem is determined. Besides the establishment of an overall framework design, this work consists of the identification of different building shapes and their main parameters, the creation of an algorithmic description for these main shapes, and the formulation of an objective function with respect to a building's energy consumption (solar energy, heating and insulation). Additionally, the design of an optimization pipeline combining an energy calculation tool with a geometric scripting engine is presented. The methods developed lead to an automated and optimized 3D shape generation for the projected building (based on the desired conditions and according to specific constraints). The proposed approach will help in the construction of real buildings that consume less energy and contribute to a more sustainable world.
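The pipeline's core, a parametric shape scored by an energy objective, can be sketched with a box building and an invented heat-loss/solar-gain proxy. The floor-area constraint, coefficients, and search grids are all assumptions for illustration, not the study's energy model:

```python
import math

FLOOR_AREA, HEIGHT = 100.0, 3.0   # fixed footprint (m^2) and storey height (m)

def energy_proxy(width, orientation):
    # Crude proxy: envelope heat loss minus a south-facing solar gain credit.
    depth = FLOOR_AREA / width
    envelope = 2 * (width + depth) * HEIGHT + FLOOR_AREA   # walls + roof area
    south_gain = width * HEIGHT * max(0.0, math.cos(math.radians(orientation)))
    return envelope - 0.4 * south_gain

widths = [5.0 + 0.5 * k for k in range(21)]      # candidate widths, 5..15 m
orients = range(0, 360, 15)                      # candidate orientations, deg
best_w, best_o = min(((w, o) for w in widths for o in orients),
                     key=lambda pair: energy_proxy(*pair))
best_val = energy_proxy(best_w, best_o)
```

A real pipeline would replace energy_proxy with calls to an energy calculation tool and the exhaustive grid with a proper optimizer, but the generate-evaluate-select structure is the same.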
Contribution to the optimal shape design of two-dimensional internal flows with embedded shocks
NASA Technical Reports Server (NTRS)
Iollo, Angelo; Salas, Manuel D.
1995-01-01
We explore the practicability of optimal shape design for flows modeled by the Euler equations. We define a functional whose minimum represents the optimality condition. The gradient of the functional with respect to the geometry is calculated with the Lagrange multipliers, which are determined by solving a co-state equation. The optimization problem is then examined by comparing the performance of several gradient-based optimization algorithms. In this formulation, the flow field can be computed to an arbitrary order of accuracy. Finally, some results for internal flows with embedded shocks are presented, including a case for which the solution to the inverse problem does not belong to the design space.
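The Lagrange-multiplier machinery can be illustrated on a scalar toy "state equation" g(a, u) = a^3 + a - u = 0 in place of the Euler equations; the co-state equation then collapses to one line, and the adjoint gradient can be checked against a finite difference:

```python
def solve_state(u, a=0.0):
    # Newton iterations for the toy state equation g(a, u) = a^3 + a - u = 0.
    for _ in range(50):
        a -= (a ** 3 + a - u) / (3 * a ** 2 + 1)
    return a

TARGET = 1.0

def cost(u):
    # Functional J(u) = (a(u) - target)^2, with a(u) the state solution.
    return (solve_state(u) - TARGET) ** 2

def adjoint_gradient(u):
    a = solve_state(u)
    # Co-state: (dg/da) * lam = -dJ/da, then dJ/du = lam * dg/du with dg/du = -1.
    lam = -2.0 * (a - TARGET) / (3.0 * a ** 2 + 1.0)
    return -lam

u0 = 0.3
g_adj = adjoint_gradient(u0)
h = 1e-6
g_fd = (cost(u0 + h) - cost(u0 - h)) / (2 * h)   # central finite difference
```

The appeal of the adjoint route, here and in the paper's setting, is that the gradient costs one extra (co-state) solve regardless of how many design variables there are.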
Network Modeling and Energy-Efficiency Optimization for Advanced Machine-to-Machine Sensor Networks
Jung, Sungmo; Kim, Jong Hyun; Kim, Seoksoo
2012-01-01
Wireless machine-to-machine sensor networks with multiple radio interfaces are expected to have several advantages, including high spatial scalability, low event detection latency, and low energy consumption. Here, we propose a network model design method involving network approximation and an optimized multi-tiered clustering algorithm that maximizes node lifespan by minimizing energy consumption in a non-uniformly distributed network. Simulation results show that the cluster scales and network parameters determined with the proposed method facilitate a more efficient performance compared to existing methods. PMID:23202190
Optimising the location of antenatal classes.
Tomintz, Melanie N; Clarke, Graham P; Rigby, Janette E; Green, Josephine M
2013-01-01
To combine microsimulation and location-allocation techniques to determine antenatal class locations which minimise the distance travelled from home by potential users. Microsimulation modeling and location-allocation modeling. City of Leeds, UK. Potential users of antenatal classes. An individual-level microsimulation model was built to estimate the number of births for small areas by combining data from the UK Census 2001 and the Health Survey for England 2006. Using this model as a proxy for service demand, we then used a location-allocation model to optimize locations. Different scenarios show the advantage of combining these methods to optimize (re)locating antenatal classes and therefore reduce inequalities in accessing services for pregnant women. Use of these techniques should lead to better use of resources by allowing planners to identify optimal locations of antenatal classes which minimise women's travel. These results are especially important for health-care planners tasked with the difficult issue of targeting scarce resources in a cost-efficient, but also effective or accessible, manner. Copyright © 2011 Elsevier Ltd. All rights reserved.
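A greedy p-median heuristic captures the location-allocation step: choose class venues that minimize total birth-weighted travel distance to the nearest venue. The zone centroids, demand weights, and candidate sites below are invented, and production work would use an exact or metaheuristic solver rather than this greedy sketch:

```python
def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

# Small-area centroids with estimated births (standing in for the
# microsimulation output).
zones = [((0, 0), 40), ((1, 2), 25), ((4, 1), 30), ((5, 5), 20), ((2, 4), 35)]
candidates = [(0, 1), (3, 2), (5, 4), (2, 3)]   # possible class venues

def total_cost(chosen):
    # Demand-weighted distance from each zone to its nearest chosen venue.
    return sum(w * min(dist(z, c) for c in chosen) for z, w in zones)

p = 2
chosen, remaining = [], candidates[:]
for _ in range(p):
    best = min(remaining, key=lambda c: total_cost(chosen + [c]))
    chosen.append(best)
    remaining.remove(best)

greedy_cost = total_cost(chosen)
single_best = min(total_cost([c]) for c in candidates)
```

Each added venue can only reduce the nearest-venue distance for every zone, so the p-venue solution is never worse than the best single venue.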
NASA Technical Reports Server (NTRS)
Foyle, David C.
1993-01-01
Based on existing integration models in the psychological literature, an evaluation framework is developed to assess sensor fusion displays as might be implemented in an enhanced/synthetic vision system. The proposed framework for evaluating the operator's ability to use such systems is a normative approach: the pilot's performance with the sensor fusion image is compared to model predictions based on the pilot's performance when viewing the original component sensor images prior to fusion. This allows one to determine when a sensor fusion system leads to: poorer performance than one of the original sensor displays (clearly an undesirable system, in which the fused sensor system introduces distortion or interference); better performance than with either single sensor system alone, but at a sub-optimal level compared to model predictions; optimal performance, matching model predictions; or super-optimal performance, which may occur if the operator is able to exploit highly diagnostic 'emergent features' in the sensor fusion display that were unavailable in the original sensor displays.
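The four-way classification can be sketched with one common normative benchmark, independent-cue probability summation (this is only one of the integration models the framework could draw on, and all detection rates and the tolerance below are illustrative assumptions):

```python
def fusion_benchmark(p1, p2, p_fused, tol=0.02):
    """Compare observed fused-display detection performance against an
    independent-integration prediction (one common normative model)."""
    p_pred = 1 - (1 - p1) * (1 - p2)   # probability summation
    if p_fused < max(p1, p2):
        return "interference"           # worse than a single sensor
    if p_fused < p_pred - tol:
        return "sub-optimal"
    if p_fused > p_pred + tol:
        return "super-optimal"          # emergent features exploited
    return "optimal"

# e.g. two sensors detect the target 60% and 70% of the time,
# so the independent-integration prediction is 1 - 0.4*0.3 = 0.88
verdict = fusion_benchmark(0.6, 0.7, 0.88)
```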
Pfefer, T Joshua; Wang, Quanzeng; Drezek, Rebekah A
2011-11-01
Computational approaches for simulation of light-tissue interactions have provided extensive insight into biophotonic procedures for diagnosis and therapy. However, few studies have addressed simulation of time-resolved fluorescence (TRF) in tissue and none have combined Monte Carlo simulations with standard TRF processing algorithms to elucidate approaches for cancer detection in layered biological tissue. In this study, we investigate how illumination-collection parameters (e.g., collection angle and source-detector separation) influence the ability to measure fluorophore lifetime and tissue layer thickness. Decay curves are simulated with a Monte Carlo TRF light propagation model. Multi-exponential iterative deconvolution is used to determine lifetimes and fractional signal contributions. The ability to detect changes in mucosal thickness is optimized by probes that selectively interrogate regions superficial to the mucosal-submucosal boundary. Optimal accuracy in simultaneous determination of lifetimes in both layers is achieved when each layer contributes 40-60% of the signal. These results indicate that depth-selective approaches to TRF have the potential to enhance disease detection in layered biological tissue and that modeling can play an important role in probe design optimization.
NASA Technical Reports Server (NTRS)
Momoh, James; Chattopadhyay, Deb; Basheer, Omar Ali AL
1996-01-01
The space power system has two sources of energy: photovoltaic blankets and batteries. The on-board optimal power management problem involves two broad operations. The first, off-line power scheduling, determines the load allocation schedule for the next several hours based on forecasts of load and solar power availability; this part of the study places less emphasis on computational speed and more on the optimality of the solution. The second operation, on-line power rescheduling, is needed when a contingency occurs: the loads must be optimally rescheduled to minimize 'unused' or 'wasted' energy while keeping priority on certain types of loads and minimizing disturbance of the original optimal schedule determined in the first-stage off-line study. The computational performance of the on-line 'rescheduler' is an important criterion and plays a critical role in the selection of the appropriate tool. The Howard University Center for Energy Systems and Control has developed a hybrid optimization/expert-system-based power management program. The pre-scheduler was developed using a non-linear multi-objective optimization technique called the Outer Approximation method and implemented in the General Algebraic Modeling System (GAMS). The optimization model can handle multiple conflicting objectives, e.g., maximizing energy utilization and minimizing the variation of load over a day, and incorporates several complex interactions between the loads in a space system. The rescheduling is performed by an expert system developed in PROLOG, which uses a rule base to reallocate loads in emergency conditions such as a shortage of power due to solar array failure, an increase of the base load, the addition of a new activity, or the repetition of an old activity.
Both modules handle decision making on battery charging and discharging and on the allocation of loads over a one-day horizon divided into 10-minute intervals. The models have been extensively tested using a case study for the Space Station Freedom, and the results of the case study will be presented. Several future enhancements of the pre-scheduler and the 'rescheduler' have been outlined, including a graphic analyzer for the on-line module, probabilistic considerations, and the spatial location of the loads and their connectivity using a direct current (DC) load flow model.
Chung, Misook L; Bakas, Tamilyn; Plue, Laura D; Williams, Linda S
2016-01-01
Depressive symptoms are common in stroke survivors and their family caregivers. Given the interdependent relationship between the members of dyads in poststroke management, improving depressive symptoms in dyads may depend on their partner's characteristics. Self-esteem, optimism, and perceived control, all known to be associated with depressive symptoms in an individual, may also contribute to their partner's depressive symptoms. The purpose of this study is to examine actor and partner effects of self-esteem, optimism, and perceived control on depression in stroke survivors and their spousal caregivers. A total of 112 ischemic stroke survivors (78% white, 34% women; mean age, 62.5 ± 12.3 years) and their spouses (mean age, 60.6 ± 12.9 years) completed surveys in which depressive symptoms, self-esteem, optimism, and perceived control were assessed using the Patient Health Questionnaire, the Rosenberg Self-esteem Scale, the Revised Life Orientation Test, and the Sense of Control Scale. Multilevel modeling, actor-partner interdependence model regression was used to determine influences on depressive symptoms within the dyad. Individuals with lower self-esteem, optimism, and perceived control had higher levels of depressive symptoms. Stroke survivors whose spouses had lower levels of self-esteem (B = -0.338, P < .001) and optimism (B = -0.361, P < .027) tended to have higher levels of depressive symptoms. Spouses whose stroke survivors had lower levels of self-esteem (B = -0.047, P = .036) also had higher levels of depressive symptoms. We found significant partner effects of self-esteem on depression for both members and a partner effect of optimism on the patients' depressive symptoms. These findings suggest that further research is needed to determine if dyadic interventions may help to improve self-esteem, optimism, and depressive symptoms in both patients and their caregivers.
NASA Astrophysics Data System (ADS)
Knapp, Julia L. A.; Cirpka, Olaf A.
2017-06-01
The complexity of hyporheic flow paths requires reach-scale models of solute transport in streams that are flexible in their representation of the hyporheic passage. We use a model that couples advective-dispersive in-stream transport to hyporheic exchange with a shape-free distribution of hyporheic travel times. The model also accounts for two-site sorption and transformation of reactive solutes. The coefficients of the model are determined by fitting concurrent stream-tracer tests of conservative (fluorescein) and reactive (resazurin/resorufin) compounds. The flexibility of the shape-free model gives rise to multiple local minima of the objective function in parameter estimation, thus requiring global-search algorithms, a task hindered by the large number of parameter values to be estimated. We present a local-in-global optimization approach, in which we use a Markov-chain Monte Carlo method as the global-search method to estimate a set of in-stream and hyporheic parameters. Nested therein, we infer the shape-free distribution of hyporheic travel times by a local Gauss-Newton method. The overall approach is independent of the initial guess and provides the joint posterior distribution of all parameters. We apply the described local-in-global optimization method to recorded tracer breakthrough curves of three consecutive stream sections, and infer section-wise hydraulic parameter distributions to analyze how hyporheic exchange processes differ between the stream sections.
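The nested local-in-global structure can be sketched on a toy problem: an outer global search over a nonlinear parameter (here plain random search standing in for MCMC), with an inner closed-form least-squares fit of the linear part at each candidate (standing in for the local Gauss-Newton step). The breakthrough-curve model, parameter ranges, and data below are all invented for illustration:

```python
import math, random

def simulate(v, amp, times):
    """Toy breakthrough curve: a Gaussian pulse whose arrival
    time (10/v) depends nonlinearly on the velocity parameter v."""
    return [amp * math.exp(-0.5 * ((t - 10.0 / v) / 1.5) ** 2) for t in times]

def local_in_global(times, observed, n_outer=2000, seed=3):
    """Outer global random search over v; inner closed-form least
    squares for the linear amplitude (mirrors the nested
    global/local split of the estimation problem)."""
    rng = random.Random(seed)
    best = (float("inf"), None, None)
    for _ in range(n_outer):
        v = rng.uniform(0.2, 5.0)
        shape = simulate(v, 1.0, times)
        denom = sum(s * s for s in shape) or 1e-12
        amp = sum(s * o for s, o in zip(shape, observed)) / denom  # inner LSQ
        sse = sum((amp * s - o) ** 2 for s, o in zip(shape, observed))
        if sse < best[0]:
            best = (sse, v, amp)
    return best

times = [i * 0.5 for i in range(60)]
observed = simulate(v=2.0, amp=3.0, times=times)   # synthetic "tracer" data
sse, v_hat, amp_hat = local_in_global(times, observed)
```

Solving the linear sub-problem exactly at each outer candidate keeps the global search low-dimensional, which is the point of the nested design.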
NASA Astrophysics Data System (ADS)
Vitório, Paulo Cezar; Leonel, Edson Denner
2017-12-01
The structural design must ensure suitable working conditions by satisfying safety and economy criteria. However, the optimal solution is not easily found, because these conditions depend on the bodies' dimensions, material strength and structural system configuration. In this regard, topology optimization aims to achieve the optimal structural geometry, i.e. the shape that leads to the minimum requirement of material while respecting constraints related to the stress state at each material point. The present study applies an evolutionary approach for determining the optimal geometry of 2D structures using the coupling of the boundary element method (BEM) and the level set method (LSM). The proposed algorithm consists of mechanical modelling, a topology optimization approach and structural reconstruction. The mechanical model is composed of singular and hyper-singular BEM algebraic equations. The topology optimization is performed through the LSM. Internal and external geometries are evolved by the LS function evaluated at its zero level. The reconstruction process concerns the remeshing: because the structural boundary moves at each iteration, the body's geometry changes and, consequently, a new mesh has to be defined. The proposed algorithm, based on the direct coupling of these approaches, introduces internal cavities automatically during the optimization process according to the intensity of the von Mises stress. The developed optimization model was applied to two benchmark problems from the literature. Good agreement was observed among the results, which demonstrates its efficiency and accuracy.
Patel, Nitin R; Ankolekar, Suresh; Antonijevic, Zoran; Rajicic, Natasa
2013-05-10
We describe a value-driven approach to optimizing pharmaceutical portfolios. Our approach incorporates inputs from research and development and commercial functions by simultaneously addressing internal and external factors. This approach differentiates itself from current practices in that it recognizes the impact of study design parameters, sample size in particular, on the portfolio value. We develop an integer programming (IP) model as the basis for Bayesian decision analysis to optimize phase 3 development portfolios using expected net present value as the criterion. We show how this framework can be used to determine optimal sample sizes and trial schedules to maximize the value of a portfolio under budget constraints. We then illustrate the remarkable flexibility of the IP model to answer a variety of 'what-if' questions that reflect situations that arise in practice. We extend the IP model to a stochastic IP model to incorporate uncertainty in the availability of drugs from earlier development phases for phase 3 development in the future. We show how to use stochastic IP to re-optimize the portfolio development strategy over time as new information accumulates and budget changes occur.
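At its core, selecting trials to maximize expected net present value under a budget is a 0/1 knapsack, which an IP solver handles at scale; for a handful of candidates it can be shown by exhaustive search. The drug names, costs, and expected NPVs below are illustrative, not from the paper:

```python
def best_portfolio(trials, budget):
    """Choose the subset of phase-3 designs maximizing total expected
    NPV subject to a development budget (0/1 knapsack by exhaustive
    search; an IP solver replaces this loop for real portfolios)."""
    n = len(trials)
    best_val, best_set = 0.0, ()
    for mask in range(1 << n):
        chosen = [t for i, t in enumerate(trials) if mask >> i & 1]
        cost = sum(c for _, c, _ in chosen)
        if cost <= budget:
            val = sum(v for _, _, v in chosen)
            if val > best_val:
                best_val = val
                best_set = tuple(name for name, _, _ in chosen)
    return best_set, best_val

# (drug, trial cost, expected NPV) -- illustrative figures only
trials = [("D1", 40, 90), ("D2", 60, 120), ("D3", 30, 50)]
portfolio, value = best_portfolio(trials, budget=100)
```

The paper's model additionally makes sample size a decision variable, so each drug contributes several mutually exclusive design options rather than a single yes/no item.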
NASA Astrophysics Data System (ADS)
Lozia, Z.; Zdanowicz, P.
2016-09-01
The paper presents the optimization of damping in the passive suspension system of a motor vehicle moving rectilinearly at constant speed on a road with a rough surface of random irregularities, described according to the ISO classification. Two quarter-car 2DoF models, linear and non-linear, were used; in the latter, nonlinearities of the spring characteristics of the suspension system and pneumatic tyres, sliding friction in the suspension system, and wheel lift-off were taken into account. The smoothing properties of vehicle tyres were represented in both models. The calculations were carried out for three roads of different quality, simulating four vehicle speeds. Statistical measures of vertical vehicle body vibrations and of changes in the vertical tyre/road contact force were used as the criteria of system optimization and model comparison. The design suspension displacement limit was also taken into account. The optimum suspension damping coefficient was determined, and the impact of undesirable sliding friction in the suspension system on the calculation results was estimated. The results obtained make it possible to evaluate the impact of the structure and complexity of the model used on the results of the optimization.
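The linear variant of this workflow can be sketched as a damping sweep over a 2DoF quarter-car simulation. Everything here is a simplification: invented masses and stiffnesses, a random-walk road instead of an ISO spectrum, and RMS body acceleration as the single criterion (the paper weighs several statistical measures and a displacement limit):

```python
import math, random

def quarter_car_rms(c, dt=1e-3, n=20000, seed=1):
    """RMS body acceleration of a linear 2DoF quarter-car driven by
    a random road profile, for suspension damping c (N s/m)."""
    ms, mu, ks, kt = 300.0, 40.0, 20000.0, 180000.0  # illustrative data
    zs = vs = zu = vu = zr = 0.0
    rng, acc2 = random.Random(seed), 0.0
    for _ in range(n):
        zr += rng.gauss(0.0, 1e-4)           # road as a random walk
        fs = ks * (zu - zs) + c * (vu - vs)  # suspension force on body
        ft = kt * (zr - zu)                  # tyre force on wheel
        a_s, a_u = fs / ms, (ft - fs) / mu
        vs += a_s * dt; zs += vs * dt        # semi-implicit Euler step
        vu += a_u * dt; zu += vu * dt
        acc2 += a_s * a_s
    return math.sqrt(acc2 / n)

# sweep the damping coefficient and keep the best value
candidates = [500, 1000, 1500, 2000, 3000, 5000]
c_opt = min(candidates, key=quarter_car_rms)
```

A second criterion on tyre/road contact-force variation would be accumulated from `ft` in the same loop and combined with the comfort measure.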
NASA Astrophysics Data System (ADS)
Olivia, G.; Santoso, A.; Prayogo, D. N.
2017-11-01
Nowadays, the level of competition between supply chains is getting tighter, and a good coordination system between supply chain members is crucial in addressing it. This paper focuses on developing a coordination model between a single supplier and multiple buyers in a supply chain. The proposed optimization model determines the optimal number of deliveries from the supplier to the buyers in order to minimize the total cost over a planning horizon. The components of the total supply chain cost consist of transportation costs, handling costs of the supplier and buyers, and stock-out costs. In the proposed optimization model, the supplier can supply various types of items to retailers whose item demand patterns are probabilistic. A sensitivity analysis of the proposed model was conducted to test the effect of changes in transportation costs, handling costs and the production capacity of the supplier. The results of the sensitivity analysis showed that changes in transportation costs, handling costs and production capacity significantly influence the optimal number of deliveries of each item to the buyers.
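The trade-off behind the optimal delivery count can be sketched for a single item at a single buyer: more deliveries raise transport cost but cut the holding and stock-out terms. The cost structure and all figures below are assumptions for illustration, not the paper's multi-item probabilistic model:

```python
def total_cost(n, demand, transport, holding, stockout_rate):
    """Planning-horizon cost of making n equal deliveries: a fixed
    transport cost per trip, holding cost on the average cycle stock,
    and a stock-out penalty that shrinks with delivery frequency."""
    cycle_stock = demand / (2.0 * n)          # average inventory held
    return n * transport + holding * cycle_stock + stockout_rate / n

def optimal_deliveries(demand, transport, holding, stockout_rate, n_max=52):
    """Search integer delivery counts for the cost minimizer."""
    return min(range(1, n_max + 1),
               key=lambda n: total_cost(n, demand, transport,
                                        holding, stockout_rate))

# illustrative figures for one item at one retailer
n_opt = optimal_deliveries(demand=1200.0, transport=50.0,
                           holding=2.0, stockout_rate=400.0)
```

Raising `transport` pushes `n_opt` down and raising the holding or stock-out terms pushes it up, which is the kind of behaviour the paper's sensitivity analysis examines.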
Application of SNODAS and hydrologic models to enhance entropy-based snow monitoring network design
NASA Astrophysics Data System (ADS)
Keum, Jongho; Coulibaly, Paulin; Razavi, Tara; Tapsoba, Dominique; Gobena, Adam; Weber, Frank; Pietroniro, Alain
2018-06-01
Snow has a unique characteristic in the water cycle, that is, snow falls during the entire winter season, but the discharge from snowmelt is typically delayed until the melting period and occurs in a relatively short period. Therefore, reliable observations from an optimal snow monitoring network are necessary for an efficient management of snowmelt water for flood prevention and hydropower generation. The Dual Entropy and Multiobjective Optimization method is applied to design snow monitoring networks in La Grande River Basin in Québec and Columbia River Basin in British Columbia. While the networks are optimized to have the maximum amount of information with minimum redundancy based on entropy concepts, this study extends the traditional entropy applications to the hydrometric network design by introducing several improvements. First, several data quantization cases and their effects on the snow network design problems were explored. Second, the applicability of the Snow Data Assimilation System (SNODAS) products as synthetic datasets of potential stations was demonstrated in the design of the snow monitoring network of the Columbia River Basin. Third, beyond finding the Pareto-optimal networks from the entropy-based multi-objective optimization, the networks obtained for La Grande River Basin were further evaluated by applying three hydrologic models. The calibrated hydrologic models simulated discharges using the updated snow water equivalent data from the Pareto-optimal networks. Then, the model performances for high flows were compared to determine the best optimal network for enhanced spring runoff forecasting.
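The "maximum information, minimum redundancy" principle can be sketched with a toy greedy routine over quantized station records: at each step, add the station that raises joint entropy most, which automatically penalises stations redundant with those already chosen. The records and bins below are invented; the study itself uses the Dual Entropy and Multiobjective Optimization framework rather than this greedy shortcut:

```python
from collections import Counter
import math

def joint_entropy(records, stations):
    """Shannon entropy (bits) of the joint quantized record of the
    chosen stations; records[t][s] is the binned value at time t."""
    counts = Counter(tuple(r[s] for s in stations) for r in records)
    n = len(records)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def greedy_network(records, n_stations, size):
    """Add, one at a time, the station that raises joint entropy most."""
    chosen = []
    for _ in range(size):
        rest = [s for s in range(n_stations) if s not in chosen]
        chosen.append(max(rest,
                          key=lambda s: joint_entropy(records, chosen + [s])))
    return chosen

# toy quantized SWE records at 3 stations; station 1 duplicates station 0
records = [(0, 0, 1), (1, 1, 0), (0, 0, 0), (1, 1, 1)]
picked = greedy_network(records, n_stations=3, size=2)
```

Station 1 carries no information beyond station 0, so the greedy step prefers station 2; the choice of quantization bins changes these entropies, which is why the paper studies quantization cases explicitly.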
NASA Astrophysics Data System (ADS)
Nuh, M. Z.; Nasir, N. F.
2017-08-01
Biodiesel is a fuel comprised of mono-alkyl esters of long-chain fatty acids derived from renewable lipid feedstocks, such as vegetable oil and animal fat. Biodiesel production is a complex process which needs systematic design and optimization. However, no case study has applied the process system engineering (PSE) element of superstructure optimization to the batch process, which involves complex problems and uses mixed-integer nonlinear programming (MINLP). PSE offers a solution to complex engineering systems by enabling the use of viable tools and techniques to better manage and comprehend the complexity of the system. This study aims, first, to apply PSE tools to the simulation and optimization of the biodiesel process and to develop mathematical models for components of the plant for cases A, B and C using published kinetic data; second, to determine an economic analysis for biodiesel production, focusing on heterogeneous catalysts; and finally, to develop the superstructure for biodiesel production using a heterogeneous catalyst. The mathematical models are developed from the superstructure, and the resulting mixed-integer non-linear model is solved and the economic analysis estimated using MATLAB software. The optimization with the objective of minimizing the annual production cost of the batch process yields 23.2587 million USD for case C. Overall, this process system engineering study has optimized the modelling, design and cost estimation, solving the complex batch production and processing of biodiesel.
Metaheuristic simulation optimisation for the stochastic multi-retailer supply chain
NASA Astrophysics Data System (ADS)
Omar, Marina; Mustaffa, Noorfa Haszlinna H.; Othman, Siti Norsyahida
2013-04-01
Supply Chain Management (SCM) is an important activity in all producing facilities and in many organizations, enabling vendors, manufacturers and suppliers to interact gainfully and plan the flow of goods and services optimally. Simulation optimization approaches are now widely used in research to find the best solution for decision-making processes in SCM, which generally face great complexity, with large sources of uncertainty and various decision factors. Metaheuristic methods are the most popular simulation optimization approach. However, very few studies have applied this approach to optimizing simulation models for supply chains. Thus, this paper evaluates the performance of a metaheuristic method for stochastic supply chains in determining the best flexible inventory replenishment parameters that minimize the total operating cost. The simulation optimization model is based on the Bees Algorithm (BA), which has been widely applied in engineering applications such as training neural networks for pattern recognition. BA is a new member of the meta-heuristics family; it models the natural food-foraging behavior of honey bees, which use several mechanisms, like the waggle dance, to optimally locate food sources and to search for new ones. This makes them a good candidate for developing new algorithms for solving optimization problems. The model considers an outbound centralised distribution system consisting of one supplier and three identical retailers, whose demands are assumed to be independent and identically distributed, with unlimited supply capacity at the supplier.
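The Bees Algorithm itself is simple to sketch: scout bees sample the search space at random, recruit bees exploit neighbourhoods ("flower patches") of the best sites, and the patch size shrinks over iterations. The version below is a minimal sketch with untuned, illustrative parameters, minimizing a sphere test function as a stand-in for the simulated total operating cost:

```python
import random

def bees_algorithm(f, dim, bounds, n_scouts=20, n_best=5, n_recruits=10,
                   patch=0.5, iters=60, seed=0):
    """Minimal Bees Algorithm sketch: random scouting plus local
    neighbourhood search around the best sites, with patch shrinkage."""
    rng = random.Random(seed)
    lo, hi = bounds
    rand_site = lambda: [rng.uniform(lo, hi) for _ in range(dim)]
    sites = [rand_site() for _ in range(n_scouts)]
    for _ in range(iters):
        sites.sort(key=f)
        new_sites = []
        for site in sites[:n_best]:          # local (flower-patch) search
            best = min((
                [min(max(x + rng.uniform(-patch, patch), lo), hi)
                 for x in site]
                for _ in range(n_recruits)), key=f)
            new_sites.append(min([site, best], key=f))  # keep improvements
        # remaining bees scout new random sites (global search)
        new_sites += [rand_site() for _ in range(n_scouts - n_best)]
        sites, patch = new_sites, patch * 0.95
    return min(sites, key=f)

# minimize a 2-D sphere function as a stand-in for the simulation cost
sphere = lambda x: sum(v * v for v in x)
best = bees_algorithm(sphere, dim=2, bounds=(-5.0, 5.0))
```

In the simulation-optimization setting, `f` would instead run the supply-chain simulation for a candidate set of replenishment parameters and return the estimated total operating cost.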
Mwanga, Gasper G; Haario, Heikki; Capasso, Vicenzo
2015-03-01
The main scope of this paper is to study optimal control practices for malaria, by discussing the implementation of a catalog of optimal control strategies in the presence of parameter uncertainties, which is typical of infectious disease data. We focus on a deterministic mathematical model for the transmission of malaria, including in particular asymptomatic carriers and two age classes in the human population. A partial qualitative analysis of the relevant ODE system has been carried out, leading to a realistic threshold parameter. For the deterministic model under consideration, four possible control strategies have been analyzed: the use of long-lasting treated mosquito nets, indoor residual spraying, screening, and treatment of symptomatic and asymptomatic individuals. The numerical results show that using optimal control the disease can be brought to a stable disease-free equilibrium when all four controls are used. The Incremental Cost-Effectiveness Ratio (ICER) for all possible combinations of the disease-control measures is determined. The numerical simulations of the optimal control in the presence of parameter uncertainty demonstrate its robustness: the main conclusions of the optimal control remain unchanged, even if inevitable variability remains in the control profiles. The results provide a promising framework for the design of cost-effective strategies for disease control with multiple interventions, even under considerable uncertainty in model parameters.
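The ICER comparison across strategy combinations can be sketched as follows. The costs and effects below are invented, and in this sketch ratios are taken between successive non-dominated options only (the cheapest option's own ICER would be computed against doing nothing):

```python
def icers(strategies):
    """Incremental cost-effectiveness ratios: sort strategies by cost,
    drop dominated ones (costlier but less effective than a cheaper
    option), then take delta-cost / delta-effect between successive
    options on the frontier."""
    frontier = []
    for s in sorted(strategies, key=lambda s: s[1]):   # by cost
        if not frontier or s[2] > frontier[-1][2]:     # strictly more effect
            frontier.append(s)
    out = {}
    for prev, cur in zip(frontier, frontier[1:]):
        out[cur[0]] = (cur[1] - prev[1]) / (cur[2] - prev[2])
    return out

# (strategy, cost, infections averted) -- illustrative numbers
strategies = [("nets", 1000.0, 40.0),
              ("nets+spray", 2200.0, 70.0),
              ("all four", 4200.0, 95.0),
              ("spray only", 2500.0, 55.0)]   # dominated by "nets+spray"
ratios = icers(strategies)
```

A decision maker then funds options up the frontier until the next ICER exceeds the willingness-to-pay threshold.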
Determination of a temperature sensor location for monitoring weld pool size in GMAW
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boo, K.S.; Cho, H.S.
1994-11-01
This paper describes a method of determining the optimal sensor location to measure weldment surface temperature, which has a close correlation with weld pool size in the gas metal arc (GMA) welding process. Due to the inherent complexity and nonlinearity of the GMA welding process, the relationship between the weldment surface temperature and the weld pool size varies with the point of measurement. This necessitates an optimal selection of the measurement point to minimize the effect of process nonlinearity in estimating the weld pool size from the measured temperature. To determine the optimal sensor location on the top surface of the weldment, the correlation between the measured temperature and the weld pool size is analyzed. The analysis is done by calculating the correlation function, which is based upon an analytical temperature distribution model. To validate the optimal sensor location, a series of GMA bead-on-plate welds are performed on a medium-carbon steel under various welding conditions. A comparison study is given in detail based upon the simulation and experimental results.
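The selection criterion, choosing the measurement point whose temperature correlates most strongly with pool size, can be sketched directly. The temperature and pool-size series below are synthetic stand-ins; the paper derives the correlation from an analytical temperature distribution model rather than from sampled data:

```python
def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def best_sensor_location(temps_by_point, pool_size):
    """Pick the measurement point whose temperature history correlates
    most strongly (in magnitude) with the weld pool size."""
    return max(temps_by_point,
               key=lambda p: abs(pearson(temps_by_point[p], pool_size)))

pool = [4.0, 4.5, 5.2, 4.8, 5.6, 6.0]                # pool widths (mm)
temps_by_point = {
    "5 mm from weld line":  [510, 530, 565, 548, 590, 610],  # tracks pool
    "15 mm from weld line": [205, 212, 208, 230, 215, 222],  # weakly related
}
loc = best_sensor_location(temps_by_point, pool)
```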
Display/control requirements for VTOL aircraft
NASA Technical Reports Server (NTRS)
Hoffman, W. C.; Curry, R. E.; Kleinman, D. L.; Hollister, W. M.; Young, L. R.
1975-01-01
Quantitative metrics were determined for system control performance, workload for control, monitoring performance, and workload for monitoring. Pilot tasks were allocated for the navigation and guidance of automated commercial V/STOL aircraft in all weather conditions, using an optimal control model of the human operator to determine display elements and design.
Optimizing Sustainable Geothermal Heat Extraction
NASA Astrophysics Data System (ADS)
Patel, Iti; Bielicki, Jeffrey; Buscheck, Thomas
2016-04-01
Geothermal heat, though renewable, can be depleted over time if the rate of heat extraction exceeds the natural rate of renewal. As such, the sustainability of a geothermal resource is typically viewed as preserving the energy of the reservoir by weighing heat extraction against renewability. But heat that is extracted from a geothermal reservoir is used to provide a service to society and an economic gain to the provider of that service. For heat extraction used for market commodities, sustainability entails balancing the rate at which the reservoir temperature renews with the rate at which heat is extracted and converted into economic profit. We present a model for managing geothermal resources that combines simulations of geothermal reservoir performance with natural resource economics in order to develop optimal heat mining strategies. Similar optimal control approaches have been developed for managing other renewable resources, like fisheries and forests. We used the Non-isothermal Unsaturated-saturated Flow and Transport (NUFT) model to simulate the performance of a sedimentary geothermal reservoir under a variety of geologic and operational situations. The results of NUFT are integrated into the optimization model to determine the extraction path over time that maximizes the net present profit given the performance of the geothermal resource. Results suggest that the discount rate that is used to calculate the net present value of economic gain is a major determinant of the optimal extraction path, particularly for shallower and cooler reservoirs, where the regeneration of energy due to the natural geothermal heat flux is a smaller percentage of the amount of energy that is extracted from the reservoir.
Adjoint Sensitivity Method to Determine Optimal Set of Stations for Tsunami Source Inversion
NASA Astrophysics Data System (ADS)
Gusman, A. R.; Hossen, M. J.; Cummins, P. R.; Satake, K.
2017-12-01
We applied the adjoint sensitivity technique in tsunami science for the first time to determine an optimal set of stations for a tsunami source inversion. The adjoint sensitivity (AS) method has been used in numerical weather prediction to find optimal locations for adaptive observations. We implemented this technique for Green's-function-based Time Reverse Imaging (GFTRI), which has recently been used in tsunami source inversion to reconstruct the initial sea-surface displacement, known as the tsunami source model. This method has the same source representation as the traditional least-squares (LSQ) source inversion method, in which a tsunami source is represented by dividing the source region into a regular grid of "point" sources. For each of these, a Green's function (GF) is computed using a basis function for initial sea-surface displacement whose amplitude is concentrated near the grid point. We applied the AS method to the Samoa earthquake tsunami of 29 September 2009 in the southwest Pacific, near the Tonga trench. Many studies show that this earthquake was a doublet associated with both normal faulting in the outer-rise region and thrust faulting on the subduction interface. To estimate the tsunami source model for this complex event, we initially considered 11 observations consisting of 5 tide gauges and 6 DART buoys. After implementing the AS method, we found an optimal set of 8 stations. Inversion with this optimal set provides better results in terms of waveform fitting and a source model that shows both sub-events, associated with normal and thrust faulting.
Frustration in protein elastic network models
NASA Astrophysics Data System (ADS)
Lezon, Timothy; Bahar, Ivet
2010-03-01
Elastic network models (ENMs) are widely used for studying the equilibrium dynamics of proteins. The most common approach in ENM analysis is to adopt a uniform force constant or a non-specific distance-dependent function to represent the force constant strength. Here we discuss the influence of sequence and structure in determining the effective force constants between residues in ENMs. Using a novel method based on entropy maximization, we optimize the force constants such that they exactly reproduce a subset of experimentally determined pair covariances for a set of proteins. We analyze the optimized force constants in terms of amino acid types, distances, contact order and secondary structure, and we demonstrate that including frustrated interactions in the ENM is essential for accurately reproducing the global modes in the middle of the frequency spectrum.
NASA Technical Reports Server (NTRS)
Phatak, A. V.; Kessler, K. M.
1975-01-01
The selection of the structure of optimal-control-type models for the human gunner in an anti-aircraft artillery system is considered. Several structures within the LQG framework may be formulated; two basic types are considered: (1) kth-derivative controllers, and (2) proportional-integral-derivative (P-I-D) controllers. It is shown that a suitable criterion for model structure determination can be based on the ensemble statistics of the tracking error. In the case when the ensemble steady-state tracking error is zero, it is suggested that a P-I-D controller formulation be used in preference to the kth-derivative controller.
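The distinguishing property behind this criterion, that integral action drives steady-state tracking error to zero while a purely proportional/derivative law leaves an offset, can be shown on a toy plant. This is only an illustration of the control-structure difference, not the paper's LQG gunner model; the plant, gains, and step sizes are invented:

```python
def track_step(kp, ki, steps=4000, dt=0.01, target=1.0):
    """Discrete simulation of a first-order plant x' = -x + u under
    proportional-integral control; returns the final tracking error.
    With ki = 0 this reduces to a purely proportional law, which
    leaves a residual offset of target/(1 + kp)."""
    x = integ = 0.0
    for _ in range(steps):
        e = target - x
        integ += e * dt
        u = kp * e + ki * integ
        x += (-x + u) * dt
    return target - x

e_p  = track_step(kp=4.0, ki=0.0)   # proportional only: offset remains
e_pi = track_step(kp=4.0, ki=2.0)   # integral action drives error to ~0
```

With kp = 4 the proportional-only law settles at x = kp/(1 + kp) = 0.8, an offset of 0.2, whereas the P-I controller removes it; this is why a zero ensemble steady-state error points toward the P-I-D structure.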
NASA Technical Reports Server (NTRS)
Kuchynka, P.; Laskar, J.; Fienga, A.
2011-01-01
Mars ranging observations are available over the past 10 years with an accuracy of a few meters. Such precise measurements of the Earth-Mars distance provide valuable constraints on the masses of the asteroids perturbing both planets. Today more than 30 asteroid masses have thus been estimated from planetary ranging data (see [1] and [2]). Obtaining unbiased mass estimates is nevertheless difficult. Various systematic errors can be introduced by imperfect reduction of spacecraft tracking observations to planetary ranging data. The large number of asteroids and the limited a priori knowledge of their masses are also obstacles to parameter selection. Fitting the mass of a negligible perturber in a model, or on the contrary omitting a significant perturber, will induce important bias in the determined asteroid masses. In this communication, we investigate a simplified version of the mass determination problem. Instead of planetary ranging observations from spacecraft or radar data, we consider synthetic ranging observations generated with the INPOP [2] ephemeris for a test model containing 25,000 asteroids. We then suggest a method for optimal parameter selection and estimation in this simplified framework.
Determinant quantum Monte Carlo study of d-wave pairing in the plaquette Hubbard Hamiltonian
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ying, T.; Mondaini, R.; Sun, X. D.
2014-08-13
We used determinant quantum Monte Carlo (DQMC) to determine the pairing and magnetic response for a Hubbard model built up from four-site clusters: a two-dimensional square lattice consisting of elemental 2x2 plaquettes with hopping t and on-site repulsion U, coupled by an interplaquette hopping t' ≤ t. Superconductivity in this geometry has previously been studied by a variety of analytic and numerical methods, with differing conclusions concerning whether or not the inhomogeneous hopping raises the pairing correlations and transition temperature near half-filling. For U/t = 4, DQMC indicates an optimal t'/t ≈ 0.4 at which the pairing vertex is most attractive. We also found that the optimal t'/t increases with U/t. We then contrast our results for this plaquette model with a Hamiltonian which instead involves a regular pattern of site energies whose large-site-energy limit is the three-band CuO2 model; we show that there the inhomogeneity rapidly, and monotonically, suppresses pairing.
Pinisetty, D; Huang, C; Dong, Q; Tiersch, T R; Devireddy, R V
2005-06-01
This study reports the subzero water transport characteristics (and empirically determined optimal rates for freezing) of sperm cells of live-bearing fishes of the genus Xiphophorus, specifically those of the southern platyfish Xiphophorus maculatus. These fishes are valuable models for biomedical research and are commercially raised as ornamental fish for use in aquariums. Water transport during freezing of X. maculatus sperm cell suspensions was obtained using a shape-independent differential scanning calorimeter technique in the presence of extracellular ice at a cooling rate of 20 °C/min in three different media: (1) Hanks' balanced salt solution (HBSS) without cryoprotective agents (CPAs); (2) HBSS with 14% (v/v) glycerol; and (3) HBSS with 10% (v/v) dimethyl sulfoxide (DMSO). The sperm cell was modeled as a cylinder with a length of 52.35 µm and a diameter of 0.66 µm, with an osmotically inactive cell volume (Vb) of 0.6 V0, where V0 is the isotonic or initial cell volume. This translates to a surface area (SA) to initial water volume (WV) ratio of 15.15 µm^-1. By fitting a model of water transport to the experimentally determined volumetric shrinkage data, the best-fit membrane permeability parameters (reference membrane permeability to water at 0 °C, Lpg or Lpg[cpa], and the activation energy, E(Lp) or E(Lp)[cpa]) were found to range from: Lpg or Lpg[cpa] = 0.0053-0.0093 µm/(min·atm); E(Lp) or E(Lp)[cpa] = 9.79-29.00 kcal/mol. By incorporating these membrane permeability parameters in a recently developed generic optimal cooling rate equation (optimal cooling rate, [Formula: see text], where the units of B(opt) are °C/min, E(Lp) or E(Lp)[cpa] are kcal/mol, L(pg) or L(pg)[cpa] are µm/(min·atm), and SA/WV are µm^-1), we determined the optimal rates for freezing X. maculatus sperm cells to be 28 °C/min (in HBSS), 47 °C/min (in HBSS + 14% glycerol), and 36 °C/min (in HBSS + 10% DMSO).
Preliminary empirical experiments suggest that the optimal rate for freezing X. maculatus sperm in the presence of 14% glycerol is approximately 25 °C/min. Possible reasons for the observed discrepancy between the theoretically predicted and experimentally determined optimal rates of freezing X. maculatus sperm cells are discussed.
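A minimal sketch of how the fitted parameters enter such water-transport models (the Arrhenius temperature dependence below is the standard form assumed for membrane permeability in this literature; the evaluation temperature is a hypothetical choice, and only the Lpg and E(Lp) values are taken from the abstract's fitted range):

```python
import math

R = 1.987e-3    # gas constant, kcal/(mol*K)
T_REF = 273.15  # reference temperature (0 °C) in kelvin

def lp(T_kelvin, lpg, e_lp):
    """Arrhenius water permeability, µm/(min·atm):
    Lp(T) = Lpg * exp(-(E_Lp / R) * (1/T - 1/T_ref))."""
    return lpg * math.exp(-(e_lp / R) * (1.0 / T_kelvin - 1.0 / T_REF))

# Lower end of the abstract's fitted range, evaluated at -10 °C:
print(f"Lp(-10 °C) = {lp(263.15, 0.0053, 9.79):.5f} µm/(min·atm)")
```

The steep drop of Lp below 0 °C (controlled by the activation energy) is what limits cellular dehydration during cooling, and hence why the optimal cooling rate depends on Lpg, E(Lp), and the SA/WV ratio.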
Prediction uncertainty and optimal experimental design for learning dynamical systems.
Letham, Benjamin; Letham, Portia A; Rudin, Cynthia; Browne, Edward P
2016-06-01
Dynamical systems are frequently used to model biological systems. When these models are fit to data, it is necessary to ascertain the uncertainty in the model fit. Here, we present prediction deviation, a metric of uncertainty that determines the extent to which observed data have constrained the model's predictions. This is accomplished by solving an optimization problem that searches for a pair of models that each provides a good fit for the observed data, yet has maximally different predictions. We develop a method for estimating a priori the impact that additional experiments would have on the prediction deviation, allowing the experimenter to design a set of experiments that would most reduce uncertainty. We use prediction deviation to assess uncertainty in a model of interferon-alpha inhibition of viral infection, and to select a sequence of experiments that reduces this uncertainty. Finally, we prove a theoretical result which shows that prediction deviation provides bounds on the trajectories of the underlying true model. These results show that prediction deviation is a meaningful metric of uncertainty that can be used for optimal experimental design.
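A minimal sketch of the prediction-deviation idea (the one-parameter model, data, and tolerance below are hypothetical, not from the paper): among all parameter values that fit the observed data nearly as well as the best fit, take the largest spread in predictions at a new point; that spread measures how far the data have constrained the prediction.

```python
def loss(theta, data):
    """Sum-of-squares fit error for the toy model y = theta * x."""
    return sum((y - theta * x) ** 2 for x, y in data)

data = [(1.0, 2.1), (2.0, 3.9)]   # sparse observations
grid = [th / 100 for th in range(0, 400)]

best_loss, best_theta = min((loss(th, data), th) for th in grid)
tol = best_loss + 0.1             # "still a good fit" threshold

# All parameter values whose fit is almost as good as the best one.
good = [th for th in grid if loss(th, data) <= tol]

# Prediction deviation at a point far from the observed data: the maximal
# disagreement between any pair of well-fitting models.
x_new = 10.0
preds = [th * x_new for th in good]
prediction_deviation = max(preds) - min(preds)
print(f"best theta = {best_theta:.2f}, "
      f"prediction deviation at x={x_new}: {prediction_deviation:.2f}")
```

Even though every model in `good` fits the two observations to within the tolerance, their predictions at x = 10 fan out widely; in the paper this pairwise search is posed as an optimization problem, and proposed experiments are ranked by how much they would shrink the spread.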