NASA Astrophysics Data System (ADS)
Widhiarso, Wahyu; Rosyidi, Cucuk Nur
2018-02-01
Minimizing production cost in a manufacturing company will increase the profit of the company. The cutting parameters affect the total processing time, which in turn affects the production cost of the machining process. Besides affecting the production cost and processing time, the cutting parameters also affect the environment. An optimization model is therefore needed to determine the optimum cutting parameters. In this paper, we develop a multi-objective optimization model to minimize the production cost and the environmental impact of a CNC turning process. Cutting speed and feed rate serve as the decision variables. The constraints considered are cutting speed, feed rate, cutting force, output power, and surface roughness. The environmental impact is converted from the environmental burden by using eco-indicator 99. A numerical example is given to show the implementation of the model, which is solved using the OptQuest solver of the Oracle Crystal Ball software. The optimization results indicate that the model can be used to optimize the cutting parameters to minimize the production cost and the environmental impact.
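A minimal sketch of a weighted-sum formulation of this kind of cutting-parameter problem follows; the cost, impact, and constraint expressions and every coefficient are illustrative assumptions rather than the paper's model, and a plain grid search stands in for the OptQuest solver.

import numpy as np

def production_cost(v, f):
    # illustrative: machining time falls as v*f rises; add a Taylor-like tool-wear term
    t_machine = 1.0e4 / (v * f)                 # min per batch (assumed)
    tool_wear = 1.0e-7 * v**3 * f               # assumed wear index
    return 0.5 * t_machine + 20.0 * tool_wear   # $ (assumed cost rates)

def eco_impact(v, f, power_kw=5.0):
    # illustrative: eco-indicator 99 points taken proportional to machining energy
    t_machine = 1.0e4 / (v * f)
    return 0.02 * power_kw * t_machine / 60.0

def feasible(f):
    # stand-ins for surface-roughness and cutting-force limits (simplified to feed rate only)
    return 125.0 * f**2 <= 3.2 and 2000.0 * f**0.75 <= 1800.0

w = 0.7                                          # weight on cost versus environmental impact (assumed)
best = None
for v in np.linspace(100.0, 300.0, 201):         # cutting speed, m/min
    for f in np.linspace(0.05, 0.5, 91):         # feed rate, mm/rev
        if feasible(f):
            obj = w * production_cost(v, f) + (1.0 - w) * eco_impact(v, f)
            if best is None or obj < best[0]:
                best = (obj, v, f)
print("objective %.2f at v = %.0f m/min, f = %.3f mm/rev" % best)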
A Decision-making Model for a Two-stage Production-delivery System in SCM Environment
NASA Astrophysics Data System (ADS)
Feng, Ding-Zhong; Yamashiro, Mitsuo
A decision-making model is developed for an optimal production policy in a two-stage production-delivery system that incorporates a fixed-quantity supply of finished goods to a buyer at a fixed interval of time. First, a general cost model is formulated considering both the supplier (of raw materials) and the buyer (of finished products). Then an optimal solution to the problem is derived on the basis of the cost model. Using the proposed model and its optimal solution, one can determine the optimal production lot size for each stage, the optimal number of transportations of semi-finished goods, and the optimal quantity of semi-finished goods transported each time to meet the lumpy demand of consumers. We also examine the sensitivity of the raw-materials ordering and production lot sizes to changes in ordering cost, transportation cost, and manufacturing setup cost. A pragmatic computation approach for operational situations is proposed to obtain an integer approximation of the solution. Finally, we give some numerical examples.
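A minimal numerical sketch of this type of lot-sizing trade-off follows, enumerating the integer number of shipments per lot; the cost expression and all parameter values are generic illustrative assumptions, not the paper's exact formulation.

def total_cost_rate(m, D=1200.0, A1=400.0, A2=150.0, F=25.0, h1=0.5, h2=1.2, Q=600.0):
    # D: annual demand, A1: raw-material ordering cost, A2: manufacturing setup cost,
    # F: cost per shipment of semi-finished goods, h1/h2: stage holding cost rates,
    # Q: finished-goods lot size per cycle (all values illustrative)
    ordering = (A1 + A2 + m * F) * D / Q            # fixed costs incurred every cycle
    holding = h1 * Q / (2.0 * m) + h2 * Q / 2.0     # average stocks held at the two stages
    return ordering + holding

best_m = min(range(1, 31), key=total_cost_rate)     # best integer number of shipments per lot
print(best_m, round(total_cost_rate(best_m), 2))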
Hu, Wenfa; He, Xinhua
2014-01-01
Time, quality, and cost are three important but conflicting objectives in a building construction project. Optimizing them is a tough challenge for project managers because they are measured in different units. This paper presents a time-cost-quality optimization model that enables managers to optimize multiple objectives. The model is built from the project breakdown structure method, in which the task resources of a construction project are divided into a series of activities and further into construction labor, materials, equipment, and administration. The resources utilized in a construction activity eventually determine its construction time, cost, and quality, and a complex time-cost-quality trade-off model is then generated based on the correlations between construction activities. A genetic algorithm tool is applied in the model to solve the comprehensive nonlinear time-cost-quality problems. The construction of a three-storey house is used as an example to illustrate the implementation of the model, demonstrate its advantages in optimizing the trade-off among construction time, cost, and quality, and help make winning decisions in construction practice. The computed time-cost-quality curves from the case study confirm that traditional cost-time assumptions are reasonable and demonstrate the capability of this time-cost-quality trade-off model.
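A tiny sketch of the trade-off structure such models encode: each activity has a few resource options, each implying a (time, cost, quality) triple, and one option is picked per activity. The exhaustive search below stands in for the paper's genetic algorithm, and every number and weight is an illustrative assumption.

from itertools import product

activities = {  # option: (days, cost in $1000s, quality score 0-1); all values illustrative
    "foundation": [(10, 20, 0.95), (8, 26, 0.90), (12, 17, 0.97)],
    "structure":  [(25, 60, 0.92), (20, 75, 0.88)],
    "finishing":  [(15, 30, 0.90), (12, 38, 0.85), (18, 25, 0.94)],
}
w_time, w_cost, w_quality = 0.4, 0.4, 0.2    # manager's weights (assumed)

def score(plan):
    time = sum(option[0] for option in plan)       # activities assumed serial for simplicity
    cost = sum(option[1] for option in plan)
    quality = min(option[2] for option in plan)    # project quality limited by its weakest activity
    return w_time * time + w_cost * cost - w_quality * 100.0 * quality

best = min(product(*activities.values()), key=score)
print(best, round(score(best), 2))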
Optimal Sizing of Energy Storage for Community Microgrids Considering Building Thermal Dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Guodong; Li, Zhi; Starke, Michael R.
This paper proposes an optimization model for the optimal sizing of energy storage in community microgrids considering the building thermal dynamics and customer comfort preference. The proposed model minimizes the annualized cost of the community microgrid, including energy storage investment, purchased energy cost, demand charge, energy storage degradation cost, voluntary load shedding cost, and the cost associated with customer discomfort due to room temperature deviation. The decision variables are the power and energy capacity of the invested energy storage. In particular, we assume the heating, ventilation and air-conditioning (HVAC) systems can be scheduled intelligently by the microgrid central controller while maintaining the indoor temperature in the comfort range set by customers. For this purpose, the detailed thermal dynamic characteristics of buildings have been integrated into the optimization model. Numerical simulation shows significant cost reduction by the proposed model. The impacts of various costs on the optimal solution are investigated by sensitivity analysis.
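A minimal sketch of the kind of discrete-time building thermal dynamics that such sizing models embed as constraints; the first-order resistance-capacitance form and all parameter values are illustrative assumptions, not the authors' building model.

def next_indoor_temp(T_in, T_out, p_hvac_kw, dt_h=1.0, R=2.0, C=5.0, cop=3.0):
    # R [degC/kW]: envelope thermal resistance, C [kWh/degC]: thermal capacitance,
    # p_hvac_kw: electric power of a cooling HVAC unit with coefficient of performance cop
    q_envelope = (T_out - T_in) / R      # heat gained through the envelope, kW
    q_hvac = -cop * p_hvac_kw            # heat removed by cooling, kW
    return T_in + dt_h * (q_envelope + q_hvac) / C

T = 27.0                                  # initial indoor temperature, degC
for hour in range(6):                     # a hot afternoon with 2 kW of cooling
    T = next_indoor_temp(T, 35.0, 2.0)
    print(hour, round(T, 2))              # the room relaxes toward its 23 degC equilibrium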
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Guodong; Ollis, Thomas B.; Xiao, Bailu
2017-10-10
Here, this paper proposes a Mixed Integer Conic Programming (MICP) model for community microgrids considering the network operational constraints and building thermal dynamics. The proposed optimization model optimizes not only the operating cost, including fuel cost, purchasing cost, battery degradation cost, voluntary load shedding cost and the cost associated with customer discomfort due to room temperature deviation from the set point, but also several performance indices, including voltage deviation, network power loss and power factor at the Point of Common Coupling (PCC). In particular, the detailed thermal dynamic model of buildings is integrated into the distribution optimal power flow (D-OPF) model for the optimal operation of community microgrids. The heating, ventilation and air-conditioning (HVAC) systems can be scheduled intelligently to reduce the electricity cost while maintaining the indoor temperature in the comfort range set by customers. Numerical simulation results show the effectiveness of the proposed model and significant savings in electricity cost could be achieved with network operational constraints satisfied.
A system-level cost-of-energy wind farm layout optimization with landowner modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Le; MacDonald, Erin
This work applies an enhanced levelized wind farm cost model, including landowner remittance fees, to determine optimal turbine placements under three landowner participation scenarios and two land-plot shapes. Instead of assuming that a continuous piece of land is available for the wind farm construction, as in most layout optimizations, the problem formulation represents landowner participation scenarios as a binary string variable, along with the number of turbines. The cost parameters and model are a combination of models from the National Renewable Energy Laboratory (NREL), Lawrence Berkeley National Laboratory, and Windustry. The system-level cost-of-energy (COE) optimization model is also tested under two land-plot shapes: equally-sized square land plots and unequal rectangular land plots. The optimal COE results are compared to actual COE data and found to be realistic. The results show that landowner remittances account for approximately 10% of farm operating costs across all cases. Irregular land-plot shapes are easily handled by the model. We find that larger land plots do not necessarily receive higher remittance fees. The model can help site developers identify the most crucial land plots for project success and the optimal positions of turbines, with realistic estimates of costs and profitability. © 2013 Elsevier Ltd. All rights reserved.
Optimizing cost-efficiency in mean exposure assessment--cost functions reconsidered.
Mathiassen, Svend Erik; Bolin, Kristian
2011-05-21
Reliable exposure data are a vital concern in medical epidemiology and intervention studies. The present study addresses the need of the medical researcher to spend monetary resources devoted to exposure assessment with optimal cost-efficiency, i.e. to obtain the best possible statistical performance at a specified budget. A few previous studies have suggested mathematical optimization procedures based on very simple cost models; this study extends the methodology to cover even non-linear cost scenarios. Statistical performance, i.e. efficiency, was assessed in terms of the precision of an exposure mean value, as determined in a hierarchical, nested measurement model with three stages. Total costs were assessed using a corresponding three-stage cost model, allowing costs at each stage to vary non-linearly with the number of measurements according to a power function. Using these models, procedures for identifying the optimally cost-efficient allocation of measurements under a constrained budget were developed and applied to 225 scenarios combining different sizes of unit costs, cost function exponents, and exposure variance components. Explicit mathematical rules for identifying the optimal allocation could be developed when cost functions were linear, while non-linear cost functions implied that parts of or the entire optimization procedure had to be carried out using numerical methods. For many of the 225 scenarios, the optimal strategy consisted in measuring on only one occasion from each of as many subjects as allowed by the budget. Significant deviations from this principle occurred if costs for recruiting subjects were large compared to costs for setting up measurement occasions and, at the same time, the between-subjects to within-subject variance ratio was small. In these cases, non-linearities had a profound influence on the optimal allocation and on the eventual size of the exposure data set. The analysis procedures developed in the present study can be used for informed design of exposure assessment strategies, provided that data are available on exposure variability and the costs of collecting and processing data. The present shortage of empirical evidence on costs and appropriate cost functions, however, impedes general conclusions on optimal exposure measurement strategies in different epidemiologic scenarios.
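A minimal brute-force sketch of this kind of budget-constrained allocation: choose the number of subjects n and measurement occasions per subject k to minimize the variance of the exposure mean, with power-function stage costs. All unit costs, exponents, and variance components below are illustrative assumptions.

def variance_of_mean(n, k, s2_between=1.0, s2_within=2.0):
    # precision of the group mean under a two-stage nested variance model
    return s2_between / n + s2_within / (n * k)

def total_cost(n, k, c_subject=100.0, c_occasion=20.0, a=1.0, b=0.8):
    # non-linear (power-function) costs of recruiting subjects and setting up occasions
    return c_subject * n**a + c_occasion * (n * k)**b

budget = 5000.0
candidates = ((variance_of_mean(n, k), n, k)
              for n in range(1, 201) for k in range(1, 21)
              if total_cost(n, k) <= budget)
print(min(candidates))   # (minimum variance, subjects n, occasions per subject k)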
Integrated controls design optimization
Lou, Xinsheng; Neuschaefer, Carl H.
2015-09-01
A control system (207) for optimizing a chemical looping process of a power plant includes an optimizer (420), an income algorithm (230), a cost algorithm (225), and chemical looping process models. The process models are used to predict the process outputs from the process input variables. Some of the process input and output variables are related to the income of the plant, and some others are related to the cost of the plant operations. The income algorithm (230) provides an income input to the optimizer (420) based on a plurality of input parameters (215) of the power plant. The cost algorithm (225) provides a cost input to the optimizer (420) based on a plurality of output parameters (220) of the power plant. The optimizer (420) determines an optimized operating parameter solution based on at least one of the income input and the cost input, and supplies the optimized operating parameter solution to the power plant.
NASA Astrophysics Data System (ADS)
Olivia, G.; Santoso, A.; Prayogo, D. N.
2017-11-01
Nowadays, the level of competition between supply chains is becoming tighter, and a good coordination system between supply chain members is crucial in addressing this issue. This paper focuses on the development of a coordination model between a single supplier and multiple buyers in a supply chain. The proposed optimization model is designed to determine the optimal number of deliveries from the supplier to the buyers in order to minimize the total cost over a planning horizon. The components of the total supply chain cost consist of transportation costs, handling costs of the supplier and buyers, and stock-out costs. In the proposed optimization model, the supplier can supply various types of items to retailers whose item demand patterns are probabilistic. A sensitivity analysis of the proposed model was conducted to test the effect of changes in transportation costs, handling costs, and the production capacities of the supplier. The results of the sensitivity analysis showed that changes in the transportation cost, handling costs, and production capacity have a significant influence on the decisions regarding the optimal number of deliveries of each item to the buyers.
NASA Astrophysics Data System (ADS)
Wahyuda; Santosa, Budi; Rusdiansyah, Ahmad
2018-04-01
Deregulation of the electricity market requires coordination between parties to synchronize the optimization on the production side (power station) and the transport side (transmission). The electricity supply chain presented in this article is designed to facilitate this coordination. Generally, the production side is optimized with a price-based dynamic economic dispatch (PBDED) model, while the transmission side is optimized with a multi-echelon distribution model, and the two optimizations are done separately. This article proposes a joint model of PBDED and multi-echelon distribution for the combined optimization of production and transmission. This combined optimization is important because changes in electricity demand on the customer side will cause changes on the production side that automatically also alter the transmission path. The transmission gives rise to two cost components: first, the cost of losses; second, the cost of using the transmission network (wheeling transactions). Costs due to losses are calculated based on ohmic losses, while the cost of using transmission lines is calculated using the MW-mile method. As a result, this method is able to provide the best allocation analysis for electricity transactions, as well as emission levels in power generation and cost analysis. For the calculation of transmission costs, the Reverse MW-mile method produces a lower cost than the Absolute MW-mile method.
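A small worked sketch of the MW-mile allocation idea referred to above: each line's annual cost is shared among transactions in proportion to the MW x mile usage each transaction causes on that line. The flows would normally come from a power-flow solution, and all numbers here are illustrative assumptions.

line_length = {"L1": 50.0, "L2": 80.0}            # miles
line_cost   = {"L1": 1.0e6, "L2": 1.2e6}          # $/year
# MW flow caused on each line by each transaction (e.g. from a DC power-flow study)
flow = {"T1": {"L1": 40.0, "L2": 10.0},
        "T2": {"L1": 25.0, "L2": 60.0}}

def mw_mile_charge(tx):
    charge = 0.0
    for k in line_length:
        usage_tx  = flow[tx][k] * line_length[k]                      # this transaction's MW-miles on line k
        usage_all = sum(flow[t][k] * line_length[k] for t in flow)    # everyone's MW-miles on line k
        charge += line_cost[k] * usage_tx / usage_all
    return charge

for t in flow:
    print(t, round(mw_mile_charge(t), 2))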
Optimal synthesis and design of the number of cycles in the leaching process for surimi production.
Reinheimer, M Agustina; Scenna, Nicolás J; Mussati, Sergio F
2016-12-01
Water consumption required during the leaching stage of the surimi manufacturing process strongly depends on the design and on the number and size of stages connected in series for the soluble protein extraction target, and it is considered the main contributor to the operating costs. Therefore, the optimal synthesis and design of the leaching stage is essential to minimize the total annual cost. In this study, a mathematical optimization model for the optimal design of the leaching operation is presented. Precisely, a detailed Mixed Integer Nonlinear Programming (MINLP) model including operating and geometric constraints was developed based on our previous optimization model (an NLP model). Aspects of quality, water consumption, and the main operating parameters were considered. The minimization of total annual costs, which considered a trade-off between investment and operating costs, led to an optimal solution with a smaller number of stages (2 instead of 3) and larger leaching tank volumes compared with previous results. An analysis was performed to investigate how the optimal solution was influenced by variations in the unit costs of fresh water, waste treatment, and capital investment.
New reflective symmetry design capability in the JPL-IDEAS Structure Optimization Program
NASA Technical Reports Server (NTRS)
Strain, D.; Levy, R.
1986-01-01
The JPL-IDEAS antenna structure analysis and design optimization computer program was modified to process half-structure models of symmetric structures subjected to arbitrary external static loads, synthesize the performance, and optimize the design of the full structure. Significant savings in computation time and cost (more than 50%) were achieved compared to the cost of full-model computer runs. The addition of the new reflective symmetry analysis and design capabilities to the IDEAS program allows the processing of structure models whose size would otherwise prevent automated design optimization. The new program produced synthesized full-model iterative design results identical to those of actual full-model program executions at substantially reduced cost, time, and computer storage.
Renewable Energy Resources Portfolio Optimization in the Presence of Demand Response
DOE Office of Scientific and Technical Information (OSTI.GOV)
Behboodi, Sahand; Chassin, David P.; Crawford, Curran
In this paper we introduce a simple cost model of renewable integration and demand response that can be used to determine the optimal mix of generation and demand response resources. The model includes production cost, demand elasticity, uncertainty costs, capacity expansion costs, retirement and mothballing costs, and wind variability impacts to determine the hourly cost and revenue of electricity delivery. The model is tested on the 2024 planning case for British Columbia, and we find that cost is minimized with about 31% renewable generation. We also find that demand response does not have a significant impact on cost at the hourly level. The results suggest that the optimal level of renewable resources is not sensitive to a carbon tax or demand elasticity, but it is highly sensitive to the renewable resource installation cost.
Data on cost-optimal Nearly Zero Energy Buildings (NZEBs) across Europe.
D'Agostino, Delia; Parker, Danny
2018-04-01
This data article refers to the research paper A model for the cost-optimal design of Nearly Zero Energy Buildings (NZEBs) in representative climates across Europe [1]. The reported data deal with the design optimization of a residential building prototype located in representative European locations. The study focuses on the search for cost-optimal choices and efficiency measures in new buildings depending on the climate. The data linked within this article relate to the modelled building energy consumption, renewable production, potential energy savings, and costs. The data allow visualization of the energy consumption before and after the optimization, the selected efficiency measures, costs, and renewable production. The reduction of electricity and natural gas consumption towards the NZEB target can be visualized together with the incremental and cumulative costs in each location. Further data are available about building geometry, costs, CO2 emissions, envelope, materials, lighting, appliances, and systems.
Replica Approach for Minimal Investment Risk with Cost
NASA Astrophysics Data System (ADS)
Shinzato, Takashi
2018-06-01
In the present work, the optimal portfolio minimizing the investment risk with cost is discussed analytically, where an objective function is constructed in terms of two negative aspects of investment, the risk and cost. We note the mathematical similarity between the Hamiltonian in the mean-variance model and the Hamiltonians in the Hopfield model and the Sherrington-Kirkpatrick model, show that we can analyze this portfolio optimization problem by using replica analysis, and derive the minimal investment risk with cost and the investment concentration of the optimal portfolio. Furthermore, we validate our proposed method through numerical simulations.
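One common way to write such a risk-plus-cost objective, in illustrative notation that is not necessarily the paper's exact Hamiltonian, is
$$H(\vec{w}) = \frac{1}{2N}\sum_{\mu=1}^{p}\Big(\sum_{i=1}^{N} w_i x_{i\mu}\Big)^{2} + \gamma\sum_{i=1}^{N} c_i\,|w_i|, \qquad \text{subject to } \sum_{i=1}^{N} w_i = N,$$
where $x_{i\mu}$ is the return of asset $i$ in scenario $\mu$, $w_i$ is its portfolio weight, $c_i$ is a unit transaction cost, and $\gamma$ weights cost against risk. The quadratic risk term is a random quadratic form in the weights, which is what makes it resemble the Hopfield and Sherrington-Kirkpatrick Hamiltonians and renders the problem amenable to replica analysis.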
Advanced Structural Optimization Under Consideration of Cost Tracking
NASA Astrophysics Data System (ADS)
Zell, D.; Link, T.; Bickelmaier, S.; Albinger, J.; Weikert, S.; Cremaschi, F.; Wiegand, A.
2014-06-01
In order to improve the design process of launcher configurations in the early development phase, the software Multidisciplinary Optimization (MDO) was developed. The tool combines different efficient software tools such as Optimal Design Investigations (ODIN) for structural optimization and Aerospace Trajectory Optimization Software (ASTOS) for trajectory and vehicle design optimization for a defined payload and mission. The present paper focuses on the integration and validation of ODIN. ODIN enables the user to optimize typical axisymmetric structures by sizing the stiffening designs with respect to strength and stability while minimizing the structural mass. In addition, a fully automatic finite element model (FEM) generator module creates ready-to-run FEM models of a complete stage or launcher assembly. Cost tracking and future improvements concerning cost optimization are indicated.
NASA Astrophysics Data System (ADS)
Gao, F.; Song, X. H.; Zhang, Y.; Li, J. F.; Zhao, S. S.; Ma, W. Q.; Jia, Z. Y.
2017-05-01
In order to reduce the adverse effects of uncertainty on optimal dispatch in an active distribution network, an optimal dispatch model based on chance-constrained programming is proposed in this paper. In this model, the active and reactive power of DG can be dispatched with the aim of reducing the operating cost. The effect of the operation strategy on the cost is reflected in the objective, which contains the cost of network loss, DG curtailment, DG reactive power ancillary service, and power quality compensation. At the same time, the probabilistic constraints reflect the degree of operational risk. The optimal dispatch model is then simplified into a series of single-stage models, which avoids a large variable dimension and improves the convergence speed. The single-stage model is solved using a combination of particle swarm optimization (PSO) and the point estimate method (PEM). Finally, the proposed optimal dispatch model and method are verified on the IEEE 33-bus test system.
NASA Astrophysics Data System (ADS)
Muratore-Ginanneschi, Paolo
2005-05-01
Investment strategies in multiplicative Markovian market models with transaction costs are defined using growth-optimal criteria. The optimal strategy is shown to consist in holding the amount of capital invested in stocks within an interval around an ideal optimal investment. The size of the holding interval is determined by the intensity of the transaction costs and the time horizon. The inclusion of financial derivatives in the models is also considered. All the results presented in this contribution were previously derived in collaboration with E. Aurell.
Mathematical model for dynamic cell formation in fast fashion apparel manufacturing stage
NASA Astrophysics Data System (ADS)
Perera, Gayathri; Ratnayake, Vijitha
2018-05-01
This paper presents a mathematical programming model for dynamic cell formation to minimize changeover-related costs (i.e., machine relocation costs and machine setup costs) and inter-cell material handling costs to cope with the volatile production environments in the apparel manufacturing industry. The model is formulated based on the findings of a comprehensive literature review. The developed model is validated with data collected from three different factories in the apparel industry manufacturing fast fashion products. A program code is developed using the Lingo 16.0 software package to generate optimal cells for the developed model and to determine the possible cost-saving percentage when the existing layouts used in the three factories are replaced by the generated optimal cells. The optimal cells generated by the developed mathematical model result in significant cost savings when compared with the existing product layouts used in the production/assembly departments of the selected factories. The developed model can be considered effective in minimizing the considered cost terms in the dynamic production environment of fast fashion apparel manufacturing. The findings of this paper can be used for further research on minimizing changeover-related costs in the fast fashion apparel production stage.
Belciug, Smaranda; Gorunescu, Florin
2016-03-01
This study explores how efficient intelligent decision support systems, both easily understandable and straightforwardly implemented, can help modern hospital managers to optimize both bed occupancy and utilization costs. This paper proposes a hybrid genetic algorithm-queuing multi-compartment model for the patient flow in hospitals. A finite-capacity queuing model with phase-type service distribution is combined with a compartmental model, and an associated cost model is set up. An evolutionary-based approach is used to enhance the ability to optimize both bed management and the associated costs. In addition, a "what-if" analysis shows how changing the model parameters could improve performance while controlling costs. The study uses bed-occupancy data collected at the Department of Geriatric Medicine, St. George's Hospital, London, for the period 1969-1984 and January 2000. The hybrid model revealed that a bed occupancy exceeding 91%, implying a patient rejection rate of around 1.1%, can be achieved with 159 beds plus 8 unstaffed beds. The same holding and penalty costs, but significantly different bed allocations (156 vs. 184 staffed beds, and 8 vs. 9 unstaffed beds, respectively), will result in significantly different costs (£755 vs. £1172). Moreover, once the arrival rate exceeds 7 patients/day, the costs associated with the finite-capacity system become significantly smaller than those associated with an Erlang B queuing model (£134 vs. £947). By encoding the whole information provided by both the queuing system and the cost model through chromosomes, the genetic algorithm represents an efficient tool for optimizing the bed allocation and associated costs. The methodology can be extended to different medical departments with minor modifications in structure and parameterization. Copyright © 2016 Elsevier B.V. All rights reserved.
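For reference, the Erlang B benchmark that the authors compare against can be computed with the standard recursion sketched below; the arrival rate matches the figure quoted in the abstract, but the mean length of stay is an illustrative assumption.

def erlang_b(servers, offered_load):
    # blocking (rejection) probability with `servers` beds and offered load = arrival rate x mean stay
    b = 1.0
    for n in range(1, servers + 1):
        b = offered_load * b / (n + offered_load * b)
    return b

arrival_rate = 7.0     # patients per day (from the abstract)
mean_stay = 21.0       # days per admission (assumed for illustration)
print(round(erlang_b(159, arrival_rate * mean_stay), 4))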
Manual of phosphoric acid fuel cell power plant optimization model and computer program
NASA Technical Reports Server (NTRS)
Lu, C. Y.; Alkasab, K. A.
1984-01-01
An optimized cost and performance model for a phosphoric acid fuel cell power plant system was derived and developed into a modular FORTRAN computer code. Cost, energy, mass, and electrochemical analyses were combined to develop a mathematical model for optimizing the steam-to-methane ratio in the reformer, the hydrogen utilization in the PAFC, and the plates per stack. The nonlinear programming code, COMPUTE, was used to solve this model, in which the method of a mixed penalty function combined with Hooke and Jeeves pattern search was chosen to evaluate this specific optimization problem.
Optimizing water purchases for an Environmental Water Account
NASA Astrophysics Data System (ADS)
Lund, J. R.; Hollinshead, S. P.
2005-12-01
State and federal agencies in California have established an Environmental Water Account (EWA) to buy water to protect endangered fish in the San Francisco Bay/ Sacramento-San Joaquin Delta Estuary. This paper presents a three-stage probabilistic optimization model that identifies least-cost strategies for purchasing water for the EWA given hydrologic, operational, and biological uncertainties. This approach minimizes the expected cost of long-term, spot, and option water purchases to meet uncertain flow dedications for fish. The model prescribes the location, timing, and type of optimal water purchases and can illustrate how least-cost strategies change with hydrologic, operational, biological, and cost inputs. Details of the optimization model's application to California's EWA are provided with a discussion of its utility for strategic planning and policy purposes. Limitations in and sensitivity analysis of the model's representation of EWA operations are discussed, as are operational and research recommendations.
Integrated strategic and tactical biomass-biofuel supply chain optimization.
Lin, Tao; Rodríguez, Luis F; Shastri, Yogendra N; Hansen, Alan C; Ting, K C
2014-03-01
To ensure effective biomass feedstock provision for large-scale biofuel production, an integrated biomass supply chain optimization model was developed to minimize annual biomass-ethanol production costs by optimizing both strategic and tactical planning decisions simultaneously. The mixed integer linear programming model optimizes activities ranging from biomass harvesting, packing, in-field transportation, stacking, transportation, preprocessing, and storage to ethanol production and distribution. The numbers, locations, and capacities of facilities as well as biomass and ethanol distribution patterns are key strategic decisions, while biomass production, delivery, and operating schedules and inventory monitoring are key tactical decisions. The model was implemented to study the Miscanthus-ethanol supply chain in Illinois. The base case results showed unit Miscanthus-ethanol production costs of $0.72 per liter of ethanol. Biorefinery-related costs account for 62% of the total costs, followed by biomass procurement costs. Sensitivity analysis showed that a 50% reduction in biomass yield would increase unit production costs by 11%. Copyright © 2014 Elsevier Ltd. All rights reserved.
Optimization of A(2)O BNR processes using ASM and EAWAG Bio-P models: model performance.
El Shorbagy, Walid E; Radif, Nawras N; Droste, Ronald L
2013-12-01
This paper presents the performance of an optimization model for a biological nutrient removal (BNR) system using the anaerobic-anoxic-oxic (A(2)O) process. The formulated model simulates the removal of organics, nitrogen, and phosphorus using a reduced International Water Association (IWA) Activated Sludge Model #3 (ASM3) and a Swiss Federal Institute for Environmental Science and Technology (EAWAG) Bio-P module. Optimal sizing is attained considering capital and operational costs. Process performance is evaluated against the effects of influent conditions, effluent limits, and selected parameters of various optimal solutions, with the following results: an increase of influent temperature from 10 degrees C to 25 degrees C decreases the annual cost by about 8.5%; an increase of influent flow from 500 to 2500 m(3)/h triples the annual cost; the A(2)O BNR system is more sensitive to variations in influent ammonia than in phosphorus concentration; and the maximum growth rate of autotrophic biomass was the most sensitive kinetic parameter in the optimization model.
NASA Astrophysics Data System (ADS)
Yang, Y.; Chui, T. F. M.
2016-12-01
Green infrastructure (GI) is identified as a sustainable and environmentally friendly alternative to conventional grey stormwater infrastructure. Commonly used GI (e.g. green roofs, bioretention, porous pavement) can provide multifunctional benefits, e.g. mitigation of urban heat island effects and improvements in air quality. Therefore, to optimize the design of GI and grey drainage infrastructure, it is essential to account for their benefits together with their costs. In this study, a comprehensive simulation-optimization modelling framework that considers the economic and hydro-environmental aspects of GI and grey infrastructure for small urban catchment applications is developed. Several modelling tools (i.e., the EPA SWMM model and the WERF BMP and LID Whole Life Cycle Cost Modelling Tools) and optimization solvers are coupled together to assess the life-cycle cost-effectiveness of GI and grey infrastructure, and to further develop optimal stormwater drainage solutions. A typical residential lot in New York City is examined as a case study. The life-cycle cost-effectiveness of various GI and grey infrastructure options is first examined at different investment levels. The results, together with the catchment parameters, are then provided to the optimization solvers to derive the optimal investment and contributing area of each type of stormwater control. The relationship between the investment and the optimized environmental benefit is found to be nonlinear. The optimized drainage solutions demonstrate that grey infrastructure is preferred at low total investments while more GI should be adopted at high investments. The sensitivity of the optimized solutions to the prices of the stormwater controls is evaluated and is found to be highly associated with their utilization in the base optimization case. The overall simulation-optimization framework can be easily applied to other sites worldwide and can be further developed into powerful decision support systems.
Reactive Power Pricing Model Considering the Randomness of Wind Power Output
NASA Astrophysics Data System (ADS)
Dai, Zhong; Wu, Zhou
2018-01-01
With the increase of wind power capacity integrated into the grid, the influence of the randomness of wind power output on the reactive power distribution of the grid is becoming more pronounced. Meanwhile, the power market reform puts forward higher requirements for the reasonable pricing of reactive power service. Based on this, the article combines an optimal power flow model considering wind power randomness with an integrated cost allocation method to price reactive power. Considering the advantages and disadvantages of the present cost allocation method and marginal cost pricing, an integrated cost allocation method based on optimal power flow tracing is proposed. The model realizes the optimal power flow distribution of reactive power with the minimal integrated cost and wind power integration, under the premise of guaranteeing the balance of reactive power pricing. Finally, through the analysis of multi-scenario calculation examples and the stochastic simulation of wind power outputs, the article compares the results of the model pricing and marginal cost pricing, which proves that the model is accurate and effective.
Cost-Based Optimization of a Papermaking Wastewater Regeneration Recycling System
NASA Astrophysics Data System (ADS)
Huang, Long; Feng, Xiao; Chu, Khim H.
2010-11-01
Wastewater can be regenerated for recycling in an industrial process to reduce freshwater consumption and wastewater discharge. Such an environmentally friendly approach will also lead to cost savings that accrue due to reduced freshwater usage and wastewater discharge. However, the resulting cost savings are offset to varying degrees by the costs incurred for the regeneration of wastewater for recycling. Therefore, systematic procedures should be used to determine the true economic benefits of any water-using system involving wastewater regeneration recycling. In this paper, a total cost accounting procedure is employed to construct a comprehensive cost model for a paper mill. The resulting cost model is optimized by means of mathematical programming to determine the optimal regeneration flowrate and regeneration efficiency that will yield the minimum total cost.
NASA Astrophysics Data System (ADS)
Shah, Rahul H.
Production costs account for the largest share of the overall cost of manufacturing facilities. With the U.S. industrial sector becoming more and more competitive, manufacturers are looking for more cost- and resource-efficient working practices. Operations management and production planning have shown their capability to dramatically reduce manufacturing costs and increase system robustness. When implementing operations-related decision making and planning, the two fields that have proven most effective are maintenance and energy. Unfortunately, the current research that integrates both is limited. Additionally, these studies fail to consider parameter domains and optimization in joint energy- and maintenance-driven production planning. Accordingly, a production planning methodology that considers maintenance and energy is investigated. Two models are presented to achieve a well-rounded operating strategy. The first is a joint energy and maintenance production scheduling model. The second is a cost-per-part model considering maintenance, energy, and production. The proposed methodology involves a Time-of-Use electricity demand response program, buffer and holding capacity, station reliability, production rate, station rated power, and more. In practice, the scheduling problem can be used to determine a joint energy, maintenance, and production schedule. Meanwhile, the cost-per-part model can be used to: (1) test the sensitivity of the obtained optimal production schedule and its corresponding savings by varying key production system parameters; and (2) determine optimal system parameter combinations when using the joint energy, maintenance, and production planning model. Additionally, a factor analysis of the system parameters is conducted, and the corresponding performance of the production schedule under variable parameter conditions is evaluated. Parameter optimization guidelines that incorporate maintenance and energy parameter decision making in the production planning framework are also discussed. A modified Particle Swarm Optimization solution technique is adopted to solve the proposed scheduling problem. The algorithm is described in detail and compared to a Genetic Algorithm. Case studies are presented to illustrate the benefits of using the proposed model and the effectiveness of the Particle Swarm Optimization approach. Numerical experiments are implemented and analyzed to test the effectiveness of the proposed model. The proposed scheduling strategy can achieve savings of around 19 to 27% in cost per part when compared to the baseline scheduling scenarios. By optimizing key production system parameters from the cost-per-part model, the baseline scenarios can obtain around 20 to 35% savings in cost per part. These savings further increase by 42 to 55% when system parameter optimization is integrated with the proposed scheduling problem. Using this method, the most influential parameters on the cost per part are the rated power from production, the production rate, and the initial machine reliabilities. The modified Particle Swarm Optimization algorithm allows greater diversity and exploration than the Genetic Algorithm for the proposed joint model, which makes it more computationally efficient in determining the optimal schedule. While the Genetic Algorithm achieved a solution quality of 2,279.63 at an expense of 2,300 seconds of computational effort, the proposed Particle Swarm Optimization algorithm achieved a solution quality of 2,167.26 in less than half the computational effort required by the Genetic Algorithm.
Design optimization of space launch vehicles using a genetic algorithm
NASA Astrophysics Data System (ADS)
Bayley, Douglas James
The United States Air Force (USAF) continues to have a need for assured access to space. In addition to flexible and responsive spacelift, a reduction in the cost per launch of space launch vehicles is also desirable. For this purpose, an investigation of the design optimization of space launch vehicles has been conducted. Using a suite of custom codes, the performance aspects of an entire space launch vehicle were analyzed. A genetic algorithm (GA) was employed to optimize the design of the space launch vehicle. A cost model was incorporated into the optimization process with the goal of minimizing the overall vehicle cost. The other goals of the design optimization included obtaining the proper altitude and velocity to achieve a low-Earth orbit. Specific mission parameters that are particular to USAF space endeavors were specified at the start of the design optimization process. Solid propellant motors, liquid fueled rockets, and air-launched systems in various configurations provided the propulsion systems for two, three and four-stage launch vehicles. Mass properties models, an aerodynamics model, and a six-degree-of-freedom (6DOF) flight dynamics simulator were all used to model the system. The results show the feasibility of this method in designing launch vehicles that meet mission requirements. Comparisons to existing real world systems provide the validation for the physical system models. However, the ability to obtain a truly minimized cost was elusive. The cost model uses an industry standard approach, however, validation of this portion of the model was challenging due to the proprietary nature of cost figures and due to the dependence of many existing systems on surplus hardware.
Optimal inventories for overhaul of repairable redundant systems - A Markov decision model
NASA Technical Reports Server (NTRS)
Schaefer, M. K.
1984-01-01
A Markovian decision model was developed to calculate the optimal inventory of repairable spare parts for an avionics control system for commercial aircraft. Total expected shortage costs, repair costs, and holding costs are minimized for a machine containing a single system of redundant parts. Transition probabilities are calculated for each repair state and repair rate, and optimal spare parts inventory and repair strategies are determined through linear programming. The linear programming solutions are given in a table.
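A minimal sketch of the classical linear-programming treatment of an average-cost Markov decision problem over state-action frequencies, in the spirit of the spare-parts model; the tiny two-state, two-action chain and its costs below are illustrative assumptions, not the avionics data.

import numpy as np
from scipy.optimize import linprog

nS, nA = 2, 2                               # states: {spares available, shortage}; actions: {normal repair, expedited repair}
P = np.array([[[0.90, 0.10],                # P[s, a, s']: transition probabilities (illustrative)
               [0.95, 0.05]],
              [[0.60, 0.40],
               [0.80, 0.20]]])
c = np.array([[1.0, 3.0],                   # c[s, a]: holding cost, plus expediting premium (illustrative)
              [10.0, 12.0]])                # shortage states are costly under either action

# Decision variables x[s, a]: long-run frequency of being in state s and taking action a.
cost = c.flatten()
A_eq, b_eq = [], []
for s_next in range(nS):                    # stationarity: inflow equals outflow for every state
    row = np.zeros(nS * nA)
    for s in range(nS):
        for a in range(nA):
            row[s * nA + a] = (1.0 if s == s_next else 0.0) - P[s, a, s_next]
    A_eq.append(row); b_eq.append(0.0)
A_eq.append(np.ones(nS * nA)); b_eq.append(1.0)   # frequencies form a probability distribution
res = linprog(cost, A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=[(0, None)] * (nS * nA))
x = res.x.reshape(nS, nA)
print("minimum average cost per period:", round(res.fun, 3))
print("optimal action in each state:", x.argmax(axis=1))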
Hannan, M A; Akhtar, Mahmuda; Begum, R A; Basri, H; Hussain, A; Scavino, Edgar
2018-01-01
Waste collection depends heavily on the route optimization problem, which involves a large amount of expenditure in terms of capital, labor, and variable operational costs. Thus, the more the waste collection route is optimized, the greater the reduction in the various costs and environmental effects will be. This study proposes a modified particle swarm optimization (PSO) algorithm in a capacitated vehicle-routing problem (CVRP) model to determine the best waste collection and route optimization solutions. In this study, threshold waste level (TWL) and scheduling concepts are applied in the PSO-based CVRP model under different datasets. The results obtained from different datasets show that the proposed algorithmic CVRP model provides the best waste collection and route optimization in terms of travel distance, total waste, waste collection efficiency, and tightness at 70-75% of TWL. The results obtained for one week of scheduling show that 70% of TWL performs better than considering all nodes in terms of collected waste, distance, tightness, efficiency, fuel consumption, and cost. The proposed optimized model can serve as a valuable tool for waste collection and route optimization toward reducing socioeconomic and environmental impacts. Copyright © 2017 Elsevier Ltd. All rights reserved.
Cha, E; Bar, D; Hertl, J A; Tauer, L W; Bennett, G; González, R N; Schukken, Y H; Welcome, F L; Gröhn, Y T
2011-09-01
The objective of this study was to estimate the cost of 3 different types of clinical mastitis (CM) (caused by gram-positive bacteria, gram-negative bacteria, and other organisms) at the individual cow level and thereby identify the economically optimal management decision for each type of mastitis. We made modifications to an existing dynamic optimization and simulation model, studying the effects of various factors (incidence of CM, milk loss, pregnancy rate, and treatment cost) on the cost of different types of CM. The average costs per case (US$) of gram-positive, gram-negative, and other CM were $133.73, $211.03, and $95.31, respectively. This model provided a more informed decision-making process in CM management for optimal economic profitability and determined that 93.1% of gram-positive CM cases, 93.1% of gram-negative CM cases, and 94.6% of other CM cases should be treated. The main contributor to the total cost per case was treatment cost for gram-positive CM (51.5% of the total cost per case), milk loss for gram-negative CM (72.4%), and treatment cost for other CM (49.2%). The model affords versatility as it allows for parameters such as production costs, economic values, and disease frequencies to be altered. Therefore, cost estimates are the direct outcome of the farm-specific parameters entered into the model. Thus, this model can provide farmers economically optimal guidelines specific to their individual cows suffering from different types of CM. Copyright © 2011 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Optimal Path Determination for Flying Vehicle to Search an Object
NASA Astrophysics Data System (ADS)
Heru Tjahjana, R.; Heri Soelistyo U, R.; Ratnasari, L.; Irawanto, B.
2018-01-01
In this paper, a method to determine the optimal path for a flying vehicle to search for an object is proposed. The background of the paper is the control of an air vehicle searching for an object, and optimal path determination is one of the most popular problems in optimization. This paper describes a control design model for a flying vehicle searching for an object and focuses on the optimal path used to search for the object. An optimal control model is used to control the flying vehicle so that it moves along an optimal path; if the vehicle moves along an optimal path, then the path to reach the searched object is also optimal. The cost functional is one of the most important elements in optimal control design; in this paper, the cost functional makes the air vehicle move as quickly as possible to reach the object. The axis reference of the flying vehicle uses the N-E-D (North-East-Down) coordinate system. The results of this paper are theorems, proved analytically, stating that the chosen cost functional makes the control optimal and makes the vehicle move along an optimal path. It is also shown that the cost functional used is convex; the convexity of the cost functional guarantees the existence of an optimal control. The paper also presents some simulations showing an optimal path for a flying vehicle searching for an object. The optimization method used to find the optimal control and the optimal vehicle path in this paper is the Pontryagin Minimum Principle.
Optimal cost design of water distribution networks using a decomposition approach
NASA Astrophysics Data System (ADS)
Lee, Ho Min; Yoo, Do Guen; Sadollah, Ali; Kim, Joong Hoon
2016-12-01
Water distribution network decomposition, which is an engineering approach, is adopted to increase the efficiency of obtaining the optimal cost design of a water distribution network using an optimization algorithm. This study applied the source tracing tool in EPANET, which is a hydraulic and water quality analysis model, to the decomposition of a network to improve the efficiency of the optimal design process. The proposed approach was tested by carrying out the optimal cost design of two water distribution networks, and the results were compared with other optimal cost designs derived from previously proposed optimization algorithms. The proposed decomposition approach using the source tracing technique enables the efficient decomposition of an actual large-scale network, and the results can be combined with the optimal cost design process using an optimization algorithm. This proves that the final design in this study is better than those obtained with other previously proposed optimization algorithms.
Analysis of an inventory model for both linearly decreasing demand and holding cost
NASA Astrophysics Data System (ADS)
Malik, A. K.; Singh, Parth Raj; Tomar, Ajay; Kumar, Satish; Yadav, S. K.
2016-03-01
This study proposes the analysis of an inventory model with linearly decreasing demand and holding cost for non-instantaneously deteriorating items. The inventory model focuses on commodities having linearly decreasing demand without shortages. The holding cost does not remain uniform over time because of variation in the time value of money; here we consider that the holding cost decreases with respect to time. The optimal time interval for the total profit and the optimal order quantity are determined. The developed inventory model is illustrated through a numerical example, and a sensitivity analysis is also included.
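A minimal numerical sketch of locating the profit-maximizing cycle length when demand and the unit holding cost both decrease linearly with time; the profit expression ignores deterioration for brevity, and every parameter value is an illustrative assumption rather than the paper's data.

import numpy as np

def avg_profit(T, a=100.0, b=2.0, h0=0.5, h1=0.01, K=150.0, p=4.0, c=2.5, dt=1e-3):
    # demand rate d(t) = a - b*t, holding cost rate h(t) = h0 - h1*t, setup cost K,
    # selling price p, purchase cost c, no shortages, one cycle of length T
    t = np.arange(0.0, T, dt)
    demand = np.maximum(a - b * t, 0.0)
    Q = demand.sum() * dt                        # order quantity covering the whole cycle
    inventory = Q - np.cumsum(demand) * dt       # on-hand inventory over the cycle
    holding = (np.maximum(h0 - h1 * t, 0.0) * inventory).sum() * dt
    profit = (p - c) * Q - K - holding
    return profit / T                            # profit per unit time

Ts = np.linspace(0.5, 10.0, 96)
best_T = max(Ts, key=avg_profit)
print(round(best_T, 2), round(avg_profit(best_T), 2))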
Cost Optimization Model for Business Applications in Virtualized Grid Environments
NASA Astrophysics Data System (ADS)
Strebel, Jörg
The advent of Grid computing gives enterprises an ever-increasing choice of computing options, yet research has so far hardly addressed the problem of mixing the different computing options in a cost-minimal fashion. The following paper presents a comprehensive cost model and a mixed integer optimization model which can be used to minimize the IT expenditures of an enterprise and to help decide when to outsource certain business software applications. A sample scenario is analyzed and promising cost savings are demonstrated. Possible applications of the model to future research questions are outlined.
Launch Vehicle Propulsion Parameter Design Multiple Selection Criteria
NASA Technical Reports Server (NTRS)
Shelton, Joey Dewayne
2004-01-01
The optimization tool described herein addresses and emphasizes the use of computer tools to model a system and focuses on a concept development approach for a liquid hydrogen/liquid oxygen single-stage-to-orbit system, and more particularly on the development of the optimized system using new techniques. This methodology uses new and innovative tools to run Monte Carlo simulations, genetic algorithm solvers, and statistical models in order to optimize a design concept. The concept launch vehicle and propulsion system were modeled and optimized to determine the best design for weight and cost by varying design and technology parameters. Uncertainty levels were applied using Monte Carlo simulations, and the model output was compared to the National Aeronautics and Space Administration Space Shuttle Main Engine. Several key conclusions are summarized here for the model results. First, the Gross Liftoff Weight and Dry Weight were 67% higher for the case minimizing Design, Development, Test and Evaluation cost when compared to the weights determined by the case minimizing Gross Liftoff Weight. In turn, the Design, Development, Test and Evaluation cost was 53% higher for the optimized Gross Liftoff Weight case when compared to the cost determined by the case minimizing Design, Development, Test and Evaluation cost. Therefore, a 53% increase in Design, Development, Test and Evaluation cost results in a 67% reduction in Gross Liftoff Weight. Secondly, the tool outputs define the sensitivity of propulsion parameters, technology, and cost factors, and how these parameters differ when cost and weight are optimized separately. A key finding was that, for a Space Shuttle Main Engine thrust level, an oxidizer/fuel ratio of 6.6 resulted in the lowest Gross Liftoff Weight, rather than the ratio of 5.2 that gives the maximum specific impulse, demonstrating the relationships between specific impulse, engine weight, tank volume, and tank weight. Lastly, the optimum chamber pressure for Gross Liftoff Weight minimization was 2713 pounds per square inch, as compared to 3162 for the Design, Development, Test and Evaluation cost optimization case. This chamber pressure range is close to the 3000 pounds per square inch of the Space Shuttle Main Engine.
NASA Astrophysics Data System (ADS)
Mishra, Vinod Kumar
2017-09-01
In this paper we develop an inventory model to determine the optimal ordering quantities for a set of two substitutable deteriorating items. In this inventory model, the inventory level of both items is depleted by demand and deterioration, and when an item is out of stock, its demand is partially fulfilled by the other item; all unsatisfied demand is lost. Each substituted item incurs a cost of substitution, and the demand and deterioration rates are considered to be deterministic and constant. Items are ordered jointly in each ordering cycle to take advantage of joint replenishment. The problem is formulated and a solution procedure is developed to determine the optimal ordering quantities that minimize the total inventory cost. We provide an extensive numerical and sensitivity analysis to illustrate the effect of the different parameters on the model. The key observation from the numerical analysis is that there is a substantial improvement in the optimal total cost of the inventory model with substitution over that without substitution.
Dynamic malware containment under an epidemic model with alert
NASA Astrophysics Data System (ADS)
Zhang, Tianrui; Yang, Lu-Xing; Yang, Xiaofan; Wu, Yingbo; Tang, Yuan Yan
2017-03-01
Alerting at the early stage of malware invasion turns out to be an important complement to malware detection and elimination. This paper addresses the issue of how to dynamically contain the prevalence of malware at a lower cost, provided alerting is feasible. A controlled epidemic model with alert is established, and an optimal control problem based on the epidemic model is formulated. The optimality system for the optimal control problem is derived. The structure of an optimal control for the proposed optimal control problem is characterized under some conditions. Numerical examples show that the cost-efficiency of an optimal control strategy can be enhanced by adjusting the upper and lower bounds on admissible controls.
NASA Astrophysics Data System (ADS)
Dong, Haibin
2017-04-01
In this paper, a model is established to find the optimal shape, size and merging pattern of a toll plaza. The main task is to take aspects such as accident prevention, throughput and cost into consideration so as to make the toll plaza model optimal. By analyzing the match between the number of tollbooths (B) and the number of travel lanes (L) while considering safety and cost, the optimal toll plaza model is established for a given traffic flow.
Optimization of joint energy micro-grid with cold storage
NASA Astrophysics Data System (ADS)
Xu, Bin; Luo, Simin; Tian, Yan; Chen, Xianda; Xiong, Botao; Zhou, Bowen
2018-02-01
To accommodate distributed photovoltaic (PV) curtailment, to make full use of the joint energy micro-grid with cold storage, and to reduce high operating costs, the economic dispatch of the joint energy micro-grid load is particularly important. Considering the different prices during the peak and valley durations, an optimization model is established which takes the minimum production cost and minimum PV curtailment fluctuation as the objectives. The linear weighted sum method and a genetic-taboo Particle Swarm Optimization (PSO) algorithm are used to solve the optimization model and obtain the optimal power supply output. Taking the garlic market in Henan as an example, the simulation results show that, considering distributed PV and different prices in different time durations, the optimization strategies are able to reduce the operating costs and accommodate PV power efficiently.
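As a hedged sketch of the weighted-sum scalarization described above, the toy below combines purchased-energy cost and PV-curtailment fluctuation into a single objective and minimizes it with a plain particle swarm optimizer; the genetic-taboo hybrid, the cold-storage constraints and the real tariff data are not reproduced, and the 24-hour load, PV and price profiles are invented for illustration.

    import numpy as np

    rng = np.random.default_rng(1)
    hours = np.arange(24)

    # Assumed 24-hour profiles (illustrative only)
    load = 80 + 30 * np.sin(np.linspace(0, 2 * np.pi, 24))              # kW
    pv = np.clip(60 * np.sin(np.linspace(-np.pi / 2, 3 * np.pi / 2, 24)), 0, None)
    price = np.where((hours >= 8) & (hours < 16), 0.12, 0.06)           # peak vs valley

    def objective(g, w=0.7):
        """Linear weighted sum of purchase cost and PV-curtailment fluctuation."""
        g = np.clip(g, 0, None)                           # grid purchase per hour, kW
        supply = pv + g
        unmet = np.clip(load - supply, 0, None)
        curtail = np.clip(supply - load, 0, None)         # PV that cannot be absorbed
        cost = np.sum(price * g) + 10.0 * np.sum(unmet)   # heavy penalty for unserved load
        return w * cost + (1 - w) * np.std(curtail)

    # Minimal particle swarm optimizer (no taboo/genetic operators)
    n, dim, iters = 40, 24, 300
    x = rng.uniform(0, 120, (n, dim))
    v = np.zeros((n, dim))
    pbest, pval = x.copy(), np.array([objective(p) for p in x])
    gbest = pbest[pval.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, 0, 120)
        val = np.array([objective(p) for p in x])
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        gbest = pbest[pval.argmin()].copy()
    print("best weighted objective:", round(float(pval.min()), 2))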
Lee, Chris P; Chertow, Glenn M; Zenios, Stefanos A
2006-01-01
Patients with end-stage renal disease (ESRD) require dialysis to maintain survival. The optimal timing of dialysis initiation in terms of cost-effectiveness has not been established. We developed a simulation model of individuals progressing towards ESRD and requiring dialysis. It can be used to analyze dialysis strategies and scenarios, and it was embedded in an optimization framework to derive improved strategies. Actual (historical) and simulated survival curves and hospitalization rates were virtually indistinguishable. The model overestimated transplantation costs (10%), but this was related to confounding by Medicare coverage. To assess the model's robustness, we examined several dialysis strategies while input parameters were perturbed. Under all 38 scenarios, relative rankings remained unchanged. An improved policy for a hypothetical patient was derived using an optimization algorithm. The model produces reliable results and is robust. It enables the cost-effectiveness analysis of dialysis strategies.
Open pit mining profit maximization considering selling stage and waste rehabilitation cost
NASA Astrophysics Data System (ADS)
Muttaqin, B. I. A.; Rosyidi, C. N.
2017-11-01
In open pit mining activities, determination of the cut-off grade becomes crucial for the company since the cut-off grade affects how much profit the mining company will earn. In this study, we developed a cut-off grade determination model for the open pit mining industry considering the cost of mining, waste removal (rehabilitation) cost, processing cost, fixed cost, and selling stage cost. The main goal of this study is to develop a model of cut-off grade determination that maximizes total profit. Second, the study also examines the sensitivity of the model to changes in the cost components. The optimization results show that the model can help mining company managers determine the optimal cut-off grade and also estimate how much profit the mining company can earn. To illustrate the application of the model, a numerical example and a set of sensitivity analyses are presented. From the results of the sensitivity analysis, we conclude that changes in the sales price greatly affect the optimal cut-off value and the total profit.
Balancing building and maintenance costs in growing transport networks
NASA Astrophysics Data System (ADS)
Bottinelli, Arianna; Louf, Rémi; Gherardi, Marco
2017-09-01
The costs associated with the length of links impose unavoidable constraints on the growth of natural and artificial transport networks. When future network developments cannot be predicted, the costs of building and maintaining connections cannot be minimized simultaneously, requiring competing optimization mechanisms. Here, we study a one-parameter nonequilibrium model driven by an optimization functional, defined as the convex combination of building cost and maintenance cost. By varying the coefficient of the combination, the model interpolates between global and local length minimization, i.e., between minimum spanning trees and a local version known as dynamical minimum spanning trees. We show that cost balance within this ensemble of dynamical networks is a sufficient ingredient for the emergence of tradeoffs between the network's total length and transport efficiency, and of optimal strategies of construction. At the transition between two qualitatively different regimes, the dynamics builds up power-law distributed waiting times between global rearrangements, indicating a point of nonoptimality. Finally, we use our model as a framework to analyze empirical ant trail networks, showing its relevance as a null model for cost-constrained network formation.
A New Distributed Optimization for Community Microgrids Scheduling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Starke, Michael R; Tomsovic, Kevin
This paper proposes a distributed optimization model for community microgrids considering the building thermal dynamics and customer comfort preference. The microgrid central controller (MCC) minimizes the total cost of operating the community microgrid, including fuel cost, purchasing cost, battery degradation cost and voluntary load shedding cost based on the customers' consumption, while the building energy management systems (BEMS) minimize their electricity bills as well as the cost associated with customer discomfort due to room temperature deviation from the set point. The BEMSs and the MCC exchange information on energy consumption and prices. When the optimization converges, the distributed generation scheduling, energy storage charging/discharging and customers' consumption as well as the energy prices are determined. In particular, we integrate the detailed thermal dynamic characteristics of buildings into the proposed model. The heating, ventilation and air-conditioning (HVAC) systems can be scheduled intelligently to reduce the electricity cost while maintaining the indoor temperature in the comfort range set by customers. Numerical simulation results show the effectiveness of the proposed model.
Optimal short-range trajectories for helicopters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Slater, G.L.; Erzberger, H.
1982-12-01
An optimal flight path algorithm using a simplified altitude state model and an a priori climb-cruise-descent flight profile was developed and applied to determine minimum fuel and minimum cost trajectories for a helicopter flying a fixed range trajectory. In addition, a method was developed for obtaining a performance model in simplified form which is based on standard flight manual data and which is applicable to the computation of optimal trajectories. The entire performance optimization algorithm is simple enough that on-line trajectory optimization is feasible with a relatively small computer. The helicopter model used is the Sikorsky S-61N. The results show that for this vehicle the optimal flight path and optimal cruise altitude can represent a 10% fuel saving on a minimum fuel trajectory. The optimal trajectories show considerable variability because of helicopter weight, ambient winds, and the relative cost trade-off between time and fuel. In general, reasonable variations from the optimal velocities and cruise altitudes do not significantly degrade the optimal cost. For fuel optimal trajectories, the optimum cruise altitude varies from the maximum (12,000 ft) to the minimum (0 ft) depending on helicopter weight.
The cost of different types of lameness in dairy cows calculated by dynamic programming.
Cha, E; Hertl, J A; Bar, D; Gröhn, Y T
2010-10-01
Traditionally, studies that placed a monetary value on the effect of lameness have calculated the costs at the herd level, and rarely have they been specific to different types of lameness. The costs calculated in these former studies are not particularly useful for farmers in making economically optimal decisions that depend on individual cow characteristics. The objective of this study was to calculate the cost of different types of lameness at the individual cow level and thereby identify the optimal management decision for each of three representative lameness diagnoses. This model would provide a more informed decision-making process in lameness management for maximal economic profitability. We made modifications to an existing dynamic optimization and simulation model, studying the effects of various factors (incidence of lameness, milk loss, pregnancy rate and treatment cost) on the cost of different types of lameness. The average cost per case (US$) of sole ulcer, digital dermatitis and foot rot was 216.07, 132.96 and 120.70, respectively. It was recommended that 97.3% of foot rot cases, 95.5% of digital dermatitis cases and 92.3% of sole ulcer cases be treated. The main contributor to the total cost per case was milk loss for sole ulcer (38%), treatment cost for digital dermatitis (42%) and the effect of decreased fertility for foot rot (50%). This model affords versatility as it allows parameters such as production costs, economic values and disease frequencies to be altered. Therefore, cost estimates are the direct outcome of the farm-specific parameters entered into the model. Thus, this model can provide farmers economically optimal guidelines specific to their individual cows suffering from different types of lameness. Copyright © 2010 Elsevier B.V. All rights reserved.
Studies in Software Cost Model Behavior: Do We Really Understand Cost Model Performance?
NASA Technical Reports Server (NTRS)
Lum, Karen; Hihn, Jairus; Menzies, Tim
2006-01-01
While there exists extensive literature on software cost estimation techniques, industry practice continues to rely upon standard regression-based algorithms. These software effort models are typically calibrated or tuned to local conditions using local data. This paper cautions that current approaches to model calibration often produce sub-optimal models because of the large variance problem inherent in cost data and because far more effort multipliers are included than the data supports. Building optimal models requires that a wider range of models be considered, while correctly calibrating these models requires rejection rules that prune variables and records and use multiple criteria for evaluating model performance. The main contribution of this paper is to document a standard method that integrates formal model identification, estimation, and validation. It also documents what we call the large variance problem, which is a leading cause of cost model brittleness or instability.
NASA Astrophysics Data System (ADS)
Sutrisno; Widowati; Solikhin
2016-06-01
In this paper, we propose a mathematical model in stochastic dynamic optimization form to determine the optimal strategy for an integrated single-product inventory control and supplier selection problem in which the demand and purchasing cost parameters are random. For each time period, using the proposed model, we decide the optimal supplier and calculate the optimal product volume to purchase from that supplier so that the inventory level is located as close as possible to the reference point with minimal cost. We use stochastic dynamic programming to solve this problem and give several numerical experiments to evaluate the model. From the results, for each time period, the proposed model generated the optimal supplier and the inventory level tracked the reference point well.
Rabotyagov, Sergey; Campbell, Todd; Valcu, Adriana; Gassman, Philip; Jha, Manoj; Schilling, Keith; Wolter, Calvin; Kling, Catherine
2012-12-09
Finding the cost-efficient (i.e., lowest-cost) ways of targeting conservation practice investments for the achievement of specific water quality goals across the landscape is of primary importance in watershed management. Traditional economics methods of finding the lowest-cost solution in the watershed context (e.g., (5,12,20)) assume that off-site impacts can be accurately described as a proportion of on-site pollution generated. Such approaches are unlikely to be representative of the actual pollution process in a watershed, where the impacts of polluting sources are often determined by complex biophysical processes. The use of modern physically-based, spatially distributed hydrologic simulation models allows for a greater degree of realism in terms of process representation but requires the development of a simulation-optimization framework where the model becomes an integral part of optimization. Evolutionary algorithms appear to be a particularly useful optimization tool, able to deal with the combinatorial nature of a watershed simulation-optimization problem and allowing the use of the full water quality model. Evolutionary algorithms treat a particular spatial allocation of conservation practices in a watershed as a candidate solution and utilize sets (populations) of candidate solutions, iteratively applying stochastic operators of selection, recombination, and mutation to find improvements with respect to the optimization objectives. The optimization objectives in this case are to minimize nonpoint-source pollution in the watershed while simultaneously minimizing the cost of conservation practices. A recent and expanding body of research is attempting to use similar methods, integrating water quality models with broadly defined evolutionary optimization methods (3,4,9,10,13-15,17-19,22,23,25). In this application, we demonstrate a program which follows Rabotyagov et al.'s approach and integrates a modern and commonly used SWAT water quality model (7) with the multiobjective evolutionary algorithm SPEA2 (26), and a user-specified set of conservation practices and their costs, to search for the complete tradeoff frontiers between costs of conservation practices and user-specified water quality objectives. The frontiers quantify the tradeoffs faced by the watershed managers by presenting the full range of costs associated with various water quality improvement goals. The program allows for the selection of watershed configurations achieving specified water quality improvement goals and the production of maps of optimized placement of conservation practices.
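To make the evolutionary search concrete, here is a heavily simplified, hedged sketch of a multi-objective evolutionary loop over binary placements of conservation practices. A toy linear load model stands in for the SWAT simulation, and plain Pareto ranking stands in for SPEA2; every number below is invented for illustration.

    import numpy as np

    rng = np.random.default_rng(2)

    # Toy watershed: 30 fields, each with a baseline pollutant load, a removal
    # efficiency if a practice is placed, and an annualized practice cost.
    n_fields = 30
    base_load = rng.uniform(1.0, 10.0, n_fields)     # tons/yr per field
    removal = rng.uniform(0.3, 0.8, n_fields)        # fraction removed if practice placed
    cost = rng.uniform(5.0, 40.0, n_fields)          # cost units per practice

    def objectives(x):
        """x is a 0/1 placement vector -> (total cost, remaining pollutant load)."""
        load = np.sum(base_load * np.where(x == 1, 1.0 - removal, 1.0))
        return np.array([np.sum(cost * x), load])

    def dominates(a, b):
        return np.all(a <= b) and np.any(a < b)

    # Minimal multi-objective evolutionary loop (Pareto ranking, not SPEA2 itself)
    pop_size = 60
    pop = rng.integers(0, 2, (pop_size, n_fields))
    for _ in range(200):
        objs = np.array([objectives(ind) for ind in pop])
        front = [i for i in range(pop_size)
                 if not any(dominates(objs[j], objs[i]) for j in range(pop_size) if j != i)]
        parents = pop[front]
        children = []
        while len(parents) + len(children) < pop_size:
            p1, p2 = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n_fields)
            child = np.concatenate([p1[:cut], p2[cut:]])      # one-point crossover
            flip = rng.random(n_fields) < 0.05                # bit-flip mutation
            children.append(np.where(flip, 1 - child, child))
        pop = np.vstack([parents, children]) if children else parents

    objs = np.array([objectives(ind) for ind in pop])
    frontier = sorted({tuple(np.round(o, 1)) for i, o in enumerate(objs)
                       if not any(dominates(objs[j], o) for j in range(len(pop)) if j != i)})
    print("approximate cost/load tradeoff frontier (cost, remaining load):")
    for c, l in frontier:
        print(f"  {c:7.1f}  {l:6.1f}")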
A novel medical information management and decision model for uncertain demand optimization.
Bi, Ya
2015-01-01
Accurately planning the procurement volume is an effective measure for controlling the medicine inventory cost. Due to uncertain demand, it is difficult to make accurate decisions on procurement volume. For biomedicines that are sensitive to time and seasonal demand, uncertain demand fitted by fuzzy mathematics methods is clearly better represented than by general random distribution functions. The aim of this work is to establish a novel medical information management and decision model for uncertain demand optimization. A novel optimal management and decision model under uncertain demand is presented based on fuzzy mathematics and a new comprehensive improved particle swarm algorithm. The optimal management and decision model can effectively reduce the medicine inventory cost. The proposed improved particle swarm optimization is a simple and effective algorithm that improves the fuzzy inference and hence effectively reduces the calculation complexity of the optimal management and decision model. Therefore, the new model can be used for accurate decisions on procurement volume under uncertain demand.
A Scheme to Optimize Flow Routing and Polling Switch Selection of Software Defined Networks.
Chen, Huan; Li, Lemin; Ren, Jing; Wang, Yang; Zhao, Yangming; Wang, Xiong; Wang, Sheng; Xu, Shizhong
2015-01-01
This paper aims at minimizing the communication cost for collecting flow information in Software Defined Networks (SDN). Since the flow-based information collecting method requires too much communication cost, and the recently proposed switch-based method cannot benefit from controlling flow routing, we propose to jointly optimize flow routing and polling switch selection to reduce the communication cost. To this end, the joint optimization problem is first formulated as an Integer Linear Programming (ILP) model. Since the ILP model is intractable in large networks, we also design an optimal algorithm for the multi-rooted tree topology and an efficient heuristic algorithm for general topologies. Extensive simulations show that our method can save up to 55.76% of the communication cost compared with the state-of-the-art switch-based scheme.
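As a hedged sketch of the ILP side only, the toy below formulates the polling-switch-selection part as a small covering problem in PuLP (a generic open-source modeller with the bundled CBC solver): each flow must be observable at some polled switch on its route, and the total polling cost is minimized. The joint flow-routing variables of the paper are omitted, and the routes and costs are invented.

    from pulp import LpBinary, LpMinimize, LpProblem, LpVariable, lpSum, value

    # Toy instance (assumed): flows traverse fixed routes over switches s1..s5.
    routes = {
        "f1": ["s1", "s2", "s3"],
        "f2": ["s2", "s4"],
        "f3": ["s3", "s4", "s5"],
        "f4": ["s1", "s5"],
    }
    poll_cost = {"s1": 4, "s2": 3, "s3": 5, "s4": 2, "s5": 3}   # per-switch polling cost

    prob = LpProblem("polling_switch_selection", LpMinimize)
    y = {s: LpVariable(f"poll_{s}", cat=LpBinary) for s in poll_cost}

    # Objective: total communication cost of the polled switches
    prob += lpSum(poll_cost[s] * y[s] for s in poll_cost)

    # Each flow must be covered by at least one polled switch on its route
    for f, path in routes.items():
        prob += lpSum(y[s] for s in path) >= 1, f"cover_{f}"

    prob.solve()
    print("polled switches:", [s for s in poll_cost if value(y[s]) > 0.5])
    print("total polling cost:", value(prob.objective))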
Optimality Principles for Model-Based Prediction of Human Gait
Ackermann, Marko; van den Bogert, Antonie J.
2010-01-01
Although humans have a large repertoire of potential movements, gait patterns tend to be stereotypical and appear to be selected according to optimality principles such as minimal energy. When applied to dynamic musculoskeletal models such optimality principles might be used to predict how a patient’s gait adapts to mechanical interventions such as prosthetic devices or surgery. In this paper we study the effects of different performance criteria on predicted gait patterns using a 2D musculoskeletal model. The associated optimal control problem for a family of different cost functions was solved utilizing the direct collocation method. It was found that fatigue-like cost functions produced realistic gait, with stance phase knee flexion, as opposed to energy-related cost functions which avoided knee flexion during the stance phase. We conclude that fatigue minimization may be one of the primary optimality principles governing human gait. PMID:20074736
Chavez, Hernan; Castillo-Villar, Krystel; Webb, Erin
2017-08-01
Variability in the physical characteristics of feedstock has a relevant effect on the reactor's reliability and operating cost. Most of the models developed to optimize biomass supply chains have failed to quantify the effect of biomass quality and of the preprocessing operations required to meet biomass specifications on overall cost and performance. The Integrated Biomass Supply Analysis and Logistics (IBSAL) model estimates the harvesting, collection, transportation, and storage cost while considering the stochastic behavior of the field-to-biorefinery supply chain. This paper proposes an IBSAL-SimMOpt (Simulation-based Multi-Objective Optimization) method for optimizing the biomass quality and the costs associated with the efforts needed to meet conversion technology specifications. The method is developed in two phases. In the first phase, a SimMOpt tool that interacts with the extended IBSAL is developed. In the second phase, the baseline IBSAL model is extended so that the cost of meeting, and/or the penalization for failing to meet, specifications is considered. The IBSAL-SimMOpt method is designed to optimize quality characteristics of biomass, the cost related to activities intended to improve the quality of feedstock, and the penalization cost. A case study based on 1916 farms in Ontario, Canada is considered for testing the proposed method. Analysis of the results demonstrates that this method is able to find a high-quality set of non-dominated solutions.
Abou-El-Enein, Mohamed; Römhild, Andy; Kaiser, Daniel; Beier, Carola; Bauer, Gerhard; Volk, Hans-Dieter; Reinke, Petra
2013-03-01
Advanced therapy medicinal products (ATMP) have gained considerable attention in academia due to their therapeutic potential. Good Manufacturing Practice (GMP) principles ensure the quality and sterility of manufacturing these products. We developed a model for estimating the manufacturing costs of cell therapy products and optimizing the performance of academic GMP-facilities. The "Clean-Room Technology Assessment Technique" (CTAT) was tested prospectively in the GMP facility of BCRT, Berlin, Germany, then retrospectively in the GMP facility of the University of California-Davis, California, USA. CTAT is a two-level model: level one identifies operational (core) processes and measures their fixed costs; level two identifies production (supporting) processes and measures their variable costs. The model comprises several tools to measure and optimize performance of these processes. Manufacturing costs were itemized using adjusted micro-costing system. CTAT identified GMP activities with strong correlation to the manufacturing process of cell-based products. Building best practice standards allowed for performance improvement and elimination of human errors. The model also demonstrated the unidirectional dependencies that may exist among the core GMP activities. When compared to traditional business models, the CTAT assessment resulted in a more accurate allocation of annual expenses. The estimated expenses were used to set a fee structure for both GMP facilities. A mathematical equation was also developed to provide the final product cost. CTAT can be a useful tool in estimating accurate costs for the ATMPs manufactured in an optimized GMP process. These estimates are useful when analyzing the cost-effectiveness of these novel interventions. Copyright © 2013 International Society for Cellular Therapy. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Saavedra, Juan Alejandro
Quality Control (QC) and Quality Assurance (QA) strategies vary significantly across industries in the manufacturing sector depending on the product being built. Such strategies range from simple statistical analysis and process controls to the decision-making process of reworking, repairing, or scrapping defective product. This study proposes an optimal QC methodology that includes rework stations during the manufacturing process by identifying the number and location of these workstations. The factors considered to optimize these stations are cost, cycle time, reworkability and rework benefit. The goal is to minimize the cost and cycle time of the process while increasing the reworkability and rework benefit. The specific objectives of this study are: (1) to propose a cost estimation model that includes energy consumption, and (2) to propose an optimal QC methodology to identify the quantity and location of rework workstations. The cost estimation model includes energy consumption as part of the product direct cost. The cost estimation model developed allows the user to calculate product direct cost as the quality sigma level of the process changes. This provides a benefit because a complete cost estimation calculation does not need to be performed every time the process yield changes. This cost estimation model is then used for the QC strategy optimization process. In order to propose a methodology that provides an optimal QC strategy, the possible factors that affect QC were evaluated. A screening Design of Experiments (DOE) was performed on seven initial factors and identified three significant factors. It also showed that one response variable was not required for the optimization process. A full factorial DOE was then performed to verify the significant factors obtained previously. The QC strategy optimization is performed through a Genetic Algorithm (GA), which allows the evaluation of several solutions in order to obtain feasible optimal solutions. The GA evaluates possible solutions based on cost, cycle time, reworkability and rework benefit. Finally, it provides several possible solutions because this is a multi-objective optimization problem. The solutions are presented as chromosomes that clearly state the number and location of the rework stations. The user analyzes these solutions in order to select one by deciding which of the four factors considered is most important, depending on the product being manufactured or the company's objective. The major contribution of this study is to provide the user with a methodology to identify an effective and optimal QC strategy that incorporates the number and location of rework substations in order to minimize direct product cost and cycle time, and maximize reworkability and rework benefit.
Routing and Scheduling Optimization Model of Sea Transportation
NASA Astrophysics Data System (ADS)
Barus, Mika Debora Br; Asyrafy, Habib; Nababan, Esther; Mawengkang, Herman
2018-01-01
This paper examines a routing and scheduling optimization model for sea transportation. One of the issues discussed is the transportation of crude oil by ships (tankers) to many islands. The main consideration is the cost of transportation, which consists of travel costs and the cost of layover at the port. The crude oil to be distributed consists of several types. This paper develops a routing and scheduling model taking into consideration several objective functions and constraints. The mathematical model is formulated to minimize costs based on the total distance visited by the tanker and to minimize port costs. To make the model more realistic and the calculated cost more appropriate, a parameter is added that represents the multiplier factor by which the cost increases as the tanker is loaded with crude oil.
NASA Technical Reports Server (NTRS)
Bao, Han P.; Samareh, J. A.
2000-01-01
The primary objective of this paper is to demonstrate the use of process-based manufacturing and assembly cost models in a traditional performance-focused multidisciplinary design and optimization process. The use of automated cost-performance analysis is an enabling technology that could bring realistic process-based manufacturing and assembly cost into multidisciplinary design and optimization. In this paper, we present a new methodology for incorporating process costing into a standard multidisciplinary design optimization process. Material, manufacturing process, and assembly process costs could then be used as the objective function for the optimization method. A case study involving forty-six different configurations of a simple wing is presented, indicating that a design based on performance criteria alone may not necessarily be the most affordable as far as manufacturing and assembly cost is concerned.
A modeling framework for optimal long-term care insurance purchase decisions in retirement planning.
Gupta, Aparna; Li, Lepeng
2004-05-01
The level of need for and the costs of obtaining long-term care (LTC) during retired life require that planning for it be an integral part of retirement planning. In this paper, we divide retirement planning into two phases, pre-retirement and post-retirement. On the basis of four interrelated models for health evolution, wealth evolution, LTC insurance premium and coverage, and LTC cost structure, a framework for optimal LTC insurance purchase decisions in the pre-retirement phase is developed. Optimal decisions are obtained by developing a trade-off between post-retirement LTC costs and LTC insurance premiums and coverage. Two-way branching models are used to model stochastic health events and asset returns. The resulting optimization problem is formulated as a dynamic programming problem. We compare the optimal decision under two insurance purchase scenarios: one assumes that insurance is purchased for good, and the other assumes it may be purchased, relinquished and re-purchased. Sensitivity analysis is performed for the retirement age.
Timothy M. Young; James H. Perdue; Andy Hartsell; Robert C. Abt; Donald Hodges; Timothy G. Rials
2009-01-01
Optimal locations for biomass facilities that use mill residues are identified for 13 southern U.S. states. The Biomass Site Assessment Tool (BioSAT) model is used to identify the top 20 locations for 13 southern U.S. states. The trucking cost model of BioSAT is used with Timber Mart South 2009 price data to estimate the total cost, average cost, and marginal costs for...
An integrated radar model solution for mission level performance and cost trades
NASA Astrophysics Data System (ADS)
Hodge, John; Duncan, Kerron; Zimmerman, Madeline; Drupp, Rob; Manno, Mike; Barrett, Donald; Smith, Amelia
2017-05-01
A fully integrated Mission-Level Radar model is in development as part of a multi-year effort under the Northrop Grumman Mission Systems (NGMS) sector's Model Based Engineering (MBE) initiative to digitally interconnect and unify previously separate performance and cost models. In 2016, an NGMS internal research and development (IR&D) funded multidisciplinary team integrated radio frequency (RF), power, control, size, weight, thermal, and cost models together using commercial off-the-shelf software, ModelCenter, for an Active Electronically Scanned Array (AESA) radar system. Each represented model was digitally connected with standard interfaces and unified to allow end-to-end mission system optimization and trade studies. The radar model was then linked to the Air Force's own mission modeling framework (AFSIM). The team first had to identify the necessary models, and with the aid of subject matter experts (SMEs) understand and document the inputs, outputs, and behaviors of the component models. This agile development process and collaboration enabled rapid integration of disparate models and the validation of their combined system performance. This MBE framework will allow NGMS to design systems more efficiently and affordably, optimize architectures, and provide increased value to the customer. The model integrates detailed component models that validate cost and performance at the physics level with high-level models that provide visualization of a platform mission. This connectivity of component to mission models allows hardware and software design solutions to be better optimized to meet mission needs, creating cost-optimal solutions for the customer, while reducing design cycle time through risk mitigation and early validation of design decisions.
A Perishable Inventory Model with Return
NASA Astrophysics Data System (ADS)
Setiawan, S. W.; Lesmono, D.; Limansyah, T.
2018-04-01
In this paper, we develop a mathematical model for a perishable inventory with return, assuming deterministic and inventory-dependent demand. By inventory-dependent demand, we mean that demand at a certain time depends on the available inventory at that time with a certain rate. In dealing with perishable items, we must consider a deterioration rate factor that corresponds to the decreasing quality of the goods. There are also costs involved in this model, such as purchasing, ordering, holding, shortage (backordering) and returning costs. These costs compose the total cost in the model that we want to minimize. In the model we seek the optimal return time and order quantity. We assume that after some period of time, called the return time, perishable items can be returned to the supplier at some returning cost. The supplier will then replace them in the next delivery. Some numerical experiments are given to illustrate the model, and sensitivity analysis is performed as well. We found that as the deterioration rate increases, the return time becomes shorter and the optimal order quantity and total cost increase. When considering the inventory-dependent demand factor, we found that as this factor increases, for a given deterioration rate, the return time becomes shorter, the optimal order quantity becomes larger and the total cost increases.
A dynamic model for costing disaster mitigation policies.
Altay, Nezih; Prasad, Sameer; Tata, Jasmine
2013-07-01
The optimal level of investment in mitigation strategies is usually difficult to ascertain in the context of disaster planning. This research develops a model to provide such direction by relying on cost of quality literature. This paper begins by introducing a static approach inspired by Joseph M. Juran's cost of quality management model (Juran, 1951) to demonstrate the non-linear trade-offs in disaster management expenditure. Next it presents a dynamic model that includes the impact of dynamic interactions of the changing level of risk, the cost of living, and the learning/investments that may alter over time. It illustrates that there is an optimal point that minimises the total cost of disaster management, and that this optimal point moves as governments learn from experience or as states get richer. It is hoped that the propositions contained herein will help policymakers to plan, evaluate, and justify voluntary disaster mitigation expenditures. © 2013 The Author(s). Journal compilation © Overseas Development Institute, 2013.
Optimal flight initiation distance.
Cooper, William E; Frederick, William G
2007-01-07
Decisions regarding flight initiation distance have received scant theoretical attention. A graphical model by Ydenberg and Dill (1986. The economics of fleeing from predators. Adv. Stud. Behav. 16, 229-249) that has guided research for the past 20 years specifies when escape begins. In the model, a prey detects a predator, monitors its approach until costs of escape and of remaining are equal, and then flees. The distance between predator and prey when escape is initiated (approach distance = flight initiation distance) occurs where decreasing cost of remaining and increasing cost of fleeing intersect. We argue that prey fleeing as predicted cannot maximize fitness because the best prey can do is break even during an encounter. We develop two optimality models, one applying when all expected future contribution to fitness (residual reproductive value) is lost if the prey dies, the other when any fitness gained (increase in expected RRV) during the encounter is retained after death. Both models predict optimal flight initiation distance from initial expected fitness, benefits obtainable during encounters, costs of escaping, and probability of being killed. Predictions match extensively verified predictions of Ydenberg and Dill's (1986) model. Our main conclusion is that optimality models are preferable to break-even models because they permit fitness maximization, offer many new testable predictions, and allow assessment of prey decisions in many naturally occurring situations through modification of benefit, escape cost, and risk functions.
A multimodal logistics service network design with time windows and environmental concerns.
Zhang, Dezhi; He, Runzhong; Li, Shuangyan; Wang, Zhongwei
2017-01-01
The design of a multimodal logistics service network with customer service time windows and environmental costs is an important and challenging issue. Accordingly, this work establishes a model to minimize the total cost of multimodal logistics service network design with time windows and environmental concerns. The proposed model incorporates CO2 emission costs to determine the optimal transportation mode combinations and investment selections for transfer nodes, considering transport cost, transport time, carbon emissions, and logistics service time window constraints. Furthermore, genetic and heuristic algorithms are proposed to solve the abovementioned optimization model. A numerical example is provided to validate the model and the two algorithms, and comparisons of the performance of the two algorithms are provided. Finally, this work investigates the effects of the logistics service time windows and CO2 emission taxes on the optimal solution. Several important management insights are obtained.
Launch Vehicle Propulsion Design with Multiple Selection Criteria
NASA Technical Reports Server (NTRS)
Shelton, Joey D.; Frederick, Robert A.; Wilhite, Alan W.
2005-01-01
The approach and techniques described herein define an optimization and evaluation approach for a liquid hydrogen/liquid oxygen single-stage-to-orbit system. The method uses Monte Carlo simulations, genetic algorithm solvers, a propulsion thermo-chemical code, power series regression curves for historical data, and statistical models in order to optimize a vehicle system. The system, including parameters for engine chamber pressure, area ratio, and oxidizer/fuel ratio, was modeled and optimized to determine the best design for seven separate design weight and cost cases by varying design and technology parameters. Significant model results show that a 53% increase in Design, Development, Test and Evaluation cost results in a 67% reduction in Gross Liftoff Weight. Other key findings show the sensitivity of propulsion parameters, technology factors, and cost factors and how these parameters differ when cost and weight are optimized separately. Each of the three key propulsion parameters (chamber pressure, area ratio, and oxidizer/fuel ratio) is optimized in the seven design cases, and the results are plotted to show the impacts on engine mass and overall vehicle mass.
Sperry, John S; Venturas, Martin D; Anderegg, William R L; Mencuccini, Maurizio; Mackay, D Scott; Wang, Yujie; Love, David M
2017-06-01
Stomatal regulation presumably evolved to optimize CO2 for H2O exchange in response to changing conditions. If the optimization criterion can be readily measured or calculated, then stomatal responses can be efficiently modelled without recourse to empirical models or underlying mechanism. Previous efforts have been challenged by the lack of a transparent index for the cost of losing water. Yet it is accepted that stomata control water loss to avoid excessive loss of hydraulic conductance from cavitation and soil drying. Proximity to hydraulic failure and desiccation can represent the cost of water loss. If at any given instant, the stomatal aperture adjusts to maximize the instantaneous difference between photosynthetic gain and hydraulic cost, then a model can predict the trajectory of stomatal responses to changes in environment across time. Results of this optimization model are consistent with the widely used Ball-Berry-Leuning empirical model (r2 > 0.99) across a wide range of vapour pressure deficits and ambient CO2 concentrations for wet soil. The advantage of the optimization approach is the absence of empirical coefficients, applicability to dry as well as wet soil and prediction of plant hydraulic status along with gas exchange. © 2016 John Wiley & Sons Ltd.
Hitting the Optimal Vaccination Percentage and the Risks of Error: Why to Miss Right.
Harvey, Michael J; Prosser, Lisa A; Messonnier, Mark L; Hutton, David W
2016-01-01
To determine the optimal level of vaccination coverage, defined as the level that minimizes total costs, and to explore how economic results change with marginal changes to this level of coverage, a susceptible-infected-recovered-vaccinated model designed to represent theoretical infectious diseases was created to simulate disease spread. Parameter inputs were defined to include ranges that could represent a variety of possible vaccine-preventable conditions. Costs included vaccine costs and disease costs. Health benefits were quantified as monetized quality-adjusted life years lost from disease. Primary outcomes were the number of infected people and the total costs of vaccination. Optimization methods were used to determine the population vaccination coverage that achieved a minimum cost given disease and vaccine characteristics. Sensitivity analyses explored the effects of changes in reproductive rates, costs and vaccine efficacies on primary outcomes. Further analysis examined the additional cost incurred if the optimal coverage levels were not achieved. Results indicate that the relationship between vaccine and disease cost is the main driver of the optimal vaccination level. Under a wide range of assumptions, vaccination beyond the optimal level is less expensive than vaccination below the optimal level. This observation did not hold when the cost of the vaccine becomes approximately equal to the cost of disease. These results suggest that vaccination below the optimal level of coverage is more costly than vaccinating beyond the optimal level. This work helps provide information for assessing the impact of changes in vaccination coverage at a societal level.
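A minimal, hedged sketch of the kind of calculation the abstract describes: a forward-Euler SIR-type epidemic with a fraction of the population vaccinated up front, a total cost equal to vaccination cost plus disease cost, and a grid search for the coverage minimizing that total. Every parameter value is assumed for illustration and is not taken from the study.

    import numpy as np

    # Assumed illustrative parameters (not from the study)
    N = 1_000_000          # population size
    R0 = 2.5               # basic reproduction number
    gamma = 0.1            # recovery rate per day
    beta = R0 * gamma      # transmission rate per day
    eff = 0.9              # vaccine efficacy
    c_vax = 20.0           # cost per vaccinated person
    c_dis = 400.0          # cost per case (treatment plus monetized QALY loss)

    def total_cost(coverage, days=720, dt=0.5):
        """Simulate an epidemic with a fraction of the population vaccinated at t = 0."""
        immune = coverage * eff * N
        s, i = N - immune - 10.0, 10.0
        cases = 0.0
        for _ in range(int(days / dt)):
            new_inf = beta * s * i / N * dt
            new_rec = gamma * i * dt
            s -= new_inf
            i += new_inf - new_rec
            cases += new_inf
        return c_vax * coverage * N + c_dis * cases

    coverages = np.linspace(0.0, 1.0, 101)
    costs = np.array([total_cost(c) for c in coverages])
    best = coverages[costs.argmin()]
    print(f"cost-minimizing coverage ~ {best:.2f}, total cost ~ {costs.min() / 1e6:.1f} M")
    # Missing low vs missing high by five percentage points
    print(f"undershoot by 0.05: +{(total_cost(best - 0.05) - costs.min()) / 1e6:.1f} M, "
          f"overshoot by 0.05: +{(total_cost(best + 0.05) - costs.min()) / 1e6:.1f} M")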
Decision science and cervical cancer.
Cantor, Scott B; Fahs, Marianne C; Mandelblatt, Jeanne S; Myers, Evan R; Sanders, Gillian D
2003-11-01
Mathematical modeling is an effective tool for guiding cervical cancer screening, diagnosis, and treatment decisions for patients and policymakers. This article describes the use of mathematical modeling as outlined in five presentations from the Decision Science and Cervical Cancer session of the Second International Conference on Cervical Cancer held at The University of Texas M. D. Anderson Cancer Center, April 11-14, 2002. The authors provide an overview of mathematical modeling, especially decision analysis and cost-effectiveness analysis, and examples of how it can be used for clinical decision making regarding the prevention, diagnosis, and treatment of cervical cancer. Included are applications as well as theory regarding decision science and cervical cancer. Mathematical modeling can answer such questions as the optimal frequency for screening, the optimal age to stop screening, and the optimal way to diagnose cervical cancer. Results from one mathematical model demonstrated that a vaccine against high-risk strains of human papillomavirus was a cost-effective use of resources, and discussion of another model demonstrated the importance of collecting direct non-health care costs and time costs for cost-effectiveness analysis. Research presented indicated that care must be taken when applying the results of population-wide, cost-effectiveness analyses to reduce health disparities. Mathematical modeling can encompass a variety of theoretical and applied issues regarding decision science and cervical cancer. The ultimate objective of using decision-analytic and cost-effectiveness models is to identify ways to improve women's health at an economically reasonable cost. Copyright 2003 American Cancer Society.
NASA Technical Reports Server (NTRS)
Freeman, William T.; Ilcewicz, L. B.; Swanson, G. D.; Gutowski, T.
1992-01-01
A conceptual and preliminary designers' cost prediction model has been initiated. The model will provide a technically sound method for evaluating the relative cost of different composite structural designs, fabrication processes, and assembly methods that can be compared to equivalent metallic parts or assemblies. The feasibility of developing cost prediction software in a modular form for interfacing with state-of-the-art preliminary design tools and computer aided design programs is being evaluated. The goal of this task is to establish theoretical cost functions that relate geometric design features to summed material cost and labor content in terms of process mechanics and physics. The output of the designers' present analytical tools will be input for the designers' cost prediction model to provide the designer with a database and a deterministic cost methodology that allows one to trade and synthesize designs with both cost and weight as objective functions for optimization. The approach, goals, plans, and progress are presented for the development of COSTADE (Cost Optimization Software for Transport Aircraft Design Evaluation).
Modelling and Optimal Control of Typhoid Fever Disease with Cost-Effective Strategies.
Tilahun, Getachew Teshome; Makinde, Oluwole Daniel; Malonza, David
2017-01-01
We propose and analyze a compartmental nonlinear deterministic mathematical model for the typhoid fever outbreak and optimal control strategies in a community with varying population. The model is studied qualitatively using stability theory of differential equations and the basic reproductive number that represents the epidemic indicator is obtained from the largest eigenvalue of the next-generation matrix. Both local and global asymptotic stability conditions for disease-free and endemic equilibria are determined. The model exhibits a forward transcritical bifurcation and the sensitivity analysis is performed. The optimal control problem is designed by applying Pontryagin maximum principle with three control strategies, namely, the prevention strategy through sanitation, proper hygiene, and vaccination; the treatment strategy through application of appropriate medicine; and the screening of the carriers. The cost functional accounts for the cost involved in prevention, screening, and treatment together with the total number of the infected persons averted. Numerical results for the typhoid outbreak dynamics and its optimal control revealed that a combination of prevention and treatment is the best cost-effective strategy to eradicate the disease.
NASA Astrophysics Data System (ADS)
Veerakamolmal, Pitipong; Lee, Yung-Joon; Fasano, J. P.; Hale, Rhea; Jacques, Mary
2002-02-01
In recent years, there has been increased focus by regulators, manufacturers, and consumers on the issue of product end of life management for electronics. This paper presents an overview of a conceptual study designed to examine the costs and benefits of several different Product Take Back (PTB) scenarios for used electronics equipment. The study utilized a reverse logistics supply chain model to examine the effects of several different factors in PTB programs. The model was done using the IBM supply chain optimization tool known as WIT (Watson Implosion Technology). Using the WIT tool, we were able to determine a theoretical optimal cost scenario for PTB programs. The study was designed to assist IBM internally in determining theoretical optimal Product Take Back program models and determining potential incentives for increasing participation rates.
System design optimization for stand-alone photovoltaic systems sizing by using superstructure model
NASA Astrophysics Data System (ADS)
Azau, M. A. M.; Jaafar, S.; Samsudin, K.
2013-06-01
Although photovoltaic (PV) systems have been increasingly installed as an alternative and renewable green power generation, the initial set-up cost, maintenance cost and equipment mismatch are some of the key issues that slow down installation in small households. This paper presents the design optimization of stand-alone photovoltaic systems using a superstructure model in which all possible types of equipment technology are captured, and the life cycle cost analysis is formulated as a mixed integer programming (MIP) problem. A model for investment planning of power generation and a long-term decision model are developed in order to help the system engineer build a cost-effective system.
Surrogate based wind farm layout optimization using manifold mapping
NASA Astrophysics Data System (ADS)
Kaja Kamaludeen, Shaafi M.; van Zuijle, Alexander; Bijl, Hester
2016-09-01
The high computational cost associated with high-fidelity wake models such as RANS or LES is the primary bottleneck to performing direct high-fidelity wind farm layout optimization (WFLO) using accurate CFD-based wake models. Therefore, a surrogate-based multi-fidelity WFLO methodology (SWFLO) is proposed. The surrogate model is built using an SBO method referred to as manifold mapping (MM). As verification, optimization of the spacing between two staggered wind turbines was performed using the proposed surrogate-based methodology and the performance was compared with that of direct optimization using the high-fidelity model. Significant reduction in computational cost was achieved using MM: a maximum computational cost reduction of 65%, while arriving at the same optimum as direct high-fidelity optimization. The similarity between the responses of the models and the number and position of the mapping points highly influence the computational efficiency of the proposed method. As a proof of concept, a realistic WFLO of a small 7-turbine wind farm is performed using the proposed surrogate-based methodology. Two variants of the Jensen wake model with different decay coefficients were used as the fine and coarse models. The proposed SWFLO method arrived at the same optimum as the fine model with far fewer fine-model simulations.
Cui, Borui; Gao, Dian-ce; Xiao, Fu; ...
2016-12-23
This article provides a method for the comprehensive evaluation of the cost-saving potential of active cool thermal energy storage (CTES) integrated with an HVAC system for demand management in non-residential buildings. The active storage is beneficial for shifting peak demand for peak load management (PLM) as well as for providing longer duration and larger capacity of demand response (DR). In this research, a model-based optimal design method using a genetic algorithm is developed to optimize the capacity of active CTES, aiming to maximize the life-cycle cost saving considering the capital cost associated with storage capacity as well as incentives from both fast DR and PLM. In the method, the active CTES operates under a fast DR control strategy during DR events and under the storage-priority operation mode to shift peak demand during normal days. The optimal storage capacities, maximum annual net cost saving and corresponding power reduction set-points during DR events are obtained by using the proposed optimal design method. Lastly, this research provides guidance for the comprehensive evaluation of the cost-saving potential of CTES integrated with HVAC systems for building demand management including both fast DR and PLM.
Mori, Koichiro
2009-02-01
The purpose of this short article is to set static and dynamic models for optimal floodplain management and to compare policy implications from the models. River floodplains are important multiple resources in that they provide various ecosystem services. It is fundamentally significant to consider environmental externalities that accrue from ecosystem services of natural floodplains. There is an interesting gap between static and dynamic models about policy implications for floodplain management, although they are based on the same assumptions. Essentially, we can derive the same optimal conditions, which imply that the marginal benefits must equal the sum of the marginal costs and the social external costs related to ecosystem services. Thus, we have to internalise the external costs by market-based policies. In this respect, market-based policies seem to be effective in a static model. However, they are not sufficient in the context of a dynamic model because the optimal steady state turns out to be unstable. Based on a dynamic model, we need more coercive regulation policies.
Linear versus quadratic portfolio optimization model with transaction cost
NASA Astrophysics Data System (ADS)
Razak, Norhidayah Bt Ab; Kamil, Karmila Hanim; Elias, Siti Masitah
2014-06-01
Optimization models have become established decision-making tools in investment, so it is always a challenge for investors to select the model that best fulfils their goals with respect to risk and return. In this paper we aim to discuss and compare the portfolio allocation and performance generated by quadratic and linear portfolio optimization models, namely the Markowitz and Maximin models respectively. The application of these models has been shown to be significant and popular. However, transaction cost has been debated as one of the important aspects that should be considered in portfolio reallocation, as portfolio return can be significantly reduced when transaction cost is taken into account. Therefore, recognizing the importance of considering transaction costs when calculating portfolio return, we formulate this paper using data from Shariah-compliant securities listed on Bursa Malaysia. It is expected that the results of this paper will justify the advantage of one model over the other and shed some light on the quest to find the best decision-making tool in investment for individual investors.
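As a rough illustration of the two model classes being compared (not the authors' code; the simulated asset data, previous holdings and cost rate are assumed), the sketch below builds a minimum-variance Markowitz-style portfolio and a Maximin portfolio on return scenarios, then deducts a proportional transaction cost from each portfolio's expected return:

    # Minimal sketch: quadratic (Markowitz-style) vs linear (Maximin) allocation
    # with a proportional transaction cost deducted after rebalancing.
    import numpy as np
    from scipy.optimize import linprog, minimize

    rng = np.random.default_rng(0)
    scenarios = rng.normal([0.006, 0.004, 0.005], [0.04, 0.02, 0.03], size=(250, 3))
    mu, Sigma = scenarios.mean(axis=0), np.cov(scenarios, rowvar=False)
    n = len(mu)
    w_prev = np.full(n, 1.0 / n)          # current holdings before rebalancing (assumed)
    cost_rate = 0.002                     # assumed proportional transaction cost

    # Markowitz-style: minimise portfolio variance, full investment, no short sales.
    res_q = minimize(lambda w: w @ Sigma @ w, w_prev,
                     bounds=[(0, 1)] * n,
                     constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1}])
    w_mark = res_q.x

    # Maximin: maximise the worst scenario return (LP with auxiliary variable t).
    c = np.r_[np.zeros(n), -1.0]                                        # minimise -t
    A_ub = np.hstack([-scenarios, np.ones((scenarios.shape[0], 1))])    # t - r_s.w <= 0
    b_ub = np.zeros(scenarios.shape[0])
    A_eq = np.r_[np.ones(n), 0.0].reshape(1, -1)                        # sum(w) = 1
    res_l = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                    bounds=[(0, 1)] * n + [(None, None)])
    w_maxi = res_l.x[:n]

    for name, w in [("Markowitz", w_mark), ("Maximin", w_maxi)]:
        tc = cost_rate * np.abs(w - w_prev).sum()       # cost of the rebalancing trades
        print(name, "weights", np.round(w, 3), "net return", round(mu @ w - tc, 5))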
Estimation of optimal educational cost per medical student.
Yang, Eunbae B; Lee, Seunghee
2009-09-01
This study aims to estimate the optimal educational cost per medical student. A private medical college in Seoul was targeted by the study, and its 2006 learning environment and data from the 2003-2006 budget and settlement were carefully analyzed. Through interviews with 3 medical professors and 2 experts in the economics of education, the study attempted to establish an educational cost estimation model, which yields an empirically computed estimate of the optimal cost per student in medical college. The estimation model was based primarily upon the educational cost, which consisted of direct educational costs (47.25%), support costs (36.44%), fixed asset purchases (11.18%) and costs for student affairs (5.14%). These results indicate that the optimal cost per student is approximately 20,367,000 won each semester; thus, training a doctor costs 162,936,000 won over 4 years. Consequently, we inferred that the tuition levels of a local medical college or professional medical graduate school cover one quarter to one half of the per-student cost. The findings of this study do not necessarily imply an increase in medical college tuition; the estimation of the per-student cost of training a doctor is one matter, and the issue of who should bear this burden is another. For further study, we should consider the college type and its location for general application of the estimation method, in addition to living expenses and opportunity costs.
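The 4-year figure follows directly from the per-semester estimate, assuming two semesters per year:

    # Quick arithmetic check of the reported figures (two semesters per year assumed).
    cost_per_semester = 20_367_000          # won
    semesters = 2 * 4                       # 4 years of medical college
    print(cost_per_semester * semesters)    # 162,936,000 won, matching the 4-year total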
Mathematical model of highways network optimization
NASA Astrophysics Data System (ADS)
Sakhapov, R. L.; Nikolaeva, R. V.; Gatiyatullin, M. H.; Makhmutov, M. M.
2017-12-01
The article deals with the issue of highway network design. Studies show that the main requirement road transport places on the road network is to ensure the realization of all the transport links it serves at the lowest possible cost. The goal of optimizing the highway network is to increase the efficiency of transport. A large number of factors must be taken into account, which makes it difficult to quantify and qualify their impact on the road network. In this paper, we propose building an optimal variant for locating the road network on the basis of a mathematical model. The article defines the criteria for optimality and the objective functions that reflect the requirements for the road network. The condition of optimality that is most fully satisfied is the minimization of road and transport costs, and we adopted this indicator as the criterion of optimality in the economic-mathematical model of the highway network. Studies have shown that each point served by the optimal road network is associated with all other corresponding points along the directions providing the least financial cost necessary to move passengers and cargo from this point to the other corresponding points. The article presents general principles for constructing an optimal network of roads.
Exploring How Technology Growth Limits Impact Optimal Carbon dioxide Mitigation Pathways
Energy system optimization models prescribe the optimal mix of technologies and fuels for meeting energy demands over a time horizon, subject to energy supplies, demands, and other constraints. When optimizing, these models will, to the extent allowed, favor the least cost combin...
NASA Astrophysics Data System (ADS)
Rakotomanga, Prisca; Soussen, Charles; Blondel, Walter C. P. M.
2017-03-01
Diffuse reflectance spectroscopy (DRS) has been acknowledged as a valuable optical biopsy tool for characterizing pathological modifications, such as cancer, in epithelial tissues in vivo. In spatially resolved DRS, accurate and robust estimation of the optical parameters (OP) of biological tissues is a major challenge due to the complexity of the physical models. Solving this inverse problem requires considering three components: the forward model, the cost function, and the optimization algorithm. This paper presents a comparative numerical study of the performance in estimating OP depending on the choice made for each of these components. Mono- and bi-layer tissue models are considered. Monowavelength (scalar) absorption and scattering coefficients are estimated. As a forward model, diffusion approximation analytical solutions with and without noise are implemented. Several cost functions are evaluated, possibly including normalized data terms. Two local optimization methods, Levenberg-Marquardt and Trust-Region-Reflective, are considered. Because they may be sensitive to the initial setting, a global optimization approach is proposed to improve the estimation accuracy. This algorithm is based on repeated calls to the above-mentioned local methods, with initial parameters randomly sampled. Two global optimization methods, Genetic Algorithm (GA) and Particle Swarm Optimization (PSO), are also implemented. Estimation performance is evaluated in terms of relative errors between the ground truth and the estimated values for each set of unknown OP. The combination of the number of variables to be estimated, the nature of the forward model, the cost function to be minimized and the optimization method is discussed.
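A minimal sketch of the multi-start idea described above (repeated local least-squares fits from randomly sampled initial parameters) is shown below; the forward model here is a simple placeholder decay curve, not the diffusion-approximation solution used in the paper, and all parameter values are assumed:

    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(1)
    r = np.linspace(0.05, 1.0, 20)                     # source-detector distances (illustrative)

    def forward(mu_a, mu_s, r):
        mu_eff = np.sqrt(3.0 * mu_a * (mu_a + mu_s))   # toy reflectance decay model
        return np.exp(-mu_eff * r) / r**2

    truth = (0.1, 10.0)                                # assumed "ground truth" OP (1/cm)
    data = forward(*truth, r) * (1 + 0.02 * rng.standard_normal(r.size))

    def residuals(p):
        return forward(p[0], p[1], r) - data

    best = None
    for _ in range(20):                                # multi-start: random initial OP guesses
        p0 = [rng.uniform(0.01, 1.0), rng.uniform(1.0, 30.0)]
        # least_squares defaults to a trust-region-reflective local solver
        fit = least_squares(residuals, p0, bounds=([1e-4, 1e-1], [5.0, 100.0]))
        if best is None or fit.cost < best.cost:
            best = fit
    print("estimated OP:", best.x, "relative error:", np.abs(best.x - truth) / truth)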
Modeling OPC complexity for design for manufacturability
NASA Astrophysics Data System (ADS)
Gupta, Puneet; Kahng, Andrew B.; Muddu, Swamy; Nakagawa, Sam; Park, Chul-Hong
2005-11-01
Increasing design complexity in sub-90nm designs results in increased mask complexity and cost. Resolution enhancement techniques (RET) such as assist feature addition, phase shifting (attenuated PSM) and aggressive optical proximity correction (OPC) help preserve feature fidelity in silicon but increase mask complexity and cost. The growth in data volume that accompanies rising mask complexity is becoming prohibitive for manufacturing. Mask cost is determined by mask write time and mask inspection time, which are directly related to the complexity of the features printed on the mask. Aggressive RETs increase complexity by adding assist features and by modifying existing features. Passing design intent to OPC has been identified as a solution for reducing mask complexity and cost in several recent works. The goal of design-aware OPC is to relax the OPC tolerances of layout features to minimize mask cost without sacrificing parametric yield. To convey optimal OPC tolerances for manufacturing, design optimization should drive OPC tolerance optimization using models of mask cost for devices and wires. Design optimization should be aware of the impact of OPC correction levels on mask cost and on the performance of the design. This work introduces mask cost characterization (MCC), which quantifies OPC complexity, measured in terms of the fracture count of the mask, for different OPC tolerances. MCC with different OPC tolerances is a critical step in linking design and manufacturing. In this paper, we present an MCC methodology that provides models of the fracture count of standard cells and wire patterns for use in design optimization. MCC cannot be performed by designers, as they do not have access to foundry OPC recipes and RET tools. To build a fracture count model, we perform OPC and fracturing on a limited set of standard cells and wire configurations with all tolerance combinations. Separately, we identify the characteristics of the layout that impact fracture count. Based on the fracture count (FC) data from OPC and mask data preparation runs, we build models of FC as a function of OPC tolerances and layout parameters.
Optimal design of the satellite constellation arrangement reconfiguration process
NASA Astrophysics Data System (ADS)
Fakoor, Mahdi; Bakhtiari, Majid; Soleymani, Mahshid
2016-08-01
In this article, a novel approach is introduced for satellite constellation reconfiguration based on Lambert's theorem. Some critical problems arise in the reconfiguration phase, such as minimizing the overall fuel cost, avoiding collisions between the satellites on the final orbital pattern, and determining the maneuvers necessary for the satellites to be deployed in the desired positions in the target constellation. To implement the reconfiguration phase of the satellite constellation arrangement at minimal cost, the hybrid Invasive Weed Optimization/Particle Swarm Optimization (IWO/PSO) algorithm is used to design sub-optimal transfer orbits for the satellites existing in the constellation. Also, the dynamics of the problem are modeled in such a way that optimal assignment of the satellites to the initial and target orbits and optimal orbital transfer are combined in one step. Finally, we claim that our presented idea, i.e. coupled non-simultaneous flight of satellites from the initial orbital pattern, leads to minimal cost. The obtained results show that by employing the presented method, the cost of the reconfiguration process is clearly reduced.
Brown, Zachary S.; Dickinson, Katherine L.; Kramer, Randall A.
2014-01-01
The evolutionary dynamics of insecticide resistance in harmful arthropods has economic implications, not only for the control of agricultural pests (as has been well studied), but also for the control of disease vectors, such as malaria-transmitting Anopheles mosquitoes. Previous economic work on insecticide resistance illustrates the policy relevance of knowing whether insecticide resistance mutations involve fitness costs. Using a theoretical model, this article investigates economically optimal strategies for controlling malaria-transmitting mosquitoes when there is the potential for mosquitoes to evolve resistance to insecticides. Consistent with previous literature, we find that fitness costs are a key element in the computation of economically optimal resistance management strategies. Additionally, our models indicate that different biological mechanisms underlying these fitness costs (e.g., increased adult mortality and/or decreased fecundity) can significantly alter economically optimal resistance management strategies. PMID:23448053
NASA Astrophysics Data System (ADS)
Mahalakshmi; Murugesan, R.
2018-04-01
This paper deals with minimizing the total cost of Greenhouse Gas (GHG) efficiency in an Automated Storage and Retrieval System (AS/RS). A mathematical model is constructed based on the tax cost, penalty cost and discount cost of the GHG emissions of the AS/RS. A two-stage algorithm, namely the positive selection based clonal selection principle (PSBCSP), is used to find the optimal solution of the constructed model. In the first stage, the positive selection principle is used to reduce the search space of the optimal solution by fixing a threshold value. In the second stage, the clonal selection principle is used to generate the best solutions. The obtained results are compared with other existing algorithms in the literature, which shows that the proposed algorithm yields better results than the others.
Finding the optimal lengths for three branches at a junction.
Woldenberg, M J; Horsfield, K
1983-09-21
This paper presents an exact analytical solution to the problem of locating the junction point between three branches so that the sum of the total costs of the branches is minimized. When the cost per unit length of each branch is known the angles between each pair of branches can be deduced following reasoning first introduced to biology by Murray. Assuming the outer ends of each branch are fixed, the location of the junction and the length of each branch are then deduced using plane geometry and trigonometry. The model has applications in determining the optimal cost of a branch or branches at a junction. Comparing the optimal to the actual cost of a junction is a new way to compare cost models for goodness of fit to actual junction geometry. It is an unambiguous measure and is superior to comparing observed and optimal angles between each daughter and the parent branch. We present data for 199 junctions in the pulmonary arteries of two human lungs. For the branches at each junction we calculated the best fitting value of x from the relationship that flow ∝ (radius)^x. We found that the value of x determined whether a junction was best fitted by a surface, volume, drag or power minimization model. While economy of explanation casts doubt that four models operate simultaneously, we found that optimality may still operate, since the angle to the major daughter is less than the angle to the minor daughter. Perhaps optimality combined with a space filling branching pattern governs the branching geometry of the pulmonary artery.
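The underlying minimization can also be illustrated numerically (a sketch only, not the paper's closed-form geometric solution): with assumed endpoint coordinates and per-unit-length costs, the junction is found by minimizing the weighted sum of branch lengths.

    import numpy as np
    from scipy.optimize import minimize

    ends = np.array([[0.0, 0.0], [4.0, 0.5], [3.5, 3.0]])   # fixed outer ends of the three branches (assumed)
    k = np.array([1.5, 1.0, 1.0])                            # assumed cost per unit length of each branch

    def total_cost(p):
        # weighted sum of straight-line branch lengths from junction p to the fixed ends
        return np.sum(k * np.linalg.norm(ends - p, axis=1))

    res = minimize(total_cost, ends.mean(axis=0))            # start the search from the centroid
    junction = res.x
    lengths = np.linalg.norm(ends - junction, axis=1)
    print("junction:", np.round(junction, 3), "branch lengths:", np.round(lengths, 3))
    print("minimum total cost:", round(res.fun, 3))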
Adopting epidemic model to optimize medication and surgical intervention of excess weight
NASA Astrophysics Data System (ADS)
Sun, Ruoyan
2017-01-01
We combined an epidemic model with an objective function to minimize the weighted sum of people with excess weight and the cost of a medication and surgical intervention in the population. The epidemic model consists of ordinary differential equations describing three subpopulation groups based on weight. We introduced an intervention using medication and surgery to deal with excess weight. An objective function is constructed taking into consideration the cost of the intervention as well as the weight distribution of the population. Using empirical data, we show that a fixed participation rate reduces the size of the obese population but increases the size of the overweight population. An optimal participation rate exists and decreases with respect to time. Both theoretical analysis and an empirical example confirm the existence of an optimal participation rate, u*. Under u*, the weighted sum of the overweight (S) and obese (O) populations as well as the cost of the program is minimized. This article highlights the existence of an optimal participation rate that minimizes the number of people with excess weight and the cost of the intervention. The time-varying optimal participation rate could contribute to designing future public health interventions for excess weight.
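The compartmental structure can be sketched as below; this is an illustrative three-group ODE system with assumed transition rates and an assumed intervention term driven by a participation rate u, not the paper's actual equations or parameter values.

    import numpy as np
    from scipy.integrate import solve_ivp

    a, b = 0.08, 0.06         # assumed progression rates normal->overweight and overweight->obese (per year)
    g = 0.30                  # assumed effectiveness of the intervention (per year)
    u = 0.20                  # participation rate in the medication/surgery programme

    def rhs(t, y):
        N, S, O = y           # normal, overweight (S), obese (O) population fractions
        dN = -a * N + g * u * S
        dS = a * N - b * S - g * u * S + g * u * O
        dO = b * S - g * u * O
        return [dN, dS, dO]   # total population is conserved (the derivatives sum to zero)

    sol = solve_ivp(rhs, (0, 20), y0=[0.4, 0.35, 0.25], t_eval=np.linspace(0, 20, 5))
    for t, (N, S, O) in zip(sol.t, sol.y.T):
        print(f"t={t:4.1f}  normal={N:.3f}  overweight={S:.3f}  obese={O:.3f}")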
Optimization of power systems with voltage security constraints
NASA Astrophysics Data System (ADS)
Rosehart, William Daniel
As open access market principles are applied to power systems, significant changes in their operation and control are occurring. In the new marketplace, power systems are operating under higher loading conditions as market influences demand greater attention to operating cost versus stability margins. Since stability continues to be a basic requirement in the operation of any power system, new tools are being considered to analyze the effect of stability on the operating cost of the system, so that system stability can be incorporated into the costs of operating the system. In this thesis, new optimal power flow (OPF) formulations are proposed based on multi-objective methodologies to optimize active and reactive power dispatch while maximizing voltage security in power systems. The effects of minimizing operating costs, minimizing reactive power generation and/or maximizing voltage stability margins are analyzed. Results obtained using the proposed Voltage Stability Constrained OPF formulations are compared and analyzed to suggest possible ways of costing voltage security in power systems. When considering voltage stability margins, the importance of system modeling becomes critical, since it has been demonstrated, based on bifurcation analysis, that modeling can have a significant effect on the behavior of power systems, especially at high loading levels. Therefore, this thesis also examines the effects of detailed generator models and several exponential load models. Furthermore, because of its influence on voltage stability, a Static Var Compensator model is also incorporated into the optimization problems.
NASA Astrophysics Data System (ADS)
Pando, V.; García-Laguna, J.; San-José, L. A.
2012-11-01
In this article, we integrate a non-linear holding cost with a stock-dependent demand rate in a maximising profit per unit time model, extending several inventory models studied by other authors. After giving the mathematical formulation of the inventory system, we prove the existence and uniqueness of the optimal policy. Relying on this result, we can obtain the optimal solution using different numerical algorithms. Moreover, we provide a necessary and sufficient condition to determine whether a system is profitable, and we establish a rule to check when a given order quantity is the optimal lot size of the inventory model. The results are illustrated through numerical examples and the sensitivity of the optimal solution with respect to changes in some values of the parameters is assessed.
Optimizing preventive maintenance policy: A data-driven application for a light rail braking system.
Corman, Francesco; Kraijema, Sander; Godjevac, Milinko; Lodewijks, Gabriel
2017-10-01
This article presents a case study determining the optimal preventive maintenance policy for a light rail rolling stock system in terms of reliability, availability, and maintenance costs. The maintenance policy defines one of three predefined preventive maintenance actions at fixed time-based intervals for each of the subsystems of the braking system. Based on work, maintenance, and failure data, we model the reliability degradation of the system and its subsystems under the current maintenance policy by a Weibull distribution. We then analytically determine the relation between reliability, availability, and maintenance costs. We validate the model against recorded reliability and availability and gain further insight through a dedicated sensitivity analysis. The model is then used in a sequential optimization framework that determines preventive maintenance intervals to improve on the key performance indicators. We show the potential of data-driven modelling to determine the optimal maintenance policy: the same system availability and reliability can be achieved with a 30% maintenance cost reduction, by prolonging the intervals and re-grouping maintenance actions.
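The core trade-off behind interval optimization can be illustrated with a standard Weibull age-replacement cost-rate model; this is a generic sketch with assumed shape, scale and cost parameters, not the paper's model or data.

    import numpy as np
    from scipy.integrate import quad
    from scipy.optimize import minimize_scalar

    beta, eta = 2.5, 1000.0        # assumed Weibull shape and scale (hours)
    c_p, c_f = 1.0, 10.0           # assumed preventive vs corrective maintenance costs

    def reliability(t):
        return np.exp(-(t / eta) ** beta)

    def cost_rate(T):
        # expected cost per renewal cycle divided by expected cycle length
        expected_cost = c_p * reliability(T) + c_f * (1.0 - reliability(T))
        expected_length, _ = quad(reliability, 0.0, T)
        return expected_cost / expected_length

    res = minimize_scalar(cost_rate, bounds=(10.0, 5000.0), method="bounded")
    print(f"optimal PM interval ~ {res.x:.0f} h, cost rate {res.fun:.5f} per hour")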
Investigation of Cost and Energy Optimization of Drinking Water Distribution Systems.
Cherchi, Carla; Badruzzaman, Mohammad; Gordon, Matthew; Bunn, Simon; Jacangelo, Joseph G
2015-11-17
Holistic management of water and energy resources through energy and water quality management systems (EWQMSs) has traditionally aimed at energy cost reduction, with limited or no emphasis on energy efficiency or greenhouse gas minimization. This study expanded the existing EWQMS framework and determined the impact of different management strategies for energy cost and energy consumption (e.g., carbon footprint) reduction on system performance at two drinking water utilities in California (United States). The results showed that optimizing for cost led to cost reductions of 4% (Utility B, summer) to 48% (Utility A, winter). The energy optimization strategy successfully found the lowest-energy operation and achieved energy usage reductions of 3% (Utility B, summer) to 10% (Utility A, winter). The findings of this study revealed that there may be a trade-off between cost optimization (dollars) and energy use (kilowatt-hours), particularly in the summer, when optimizing the system to minimize energy use incurred cost increases of 64% and 184% compared with the cost optimization scenario. Water age simulations through hydraulic modeling did not reveal any adverse effects on water quality in the distribution system or in tanks from pump schedule optimization targeting either cost or energy minimization.
NASA Astrophysics Data System (ADS)
Kim, U.; Parker, J.
2016-12-01
Many dense non-aqueous phase liquid (DNAPL) contaminated sites in the U.S. are reported as "remediation in progress" (RIP). However, the cost to complete (CTC) remediation at these sites is highly uncertain and in many cases, the current remediation plan may need to be modified or replaced to achieve remediation objectives. This study evaluates the effectiveness of iterative stochastic cost optimization that incorporates new field data for periodic parameter recalibration to incrementally reduce prediction uncertainty and implement remediation design modifications as needed to minimize the life cycle cost (i.e., CTC). This systematic approach, using the Stochastic Cost Optimization Toolkit (SCOToolkit), enables early identification and correction of problems to stay on track for completion while minimizing the expected (i.e., probability-weighted average) CTC. This study considers a hypothetical site involving multiple DNAPL sources in an unconfined aquifer using thermal treatment for source reduction and electron donor injection for dissolved plume control. The initial design is based on stochastic optimization using model parameters and their joint uncertainty based on calibration to site characterization data. The model is periodically recalibrated using new monitoring data and performance data for the operating remediation systems. Projected future performance using the current remediation plan is assessed and reoptimization of operational variables for the current system or consideration of alternative designs are considered depending on the assessment results. We compare remediation duration and cost for the stepwise re-optimization approach with single stage optimization as well as with a non-optimized design based on typical engineering practice.
NASA Technical Reports Server (NTRS)
Freeman, W.; Ilcewicz, L.; Swanson, G.; Gutowski, T.
1992-01-01
The Structures Technology Program Office (STPO) at NASA LaRC has initiated development of a conceptual and preliminary designers' cost prediction model. The model will provide a technically sound method for evaluating the relative cost of different composite structural designs, fabrication processes, and assembly methods that can be compared to equivalent metallic parts or assemblies. The feasibility of developing cost prediction software in a modular form for interfacing with state-of-the-art preliminary design tools and computer aided design programs is being evaluated. The goal of this task is to establish theoretical cost functions that relate geometric design features to summed material cost and labor content in terms of process mechanics and physics. The output of the designers' present analytical tools will be input for the designers' cost prediction model to provide the designer with a database and deterministic cost methodology that allows one to trade and synthesize designs with both cost and weight as objective functions for optimization. This paper presents the team members, approach, goals, plans, and progress to date for development of COSTADE (Cost Optimization Software for Transport Aircraft Design Evaluation).
Stephanie A. Snyder; Keith D. Stockmann; Gaylord E. Morris
2012-01-01
The US Forest Service used contracted helicopter services as part of its wildfire suppression strategy. An optimization decision-modeling system was developed to assist in the contract selection process. Three contract award selection criteria were considered: cost per pound of delivered water, total contract cost, and quality ratings of the aircraft and vendors....
Using genetic algorithm to solve a new multi-period stochastic optimization model
NASA Astrophysics Data System (ADS)
Zhang, Xin-Li; Zhang, Ke-Cun
2009-09-01
This paper presents a new asset allocation model based on the CVaR risk measure and transaction costs. Institutional investors manage their strategic asset mix over time to achieve favorable returns subject to various uncertainties, policy and legal constraints, and other requirements. One may use a multi-period portfolio optimization model in order to determine an optimal asset mix. Recently, an alternative stochastic programming model with simulated paths was proposed by Hibiki [N. Hibiki, A hybrid simulation/tree multi-period stochastic programming model for optimal asset allocation, in: H. Takahashi (Ed.), The Japanese Association of Financial Econometrics and Engineering, JAFFE Journal (2001) 89-119 (in Japanese); N. Hibiki, A hybrid simulation/tree stochastic optimization model for dynamic asset allocation, in: B. Scherer (Ed.), Asset and Liability Management Tools: A Handbook for Best Practice, Risk Books, 2003, pp. 269-294], which was called a hybrid model. However, transaction costs were not considered in that paper. In this paper, we improve Hibiki's model in the following aspects: (1) the risk measure CVaR is introduced to control the wealth loss risk while maximizing the expected utility; (2) typical market imperfections such as short sale constraints and proportional transaction costs are considered simultaneously; (3) the application of a genetic algorithm to solve the resulting model is discussed in detail. Numerical results show the suitability and feasibility of our methodology.
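The CVaR building block used in such models can be sketched as follows; this is the standard Rockafellar-Uryasev scenario estimate of CVaR for a given candidate asset mix (the simulated returns, weights and confidence level are assumed, and the full multi-period model with transaction costs and a GA is not reproduced here):

    import numpy as np

    rng = np.random.default_rng(2)
    returns = rng.normal([0.005, 0.003], [0.05, 0.02], size=(2000, 2))   # simulated return scenarios
    w = np.array([0.6, 0.4])                                             # a candidate asset mix
    losses = -(returns @ w)                                              # loss = negative portfolio return
    alpha = 0.95

    var = np.quantile(losses, alpha)                       # Value-at-Risk at level alpha
    cvar = var + np.mean(np.maximum(losses - var, 0.0)) / (1.0 - alpha)
    print(f"VaR_95 = {var:.4f}, CVaR_95 = {cvar:.4f}")     # CVaR is the quantity such models constrain or penalise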
Multi-level optimization of a beam-like space truss utilizing a continuum model
NASA Technical Reports Server (NTRS)
Yates, K.; Gurdal, Z.; Thangjitham, S.
1992-01-01
A continuous beam model is developed for approximate analysis of a large, slender, beam-like truss. The model is incorporated in a multi-level optimization scheme for the weight minimization of such trusses. This scheme is tested against traditional optimization procedures for savings in computational cost. Results from both optimization methods are presented for comparison.
[Cost-effectiveness of breast cancer screening policies in Mexico].
Valencia-Mendoza, Atanacio; Sánchez-González, Gilberto; Bautista-Arredondo, Sergio; Torres-Mejía, Gabriela; Bertozzi, Stefano M
2009-01-01
Generate cost-effectiveness information to allow policy makers to optimize breast cancer (BC) policy in Mexico. We constructed a Markov model that incorporates four interrelated processes of the disease: the natural history; detection using mammography; treatment; and mortality from other, competing causes, according to which 13 different strategies were modeled. Strategies (starting age, % coverage, frequency in years) = (48, 25, 2), (40, 50, 2) and (40, 50, 1) constituted the optimal method for expanding the BC program, yielding 75.3, 116.4 and 171.1 thousand pesos per life-year saved, respectively. The strategies included in the optimal method for expanding the program produce a cost per life-year saved of less than two times the GNP per capita and hence are cost-effective according to the WHO Commission on Macroeconomics and Health criteria.
Interpreting cost of ownership for mix-and-match lithography
NASA Astrophysics Data System (ADS)
Levine, Alan L.; Bergendahl, Albert S.
1994-05-01
Cost of ownership modeling is a critical and emerging tool that provides significant insight into the ways to optimize device manufacturing costs. The development of a model to deal with a particular application, mix-and-match lithography, was performed in order to determine the level of cost savings and the optimum ways to create these savings. The use of sensitivity analysis with cost of ownership allows the user to make accurate trade-offs between technology and cost. The use and interpretation of the model results are described in this paper. Parameters analyzed include several manufacturing considerations -- depreciation, maintenance, engineering and operator labor, floorspace, resist, consumables and reticles. Inherent in this study is the ability to customize this analysis for a particular operating environment. Results demonstrate the clear advantages of a mix-and-match approach for three different operating environments. These case studies also demonstrate various methods to efficiently optimize cost savings strategies.
A probabilistic method for the estimation of residual risk in donated blood.
Bish, Ebru K; Ragavan, Prasanna K; Bish, Douglas R; Slonim, Anthony D; Stramer, Susan L
2014-10-01
The residual risk (RR) of transfusion-transmitted infections, including the human immunodeficiency virus and the hepatitis B and C viruses, is typically estimated by the incidence/window-period model, which relies on the following restrictive assumptions: each screening test, with probability 1, (1) detects an infected unit outside of the test's window period; (2) fails to detect an infected unit within the window period; and (3) correctly identifies an infection-free unit. These assumptions need not hold in practice due to random or systemic errors and individual variations in the window period. We develop a probability model that accurately estimates the RR by relaxing these assumptions, and we quantify their impact using a published cost-effectiveness study and also within an optimization model. These assumptions lead to inaccurate estimates in cost-effectiveness studies and to sub-optimal solutions in the optimization model. The testing solution generated by the optimization model translates into fewer expected infections without an increase in the testing cost.
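For reference, the conventional incidence/window-period estimate that the paper generalises multiplies the donor incidence rate by the window period; the numbers below are illustrative assumptions, not data from the study.

    incidence = 3.0 / 100_000          # new infections per person-year among donors (assumed)
    window_days = 9.0                  # test window period in days (assumed)

    residual_risk = incidence * (window_days / 365.0)
    print(f"residual risk ~ {residual_risk:.2e} per donation "
          f"(about 1 in {1/residual_risk:,.0f})")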
NASA Astrophysics Data System (ADS)
Zou, Rui; Riverson, John; Liu, Yong; Murphy, Ryan; Sim, Youn
2015-03-01
Integrated continuous simulation-optimization models can be effective predictors of process-based responses for cost-benefit optimization of best management practice (BMP) selection and placement. However, practical application of simulation-optimization models is computationally prohibitive for large-scale systems. This study proposes an enhanced Nonlinearity Interval Mapping Scheme (NIMS) to solve large-scale watershed simulation-optimization problems several orders of magnitude faster than other commonly used algorithms. An efficient interval response coefficient (IRC) derivation method was incorporated into the NIMS framework to overcome a computational bottleneck. The proposed algorithm was evaluated using a case study watershed in the Los Angeles County Flood Control District. Using a continuous simulation watershed/stream-transport model, Loading Simulation Program in C++ (LSPC), three nested in-stream compliance points (CPs), each with multiple Total Maximum Daily Load (TMDL) targets, were selected to derive optimal treatment levels for each of the 28 subwatersheds, so that the TMDL targets at all the CPs were met with the lowest possible BMP implementation cost. A Genetic Algorithm (GA) and NIMS were both applied and compared. The results showed that NIMS took 11 iterations (about 11 min) to complete, with the resulting optimal solution having a total cost of 67.2 million, while each of the multiple GA executions took 21-38 days to reach near-optimal solutions. The best solution obtained among all the GA executions had a minimized cost of 67.7 million, marginally higher but approximately equal to that of the NIMS solution. The results highlight the utility for decision making in large-scale watershed simulation-optimization formulations.
NASA Astrophysics Data System (ADS)
Yadav, Basant; Ch, Sudheer; Mathur, Shashi; Adamowski, Jan
2016-12-01
In-situ bioremediation is the most common groundwater remediation procedure used for treating organically contaminated sites. A simulation-optimization approach, which incorporates a simulation model for groundwater flow and transport processes within an optimization program, can help engineers design a remediation system that best satisfies management objectives as well as regulatory constraints. In-situ bioremediation is a highly complex, non-linear process, and the modelling of such a complex system requires significant computational effort. Soft computing techniques have a flexible mathematical structure which can generalize complex nonlinear processes. In in-situ bioremediation management, a physically-based model is used for the simulation and the simulated data are utilized by the optimization model to optimize the remediation cost. Repeatedly calling the simulator to satisfy the constraints is an extremely tedious and time-consuming process, so there is a need for a simulator that can reduce the computational burden. This study presents a simulation-optimization approach to achieve an accurate and cost-effective in-situ bioremediation system design for groundwater contaminated with BTEX (Benzene, Toluene, Ethylbenzene, and Xylenes) compounds. In this study, the Extreme Learning Machine (ELM) is used as a proxy simulator to replace BIOPLUME III in the simulation. The selection of ELM is based on a comparative analysis with an Artificial Neural Network (ANN) and a Support Vector Machine (SVM), as these were successfully used in previous studies of in-situ bioremediation system design. Further, a single-objective optimization problem is solved by a coupled Extreme Learning Machine (ELM)-Particle Swarm Optimization (PSO) technique to achieve the minimum cost for the in-situ bioremediation system design. The results indicate that ELM is a faster and more accurate proxy simulator than ANN and SVM. The total cost obtained by the ELM-PSO approach is held to a minimum while successfully satisfying all the regulatory constraints of the contaminated site.
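A generic Extreme Learning Machine regressor (random hidden layer, least-squares output weights) shows why ELM surrogates are so cheap to train; the data below are synthetic stand-ins, and the simulator the surrogate would replace (BIOPLUME III) is not reproduced.

    import numpy as np

    rng = np.random.default_rng(3)
    X = rng.uniform(-1, 1, size=(200, 4))                    # e.g. candidate remediation design variables (assumed)
    y = np.sin(X @ np.array([1.0, -2.0, 0.5, 1.5])) + 0.05 * rng.standard_normal(200)  # stand-in response

    n_hidden = 50
    W = rng.standard_normal((X.shape[1], n_hidden))          # random input weights (never trained)
    b = rng.standard_normal(n_hidden)                        # random biases
    H = np.tanh(X @ W + b)                                   # hidden-layer activations
    beta = np.linalg.pinv(H) @ y                             # output weights by least squares

    X_test = rng.uniform(-1, 1, size=(50, 4))
    y_test = np.sin(X_test @ np.array([1.0, -2.0, 0.5, 1.5]))
    y_pred = np.tanh(X_test @ W + b) @ beta
    print("test RMSE:", np.sqrt(np.mean((y_pred - y_test) ** 2)))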
Heliostat cost optimization study
NASA Astrophysics Data System (ADS)
von Reeken, Finn; Weinrebe, Gerhard; Keck, Thomas; Balz, Markus
2016-05-01
This paper presents a methodology for a heliostat cost optimization study. First, different variants of small, medium-sized and large heliostats are designed. Then the respective costs, tracking and optical quality are determined. For the calculation of optical quality, a structural model of the heliostat is programmed and analyzed using finite element software. The costs are determined based on inquiries and on experience with similar structures. Eventually the levelised electricity costs for a reference power tower plant are calculated. Before each annual simulation run the heliostat field is optimized. Calculated LCOEs are then used to identify the most suitable option(s). Finally, the conclusions and findings of this extensive cost study are used to define the concept of a new cost-efficient heliostat called `Stellio'.
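A simplified levelised-cost-of-electricity calculation of the kind used to rank such design variants is sketched below; all figures are illustrative assumptions, not values from the study.

    capex = 500e6            # plant capital cost (USD, assumed)
    om_per_year = 10e6       # annual operation and maintenance cost (USD, assumed)
    energy_per_year = 450e6  # net annual electricity yield (kWh, assumed)
    rate, years = 0.07, 25   # discount rate and plant lifetime (assumed)

    crf = rate * (1 + rate) ** years / ((1 + rate) ** years - 1)   # capital recovery factor
    lcoe = (capex * crf + om_per_year) / energy_per_year
    print(f"LCOE ~ {lcoe:.3f} USD/kWh")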
Sail Plan Configuration Optimization for a Modern Clipper Ship
NASA Astrophysics Data System (ADS)
Gerritsen, Margot; Doyle, Tyler; Iaccarino, Gianluca; Moin, Parviz
2002-11-01
We investigate the use of gradient-based and evolutionary algorithms for sail shape optimization. We present preliminary results for the optimization of sheeting angles for the rig of the future three-masted clipper yacht Maltese Falcon. This yacht will be equipped with square-rigged masts made up of yards of circular arc cross section. This design is especially attractive for megayachts because it provides a large sail area while maintaining aerodynamic and structural efficiency. The rig remains almost rigid over a large range of wind conditions, and therefore a simple geometrical model can be constructed without accounting for the true flying shape. The sheeting angle optimization studies are performed using both gradient-based cost function minimization and evolutionary algorithms. The fluid flow is modeled by the Reynolds-averaged Navier-Stokes equations with the Spalart-Allmaras turbulence model. Unstructured non-conforming grids are used to increase robustness and computational efficiency. The optimization process is automated by integrating the system components (geometry construction, grid generation, flow solver, force calculator, optimization). We compare the optimization results to those obtained previously through user-controlled parametric studies using simple cost functions and user intuition. We also investigate the effectiveness of various cost functions in the optimization (driving force maximization, ratio of driving force to heeling force maximization).
Pricing policy for declining demand using item preservation technology.
Khedlekar, Uttam Kumar; Shukla, Diwakar; Namdeo, Anubhav
2016-01-01
We have designed an inventory model for seasonal products in which deterioration can be controlled by investment in item preservation technology. Demand for the product is considered price sensitive and decreases linearly. This study shows that the profit is a concave function of the optimal selling price, the replenishment time and the preservation cost parameter. We simultaneously determine the optimal selling price of the product, the replenishment cycle and the cost of item preservation technology. Additionally, this study shows that there exists an optimal selling price and an optimal preservation investment that maximize the profit for every business set-up. Finally, the model is illustrated by numerical examples and a sensitivity analysis of the optimal solution with respect to major parameters.
NASA Astrophysics Data System (ADS)
Abu, M. Y.; Norizan, N. S.; Rahman, M. S. Abd
2018-04-01
Remanufacturing is a sustainability strategy in which an end-of-life product is restored to as-new performance, with a warranty equal to or better than that of the original product. In order to quantify the advantages of this strategy, all processes must be optimized to reach the ultimate goal and reduce the waste generated. The aim of this work is to evaluate the criticality of the parameters of an end-of-life crankshaft based on Taguchi's orthogonal array, and then to estimate the cost using traditional cost accounting, considering the critical parameters. By implementing the optimization, the remanufacturer produces lower cost and less waste during production, with a higher potential to gain profit. The Mahalanobis-Taguchi System proved to be a powerful optimization method that revealed the criticality of the parameters. When the method was applied to the MAN engine model, 5 out of 6 crankpins were found critical and required grinding, while no changes were found for the Caterpillar engine model. Accordingly, the cost per unit for the MAN engine model changed from MYR 1401.29 to MYR 1251.29, while the Caterpillar engine model showed no change because the criticality of its parameters did not change. Therefore, by integrating optimization and costing in the remanufacturing process, a better decision can be reached after observing the potential profit to be gained. The significance of the output is demonstrated through promoting sustainability by reducing the re-melting of damaged parts, ensuring the consistent benefit of returned cores.
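The screening step at the core of the Mahalanobis-Taguchi System is a scaled Mahalanobis-distance check against a healthy reference group; the sketch below uses made-up measurements and an assumed threshold, not the paper's crankshaft parameters.

    import numpy as np

    rng = np.random.default_rng(4)
    normal_group = rng.normal([50.0, 0.02], [0.05, 0.005], size=(30, 2))   # healthy reference samples (assumed)
    mean = normal_group.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(normal_group, rowvar=False))

    def mahalanobis_sq(x):
        d = x - mean
        return float(d @ cov_inv @ d) / x.size        # squared MD scaled by the number of variables, as in MTS

    candidates = np.array([[50.01, 0.021],            # near-nominal crankpin
                           [49.80, 0.045]])           # worn crankpin
    threshold = 3.0                                   # assumed decision threshold
    for c in candidates:
        md = mahalanobis_sq(c)
        print(c, f"MD^2={md:.2f}", "-> regrind" if md > threshold else "-> OK")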
Shimansky, Y P
2011-05-01
It is well known from numerous studies that perception can be significantly affected by intended action in many everyday situations, indicating that perception and related decision-making is not a simple, one-way sequence, but a complex iterative cognitive process. However, the underlying functional mechanisms are yet unclear. Based on an optimality approach, a quantitative computational model of one such mechanism has been developed in this study. It is assumed in the model that significant uncertainty about task-related parameters of the environment results in parameter estimation errors and an optimal control system should minimize the cost of such errors in terms of the optimality criterion. It is demonstrated that, if the cost of a parameter estimation error is significantly asymmetrical with respect to error direction, the tendency to minimize error cost creates a systematic deviation of the optimal parameter estimate from its maximum likelihood value. Consequently, optimization of parameter estimate and optimization of control action cannot be performed separately from each other under parameter uncertainty combined with asymmetry of estimation error cost, thus making the certainty equivalence principle non-applicable under those conditions. A hypothesis that not only the action, but also perception itself is biased by the above deviation of parameter estimate is supported by ample experimental evidence. The results provide important insights into the cognitive mechanisms of interaction between sensory perception and planning an action under realistic conditions. Implications for understanding related functional mechanisms of optimal control in the CNS are discussed.
Watershed Management Optimization Support Tool (WMOST) v2: Theoretical Documentation
The Watershed Management Optimization Support Tool (WMOST) is a decision support tool that evaluates the relative cost-effectiveness of management practices at the local or watershed scale. WMOST models the environmental effects and costs of management decisions in a watershed c...
Optimization of municipal pressure pumping station layout and sewage pipe network design
NASA Astrophysics Data System (ADS)
Tian, Jiandong; Cheng, Jilin; Gong, Yi
2018-03-01
Accelerated urbanization places extraordinary demands on sewer networks; thus optimization research to improve the design of these systems has practical significance. In this article, a subsystem nonlinear programming model is developed to optimize pumping station layout and sewage pipe network design. The subsystem model is expanded into a large-scale complex nonlinear programming system model to find the minimum total annual cost of the pumping station and network of all pipe segments. A comparative analysis is conducted using the sewage network in Taizhou City, China, as an example. The proposed method demonstrated that significant cost savings could have been realized if the studied system had been optimized using the techniques described in this article. Therefore, the method has practical value for optimizing urban sewage projects and provides a reference for theoretical research on optimization of urban drainage pumping station layouts.
This area includes work on whole-building energy modeling, cost-based optimization, and model accuracy; the optimization tool is used to provide support for the Building America program's teams and energy efficiency goals, with a Colorado graduate student exploring enhancements to building optimization in terms of robustness and speed.
NASA Astrophysics Data System (ADS)
Jiang, Xue; Lu, Wenxi; Hou, Zeyu; Zhao, Haiqing; Na, Jin
2015-11-01
The purpose of this study was to identify an optimal surfactant-enhanced aquifer remediation (SEAR) strategy for aquifers contaminated by dense non-aqueous phase liquid (DNAPL), based on an ensemble-of-surrogates optimization technique. A saturated heterogeneous medium contaminated by nitrobenzene was selected as a case study. A new kind of surrogate-based SEAR optimization, employing an ensemble surrogate (ES) model together with a genetic algorithm (GA), is presented. Four methods, namely radial basis function artificial neural network (RBFANN), kriging (KRG), support vector regression (SVR), and kernel extreme learning machines (KELM), were used to create four individual surrogate models, which were then compared. The comparison enabled us to select the two most accurate models (KELM and KRG) to establish an ES model of the SEAR simulation model, and the developed ES model was then compared with the four stand-alone surrogate models. The results showed that the average relative error of the average nitrobenzene removal rates between the ES model and the simulation model for 20 test samples was 0.8%, a high approximation accuracy, which indicates that the ES model provides more accurate predictions than the stand-alone surrogate models. Then, a nonlinear optimization model was formulated for the minimum cost, and the developed ES model was embedded into this optimization model as a constraint. In addition, GA was used to solve the optimization model to provide the optimal SEAR strategy. The developed ensemble surrogate-optimization approach was effective in seeking a cost-effective SEAR strategy for heterogeneous DNAPL-contaminated sites. This research is expected to enrich and develop the theoretical and technical implications for the analysis of remediation strategy optimization of DNAPL-contaminated aquifers.
NASA Astrophysics Data System (ADS)
Wang, Wu; Huang, Wei; Zhang, Yongjun
2018-03-01
The grid integration of Photovoltaic-Storage Systems introduces some uncertain factors into the network. In order to make full use of the adjusting ability of the Photovoltaic-Storage System (PSS), this paper puts forward a reactive power optimization model whose objective function is constructed from power loss and device adjusting cost, including the energy storage adjusting cost. A Cataclysmic Genetic Algorithm is used to solve this optimization problem, and comparison with other optimization methods shows that the dynamic extended reactive power optimization method put forward in this article can enhance the effect of reactive power optimization, reducing power loss and device adjusting cost while giving consideration to voltage safety.
NASA Astrophysics Data System (ADS)
Krugon, Seelam; Nagaraju, Dega
2017-05-01
This work describes and proposes a two-echelon inventory system in a supply chain, where the manufacturer offers a credit period to the retailer under exponential price-dependent demand. The model is framed with demand expressed as an exponential function of the retailer's unit selling price. A mathematical model is formulated to demonstrate the optimality of the cycle time, the retailer replenishment quantity, the number of shipments, and the total relevant cost of the supply chain. The major objective of the paper is to introduce the trade credit concept from the manufacturer to the retailer under exponential price-dependent demand; the retailer would like to delay payments to the manufacturer. In the first stage, the retailer and manufacturer cost expressions are written as functions of ordering cost, carrying cost and transportation cost. In the second stage, the manufacturer and retailer expressions are combined. A MATLAB program is written to derive the optimal cycle time, retailer replenishment quantity, number of shipments, and total relevant cost of the supply chain. From the optimality criteria derived, managerial insights can be drawn. The research findings show that the total cost of the supply chain decreases as the credit period increases under exponential price-dependent demand. To analyse the influence of the model parameters, a parametric analysis is also carried out with the help of a numerical example.
A new approach to optimal selection of services in health care organizations.
Adolphson, D L; Baird, M L; Lawrence, K D
1991-01-01
A new reimbursement policy adopted by Medicare in 1983 caused financial difficulties for many hospitals and health care organizations. Several organizations responded to these difficulties by developing systems to carefully measure their costs of providing services. The purpose of such systems was to provide relevant information about the profitability of hospital services. This paper presents a new method of making hospital service selection decisions: it is based on an optimization model that avoids arbitrary cost allocations as a basis for computing the costs of offering a given service. The new method provides more reliable information about which services are profitable or unprofitable, and it provides an accurate measure of the degree to which a service is profitable or unprofitable. The new method also provides useful information about the sensitivity of the optimal decision to changes in costs and revenues. Specialized algorithms for the optimization model lead to very efficient implementation of the method, even for the largest health care organizations.
Cox, G; Beresford, N A; Alvarez-Farizo, B; Oughton, D; Kis, Z; Eged, K; Thørring, H; Hunt, J; Wright, S; Barnett, C L; Gil, J M; Howard, B J; Crout, N M J
2005-01-01
A spatially implemented model designed to assist the identification of optimal countermeasure strategies for radioactively contaminated regions is described. Collective and individual ingestion doses for people within the affected area are estimated together with collective exported ingestion dose. A range of countermeasures are incorporated within the model, and environmental restrictions have been included as appropriate. The model evaluates the effectiveness of a given combination of countermeasures through a cost function which balances the benefit obtained through the reduction in dose with the cost of implementation. The optimal countermeasure strategy is the combination of individual countermeasures (and when and where they are implemented) which gives the lowest value of the cost function. The model outputs should not be considered as definitive solutions, rather as interactive inputs to the decision making process. As a demonstration the model has been applied to a hypothetical scenario in Cumbria (UK). This scenario considered a published nuclear power plant accident scenario with a total deposition of 1.7×10^14, 1.2×10^13, 2.8×10^10 and 5.3×10^9 Bq for Cs-137, Sr-90, Pu-239/240 and Am-241, respectively. The model predicts that if no remediation measures were implemented the resulting collective dose would be approximately 36 000 person-Sv (predominantly from Cs-137) over a 10-year period post-deposition. The optimal countermeasure strategy is predicted to avert approximately 33 000 person-Sv at a cost of approximately 160 million pounds. The optimal strategy comprises a mixture of ploughing, AFCF (ammonium-ferric hexacyano-ferrate) administration, potassium fertiliser application, clean feeding of livestock and food restrictions. The model recommends specific areas within the contaminated area and time periods where these measures should be implemented.
Cost Efficiency in the University: A Departmental Evaluation Model
ERIC Educational Resources Information Center
Gimenez, Victor M.; Martinez, Jose Luis
2006-01-01
This article presents a model for the analysis of cost efficiency within the framework of data envelopment analysis models. It calculates the cost excess, separating a unit of production from its optimal or frontier levels, and, at the same time, breaks these excesses down into three explanatory factors: (a) technical inefficiency, which depends…
NASA Astrophysics Data System (ADS)
Gunes, Ersin Fatih
Turkey is located between Europe, which has increasing demand for natural gas, and the Middle East, Asia and Russia, which have rich and strong natural gas supplies. Because of this geographical location, Turkey has strategic importance with respect to energy sources. To supply this demand, a pipeline network configuration with optimal and efficient lengths, pressures, diameters and number of compressor stations is needed. Because Turkey has an existing, operating network topology, obtaining an optimal configuration of the pipelines, including an optimal number of compressor stations at optimal locations, is the focus of this study. Identifying a network design with the lowest cost is important because of the high maintenance and set-up costs. The number of compressor stations, the lengths of the pipeline segments, the diameter sizes and the pressures at the compressor stations are the decision variables in this study. Two existing optimization models were selected and applied to the case study of Turkey. Because of the fixed cost of investment, both models are formulated as mixed-integer nonlinear programs, which require branch and bound combined with nonlinear programming solution methods. The differences between the two models relate to factors that can affect the natural gas network system, such as wall thickness, material balance, compressor isentropic head and the amount of gas to be delivered. The results obtained by these two techniques are compared with each other and with the current system. The major differences between the results are costs, pressures and flow rates. These solution techniques are able to find a minimum-cost solution for each model, both of which cost less than the current system while satisfying all the constraints on diameter, length, flow rate and pressure. These results give the big picture of an ideal configuration for the future-state network of Turkey.
Modified allocation capacitated planning model in blood supply chain management
NASA Astrophysics Data System (ADS)
Mansur, A.; Vanany, I.; Arvitrida, N. I.
2018-04-01
Blood supply chain management (BSCM) is a complex process management that involves many cooperating stakeholders. BSCM involves four echelon processes, which are blood collection or procurement, production, inventory, and distribution. This research develops an optimization model of blood distribution planning. The efficiency of decentralization and centralization policies in a blood distribution chain are compared, by optimizing the amount of blood delivered from a blood center to a blood bank. This model is developed based on allocation problem of capacitated planning model. At the first stage, the capacity and the cost of transportation are considered to create an initial capacitated planning model. Then, the inventory holding and shortage costs are added to the model. These additional parameters of inventory costs lead the model to be more realistic and accurate.
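The allocation stage of such a capacitated planning model can be cast as a small linear program: choose shipment quantities from blood centres to blood banks to minimise cost, subject to centre capacities and bank demands. The sketch below uses SciPy with invented costs, capacities and demands and keeps only the transportation term; the holding and shortage costs mentioned above could be added as extra variables and cost terms. It illustrates the structure, not the authors' model.

```python
import numpy as np
from scipy.optimize import linprog

# 2 blood centres x 3 blood banks (illustrative data).
transport = np.array([[4.0, 6.0, 9.0],
                      [7.0, 3.0, 5.0]])       # cost per unit shipped
capacity = np.array([120.0, 100.0])           # units available at each centre
demand   = np.array([60.0, 80.0, 70.0])       # units required at each bank

c = transport.flatten()                        # decision: x[i, j] shipped from centre i to bank j

# Supply constraints: sum_j x[i, j] <= capacity[i]
A_ub = np.zeros((2, 6)); b_ub = capacity
for i in range(2):
    A_ub[i, i * 3:(i + 1) * 3] = 1.0

# Demand constraints: sum_i x[i, j] = demand[j]
A_eq = np.zeros((3, 6)); b_eq = demand
for j in range(3):
    A_eq[j, [j, 3 + j]] = 1.0

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
print(res.x.reshape(2, 3), res.fun)
```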
Planning Models for Tuberculosis Control Programs
Chorba, Ronald W.; Sanders, J. L.
1971-01-01
A discrete-state, discrete-time simulation model of tuberculosis is presented, with submodels of preventive interventions. The model allows prediction of the prevalence of the disease over the simulation period. Preventive and control programs and their optimal budgets may be planned by using the model for cost-benefit analysis: costs are assigned to the program components and disease outcomes to determine the ratio of program expenditures to future savings on medical and socioeconomic costs of tuberculosis. Optimization is achieved by allocating funds in successive increments to alternative program components in simulation and identifying those components that lead to the greatest reduction in prevalence for the given level of expenditure. The method is applied to four hypothetical disease prevalence situations. PMID:4999448
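The incremental allocation described above can be pictured as a greedy procedure: funds are released in fixed increments, and each increment goes to whichever programme component currently yields the largest simulated reduction in prevalence. The diminishing-returns response curves below are invented placeholders standing in for the simulation submodels, not outputs of the authors' model.

```python
import math

BUDGET, STEP = 1_000_000, 50_000

# Hypothetical response: prevalence reduction achieved by spending x on a component.
components = {
    "case_finding":     lambda x: 120 * (1 - math.exp(-x / 400_000)),
    "chemoprophylaxis": lambda x: 90 * (1 - math.exp(-x / 300_000)),
    "bcg_vaccination":  lambda x: 60 * (1 - math.exp(-x / 500_000)),
}
spent = {name: 0.0 for name in components}

for _ in range(BUDGET // STEP):
    # Marginal benefit of the next increment for each component.
    gains = {name: f(spent[name] + STEP) - f(spent[name]) for name, f in components.items()}
    best = max(gains, key=gains.get)
    spent[best] += STEP

total_reduction = sum(f(spent[n]) for n, f in components.items())
print(spent, round(total_reduction, 1))
```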
Adaptive Path Control of Surface Ships in Restricted Waters.
1980-08-01
Only front-matter fragments of this report are available here: list-of-tables entries giving optimal gains for the Tokyo Maru at Fn = 0.116 under a random-walk disturbance model, and nomenclature defining the yaw mass and added-mass moments of inertia, the optimal-control (weighted least-squares) cost function J, the RMS cost, and the Kalman-Bucy state estimator.
NASA Astrophysics Data System (ADS)
Peng, Haijun; Wang, Wei
2016-10-01
An adaptive surrogate model-based multi-objective optimization strategy that combines the benefits of invariant manifolds and low-thrust control toward developing a low-computational-cost transfer trajectory between libration orbits around the L1 and L2 libration points in the Sun-Earth system has been proposed in this paper. A new structure for a multi-objective transfer trajectory optimization model that divides the transfer trajectory into several segments and gives the dominations for invariant manifolds and low-thrust control in different segments has been established. To reduce the computational cost of multi-objective transfer trajectory optimization, a mixed sampling strategy-based adaptive surrogate model has been proposed. Numerical simulations show that the results obtained from the adaptive surrogate-based multi-objective optimization are in agreement with the results obtained using direct multi-objective optimization methods, and the computational workload of the adaptive surrogate-based multi-objective optimization is only approximately 10% of that of direct multi-objective optimization. Furthermore, the generating efficiency of the Pareto points of the adaptive surrogate-based multi-objective optimization is approximately 8 times that of the direct multi-objective optimization. Therefore, the proposed adaptive surrogate-based multi-objective optimization provides obvious advantages over direct multi-objective optimization methods.
OPTIMAL AIRCRAFT TRAJECTORIES FOR SPECIFIED RANGE
NASA Technical Reports Server (NTRS)
Lee, H.
1994-01-01
For an aircraft operating over a fixed range, the operating costs are basically a sum of fuel cost and time cost. While minimum fuel and minimum time trajectories are relatively easy to calculate, the determination of a minimum cost trajectory can be a complex undertaking. This computer program was developed to optimize trajectories with respect to a cost function based on a weighted sum of fuel cost and time cost. As a research tool, the program could be used to study various characteristics of optimum trajectories and their comparison to standard trajectories. It might also be used to generate a model for the development of an airborne trajectory optimization system. The program could be incorporated into an airline flight planning system, with optimum flight plans determined at takeoff time for the prevailing flight conditions. The use of trajectory optimization could significantly reduce the cost for a given aircraft mission. The algorithm incorporated in the program assumes that a trajectory consists of climb, cruise, and descent segments. The optimization of each segment is not done independently, as in classical procedures, but is performed in a manner which accounts for interaction between the segments. This is accomplished by the application of optimal control theory. The climb and descent profiles are generated by integrating a set of kinematic and dynamic equations, where the total energy of the aircraft is the independent variable. At each energy level of the climb and descent profiles, the air speed and power setting necessary for an optimal trajectory are determined. The variational Hamiltonian of the problem consists of the rate of change of cost with respect to total energy and a term dependent on the adjoint variable, which is identical to the optimum cruise cost at a specified altitude. This variable uniquely specifies the optimal cruise energy, cruise altitude, cruise Mach number, and, indirectly, the climb and descent profiles. If the optimum cruise cost is specified, an optimum trajectory can easily be generated; however, the range obtained for a particular optimum cruise cost is not known a priori. For short range flights, the program iteratively varies the optimum cruise cost until the computed range converges to the specified range. For long-range flights, iteration is unnecessary since the specified range can be divided into a cruise segment distance and full climb and descent distances. The user must supply the program with engine fuel flow rate coefficients and an aircraft aerodynamic model. The program currently includes coefficients for the Pratt-Whitney JT8D-7 engine and an aerodynamic model for the Boeing 727. Input to the program consists of the flight range to be covered and the prevailing flight conditions including pressure, temperature, and wind profiles. Information output by the program includes: optimum cruise tables at selected weights, optimal cruise quantities as a function of cruise weight and cruise distance, climb and descent profiles, and a summary of the complete synthesized optimal trajectory. This program is written in FORTRAN IV for batch execution and has been implemented on a CDC 6000 series computer with a central memory requirement of approximately 100K (octal) of 60 bit words. This aircraft trajectory optimization program was developed in 1979.
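For the short-range case, the program's outer loop can be pictured as a one-dimensional root search: the optimum cruise cost is adjusted until the range of the synthesized trajectory matches the specified range. The sketch below uses a monotone toy stand-in for the trajectory synthesis; in the actual program the range comes from integrating the energy-state climb, cruise and descent equations.

```python
def synthesized_range(cruise_cost):
    """Placeholder: range produced by a trajectory built for a given optimum cruise cost.
    A monotone decreasing stand-in for the real climb/cruise/descent integration."""
    return 2000.0 / (1.0 + cruise_cost)

def find_cruise_cost(target_range, lo=0.0, hi=10.0, tol=1e-6):
    """Bisection on cruise cost until the computed range converges to the specified range."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if synthesized_range(mid) > target_range:
            lo = mid        # range too long: raise the cruise cost
        else:
            hi = mid
    return 0.5 * (lo + hi)

cc = find_cruise_cost(target_range=800.0)
print(round(cc, 4), round(synthesized_range(cc), 1))
```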
NASA Astrophysics Data System (ADS)
Pakpahan, Eka K. A.; Iskandar, Bermawi P.
2015-12-01
The mining industry is characterized by high operational revenue, and hence high availability of the heavy equipment used in mining is critical to meeting revenue targets. To maintain high availability of the heavy equipment, the equipment's owner hires an agent to perform maintenance, and a contract is used to govern the relationship between the two parties. Traditional contracts such as fixed-price, cost-plus or penalty-based contracts are unable to push the agent's performance beyond the target, which in turn leads to a sub-optimal result (revenue). This research deals with designing maintenance contract compensation schemes that induce the agent to select the highest possible maintenance effort level, thereby improving the agent's performance and achieving maximum utility for both parties. Principal-agent theory is used as the modeling approach because it can model the owner's and the agent's decision-making processes simultaneously. The compensation schemes considered are fixed price, cost sharing and revenue sharing, and the optimal decision is obtained using a numerical method. The results show that if both parties are risk neutral, infinitely many combinations of fixed price, cost sharing and revenue sharing produce the same optimal solution. The combination of fixed-price and cost-sharing contracts results in the optimal solution when the agent is risk averse, while the optimal combination of fixed price and revenue sharing is obtained when the agent is risk averse. When both parties are risk averse, the optimal compensation scheme is a combination of fixed price, cost sharing and revenue sharing.
NASA Astrophysics Data System (ADS)
Feyen, Luc; Gorelick, Steven M.
2005-03-01
We propose a framework that combines simulation optimization with Bayesian decision analysis to evaluate the worth of hydraulic conductivity data for optimal groundwater resources management in ecologically sensitive areas. A stochastic simulation optimization management model is employed to plan regionally distributed groundwater pumping while preserving the hydroecological balance in wetland areas. Because predictions made by an aquifer model are uncertain, groundwater supply systems operate below maximum yield. Collecting data from the groundwater system can potentially reduce predictive uncertainty and increase safe water production. The price paid for improvement in water management is the cost of collecting the additional data. Efficient data collection using Bayesian decision analysis proceeds in three stages: (1) The prior analysis determines the optimal pumping scheme and profit from water sales on the basis of known information. (2) The preposterior analysis estimates the optimal measurement locations and evaluates whether each sequential measurement will be cost-effective before it is taken. (3) The posterior analysis then revises the prior optimal pumping scheme and consequent profit, given the new information. Stochastic simulation optimization employing a multiple-realization approach is used to determine the optimal pumping scheme in each of the three stages. The cost of new data must not exceed the expected increase in benefit obtained in optimal groundwater exploitation. An example based on groundwater management practices in Florida aimed at wetland protection showed that the cost of data collection more than paid for itself by enabling a safe and reliable increase in production.
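The economics behind step (2) can be illustrated with a stylised value-of-information calculation over a handful of discrete conductivity scenarios: the expected profit of the best single pumping scheme chosen under prior uncertainty is compared with the expected profit when the scheme can be chosen after the measurement reveals the scenario. The difference is an upper bound (the perfect-information case) on what the data are worth. All figures below are invented and the setup is far simpler than the stochastic multiple-realization optimization used in the study.

```python
import numpy as np

# Three equally likely hydraulic-conductivity scenarios and three candidate pumping schemes.
# profit[i, j] = profit (M$) of scheme j if scenario i is true (negative = penalty for violating
# the hydroecological constraints).
profit = np.array([[5.0, 7.0, -2.0],
                   [5.0, 3.0,  9.0],
                   [5.0, 6.0,  4.0]])
p = np.array([1 / 3, 1 / 3, 1 / 3])

prior_value = max(p @ profit[:, j] for j in range(profit.shape[1]))   # commit before data
posterior_value = p @ profit.max(axis=1)                              # choose after perfect data
value_of_data = posterior_value - prior_value
print(f"prior {prior_value:.2f}  posterior {posterior_value:.2f}  "
      f"justifiable data cost <= {value_of_data:.2f} M$")
```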
Cost-Effectiveness Analysis of Bariatric Surgery for Morbid Obesity.
Alsumali, Adnan; Eguale, Tewodros; Bairdain, Sigrid; Samnaliev, Mihail
2018-01-15
In the USA, three types of bariatric surgery are widely performed: laparoscopic sleeve gastrectomy (LSG), laparoscopic Roux-en-Y gastric bypass (LRYGB), and laparoscopic adjustable gastric banding (LAGB). However, few economic evaluations of bariatric surgery are published, and there is also a scarcity of studies focusing on LSG alone. Therefore, this study evaluates the cost-effectiveness of bariatric surgery using LRYGB, LAGB, and LSG as treatment for morbid obesity. A microsimulation model was developed over a lifetime horizon to simulate weight change, health consequences, and costs of bariatric surgery for morbid obesity, from a US health care perspective. The model was populated with data from the first report of the American College of Surgeons, and incremental cost-effectiveness ratios (ICERs) in terms of cost per quality-adjusted life-year (QALY) gained were used. Model parameters were estimated from publicly available databases and published literature. LRYGB was cost-effective, yielding higher QALYs (17.07) at a cost of $138,632, compared with LSG (16.56 QALYs; $138,925), LAGB (16.10 QALYs; $135,923), and no surgery (15.17 QALYs; $128,284). Sensitivity analysis showed that the results were most sensitive to the initial cost of surgery and the weight-regain assumption. Across patient groups, LRYGB remained the optimal bariatric technique, except for morbid obesity class 1 (BMI 35-39.9 kg/m2) patients, for whom LSG was the optimal choice. LRYGB is thus the most cost-effective technique compared with LSG, LAGB, and no surgery for most subgroups, whereas LSG was the most cost-effective choice when initial BMI ranged between 35 and 39.9 kg/m2.
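For reference, an incremental cost-effectiveness ratio is simply the cost difference divided by the QALY difference between two strategies. The sketch below reproduces that arithmetic using the figures quoted above; whether a strategy is judged cost-effective then depends on a willingness-to-pay threshold, which is an external assumption rather than a value reported here.

```python
strategies = {            # (lifetime cost in USD, QALYs), figures as quoted in the abstract
    "no surgery": (128_284, 15.17),
    "LAGB":       (135_923, 16.10),
    "LSG":        (138_925, 16.56),
    "LRYGB":      (138_632, 17.07),
}

def icer(a, b):
    """Incremental cost-effectiveness ratio of strategy b versus strategy a ($ per QALY gained)."""
    (cost_a, qaly_a), (cost_b, qaly_b) = strategies[a], strategies[b]
    return (cost_b - cost_a) / (qaly_b - qaly_a)

print("LRYGB vs no surgery:", round(icer("no surgery", "LRYGB")), "$/QALY")
print("LRYGB vs LSG:       ", round(icer("LSG", "LRYGB")), "$/QALY (negative: LRYGB dominates)")
```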
RAPID REMOVAL OF A GROUNDWATER CONTAMINANT PLUME.
Lefkoff, L. Jeff; Gorelick, Steven M.; ,
1985-01-01
A groundwater management model is used to design an aquifer restoration system that removes a contaminant plume from a hypothetical aquifer in four years. The design model utilizes groundwater flow simulation and mathematical optimization. Optimal pumping and injection strategies achieve rapid restoration for a minimum total pumping cost. Rapid restoration is accomplished by maintaining specified groundwater velocities around the plume perimeter towards a group of pumping wells located near the plume center. The model does not account for hydrodynamic dispersion. Results show that pumping costs are particularly sensitive to injection capacity. An 8 percent decrease in the maximum allowable injection rate may lead to a 29 percent increase in total pumping costs.
Sinha, Snehal K; Kumar, Mithilesh; Guria, Chandan; Kumar, Anup; Banerjee, Chiranjib
2017-10-01
Algal model based multi-objective optimization using elitist non-dominated sorting genetic algorithm with inheritance was carried out for batch cultivation of Dunaliella tertiolecta using NPK-fertilizer. Optimization problems involving two- and three-objective functions were solved simultaneously. The objective functions are: maximization of algae-biomass and lipid productivity with minimization of cultivation time and cost. Time variant light intensity and temperature including NPK-fertilizer, NaCl and NaHCO 3 loadings are the important decision variables. Algal model involving Monod/Andrews adsorption kinetics and Droop model with internal nutrient cell quota was used for optimization studies. Sets of non-dominated (equally good) Pareto optimal solutions were obtained for the problems studied. It was observed that time variant optimal light intensity and temperature trajectories, including optimum NPK fertilizer, NaCl and NaHCO 3 concentration has significant influence to improve biomass and lipid productivity under minimum cultivation time and cost. Proposed optimization studies may be helpful to implement the control strategy in scale-up operation. Copyright © 2017 Elsevier Ltd. All rights reserved.
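The core of any such Pareto-based search is the non-dominated filter that retains the "equally good" trade-off solutions. A minimal version for a two-objective case (maximise lipid productivity, minimise cultivation cost) is sketched below; it is generic and does not encode the algal growth model or the full four-objective problem.

```python
def pareto_front(points):
    """Return the non-dominated subset of (productivity, cost) pairs:
    productivity is maximised, cost is minimised."""
    front = []
    for p, c in points:
        dominated = any(p2 >= p and c2 <= c and (p2, c2) != (p, c) for p2, c2 in points)
        if not dominated:
            front.append((p, c))
    return sorted(front)

# Invented candidate solutions: (lipid productivity, cultivation cost).
candidates = [(1.2, 30.0), (1.5, 42.0), (1.1, 25.0), (1.5, 38.0), (0.9, 40.0)]
print(pareto_front(candidates))
```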
Watershed Management Optimization Support Tool (WMOST) v2: User Manual and Case Studies
The Watershed Management Optimization Support Tool (WMOST) is a decision support tool that evaluates the relative cost-effectiveness of management practices at the local or watershed scale. WMOST models the environmental effects and costs of management decisions in a watershed c...
Research on reverse logistics location under uncertainty environment based on grey prediction
NASA Astrophysics Data System (ADS)
Zhenqiang, Bao; Congwei, Zhu; Yuqin, Zhao; Quanke, Pan
This article constructs a reverse logistics network under an uncertain environment, integrates the reverse logistics network with the distribution network, and forms a closed-loop network. An optimization model based on cost is established to help the intermediate center, manufacturing center and remanufacturing center make location decisions. A grey model GM(1,1) is used to predict the product holdings at the collection points, and the prediction results are then fed into the cost optimization model to obtain a solution. Finally, an example is given to verify the effectiveness and feasibility of the model.
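The GM(1,1) prediction step is standard and compact enough to sketch: the raw series is accumulated, a development coefficient and grey input are fitted by least squares, and the accumulated series is extrapolated and differenced back. The series below is an invented stand-in for product holdings at a collection point.

```python
import numpy as np

def gm11_forecast(x0, horizon=3):
    """Classic GM(1,1) grey forecast of a short, positive time series."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                                   # accumulated generating operation (AGO)
    z1 = 0.5 * (x1[1:] + x1[:-1])                        # background values
    B = np.column_stack((-z1, np.ones_like(z1)))
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]     # fit x0(k) = -a*z1(k) + b
    n = len(x0)
    k = np.arange(n + horizon)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a    # fitted/extrapolated AGO series
    x0_hat = np.diff(x1_hat, prepend=0.0)                # inverse AGO
    return x0_hat[n:]                                    # forecasts beyond the observed series

holdings = [132, 145, 151, 160, 168]                     # hypothetical product holdings
print(np.round(gm11_forecast(holdings), 1))
```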
Optimal transport and the placenta
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morgan, Simon; Xia, Qinglan; Salafia, Carolym
2010-01-01
The goal of this paper is to investigate the expected effects of (i) placental size, (ii) placental shape and (iii) the position of insertion of the umbilical cord on the work done by the foetus heart in pumping blood across the placenta. We use optimal transport theory and modeling to quantify the expected effects of these factors. Total transport cost and the shape factor contribution to cost are given by the optimal transport model. Total placental transport cost is highly correlated with birth weight, placenta weight, FPR and the metabolic scaling factor beta. The shape factor is also highly correlated with birth weight, and after adjustment for placental weight, is highly correlated with the metabolic scaling factor beta.
Berret, Bastien; Darlot, Christian; Jean, Frédéric; Pozzo, Thierry; Papaxanthis, Charalambos; Gauthier, Jean Paul
2008-01-01
An important question in the literature focusing on motor control is to determine which laws drive biological limb movements. This question has prompted numerous investigations analyzing arm movements in both humans and monkeys. Many theories assume that among all possible movements the one actually performed satisfies an optimality criterion. In the framework of optimal control theory, a first approach is to choose a cost function and test whether the proposed model fits with experimental data. A second approach (generally considered as the more difficult) is to infer the cost function from behavioral data. The cost proposed here includes a term called the absolute work of forces, reflecting the mechanical energy expenditure. Contrary to most investigations studying optimality principles of arm movements, this model has the particularity of using a cost function that is not smooth. First, a mathematical theory related to both direct and inverse optimal control approaches is presented. The first theoretical result is the Inactivation Principle, according to which minimizing a term similar to the absolute work implies simultaneous inactivation of agonistic and antagonistic muscles acting on a single joint, near the time of peak velocity. The second theoretical result is that, conversely, the presence of non-smoothness in the cost function is a necessary condition for the existence of such inactivation. Second, during an experimental study, participants were asked to perform fast vertical arm movements with one, two, and three degrees of freedom. Observed trajectories, velocity profiles, and final postures were accurately simulated by the model. In accordance, electromyographic signals showed brief simultaneous inactivation of opposing muscles during movements. Thus, assuming that human movements are optimal with respect to a certain integral cost, the minimization of an absolute-work-like cost is supported by experimental observations. Such types of optimality criteria may be applied to a large range of biological movements. PMID:18949023
Minimum cost to control bovine tuberculosis in cow-calf herds
Smith, Rebecca L.; Tauer, Loren W.; Sanderson, Michael W.; Grohn, Yrjo T.
2014-01-01
Bovine tuberculosis (bTB) outbreaks in US cattle herds, while rare, are expensive to control. A stochastic model for bTB control in US cattle herds was adapted to more accurately represent cow-calf herd dynamics and was validated by comparison to 2 reported outbreaks. Control cost calculations were added to the model, which was then optimized to minimize costs for either the farm or the government. The results of the optimization showed that test-and-removal costs were minimized for both farms and the government if only 2 negative whole-herd tests were required to declare a herd free of infection, with a 2–3 month testing interval. However, the optimal testing interval for governments was increased to 2–4 months if the model was constrained to reject control programs leading to an infected herd being declared free of infection. Although farms always preferred test-and-removal to depopulation from a cost standpoint, government costs were lower with depopulation more than half the time in 2 of 8 regions. Global sensitivity analysis showed that indemnity costs were significantly associated with a rise in the cost to the government, and that low replacement rates were responsible for the long time to detection predicted by the model, but that improving the sensitivity of slaughterhouse screening and the probability that a slaughtered animal’s herd of origin can be identified would result in faster detection times. PMID:24703601
Asnoune, M; Abdelmalek, F; Djelloul, A; Mesghouni, K; Addou, A
2016-11-01
In household waste management, the objective is always to design an optimal integrated system, where 'optimal' and 'integrated' refer to matching the waste with the techniques of treatment, valorization and disposal, usually at the lowest possible cost. Optimization of household waste management using operational methodologies has not yet been applied in any Algerian district. We propose an optimization of the valorization of household waste in Tiaret city in order to lower the total management cost. The methodology is modelled by non-linear mathematical equations with 28 decision variables and aims to assign optimally the seven components of household waste (i.e. plastic, cardboard paper, glass, metals, textiles, organic matter and others) among four treatment centres [i.e. waste to energy (WTE) or incineration, composting (CM), anaerobic digestion (ANB) or methanization, and landfilling (LF)]. Analysis of the results shows that the variation in total cost is mainly due to the assignment of waste among the treatment centres and that certain treatments cannot be applied to household waste in Tiaret city, whereas certain valorization techniques are favoured by the optimization. Four scenarios were proposed to optimize the system cost, and the modelling shows that the mixed scenario (the three treatment centres CM, ANB and LF) offers the best combination of waste treatment technologies, with an optimal solution for the system (cost and profit). © The Author(s) 2016.
Optimal structural design of the midship of a VLCC based on the strategy integrating SVM and GA
NASA Astrophysics Data System (ADS)
Sun, Li; Wang, Deyu
2012-03-01
In this paper a hybrid process of modeling and optimization, which integrates a support vector machine (SVM) and genetic algorithm (GA), was introduced to reduce the high time cost in structural optimization of ships. SVM, which is rooted in statistical learning theory and an approximate implementation of the method of structural risk minimization, can provide a good generalization performance in metamodeling the input-output relationship of real problems and consequently cuts down on high time cost in the analysis of real problems, such as FEM analysis. The GA, as a powerful optimization technique, possesses remarkable advantages for the problems that can hardly be optimized with common gradient-based optimization methods, which makes it suitable for optimizing models built by SVM. Based on the SVM-GA strategy, optimization of structural scantlings in the midship of a very large crude carrier (VLCC) ship was carried out according to the direct strength assessment method in common structural rules (CSR), which eventually demonstrates the high efficiency of SVM-GA in optimizing the ship structural scantlings under heavy computational complexity. The time cost of this optimization with SVM-GA has been sharply reduced, many more loops have been processed within a small amount of time and the design has been improved remarkably.
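The SVM-GA loop can be illustrated on a toy problem: sample a handful of expensive evaluations (standing in for FEM runs), train a support vector regression metamodel, and let a small genetic algorithm search the cheap surrogate for a low-cost design. The objective below is an invented analytic function rather than a ship-structure model, and the GA is a bare-bones implementation, not the one used in the paper.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)

def expensive_eval(x):
    """Stand-in for an FEM-based structural weight/cost evaluation."""
    return np.sum((x - 0.3) ** 2, axis=-1) + 0.1 * np.sin(10 * x[..., 0])

# 1. Sample the expensive model and fit the SVR surrogate.
X_train = rng.uniform(0.0, 1.0, size=(60, 2))
y_train = expensive_eval(X_train)
surrogate = SVR(kernel="rbf", C=100.0, epsilon=0.01).fit(X_train, y_train)

# 2. Simple genetic algorithm over the surrogate (minimisation).
pop = rng.uniform(0.0, 1.0, size=(40, 2))
for _ in range(100):
    fitness = surrogate.predict(pop)
    parents = pop[np.argsort(fitness)[:20]]                         # truncation selection
    children = 0.5 * (parents[rng.integers(0, 20, 20)] + parents[rng.integers(0, 20, 20)])
    children += rng.normal(0.0, 0.05, children.shape)               # mutation
    pop = np.clip(np.vstack((parents, children)), 0.0, 1.0)

best = pop[np.argmin(surrogate.predict(pop))]
print("surrogate optimum:", best, "true value:", float(expensive_eval(best)))
```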
Evaluating data worth for ground-water management under uncertainty
Wagner, B.J.
1999-01-01
A decision framework is presented for assessing the value of ground-water sampling within the context of ground-water management under uncertainty. The framework couples two optimization models - a chance-constrained ground-water management model and an integer-programming sampling network design model - to identify optimal pumping and sampling strategies. The methodology consists of four steps: (1) The optimal ground-water management strategy for the present level of model uncertainty is determined using the chance-constrained management model; (2) for a specified data collection budget, the monitoring network design model identifies, prior to data collection, the sampling strategy that will minimize model uncertainty; (3) the optimal ground-water management strategy is recalculated on the basis of the projected model uncertainty after sampling; and (4) the worth of the monitoring strategy is assessed by comparing the value of the sample information - i.e., the projected reduction in management costs - with the cost of data collection. Steps 2-4 are repeated for a series of data collection budgets, producing a suite of management/monitoring alternatives, from which the best alternative can be selected. A hypothetical example demonstrates the methodology's ability to identify the ground-water sampling strategy with greatest net economic benefit for ground-water management.
Applications of polynomial optimization in financial risk investment
NASA Astrophysics Data System (ADS)
Zeng, Meilan; Fu, Hongwei
2017-09-01
Recently, polynomial optimization has many important applications in optimization, financial economics and eigenvalues of tensor, etc. This paper studies the applications of polynomial optimization in financial risk investment. We consider the standard mean-variance risk measurement model and the mean-variance risk measurement model with transaction costs. We use Lasserre's hierarchy of semidefinite programming (SDP) relaxations to solve the specific cases. The results show that polynomial optimization is effective for some financial optimization problems.
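The second model mentioned above, mean-variance selection with transaction costs, can be written down directly as a small nonlinear program. The sketch below solves it with a general-purpose solver rather than the Lasserre SDP relaxations used in the paper, and the return, covariance, cost and risk-aversion figures are invented.

```python
import numpy as np
from scipy.optimize import minimize

mu = np.array([0.08, 0.12, 0.10])                   # expected returns (assumed)
Sigma = np.array([[0.10, 0.02, 0.01],
                  [0.02, 0.12, 0.03],
                  [0.01, 0.03, 0.09]])              # covariance matrix (assumed)
w0 = np.array([1 / 3, 1 / 3, 1 / 3])                # current holdings
kappa, lam = 0.01, 4.0                              # transaction cost rate, risk aversion

def objective(w):
    risk = w @ Sigma @ w
    trading_cost = kappa * np.sum(np.abs(w - w0))
    return lam * risk - mu @ w + trading_cost       # minimise risk - return + transaction costs

cons = ({"type": "eq", "fun": lambda w: np.sum(w) - 1.0},)
res = minimize(objective, w0, bounds=[(0.0, 1.0)] * 3, constraints=cons, method="SLSQP")
print(np.round(res.x, 3), round(float(objective(res.x)), 4))
```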
Improved Cost-Base Design of Water Distribution Networks using Genetic Algorithm
NASA Astrophysics Data System (ADS)
Moradzadeh Azar, Foad; Abghari, Hirad; Taghi Alami, Mohammad; Weijs, Steven
2010-05-01
Population growth and the progressive extension of urbanization in different parts of Iran cause an increasing demand for primary needs. Water is the most important natural need for human life, and providing it requires the design and construction of water distribution networks, which impose enormous costs on the country's budget. Any reduction in these costs enables more of society to be served at the least cost. Municipal councils therefore need to maximize the benefits, or minimize the expenditures, of their investments, and the engineering design depends on cost optimization techniques. This paper presents optimization models based on a genetic algorithm (GA) to find the minimum design cost of Mahabad City's (north-west Iran) water distribution network. By designing two models and comparing the resulting costs, the abilities of the GA were determined: the GA-based model could find optimum pipe diameters that reduce the design cost of the network. Results show that water distribution network design using the genetic algorithm could reduce project costs by at least 7% in comparison with the classic model. Keywords: Genetic Algorithm, Optimum Design of Water Distribution Network, Mahabad City, Iran.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crawford, Alasdair; Thomsen, Edwin; Reed, David
2016-04-20
A chemistry-agnostic cost performance model is described for a nonaqueous flow battery. The model predicts flow battery performance by estimating the active reaction zone thickness at each electrode as a function of current density, state of charge, and flow rate, using measured data for electrode kinetics, electrolyte conductivity, and electrode-specific surface area. Validation of the model is conducted using data from a 4 kW stack at various current densities and flow rates. This model is used to estimate the performance of a nonaqueous flow battery with electrode and electrolyte properties taken from the literature. The optimized cost for this system is estimated for various power and energy levels using component costs provided by vendors. The model allows optimization of design parameters such as electrode thickness, area, and flow path design, and operating parameters such as power density, flow rate, and operating SOC range for various application duty cycles. A parametric analysis is done to identify the components and electrode/electrolyte properties with the highest impact on system cost for various application durations. A pathway to $100 kWh-1 for the storage system is identified.
Optimal control for a tuberculosis model with undetected cases in Cameroon
NASA Astrophysics Data System (ADS)
Moualeu, D. P.; Weiser, M.; Ehrig, R.; Deuflhard, P.
2015-03-01
This paper considers the optimal control of tuberculosis through education, diagnosis campaign and chemoprophylaxis of latently infected. A mathematical model which includes important components such as undiagnosed infectious, diagnosed infectious, latently infected and lost-sight infectious is formulated. The model combines a frequency dependent and a density dependent force of infection for TB transmission. Through optimal control theory and numerical simulations, a cost-effective balance of two different intervention methods is obtained. Seeking to minimize the amount of money the government spends when tuberculosis remain endemic in the Cameroonian population, Pontryagin's maximum principle is used to characterize the optimal control. The optimality system is derived and solved numerically using the forward-backward sweep method (FBSM). Results provide a framework for designing cost-effective strategies for diseases with multiple intervention methods. It comes out that combining chemoprophylaxis and education, the burden of TB can be reduced by 80% in 10 years.
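The forward-backward sweep itself is simple enough to show on a toy problem. The sketch below applies FBSM to a single-state control problem (minimise infection burden plus quadratic control effort for a logistic-type epidemic), not to the full TB model of the paper: the state is integrated forward, the adjoint backward, and the control is updated from the Pontryagin optimality condition until convergence. All parameter values are placeholders.

```python
import numpy as np

T, N = 5.0, 500
t = np.linspace(0.0, T, N + 1); dt = T / N
r, A, B, umax, x0 = 0.6, 1.0, 2.0, 0.9, 0.05     # toy parameters
# Problem: minimise integral of A*x + B*u^2, subject to x' = r*x*(1-x) - u*x, 0 <= u <= umax.

u = np.zeros(N + 1)                               # initial control guess
for _ in range(200):
    # Forward sweep of the state equation.
    x = np.empty(N + 1); x[0] = x0
    for k in range(N):
        x[k + 1] = x[k] + dt * (r * x[k] * (1 - x[k]) - u[k] * x[k])

    # Backward sweep of the adjoint: lam' = -(A + lam*(r - 2*r*x - u)), lam(T) = 0.
    lam = np.empty(N + 1); lam[-1] = 0.0
    for k in range(N, 0, -1):
        lam[k - 1] = lam[k] + dt * (A + lam[k] * (r - 2 * r * x[k] - u[k]))

    # Control update from dH/du = 0: u = lam*x/(2B), clipped to [0, umax], with relaxation.
    u_next = 0.5 * (u + np.clip(lam * x / (2 * B), 0.0, umax))
    if np.max(np.abs(u_next - u)) < 1e-6:
        u = u_next
        break
    u = u_next

cost = float(np.sum(A * x + B * u ** 2) * dt)
print("objective:", round(cost, 4), "final state:", round(float(x[-1]), 4))
```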
Pricing strategy in a dual-channel and remanufacturing supply chain system
NASA Astrophysics Data System (ADS)
Jiang, Chengzhi; Xu, Feng; Sheng, Zhaohan
2010-07-01
This article addresses the pricing strategy problems in a supply chain system where the manufacturer sells original products and remanufactured products via indirect retailer channels and direct Internet channels. Due to the complexity of that system, agent technologies that provide a new way for analysing complex systems are used for modelling. Meanwhile, in order to reduce the computational load of searching procedure for optimal prices and profits, a learning search algorithm is designed and implemented within the multi-agent supply chain model. The simulation results show that the proposed model can find out optimal prices of original products and remanufactured products in both channels, which lead to optimal profits of the manufacturer and the retailer. It is also found that the optimal profits are increased by introducing direct channel and remanufacturing. Furthermore, the effect of customer preference, direct channel cost and remanufactured unit cost on optimal prices and profits are examined.
Ranked set sampling: cost and optimal set size.
Nahhas, Ramzi W; Wolfe, Douglas A; Chen, Haiying
2002-12-01
McIntyre (1952, Australian Journal of Agricultural Research 3, 385-390) introduced ranked set sampling (RSS) as a method for improving estimation of a population mean in settings where sampling and ranking of units from the population are inexpensive when compared with actual measurement of the units. Two of the major factors in the usefulness of RSS are the set size and the relative costs of the various operations of sampling, ranking, and measurement. In this article, we consider ranking error models and cost models that enable us to assess the effect of different cost structures on the optimal set size for RSS. For reasonable cost structures, we find that the optimal RSS set sizes are generally larger than had been anticipated previously. These results will provide a useful tool for determining whether RSS is likely to lead to an improvement over simple random sampling in a given setting and, if so, what RSS set size is best to use in this case.
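The cost/set-size trade-off can also be explored by simulation: for each candidate set size, compare the variance of the RSS estimator of the mean with that of simple random sampling at equal total cost, under an assumed ratio of sampling-plus-ranking cost to measurement cost. The sketch below assumes perfect ranking and a standard normal population; both assumptions and all cost figures are placeholders, not the ranking-error or cost models of the article.

```python
import numpy as np

rng = np.random.default_rng(1)
C_RANK, C_MEAS, BUDGET, REPS = 1.0, 10.0, 600.0, 2000   # assumed unit costs, budget, replicates

def rss_mean(k, cycles):
    """One RSS estimate: in each cycle, for i = 1..k draw k units and measure the i-th smallest."""
    meas = []
    for _ in range(cycles):
        for i in range(k):
            meas.append(np.sort(rng.normal(size=k))[i])   # perfect ranking assumed
    return np.mean(meas)

n_srs = int(BUDGET // (C_RANK + C_MEAS))                  # SRS units affordable on the same budget
var_srs = np.var([np.mean(rng.normal(size=n_srs)) for _ in range(REPS)])

for k in range(2, 7):
    cycles = int(BUDGET // (k * k * C_RANK + k * C_MEAS))  # each cycle samples k^2, measures k units
    if cycles == 0:
        continue
    var_rss = np.var([rss_mean(k, cycles) for _ in range(REPS)])
    print(f"set size {k}: cycles={cycles}, var(RSS)/var(SRS) = {var_rss / var_srs:.3f}")
```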
A unified RANS–LES model: Computational development, accuracy and cost
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gopalan, Harish, E-mail: hgopalan@uwyo.edu; Heinz, Stefan, E-mail: heinz@uwyo.edu; Stöllinger, Michael K., E-mail: MStoell@uwyo.edu
2013-09-15
Large eddy simulation (LES) is computationally extremely expensive for the investigation of wall-bounded turbulent flows at high Reynolds numbers. A way to reduce the computational cost of LES by orders of magnitude is to combine LES equations with Reynolds-averaged Navier–Stokes (RANS) equations used in the near-wall region. A large variety of such hybrid RANS–LES methods are currently in use such that there is the question of which hybrid RANS-LES method represents the optimal approach. The properties of an optimal hybrid RANS–LES model are formulated here by taking reference to fundamental properties of fluid flow equations. It is shown that unified RANS–LES models derived from an underlying stochastic turbulence model have the properties of optimal hybrid RANS–LES models. The rest of the paper is organized in two parts. First, a priori and a posteriori analyses of channel flow data are used to find the optimal computational formulation of the theoretically derived unified RANS–LES model and to show that this computational model, which is referred to as linear unified model (LUM), does also have all the properties of an optimal hybrid RANS–LES model. Second, a posteriori analyses of channel flow data are used to study the accuracy and cost features of the LUM. The following conclusions are obtained. (i) Compared to RANS, which require evidence for their predictions, the LUM has the significant advantage that the quality of predictions is relatively independent of the RANS model applied. (ii) Compared to LES, the significant advantage of the LUM is a cost reduction of high-Reynolds number simulations by a factor of 0.07 Re^0.46. For coarse grids, the LUM has a significant accuracy advantage over corresponding LES. (iii) Compared to other usually applied hybrid RANS–LES models, it is shown that the LUM provides significantly improved predictions.
Dynamic optimal strategies in transboundary pollution game under learning by doing
NASA Astrophysics Data System (ADS)
Chang, Shuhua; Qin, Weihua; Wang, Xinyu
2018-01-01
In this paper, we present a transboundary pollution game, in which emission permits trading and pollution abatement costs under learning by doing are considered. In this model, the abatement cost mainly depends on the level of pollution abatement and the experience of using pollution abatement technology. We use optimal control theory to investigate the optimal emission paths and the optimal pollution abatement strategies under cooperative and noncooperative games, respectively. Additionally, the effects of parameters on the results have been examined.
Yang, Guoxiang; Best, Elly P H
2015-09-15
Best management practices (BMPs) can be used effectively to reduce nutrient loads transported from non-point sources to receiving water bodies. However, methodologies of BMP selection and placement in a cost-effective way are needed to assist watershed management planners and stakeholders. We developed a novel modeling-optimization framework that can be used to find cost-effective solutions of BMP placement to attain nutrient load reduction targets. This was accomplished by integrating a GIS-based BMP siting method, a WQM-TMDL-N modeling approach to estimate total nitrogen (TN) loading, and a multi-objective optimization algorithm. Wetland restoration and buffer strip implementation were the two BMP categories used to explore the performance of this framework, both differing greatly in complexity of spatial analysis for site identification. Minimizing TN load and BMP cost were the two objective functions for the optimization process. The performance of this framework was demonstrated in the Tippecanoe River watershed, Indiana, USA. Optimized scenario-based load reduction indicated that the wetland subset selected by the minimum scenario had the greatest N removal efficiency. Buffer strips were more effective for load removal than wetlands. The optimized solutions provided a range of trade-offs between the two objective functions for both BMPs. This framework can be expanded conveniently to a regional scale because the NHDPlus catchment serves as its spatial computational unit. The present study demonstrated the potential of this framework to find cost-effective solutions to meet a water quality target, such as a 20% TN load reduction, under different conditions. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Vijayashree, M.; Uthayakumar, R.
2017-09-01
Lead time is one of the major factors that affect planning at every stage of a supply chain system. In this paper, we study a continuous review inventory model in which ordering cost reductions depend on lead time. The study addresses a two-echelon supply chain problem consisting of a single vendor and a single buyer. Its main contribution is that the integrated total cost of the vendor-buyer system is analyzed by adopting two different types of lead-time-dependent ordering cost reduction, linear and logarithmic. In both cases, we develop effective solution procedures for finding the optimal solution and give illustrative numerical examples. The solution procedure determines the optimal order quantity, ordering cost, lead time and number of deliveries from the vendor to the buyer in one production run, so that the integrated total cost is minimized; ordering cost reduction is the central feature of the proposed model. The mathematical model is solved analytically by minimizing the integrated total cost, and the numerical examples are solved using Matlab. A sensitivity analysis with respect to the major system parameters is included, and the results are illustrated through numerical examples and graphics; a computational flowchart is provided for each model. For each case, an algorithmic procedure for finding the optimal solution is developed. The results reveal that the proposed integrated inventory model is well suited to supply chain manufacturing systems.
Akhtar, Mahmuda; Hannan, M A; Begum, R A; Basri, Hassan; Scavino, Edgar
2017-03-01
Waste collection is an important part of waste management that involves different issues, including environmental, economic, and social, among others. Waste collection optimization can reduce the waste collection budget and environmental emissions by reducing the collection route distance. This paper presents a modified Backtracking Search Algorithm (BSA) in capacitated vehicle routing problem (CVRP) models with the smart bin concept to find the best optimized waste collection route solutions. The objective function minimizes the sum of the waste collection route distances. The study introduces the concept of the threshold waste level (TWL) of waste bins to reduce the number of bins to be emptied by finding an optimal range, thus minimizing the distance. A scheduling model is also introduced to compare the feasibility of the proposed model with that of the conventional collection system in terms of travel distance, collected waste, fuel consumption, fuel cost, efficiency and CO 2 emission. The optimal TWL was found to be between 70% and 75% of the fill level of waste collection nodes and had the maximum tightness value for different problem cases. The obtained results for four days show a 36.80% distance reduction for 91.40% of the total waste collection, which eventually increases the average waste collection efficiency by 36.78% and reduces the fuel consumption, fuel cost and CO 2 emission by 50%, 47.77% and 44.68%, respectively. Thus, the proposed optimization model can be considered a viable tool for optimizing waste collection routes to reduce economic costs and environmental impacts. Copyright © 2017 Elsevier Ltd. All rights reserved.
Two-phase strategy of controlling motor coordination determined by task performance optimality.
Shimansky, Yury P; Rand, Miya K
2013-02-01
A quantitative model of optimal coordination between hand transport and grip aperture has been derived in our previous studies of reach-to-grasp movements without utilizing explicit knowledge of the optimality criterion or motor plant dynamics. The model's utility for experimental data analysis has been demonstrated. Here we show how to generalize this model for a broad class of reaching-type, goal-directed movements. The model allows for measuring the variability of motor coordination and studying its dependence on movement phase. The experimentally found characteristics of that dependence imply that execution noise is low and does not affect motor coordination significantly. From those characteristics it is inferred that the cost of neural computations required for information acquisition and processing is included in the criterion of task performance optimality as a function of precision demand for state estimation and decision making. The precision demand is an additional optimized control variable that regulates the amount of neurocomputational resources activated dynamically. It is shown that an optimal control strategy in this case comprises two different phases. During the initial phase, the cost of neural computations is significantly reduced at the expense of reducing the demand for their precision, which results in speed-accuracy tradeoff violation and significant inter-trial variability of motor coordination. During the final phase, neural computations and thus motor coordination are considerably more precise to reduce the cost of errors in making a contact with the target object. The generality of the optimal coordination model and the two-phase control strategy is illustrated on several diverse examples.
NASA Astrophysics Data System (ADS)
Shu, Hui; Zhou, Xideng
2014-05-01
The single-vendor single-buyer integrated production inventory system has been an object of study for a long time, but little is known about the effect of investing in reducing setup cost reduction and process-quality improvement for an integrated inventory system in which the products are sold with free minimal repair warranty. The purpose of this article is to minimise the integrated cost by optimising simultaneously the number of shipments and the shipment quantity, the setup cost, and the process quality. An efficient algorithm procedure is proposed for determining the optimal decision variables. A numerical example is presented to illustrate the results of the proposed models graphically. Sensitivity analysis of the model with respect to key parameters of the system is carried out. The paper shows that the proposed integrated model can result in significant savings in the integrated cost.
NASA Astrophysics Data System (ADS)
Niakan, F.; Vahdani, B.; Mohammadi, M.
2015-12-01
This article proposes a multi-objective mixed-integer model to optimize the location of hubs within a hub network design problem under uncertainty. The considered objectives include minimizing the maximum accumulated travel time, minimizing the total costs including transportation, fuel consumption and greenhouse emissions costs, and finally maximizing the minimum service reliability. In the proposed model, it is assumed that for connecting two nodes, there are several types of arc in which their capacity, transportation mode, travel time, and transportation and construction costs are different. Moreover, in this model, determining the capacity of the hubs is part of the decision-making procedure and balancing requirements are imposed on the network. To solve the model, a hybrid solution approach is utilized based on inexact programming, interval-valued fuzzy programming and rough interval programming. Furthermore, a hybrid multi-objective metaheuristic algorithm, namely multi-objective invasive weed optimization (MOIWO), is developed for the given problem. Finally, various computational experiments are carried out to assess the proposed model and solution approaches.
Zilinskas, Julius; Lančinskas, Algirdas; Guarracino, Mario Rosario
2014-01-01
In this paper we propose some mathematical models to plan a Next Generation Sequencing experiment to detect rare mutations in pools of patients. A mathematical optimization problem is formulated for optimal pooling, with respect to minimization of the experiment cost. Then, two different strategies to replicate patients in pools are proposed, which have the advantage to decrease the overall costs. Finally, a multi-objective optimization formulation is proposed, where the trade-off between the probability to detect a mutation and overall costs is taken into account. The proposed solutions are devised in pursuance of the following advantages: (i) the solution guarantees mutations are detectable in the experimental setting, and (ii) the cost of the NGS experiment and its biological validation using Sanger sequencing is minimized. Simulations show replicating pools can decrease overall experimental cost, thus making pooling an interesting option.
An interprovincial cooperative game model for air pollution control in China.
Xue, Jian; Zhao, Laijun; Fan, Longzhen; Qian, Ying
2015-07-01
The noncooperative air pollution reduction model (NCRM) that is currently adopted in China to manage air pollution reduction of each individual province has inherent drawbacks. In this paper, we propose a cooperative air pollution reduction game model (CRM) that consists of two parts: (1) an optimization model that calculates the optimal pollution reduction quantity for each participating province to meet the joint pollution reduction goal; and (2) a model that distribute the economic benefit of the cooperation (i.e., pollution reduction cost saving) among the provinces in the cooperation based on the Shapley value method. We applied the CRM to the case of SO2 reduction in the Beijing-Tianjin-Hebei region in China. The results, based on the data from 2003-2009, show that cooperation helps lower the overall SO2 pollution reduction cost from 4.58% to 11.29%. Distributed across the participating provinces, such a cost saving from interprovincial cooperation brings significant benefits to each local government and stimulates them for further cooperation in pollution reduction. Finally, sensitivity analysis is performed using the year 2009 data to test the parameters' effects on the pollution reduction cost savings. China is increasingly facing unprecedented pressure for immediate air pollution control. The current air pollution reduction policy does not allow cooperation and is less efficient. In this paper we developed a cooperative air pollution reduction game model that consists of two parts: (1) an optimization model that calculates the optimal pollution reduction quantity for each participating province to meet the joint pollution reduction goal; and (2) a model that distributes the cooperation gains (i.e., cost reduction) among the provinces in the cooperation based on the Shapley value method. The empirical case shows that such a model can help improve efficiency in air pollution reduction. The result of the model can serve as a reference for Chinese government pollution reduction policy design.
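The Shapley-value step distributes the cooperative cost saving by averaging each province's marginal contribution over all join orders. A direct implementation for a three-player coalition is shown below; the coalition saving figures are invented and merely need to be consistent with the grand-coalition saving, so they are not the study's values.

```python
from itertools import permutations

players = ["Beijing", "Tianjin", "Hebei"]

# Hypothetical pollution-reduction cost savings (million yuan) for every coalition.
savings = {
    frozenset(): 0.0,
    frozenset({"Beijing"}): 0.0, frozenset({"Tianjin"}): 0.0, frozenset({"Hebei"}): 0.0,
    frozenset({"Beijing", "Tianjin"}): 120.0,
    frozenset({"Beijing", "Hebei"}): 200.0,
    frozenset({"Tianjin", "Hebei"}): 150.0,
    frozenset({"Beijing", "Tianjin", "Hebei"}): 320.0,
}

def shapley(players, value):
    """Average marginal contribution of each player over all orderings."""
    phi = {p: 0.0 for p in players}
    perms = list(permutations(players))
    for order in perms:
        coalition = frozenset()
        for p in order:
            phi[p] += value[coalition | {p}] - value[coalition]
            coalition = coalition | {p}
    return {p: v / len(perms) for p, v in phi.items()}

print(shapley(players, savings))   # allocations sum to the grand-coalition saving of 320
```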
Wu, Ling; Liu, Xiang-Nan; Zhou, Bo-Tian; Liu, Chuan-Hao; Li, Lu-Feng
2012-12-01
This study analyzed the sensitivities of three vegetation biochemical parameters [chlorophyll content (Cab), leaf water content (Cw), and leaf area index (LAI)] to the changes of canopy reflectance, with the effects of each parameter on the wavelength regions of canopy reflectance considered, and selected three vegetation indices as the optimization comparison targets of cost function. Then, the Cab, Cw, and LAI were estimated, based on the particle swarm optimization algorithm and PROSPECT + SAIL model. The results showed that retrieval efficiency with vegetation indices as the optimization comparison targets of cost function was better than that with all spectral reflectance. The correlation coefficients (R2) between the measured and estimated values of Cab, Cw, and LAI were 90.8%, 95.7%, and 99.7%, and the root mean square errors of Cab, Cw, and LAI were 4.73 microg x cm(-2), 0.001 g x cm(-2), and 0.08, respectively. It was suggested that to adopt vegetation indices as the optimization comparison targets of cost function could effectively improve the efficiency and precision of the retrieval of biochemical parameters based on PROSPECT + SAIL model.
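The retrieval loop pairs a canopy radiative transfer model with particle swarm optimization. Its structure is sketched below with a trivial analytic stand-in for the PROSPECT + SAIL forward model and a cost function based on the mismatch of three synthetic "vegetation indices"; the parameter ranges, swarm settings and forward mapping are all assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(3)

def forward_indices(params):
    """Stand-in for PROSPECT + SAIL: map (Cab, Cw, LAI) to three synthetic vegetation indices."""
    cab, cw, lai = params
    return np.array([0.8 - 0.004 * cab, 1.2 * cw + 0.01 * lai, 0.9 * (1 - np.exp(-0.5 * lai))])

true_params = np.array([45.0, 0.015, 3.5])
observed = forward_indices(true_params)
lo, hi = np.array([20.0, 0.005, 0.5]), np.array([80.0, 0.03, 7.0])   # assumed search ranges

def cost(p):
    return np.sum((forward_indices(p) - observed) ** 2)

# Standard global-best PSO.
n, iters, w, c1, c2 = 30, 200, 0.7, 1.5, 1.5
pos = rng.uniform(lo, hi, size=(n, 3))
vel = np.zeros((n, 3))
pbest, pbest_cost = pos.copy(), np.array([cost(p) for p in pos])
gbest = pbest[np.argmin(pbest_cost)]

for _ in range(iters):
    r1, r2 = rng.random((n, 3)), rng.random((n, 3))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    costs = np.array([cost(p) for p in pos])
    improved = costs < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
    gbest = pbest[np.argmin(pbest_cost)]

print("retrieved (Cab, Cw, LAI):", np.round(gbest, 4))
```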
Modelling Concentrating Solar Power with Thermal Energy Storage for Integration Studies: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hummon, M.; Denholm, P.; Jorgenson, J.
2013-10-01
Concentrating solar power with thermal energy storage (CSP-TES) can provide multiple benefits to the grid, including low marginal cost energy and the ability to levelize load, provide operating reserves, and provide firm capacity. It is challenging to properly value the integration of CSP because of the complicated nature of this technology. Unlike completely dispatchable fossil sources, CSP is a limited energy resource, depending on the hourly and daily supply of solar energy. To optimize the use of this limited energy, CSP-TES must be implemented in a production cost model with multiple decision variables for the operation of the CSP-TES plant. We develop and implement a CSP-TES plant in a production cost model that accurately characterizes the three main components of the plant: solar field, storage tank, and power block. We show the effect of various modelling simplifications on the value of CSP, including: scheduled versus optimized dispatch from the storage tank and energy-only operation versus co-optimization with ancillary services.
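The "optimized dispatch" alternative amounts, in its simplest energy-only form, to a small linear program per day: given hourly solar input to the storage tank and hourly energy prices, choose the power-block output each hour to maximise revenue subject to storage balance, tank capacity and power-block limits. The sketch below is a stripped-down version with invented data and perfect efficiency, not the production cost model described above; ancillary-service co-optimization would add further decision variables.

```python
import numpy as np
from scipy.optimize import linprog

T = 24
price = 30 + 25 * np.exp(-((np.arange(T) - 18) ** 2) / 8.0)                 # $/MWh, evening peak (assumed)
solar = np.maximum(0.0, 120 * np.sin(np.pi * (np.arange(T) - 6) / 12.0))    # MWh delivered to storage each hour
P_MAX, S_MAX, S0 = 80.0, 400.0, 50.0                                        # power block, tank size, initial storage

# Variables x = [g_0..g_23, s_0..s_23]; maximise sum(price*g)  <=>  minimise -price.g
c = np.concatenate([-price, np.zeros(T)])

# Storage balance: s_t - s_{t-1} + g_t = solar_t, with s_{-1} = S0.
A_eq = np.zeros((T, 2 * T)); b_eq = solar.copy()
for t in range(T):
    A_eq[t, t] = 1.0                # g_t
    A_eq[t, T + t] = 1.0            # s_t
    if t > 0:
        A_eq[t, T + t - 1] = -1.0   # -s_{t-1}
b_eq[0] += S0

bounds = [(0.0, P_MAX)] * T + [(0.0, S_MAX)] * T
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print("daily revenue: $", round(-res.fun))
print("dispatch (MWh):", np.round(res.x[:T], 1))
```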
Bradshaw, Jonathan L; Luthy, Richard G
2017-10-17
Infrastructure systems that use stormwater and recycled water to augment groundwater recharge through spreading basins represent cost-effective opportunities to diversify urban water supplies. However, technical questions remain about how these types of managed aquifer recharge systems should be designed; furthermore, existing planning tools are insufficient for performing robust design comparisons. Addressing this need, we present a model for identifying the best-case design and operation schedule for systems that deliver recycled water to underutilized stormwater spreading basins. Resulting systems are optimal with respect to life cycle costs and water deliveries. Through a case study of Los Angeles, California, we illustrate how delivering recycled water to spreading basins could be optimally implemented. Results illustrate trade-offs between centralized and decentralized configurations. For example, while a centralized Hyperion system could deliver more recycled water to the Hansen Spreading Grounds, this system incurs approximately twice the conveyance cost of a decentralized Tillman system (mean of 44% vs 22% of unit life cycle costs). Compared to existing methods, our model allows for more comprehensive and precise analyses of cost, water volume, and energy trade-offs among different design scenarios. This model can inform decisions about spreading basin operation policies and the development of new water supplies.
Modeling the trade-off between diet costs and methane emissions: A goal programming approach.
Moraes, L E; Fadel, J G; Castillo, A R; Casper, D P; Tricarico, J M; Kebreab, E
2015-08-01
Enteric methane emission is a major greenhouse gas from livestock production systems worldwide. Dietary manipulation may be an effective emission-reduction tool; however, the associated costs may preclude its use as a mitigation strategy. Several studies have identified dietary manipulation strategies for the mitigation of emissions, but studies examining the costs of reducing methane by manipulating diets are scarce. Furthermore, the trade-off between increase in dietary costs and reduction in methane emissions has only been determined for a limited number of production scenarios. The objective of this study was to develop an optimization framework for the joint minimization of dietary costs and methane emissions based on the identification of a set of feasible solutions for various levels of trade-off between emissions and costs. Such a set of solutions was created by the specification of a systematic grid of goal programming weights, enabling the decision maker to choose the solution that achieves the desired trade-off level. Moreover, the model enables the calculation of emission-mitigation costs imputing a trading value for methane emissions. Emission imputed costs can be used in emission-unit trading schemes, such as cap-and-trade policy designs. An application of the model using data from lactating cows from dairies in the California Central Valley is presented to illustrate the use of model-generated results in the identification of optimal diets when reducing emissions. The optimization framework is flexible and can be adapted to jointly minimize diet costs and other potential environmental impacts (e.g., nitrogen excretion). It is also flexible so that dietary costs, feed nutrient composition, and animal nutrient requirements can be altered to accommodate various production systems. Copyright © 2015 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
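A minimal sketch of the weight-sweeping idea, assuming an invented feed library: each weight in the grid trades off normalized diet cost against normalized methane emissions, and a linear program selects the ration. Feed costs, nutrient contents, emission factors, and requirements below are placeholders, not the study's data.

```python
# Hedged sketch of goal-programming-style weight sweeping for diet cost vs. methane.
import numpy as np
from scipy.optimize import linprog

feeds = ["corn", "alfalfa", "soy meal", "grass hay"]
cost = np.array([0.22, 0.18, 0.35, 0.12])       # $/kg DM (assumed)
methane = np.array([14.0, 21.0, 12.0, 25.0])    # g CH4 per kg DM (assumed)
energy = np.array([8.2, 5.8, 8.5, 5.2])         # MJ NEl per kg DM (assumed)
protein = np.array([0.09, 0.19, 0.49, 0.10])    # kg CP per kg DM (assumed)

req_energy, req_protein, dm_intake = 120.0, 3.0, 22.0   # daily requirements (assumed)

# x[i] = kg DM of feed i per day; total DM fixed, energy and protein minima enforced.
A_eq = np.ones((1, len(feeds))); b_eq = [dm_intake]
A_ub = -np.vstack([energy, protein]); b_ub = [-req_energy, -req_protein]

for w in np.linspace(0.0, 1.0, 5):
    # Normalise the two goals so the weight sweep is meaningful.
    obj = w * cost / cost.max() + (1 - w) * methane / methane.max()
    res = linprog(obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * len(feeds), method="highs")
    x = res.x
    print(f"w={w:.2f}  cost=${cost @ x:6.2f}/d  CH4={methane @ x:6.0f} g/d")
```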
Assessment of regional management strategies for controlling seawater intrusion
Reichard, E.G.; Johnson, T.A.
2005-01-01
Simulation-optimization methods, applied with adequate sensitivity tests, can provide useful quantitative guidance for controlling seawater intrusion. This is demonstrated in an application to the West Coast Basin of coastal Los Angeles that considers two management options for improving hydraulic control of seawater intrusion: increased injection into barrier wells and in lieu delivery of surface water to replace current pumpage. For the base-case optimization analysis, assuming constant groundwater demand, in lieu delivery was determined to be most cost effective. Reduced-cost information from the optimization provided guidance for prioritizing locations for in lieu delivery. Model sensitivity to a suite of hydrologic, economic, and policy factors was tested. Raising the imposed average water-level constraint at the hydraulic-control locations resulted in nonlinear increases in cost. Systematic varying of the relative costs of injection and in lieu water yielded a trade-off curve between relative costs and injection/in lieu amounts. Changing the assumed future scenario to one of increasing pumpage in the adjacent Central Basin caused a small increase in the computed costs of seawater intrusion control. Changing the assumed boundary condition representing interaction with an adjacent basin did not affect the optimization results. Reducing the assumed hydraulic conductivity of the main productive aquifer resulted in a large increase in the model-computed cost. Journal of Water Resources Planning and Management © ASCE.
Manju, Md Abu; Candel, Math J J M; Berger, Martijn P F
2014-07-10
In this paper, the optimal sample sizes at the cluster and person levels for each of two treatment arms are obtained for cluster randomized trials where the cost-effectiveness of treatments on a continuous scale is studied. The optimal sample sizes maximize the efficiency or power for a given budget or minimize the budget for a given efficiency or power. Optimal sample sizes require information on the intra-cluster correlations (ICCs) for effects and costs, the correlations between costs and effects at individual and cluster levels, the ratio of the variance of effects translated into costs to the variance of the costs (the variance ratio), sampling and measuring costs, and the budget. When planning a study, information on the model parameters is usually not available. To overcome this local optimality problem, the current paper also presents maximin sample sizes. The maximin sample sizes turn out to be rather robust against misspecifying the correlation between costs and effects at the cluster and individual levels but may lose much efficiency when misspecifying the variance ratio. The robustness of the maximin sample sizes against misspecifying the ICCs depends on the variance ratio. The maximin sample sizes are robust under misspecification of the ICC for costs for realistic values of the variance ratio greater than one but not robust under misspecification of the ICC for effects. Finally, we show how to calculate optimal or maximin sample sizes that yield sufficient power for a test on the cost-effectiveness of an intervention.
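As a simplified illustration of budget-constrained design optimization, assuming the standard design effect 1 + (n-1)ICC for the treatment effect only (the paper's cost-effectiveness variance structure is richer), a grid search picks the cluster size and number of clusters per arm that maximize power within a budget. All costs, the ICC, and the effect size below are invented.

```python
# Hedged sketch: choose clusters per arm (k) and cluster size (n) to maximize power
# for a two-arm cluster randomized trial under a budget, effect side only.
from math import sqrt
from scipy.stats import norm

icc, sigma, delta = 0.05, 1.0, 0.3            # ICC, outcome SD, effect size (assumed)
c_cluster, c_person, budget = 500.0, 40.0, 60000.0
z_a = norm.ppf(1 - 0.05 / 2)                  # two-sided alpha = 0.05

best = None
for n in range(2, 101):                       # persons per cluster
    # Largest affordable k per arm: 2 * k * (c_cluster + n * c_person) <= budget.
    k = int(budget // (2 * (c_cluster + n * c_person)))
    if k < 2:
        continue
    var_diff = 2 * sigma**2 * (1 + (n - 1) * icc) / (k * n)   # design effect
    power = norm.cdf(delta / sqrt(var_diff) - z_a)
    if best is None or power > best[0]:
        best = (power, k, n)

power, k, n = best
print(f"best design in this sketch: k={k} clusters/arm, n={n} per cluster, power={power:.3f}")
```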
Topography-based Flood Planning and Optimization Capability Development Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Judi, David R.; Tasseff, Byron A.; Bent, Russell W.
2014-02-26
Globally, water-related disasters are among the most frequent and costly natural hazards. Flooding inflicts catastrophic damage on critical infrastructure and population, resulting in substantial economic and social costs. NISAC is developing LeveeSim, a suite of nonlinear and network optimization models, to predict optimal barrier placement to protect critical regions and infrastructure during flood events. LeveeSim currently includes a high-performance flood model to simulate overland flow, as well as a network optimization model to predict optimal barrier placement during a flood event. The LeveeSim suite models the effects of flooding in predefined regions. By manipulating a domain’s underlying topography, developers altered flood propagation to reduce detrimental effects in areas of interest. This numerical altering of a domain’s topography is analogous to building levees, placing sandbags, etc. To induce optimal changes in topography, NISAC used a novel application of an optimization algorithm to minimize flooding effects in regions of interest. To develop LeveeSim, NISAC constructed and coupled hydrodynamic and optimization algorithms. NISAC first implemented its existing flood modeling software to use massively parallel graphics processing units (GPUs), which allowed for the simulation of larger domains and longer timescales. NISAC then implemented a network optimization model to predict optimal barrier placement based on output from flood simulations. As proof of concept, NISAC developed five simple test scenarios, and optimized topographic solutions were compared with intuitive solutions. Finally, as an early validation example, barrier placement was optimized to protect an arbitrary region in a simulation of the historic Taum Sauk dam breach.
NASA Astrophysics Data System (ADS)
Khamidullin, R. I.
2018-05-01
The paper is devoted to the milestones in developing an optimal mathematical model for a business process related to cost estimate documentation compiled during construction and reconstruction of oil and gas facilities. It describes the study and analysis of fundamental issues in the petroleum industry caused by economic instability and deterioration of a business strategy. Business process management is presented as business process modeling aimed at improving the studied business process, namely the main optimization criteria and recommendations for improving the above-mentioned business model.
Improved Evolutionary Programming with Various Crossover Techniques for Optimal Power Flow Problem
NASA Astrophysics Data System (ADS)
Tangpatiphan, Kritsana; Yokoyama, Akihiko
This paper presents an Improved Evolutionary Programming (IEP) for solving the Optimal Power Flow (OPF) problem, which is considered a non-linear, non-smooth, and multimodal optimization problem in power system operation. The total generator fuel cost is regarded as an objective function to be minimized. The proposed method is an Evolutionary Programming (EP)-based algorithm making use of various crossover techniques normally applied in Real Coded Genetic Algorithms (RCGA). The effectiveness of the proposed approach is investigated on the IEEE 30-bus system with three different types of fuel cost functions, namely the quadratic cost curve, the piecewise quadratic cost curve, and the quadratic cost curve superimposed with a sine component. These three cost curves represent the generator fuel cost functions with a simplified model and more accurate models of a combined-cycle generating unit and a thermal unit with valve-point loading effect, respectively. The OPF solutions by the proposed method and Pure Evolutionary Programming (PEP) are observed and compared. The simulation results indicate that IEP requires less computing time than PEP with better solutions in some cases. Moreover, the influences of important IEP parameters on the OPF solution are described in detail.
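A hedged sketch of evolutionary programming augmented with an RCGA-style arithmetic crossover, applied to a small economic dispatch whose fuel cost includes a valve-point (sine) term. This is an illustrative stand-in, not the IEEE 30-bus OPF; all unit data and algorithm settings are assumed.

```python
# Hedged sketch: EP with arithmetic crossover on a 3-unit valve-point dispatch.
import numpy as np

rng = np.random.default_rng(0)
a = np.array([500.0, 400.0, 200.0]); b = np.array([5.3, 5.5, 5.8])
c = np.array([0.004, 0.006, 0.009]); e = np.array([30.0, 25.0, 20.0]); f = np.array([0.03, 0.04, 0.05])
p_min = np.array([100.0, 100.0, 50.0]); p_max = np.array([600.0, 400.0, 200.0])
demand = 850.0

def cost(p):
    fuel = a + b * p + c * p**2 + np.abs(e * np.sin(f * (p_min - p)))
    return fuel.sum() + 1e3 * abs(p.sum() - demand)   # penalty for power balance

pop = rng.uniform(p_min, p_max, size=(40, 3))
for gen in range(300):
    fit = np.array([cost(p) for p in pop])
    # EP mutation: Gaussian step scaled by relative fitness.
    scale = 0.05 * (p_max - p_min) * (fit / fit.max())[:, None]
    offspring = np.clip(pop + rng.normal(0, 1, pop.shape) * scale, p_min, p_max)
    # Arithmetic (RCGA-style) crossover between random parent pairs.
    i, j = rng.integers(0, len(pop), 2 * len(pop)).reshape(2, -1)
    lam = rng.random((len(pop), 1))
    crossed = np.clip(lam * pop[i] + (1 - lam) * pop[j], p_min, p_max)
    # Keep the best individuals from parents + offspring + crossed.
    allpop = np.vstack([pop, offspring, crossed])
    allfit = np.array([cost(p) for p in allpop])
    pop = allpop[np.argsort(allfit)[:40]]

best = pop[0]
print("dispatch (MW):", np.round(best, 1), " total cost:", round(cost(best), 1))
```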
Central Plant Optimization for Waste Energy Reduction (CPOWER). ESTCP Cost and Performance Report
2016-12-01
[Fragmentary report excerpt; only partial content is recoverable: solar radiation data in the weather dataset did not appear reliable for the location and was not used in the regression models, and additional factors (e.g., solar insolation) may be needed to obtain a better model; a truncated item concerns inputs to the optimizer; recoverable summary figures: location North Carolina, energy consumption cost savings $443,698.00, analysis type FEMP, present value (PV) of total savings $215,698.00, base date April 1.]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pope, G.A.; Sepehrnoori, K.
1995-12-31
The objective of this research is to develop cost-effective surfactant flooding technology by using simulation studies to evaluate and optimize alternative design strategies taking into account reservoir characteristics, process chemistry, and process design options such as horizontal wells. Task 1 is the development of an improved numerical method for our simulator that will enable us to solve a wider class of these difficult simulation problems accurately and affordably. Task 2 is the application of this simulator to the optimization of surfactant flooding to reduce its risk and cost. In this quarter, we have continued working on Task 2 to optimize surfactant flooding design and have incorporated economic analysis into the optimization process. An economic model was developed using a spreadsheet and the discounted cash flow (DCF) method of economic analysis. The model was designed specifically for a domestic onshore surfactant flood and has been used to economically evaluate previous work that used a technical approach to optimization. The DCF model outputs common economic decision making criteria, such as net present value (NPV), internal rate of return (IRR), and payback period.
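A minimal sketch of the DCF calculations named above (NPV, IRR, payback), with invented cash flows standing in for a surfactant-flood project; IRR is found by bisection rather than a financial library.

```python
# Hedged sketch of a discounted-cash-flow screen: NPV, IRR, and payback period.
def npv(rate, cashflows):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-6):
    # Bisection on NPV(rate) = 0; assumes a sign change on [lo, hi].
    for _ in range(200):
        mid = (lo + hi) / 2
        if npv(lo, cashflows) * npv(mid, cashflows) <= 0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2

def payback(cashflows):
    cum = 0.0
    for t, cf in enumerate(cashflows):
        cum += cf
        if cum >= 0:
            return t
    return None

# Year 0: chemicals + injection capex; years 1-6: incremental oil revenue net of opex (assumed).
cashflows = [-12_000_000, 1_500_000, 4_000_000, 5_500_000, 4_500_000, 3_000_000, 1_500_000]
print("NPV @10%:", round(npv(0.10, cashflows)))
print("IRR:", round(irr(cashflows), 4))
print("Payback (years):", payback(cashflows))
```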
Economic and environmental optimization of waste treatment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Münster, M.; Ravn, H.; Hedegaard, K.
2015-04-15
Highlights: • Optimizing waste treatment by incorporating LCA methodology. • Applying different objectives (minimizing costs or GHG emissions). • Prioritizing multiple objectives given different weights. • Optimum depends on objective and assumed displaced electricity production. - Abstract: This article presents the new systems engineering optimization model, OptiWaste, which incorporates a life cycle assessment (LCA) methodology and captures important characteristics of waste management systems. As part of the optimization, the model identifies the most attractive waste management options. The model renders it possible to apply different optimization objectives such as minimizing costs or greenhouse gas emissions or to prioritize several objectives given different weights. A simple illustrative case is analysed, covering alternative treatments of one tonne of residual household waste: incineration of the full amount or sorting out organic waste for biogas production for either combined heat and power generation or as fuel in vehicles. The case study illustrates that the optimal solution depends on the objective and assumptions regarding the background system – illustrated with different assumptions regarding displaced electricity production. The article shows that it is feasible to combine LCA methodology with optimization. Furthermore, it highlights the need for including the integrated waste and energy system into the model.
Optimizing Storage and Renewable Energy Systems with REopt
DOE Office of Scientific and Technical Information (OSTI.GOV)
Elgqvist, Emma M.; Anderson, Katherine H.; Cutler, Dylan S.
Under the right conditions, behind the meter (BTM) storage combined with renewable energy (RE) technologies can provide both cost savings and resiliency. Storage economics depend not only on technology costs and avoided utility rates, but also on how the technology is operated. REopt, a model developed at NREL, can be used to determine the optimal size and dispatch strategy for BTM or off-grid applications. This poster gives an overview of three applications of REopt: Optimizing BTM Storage and RE to Extend Probability of Surviving Outage, Optimizing Off-Grid Energy System Operation, and Optimizing Residential BTM Solar 'Plus'.
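One of the listed applications, extending the probability of surviving an outage, can be illustrated with a small Monte Carlo sketch: random outage start times are drawn and the battery plus an assumed PV profile either carries the critical load or fails. Sizes, profiles, and efficiencies are invented; REopt itself solves a full optimization rather than this simple check.

```python
# Hedged sketch: Monte Carlo estimate of the probability a PV + battery system
# carries an illustrative critical load through an 8-hour outage.
import numpy as np

rng = np.random.default_rng(1)
hours = 24
pv = np.array([0, 0, 0, 0, 0, 1, 3, 6, 9, 11, 12, 12, 11, 10, 8, 5, 2, 0, 0, 0, 0, 0, 0, 0], float)  # kW, assumed
load = np.full(hours, 4.0)            # critical load, kW (assumed)
batt_kwh, batt_kw, eff = 20.0, 5.0, 0.9

def survives(start, duration=8):
    soc = batt_kwh                    # assume the battery is full at outage start
    for h in range(start, start + duration):
        net = load[h % hours] - pv[h % hours]
        if net <= 0:
            soc = min(batt_kwh, soc - net * eff)   # charge surplus PV
        else:
            draw = min(net, batt_kw)
            if draw < net:            # battery power rating too small to cover the load
                return False
            soc -= draw / eff
            if soc < 0:
                return False
    return True

starts = rng.integers(0, hours, 5000)
prob = np.mean([survives(s) for s in starts])
print(f"probability of surviving an 8-hour outage: {prob:.2f}")
```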
NASA Astrophysics Data System (ADS)
Hou, Liqiang; Cai, Yuanli; Liu, Jin; Hou, Chongyuan
2016-04-01
A variable fidelity robust optimization method for pulsed laser orbital debris removal (LODR) under uncertainty is proposed. Dempster-Shafer theory of evidence (DST), which merges interval-based and probabilistic uncertainty modeling, is used in the robust optimization. The robust optimization method optimizes the performance while at the same time maximizing its belief value. A population-based multi-objective optimization (MOO) algorithm, based on a steepest-descent-like strategy with proper orthogonal decomposition (POD), is used to search for robust Pareto solutions. Analytical and numerical lifetime predictors are used to evaluate the debris lifetime after the laser pulses. Trust-region-based fidelity management is designed to reduce the computational cost caused by the expensive model. When the solutions fall into the trust region, the analytical model is used to reduce the computational cost. The proposed robust optimization method is first tested on a set of standard problems and then applied to the removal of Iridium 33 with pulsed lasers. It will be shown that the proposed approach can identify the most robust solutions with minimum lifetime under uncertainty.
NASA Astrophysics Data System (ADS)
Kim, U.; Parker, J.; Borden, R. C.
2014-12-01
In-situ chemical oxidation (ISCO) has been applied at many dense non-aqueous phase liquid (DNAPL) contaminated sites. A stirred reactor-type model was developed that considers DNAPL dissolution using a field-scale mass transfer function, instantaneous reaction of oxidant with aqueous and adsorbed contaminant and with readily oxidizable natural oxygen demand ("fast NOD"), and second-order kinetic reactions with "slow NOD." DNAPL dissolution enhancement as a function of oxidant concentration and inhibition due to manganese dioxide precipitation during permanganate injection are included in the model. The DNAPL source area is divided into multiple treatment zones with different areas, depths, and contaminant masses based on site characterization data. The performance model is coupled with a cost module that involves a set of unit costs representing specific fixed and operating costs. Monitoring of groundwater and/or soil concentrations in each treatment zone is employed to assess ISCO performance and make real-time decisions on oxidant reinjection or ISCO termination. Key ISCO design variables include the oxidant concentration to be injected, time to begin performance monitoring, groundwater and/or soil contaminant concentrations to trigger reinjection or terminate ISCO, number of monitoring wells or geoprobe locations per treatment zone, number of samples per sampling event and location, and monitoring frequency. Design variables for each treatment zone may be optimized to minimize expected cost over a set of Monte Carlo simulations that consider uncertainty in site parameters. The model is incorporated in the Stochastic Cost Optimization Toolkit (SCOToolkit) program, which couples the ISCO model with a dissolved plume transport model and with modules for other remediation strategies. An example problem is presented that illustrates design tradeoffs required to deal with characterization and monitoring uncertainty. Monitoring soil concentration changes during ISCO was found to be important to avoid decision errors associated with slow rebound of groundwater concentrations.
Robust optimisation-based microgrid scheduling with islanding constraints
Liu, Guodong; Starke, Michael; Xiao, Bailu; ...
2017-02-17
This paper proposes a robust optimization-based optimal scheduling model for microgrid operation considering constraints of islanding capability. Our objective is to minimize the total operation cost, including generation cost and spinning reserve cost of local resources as well as purchasing cost of energy from the main grid. In order to ensure the resiliency of a microgrid and improve the reliability of the local electricity supply, the microgrid is required to maintain enough spinning reserve (both up and down) to meet local demand and accommodate local renewable generation when the supply of power from the main grid is interrupted suddenly, i.e., when the microgrid transitions from grid-connected into islanded mode. Prevailing operational uncertainties in renewable energy resources and load are considered and captured using a robust optimization method. With a proper robustness level, the solution of the proposed scheduling model ensures successful islanding of the microgrid with minimum load curtailment and guarantees robustness against all possible realizations of the modeled operational uncertainties. Numerical simulations on a microgrid consisting of a wind turbine, a PV panel, a fuel cell, a micro-turbine, a diesel generator and a battery demonstrate the effectiveness of the proposed scheduling model.
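A hedged sketch of the islanding-reserve idea: if the tie line trips, local units must hold enough up-reserve to replace the scheduled import plus a worst-case deviation of net load inside a box uncertainty set scaled by a robustness budget. The function and numbers are illustrative, not the paper's full robust scheduling model.

```python
# Hedged sketch: up-spinning reserve needed for successful islanding under box uncertainty.
def required_up_reserve(grid_import, load_dev, ren_dev, gamma=1.0):
    """Reserve needed if the tie trips: cover the lost import plus the worst-case
    increase of (load - renewables), scaled by the robustness budget gamma in [0, 1]."""
    worst_net_increase = gamma * (load_dev + ren_dev)
    return max(0.0, grid_import + worst_net_increase)

# Hour-by-hour check of an illustrative schedule (kW).
schedule = [
    dict(grid_import=120, load_dev=30, ren_dev=60),
    dict(grid_import=60,  load_dev=25, ren_dev=80),
]
for t, s in enumerate(schedule):
    r = required_up_reserve(**s, gamma=0.8)
    print(f"hour {t}: local units must hold >= {r:.0f} kW of up-reserve")
```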
Multi-objective group scheduling optimization integrated with preventive maintenance
NASA Astrophysics Data System (ADS)
Liao, Wenzhu; Zhang, Xiufang; Jiang, Min
2017-11-01
This article proposes a single-machine-based integration model to meet the requirements of production scheduling and preventive maintenance in group production. To describe the production for identical/similar and different jobs, this integrated model considers the learning and forgetting effects. Based on machine degradation, the deterioration effect is also considered. Moreover, perfect maintenance and minimal repair are adopted in this integrated model. A multi-objective formulation minimizing total completion time and maintenance cost is adopted to meet the dual requirements of delivery date and cost. Finally, a genetic algorithm is developed to solve this optimization model, and the computation results demonstrate that this integrated model is effective and reliable.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Durfee, Justin David; Frazier, Christopher Rawls; Bandlow, Alisa
This document describes the final software design of the Contingency Contractor Optimization Tool - Prototype. Its purpose is to provide the overall architecture of the software and the logic behind this architecture. Documentation for the individual classes is provided in the application Javadoc. The Contingency Contractor Optimization project is intended to address Department of Defense mandates by delivering a centralized strategic planning tool that allows senior decision makers to quickly and accurately assess the impacts, risks, and mitigation strategies associated with utilizing contract support. The Contingency Contractor Optimization Tool - Prototype was developed in Phase 3 of the OSD ATL Contingency Contractor Optimization project to support strategic planning for contingency contractors. The planning tool uses a model to optimize the Total Force mix by minimizing the combined total costs for selected mission scenarios. The model optimizes the match of personnel types (military, DoD civilian, and contractors) and capabilities to meet mission requirements as effectively as possible, based on risk, cost, and other requirements.
Optimization study on multiple train formation scheme of urban rail transit
NASA Astrophysics Data System (ADS)
Xia, Xiaomei; Ding, Yong; Wen, Xin
2018-05-01
The new organization method, represented by the mixed operation of multi-marshalling trains, can adapt to the characteristics of the uneven distribution of passenger flow, but research on this aspect remains limited. This paper introduced the passenger sharing rate and congestion penalty coefficient with different train formations. On this basis, this paper established an optimization model with the minimum passenger cost and operation cost as objectives, and operation frequency and passenger demand as constraints. The ideal point method is used to solve this model. Compared with the fixed marshalling operation model, this scheme reduces the overall cost by 9.24% and 4.43%, respectively. This result not only validates the model but also illustrates the advantages of the multiple train formations scheme.
Capacity planning for batch and perfusion bioprocesses across multiple biopharmaceutical facilities.
Siganporia, Cyrus C; Ghosh, Soumitra; Daszkowski, Thomas; Papageorgiou, Lazaros G; Farid, Suzanne S
2014-01-01
Production planning for biopharmaceutical portfolios becomes more complex when products switch between fed-batch and continuous perfusion culture processes. This article describes the development of a discrete-time mixed integer linear programming (MILP) model to optimize capacity plans for multiple biopharmaceutical products, with either batch or perfusion bioprocesses, across multiple facilities to meet quarterly demands. The model comprised specific features to account for products with fed-batch or perfusion culture processes such as sequence-dependent changeover times, continuous culture constraints, and decoupled upstream and downstream operations that permit independent scheduling of each. Strategic inventory levels were accounted for by applying cost penalties when they were not met. A rolling time horizon methodology was utilized in conjunction with the MILP model and was shown to obtain solutions with greater optimality in less computational time than the full-scale model. The model was applied to an industrial case study to illustrate how the framework aids decisions regarding outsourcing capacity to third party manufacturers or building new facilities. The impact of variations on key parameters such as demand or titres on the optimal production plans and costs was captured. The analysis identified the critical ratio of in-house to contract manufacturing organization (CMO) manufacturing costs that led the optimization results to favor building a future facility over using a CMO. The tool predicted that if titres were higher than expected then the optimal solution would allocate more production to in-house facilities, where manufacturing costs were lower. Utilization graphs indicated when capacity expansion should be considered. © 2014 The Authors Biotechnology Progress published by Wiley Periodicals, Inc. on behalf of American Institute of Chemical Engineers.
An effective and comprehensive model for optimal rehabilitation of separate sanitary sewer systems.
Diogo, António Freire; Barros, Luís Tiago; Santos, Joana; Temido, Jorge Santos
2018-01-15
In the field of rehabilitation of separate sanitary sewer systems, a large number of technical, environmental, and economic aspects are often relevant in the decision-making process, which may be modelled as a multi-objective optimization problem. Examples are those related with the operation and assessment of networks, optimization of structural, hydraulic, sanitary, and environmental performance, rehabilitation programmes, and execution works. In particular, the cost of investment, operation and maintenance needed to reduce or eliminate Infiltration from the underground water table and Inflows of storm water surface runoff (I/I) using rehabilitation techniques or related methods can be significantly lower than the cost of transporting and treating these flows throughout the lifespan of the systems or period studied. This paper presents a comprehensive I/I cost-benefit approach for rehabilitation that explicitly considers all elements of the systems and shows how the approximation is incorporated as an objective function in a general evolutionary multi-objective optimization model. It takes into account network performance and wastewater treatment costs, average values of several input variables, and rates that can reflect the adoption of different predictable or limiting scenarios. The approach can be used as a practical and fast tool to support decision-making in sewer network rehabilitation in any phase of a project. The fundamental aspects, modelling, implementation details and preliminary results of a two-objective optimization rehabilitation model using a genetic algorithm, with a second objective function related to the structural condition of the network and the service failure risk, are presented. The basic approach is applied to three real world cases studies of sanitary sewerage systems in Coimbra and the results show the simplicity, suitability, effectiveness, and usefulness of the approximation implemented and of the objective function proposed. Copyright © 2017 Elsevier B.V. All rights reserved.
Cárdenas, María Kathia; Mirelman, Andrew J; Galvin, Cooper J; Lazo-Porras, María; Pinto, Miguel; Miranda, J Jaime; Gilman, Robert H
2015-10-26
Diabetes mellitus is a public health challenge worldwide, and roughly 25% of patients with diabetes in developing countries will develop at least one foot ulcer during their lifetime. The gravest outcome of an ulcerated foot is amputation, leading to premature death and larger economic costs. This study aimed to estimate the economic costs of diabetic foot in high-risk patients in Peru in 2012 and to model the cost-effectiveness of a year-long preventive strategy for foot ulceration including: sub-optimal care (baseline), standard care as recommended by the International Diabetes Federation, and standard care plus daily self-monitoring of foot temperature. A decision tree model using a population prevalence-based approach was used to calculate the costs and the incremental cost-effectiveness ratio (ICER). Outcome measures were deaths and major amputations, uncertainty was tested with a one-way sensitivity analysis. The direct costs for prevention and management with sub-optimal care for high-risk diabetics is around US$74.5 million dollars in a single year, which decreases to US$71.8 million for standard care and increases to US$96.8 million for standard care plus temperature monitoring. The implementation of a standard care strategy would avert 791 deaths and is cost-saving in comparison to sub-optimal care. For standard care plus temperature monitoring compared to sub-optimal care the ICER rises to US$16,124 per death averted and averts 1,385 deaths. Diabetic foot complications are highly costly and largely preventable in Peru. The implementation of a standard care strategy would lead to net savings and avert deaths over a one-year period. More intensive prevention strategies such as incorporating temperature monitoring may also be cost-effective.
An Extended EPQ-Based Problem with a Discontinuous Delivery Policy, Scrap Rate, and Random Breakdown
Chiu, Singa Wang; Lin, Hong-Dar; Song, Ming-Syuan; Chen, Hsin-Mei; Chiu, Yuan-Shyi P
2015-01-01
In real supply chain environments, the discontinuous multidelivery policy is often used when finished products need to be transported to retailers or customers outside the production units. To address this real-life production-shipment situation, this study extends recent work using an economic production quantity- (EPQ-) based inventory model with a continuous inventory issuing policy, defective items, and machine breakdown by incorporating a multiple delivery policy into the model to replace the continuous policy and investigates the effect on the optimal run time decision for this specific EPQ model. Next, we further expand the scope of the problem to combine the retailer's stock holding cost into our study. This enhanced EPQ-based model can be used to reflect the situation found in contemporary manufacturing firms in which finished products are delivered to the producer's own retail stores and stocked there for sale. A second model is developed and studied. With the help of mathematical modeling and optimization techniques, the optimal run times that minimize the expected total system costs comprising costs incurred in production units, transportation, and retail stores are derived, for both models. Numerical examples are provided to demonstrate the applicability of our research results.
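For orientation, the classic EPQ trade-off underlying the run-time decision can be sketched numerically as below; the paper's models layer scrap, random breakdown, multi-delivery shipping, and retailer holding costs on top of this. Demand and cost rates are assumed.

```python
# Hedged sketch: numeric vs. closed-form optimum of the basic EPQ cost function.
from math import sqrt
from scipy.optimize import minimize_scalar

D, P = 4000.0, 10000.0        # demand and production rates (units/year), assumed
K, h = 450.0, 2.0             # setup cost per run, holding cost per unit-year, assumed

def annual_cost(Q):
    # Setup cost per year plus average-inventory holding cost for a finite production rate.
    return K * D / Q + 0.5 * h * Q * (1 - D / P)

res = minimize_scalar(annual_cost, bounds=(1.0, 100000.0), method="bounded")
q_closed = sqrt(2 * K * D / (h * (1 - D / P)))
print(f"numeric Q* = {res.x:.1f}, closed-form Q* = {q_closed:.1f}, cost = {annual_cost(res.x):.1f}")
print(f"optimal run time T* = {res.x / P:.3f} years")   # run time follows as Q*/P
```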
An approach to modeling and optimization of integrated renewable energy system (IRES)
NASA Astrophysics Data System (ADS)
Maheshwari, Zeel
The purpose of this study was to cost-optimize the electrical part of IRES (Integrated Renewable Energy Systems) using HOMER and to maximize the utilization of resources using MATLAB programming. IRES is an effective and viable strategy that can be employed to harness renewable energy resources to energize remote rural areas of developing countries. Resource-need matching, which is the basis for IRES, makes it possible to provide energy in an efficient and cost-effective manner. Modeling and optimization of IRES for a selected study area makes IRES more advantageous when compared to hybrid concepts. A remote rural area with a population of 700 in 120 households and 450 cattle is considered as an example for cost analysis and optimization. Mathematical models for key components of IRES such as biogas generator, hydropower generator, wind turbine, PV system and battery banks are developed. A discussion of the size of water reservoir required is also presented. Modeling of IRES on the basis of need-to-resource and resource-to-need matching is pursued to help in optimum use of resources for the needs. Fixed resources such as biogas and water are used in prioritized order whereas movable resources such as wind and solar can be used simultaneously for different priorities. IRES is cost optimized for electricity demand using the HOMER software developed by NREL (National Renewable Energy Laboratory). HOMER optimizes the configuration for electrical demand only and does not consider other demands such as biogas for cooking and water for domestic and irrigation purposes. Hence an optimization program based on the need-resource modeling of IRES is performed in MATLAB. Optimization of the utilization of resources for several needs is performed. Results obtained from MATLAB clearly show that the available resources can fulfill the demand of the rural areas. Introduction of IRES in rural communities has many socio-economic implications. It brings about improvement in living environment and community welfare by supplying the basic needs such as biogas for cooking, water for domestic and irrigation purposes and electrical energy for lighting, communication, cold storage, educational and small-scale industrial purposes.
Congestion Pricing for Aircraft Pushback Slot Allocation.
Liu, Lihua; Zhang, Yaping; Liu, Lan; Xing, Zhiwei
2017-01-01
In order to optimize aircraft pushback management during rush hour, aircraft pushback slot allocation based on congestion pricing is explored while considering monetary compensation based on the quality of the surface operations. First, the concept of the "external cost of surface congestion" is proposed, and a quantitative study on the external cost is performed. Then, an aircraft pushback slot allocation model for minimizing the total surface cost is established. An improved discrete differential evolution algorithm is also designed. Finally, a simulation is performed on Xinzheng International Airport using the proposed model. By comparing the pushback slot control strategy based on congestion pricing with other strategies, the advantages of the proposed model and algorithm are highlighted. In addition to reducing delays and optimizing the delay distribution, the model and algorithm are better suited for use in actual aircraft pushback management during rush hour. Further, it is also observed that they do not result in significant increases in the surface cost. These results confirm the effectiveness and suitability of the proposed model and algorithm.
NASA Astrophysics Data System (ADS)
Khannan, M. S. A.; Nafisah, L.; Palupi, D. L.
2018-03-01
Sari Warna Co. Ltd, a company engaged in the textile industry, is experiencing problems in the allocation and placement of goods in the warehouse. To date, the company has not implemented product-flow-type-based allocation and placement of its products, resulting in a high total material handling cost. Therefore, this study aimed to determine the allocation and placement of goods in the warehouse corresponding to product flow type with minimal total material handling cost. This quantitative research is based on storage and warehousing theory and uses the mathematical optimization model of Heragu (2005), aided by LINGO 11.0 software for solving the model. The resulting proportions of the distribution for each functional area are 0.0734 for the cross-docking area, 0.1894 for the reserve area, and 0.7372 for the forward area. The allocation of product flow type 1 is 5 products, the product flow type 2 is 9 products, the product flow type 3 is 2 products, and the product flow type 4 is 6 products. The optimal total material handling cost using this mathematical model is Rp43.079.510, compared with Rp49.869.728 under the company’s existing method. This saves Rp6.790.218 in total material handling cost. Thus, all of the products can be allocated in accordance with the product flow type with minimal total material handling cost.
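A minimal sketch of the allocation step, assuming invented throughputs and slot distances: assigning products to storage slots to minimize throughput-weighted travel is a linear assignment problem, solved here with the Hungarian algorithm as a simplified stand-in for the Heragu (2005) model run in LINGO.

```python
# Hedged sketch: assign products to slots to minimize total material handling cost.
import numpy as np
from scipy.optimize import linear_sum_assignment

throughput = np.array([120, 80, 60, 200, 40])        # pallet moves per month per product (assumed)
distance = np.array([10.0, 18.0, 25.0, 32.0, 40.0])  # meters from dock to each slot (assumed)
unit_cost = 0.05                                      # handling cost per pallet-meter (assumed)

# cost[i, j] = handling cost if product i is stored in slot j
cost = unit_cost * np.outer(throughput, distance)
rows, cols = linear_sum_assignment(cost)
for i, j in zip(rows, cols):
    print(f"product {i} -> slot {j} (distance {distance[j]:.0f} m)")
print("total handling cost:", round(cost[rows, cols].sum(), 1))
```

As expected, the fastest-moving products end up in the slots closest to the dock.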
Optimization of hydrometric monitoring network in urban drainage systems using information theory.
Yazdi, J
2017-10-01
Regular and continuous monitoring of urban runoff in both quality and quantity aspects is of great importance for controlling and managing surface runoff. Due to the considerable costs of establishing new gauges, optimization of the monitoring network is essential. This research proposes an approach for site selection of new discharge stations in urban areas, based on entropy theory in conjunction with multi-objective optimization tools and numerical models. The modeling framework provides an optimal trade-off between the maximum possible information content and the minimum shared information among stations. This approach was applied to the main surface-water collection system in Tehran to determine new optimal monitoring points under the cost considerations. Experimental results on this drainage network show that the obtained cost-effective designs noticeably outperform the consulting engineers' proposal in terms of both information contents and shared information. The research also determined the highly frequent sites at the Pareto front which might be important for decision makers to give a priority for gauge installation on those locations of the network.
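A hedged sketch of the entropy-based selection idea: candidate sites are added greedily to maximize joint entropy (information content) while penalizing information shared with already-chosen sites. The flow series are synthetic; a real application would use simulated or observed discharge at candidate locations and include cost constraints.

```python
# Hedged sketch: greedy, entropy-based selection of new gauge sites.
import numpy as np

rng = np.random.default_rng(3)
n_sites, n_steps = 6, 2000
base = rng.gamma(2.0, 1.0, n_steps)                   # shared signal -> correlated sites
flows = np.array([0.7 * base + 0.3 * rng.gamma(2.0, 1.0, n_steps) for _ in range(n_sites)])

def discretize(x, bins=8):
    edges = np.quantile(x, np.linspace(0, 1, bins + 1)[1:-1])
    return np.digitize(x, edges)

sym = np.array([discretize(f) for f in flows])        # one discrete series per site

def joint_entropy(idx):
    cols = sym[list(idx)].T                           # shape (n_steps, len(idx))
    _, counts = np.unique(cols, axis=0, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

selected, budget = [], 3                              # suppose only 3 new gauges are affordable
for _ in range(budget):
    def score(s):
        h_joint = joint_entropy(selected + [s])
        shared = joint_entropy(selected) + joint_entropy([s]) - h_joint if selected else 0.0
        return h_joint - shared                       # information gained minus overlap
    best = max((s for s in range(n_sites) if s not in selected), key=score)
    selected.append(best)
print("selected gauge sites:", selected)
```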
Optimization of cascading failure on complex network based on NNIA
NASA Astrophysics Data System (ADS)
Zhu, Qian; Zhu, Zhiliang; Qi, Yi; Yu, Hai; Xu, Yanjie
2018-07-01
Recently, the robustness of networks under cascading failure has attracted extensive attention. Different from previous studies, we concentrate on how to improve the robustness of the networks from the perspective of intelligent optimization. We establish two multi-objective optimization models that comprehensively consider the operational cost of the edges in the networks and the robustness of the networks. The NNIA (Non-dominated Neighbor Immune Algorithm) is applied to solve the optimization models. We performed simulations on Barabási-Albert (BA) and Erdös-Rényi (ER) networks. In the solutions, we find the edges that can facilitate the propagation of cascading failure and the edges that can suppress the propagation of cascading failure. Based on these findings, optimal protection measures can be taken to weaken the damage caused by cascading failures. We also consider the practical feasibility of the edges' operational costs, so that a more practical choice can be made based on the operational cost. Our work will be helpful in the design of highly robust networks or improvement of the robustness of networks in the future.
Comparative analysis for various redox flow batteries chemistries using a cost performance model
NASA Astrophysics Data System (ADS)
Crawford, Alasdair; Viswanathan, Vilayanur; Stephenson, David; Wang, Wei; Thomsen, Edwin; Reed, David; Li, Bin; Balducci, Patrick; Kintner-Meyer, Michael; Sprenkle, Vincent
2015-10-01
The total energy storage system cost is determined by means of a robust performance-based cost model for multiple flow battery chemistries. Systems aspects such as shunt current losses, pumping losses and various flow patterns through electrodes are accounted for. The system cost minimizing objective function determines stack design by optimizing the state of charge operating range, along with current density and current-normalized flow. The model cost estimates are validated using 2-kW stack performance data for the same size electrodes and operating conditions. Using our validated tool, it has been demonstrated that an optimized all-vanadium system has an estimated system cost of < $350 kWh⁻¹ for a 4-h application. With an anticipated decrease in component costs facilitated by economies of scale from larger production volumes, coupled with performance improvements enabled by technology development, the system cost is expected to decrease to $160 kWh⁻¹ for a 4-h application, and to $100 kWh⁻¹ for a 10-h application. This tool has been shared with the redox flow battery community to enable cost estimation using their stack data and guide future direction.
Cost-effective cloud computing: a case study using the comparative genomics tool, roundup.
Kudtarkar, Parul; Deluca, Todd F; Fusaro, Vincent A; Tonellato, Peter J; Wall, Dennis P
2010-12-22
Comparative genomics resources, such as ortholog detection tools and repositories are rapidly increasing in scale and complexity. Cloud computing is an emerging technological paradigm that enables researchers to dynamically build a dedicated virtual cluster and may represent a valuable alternative for large computational tools in bioinformatics. In the present manuscript, we optimize the computation of a large-scale comparative genomics resource-Roundup-using cloud computing, describe the proper operating principles required to achieve computational efficiency on the cloud, and detail important procedures for improving cost-effectiveness to ensure maximal computation at minimal costs. Utilizing the comparative genomics tool, Roundup, as a case study, we computed orthologs among 902 fully sequenced genomes on Amazon's Elastic Compute Cloud. For managing the ortholog processes, we designed a strategy to deploy the web service, Elastic MapReduce, and maximize the use of the cloud while simultaneously minimizing costs. Specifically, we created a model to estimate cloud runtime based on the size and complexity of the genomes being compared that determines in advance the optimal order of the jobs to be submitted. We computed orthologous relationships for 245,323 genome-to-genome comparisons on Amazon's computing cloud, a computation that required just over 200 hours and cost $8,000 USD, at least 40% less than expected under a strategy in which genome comparisons were submitted to the cloud randomly with respect to runtime. Our cost savings projections were based on a model that not only demonstrates the optimal strategy for deploying RSD to the cloud, but also finds the optimal cluster size to minimize waste and maximize usage. Our cost-reduction model is readily adaptable for other comparative genomics tools and potentially of significant benefit to labs seeking to take advantage of the cloud as an alternative to local computing infrastructure.
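A minimal sketch of the runtime-model-plus-ordering idea: each comparison's runtime is predicted from an assumed size-based model, and jobs are placed longest-first onto a fixed pool of workers so instances finish at similar times and paid idle time is reduced. The coefficients, genome sizes, and hourly rate are invented, and the scheduling heuristic is a generic longest-processing-time stand-in, not Roundup's actual deployment logic.

```python
# Hedged sketch: predict job runtimes, order longest-first, pack onto cloud workers.
import heapq

genome_sizes = {"g1": 4.6, "g2": 12.1, "g3": 3.2, "g4": 120.0, "g5": 25.0}  # Mb, assumed

def predicted_hours(a, b, coeff=0.004):
    # Assumed model: runtime grows with the product of the two genome sizes.
    return coeff * genome_sizes[a] * genome_sizes[b]

pairs = [(a, b) for i, a in enumerate(genome_sizes) for b in list(genome_sizes)[i + 1:]]
jobs = sorted(pairs, key=lambda p: predicted_hours(*p), reverse=True)   # LPT order

workers = 3
heap = [(0.0, w) for w in range(workers)]       # (busy-until time, worker id)
heapq.heapify(heap)
for a, b in jobs:
    t, w = heapq.heappop(heap)                  # give the next job to the least-loaded worker
    heapq.heappush(heap, (t + predicted_hours(a, b), w))

busy_hours = sorted(t for t, _ in heap)
hourly_rate = 0.68                               # $/instance-hour, assumed
print("per-worker busy hours:", [round(t, 2) for t in busy_hours])
print("estimated cost if instances run until the last job finishes: $",
      round(max(busy_hours) * workers * hourly_rate, 2))
```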
An Optimization Principle for Deriving Nonequilibrium Statistical Models of Hamiltonian Dynamics
NASA Astrophysics Data System (ADS)
Turkington, Bruce
2013-08-01
A general method for deriving closed reduced models of Hamiltonian dynamical systems is developed using techniques from optimization and statistical estimation. Given a vector of resolved variables, selected to describe the macroscopic state of the system, a family of quasi-equilibrium probability densities on phase space corresponding to the resolved variables is employed as a statistical model, and the evolution of the mean resolved vector is estimated by optimizing over paths of these densities. Specifically, a cost function is constructed to quantify the lack-of-fit to the microscopic dynamics of any feasible path of densities from the statistical model; it is an ensemble-averaged, weighted, squared-norm of the residual that results from submitting the path of densities to the Liouville equation. The path that minimizes the time integral of the cost function determines the best-fit evolution of the mean resolved vector. The closed reduced equations satisfied by the optimal path are derived by Hamilton-Jacobi theory. When expressed in terms of the macroscopic variables, these equations have the generic structure of governing equations for nonequilibrium thermodynamics. In particular, the value function for the optimization principle coincides with the dissipation potential that defines the relation between thermodynamic forces and fluxes. The adjustable closure parameters in the best-fit reduced equations depend explicitly on the arbitrary weights that enter into the lack-of-fit cost function. Two particular model reductions are outlined to illustrate the general method. In each example the set of weights in the optimization principle contracts into a single effective closure parameter.
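In assumed notation (not taken verbatim from the paper), the lack-of-fit cost functional described above can be sketched as follows, with rho_{lambda(t)} the quasi-equilibrium density for the resolved vector lambda(t), H the Hamiltonian, W a weighting operator, and the angle brackets denoting the ensemble average:

```latex
R_t \;=\; \partial_t \rho_{\lambda(t)} \;-\; \{ H, \rho_{\lambda(t)} \},
\qquad
\Sigma[\lambda] \;=\; \tfrac{1}{2} \int_0^T
  \Big\langle \big\| W^{1/2} R_t \big\|^2 \Big\rangle \, dt .
```

The best-fit evolution of the mean resolved vector minimizes this functional over feasible paths; by Hamilton-Jacobi theory the value function of the optimization then plays the role of the dissipation potential in the resulting reduced equations.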
Modification Propagation in Complex Networks
NASA Astrophysics Data System (ADS)
Mouronte, Mary Luz; Vargas, María Luisa; Moyano, Luis Gregorio; Algarra, Francisco Javier García; Del Pozo, Luis Salvador
To keep up with rapidly changing conditions, business systems and their associated networks are growing increasingly intricate as never before. By doing this, network management and operation costs not only rise, but are difficult even to measure. This fact must be regarded as a major constraint to system optimization initiatives, as well as a setback to derived economic benefits. In this work we introduce a simple model in order to estimate the relative cost associated to modification propagation in complex architectures. Our model can be used to anticipate costs caused by network evolution, as well as for planning and evaluating future architecture development while providing benefit optimization.
NASA Astrophysics Data System (ADS)
Hanish Nithin, Anu; Omenzetter, Piotr
2017-04-01
Optimization of the life-cycle costs and reliability of offshore wind turbines (OWTs) is an area of immense interest due to the widespread increase in wind power generation across the world. Most of the existing studies have used structural reliability and the Bayesian pre-posterior analysis for optimization. This paper proposes an extension to the previous approaches in a framework for probabilistic optimization of the total life-cycle costs and reliability of OWTs by combining the elements of structural reliability/risk analysis (SRA) and Bayesian pre-posterior analysis with optimization through a genetic algorithm (GA). The SRA techniques are adopted to compute the probabilities of damage occurrence and failure associated with the deterioration model. The probabilities are used in the decision tree and are updated using the Bayesian analysis. The output of this framework would determine the optimal structural health monitoring and maintenance schedules to be implemented during the life span of OWTs while maintaining a trade-off between the life-cycle costs and risk of structural failure. Numerical illustrations with a generic deterioration model for one monitoring exercise in the life cycle of a system are demonstrated. Two case scenarios, namely building an initially expensive but robust structure versus a cheaper but more quickly deteriorating one, and adopting an expensive monitoring system, are presented to aid the decision-making process.
Optimal Micropatterns in 2D Transport Networks and Their Relation to Image Inpainting
NASA Astrophysics Data System (ADS)
Brancolini, Alessio; Rossmanith, Carolin; Wirth, Benedikt
2018-04-01
We consider two different variational models of transport networks: the so-called branched transport problem and the urban planning problem. Based on a novel relation to Mumford-Shah image inpainting and techniques developed in that field, we show for a two-dimensional situation that both highly non-convex network optimization tasks can be transformed into a convex variational problem, which may be very useful from analytical and numerical perspectives. As applications of the convex formulation, we use it to perform numerical simulations (to our knowledge this is the first numerical treatment of urban planning), and we prove a lower bound for the network cost that matches a known upper bound (in terms of how the cost scales in the model parameters) which helps better understand optimal networks and their minimal costs.
Staff Study on Cost and Training Effectiveness of Proposed Training Systems. TAEG Report 1.
ERIC Educational Resources Information Center
Naval Training Equipment Center, Orlando, FL. Training Analysis and Evaluation Group.
A study began the development and initial testing of a method for predicting cost and training effectiveness of proposed training programs. A prototype Training Effectiveness and Cost Effectiveness Prediction (TECEP) model was developed and tested. The model was a method for optimization of training media allocation on the basis of fixed training…
NASA Astrophysics Data System (ADS)
Aguilar, Susanna D.
As a cost-effective storage technology for renewable energy sources, electric vehicles (EVs) can be integrated into energy grids. Integration must be optimized to ascertain that renewable energy is available through storage when demand exists so that the cost of electricity is minimized. Optimization models can address economic risks associated with the EV supply chain, particularly the volatility in availability and cost of critical materials used in the manufacturing of EV motors and batteries. Supply chain risk can manifest as a shortage of storage, which can increase the price of electricity. We propose a micro- and macroeconomic framework for managing supply chain risk through utilization of a cost optimization model in combination with risk management strategies at the microeconomic and macroeconomic level. The study demonstrates how risk from the EV critical material supply chain affects manufacturers, smart grid performance, and energy markets qualitatively and quantitatively. Our results illustrate how risk in the EV supply chain affects EV availability and the cost of ancillary services, and how EV critical material supply chain risk can be mitigated through managerial strategies and policy.
NASA Astrophysics Data System (ADS)
Sun, Li; Wang, Deyu
2011-09-01
A new multi-level analysis method that introduces super-element modeling, derived from the multi-level analysis method first proposed by O. F. Hughes, is proposed in this paper to address the high time cost of adopting a rational-based optimal design method for ship structural design. Furthermore, the method was verified by its effective application in optimization of the mid-ship section of a container ship. A full 3-D FEM model of a ship, subjected to static and quasi-static loads, was used as the object of analysis for evaluating the structural performance of the mid-ship module, including static strength and buckling performance. Research results reveal that this new method could substantially reduce the computational cost of the rational-based optimization problem without decreasing its accuracy, which increases the feasibility and economic efficiency of using a rational-based optimal design method in ship structural design.
NASA Astrophysics Data System (ADS)
Handford, Matthew L.; Srinivasan, Manoj
2016-02-01
Robotic lower limb prostheses can improve the quality of life for amputees. Development of such devices, currently dominated by long prototyping periods, could be sped up by predictive simulations. In contrast to some amputee simulations which track experimentally determined non-amputee walking kinematics, here, we explicitly model the human-prosthesis interaction to produce a prediction of the user’s walking kinematics. We obtain simulations of an amputee using an ankle-foot prosthesis by simultaneously optimizing human movements and prosthesis actuation, minimizing a weighted sum of human metabolic and prosthesis costs. The resulting Pareto optimal solutions predict that increasing prosthesis energy cost, decreasing prosthesis mass, and allowing asymmetric gaits all decrease human metabolic rate for a given speed and alter human kinematics. The metabolic rates increase monotonically with speed. Remarkably, by performing an analogous optimization for a non-amputee human, we predict that an amputee walking with an appropriately optimized robotic prosthesis can have a lower metabolic cost - even lower than assuming that the non-amputee’s ankle torques are cost-free.
NASA Astrophysics Data System (ADS)
Ouyang, Qi; Lu, Wenxi; Hou, Zeyu; Zhang, Yu; Li, Shuai; Luo, Jiannan
2017-05-01
In this paper, a multi-algorithm genetically adaptive multi-objective (AMALGAM) method is proposed as a multi-objective optimization solver. It was implemented in the multi-objective optimization of a groundwater remediation design at sites contaminated by dense non-aqueous phase liquids. In this study, there were two objectives: minimization of the total remediation cost, and minimization of the remediation time. A non-dominated sorting genetic algorithm II (NSGA-II) was adopted to compare with the proposed method. For efficiency, the time-consuming surfactant-enhanced aquifer remediation simulation model was replaced by a surrogate model constructed by a multi-gene genetic programming (MGGP) technique. Similarly, two other surrogate modeling methods, support vector regression (SVR) and Kriging (KRG), were employed to make comparisons with MGGP. In addition, the surrogate-modeling uncertainty was incorporated in the optimization model by chance-constrained programming (CCP). The results showed that, for the problem considered in this study, (1) the solutions obtained by AMALGAM incurred less remediation cost and required less time than those of NSGA-II, indicating that AMALGAM outperformed NSGA-II. It was additionally shown that (2) the MGGP surrogate model was more accurate than SVR and KRG; and (3) the remediation cost and time increased with the confidence level, which can enable decision makers to make a suitable choice by considering the given budget, remediation time, and reliability.
Retrospective Cost Adaptive Control with Concurrent Closed-Loop Identification
NASA Astrophysics Data System (ADS)
Sobolic, Frantisek M.
Retrospective cost adaptive control (RCAC) is a discrete-time direct adaptive control algorithm for stabilization, command following, and disturbance rejection. RCAC is known to work on systems given minimal modeling information, namely the leading numerator coefficient and any nonminimum-phase (NMP) zeros of the plant transfer function. This information is normally needed a priori and is key in the development of the filter, also known as the target model, within the retrospective performance variable. A novel approach is developed to alleviate the need for prior modeling of both the leading coefficient of the plant transfer function and any NMP zeros. The extension to the RCAC algorithm is the concurrent optimization of both the target model and the controller coefficients. This concurrent optimization is a quadratic optimization problem in the target model and the controller coefficients separately. However, the problem is not convex as a joint function of both variables, and therefore nonconvex optimization methods are needed. Finally, insights within RCAC, including intercalated injection between the controller numerator and denominator, unveil how RCAC fits a specific closed-loop transfer function to the target model. We exploit this interpretation by investigating several closed-loop identification architectures in order to extract this information for use in the target model.
Simulation and Modeling Capability for Standard Modular Hydropower Technology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stewart, Kevin M.; Smith, Brennan T.; Witt, Adam M.
Grounded in the stakeholder-validated framework established in Oak Ridge National Laboratory’s SMH Exemplary Design Envelope Specification, this report on Simulation and Modeling Capability for Standard Modular Hydropower (SMH) Technology provides insight into the concepts, use cases, needs, gaps, and challenges associated with modeling and simulating SMH technologies. The SMH concept envisions a network of generation, passage, and foundation modules that achieve environmentally compatible, cost-optimized hydropower using standardization and modularity. The development of standardized modeling approaches and simulation techniques for SMH (as described in this report) will pave the way for reliable, cost-effective methods for technology evaluation, optimization, and verification.
Optimization of Location-Routing Problem for Cold Chain Logistics Considering Carbon Footprint.
Wang, Songyi; Tao, Fengming; Shi, Yuhe
2018-01-06
In order to solve the optimization problem of logistics distribution systems for fresh food, this paper takes a low-carbon, environmentally protective point of view based on the characteristics of perishable products and the overall optimization of the cold chain logistics distribution network. A green and low-carbon location-routing problem (LRP) model for cold chain logistics is developed with the minimum total cost, including carbon emission costs, as the objective function. A hybrid genetic algorithm with heuristic rules is designed to solve the model, and an example is used to verify the effectiveness of the algorithm. Furthermore, the simulation results obtained from a practical numerical example show the applicability of the model while providing green and environmentally friendly location-distribution schemes for cold chain logistics enterprises. Finally, carbon tax policies are introduced to analyze the impact of a carbon tax on total costs and carbon emissions, which shows that a carbon tax policy can effectively reduce carbon dioxide emissions in the cold chain logistics network.
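As a rough illustration of how carbon emission costs enter such an objective, a minimal location-routing cost function is sketched below; the notation (y_j for depot opening, x_ijk for routing arcs, e_ij for arc emissions, c_tax for the carbon tax rate) is hypothetical and far simpler than the paper's full cold-chain model, which also carries refrigeration and cargo-damage costs.

```latex
\min \;\; \sum_{j} F_j\, y_j
\;+\; \sum_{k}\sum_{(i,j)} c_{ij}\, x_{ijk}
\;+\; c_{\text{tax}} \sum_{k}\sum_{(i,j)} e_{ij}\, x_{ijk}
```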
Layer-switching cost and optimality in information spreading on multiplex networks
Min, Byungjoon; Gwak, Sang-Hwan; Lee, Nanoom; Goh, K. -I.
2016-01-01
We study a model of information spreading on multiplex networks, in which agents interact through multiple interaction channels (layers), say online vs. offline communication layers, subject to a layer-switching cost for transmissions across different interaction layers. The model is characterized by a layer-wise path-dependent transmissibility over a contact, determined dynamically by both the incoming and outgoing transmission layers. We formulate an analytical framework to deal with such path-dependent transmissibility and demonstrate the nontrivial interplay between multiplexity and spreading dynamics, including optimality. It is shown that the epidemic threshold and prevalence respond to the layer-switching cost non-monotonically and that the optimal conditions can change in abrupt non-analytic ways, depending also on the densities of network layers and the type of seed infections. Our results elucidate the essential role of multiplexity: its explicit consideration is crucial for realistic modeling and prediction of spreading phenomena on multiplex social networks in an era of ever-diversifying social interaction layers. PMID:26887527
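The layer-wise path-dependent transmissibility can be illustrated with a small Monte Carlo sketch. The snippet below is an independent-cascade toy version, not the authors' analytical framework: transmission within the layer a node was infected through succeeds with probability t_same, transmission that switches layers is penalized by a switching cost, and the ring-plus-random-matching duplex network is a made-up example.

```python
import random
from collections import deque

def simulate(adj_a, adj_b, t_same=0.4, switch_cost=0.5, seed_node=0, seed_layer="A"):
    """Independent-cascade spreading on a two-layer (multiplex) network.

    Transmission over a contact in the same layer the node was infected
    through succeeds with probability t_same; a transmission that switches
    layers succeeds with probability t_same * (1 - switch_cost).
    """
    t_switch = t_same * (1.0 - switch_cost)
    infected_layer = {seed_node: seed_layer}   # layer through which each node got infected
    queue = deque([seed_node])
    while queue:
        node = queue.popleft()
        in_layer = infected_layer[node]
        for out_layer, adj in (("A", adj_a), ("B", adj_b)):
            p = t_same if out_layer == in_layer else t_switch
            for nbr in adj[node]:
                if nbr not in infected_layer and random.random() < p:
                    infected_layer[nbr] = out_layer
                    queue.append(nbr)
    return len(infected_layer)

# Toy duplex network: a ring in layer A, a random perfect matching in layer B.
n = 200
adj_a = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
perm = list(range(n))
random.shuffle(perm)
adj_b = {i: [] for i in range(n)}
for i in range(0, n, 2):
    u, v = perm[i], perm[i + 1]
    adj_b[u].append(v)
    adj_b[v].append(u)

sizes = [simulate(adj_a, adj_b) for _ in range(200)]
print("mean outbreak size:", sum(sizes) / len(sizes))
```

Sweeping switch_cost between 0 and 1 in such a simulation is one simple way to explore how outbreak size responds to the layer-switching cost, although this toy version does not reproduce the full analytical results above.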
Cost optimization in low volume VLSI circuits
NASA Technical Reports Server (NTRS)
Cook, K. B., Jr.; Kerns, D. V., Jr.
1982-01-01
The relationship of integrated circuit (IC) cost to electronic system cost is developed using models for integrated circuit cost which are based on the design/fabrication approach. Emphasis is on understanding the relationship between cost and volume for custom circuits suitable for NASA applications. In this report, reliability is a major consideration in the models developed. Results are given for several typical IC designs using off-the-shelf, full-custom, and semi-custom ICs with single- and double-level metallization.
NASA Astrophysics Data System (ADS)
Palanivel, M.; Uthayakumar, R.
2015-07-01
This paper deals with an economic order quantity (EOQ) model for non-instantaneous deteriorating items with price and advertisement dependent demand pattern under the effect of inflation and time value of money over a finite planning horizon. In this model, shortages are allowed and partially backlogged. The backlogging rate is dependent on the waiting time for the next replenishment. This paper aids the retailer in minimising the total inventory cost by finding the optimal interval and the optimal order quantity. An algorithm is designed to find the optimum solution of the proposed model. Numerical examples are given to demonstrate the results. Also, the effect of changes in the different parameters on the optimal total cost is graphically presented and the implications are discussed in detail.
OPTIMIZATION OF INTEGRATED URBAN WET-WEATHER CONTROL STRATEGIES
An optimization method for urban wet weather control (WWC) strategies is presented. The developed optimization model can be used to determine the most cost-effective strategies for the combination of centralized storage-release systems and distributed on-site WWC alternatives. T...
HOMER: The hybrid optimization model for electric renewable
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lilienthal, P.; Flowers, L.; Rossmann, C.
1995-12-31
Hybrid renewable systems are often more cost-effective than grid extensions or isolated diesel generators for providing power to remote villages. There are a wide variety of hybrid systems being developed for village applications that have differing combinations of wind, photovoltaics, batteries, and diesel generators. Due to variations in loads and resources determining the most appropriate combination of these components for a particular village is a difficult modelling task. To address this design problem the National Renewable Energy Laboratory has developed the Hybrid Optimization Model for Electric Renewables (HOMER). Existing models are either too detailed for screening analysis or too simple for reliable estimation of performance. HOMER is a design optimization model that determines the configuration, dispatch, and load management strategy that minimizes life-cycle costs for a particular site and application. This paper describes the HOMER methodology and presents representative results.
NASA Astrophysics Data System (ADS)
Chen, Zhiming; Feng, Yuncheng
1988-08-01
This paper describes an algorithmic structure for combining simulation and optimization techniques in both theory and practice. Response surface methodology is used to optimize the decision variables in the simulation environment. Simulation-optimization software has been developed and successfully implemented, and its application to an aggregate production planning simulation-optimization model is reported. The model's objective is to minimize the production cost and to generate an optimal production plan and inventory control strategy for an aircraft factory.
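As a minimal sketch of the response-surface idea described above (not the aircraft-factory model itself), the snippet below fits a second-order response surface to noisy output from a stand-in simulator and reads the optimum off the fitted surface; the cost function and design points are invented for illustration.

```python
import numpy as np

def simulate_cost(x, rng):
    """Stand-in for an expensive production-planning simulation (hypothetical):
    returns a noisy cost for a scalar decision variable x, e.g. a production level."""
    true_cost = 4.0 + (x - 2.5) ** 2
    return true_cost + rng.normal(scale=0.3)

rng = np.random.default_rng(1)

# Run the simulator at a small set of design-of-experiments points.
design = np.linspace(0.0, 5.0, 9)
responses = np.array([simulate_cost(x, rng) for x in design])

# Fit a second-order response surface: cost(x) ~ b2*x^2 + b1*x + b0.
b2, b1, b0 = np.polyfit(design, responses, 2)

# For b2 > 0 the fitted surface is minimized analytically at x* = -b1 / (2*b2).
x_star = -b1 / (2.0 * b2)
print(f"estimated optimum near x = {x_star:.2f}, "
      f"fitted cost = {np.polyval([b2, b1, b0], x_star):.2f}")
```

In practice the fitted region would then be re-centered around the candidate optimum and the fit repeated, which is the sequential use of response surfaces the abstract refers to.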
A Rapid Aerodynamic Design Procedure Based on Artificial Neural Networks
NASA Technical Reports Server (NTRS)
Rai, Man Mohan
2001-01-01
An aerodynamic design procedure that uses neural networks to model the functional behavior of the objective function in design space has been developed. This method incorporates several improvements to an earlier method that employed a strategy called parameter-based partitioning of the design space in order to reduce the computational costs associated with design optimization. As with the earlier method, the current method uses a sequence of response surfaces to traverse the design space in search of the optimal solution. The new method yields significant reductions in computational costs by using composite response surfaces with better generalization capabilities and by exploiting synergies between the optimization method and the simulation codes used to generate the training data. These reductions in design optimization costs are demonstrated for a turbine airfoil design study where a generic shape is evolved into an optimal airfoil.
Optimization Under Uncertainty of Site-Specific Turbine Configurations
NASA Astrophysics Data System (ADS)
Quick, J.; Dykes, K.; Graf, P.; Zahle, F.
2016-09-01
Uncertainty affects many aspects of wind energy plant performance and cost. In this study, we explore opportunities for site-specific turbine configuration optimization that accounts for uncertainty in the wind resource. As a demonstration, a simple empirical model for wind plant cost of energy is used in an optimization under uncertainty to examine how different risk appetites affect the optimal selection of a turbine configuration for sites of different wind resource profiles. If there is unusually high uncertainty in the site wind resource, the optimal turbine configuration diverges from the deterministic case and a generally more conservative design is obtained with increasing risk aversion on the part of the designer.
NASA Astrophysics Data System (ADS)
Khalilpourazari, Soheyl; Khalilpourazary, Saman
2017-05-01
In this article a multi-objective mathematical model is developed to minimize total time and cost while maximizing the production rate and surface finish quality in the grinding process. The model aims to determine optimal values of the decision variables considering process constraints. A lexicographic weighted Tchebycheff approach is developed to obtain efficient Pareto-optimal solutions of the problem in both rough and finish grinding conditions. Utilizing a polyhedral branch-and-cut algorithm, the lexicographic weighted Tchebycheff formulation of the proposed multi-objective model is solved using GAMS software. The Pareto-optimal solutions provide a proper trade-off between the conflicting objective functions, which helps the decision maker to select the best values for the decision variables. Sensitivity analyses are performed to determine the effect of changes in the grinding parameters (grain size, grinding ratio, feed rate, labour cost per hour, workpiece length, wheel diameter, and downfeed) on each objective function value.
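For reference, the weighted Tchebycheff scalarization underlying this approach has the generic form below, where z* is the ideal (utopia) point and w_i are the weights; the lexicographic variant additionally breaks ties among minimizers, e.g. with an augmented sum term. The exact augmentation used in the article is not reproduced here.

```latex
\min_{x \in X} \;\; \max_{i = 1,\dots,m} \; w_i \left| f_i(x) - z_i^{*} \right|
```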
NASA Astrophysics Data System (ADS)
Sanaye, Sepehr; Katebi, Arash
2014-02-01
Energy, exergy, economic and environmental (4E) analysis and optimization of a hybrid solid oxide fuel cell and micro gas turbine (SOFC-MGT) system for use in combined generation of heat and power (CHP) is investigated in this paper. The hybrid system is modeled and the performance results are validated using available data in the literature. Then a multi-objective optimization approach based on a genetic algorithm is applied. Eight system design parameters are selected for the optimization procedure. System exergy efficiency and total cost rate (including capital or investment cost, operational cost and the penalty cost of environmental emissions) are the two objectives. The effects of fuel unit cost, capital investment and system power output on the optimum design parameters are also investigated. It is observed that the most sensitive and important design parameter in the hybrid system is the fuel cell current density, which has a significant effect on the balance between system cost and efficiency. The selected design point from the Pareto front of the optimization results indicates a total system exergy efficiency of 60.7%, an estimated electrical energy cost of 0.057 kW^-1 h^-1, and a payback period of about 6.3 years for the investment.
A pragmatic decision model for inventory management with heterogeneous suppliers
NASA Astrophysics Data System (ADS)
Nakandala, Dilupa; Lau, Henry; Zhang, Jingjing; Gunasekaran, Angappa
2018-05-01
For enterprises, it is imperative that the trade-off between the cost of inventory and its risk implications is managed in the most efficient manner. To explore this, we use the common example of a wholesaler operating in an environment where suppliers demonstrate heterogeneous reliability. The wholesaler places partial orders with dual suppliers and uses lateral transshipments. While supplier reliability is a key concern in inventory management, reliable suppliers are more expensive, and investment in strategic approaches that improve supplier performance carries a high cost. Here we consider the operational strategy of dual sourcing with reliable and unreliable suppliers and model the total inventory cost for the likely scenario in which the lead time of the unreliable supplier extends beyond the scheduling period. We then develop a Customized Integer Programming Optimization Model to determine the optimum size of partial orders with multiple suppliers. In addition to the objective of total cost optimization, this study takes into account the volatility of the cost associated with the uncertainty of an inventory system.
Energy-saving management modelling and optimization for lead-acid battery formation process
NASA Astrophysics Data System (ADS)
Wang, T.; Chen, Z.; Xu, J. Y.; Wang, F. Y.; Liu, H. M.
2017-11-01
In this paper, a typical lead-acid battery production process is introduced. Based on the formation process, an efficiency management method is proposed. An optimization model is established with the objective of minimizing the formation electricity cost in a single period. This optimization model considers several related constraints, together with two influencing factors: the transformation efficiency of the IGBT charge-and-discharge machine and the time-of-use electricity price. An example simulation is presented using a PSO algorithm to solve the mathematical model, and the proposed optimization strategy is shown to be effective and transferable for energy saving and efficiency optimization in battery production industries.
He, L; Huang, G H; Lu, H W
2010-04-15
Solving groundwater remediation optimization problems based on proxy simulators can usually yield optimal solutions differing from the "true" ones of the problem. This study presents a new stochastic optimization model under modeling uncertainty and parameter certainty (SOMUM) and the associated solution method for simultaneously addressing modeling uncertainty associated with simulator residuals and optimizing groundwater remediation processes. This is a new attempt different from the previous modeling efforts. The previous ones focused on addressing uncertainty in physical parameters (i.e. soil porosity) while this one aims to deal with uncertainty in mathematical simulator (arising from model residuals). Compared to the existing modeling approaches (i.e. only parameter uncertainty is considered), the model has the advantages of providing mean-variance analysis for contaminant concentrations, mitigating the effects of modeling uncertainties on optimal remediation strategies, offering confidence level of optimal remediation strategies to system designers, and reducing computational cost in optimization processes. 2009 Elsevier B.V. All rights reserved.
Accurate position estimation methods based on electrical impedance tomography measurements
NASA Astrophysics Data System (ADS)
Vergara, Samuel; Sbarbaro, Daniel; Johansen, T. A.
2017-08-01
Electrical impedance tomography (EIT) is a technology that estimates the electrical properties of a body or a cross section. Its main advantages are its non-invasiveness, low cost and operation free of radiation. The estimation of the conductivity field leads to low-resolution images compared with other technologies, and a high computational cost. However, in many applications the target information lies in a low intrinsic dimensionality of the conductivity field. This work addresses the estimation of this low-dimensional information, proposing optimization-based and data-driven approaches. The accuracy of the results obtained with these approaches depends on modelling and experimental conditions. Optimization approaches are sensitive to model discretization, the type of cost function and the search algorithm. Data-driven methods are sensitive to the assumed model structure and the data set used for parameter estimation. The system configuration and experimental conditions, such as the number of electrodes and the signal-to-noise ratio (SNR), also have an impact on the results. In order to illustrate the effects of all these factors, the position estimation of a circular anomaly is addressed. Optimization methods based on weighted error cost functions and derivative-free optimization algorithms provided the best results. Data-driven approaches based on linear models provided, in this case, good estimates, but the use of nonlinear models enhanced the estimation accuracy. The results obtained by optimization-based algorithms were less sensitive to experimental conditions, such as the number of electrodes and SNR, than the data-driven approaches. Position estimation mean squared errors under simulation and experimental conditions were more than twice as large for the optimization-based approaches as for the data-driven ones. The experimental position estimation mean squared error of the data-driven models using a 16-electrode setup was less than 0.05% of the tomograph radius value. These results demonstrate that the proposed approaches can estimate an object's position accurately based on EIT measurements if enough process information is available for training or modelling. Since they do not require complex calculations it is possible to use them in real-time applications without requiring high-performance computers.
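A minimal sketch of the optimization-based position estimation is given below, assuming a crude synthetic forward model (a distance-based response standing in for a real EIT solver) and a weighted squared-error cost minimized with the derivative-free Nelder-Mead method; the electrode count, noise level and weights are illustrative only.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical forward model: each electrode measurement responds to the
# anomaly's distance from that electrode (a crude stand-in for an EIT solver).
n_elec = 16
angles = 2 * np.pi * np.arange(n_elec) / n_elec
electrodes = np.c_[np.cos(angles), np.sin(angles)]   # unit-radius tank boundary

def forward(pos):
    d = np.linalg.norm(electrodes - pos, axis=1)
    return 1.0 / (0.1 + d)        # synthetic "measurements"

rng = np.random.default_rng(0)
true_pos = np.array([0.3, -0.2])
measured = forward(true_pos) + rng.normal(scale=0.01, size=n_elec)  # noisy data

weights = np.ones(n_elec)          # noisy channels could be down-weighted here
def cost(pos):
    r = forward(pos) - measured
    return np.sum(weights * r ** 2)

# Derivative-free search, in the spirit of the optimization-based approach above.
res = minimize(cost, x0=np.zeros(2), method="Nelder-Mead")
print("estimated anomaly position:", res.x)
```

A data-driven variant would instead fit a regression model mapping simulated measurement vectors to known positions, then evaluate it on the measured data.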
How to develop renewable power in China? A cost-effective perspective.
Cong, Rong-Gang; Shen, Shaochuan
2014-01-01
To address the problems of climate change and energy security, the Chinese government has striven to develop renewable power as an important alternative to conventional electricity. In this paper, the learning curve model is employed to describe the decreasing unit investment cost with accumulated installed capacity, and the technology diffusion model is used to analyze the potential of renewable power. Combining the investment cost, the technology potential, and scenario analysis of China's future social development, we develop the Renewable Power Optimization Model (RPOM) to analyze the optimal development paths of three sources of renewable power from 2009 to 2020 in a cost-effective way. Results show that (1) the optimal accumulated installed capacities of wind power, solar power, and biomass power will reach 169000, 20000, and 30000 MW in 2020; (2) the development of renewable power shows an intermittent feature; (3) the unit investment costs of wind power, solar power, and biomass power will be 4500, 11500, and 5700 Yuan/kW in 2020; (4) the discounting effect dominates the learning curve effect for solar and biomass power; (5) the rise of the on-grid ratio of renewable power will first promote the development of wind power and then solar power and biomass power.
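The one-factor learning curve referred to above is commonly written as follows; C_0, Q_0 and the learning exponent b are generic symbols, and the specific parameter values fitted in the paper are not reproduced here.

```latex
C(Q) \;=\; C_0 \left( \frac{Q}{Q_0} \right)^{-b},
\qquad \text{progress ratio} \;=\; 2^{-b},
```

so each doubling of cumulative installed capacity Q multiplies the unit investment cost C by the factor 2^{-b}.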
Reactive power planning under high penetration of wind energy using Benders decomposition
Xu, Yan; Wei, Yanli; Fang, Xin; ...
2015-11-05
This study addresses the optimal allocation of reactive power volt-ampere reactive (VAR) sources under the paradigm of high penetration of wind energy. Reactive power planning (RPP) in this particular condition involves a high level of uncertainty because of wind power characteristics. To properly model wind generation uncertainty, a multi-scenario framework optimal power flow that considers the voltage stability constraint under the worst wind scenario and transmission N-1 contingency is developed. The objective of RPP in this study is to minimise the total cost including the VAR investment cost and the expected generation cost. Therefore RPP under this condition is modelled as a two-stage stochastic programming problem to optimise the VAR location and size in one stage, then to minimise the fuel cost in the other stage, and eventually, to find the global optimal RPP results iteratively. Benders decomposition is used to solve this model with an upper level problem (master problem) for VAR allocation optimisation and a lower problem (sub-problem) for generation cost minimisation. The impact of the potential reactive power support from doubly-fed induction generators (DFIG) is also analysed. Lastly, case studies on the IEEE 14-bus and 118-bus systems are provided to verify the proposed method.
Seethapathi, Nidhi; Srinivasan, Manoj
2015-09-01
Humans do not generally walk at constant speed, except perhaps on a treadmill. Normal walking involves starting, stopping and changing speeds, in addition to roughly steady locomotion. Here, we measure the metabolic energy cost of walking when changing speed. Subjects (healthy adults) walked with oscillating speeds on a constant-speed treadmill, alternating between walking slower and faster than the treadmill belt, moving back and forth in the laboratory frame. The metabolic rate for oscillating-speed walking was significantly higher than that for constant-speed walking (6-20% cost increase for ±0.13-0.27 m s^-1 speed fluctuations). The metabolic rate increase was correlated with two models: a model based on kinetic energy fluctuations and an inverted pendulum walking model, optimized for oscillating-speed constraints. The cost of changing speeds may have behavioural implications: we predicted that the energy-optimal walking speed is lower for shorter distances. We measured preferred human walking speeds for different walking distances and found people preferred lower walking speeds for shorter distances as predicted. Further, analysing published daily walking-bout distributions, we estimate that the cost of changing speeds is 4-8% of daily walking energy budget. © 2015 The Author(s).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kistler, B.L.
DELSOL3 is a revised and updated version of the DELSOL2 computer program (SAND81-8237) for calculating collector field performance and layout and optimal system design for solar thermal central receiver plants. The code consists of a detailed model of the optical performance, a simpler model of the non-optical performance, an algorithm for field layout, and a searching algorithm to find the best system design based on energy cost. The latter two features are coupled to a cost model of central receiver components and an economic model for calculating energy costs. The code can handle flat, focused and/or canted heliostats, and external cylindrical, multi-aperture cavity, and flat plate receivers. The program optimizes the tower height, receiver size, field layout, heliostat spacings, and tower position at user specified power levels subject to flux limits on the receiver and land constraints for field layout. DELSOL3 maintains the advantages of speed and accuracy which are characteristics of DELSOL2.
Optimal waste-to-energy strategy assisted by GIS For sustainable solid waste management
NASA Astrophysics Data System (ADS)
Tan, S. T.; Hashim, H.
2014-02-01
Municipal solid waste (MSW) management has become more complex and costly with rapid socio-economic development and an increased volume of waste. Planning a sustainable regional waste management strategy is a critical step for the decision maker. There is great potential for MSW to be used for the generation of renewable energy through waste incineration or landfilling with a gas capture system. However, due to the high processing cost and the cost of transporting and distributing the resource among waste collection stations and power plants, MSW is mostly disposed of in landfills. This paper presents an optimization model incorporating GIS data inputs for MSW management. The model can design a multi-period waste-to-energy (WTE) strategy to illustrate the economic potential and tradeoffs of MSW management under different scenarios. The model is capable of predicting the optimal generation, capacity, type of WTE conversion technology and location for the operation and construction of new WTE power plants to satisfy the increased energy demand by 2025 in the most profitable way. The Iskandar Malaysia region was chosen as the model city for this study.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lilienthal, P.
1997-12-01
This paper describes three different computer codes which have been written to model village power applications. The reasons which have driven the development of these codes include: the existence of limited field data; diverse applications can be modeled; models allow cost and performance comparisons; simulations generate insights into cost structures. The models which are discussed are: Hybrid2, a public code which provides detailed engineering simulations to analyze the performance of a particular configuration; HOMER, the hybrid optimization model for electric renewables, which provides economic screening for sensitivity analyses; and VIPOR, the village power model, which is a network optimization model for comparing mini-grids to individual systems. Examples of the output of these codes are presented for specific applications.
NASA Astrophysics Data System (ADS)
Pulido-Velazquez, Manuel; Lopez-Nicolas, Antonio; Harou, Julien J.; Andreu, Joaquin
2013-04-01
Hydrologic-economic models allow integrated analysis of water supply, demand and infrastructure management at the river basin scale. These models simultaneously analyze engineering, hydrology and economic aspects of water resources management. Two new tools have been designed to develop models within this approach: a simulation tool (SIM_GAMS), for models in which water is allocated each month based on supply priorities to competing uses and system operating rules, and an optimization tool (OPT_GAMS), in which water resources are allocated optimally following economic criteria. The characterization of the water resource network system requires a connectivity matrix representing the topology of the elements, generated using HydroPlatform. HydroPlatform, an open-source software platform for network (node-link) models, allows to store, display and export all information needed to characterize the system. Two generic non-linear models have been programmed in GAMS to use the inputs from HydroPlatform in simulation and optimization models. The simulation model allocates water resources on a monthly basis, according to different targets (demands, storage, environmental flows, hydropower production, etc.), priorities and other system operating rules (such as reservoir operating rules). The optimization model's objective function is designed so that the system meets operational targets (ranked according to priorities) each month while following system operating rules. This function is analogous to the one used in the simulation module of the DSS AQUATOOL. Each element of the system has its own contribution to the objective function through unit cost coefficients that preserve the relative priority rank and the system operating rules. The model incorporates groundwater and stream-aquifer interaction (allowing conjunctive use simulation) with a wide range of modeling options, from lumped and analytical approaches to parameter-distributed models (eigenvalue approach). Such functionality is not typically included in other water DSS. Based on the resulting water resources allocation, the model calculates operating and water scarcity costs caused by supply deficits based on economic demand functions for each demand node. The optimization model allocates the available resource over time based on economic criteria (net benefits from demand curves and cost functions), minimizing the total water scarcity and operating cost of water use. This approach provides solutions that optimize the economic efficiency (as total net benefit) in water resources management over the optimization period. Both models must be used together in water resource planning and management. The optimization model provides an initial insight on economically efficient solutions, from which different operating rules can be further developed and tested using the simulation model. The hydro-economic simulation model allows assessing economic impacts of alternative policies or operating criteria, avoiding the perfect foresight issues associated with the optimization. The tools have been applied to the Jucar river basin (Spain) in order to assess the economic results corresponding to the current modus operandi of the system and compare them with the solution from the optimization that maximizes economic efficiency. Acknowledgments: The study has been partially supported by the European Community 7th Framework Project (GENESIS project, n. 
226536) and the Plan Nacional I+D+I 2008-2011 of the Spanish Ministry of Science and Innovation (CGL2009-13238-C02-01 and CGL2009-13238-C02-02).
Optimization Scheduling Model for Wind-thermal Power System Considering the Dynamic penalty factor
NASA Astrophysics Data System (ADS)
PENG, Siyu; LUO, Jianchun; WANG, Yunyu; YANG, Jun; RAN, Hong; PENG, Xiaodong; HUANG, Ming; LIU, Wanyu
2018-03-01
In this paper, a new dynamic economic dispatch model for power systems is presented. The objective function of the proposed model introduces a major novelty into dynamic economic dispatch with wind farms: a "dynamic penalty factor". This factor is computed using fuzzy logic that considers both the variable nature of active wind power and the power demand, and it adjusts the wind curtailment cost according to the state of the power system. Case studies were carried out on the IEEE 30-bus system. Results show that the proposed optimization model can effectively mitigate wind curtailment and the total cost, demonstrating the validity and effectiveness of the proposed model.
Sato, Katsufumi; Shiomi, Kozue; Watanabe, Yuuki; Watanuki, Yutaka; Takahashi, Akinori; Ponganis, Paul J.
2010-01-01
It has been predicted that geometrically similar animals would swim at the same speed with stroke frequency scaling with mass^(-1/3). In the present study, morphological and behavioural data obtained from free-ranging penguins (seven species) were compared. Morphological measurements support the geometrical similarity. However, cruising speeds of 1.8-2.3 m s^-1 were significantly related to mass^0.08 and stroke frequencies were proportional to mass^(-0.29). These scaling relationships do not agree with the previous predictions for geometrically similar animals. We propose a theoretical model, considering metabolic cost, work against mechanical forces (drag and buoyancy), pitch angle and dive depth. This new model predicts that: (i) the optimal swim speed, which minimizes the energy cost of transport, is proportional to (basal metabolic rate/drag)^(1/3), independent of buoyancy, pitch angle and dive depth; (ii) the optimal speed is related to mass^0.05; and (iii) stroke frequency is proportional to mass^(-0.28). The observed scaling relationships of penguins support these predictions, which suggest that breath-hold divers swim optimally to minimize the cost of transport, including mechanical and metabolic energy, during dives. PMID:19906666
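The (basal metabolic rate/drag)^(1/3) scaling of prediction (i) can be recovered from a simplified cost-of-transport argument that keeps only the basal rate B and a drag power proportional to v^3; this reduced derivation omits the buoyancy, pitch-angle and dive-depth terms that the authors' full model includes.

```latex
\mathrm{COT}(v) \;=\; \frac{B + k\,v^{3}}{v} \;=\; \frac{B}{v} + k\,v^{2},
\qquad
\frac{d\,\mathrm{COT}}{dv} = -\frac{B}{v^{2}} + 2kv = 0
\;\;\Rightarrow\;\;
v^{*} = \left( \frac{B}{2k} \right)^{1/3}.
```

Here k plays the role of an effective drag coefficient, so v* scales with (metabolic rate/drag)^(1/3) and is independent of buoyancy and depth in this reduced form.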
Application of the Software as a Service Model to the Control of Complex Building Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stadler, Michael; Donadee, Jonathan; Marnay, Chris
2011-03-17
In an effort to create broad access to its optimization software, Lawrence Berkeley National Laboratory (LBNL), in collaboration with the University of California at Davis (UC Davis) and OSISoft, has recently developed a Software as a Service (SaaS) Model for reducing energy costs, cutting peak power demand, and reducing carbon emissions for multipurpose buildings. UC Davis currently collects and stores energy usage data from buildings on its campus. Researchers at LBNL sought to demonstrate that a SaaS application architecture could be built on top of this data system to optimize the scheduling of electricity and heat delivery in the building. The SaaS interface, known as WebOpt, consists of two major parts: a) the investment & planning and b) the operations module, which builds on the investment & planning module. The operational scheduling and load shifting optimization models within the operations module use data from load prediction and electrical grid emissions models to create an optimal operating schedule for the next week, reducing peak electricity consumption while maintaining quality of energy services. LBNL's application also provides facility managers with suggested energy infrastructure investments for achieving their energy cost and emission goals based on historical data collected with OSISoft's system. This paper describes these models as well as the SaaS architecture employed by LBNL researchers to provide asset scheduling services to UC Davis. The peak demand, emissions, and cost implications of the asset operation schedule and investments suggested by this optimization model are analysed.
Accounting for enforcement costs in the spatial allocation of marine zones.
Davis, Katrina; Kragt, Marit; Gelcich, Stefan; Schilizzi, Steven; Pannell, David
2015-02-01
Marine fish stocks are in many cases extracted above sustainable levels, but they may be protected through restricted-use zoning systems. The effectiveness of these systems typically depends on support from coastal fishing communities. High management costs including those of enforcement may, however, deter fishers from supporting marine management. We incorporated enforcement costs into a spatial optimization model that identified how conservation targets can be met while maximizing fishers' revenue. Our model identified the optimal allocation of the study area among different zones: no-take, territorial user rights for fisheries (TURFs), or open access. The analysis demonstrated that enforcing no-take and TURF zones incurs a cost, but results in higher species abundance by preventing poaching and overfishing. We analyzed how different enforcement scenarios affected fishers' revenue. Fisher revenue was approximately 50% higher when territorial user rights were enforced than when they were not. The model preferentially allocated area to the enforced-TURF zone over other zones, demonstrating that the financial benefits of enforcement (derived from higher species abundance) exceeded the costs. These findings were robust to increases in enforcement costs but sensitive to changes in species' market price. We also found that revenue under the existing zoning regime in the study area was 13-30% lower than under an optimal solution. Our results highlight the importance of accounting for both the benefits and costs of enforcement in marine conservation, particularly when incurred by fishers. © 2014 Society for Conservation Biology.
Li, Shuangyan; Li, Xialian; Zhang, Dezhi; Zhou, Lingyun
2017-01-01
This study develops an optimization model to integrate facility location and inventory control for a three-level distribution network consisting of a supplier, multiple distribution centers (DCs), and multiple retailers. The integrated model addressed in this study simultaneously determines three types of decisions: (1) facility location (optimal number, location, and size of DCs); (2) allocation (assignment of suppliers to located DCs and retailers to located DCs, and corresponding optimal transport mode choices); and (3) inventory control decisions on order quantities, reorder points, and amount of safety stock at each retailer and opened DC. A mixed-integer programming model is presented, which considers the carbon emission taxes, multiple transport modes, stochastic demand, and replenishment lead time. The goal is to minimize the total cost, which covers the fixed costs of logistics facilities, inventory, transportation, and CO2 emission tax charges. The aforementioned optimal model was solved using commercial software LINGO 11. A numerical example is provided to illustrate the applications of the proposed model. The findings show that carbon emission taxes can significantly affect the supply chain structure, inventory level, and carbon emission reduction levels. The delay rate directly affects the replenishment decision of a retailer.
An Optimization Model For Strategy Decision Support to Select Kind of CPO’s Ship
NASA Astrophysics Data System (ADS)
Suaibah Nst, Siti; Nababan, Esther; Mawengkang, Herman
2018-01-01
The selection of marine transport for the distribution of crude palm oil (CPO) is one strategy that can be considered to reduce transport costs. The cost of transporting CPO from one area to the CPO factory located at the destination port may affect the level of CPO prices and the number of demands. In order to maintain the availability of CPO, a strategy is required to minimize the cost of transport. In this study, the strategy considered is the selection of the type of chartered ship: a barge or a chemical tanker. This study aims to determine an optimization model for strategy decision support in selecting the type of CPO ship by minimizing transport costs. Because the selection of the ship involves random elements, a two-stage stochastic programming model was used to select the kind of ship. The model can help decision makers to select either a barge or a chemical tanker to distribute CPO.
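The generic two-stage stochastic program behind such a model has the form below, with the first-stage decision x (here, the choice and size of the chartered ship type) taken before the random data ξ (demands, prices, freight rates) are realized, and the recourse y (the shipping plan) chosen afterwards; the matrices and vectors are placeholders, not the study's actual data.

```latex
\min_{x}\;\; c^{\top} x + \mathbb{E}_{\xi}\big[\, Q(x,\xi) \,\big],
\qquad
Q(x,\xi) \;=\; \min_{y \ge 0}\;\; q(\xi)^{\top} y
\quad \text{s.t.} \quad W y = h(\xi) - T x .
```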
Application of Particle Swarm Optimization Algorithm in the Heating System Planning Problem
Ma, Rong-Jiang; Yu, Nan-Yang; Hu, Jun-Yi
2013-01-01
Based on the life cycle cost (LCC) approach, this paper presents an integral mathematical model and a particle swarm optimization (PSO) algorithm for the heating system planning (HSP) problem. The proposed mathematical model minimizes the cost of the heating system as the objective over a given life cycle time. Because of the particularities of the HSP problem, the general particle swarm optimization algorithm was improved. An actual case study was calculated to check its feasibility in practical use. The results show that the improved particle swarm optimization (IPSO) algorithm can solve the HSP problem better than the standard PSO algorithm. Moreover, the results also present the potential to provide useful information when making decisions in the practical planning process. Therefore, it is believed that if this approach is applied correctly and in combination with other elements, it can become a powerful and effective optimization tool for the HSP problem. PMID:23935429
Optimal design of upstream processes in biotransformation technologies.
Dheskali, Endrit; Michailidi, Katerina; de Castro, Aline Machado; Koutinas, Apostolis A; Kookos, Ioannis K
2017-01-01
In this work a mathematical programming model for the optimal design of the bioreaction section of biotechnological processes is presented. Equations for the estimation of the equipment cost derived from a recent publication by the US National Renewable Energy Laboratory (NREL) are also summarized. The cost-optimal design of process units and the optimal scheduling of their operation can be obtained using the proposed formulation, which has been implemented in software available from the journal web page or the corresponding author. The proposed optimization model can be used to quantify the effects of decisions taken at lab scale on industrial-scale process economics. It is of paramount importance to note that this can be achieved at an early stage of the development of a biotechnological project. Two case studies are presented that demonstrate the usefulness and potential of the proposed methodology. Copyright © 2016. Published by Elsevier Ltd.
The MusIC method: a fast and quasi-optimal solution to the muscle forces estimation problem.
Muller, A; Pontonnier, C; Dumont, G
2018-02-01
The present paper aims at presenting a fast and quasi-optimal method of muscle forces estimation: the MusIC method. It consists in interpolating a first estimation in a database generated offline thanks to a classical optimization problem, and then correcting it to respect the motion dynamics. Three different cost functions (two polynomial criteria and a min/max criterion) were tested on a planar musculoskeletal model. The MusIC method provides a computation frequency approximately 10 times higher compared to a classical optimization problem with a relative mean error of 4% on cost function evaluation.
An optimization framework for workplace charging strategies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Yongxi; Zhou, Yan
2015-03-01
The workplace charging (WPC) has been recently recognized as the most important secondary charging point next to residential charging for plug-in electric vehicles (PEVs). The current WPC practice is spontaneous and grants every PEV a designated charger, which may not be practical or economic when there are a large number of PEVs present at workplace. This study is the first research undertaken that develops an optimization framework for WPC strategies to satisfy all charging demand while explicitly addressing different eligible levels of charging technology and employees' demographic distributions. The optimization model is to minimize the lifetime cost of equipment, installations, and operations, and is formulated as an integer program. We demonstrate the applicability of the model using numerical examples based on national average data. The results indicate that the proposed optimization model can reduce the total cost of running a WPC system by up to 70% compared to the current practice. The WPC strategies are sensitive to the time windows and installation costs, and dominated by the PEV population size. The WPC has also been identified as an alternative sustainable transportation program to the public transit subsidy programs for both economic and environmental advantages.
NASA Astrophysics Data System (ADS)
Lamdjaya, T.; Jobiliong, E.
2017-01-01
PT Anugrah Citra Boga is a food processing company whose main product is meatballs. The distribution system for the products must be considered, because it needs to be more efficient in order to reduce shipment costs. The purpose of this research is to optimize the distribution time by simulating the distribution channels with the capacitated vehicle routing problem method. First, the distribution routes are observed in order to calculate the average speed, time capacity and shipping costs. The model is then built using AIMMS software. The inputs required to simulate the model are customer locations, distances, and process times. Finally, the total distribution cost obtained by the simulation is compared with the historical data. It is concluded that the company can reduce the shipping cost by around 4.1%, or Rp 529,800 per month. By using this model, the utilization rate becomes more balanced: the utilization of the first vehicle decreases from 104.6% to 88.6%, while the utilization of the second vehicle increases from 59.8% to 74.1%. The simulation model is able to produce the optimal shipping route subject to time restrictions, vehicle capacity, and the number of vehicles.
SimWIND: A Geospatial Infrastructure Model for Wind Energy Production and Transmission
NASA Astrophysics Data System (ADS)
Middleton, R. S.; Phillips, B. R.; Bielicki, J. M.
2009-12-01
Wind is a clean, enduring energy resource with a capacity to satisfy 20% or more of the electricity needs in the United States. A chief obstacle to realizing this potential is the general paucity of electrical transmission lines between promising wind resources and primary load centers. Successful exploitation of this resource will therefore require carefully planned enhancements to the electric grid. To this end, we present the model SimWIND for self-consistent optimization of the geospatial arrangement and cost of wind energy production and transmission infrastructure. Given a set of wind farm sites that satisfy meteorological viability and stakeholder interest, our model simultaneously determines where and how much electricity to produce, where to build new transmission infrastructure and with what capacity, and where to use existing infrastructure in order to minimize the cost for delivering a given amount of electricity to key markets. Costs and routing of transmission line construction take into account geographic and social factors, as well as connection and delivery expenses (transformers, substations, etc.). We apply our model to Texas and consider how findings complement the 2008 Electric Reliability Council of Texas (ERCOT) Competitive Renewable Energy Zones (CREZ) Transmission Optimization Study. Results suggest that integrated optimization of wind energy infrastructure and cost using SimWIND could play a critical role in wind energy planning efforts.
NASA Astrophysics Data System (ADS)
Kristiana, S. P. D.
2017-12-01
Corporate chain stores are a type of retail company growing rapidly in Indonesia. The competition between retail companies is very tight, so retail companies should evaluate their performance continuously in order to survive. The selling price of products is an essential attribute that attracts consumers' attention and is used to evaluate the performance of the industry. This research aimed to determine the optimal product selling price by considering cost factors, namely the purchase price of the product from the supplier, holding costs, and transportation costs. A fuzzy logic approach is used for data processing with MATLAB software; fuzzy logic is selected because it can handle complex factors. The result is a model for determining the optimal selling price with the three cost factors as inputs. MAPE and model prediction ability for several products are used for validation and verification, with an average MAPE of 0.0525 and a prediction ability of 94.75%. The conclusion is that the model can predict the selling price with up to 94.75% accuracy, so it can be used as a tool for corporate chain stores in particular to determine the optimal selling price of their products.
Optimizing and controlling earthmoving operations using spatial technologies
NASA Astrophysics Data System (ADS)
Alshibani, Adel
This thesis presents a model designed for optimizing, tracking, and controlling earthmoving operations. The proposed model utilizes Genetic Algorithms (GA), Linear Programming (LP), and spatial technologies including Global Positioning Systems (GPS) and Geographic Information Systems (GIS) to support the management functions of the developed model. The model assists engineers and contractors in selecting near optimum crew formations in the planning phase and during construction, using GA and LP supported by the Pathfinder Algorithm developed in a GIS environment. GA is used in conjunction with a set of rules developed to accelerate the optimization process and to avoid generating and evaluating hypothetical and unrealistic crew formations. LP is used to determine quantities of earth to be moved from different borrow pits and to be placed at different landfill sites to meet project constraints and to minimize the cost of these earthmoving operations. On the one hand, GPS is used for onsite data collection and for tracking construction equipment in near real-time. On the other hand, GIS is employed to automate data acquisition and to analyze the collected spatial data. The model is also capable of reconfiguring crew formations dynamically during the construction phase while site operations are in progress. The optimization of the crew formation considers: (1) construction time, (2) construction direct cost, or (3) construction total cost. The model is also capable of generating crew formations to meet, as close as possible, specified time and/or cost constraints. In addition, the model supports tracking and reporting of project progress utilizing the earned-value concept and the project ratio method with modifications that allow for more accurate forecasting of project time and cost at set future dates and at completion. The model is capable of generating graphical and tabular reports. The developed model has been implemented in prototype software, using Object-Oriented Programming and Microsoft Foundation Classes (MFC), and has been coded in Visual C++ v6. Microsoft Access is employed as the database management system. The developed software operates in the Microsoft Windows environment. Three example applications were analyzed to validate the development made and to illustrate the essential features of the developed model.
An Energy Integrated Dispatching Strategy of Multi- energy Based on Energy Internet
NASA Astrophysics Data System (ADS)
Jin, Weixia; Han, Jun
2018-01-01
Energy internet is a new way of using energy; it achieves high energy efficiency and low cost by scheduling a variety of different forms of energy. Particle Swarm Optimization (PSO) is an advanced algorithm with few parameters, high computational precision and fast convergence speed. By improving the parameters ω, c1 and c2, PSO can improve its convergence speed and calculation accuracy. The objective of the optimization model is the lowest fuel cost that still meets the electricity, heating and cooling loads after all the renewable energy is absorbed. Because the energy structure and prices differ between regions, the optimization strategy needs to be determined for each region according to the algorithm and model.
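For context, the standard PSO update in which the parameters ω, c1 and c2 appear is reproduced below; this is the textbook form, not necessarily the exact variant used in the paper.

```latex
v_i^{t+1} = \omega\, v_i^{t} + c_1 r_1 \big( p_i - x_i^{t} \big) + c_2 r_2 \big( g - x_i^{t} \big),
\qquad
x_i^{t+1} = x_i^{t} + v_i^{t+1},
```

where p_i is particle i's best position so far, g is the swarm's best position, and r1, r2 are uniform random numbers in [0, 1]. Decreasing the inertia weight ω over the iterations shifts the search from exploration to exploitation, which is the usual motivation for tuning ω, c1 and c2.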
Optimal control of information epidemics modeled as Maki Thompson rumors
NASA Astrophysics Data System (ADS)
Kandhway, Kundan; Kuri, Joy
2014-12-01
We model the spread of information in a homogeneously mixed population using the Maki Thompson rumor model. We formulate an optimal control problem, from the perspective of single campaigner, to maximize the spread of information when the campaign budget is fixed. Control signals, such as advertising in the mass media, attempt to convert ignorants and stiflers into spreaders. We show the existence of a solution to the optimal control problem when the campaigning incurs non-linear costs under the isoperimetric budget constraint. The solution employs Pontryagin's Minimum Principle and a modified version of forward backward sweep technique for numerical computation to accommodate the isoperimetric budget constraint. The techniques developed in this paper are general and can be applied to similar optimal control problems in other areas. We have allowed the spreading rate of the information epidemic to vary over the campaign duration to model practical situations when the interest level of the population in the subject of the campaign changes with time. The shape of the optimal control signal is studied for different model parameters and spreading rate profiles. We have also studied the variation of the optimal campaigning costs with respect to various model parameters. Results indicate that, for some model parameters, significant improvements can be achieved by the optimal strategy compared to the static control strategy. The static strategy respects the same budget constraint as the optimal strategy and has a constant value throughout the campaign horizon. This work finds application in election and social awareness campaigns, product advertising, movie promotion and crowdfunding campaigns.
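The forward-backward sweep mentioned above can be illustrated on a much simpler problem than the rumor model. The sketch below solves a toy linear-quadratic problem invented for illustration (no budget constraint): minimize the integral of x^2 + u^2 subject to x' = -x + u, using Pontryagin's conditions u = -λ/2 and λ' = -2x + λ with λ(T) = 0.

```python
import numpy as np

# Toy linear-quadratic problem standing in for the campaigning model above
# (hypothetical): minimize  int_0^T (x^2 + u^2) dt  subject to  x' = -x + u.
T, N = 5.0, 1000
dt = T / N
x0 = 1.0

x = np.zeros(N + 1)
lam = np.zeros(N + 1)
u = np.zeros(N + 1)          # initial control guess

for it in range(200):
    # Forward sweep: integrate the state with the current control.
    x[0] = x0
    for k in range(N):
        x[k + 1] = x[k] + dt * (-x[k] + u[k])

    # Backward sweep: costate lambda' = -dH/dx = -2x + lambda, with lambda(T) = 0.
    lam[N] = 0.0
    for k in range(N, 0, -1):
        lam[k - 1] = lam[k] - dt * (-2.0 * x[k] + lam[k])

    # Stationarity of the Hamiltonian, dH/du = 2u + lambda = 0  ->  u = -lambda / 2,
    # blended with the previous iterate for stability.
    u_new = -0.5 * lam
    if np.max(np.abs(u_new - u)) < 1e-8:
        u = u_new
        break
    u = 0.5 * u + 0.5 * u_new

cost = dt * np.sum(x[:-1] ** 2 + u[:-1] ** 2)
print(f"converged after {it + 1} iterations, cost ~ {cost:.4f}")
```

In the campaign setting described above, the state equations would be the Maki Thompson rates, the control would enter the Hamiltonian through the non-linear campaigning cost, and the isoperimetric budget constraint would typically add an extra state with a constant costate, which is why the authors use a modified sweep.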
Biobjective planning of GEO debris removal mission with multiple servicing spacecrafts
NASA Astrophysics Data System (ADS)
Jing, Yu; Chen, Xiao-qian; Chen, Li-hu
2014-12-01
The mission planning of GEO debris removal with multiple servicing spacecrafts (SScs) is studied in this paper. Specifically, the SScs are considered to be initially on the GEO belt; they should rendezvous with debris of different orbital slots and different inclinations, remove the debris to the graveyard orbit, and finally return to their initial locations. Three key problems must be resolved: task assignment, mission sequence planning, and transfer trajectory optimization for each SSc. The minimum-cost, two-impulse phasing maneuver is used for each rendezvous. The objective is to find a set of optimal planning schemes with minimum fuel cost and travel duration. Considering this mission as a hybrid optimal control problem, a mathematical model is proposed, and a modified multi-objective particle swarm optimization is employed to solve it. Numerical examples are carried out to demonstrate the effectiveness of the model and solution method. In this paper, single-SSc and multiple-SSc scenarios with the same amount of fuel are compared. The experiments indicate that, for a given GEO debris removal mission, which alternative (single SSc or multiple SScs) is better (i.e., costs less fuel and consumes less travel time) is determined by many factors. Although in some cases multiple-SSc scenarios may perform worse than single-SSc scenarios, the extra costs are considered worth the gain in mission safety and robustness.
Demand side management in recycling and electricity retail pricing
NASA Astrophysics Data System (ADS)
Kazan, Osman
This dissertation addresses several problems from the recycling industry and the electricity retail market. The first paper addresses a real-life scheduling problem faced by a national industrial recycling company. Based on the company's practices, a scheduling problem is defined, modeled, analyzed, and a solution is approximated efficiently. The recommended application is tested on real-life data and on randomly generated data, and the scheduling improvements and financial benefits are presented. The second problem comes from the electricity retail market. Daily electricity usage follows well-known hourly patterns that change in shape and magnitude with the season and the day of the week. Generation costs are several times higher during the peak hours of the day, yet most consumers purchase electricity at flat rates. This work explores analytic pricing tools that retailers can use to reduce peak-load electricity demand. For that purpose, a nonlinear model that determines optimal hourly prices is established based on two major components: unit generation costs and consumers' utility; both are analyzed and estimated empirically in the third paper. A pricing model is introduced to maximize the electric retailer's profit, and a closed-form expression for the optimal price vector is obtained. Possible scenarios are evaluated for the consumers' utility distribution, and for the general case we provide a numerical solution methodology to obtain the optimal pricing scheme. The recommended models are tested under various scenarios that consider consumer segmentation and multiple pricing policies, and the recommended model reduces the peak load significantly in most cases. Several utility companies offer hourly pricing to their customers and determine prices using historical data of unit electricity cost over time. In this dissertation we develop a nonlinear model that determines optimal hourly prices with parameter estimation. The last paper includes a regression analysis of the unit generation cost function obtained from Independent Service Operators, and a consumer experiment is established to replicate the peak-load behavior. As a result, consumers' utility function is estimated and optimal retail electricity prices are computed.
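As a rough illustration of the kind of hourly pricing calculation involved, assume each hour has a linear demand response d_t(p) = a_t - b_t p and a constant unit generation cost c_t; maximizing the retailer's profit hour by hour then gives the textbook closed form p_t = (a_t + b_t c_t)/(2 b_t). The parameters below are hypothetical, not the dissertation's estimates:

```python
# Hedged sketch: profit-maximizing hourly prices for a retailer facing linear
# hourly demand d_t(p) = a_t - b_t * p and unit generation cost c_t.
# Maximizing (p - c_t) * d_t(p) in p gives p_t* = (a_t + b_t * c_t) / (2 * b_t).
import numpy as np

hours = np.arange(24)
a = 50 + 30 * np.exp(-((hours - 18) ** 2) / 8.0)        # demand intercept, evening peak
b = np.full(24, 2.0)                                    # price sensitivity
c = 0.05 + 0.10 * (a - a.min()) / (a.max() - a.min())   # unit cost, higher at peak

p_star = (a + b * c) / (2 * b)
profit = (p_star - c) * (a - b * p_star)
print(np.round(p_star, 3))
print("total daily profit:", round(profit.sum(), 2))
```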
Li, Xuejun; Xu, Jia; Yang, Yun
2015-01-01
A cloud workflow system is a kind of platform service based on cloud computing that facilitates the automation of workflow applications. Among the features that distinguish cloud workflow systems from their counterparts, the market-oriented business model is one of the most prominent. The optimization of task-level scheduling in cloud workflow systems is therefore a hot topic. Because the scheduling problem is NP-hard, Ant Colony Optimization (ACO) and Particle Swarm Optimization (PSO) have been proposed to optimize the cost; however, both exhibit premature convergence during optimization and therefore cannot reduce the cost effectively. To address these problems, a Chaotic Particle Swarm Optimization (CPSO) algorithm with a chaotic sequence and an adaptive inertia weight factor is applied to task-level scheduling. The high randomness of the chaotic sequence improves the diversity of solutions, while its regularity assures good global convergence. The adaptive inertia weight factor depends on the estimated value of the cost and allows the scheduling to avoid premature convergence by properly balancing global and local exploration. Experimental simulation shows that the cost obtained by our scheduling is consistently lower than that of the two representative counterparts.
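A sketch of the two CPSO ingredients named above, a logistic-map chaotic sequence used in place of uniform random numbers and an inertia weight adapted from how far a particle's cost sits from the swarm's best; this is an illustrative reading of those ideas, not the authors' exact scheduling algorithm:

```python
import numpy as np

def logistic_map(x):
    # classic chaotic sequence on (0, 1); fully chaotic at r = 4
    return 4.0 * x * (1.0 - x)

def adaptive_inertia(cost, best_cost, avg_cost, w_min=0.4, w_max=0.9):
    # particles doing better than average get a smaller, more exploitative weight
    if cost <= avg_cost:
        return w_min + (w_max - w_min) * (cost - best_cost) / (avg_cost - best_cost + 1e-12)
    return w_max

def cpso_step(pos, vel, pbest, gbest, costs, chaos, c1=1.5, c2=1.5):
    best, avg = costs.min(), costs.mean()
    new_vel = np.empty_like(vel)
    for i in range(len(pos)):
        w = adaptive_inertia(costs[i], best, avg)
        chaos[i] = logistic_map(chaos[i])   # chaotic number replacing uniform r1
        r1 = chaos[i]
        chaos[i] = logistic_map(chaos[i])   # and a second one for r2
        r2 = chaos[i]
        new_vel[i] = (w * vel[i] + c1 * r1 * (pbest[i] - pos[i])
                      + c2 * r2 * (gbest - pos[i]))
    return pos + new_vel, new_vel, chaos

# tiny usage example with a stand-in scheduling cost (sum of the position vector)
rng = np.random.default_rng(1)
pos = rng.uniform(0, 1, (10, 3))
vel = np.zeros_like(pos)
costs = pos.sum(axis=1)
pbest, gbest = pos.copy(), pos[costs.argmin()]
chaos = rng.uniform(0.1, 0.9, 10)
pos, vel, chaos = cpso_step(pos, vel, pbest, gbest, costs, chaos)
print(pos.shape)
```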
Design and analysis of electricity markets
NASA Astrophysics Data System (ADS)
Sioshansi, Ramteen Mehr
Restructured competitive electricity markets rely on designing market-based mechanisms which can efficiently coordinate the power system and minimize the exercise of market power. This dissertation is a series of essays which develop and analyze models of restructured electricity markets. Chapter 2 studies the incentive properties of a co-optimized market for energy and reserves that pays reserved generators their implied opportunity cost---which is the difference between their stated energy cost and the market-clearing price for energy. By analyzing the market as a competitive direct revelation mechanism, we examine the properties of efficient equilibria and demonstrate that generators have incentives to shade their stated costs below actual costs. We further demonstrate that the expected energy payments of our mechanism are less than those in a disjoint market for energy only. Chapter 3 is an empirical validation of a supply function equilibrium (SFE) model. By comparing theoretically optimal supply functions with actual generation offers into the Texas spot balancing market, we show that the SFE fits the actual behavior of the largest generators in the market. This not only serves to validate the model, but also demonstrates the extent to which firms exercise market power. Chapters 4 and 5 examine equity, incentive, and efficiency issues in the design of non-convex commitment auctions. We demonstrate that different near-optimal solutions to a central unit commitment problem which have similar-sized optimality gaps will generally yield vastly different energy prices and payoffs to individual generators. Although solving the mixed integer program to optimality will overcome such issues, we show that this relies on achieving optimality of the commitment---which may not be tractable for large-scale problems within the allotted timeframe. We then simulate and compare a competitive benchmark for a market with centralized and self commitment in order to bound the efficiency losses stemming from coordination losses (cost of anarchy) in a decentralized market.
NASA Astrophysics Data System (ADS)
Moghaddam, Kamran S.; Usher, John S.
2011-07-01
In this article, a new multi-objective optimization model is developed to determine the optimal preventive maintenance and replacement schedules in a repairable and maintainable multi-component system. In this model, the planning horizon is divided into discrete and equally-sized periods in which three possible actions must be planned for each component, namely maintenance, replacement, or do nothing. The objective is to determine a plan of actions for each component in the system while minimizing the total cost and maximizing overall system reliability simultaneously over the planning horizon. Because of the complex, combinatorial, and highly nonlinear structure of the mathematical model, two metaheuristic solution methods, a generational genetic algorithm and simulated annealing, are applied to tackle the problem. The Pareto optimal solutions that provide good tradeoffs between the total cost and the overall reliability of the system can be obtained by the solution approach. Such a modeling approach should be useful for maintenance planners and engineers tasked with the problem of developing recommended maintenance plans for complex systems of components.
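As an illustration of how a candidate plan over discrete periods can be scored on the two objectives, the sketch below evaluates the total cost and a simple series-system reliability for an action matrix with entries 0 (do nothing), 1 (maintain), 2 (replace); the cost figures, failure rates, and improvement factors are hypothetical, not the article's data:

```python
# Hedged sketch: scoring one maintenance/replacement plan on cost and reliability.
import numpy as np

MAINT_COST, REPL_COST = 100.0, 400.0
rng = np.random.default_rng(0)

def evaluate(plan, base_failure=0.05, age_factor=0.02, maint_gain=0.6):
    """plan: array (n_components, n_periods) of actions {0, 1, 2}."""
    n_comp, n_per = plan.shape
    cost, age = 0.0, np.zeros(n_comp)
    reliability = 1.0
    for t in range(n_per):
        for i in range(n_comp):
            if plan[i, t] == 2:          # replacement resets the effective age
                cost += REPL_COST; age[i] = 0.0
            elif plan[i, t] == 1:        # imperfect maintenance reduces the age
                cost += MAINT_COST; age[i] *= (1.0 - maint_gain)
            p_fail = min(1.0, base_failure + age_factor * age[i])
            reliability *= (1.0 - p_fail)   # series system over components and periods
            age[i] += 1.0
    return cost, reliability

plan = rng.integers(0, 3, size=(4, 12))
print(evaluate(plan))
```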
Optimizing Experimental Designs Relative to Costs and Effect Sizes.
ERIC Educational Resources Information Center
Headrick, Todd C.; Zumbo, Bruno D.
A general model is derived for the purpose of efficiently allocating integral numbers of units in multi-level designs given prespecified power levels. The derivation of the model is based on a constrained optimization problem that maximizes a general form of a ratio of expected mean squares subject to a budget constraint. This model provides more…
Biswas, Santanu; Subramanian, Abhishek; ELMojtaba, Ibrahim M; Chattopadhyay, Joydev; Sarkar, Ram Rup
2017-01-01
Visceral leishmaniasis (VL) is a deadly neglected tropical disease that poses a serious problem in various countries all over the world. Implementation of various intervention strategies fails to control the spread of this disease due to parasite drug resistance and resistance of sandfly vectors to insecticide sprays. Because of this, policy makers need to develop novel strategies or resort to a combination of multiple intervention strategies to control the spread of the disease. To address this issue, we propose an extensive SIR-type model for anthroponotic visceral leishmaniasis transmission with seasonal fluctuations modeled in the form of a periodic sandfly biting rate. Fitting the model to real data reported in South Sudan, we estimate the model parameters and compare the model predictions with known VL cases. Using optimal control theory, we study the effects of popular control strategies, namely drug-based treatment of symptomatic and PKDL-infected individuals, insecticide-treated bednets, and spraying of insecticides, on the dynamics of the infected human and vector populations. We find that the strategies remain ineffective in curbing the disease individually, as opposed to the use of optimal combinations of the mentioned strategies. Testing the model for different optimal combinations while considering periodic seasonal fluctuations, we find that the optimal combination of treatment of individuals and insecticide sprays performs well in controlling the disease for the time period of intervention introduced. Performing a cost-effectiveness analysis, we identify that the same strategy also proves to be efficacious and cost-effective. Finally, we suggest that our model would be helpful for policy makers to predict the best intervention strategies for specific time periods and their appropriate implementation for the elimination of visceral leishmaniasis.
Optimal control of malaria: combining vector interventions and drug therapies.
Khamis, Doran; El Mouden, Claire; Kura, Klodeta; Bonsall, Michael B
2018-04-24
The sterile insect technique and transgenic equivalents are considered promising tools for controlling vector-borne disease in an age of increasing insecticide and drug-resistance. Combining vector interventions with artemisinin-based therapies may achieve the twin goals of suppressing malaria endemicity while managing artemisinin resistance. While the cost-effectiveness of these controls has been investigated independently, their combined usage has not been dynamically optimized in response to ecological and epidemiological processes. An optimal control framework based on coupled models of mosquito population dynamics and malaria epidemiology is used to investigate the cost-effectiveness of combining vector control with drug therapies in homogeneous environments with and without vector migration. The costs of endemic malaria are weighed against the costs of administering artemisinin therapies and releasing modified mosquitoes using various cost structures. Larval density dependence is shown to reduce the cost-effectiveness of conventional sterile insect releases compared with transgenic mosquitoes with a late-acting lethal gene. Using drug treatments can reduce the critical vector control release ratio necessary to cause disease fadeout. Combining vector control and drug therapies is the most effective and efficient use of resources, and using optimized implementation strategies can substantially reduce costs.
Fairness in optimizing bus-crew scheduling process.
Ma, Jihui; Song, Cuiying; Ceder, Avishai Avi; Liu, Tao; Guan, Wei
2017-01-01
This work proposes a model that considers fairness in the crew scheduling problem for bus drivers (CSP-BD) and a hybrid ant-colony optimization (HACO) algorithm to solve it. The main contributions of this work are the following: (a) a valid approach for cases with a special cost structure and constraints that consider the fairness of working time and idle time; (b) an improved algorithm incorporating a Gamma heuristic function and selection rules. The relationships among the cost components are examined using ten bus lines collected from the Beijing Public Transport Holdings (Group) Co., Ltd., one of the largest bus transit companies in the world. The results show that the unfair cost is indirectly related to the common, fixed, and extra costs, and that the unfair cost approaches the common and fixed costs when its coefficient is twice the common cost coefficient. Furthermore, the longest time for the tested bus line, with 1108 pieces and 74 blocks, is less than 30 minutes. The results indicate that the HACO-based algorithm can be a feasible and efficient optimization technique for the CSP-BD, especially for large-scale problems.
Póvoa, P; Oehmen, A; Inocêncio, P; Matos, J S; Frazão, A
2017-05-01
The main objective of this paper is to demonstrate the importance of applying dynamic modelling and real energy prices on a full-scale water resource recovery facility (WRRF) for the evaluation of control strategies in terms of aeration energy costs. The Activated Sludge Model No. 1 (ASM1) was coupled with real energy pricing and a power consumption model and applied as a dynamic simulation case study. The model calibration is based on the STOWA protocol. The case study investigates the importance of providing real energy pricing by comparing (i) real energy pricing, (ii) weighted arithmetic mean energy pricing, and (iii) arithmetic mean energy pricing. The operational strategies evaluated were (i) old versus new air diffusers, (ii) different DO set-points, and (iii) implementation of a carbon removal controller based on nitrate sensor readings. The application of the ASM1 model coupled with real energy costs to a full-scale WRRF was successful. Dynamic modelling with real energy pricing instead of constant energy pricing enables the wastewater utility to optimize energy consumption according to the real energy price structure. The specific energy cost allows the identification of time periods with potential for linking the WRRF with the electric grid to optimize the treatment costs while satisfying operational goals.
NASA Astrophysics Data System (ADS)
Osman, Ayat E.
Energy use in commercial buildings constitutes a major proportion of the energy consumption and anthropogenic emissions in the USA. Cogeneration systems offer an opportunity to meet a building's electrical and thermal demands from a single energy source. To answer the question of which energy source(s) can meet the energy demands of a building most beneficially and cost effectively, optimization techniques have been implemented in some studies to find the optimum energy system based on reducing cost and maximizing revenues. Because of the significant environmental impacts that can result from meeting the energy demands in buildings, building design should incorporate environmental criteria into the decision-making process. The objective of this research is to develop a framework and model to optimize a building's operation by integrating cogeneration systems and utility systems in order to meet the electrical, heating, and cooling demands, considering the potential life cycle environmental impacts of meeting those demands as well as the economic implications. Two LCA optimization models have been developed within a framework that uses hourly building energy data, life cycle assessment (LCA), and mixed-integer linear programming (MILP). The objective functions used in the formulation of the problems include: (1) minimizing life cycle primary energy consumption, (2) minimizing global warming potential, (3) minimizing tropospheric ozone precursor potential, (4) minimizing acidification potential, (5) minimizing NOx, SO2, and CO2 emissions, and (6) minimizing life cycle costs, considering a study period of ten years and the lifetime of equipment. The two LCA optimization models can be used for: (a) long-term planning and operational analysis in buildings by analyzing the hourly energy use of a building during a day, and (b) design and quick analysis of building operation based on periodic analysis of the energy use of a building in a year. A Pareto-optimal frontier is also derived, which defines the minimum cost required to achieve any level of environmental emission or primary energy usage, or inversely the minimum environmental indicator and primary energy usage that can be achieved and the cost required to achieve it.
Kostal, Lubomir; Kobayashi, Ryota
2015-10-01
Information theory quantifies the ultimate limits on reliable information transfer by means of the channel capacity. However, the channel capacity is known to be an asymptotic quantity, assuming unlimited metabolic cost and computational power. We investigate a single-compartment Hodgkin-Huxley type neuronal model under the spike-rate coding scheme and address how the metabolic cost and the decoding complexity affect the optimal information transmission. We find that the sub-threshold stimulation regime, although attaining the smallest capacity, allows for the most efficient balance between the information transmission and the metabolic cost. Furthermore, we determine post-synaptic firing rate histograms that are optimal from the information-theoretic point of view, which enables the comparison of our results with experimental data. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Optimal ordering and production policy for a recoverable item inventory system with learning effect
NASA Astrophysics Data System (ADS)
Tsai, Deng-Maw
2012-02-01
This article presents two models for determining an optimal integrated economic order quantity and economic production quantity policy in a recoverable manufacturing environment. The models assume that the unit production time of the recovery process decreases with the increase in total units produced as a result of learning. A fixed proportion of used products are collected from customers and then recovered for reuse. The recovered products are assumed to be in good condition and acceptable to customers. Constant demand can be satisfied by utilising both newly purchased products and recovered products. The aim of this article is to show how to minimise total inventory-related cost. The total cost functions of the two models are derived and two simple search procedures are proposed to determine optimal policy parameters. Numerical examples are provided to illustrate the proposed models. In addition, sensitivity analyses have also been performed and are discussed.
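A minimal sketch (not the article's exact cost functions) of the kind of search procedure involved: the unit production time of the recovery process is assumed to follow a learning curve t(n) = t1 n^(-b), and the lot size minimizing an illustrative annual cost is found by simple enumeration:

```python
# Hedged sketch: enumerate recovery lot sizes under a learning-curve production time.
# Setup, holding, labor, and demand figures are hypothetical placeholders.
import numpy as np

D = 1000.0        # annual demand
setup = 80.0      # setup cost per recovery run
hold = 2.0        # holding cost per unit per year
labor = 25.0      # labor cost per hour
t1, b = 0.5, 0.3  # first-unit time (hours) and learning exponent

def annual_cost(Q):
    n = np.arange(1, int(Q) + 1)
    batch_time = np.sum(t1 * n ** (-b))          # total hours for a lot of size Q
    cycles = D / Q
    return cycles * (setup + labor * batch_time) + hold * Q / 2.0

candidates = np.arange(10, 1001)
costs = np.array([annual_cost(q) for q in candidates])
best = candidates[costs.argmin()]
print("best lot size:", best, "annual cost:", round(costs.min(), 2))
```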
Structural Design Optimization of Doubly-Fed Induction Generators Using GeneratorSE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sethuraman, Latha; Fingersh, Lee J; Dykes, Katherine L
2017-11-13
A wind turbine with a larger rotor swept area can generate more electricity; however, this increases costs disproportionately for manufacturing, transportation, and installation. This poster presents analytical models for optimizing doubly-fed induction generators (DFIGs), with the objective of reducing the costs and mass of wind turbine drivetrains. The structural design for the induction machine includes models for the casing, stator, rotor, and high-speed shaft developed within the DFIG module in the National Renewable Energy Laboratory's wind turbine sizing tool, GeneratorSE. The mechanical integrity of the machine is verified by examining stresses, structural deflections, and modal properties. The optimization results are then validated using finite element analysis (FEA). The results suggest that our analytical model correlates with the FEA in some areas, such as radial deflection, differing by less than 20 percent, but the analytical model requires further development for axial deflections, torsional deflections, and stress calculations.
NASA Astrophysics Data System (ADS)
Ogallagher, J.; Winston, R.
1987-09-01
Using nonimaging secondary concentrators in point-focus applications may permit the development of more cost-effective concentrator systems by either improving performance or reducing costs. Secondaries may also increase design flexibility. The major objective of this study was to develop as complete an understanding as possible of the quantitative performance and cost effects associated with deploying nonimaging secondary concentrators at the focal zone of point-focus solar thermal concentrators. A performance model was developed that uses a Monte Carlo ray-trace procedure to determine the focal plane distribution of a paraboloidal primary as a function of optical parameters. It then calculates the corresponding optimized concentration and thermal efficiency as a function of temperature with and without the secondary. To examine the potential cost benefits associated with secondaries, a preliminary model for the rational optimization of performance versus cost trade-offs was developed. This model suggests a possible 10 to 20 percent reduction in the cost of delivered energy when secondaries are used. This is a lower limit, and the benefits may even be greater if using a secondary permits the development of inexpensive primary technologies for which the performance would not otherwise be viable.
NASA Astrophysics Data System (ADS)
Zheng, Y.; Wu, B.; Wu, X.
2015-12-01
Integrated hydrological models (IHMs) consider surface water and subsurface water as a unified system, and have been widely adopted in basin-scale water resources studies. However, due to IHMs' mathematical complexity and high computational cost, it is difficult to implement them in an iterative model evaluation process (e.g., Monte Carlo Simulation, simulation-optimization analysis, etc.), which diminishes their applicability for supporting decision-making in real-world situations. Our studies investigated how to effectively use complex IHMs to address real-world water issues via surrogate modeling. Three surrogate modeling approaches were considered, including 1) DYCORS (DYnamic COordinate search using Response Surface models), a well-established response surface-based optimization algorithm; 2) SOIM (Surrogate-based Optimization for Integrated surface water-groundwater Modeling), a response surface-based optimization algorithm that we developed specifically for IHMs; and 3) Probabilistic Collocation Method (PCM), a stochastic response surface approach. Our investigation was based on a modeling case study in the Heihe River Basin (HRB), China's second largest endorheic river basin. The GSFLOW (Coupled Ground-Water and Surface-Water Flow Model) model was employed. Two decision problems were discussed. One is to optimize, both in time and in space, the conjunctive use of surface water and groundwater for agricultural irrigation in the middle HRB region; and the other is to cost-effectively collect hydrological data based on a data-worth evaluation. Overall, our study results highlight the value of incorporating an IHM in making decisions of water resources management and hydrological data collection. An IHM like GSFLOW can provide great flexibility to formulating proper objective functions and constraints for various optimization problems. On the other hand, it has been demonstrated that surrogate modeling approaches can pave the path for such incorporation in real-world situations, since they can dramatically reduce the computational cost of using IHMs in an iterative model evaluation process. In addition, our studies generated insights into the human-nature water conflicts in the specific study area and suggested potential solutions to address them.
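A minimal sketch of the response-surface idea that underlies approaches such as DYCORS and SOIM: a handful of expensive model runs are fitted with a cheap quadratic surrogate, which is then optimized in place of the expensive model. The "expensive model" below is a stand-in function, not GSFLOW:

```python
# Response-surface surrogate sketch: fit a quadratic surface to a few "expensive"
# model evaluations, then optimize the cheap surrogate. Illustrative only.
import numpy as np
from scipy.optimize import minimize

def expensive_model(x):                      # imagine hours of IHM run time here
    return (x[0] - 1.2) ** 2 + 2.0 * (x[1] + 0.5) ** 2 + 0.1 * x[0] * x[1]

def quad_features(X):
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1 * x2, x1 ** 2, x2 ** 2])

rng = np.random.default_rng(2)
X_train = rng.uniform(-3, 3, (25, 2))        # small design of experiments
y_train = np.array([expensive_model(x) for x in X_train])
coef, *_ = np.linalg.lstsq(quad_features(X_train), y_train, rcond=None)

def surrogate(x):
    return quad_features(np.atleast_2d(x)) @ coef

res = minimize(lambda x: surrogate(x)[0], x0=np.zeros(2), bounds=[(-3, 3), (-3, 3)])
print("surrogate optimum:", res.x, "true value there:", expensive_model(res.x))
```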
Chen, Z M; Ji, S B; Shi, X L; Zhao, Y Y; Zhang, X F; Jin, H
2017-02-10
Objective: To evaluate the cost-utility of different hepatitis E vaccination strategies in women aged 15 to 49. Methods: A Markov decision tree model was constructed to evaluate the cost-utility of three hepatitis E virus vaccination strategies. Parameters of the models were estimated on the basis of published studies and expert experience. Sensitivity and threshold analyses were used to evaluate the uncertainties of the model. Results: Compared with the non-vaccination group, the post-screening vaccination strategy with a 100% vaccination rate could save 0.10 quality-adjusted life years per capita among the women from the societal perspective. With the screening program implemented and the vaccination rate reaching 100%, the incremental cost-utility ratios (ICUR) of vaccination were 5 651.89 and 6 385.33 Yuan/QALY, respectively. Vaccination following the implementation of a screening program showed a better benefit than the 100% vaccination rate alone. Sensitivity analysis showed that both the cost of the hepatitis E vaccine and the inoculation compliance rate had significant effects. If the cost were lower than 191.56 Yuan (RMB) or the inoculation compliance rate lower than 0.23, the 100% vaccination strategy was better than the post-screening vaccination strategy; otherwise the post-screening vaccination strategy was optimal. Conclusion: Post-screening vaccination for women aged 15 to 49 appears to be the optimal strategy from the societal perspective, but the conclusion depends on the vaccine cost and the inoculation compliance rate.
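For reference, the incremental cost-utility ratio (ICUR) used to compare strategies against a baseline is simply the incremental cost divided by the incremental QALYs; the sketch below uses hypothetical placeholder numbers, not the study's Markov-model outputs:

```python
# ICUR sketch with hypothetical cost/QALY figures (not the study's results).
def icur(cost_new, qaly_new, cost_base, qaly_base):
    """Incremental cost per quality-adjusted life year gained (Yuan/QALY)."""
    return (cost_new - cost_base) / (qaly_new - qaly_base)

baseline = {"cost": 1200.0, "qaly": 23.10}          # no vaccination (hypothetical)
strategies = {
    "vaccinate all (100%)":   {"cost": 1750.0, "qaly": 23.19},
    "screen, then vaccinate": {"cost": 1680.0, "qaly": 23.20},
}
for name, s in strategies.items():
    print(name, round(icur(s["cost"], s["qaly"], baseline["cost"], baseline["qaly"]), 1))
```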
Coyle, Doug; Ko, Yoo-Joung; Coyle, Kathryn; Saluja, Ronak; Shah, Keya; Lien, Kelly; Lam, Henry; Chan, Kelvin K W
2017-04-01
To assess the cost-effectiveness of gemcitabine (G), G + 5-fluorouracil, G + capecitabine, G + cisplatin, G + oxaliplatin, G + erlotinib, G + nab-paclitaxel (GnP), and FOLFIRINOX in the treatment of advanced pancreatic cancer from a Canadian public health payer's perspective, using data from a recently published Bayesian network meta-analysis. The analysis was conducted through a three-state Markov model and used data on the progression of disease with treatment from the gemcitabine arms of randomized controlled trials, combined with estimates from the network meta-analysis for the newer regimens. Estimates of health care costs were obtained from local providers, and utilities were derived from the literature. The model estimates the effect of treatment regimens on costs and quality-adjusted life-years (QALYs), discounted at 5% per annum. At a willingness-to-pay (WTP) threshold greater than $30,666 per QALY, FOLFIRINOX would be the optimal regimen. For a WTP threshold of $50,000 per QALY, the probability that FOLFIRINOX would be optimal was 57.8%. There was no price reduction for nab-paclitaxel at which GnP became optimal. From a Canadian public health payer's perspective at the present time and drug prices, FOLFIRINOX is the optimal regimen on the basis of the cost-effectiveness criterion. GnP is not cost-effective regardless of the WTP threshold. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
Finding optimal vaccination strategies under parameter uncertainty using stochastic programming.
Tanner, Matthew W; Sattenspiel, Lisa; Ntaimo, Lewis
2008-10-01
We present a stochastic programming framework for finding the optimal vaccination policy for controlling infectious disease epidemics under parameter uncertainty. Stochastic programming is a popular framework for including the effects of parameter uncertainty in a mathematical optimization model. The problem is initially formulated to find the minimum cost vaccination policy under a chance-constraint. The chance-constraint requires that the probability that R(*)
NASA Astrophysics Data System (ADS)
Ganesan, T.; Elamvazuthi, I.; Shaari, Ku Zilati Ku; Vasant, P.
2012-09-01
The global rise in energy demand brings major obstacles to many energy organizations in providing an adequate energy supply. Hence, many techniques to generate cost-effective, reliable, and environmentally friendly alternative energy sources are being explored. One such method is the integration of photovoltaic cells, wind turbine generators, and fuel-based generators, together with storage batteries. Such power systems are known as distributed generation (DG) power systems. However, the application of DG power systems raises certain issues such as cost effectiveness, environmental impact, and reliability. The modelling as well as the optimization of this DG power system was successfully performed in previous work using Particle Swarm Optimization (PSO). The central idea of that work was to minimize cost, minimize emissions, and maximize reliability (a multi-objective (MO) setting) with respect to the power balance and design requirements. In this work, we introduce a fuzzy model that takes into account the uncertain nature of certain variables in the DG system which depend on the weather conditions (such as the insolation and wind speed profiles). The MO optimization in a fuzzy environment was performed by applying the Hopfield Recurrent Neural Network (HNN). An analysis of the optimized results was then carried out.
Method for Household Refrigerators Efficiency Increasing
NASA Astrophysics Data System (ADS)
Lebedev, V. V.; Sumzina, L. V.; Maksimov, A. V.
2017-11-01
The relevance of optimizing the working-process parameters of air conditioning systems is demonstrated in this work. The research is performed using the simulation modeling method. The criteria for parameter optimization are considered, the target functions are analyzed, and the key factors of technical and economic optimization are discussed. In the multi-purpose optimization of the system, the optimal solution is found by minimizing a dual-target vector formed by the Pareto method of linear and weighted compromises from the target functions of total capital costs and total operating costs. The problems are solved in the MathCAD environment. The results show that the technical and economic parameters of air conditioning systems deviate considerably from the minimum values in the regions surrounding the optimal solutions. At the same time, these deviations grow significantly, in both capital investments and operating costs, as the technical parameters move away from their optimal values. Producing and operating conditioners with parameters that deviate considerably from the optimal values will therefore increase material and power costs. The research makes it possible to establish the boundaries of the region of optimal values for the technical and economic parameters in the design of air conditioning systems.
Bicriteria Network Optimization Problem using Priority-based Genetic Algorithm
NASA Astrophysics Data System (ADS)
Gen, Mitsuo; Lin, Lin; Cheng, Runwei
Network optimization is an increasingly important and fundamental issue in fields such as engineering, computer science, operations research, transportation, telecommunication, decision support systems, manufacturing, and airline scheduling. In many applications, however, there are several criteria associated with traversing each edge of a network. For example, cost and flow measures are both important in networks. As a result, there has been recent interest in solving the Bicriteria Network Optimization Problem. The Bicriteria Network Optimization Problem is known to be NP-hard. The efficient set of paths may be very large, possibly exponential in size, so the computational effort required to solve it can increase exponentially with the problem size in the worst case. In this paper, we propose a genetic algorithm (GA) approach that uses a priority-based chromosome for solving the bicriteria network optimization problem, including the maximum flow (MXF) model and the minimum cost flow (MCF) model. The objective is to find the set of Pareto optimal solutions that give the possible maximum flow with minimum cost. This paper also incorporates the Adaptive Weight Approach (AWA), which utilizes useful information from the current population to readjust weights and obtain search pressure toward a positive ideal point. Computer simulations on several difficult-to-solve network design problems show the effectiveness of the proposed method.
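A sketch of how a priority-based chromosome is typically decoded into a path, which is the encoding idea named above: each node carries a priority gene, and the path grows from source to sink by always stepping to the eligible neighbor with the highest priority. The small network is hypothetical:

```python
# Priority-based chromosome decoding sketch for network routing (illustrative network).
import numpy as np

# adjacency list of a small directed network: node -> list of successors
adj = {0: [1, 2], 1: [2, 3], 2: [3, 4], 3: [5], 4: [5], 5: []}

def decode_path(priority, source=0, sink=5):
    """priority: one gene per node; higher value means visited earlier if reachable."""
    path, node, visited = [source], source, {source}
    while node != sink:
        candidates = [n for n in adj[node] if n not in visited]
        if not candidates:                       # dead end: decoding fails
            return None
        node = max(candidates, key=lambda n: priority[n])
        path.append(node)
        visited.add(node)
    return path

rng = np.random.default_rng(3)
chromosome = rng.permutation(6)                  # a random priority vector
print(chromosome, "->", decode_path(chromosome))
```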
Jain, Vivek; Chang, Wei; Byonanebye, Dathan M.; Owaraganise, Asiphas; Twinomuhwezi, Ellon; Amanyire, Gideon; Black, Douglas; Marseille, Elliot; Kamya, Moses R.; Havlir, Diane V.; Kahn, James G.
2015-01-01
Background Evidence favoring earlier HIV ART initiation at high CD4+ T-cell counts (CD4>350/uL) has grown, and guidelines now recommend earlier HIV treatment. However, the cost of providing ART to individuals with CD4>350 in Sub-Saharan Africa has not been well estimated. This remains a major barrier to optimal global cost projections for accelerating the scale-up of ART. Our objective was to compute costs of ART delivery to high CD4+count individuals in a typical rural Ugandan health center-based HIV clinic, and use these data to construct scenarios of efficient ART scale-up. Methods Within a clinical study evaluating streamlined ART delivery to 197 individuals with CD4+ cell counts >350 cells/uL (EARLI Study: NCT01479634) in Mbarara, Uganda, we performed a micro-costing analysis of administrative records, ART prices, and time-and-motion analysis of staff work patterns. We computed observed per-person-per-year (ppy) costs, and constructed models estimating costs under several increasingly efficient ART scale-up scenarios using local salaries, lowest drug prices, optimized patient loads, and inclusion of viral load (VL) testing. Findings Among 197 individuals enrolled in the EARLI Study, median pre-ART CD4+ cell count was 569/uL (IQR 451–716). Observed ART delivery cost was $628 ppy at steady state. Models using local salaries and only core laboratory tests estimated costs of $529/$445 ppy (+/-VL testing, respectively). Models with lower salaries, lowest ART prices, and optimized healthcare worker schedules reduced costs by $100–200 ppy. Costs in a maximally efficient scale-up model were $320/$236 ppy (+/- VL testing). This included $39 for personnel, $106 for ART, $130/$46 for laboratory tests, and $46 for administrative/other costs. A key limitation of this study is its derivation and extrapolation of costs from one large rural treatment program of high CD4+ count individuals. Conclusions In a Ugandan HIV clinic, ART delivery costs—including VL testing—for individuals with CD4>350 were similar to estimates from high-efficiency programs. In higher efficiency scale-up models, costs were substantially lower. These favorable costs may be achieved because high CD4+ count patients are often asymptomatic, facilitating more efficient streamlined ART delivery. Our work provides a framework for calculating costs of efficient ART scale-up models using accessible data from specific programs and regions. PMID:26632823
NASA Technical Reports Server (NTRS)
Sherry, Lance; Ferguson, John; Hoffman, Karla; Donohue, George; Beradino, Frank
2012-01-01
This report describes the Airline Fleet, Route, and Schedule Optimization Model (AFRS-OM), which is designed to provide insights into airline decision-making with regard to markets served, the schedule of flights on these markets, the type of aircraft assigned to each scheduled flight, load factors, airfares, and airline profits. The main inputs to the model are hedged fuel prices, airport capacity limits, and candidate markets. Embedded in the model are aircraft performance and associated cost factors, and willingness-to-pay (i.e., demand vs. airfare) curves. Case studies demonstrate the application of the model for analysis of the effects of increased capacity and changes in operating costs (e.g., fuel prices). Although there are differences between airports (due to differences in the magnitude of travel demand and sensitivity to airfare), the system is more sensitive to changes in fuel prices than to capacity. Further, the benefits of modernization in the form of increased capacity could be undermined by increases in hedged fuel prices.
Optimal Portfolio Selection Under Concave Price Impact
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma Jin, E-mail: jinma@usc.edu; Song Qingshuo, E-mail: songe.qingshuo@cityu.edu.hk; Xu Jing, E-mail: xujing8023@yahoo.com.cn
2013-06-15
In this paper we study an optimal portfolio selection problem under instantaneous price impact. Based on some empirical analysis in the literature, we model such impact as a concave function of the trading size when the trading size is small. The price impact can be thought of as either a liquidity cost or a transaction cost, but the concave nature of the cost leads to some fundamental differences from those in the existing literature. We show that the problem can be reduced to an impulse control problem, but without fixed cost, and that the value function is a viscosity solution to a special type of Quasi-Variational Inequality (QVI). We also prove directly (without using the solution to the QVI) that the optimal strategy exists and, more importantly, despite the absence of a fixed cost, it is still of a 'piecewise constant' form, reflecting a more practical perspective.
Optimality of Profit-Including Prices Under Ideal Planning
Samuelson, Paul A.
1973-01-01
Although prices calculated by a constant percentage markup on all costs (nonlabor as well as direct-labor) are usually admitted to be more realistic for a competitive capitalistic model, the view is often expressed that, for optimal planning purposes, the “values” model of Marx's Capital, Volume I, is to be preferred. It is shown here that an optimal-control model that maximizes discounted social utility of consumption per capita and that ultimately approaches a steady state must ultimately have optimal pricing that involves equal rates of steady-state profit in all industries; and such optimal pricing will necessarily deviate from Marx's model of equal rates of surplus value (markups on direct-labor only) in all industries. PMID:16592102
Optimization Under Uncertainty of Site-Specific Turbine Configurations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quick, J.; Dykes, K.; Graf, P.
Uncertainty affects many aspects of wind energy plant performance and cost. In this study, we explore opportunities for site-specific turbine configuration optimization that accounts for uncertainty in the wind resource. As a demonstration, a simple empirical model for wind plant cost of energy is used in an optimization under uncertainty to examine how different risk appetites affect the optimal selection of a turbine configuration for sites of different wind resource profiles. Lastly, if there is unusually high uncertainty in the site wind resource, the optimal turbine configuration diverges from the deterministic case and a generally more conservative design is obtained with increasing risk aversion on the part of the designer.
Cost and benefits optimization model for fault-tolerant aircraft electronic systems
NASA Technical Reports Server (NTRS)
1983-01-01
The factors involved in economic assessment of fault tolerant systems (FTS) and fault tolerant flight control systems (FTFCS) are discussed. Algorithms for optimization and economic analysis of FTFCS are documented.
Optimal regulation in systems with stochastic time sampling
NASA Technical Reports Server (NTRS)
Montgomery, R. C.; Lee, P. S.
1980-01-01
An optimal control theory that accounts for stochastic variable time sampling in a distributed microprocessor based flight control system is presented. The theory is developed by using a linear process model for the airplane dynamics and the information distribution process is modeled as a variable time increment process where, at the time that information is supplied to the control effectors, the control effectors know the time of the next information update only in a stochastic sense. An optimal control problem is formulated and solved for the control law that minimizes the expected value of a quadratic cost function. The optimal cost obtained with a variable time increment Markov information update process where the control effectors know only the past information update intervals and the Markov transition mechanism is almost identical to that obtained with a known and uniform information update interval.
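For comparison with the fixed-interval benchmark mentioned above, a discrete-time LQR computed by backward Riccati iteration minimizes the expected value of a quadratic cost; the sketch below uses illustrative A, B, Q, R matrices and does not reproduce the paper's stochastic-sampling treatment:

```python
# Discrete-time LQR via backward Riccati iteration (illustrative matrices only).
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # simple double-integrator-like dynamics
B = np.array([[0.005], [0.1]])
Q = np.diag([1.0, 0.1])
R = np.array([[0.01]])

P = Q.copy()
for _ in range(500):                      # iterate the Riccati recursion to steady state
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)

# closed-loop simulation from an initial condition, with u = -K x
x, cost = np.array([1.0, 0.0]), 0.0
for _ in range(200):
    u = -K @ x
    cost += x @ Q @ x + u @ R @ u
    x = A @ x + B @ u
print("steady-state gain K:", K, "accumulated cost:", round(float(cost), 4))
```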
NASA Astrophysics Data System (ADS)
Kumar, Anuj; Srivastava, Prashant K.
2017-03-01
In this work, an optimal control problem with vaccination and treatment as control policies is proposed and analysed for an SVIR model. We choose vaccination and treatment as control policies because both interventions have their own practical advantages and are easy to implement; they are also widely applied to control or curtail a disease. The corresponding total cost incurred is considered as a weighted combination of the cost of the opportunity loss due to infected individuals and the costs incurred in providing vaccination and treatment. The existence of optimal control paths for the problem is established and guaranteed. Further, these optimal paths are obtained analytically using Pontryagin's Maximum Principle. We analyse our results numerically to compare three important strategies of the proposed controls, viz.: vaccination only; both treatment and vaccination; and treatment only. We note that the first strategy (vaccination only) is less effective as well as expensive; though for a highly effective vaccine, vaccination alone may also work well in comparison with the treatment-only strategy. Among all the strategies, we observe that implementing both treatment and vaccination is the most effective and least expensive. Moreover, in this case the infective population is found to be relatively very low. Thus, we conclude that the comprehensive effect of vaccination and treatment not only minimizes the cost burden due to opportunity loss and the applied control policies but also keeps a tab on the infective population.
Ouyang, Qi; Lu, Wenxi; Hou, Zeyu; Zhang, Yu; Li, Shuai; Luo, Jiannan
2017-05-01
In this paper, a multi-algorithm genetically adaptive multi-objective (AMALGAM) method is proposed as a multi-objective optimization solver. It was implemented in the multi-objective optimization of a groundwater remediation design at sites contaminated by dense non-aqueous phase liquids. In this study, there were two objectives: minimization of the total remediation cost and minimization of the remediation time. A non-dominated sorting genetic algorithm II (NSGA-II) was adopted for comparison with the proposed method. For efficiency, the time-consuming surfactant-enhanced aquifer remediation simulation model was replaced by a surrogate model constructed by a multi-gene genetic programming (MGGP) technique. Similarly, two other surrogate modeling methods, support vector regression (SVR) and Kriging (KRG), were employed to make comparisons with MGGP. In addition, the surrogate-modeling uncertainty was incorporated in the optimization model by chance-constrained programming (CCP). The results showed that, for the problem considered in this study, (1) the solutions obtained by AMALGAM incurred less remediation cost and required less time than those of NSGA-II, indicating that AMALGAM outperformed NSGA-II; (2) the MGGP surrogate model was more accurate than SVR and KRG; and (3) the remediation cost and time increased with the confidence level, which can enable decision makers to make a suitable choice by considering the given budget, remediation time, and reliability. Copyright © 2017 Elsevier B.V. All rights reserved.
Gazijahani, Farhad Samadi; Ravadanegh, Sajad Najafi; Salehi, Javad
2018-02-01
The inherent volatility and unpredictable nature of renewable generation and load demand pose considerable challenges for the energy exchange optimization of microgrids (MGs). To address these challenges, this paper proposes a new risk-based multi-objective energy exchange optimization for networked MGs from economic and reliability standpoints, under load consumption and renewable power generation uncertainties. Three risk-based strategies are distinguished by using the conditional value at risk (CVaR) approach. The proposed model is formulated with two distinct objective functions: the first minimizes the operation and maintenance costs, the cost of power transactions between the upstream network and the MGs, and the power loss cost, whereas the second minimizes the energy not supplied (ENS) value. Furthermore, a stochastic scenario-based approach is incorporated into the model in order to handle the uncertainty, and the Kantorovich distance scenario reduction method is implemented to reduce the computational burden. Finally, the non-dominated sorting genetic algorithm II (NSGA-II) is applied to minimize the objective functions simultaneously, and the best solution is extracted by a fuzzy satisfying method with respect to the risk-based strategies. To demonstrate the performance of the proposed model, it is applied to the modified IEEE 33-bus distribution system, and the obtained results show that the presented approach can be considered an efficient tool for the optimal energy exchange optimization of MGs. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
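A minimal sketch of the CVaR quantity used to distinguish the risk-based strategies: the expected cost in the worst (1 - α) tail of the scenario cost distribution. The scenario costs below are randomly generated placeholders, not outputs of the microgrid model:

```python
# CVaR sketch over scenario costs (placeholder data, not the microgrid model).
import numpy as np

def cvar(costs, alpha=0.95):
    """Average of the worst (1 - alpha) fraction of scenario costs."""
    costs = np.sort(np.asarray(costs))
    var = np.quantile(costs, alpha)              # value at risk at level alpha
    tail = costs[costs >= var]
    return var, tail.mean()

rng = np.random.default_rng(4)
scenario_costs = rng.lognormal(mean=10.0, sigma=0.3, size=1000)   # hypothetical
var95, cvar95 = cvar(scenario_costs, 0.95)
print("expected cost:", round(scenario_costs.mean(), 1),
      "VaR95:", round(var95, 1), "CVaR95:", round(cvar95, 1))
```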
Uncertainty Aware Structural Topology Optimization Via a Stochastic Reduced Order Model Approach
NASA Technical Reports Server (NTRS)
Aguilo, Miguel A.; Warner, James E.
2017-01-01
This work presents a stochastic reduced order modeling strategy for the quantification and propagation of uncertainties in topology optimization. Uncertainty aware optimization problems can be computationally complex due to the substantial number of model evaluations that are necessary to accurately quantify and propagate uncertainties. This computational complexity is greatly magnified if a high-fidelity, physics-based numerical model is used for the topology optimization calculations. Stochastic reduced order model (SROM) methods are applied here to effectively 1) alleviate the prohibitive computational cost associated with an uncertainty aware topology optimization problem; and 2) quantify and propagate the inherent uncertainties due to design imperfections. A generic SROM framework that transforms the uncertainty aware, stochastic topology optimization problem into a deterministic optimization problem that relies only on independent calls to a deterministic numerical model is presented. This approach facilitates the use of existing optimization and modeling tools to accurately solve the uncertainty aware topology optimization problems in a fraction of the computational demand required by Monte Carlo methods. Finally, an example in structural topology optimization is presented to demonstrate the effectiveness of the proposed uncertainty aware structural topology optimization approach.
Policy for equipment’s leasing period extension with minimum cost of maintenance
NASA Astrophysics Data System (ADS)
Lestari, C.; Kurniati, N.
2018-04-01
The cost structure for equipment investment, including purchase cost and maintenance cost, is becoming more expensive. Companies therefore consider leasing equipment under a contractual agreement instead of purchasing it. Offering to extend the lease period, following the base lease period, provides more benefits for both the lessor (owner) and the lessee (user). Whenever the lease period extension is offered at the beginning of the contract, there are financial risks, e.g. uncertainty about the equipment performance and the lessor's responsibility. Therefore, this research models the optimal maintenance policy for a lease period extension offered at the end of the contract. Minimal repair is performed to rectify failed equipment, while imperfect preventive maintenance is conducted to improve the operational state of the equipment when it reaches a certain control limit, so as to avoid failures. The mathematical model is constructed to determine the optimal control limit, the number and degree of preventive maintenance actions, and the multiplication number of the lease period extension. Finally, numerical examples are given to illustrate the influence of the optimal length of the extended lease and the maintenance policy on minimizing the maintenance cost.
NASA Astrophysics Data System (ADS)
Arfawi Kurdhi, Nughthoh; Adi Diwiryo, Toray; Sutanto
2016-02-01
This paper presents an integrated single-vendor two-buyer production-inventory model with stochastic demand and service level constraints. Shortages are permitted in the model and are partially backordered, with the remainder becoming lost sales. The lead time demand is assumed to follow a normal distribution, and the lead time can be reduced by adding a crashing cost. The lead time and ordering cost reductions are interdependent through a logarithmic functional relationship. A service level constraint corresponding to each buyer is considered in the model in order to limit the level of inventory shortages. The purpose of this research is to minimize the joint total cost of the inventory model by finding the optimal order quantity, safety stock, lead time, and number of lots delivered in one production run. The optimal production-inventory policy, obtained by the Lagrange method, is shaped to account for the service level restrictions. Finally, a numerical example and an analysis of the effects of the key parameters are presented to illustrate the results of the proposed model.
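For normally distributed lead time demand, two quantities that typically appear in models of this type are the safety stock kσ_L and the expected shortage per cycle σ_L(φ(k) - k(1 - Φ(k))), where φ and Φ are the standard normal pdf and cdf; the sketch below evaluates them for illustrative parameter values, not the paper's:

```python
# Safety stock and expected shortage under normal lead time demand (illustrative values).
from math import sqrt
from scipy.stats import norm

def expected_shortage(k, sigma_L):
    """Expected units short per cycle for safety factor k (standard normal loss)."""
    return sigma_L * (norm.pdf(k) - k * (1.0 - norm.cdf(k)))

demand_rate, sigma_daily, lead_time_days = 40.0, 8.0, 9.0
sigma_L = sigma_daily * sqrt(lead_time_days)     # std dev of lead time demand

for k in (1.0, 1.5, 2.0):
    ss = k * sigma_L
    print(f"k={k}: safety stock={ss:.1f}, "
          f"expected shortage per cycle={expected_shortage(k, sigma_L):.2f}")
```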
Development of a Methodology to Optimally Allocate Visual Inspection Time
1989-06-01
[Fragmentary abstract.] The Alternative Model takes the costs of inspection errors into account, with the purpose of avoiding costly mistakes. The methodology considers the probability of worker error, the probability of inspector error, and the cost of system error, and uses paired comparisons of error phenomena elicited from operational personnel. Referenced works include Buck and Anderson, AIIE Transactions, Vol. 11, No. 4, December 1979, and Drury, "Inspection of Sheet Materials - Model and Data".
Design optimization of aircraft landing gear assembly under dynamic loading
NASA Astrophysics Data System (ADS)
Wong, Jonathan Y. B.
As development cycles and prototyping iterations begin to decrease in the aerospace industry, it is important to develop and improve practical methodologies to meet all design metrics. This research presents an efficient methodology that applies high-fidelity multi-disciplinary design optimization techniques to commercial landing gear assemblies for weight reduction, cost savings, and structural performance under dynamic loading. Specifically, a slave link subassembly was selected as the candidate to explore the feasibility of this methodology. The design optimization process utilized in this research was sectioned into three main stages: setup, optimization, and redesign. The first stage involved the creation and characterization of the models used throughout this research. The slave link assembly was modelled with a simplified landing gear test, replicating the behavior of the physical system. Through extensive review of the literature and collaboration with Safran Landing Systems, the dynamic and structural behavior of the system was characterized and defined mathematically. Once defined, the characterized behaviors for the slave link assembly were used to conduct a Multi-Body Dynamic (MBD) analysis to determine the dynamic and structural response of the system. These responses were then utilized in a topology optimization through the use of the Equivalent Static Load Method (ESLM). The results of the optimization were interpreted and later used to generate improved designs in terms of weight, cost, and structural performance under dynamic loading in stage three. The optimized designs were then validated using the model created for the MBD analysis of the baseline design. The design generation process employed two different approaches for post-processing the topology results produced. The first approach implemented a close replication of the topology results, resulting in a design with an overall peak stress increase of 74%, weight savings of 67%, and no apparent cost savings due to complex features present in the design. The second design approach focused on realizing reciprocating benefits for cost and weight savings. As a result, this design was able to achieve an overall peak stress increase of 6% and weight and cost savings of 36% and 60%, respectively.
Artificial bee colony algorithm for constrained possibilistic portfolio optimization problem
NASA Astrophysics Data System (ADS)
Chen, Wei
2015-07-01
In this paper, we discuss the portfolio optimization problem with real-world constraints under the assumption that the returns of risky assets are fuzzy numbers. A new possibilistic mean-semiabsolute deviation model is proposed, in which transaction costs, cardinality and quantity constraints are considered. Due to such constraints the proposed model becomes a mixed integer nonlinear programming problem and traditional optimization methods fail to find the optimal solution efficiently. Thus, a modified artificial bee colony (MABC) algorithm is developed to solve the corresponding optimization problem. Finally, a numerical example is given to illustrate the effectiveness of the proposed model and the corresponding algorithm.
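As a rough illustration of how such constraints are typically handled inside a population-based metaheuristic like the modified ABC, the sketch below evaluates a penalized objective built from triangular fuzzy returns. The possibilistic-mean convention, the downside proxy, and all numbers are assumptions rather than the paper's exact formulation.

```python
import numpy as np

# Illustrative only: triangular fuzzy returns (a, b, c) per asset; the
# possibilistic mean of a triangular number is taken here as (a + 2b + c)/4,
# a common convention, not necessarily the paper's exact definition.
fuzzy_returns = np.array([[0.01, 0.03, 0.06],
                          [0.00, 0.02, 0.05],
                          [-0.02, 0.04, 0.09],
                          [0.01, 0.02, 0.03]])

def possibilistic_mean(tri):
    a, b, c = tri.T
    return (a + 2 * b + c) / 4.0

def portfolio_fitness(w, k_max=2, lo=0.1, hi=0.6, lam=0.5, penalty=1e3):
    mu = possibilistic_mean(fuzzy_returns)
    ret = w @ mu
    # Semiabsolute-deviation proxy: downside spread of the fuzzy supports.
    downside = w @ (mu - fuzzy_returns[:, 0])
    obj = lam * downside - (1 - lam) * ret
    # Penalties for cardinality and quantity constraints (the device a
    # penalty-based metaheuristic would use to keep candidates feasible).
    active = w > 1e-9
    viol = max(0, active.sum() - k_max)
    viol += np.sum(np.where(active, np.maximum(lo - w, 0) + np.maximum(w - hi, 0), 0))
    viol += abs(w.sum() - 1.0)
    return obj + penalty * viol

w = np.array([0.5, 0.5, 0.0, 0.0])
print("penalized objective:", portfolio_fitness(w))
```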
Optimization Research of Generation Investment Based on Linear Programming Model
NASA Astrophysics Data System (ADS)
Wu, Juan; Ge, Xueqian
Linear programming is an important branch of operations research and a mathematical method that supports scientific management. GAMS is an advanced simulation and optimization modeling language that combines complex mathematical programming formulations, such as linear programming (LP), nonlinear programming (NLP), and mixed-integer programming (MIP), with system simulation. In this paper, based on a linear programming model, the optimized investment decision-making for generation is simulated and analyzed. Finally, the optimal installed capacity of the power plants and the final total cost are obtained, which provides a rational basis for optimized investment decisions.
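A minimal sketch of the kind of LP involved, with hypothetical costs and demands and solved here with SciPy rather than GAMS, is:

```python
from scipy.optimize import linprog

# Illustrative LP, not the paper's GAMS model: choose installed capacity
# x = [coal_MW, gas_MW] to minimize annualized investment plus fuel cost
# subject to a peak-demand constraint and an annual-energy constraint.
inv_cost = [60_000.0, 35_000.0]      # $/MW-year (hypothetical)
fuel_cost = [120_000.0, 200_000.0]   # $/MW-year at assumed capacity factors
cap_factor = [0.85, 0.55]
peak_demand = 1_200.0                # MW
energy_demand = 6.0e6                # MWh/year

c = [inv_cost[i] + fuel_cost[i] for i in range(2)]
# Constraints written as A_ub @ x <= b_ub, so ">=" rows are negated.
A_ub = [[-1.0, -1.0],                                        # capacity covers peak
        [-cap_factor[0] * 8760.0, -cap_factor[1] * 8760.0]]  # energy coverage
b_ub = [-peak_demand, -energy_demand]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print("optimal capacity (MW):", res.x, " total cost ($/yr):", round(res.fun))
```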
DOE Office of Scientific and Technical Information (OSTI.GOV)
Al-Karaghouli, Ali; Kazmerski, L.L.
2010-04-15
This paper addresses the need for electricity in rural areas of southern Iraq and proposes a photovoltaic (PV) solar system to power a health clinic in that region. The total daily health clinic load is 31.6 kW h and detailed loads are listed. The National Renewable Energy Laboratory (NREL) optimization computer model for distributed power, "HOMER," is used to estimate the system size and its life-cycle cost. The analysis shows that the optimal system's initial cost, net present cost, and electricity cost are US$ 50,700, US$ 60,375, and US$ 0.238/kW h, respectively. These values for the PV system are compared with those of a generator used alone to supply the load. We found that the initial cost, net present cost, and electricity cost of the generator system are US$ 4500, US$ 352,303, and US$ 1.332/kW h, respectively. We conclude that using the PV system is justified on humanitarian, technical, and economic grounds. (author)
NASA Astrophysics Data System (ADS)
Moslehi, M.; de Barros, F.; Rajagopal, R.
2014-12-01
Hydrogeological models that represent flow and transport in subsurface domains are usually large-scale, with excessive computational complexity and uncertain characteristics. Uncertainty quantification for predicting flow and transport in heterogeneous formations often entails utilizing a numerical Monte Carlo framework, which repeatedly simulates the model according to a random field representing the hydrogeological characteristics of the field. The physical resolution (e.g. grid resolution associated with the physical space) for the simulation is customarily chosen based on recommendations in the literature, independent of the number of Monte Carlo realizations. This practice may lead to either excessive computational burden or inaccurate solutions. We propose an optimization-based methodology that considers the trade-off between the following conflicting objectives: time associated with computational costs, statistical convergence of the model predictions, and physical errors corresponding to numerical grid resolution. In this research, we optimally allocate computational resources by developing a modeling framework for the overall error based on a joint statistical and numerical analysis and optimizing the error model subject to a given computational constraint. The derived expression for the overall error explicitly takes into account the joint dependence between the discretization error of the physical space and the statistical error associated with Monte Carlo realizations. The accuracy of the proposed framework is verified in this study by applying it to several computationally extensive examples. This framework enables hydrogeologists to determine the optimum physical and statistical resolutions that minimize the error for a given computational budget. Moreover, the influence of the available computational resources and the geometric properties of the contaminant source zone on the optimum resolutions is investigated. We conclude that the computational cost associated with optimal allocation can be substantially reduced compared with prevalent recommendations in the literature.
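The allocation idea can be sketched with a toy error model; the coefficients, exponents, and budget below are placeholders for the quantities the paper derives from its joint statistical and numerical analysis.

```python
import numpy as np

# Minimal sketch of the resource-allocation idea (coefficients are
# hypothetical): total error is modeled as a discretization term C1*h**p
# plus a statistical term C2/sqrt(M), with total cost M * (1/h)**d limited
# by a budget. We scan grid sizes h and take the largest M the budget allows.
C1, C2, p, d = 5.0, 2.0, 2.0, 3.0
budget = 1.0e7        # abstract "work units"

best = None
for h in np.logspace(-2, -0.5, 50):
    cost_per_run = (1.0 / h) ** d
    M = int(budget / cost_per_run)
    if M < 2:
        continue
    err = C1 * h ** p + C2 / np.sqrt(M)
    if best is None or err < best[0]:
        best = (err, h, M)

err, h, M = best
print(f"optimal grid size h ~ {h:.4f}, realizations M = {M}, modeled error {err:.4f}")
```

The joint dependence highlighted in the abstract appears here as the budget constraint coupling h and M: refining the grid shrinks the discretization term but forces fewer realizations and a larger statistical term.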
The Preventive Control of a Dengue Disease Using Pontryagin Minimum Principle
NASA Astrophysics Data System (ADS)
Ratna Sari, Eminugroho; Insani, Nur; Lestari, Dwi
2017-06-01
Behaviour analysis of the host-vector dengue disease model without control is based on the basic reproduction number obtained using next-generation matrices. The model is then further developed to include a preventive control that minimizes contact between host and vector. The purpose is to obtain an optimal preventive strategy with minimal cost. The Pontryagin Minimum Principle is used to find the optimal control analytically. The resulting optimality system is then solved numerically to investigate the control effort required to reduce the infected class.
Flexible Approximation Model Approach for Bi-Level Integrated System Synthesis
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw; Kim, Hongman; Ragon, Scott; Soremekun, Grant; Malone, Brett
2004-01-01
Bi-Level Integrated System Synthesis (BLISS) is an approach that allows design problems to be naturally decomposed into a set of subsystem optimizations and a single system optimization. In the BLISS approach, approximate mathematical models are used to transfer information from the subsystem optimizations to the system optimization. Accurate approximation models are therefore critical to the success of the BLISS procedure. In this paper, new capabilities that are being developed to generate accurate approximation models for the BLISS procedure are described. The benefits of using flexible approximation models such as Kriging are demonstrated in terms of convergence characteristics and computational cost. An approach for dealing with cases where a subsystem optimization cannot find a feasible design is also investigated, using the new flexible approximation models for the violated local constraints.
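As a rough sketch of what such a flexible approximation model does, the snippet below fits a Kriging-style Gaussian-kernel interpolator to a handful of hypothetical subsystem responses and predicts at new design points; it is illustrative only and not the implementation used in BLISS.

```python
import numpy as np

# Minimal Kriging-style surrogate: a Gaussian-kernel interpolator with a
# small nugget, standing in for the flexible approximation models used to
# pass subsystem behavior up to the system-level optimization.
def fit_surrogate(X, y, length_scale=0.5, nugget=1e-8):
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    K = np.exp(-0.5 * (d / length_scale) ** 2) + nugget * np.eye(len(X))
    alpha = np.linalg.solve(K, y)
    return X, alpha, length_scale

def predict(model, Xnew):
    Xtr, alpha, ls = model
    d = np.linalg.norm(Xnew[:, None, :] - Xtr[None, :, :], axis=-1)
    k = np.exp(-0.5 * (d / ls) ** 2)
    return k @ alpha

# Hypothetical subsystem response sampled at a few design points.
rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(12, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2
model = fit_surrogate(X, y)
Xq = np.array([[0.3, 0.7], [0.8, 0.2]])
print("surrogate predictions:", predict(model, Xq))
```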
NASA Astrophysics Data System (ADS)
Hamdi, Hadiwardoyo, Sigit P.; Correia, A. Gomes; Pereira, Paulo
2017-06-01
A road network requires timely maintenance to keep the road surface in good condition and to provide better service, improving accessibility and mobility. Strategies and maintenance techniques must be chosen in order to maximize the road service level through cost-effective interventions. This approach requires an up-to-date database; for the road network in Indonesia, this is supported by manual and visual surveys and by the NAASRA profiler. In this paper, a deterministic deterioration model is used. The optimization model uses life cycle cost analysis (LCCA), applied in an integrated manner with the IRI indicator, and allows determining the priority of treatment, the type of treatment, and its relation to cost. This paper focuses on aspects of road maintenance management, i.e., maintenance optimization models for different traffic levels and various initial road distress conditions on the national road network in Indonesia. The implementation of the Integrated Road Management System (IRMS) can provide a solution to the problem of cost constraints in maintaining the national road network. The results of this study show that the lower the agency cost, the higher the resulting user cost. For the target plan scenario Pl000 with an initial IRI value of 2, routine maintenance throughout the year, early reconstruction, and periodic maintenance with a 30 mm thick overlay simultaneously provide a higher net benefit value and the lowest total cost of transportation.
NASA Astrophysics Data System (ADS)
Sutrisno, Widowati, Tjahjana, R. Heru
2017-12-01
Future costs in many industrial problems are inherently uncertain, so a mathematical analysis for problems with uncertain costs is needed. In this article, we use fuzzy expected value analysis to solve an integrated supplier selection and inventory control problem with uncertain costs, where the cost uncertainty is modeled by a fuzzy variable. We formulate the problem as a fuzzy expected value based quadratic optimization model with a total cost objective function and solve it using expected value based fuzzy programming. In the numerical examples, the supplier selection problem was solved: the optimal supplier was selected for each time period, the optimal volume of each product to be purchased from each supplier in each period was determined, and the product stock level was controlled so that it followed the given reference level.
A Goal Programming Optimization Model for The Allocation of Liquid Steel Production
NASA Astrophysics Data System (ADS)
Hapsari, S. N.; Rosyidi, C. N.
2018-03-01
This research was conducted in one of the largest steel companies in Indonesia, which has several production units and produces a wide range of steel products. One of the important products of the company is billet steel. The company has four Electric Arc Furnaces (EAFs), which produce liquid steel that must be processed further into billet steel. The billet steel plant needs to make its production process more efficient to increase productivity. The management has several goals to be achieved, and hence an optimal allocation of liquid steel production is needed to achieve those goals. In this paper, a goal programming optimization model is developed to determine the optimal allocation of liquid steel production to each EAF, to satisfy demand in 3 periods and the company's goals, namely maximizing the production volume, minimizing the cost of raw materials, minimizing maintenance costs, maximizing sales revenues, and maximizing production capacity. The optimization results show that only the production capacity goal fails to reach its target. However, the model developed in this paper can optimally allocate liquid steel so that the production allocation does not exceed the maximum machine working hours and the maximum production capacity.
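A minimal goal-programming sketch (invented numbers, two furnaces, one period, two goals) shows how deviation variables turn multiple targets into a single LP; the real model has more goals, periods, and furnaces.

```python
from scipy.optimize import linprog

# Minimal goal-programming sketch with made-up numbers: allocate liquid
# steel production x1, x2 (tons) to two EAFs with two goals (hit a
# production target and stay within a raw-material budget), modeled via
# deviation variables (d_minus = under-achievement, d_plus = over-achievement).
target_prod = 900.0        # tons
budget = 300_000.0         # raw-material budget
unit_cost = [320.0, 350.0] # $/ton at each EAF
capacity = [600.0, 500.0]  # tons

# Decision vector: [x1, x2, d1m, d1p, d2m, d2p]
# Penalize under-production (d1m) and cost over-run (d2p).
c = [0, 0, 1.0, 0, 0, 1.0 / 1000.0]   # scale cost deviation to comparable units

A_eq = [[1, 1, 1, -1, 0, 0],                        # x1 + x2 + d1m - d1p = target
        [unit_cost[0], unit_cost[1], 0, 0, 1, -1]]  # cost + d2m - d2p = budget
b_eq = [target_prod, budget]
bounds = [(0, capacity[0]), (0, capacity[1])] + [(0, None)] * 4

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
x1, x2, d1m, d1p, d2m, d2p = res.x
print(f"allocation: EAF1={x1:.0f} t, EAF2={x2:.0f} t, "
      f"under-production={d1m:.0f} t, cost over-run={d2p:,.0f} $")
```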
NASA Astrophysics Data System (ADS)
Li, You-Rong; Du, Mei-Tang; Wang, Jian-Ning
2012-12-01
This paper focuses on an evaporator using a binary mixture of organic working fluids in the organic Rankine cycle. Exergoeconomic analysis and performance optimization were performed based on the first and second laws of thermodynamics and on exergoeconomic theory. The annual total cost per unit heat transfer rate was introduced as the objective function. In this model, the exergy loss cost caused by the heat transfer irreversibility and the capital cost were taken into account; however, the exergy losses due to the frictional pressure drops, heat dissipation to the surroundings, and the flow imbalance were neglected. The variation of the annual total cost with respect to the number of transfer units and the temperature ratios was presented. Optimal design parameters that minimize the objective function were obtained, and the effects of some important dimensionless parameters on the optimal performance were also discussed for three types of evaporator flow arrangements. In addition, the optimal design parameters of evaporators were compared with those of condensers.
The extension of the thermal-vacuum test optimization program to multiple flights
NASA Technical Reports Server (NTRS)
Williams, R. E.; Byrd, J.
1981-01-01
The thermal vacuum test optimization model developed to provide an approach to the optimization of a test program based on prediction of flight performance with a single flight option in mind is extended to consider reflight as in space shuttle missions. The concept of 'utility', developed under the name of 'availability', is used to follow performance through the various options encountered when the capabilities of reflight and retrievability of space shuttle are available. Also, a 'lost value' model is modified to produce a measure of the probability of a mission's success, achieving a desired utility using a minimal cost test strategy. The resulting matrix of probabilities and their associated costs provides a means for project management to evaluate various test and reflight strategies.
Gain optimization with non-linear controls
NASA Technical Reports Server (NTRS)
Slater, G. L.; Kandadai, R. D.
1984-01-01
An algorithm has been developed for the analysis and design of controls for non-linear systems. The technical approach is to use statistical linearization to model the non-linear dynamics of a system by a quasi-Gaussian model. A covariance analysis is performed to determine the behavior of the dynamical system and a quadratic cost function. Expressions for the cost function and its derivatives are determined so that numerical optimization techniques can be applied to determine optimal feedback laws. The primary application for this paper is centered about the design of controls for nominally linear systems but where the controls are saturated or limited by fixed constraints. The analysis is general, however, and numerical computation requires only that the specific non-linearity be considered in the analysis.
Assessing cost-effectiveness of specific LID practice designs in response to large storm events
NASA Astrophysics Data System (ADS)
Chui, Ting Fong May; Liu, Xin; Zhan, Wenting
2016-02-01
Low impact development (LID) practices have become more important in urban stormwater management worldwide. However, most research on design optimization focuses on relatively large scales, and there is very limited information or guidance regarding individual LID practice designs (i.e., optimal depth, width and length). The objective of this study is to identify the optimal design by assessing the hydrological performance and the cost-effectiveness of different designs of LID practices at a household or business scale, and to analyze the sensitivity of the hydrological performance and the cost of the optimal design to different model and design parameters. First, EPA SWMM, automatically controlled by MATLAB, is used to obtain the peak runoff of different designs of three specific LID practices (i.e., green roof, bioretention and porous pavement) under different design storms (i.e., 2 yr and 50 yr design storms of Hong Kong, China and Seattle, U.S.). Then, the life cycle cost is estimated for the different designs, and the optimal design, defined as the design with the lowest cost and at least 20% peak runoff reduction, is identified. Finally, the sensitivity of the optimal design to the different design parameters is examined. The optimal design of the green roof tends to be larger in area but thinner, while the optimal designs of the bioretention and porous pavement tend to be smaller in area. To handle larger storms, however, it is more effective to increase the green roof depth, and to increase the area of the bioretention and porous pavement. Porous pavement is the most cost-effective for peak flow reduction, followed by bioretention and then green roof. The cost-effectiveness, measured as the peak runoff reduction per thousand US dollars of LID practices, is lower in Hong Kong (e.g., 0.02, 0.15 and 0.93 L/s per 10³ US$ for green roof, bioretention and porous pavement for the 2 yr storm) than in Seattle (e.g., 0.03, 0.29 and 1.58 L/s per 10³ US$ for green roof, bioretention and porous pavement for the 2 yr storm). The optimal designs are influenced by the model and design parameters (i.e., initial saturation, hydraulic conductivity and berm height). However, these overall do not affect the main trends and key insights derived, and the results are therefore generic and relevant to the household/business-scale optimal design of LID practices worldwide.
Sizing a rainwater harvesting cistern by minimizing costs
NASA Astrophysics Data System (ADS)
Pelak, Norman; Porporato, Amilcare
2016-10-01
Rainwater harvesting (RWH) has the potential to reduce water-related costs by providing an alternate source of water, in addition to relieving pressure on public water sources and reducing stormwater runoff. Existing methods for determining the optimal size of the cistern component of a RWH system have various drawbacks, such as specificity to a particular region, dependence on numerical optimization, and/or failure to consider the costs of the system. In this paper a formulation is developed for the optimal cistern volume which incorporates the fixed and distributed costs of a RWH system while also taking into account the random nature of the depth and timing of rainfall, with a focus on RWH to supply domestic, nonpotable uses. With rainfall inputs modeled as a marked Poisson process, and by comparing the costs associated with building a cistern with the costs of externally supplied water, an expression for the optimal cistern volume is found which minimizes the water-related costs. The volume is a function of the roof area, water use rate, climate parameters, and costs of the cistern and of the external water source. This analytically tractable expression makes clear the dependence of the optimal volume on the input parameters. An analysis of the rainfall partitioning also characterizes the efficiency of a particular RWH system configuration and its potential for runoff reduction. The results are compared to the RWH system at the Duke Smart Home in Durham, NC, USA to show how the method could be used in practice.
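The paper derives a closed-form optimum; purely as an illustration of the underlying cost trade-off, the following Monte Carlo sketch (all parameters hypothetical) scans cistern volumes and balances cistern cost against the cost of externally supplied make-up water.

```python
import numpy as np

# Rainfall as a marked Poisson process (exponential storm depths), a fixed
# daily nonpotable demand, and a lifetime cost = cistern cost + cost of the
# external water bought whenever the cistern runs dry. All numbers invented.
rng = np.random.default_rng(0)
lam, mean_depth = 0.25, 10.0     # storms/day, mm per storm
roof_area = 150.0                # m^2
demand = 0.3                     # m^3/day
years = 20
c_fixed, c_vol = 500.0, 120.0    # $ and $/m^3 of cistern
c_water = 2.0                    # $/m^3 of external water

def lifetime_cost(V, days=365 * years, reps=20):
    ext_total = 0.0
    for _ in range(reps):
        storage, ext = 0.0, 0.0
        for _ in range(days):
            if rng.random() < lam:                       # a storm occurs today
                inflow = rng.exponential(mean_depth) / 1000.0 * roof_area
                storage = min(V, storage + inflow)       # overflow spills
            use = min(storage, demand)
            ext += demand - use                          # shortfall bought externally
            storage -= use
        ext_total += ext
    return c_fixed + c_vol * V + c_water * ext_total / reps

volumes = np.arange(0.5, 8.0, 0.5)
costs = [lifetime_cost(V) for V in volumes]
print("optimal volume ~", volumes[int(np.argmin(costs))], "m^3")
```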
Optimization strategies for sediment reduction practices on roads in steep, forested terrain
Madej, Mary Ann; Eschenbach, E.A.; Diaz, C.; Teasley, R.; Baker, K.
2006-01-01
Many forested steeplands in the western United States display a legacy of disturbances due to timber harvest, mining or wildfires, for example. Such disturbances have caused accelerated hillslope erosion, leading to increased sedimentation in fish-bearing streams. Several restoration techniques have been implemented to address these problems in mountain catchments, many of which involve the removal of abandoned roads and re-establishing drainage networks across road prisms. With limited restoration funds to be applied across large catchments, land managers are faced with deciding which areas and problems should be treated first, and by which technique, in order to design the most effective and cost-effective sediment reduction strategy. Currently most restoration is conducted on a site-specific scale according to uniform treatment policies. To create catchment-scale policies for restoration, we developed two optimization models - dynamic programming and genetic algorithms - to determine the most cost-effective treatment level for roads and stream crossings in a pilot study basin with approximately 700 road segments and crossings. These models considered the trade-offs between the cost and effectiveness of different restoration strategies to minimize the predicted erosion from all forest roads within a catchment, while meeting a specified budget constraint. The optimal sediment reduction strategies developed by these models performed much better than two strategies of uniform erosion control which are commonly applied to road erosion problems by land managers, with sediment savings increased by an additional 48 to 80 per cent. These optimization models can be used to formulate the most cost-effective restoration policy for sediment reduction on a catchment scale. Thus, cost savings can be applied to further restoration work within the catchment. Nevertheless, the models are based on erosion rates measured on past restoration sites, and need to be updated as additional monitoring studies evaluate long-term basin response to erosion control treatments. Copyright © 2006 John Wiley & Sons, Ltd.
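The budget-constrained choice among treatment levels can be illustrated with a small dynamic program; the data below are invented and the recursion is a generic multiple-choice knapsack, not the authors' calibrated erosion model.

```python
# Minimal dynamic-programming sketch (illustrative data only) of the
# budget-constrained choice among treatment levels for each road segment:
# a multiple-choice knapsack maximizing predicted sediment savings.
segments = [  # (cost in $k, sediment saved in m^3) per level; level 0 = no action
    [(0, 0), (5, 40), (12, 70)],
    [(0, 0), (8, 55), (20, 90)],
    [(0, 0), (4, 25), (10, 60)],
]
budget = 25  # $k, discretized in $1k steps

NEG = float("-inf")
best = [NEG] * (budget + 1)
best[0] = 0.0
for options in segments:           # exactly one treatment level per segment
    new = [NEG] * (budget + 1)
    for spent in range(budget + 1):
        if best[spent] == NEG:
            continue
        for cost, saved in options:
            if spent + cost <= budget:
                new[spent + cost] = max(new[spent + cost], best[spent] + saved)
    best = new

print("max sediment saved within budget:", max(best), "m^3")
```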
Prepositioning emergency supplies under uncertainty: a parametric optimization method
NASA Astrophysics Data System (ADS)
Bai, Xuejie; Gao, Jinwu; Liu, Yankui
2018-07-01
Prepositioning of emergency supplies is an effective method for increasing preparedness for disasters and has received much attention in recent years. In this article, the prepositioning problem is studied by a robust parametric optimization method. The transportation cost, supply, demand and capacity are unknown prior to the extraordinary event, which are represented as fuzzy parameters with variable possibility distributions. The variable possibility distributions are obtained through the credibility critical value reduction method for type-2 fuzzy variables. The prepositioning problem is formulated as a fuzzy value-at-risk model to achieve a minimum total cost incurred in the whole process. The key difficulty in solving the proposed optimization model is to evaluate the quantile of the fuzzy function in the objective and the credibility in the constraints. The objective function and constraints can be turned into their equivalent parametric forms through chance constrained programming under the different confidence levels. Taking advantage of the structural characteristics of the equivalent optimization model, a parameter-based domain decomposition method is developed to divide the original optimization problem into six mixed-integer parametric submodels, which can be solved by standard optimization solvers. Finally, to explore the viability of the developed model and the solution approach, some computational experiments are performed on realistic scale case problems. The computational results reported in the numerical example show the credibility and superiority of the proposed parametric optimization method.
Optimization of Location–Routing Problem for Cold Chain Logistics Considering Carbon Footprint
Wang, Songyi; Tao, Fengming; Shi, Yuhe
2018-01-01
In order to solve the optimization problem of a fresh-food logistics distribution system, this paper adopts a low-carbon, environmental-protection point of view based on the characteristics of perishable products and combines it with the overall optimization of the cold chain logistics distribution network. A green and low-carbon location-routing problem (LRP) model for cold chain logistics is developed with the minimum total cost, including carbon emission costs, as the objective function. A hybrid genetic algorithm with heuristic rules is designed to solve the model, and an example is used to verify the effectiveness of the algorithm. Furthermore, the simulation results obtained for a practical numerical example show the applicability of the model while providing green and environmentally friendly location-distribution schemes for the cold chain logistics enterprise. Finally, carbon tax policies are introduced to analyze the impact of the carbon tax on total costs and carbon emissions, which shows that a carbon tax policy can effectively reduce carbon dioxide emissions in the cold chain logistics network. PMID:29316639
Advanced Energy Storage Management in Distribution Network
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Guodong; Ceylan, Oguzhan; Xiao, Bailu
2016-01-01
With increasing penetration of distributed generation (DG) in distribution networks (DN), the secure and optimal operation of the DN has become an important concern. In this paper, an iterative mixed-integer quadratically constrained quadratic programming model is developed to optimize the operation of a three-phase unbalanced distribution system with high penetration of photovoltaic (PV) panels, DG and energy storage (ES). The proposed model minimizes not only the operating cost, including fuel cost and purchasing cost, but also voltage deviations and power loss. The optimization model is based on the linearized sensitivity coefficients between state variables (e.g., node voltages) and control variables (e.g., real and reactive power injections of DG and ES). To avoid slow convergence when close to the optimum, a golden search method is introduced to control the step size and accelerate the convergence. The proposed algorithm is demonstrated on modified IEEE 13-node test feeders with multiple PV panels, DG and ES. Numerical simulation results validate the proposed algorithm. Various scenarios of system configuration are studied and some critical findings are concluded.
Statistical Optimality in Multipartite Ranking and Ordinal Regression.
Uematsu, Kazuki; Lee, Yoonkyung
2015-05-01
Statistical optimality in multipartite ranking is investigated as an extension of bipartite ranking. We consider the optimality of ranking algorithms through minimization of the theoretical risk, which combines pairwise ranking errors of ordinal categories with differential ranking costs. The extension shows that for a certain class of convex loss functions, including exponential loss, the optimal ranking function can be represented as a ratio of the weighted conditional probability of upper categories to lower categories, where the weights are given by the misranking costs. This result also bridges traditional ranking methods, such as the proportional odds model in statistics, with various ranking algorithms in machine learning. Further, the analysis of multipartite ranking with different costs provides a new perspective on non-smooth list-wise ranking measures such as the discounted cumulative gain and preference learning. We illustrate our findings with a simulation study and real data analysis.
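The structure of that optimal score, a ratio of cost-weighted conditional probabilities of upper to lower categories, can be illustrated directly; the weights and split point below are placeholders, not the paper's exact cost matrix.

```python
import numpy as np

# Illustrative sketch of the structure of the optimal score: a ratio of
# cost-weighted conditional probabilities of "upper" vs. "lower" ordinal
# categories. The weights w_j are stand-ins for the misranking costs; the
# exact weighting in the paper depends on its cost specification.
def ranking_score(p_cond, w, split):
    # p_cond[j] = P(Y = j | x) for ordinal categories j = 0..k-1,
    # split = index separating lower from upper categories.
    upper = np.dot(w[split:], p_cond[split:])
    lower = np.dot(w[:split], p_cond[:split])
    return upper / max(lower, 1e-12)

p_x1 = np.array([0.10, 0.20, 0.30, 0.40])   # instance likely in a high category
p_x2 = np.array([0.50, 0.30, 0.15, 0.05])   # instance likely in a low category
w = np.array([1.0, 1.0, 2.0, 3.0])          # hypothetical misranking costs
print(ranking_score(p_x1, w, split=2), ">", ranking_score(p_x2, w, split=2))
```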
Cost-Effective Cloud Computing: A Case Study Using the Comparative Genomics Tool, Roundup
Kudtarkar, Parul; DeLuca, Todd F.; Fusaro, Vincent A.; Tonellato, Peter J.; Wall, Dennis P.
2010-01-01
Background Comparative genomics resources, such as ortholog detection tools and repositories are rapidly increasing in scale and complexity. Cloud computing is an emerging technological paradigm that enables researchers to dynamically build a dedicated virtual cluster and may represent a valuable alternative for large computational tools in bioinformatics. In the present manuscript, we optimize the computation of a large-scale comparative genomics resource—Roundup—using cloud computing, describe the proper operating principles required to achieve computational efficiency on the cloud, and detail important procedures for improving cost-effectiveness to ensure maximal computation at minimal costs. Methods Utilizing the comparative genomics tool, Roundup, as a case study, we computed orthologs among 902 fully sequenced genomes on Amazon’s Elastic Compute Cloud. For managing the ortholog processes, we designed a strategy to deploy the web service, Elastic MapReduce, and maximize the use of the cloud while simultaneously minimizing costs. Specifically, we created a model to estimate cloud runtime based on the size and complexity of the genomes being compared that determines in advance the optimal order of the jobs to be submitted. Results We computed orthologous relationships for 245,323 genome-to-genome comparisons on Amazon’s computing cloud, a computation that required just over 200 hours and cost $8,000 USD, at least 40% less than expected under a strategy in which genome comparisons were submitted to the cloud randomly with respect to runtime. Our cost savings projections were based on a model that not only demonstrates the optimal strategy for deploying RSD to the cloud, but also finds the optimal cluster size to minimize waste and maximize usage. Our cost-reduction model is readily adaptable for other comparative genomics tools and potentially of significant benefit to labs seeking to take advantage of the cloud as an alternative to local computing infrastructure. PMID:21258651
A hydroeconomic modeling framework for optimal integrated management of forest and water
NASA Astrophysics Data System (ADS)
Garcia-Prats, Alberto; del Campo, Antonio D.; Pulido-Velazquez, Manuel
2016-10-01
Forests play a determinant role in the hydrologic cycle, with water being the most important ecosystem service they provide in semiarid regions. However, this contribution is usually neither quantified nor explicitly valued. The aim of this study is to develop a novel hydroeconomic modeling framework for assessing and designing the optimal integrated forest and water management for forested catchments. The optimization model explicitly integrates changes in water yield in the stands (increase in groundwater recharge) induced by forest management and the value of the additional water provided to the system. The model determines the optimal schedule of silvicultural interventions in the stands of the catchment in order to maximize the total net benefit in the system. Canopy cover and biomass evolution over time were simulated using growth and yield allometric equations specific for the species in Mediterranean conditions. Silvicultural operation costs according to stand density and canopy cover were modeled using local cost databases. Groundwater recharge was simulated using HYDRUS, calibrated and validated with data from the experimental plots. In order to illustrate the presented modeling framework, a case study was carried out in a planted pine forest (Pinus halepensis Mill.) located in south-western Valencia province (Spain). The optimized scenario increased groundwater recharge. This novel modeling framework can be used in the design of a "payment for environmental services" scheme in which water beneficiaries could contribute to fund and promote efficient forest management operations.
Quantifying the costs and benefits of privacy-preserving health data publishing.
Khokhar, Rashid Hussain; Chen, Rui; Fung, Benjamin C M; Lui, Siu Man
2014-08-01
Cost-benefit analysis is a prerequisite for making good business decisions. In the business environment, companies intend to make profit from maximizing information utility of published data while having an obligation to protect individual privacy. In this paper, we quantify the trade-off between privacy and data utility in health data publishing in terms of monetary value. We propose an analytical cost model that can help health information custodians (HICs) make better decisions about sharing person-specific health data with other parties. We examine relevant cost factors associated with the value of anonymized data and the possible damage cost due to potential privacy breaches. Our model guides an HIC to find the optimal value of publishing health data and could be utilized for both perturbative and non-perturbative anonymization techniques. We show that our approach can identify the optimal value for different privacy models, including K-anonymity, LKC-privacy, and ∊-differential privacy, under various anonymization algorithms and privacy parameters through extensive experiments on real-life data. Copyright © 2014 Elsevier Inc. All rights reserved.
Urban water infrastructure optimization to reduce environmental impacts and costs.
Lim, Seong-Rin; Suh, Sangwon; Kim, Jung-Hoon; Park, Hung Suck
2010-01-01
Urban water planning and policy have been focusing on environmentally benign and economically viable water management. The objective of this study is to develop a mathematical model to integrate and optimize urban water infrastructures for supply-side planning and policy: freshwater resources and treated wastewater are allocated to various water demand categories in order to reduce contaminants in the influents supplied for drinking water, and to reduce consumption of the water resources imported from the regions beyond a city boundary. A case study is performed to validate the proposed model. An optimal urban water system of a metropolitan city is calculated on the basis of the model and compared to the existing water system. The integration and optimization decrease (i) average concentrations of the influents supplied for drinking water, which can improve human health and hygiene; (ii) total consumption of water resources, as well as electricity, reducing overall environmental impacts; (iii) life cycle cost; and (iv) water resource dependency on other regions, improving regional water security. This model contributes to sustainable urban water planning and policy. Copyright © 2009 Elsevier Ltd. All rights reserved.
Wu, Fei; Sioshansi, Ramteen
2017-05-25
Electric vehicles (EVs) hold promise to improve the energy efficiency and environmental impacts of transportation. However, widespread EV use can impose significant stress on electricity-distribution systems due to their added charging loads. This paper proposes a centralized EV charging-control model, which schedules the charging of EVs that have flexibility. This flexibility stems from EVs that are parked at the charging station for a longer duration of time than is needed to fully recharge the battery. The model is formulated as a two-stage stochastic optimization problem. The model captures the use of distributed energy resources and uncertainties around EV arrival times and charging demands upon arrival, non-EV loads on the distribution system, energy prices, and availability of energy from the distributed energy resources. We use a Monte Carlo-based sample-average approximation technique and an L-shaped method to solve the resulting optimization problem efficiently. We also apply a sequential sampling technique to dynamically determine the optimal size of the randomly sampled scenario tree to give a solution with a desired quality at minimal computational cost. Here, we demonstrate the use of our model on a Central-Ohio-based case study. We show the benefits of the model in reducing charging costs, negative impacts on the distribution system, and unserved EV-charging demand compared to simpler heuristics. Lastly, we also conduct sensitivity analyses, to show how the model performs and the resulting costs and load profiles when the design of the station or EV-usage parameters are changed.
FACTS Devices Cost Recovery During Congestion Management in Deregulated Electricity Markets
NASA Astrophysics Data System (ADS)
Sharma, Ashwani Kumar; Mittapalli, Ram Kumar; Pal, Yash
2016-09-01
In future electricity markets, flexible alternating current transmission system (FACTS) devices will play a key role in providing ancillary services. Since a huge cost is involved in placing FACTS devices in the power system, the investment has to be recovered over their lifetime to allow replacement of these devices. FACTS devices in future electricity markets can act as ancillary service providers and have to be remunerated. The main contributions of the paper are: (1) investment recovery of FACTS devices during congestion management, for the static VAR compensator and the unified power flow controller along with the thyristor controlled series compensator, using non-linear bid curves; (2) the impact of the ZIP load model on the cost recovery of the FACTS devices; (3) the comparison with results obtained without the ZIP load model for both the pool and hybrid market models; (4) the incorporation of secure bilateral transactions in the hybrid market model. An optimal power flow based approach has been developed for maximizing social welfare including FACTS device costs. The optimal placement of the FACTS devices has been obtained based on maximum social welfare. Results have been obtained for both pool and hybrid electricity markets for the IEEE 24-bus RTS.
Optimizing model: insemination, replacement, seasonal production, and cash flow.
DeLorenzo, M A; Spreen, T H; Bryan, G R; Beede, D K; Van Arendonk, J A
1992-03-01
Dynamic programming to solve the Markov decision process problem of optimal insemination and replacement decisions was adapted to address large dairy herd management decision problems in the US. Expected net present values of cow states (151,200) were used to determine the optimal policy. States were specified by class of parity (n = 12), production level (n = 15), month of calving (n = 12), month of lactation (n = 16), and days open (n = 7). The methodology optimized decisions based on the net present value of an individual cow and all replacements over a 20-yr decision horizon. The length of the decision horizon was chosen to ensure that optimal policies were determined for an infinite planning horizon. Optimization took 286 s of central processing unit time. The final probability transition matrix was determined, in part, by the optimal policy. It was estimated iteratively to determine post-optimization steady state herd structure, milk production, replacement, feed inputs and costs, and resulting cash flow on a calendar month and annual basis if optimal policies were implemented. Implementation of the model included seasonal effects on lactation curve shapes, estrus detection rates, pregnancy rates, milk prices, replacement costs, cull prices, and genetic progress. Other inputs included calf values, values of dietary TDN and CP per kilogram, and the discount rate. Stochastic elements included conception (and, thus, subsequent freshening), cow milk production level within herd, and survival. Validation of the optimized solutions was by a separate simulation model, which implemented policies on a simulated herd and also described herd dynamics during the transition to the optimized structure.
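The paper's full state space (151,200 states) and seasonal inputs are far richer than anything shown here; a toy keep-vs-replace value iteration with invented numbers and three states only illustrates the Markov decision process mechanics behind the policy optimization.

```python
import numpy as np

# Toy keep-vs-replace MDP: states are cow "performance classes"; keeping
# yields the state's net revenue and a random transition, replacing pays a
# replacement cost and draws a heifer state from heifer_dist. All numbers
# are invented and far simpler than the herd model described above.
states = ["low", "avg", "high"]
revenue = np.array([500.0, 1100.0, 1700.0])       # net revenue per year by state
P_keep = np.array([[0.70, 0.25, 0.05],            # transition matrix if kept
                   [0.15, 0.70, 0.15],
                   [0.05, 0.30, 0.65]])
heifer_dist = np.array([0.3, 0.5, 0.2])           # state of an incoming replacement
replace_cost = 900.0
beta = 0.92                                       # annual discount factor

V = np.zeros(3)
for _ in range(500):                              # value iteration
    q_keep = revenue + beta * (P_keep @ V)
    q_replace = -replace_cost + heifer_dist @ q_keep   # same for every current state
    V_new = np.maximum(q_keep, q_replace)
    if np.max(np.abs(V_new - V)) < 1e-9:
        break
    V = V_new

policy = ["keep" if k >= q_replace else "replace" for k in q_keep]
print(dict(zip(states, policy)))
```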
An effective rumor-containing strategy
NASA Astrophysics Data System (ADS)
Pan, Cheng; Yang, Lu-Xing; Yang, Xiaofan; Wu, Yingbo; Tang, Yuan Yan
2018-06-01
False rumors can lead to huge economic losses and/or social instability. Hence, mitigating the impact of bogus rumors is of primary importance. This paper focuses on the problem of how to suppress a false rumor by use of the truth. Based on a set of rational hypotheses and a novel rumor-truth mixed spreading model, the effectiveness and cost of a rumor-containing strategy are quantified, respectively. On this basis, the original problem is modeled as a constrained optimization problem (the RC model), in which the independent variable and the objective function represent a rumor-containing strategy and the effectiveness of a rumor-containing strategy, respectively. The goal of the optimization problem is to find the most effective rumor-containing strategy subject to a limited rumor-containing budget. Some optimal rumor-containing strategies are given by solving their respective RC models. The influence of different factors on the highest cost effectiveness of an RC model is illuminated through computer experiments. The results obtained are instructive for developing effective rumor-containing strategies.
Li, Shuangyan; Li, Xialian; Zhang, Dezhi; Zhou, Lingyun
2017-01-01
This study develops an optimization model to integrate facility location and inventory control for a three-level distribution network consisting of a supplier, multiple distribution centers (DCs), and multiple retailers. The integrated model addressed in this study simultaneously determines three types of decisions: (1) facility location (optimal number, location, and size of DCs); (2) allocation (assignment of suppliers to located DCs and retailers to located DCs, and corresponding optimal transport mode choices); and (3) inventory control decisions on order quantities, reorder points, and amount of safety stock at each retailer and opened DC. A mixed-integer programming model is presented, which considers the carbon emission taxes, multiple transport modes, stochastic demand, and replenishment lead time. The goal is to minimize the total cost, which covers the fixed costs of logistics facilities, inventory, transportation, and CO2 emission tax charges. The aforementioned optimal model was solved using commercial software LINGO 11. A numerical example is provided to illustrate the applications of the proposed model. The findings show that carbon emission taxes can significantly affect the supply chain structure, inventory level, and carbon emission reduction levels. The delay rate directly affects the replenishment decision of a retailer. PMID:28103246
Mantzaras, Gerasimos; Voudrias, Evangelos A
2017-11-01
The objective of this work was to develop an optimization model to minimize the cost of a collection, haul, transfer, treatment and disposal system for infectious medical waste (IMW). The model calculates the optimum locations of the treatment facilities and transfer stations, their design capacities (t/d), the number and capacities of all waste collection, transport and transfer vehicles, their optimum transport paths, and the minimum IMW management system cost. Waste production nodes (hospitals, healthcare centers, peripheral health offices, private clinics and physicians in private practice) and their IMW production rates were specified and used as model inputs. The candidate locations of the treatment facilities, transfer stations and sanitary landfills were designated using a GIS-based methodology. Specifically, MapInfo software with exclusion criteria for non-appropriate areas was used for siting candidate locations for the construction of the treatment plant and calculating the distance and travel time of all possible vehicle routes. The objective function was a non-linear equation, which minimized the total collection, transport, treatment and disposal cost. The total cost comprised capital and operation costs for: (1) the treatment plant, (2) waste transfer stations, (3) waste transport and transfer vehicles and (4) waste collection bins and hospital boxes. Binary variables were used to decide whether a treatment plant and/or a transfer station should be constructed and whether a collection route between two or more nodes should be followed. Microsoft Excel was used as the installation platform for the optimization model. For the execution of the optimization routine, two completely different software packages were used and the results were compared, thus resulting in higher reliability and validity of the results. The first was Evolver, which is based on the use of genetic algorithms. The second was Crystal Ball, which is based on Monte Carlo simulation. The model was applied to the Region of East Macedonia - Thrace in Greece. The optimum solution resulted in one treatment plant located in the sanitary landfill area of Chrysoupolis, required no transfer stations and had a total management cost of 38,800 €/month or 809 €/t. If a treatment plant is sited in the easternmost part of the Region, i.e., the industrial area of Alexandroupolis, the optimum solution would result in a transfer station of 23 m³, located near Kavala General Hospital, and a total cost of 39,800 €/month or 831 €/t. A sensitivity analysis was conducted and two alternative scenarios were optimized: in the first scenario a 15% rise in fuel cost, and in the second scenario a 25% rise in IMW production, were considered. Finally, a cost calculation in €/t/km for every type of vehicle used for haul and transfer was conducted, and the cost of the whole system was itemized and calculated in €/t/km and €/t. The results showed that the highest percentage of the total cost was due to the construction of the treatment plant. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Raei, Ehsan; Nikoo, Mohammad Reza; Pourshahabi, Shokoufeh
2017-08-01
In the present study, a BIOPLUME III simulation model is coupled with a non-dominated sorting genetic algorithm (NSGA-II)-based model for the optimal design of an in situ groundwater bioremediation system, considering the preferences of stakeholders. The Ministry of Energy (MOE), the Department of Environment (DOE), and the National Disaster Management Organization (NDMO) are the three stakeholders in the groundwater bioremediation problem in Iran. Based on the preferences of these stakeholders, the multi-objective optimization model tries to minimize: (1) cost; (2) the sum of contaminant concentrations that violate the standard; (3) contaminant plume fragmentation. The NSGA-II multi-objective optimization method gives Pareto-optimal solutions. A compromise solution is determined using fallback bargaining with impasse to achieve a consensus among the stakeholders. In this study, two different approaches are investigated and compared based on two different domains for the locations of injection and extraction wells. In the first approach, a limited number of predefined locations is considered according to previous similar studies. In the second approach, all possible points in the study area are investigated to find the optimal locations, arrangement, and flow rates of injection and extraction wells. Involvement of the stakeholders, investigating all possible points instead of a limited number of well locations, and minimizing the contaminant plume fragmentation during bioremediation are new innovations in this research. In addition, the simulation period is divided into smaller time intervals for more efficient optimization. The Image Processing Toolbox in MATLAB® is utilized for the calculation of the third objective function. In comparison with previous studies, cost is reduced using the proposed methodology. Dispersion of the contaminant plume is reduced in both presented approaches using the third objective function. Considering all possible points in the study area for determining the optimal locations of the wells in the second approach leads to more desirable results, i.e., decreasing the contaminant concentrations to the standard level and achieving a 20% to 40% cost reduction.
Design and analysis of a flight plan optimization system for aircraft
NASA Astrophysics Data System (ADS)
Maazoun, Wissem
The main objective of this thesis is to develop an optimization method for the preparation of flight plans for aircraft. The flight plan minimizes all costs associated with the flight. We determine an optimal path for an airplane from a departure airport to a destination airport. The optimal path minimizes the sum of all costs, i.e. the cost of fuel added to the cost of time (wages, rental of the aircraft, arrival delays, etc.). The optimal trajectory is obtained by considering all possible trajectories on a 3D graph (longitude, latitude and altitude) where the altitude levels are separated by 2,000 feet, and by applying a shortest path algorithm. The main task was to accurately compute fuel consumption on each edge of the graph, making sure that each arc has a minimal cost and is covered in a realistic way from the point of view of control, i.e. in accordance with the rules of navigation. To compute the cost of an arc, we take into account weather conditions (temperature, pressure, wind components, etc.). The optimization of each arc is done via the evaluation of an optimum speed that takes all costs into account. Each arc of the graph typically includes several sub-phases of the flight, e.g. altitude change, speed change, and constant speed and altitude. In the initial climb and the final descent phases, the costs are determined by considering altitude changes at constant CAS (Calibrated Air Speed) or constant Mach number. CAS and Mach number are adjusted to minimize cost. The aerodynamic model used is the one proposed by Eurocontrol, which uses the BADA (Base of Aircraft Data) tables. This model is based on the total energy equation that determines the instantaneous fuel consumption. Calculations on each arc are done by solving a system of differential equations that systematically takes all costs into account. To compute the cost of an arc, we must know the time to go through it, which is generally unknown. To have well-posed boundary conditions, we use the horizontal displacement as the independent variable of the system of differential equations. We consider the velocity components of the wind in a 3D system of coordinates to compute the instantaneous ground speed of the aircraft. To consider the cost of time, we use the cost index. The cost of an arc depends on the aircraft mass at the beginning of this arc, and this mass depends on the path. As we consider all possible paths, the cost of an arc must be computed for each trajectory to which it belongs. For a long-distance flight, the number of arcs to be considered in the graph is large and therefore the cost of an arc is typically computed many times. Our algorithm computes the costs of one million arcs in seconds while having a high accuracy. The determination of the optimal trajectory can therefore be done in a short time. To get the optimal path, the mass of the aircraft at the departure point must also be optimal. It is therefore necessary to know the optimal amount of fuel for the journey. The aircraft mass is known only at the arrival point. This mass is the mass of the aircraft including passengers, cargo and reserve fuel mass. The optimal path is determined by calculating backwards, i.e. from the arrival point to the departure point. For the determination of the optimal trajectory, we use an elliptical grid that has focal points at the departure and arrival points. The use of this grid is essential for the construction of a directed acyclic graph. We use the Bellman-Ford algorithm on a DAG to determine the shortest path.
This algorithm is easy to implement and results in short computation times. Our algorithm computes an optimal trajectory with an optimal cost for each arc. Altitude changes are done optimally with respect to the mass of the aircraft and the cost of time. Our algorithm gives the mass, speed, altitude and total cost at any point of the trajectory as well as the optimal profiles of climb and descent. A prototype has been implemented in C. We made simulations of all types of possible arcs and of several complete trajectories to illustrate the behaviour of the algorithm.
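The thesis prototype is written in C; as a language-neutral illustration of why a single-pass relaxation makes shortest paths on a DAG cheap, here is a small Python sketch with a hypothetical layered graph and made-up arc costs standing in for the fuel-plus-time cost of each sub-segment.

```python
import math
from collections import defaultdict

# Tiny hypothetical layered DAG from departure to arrival; in the thesis,
# each arc cost comes from the aerodynamic model and the cost index.
edges = {  # node -> list of (successor, cost)
    "DEP": [("A1", 10.0), ("A2", 12.0)],
    "A1": [("B1", 9.0), ("B2", 11.0)],
    "A2": [("B1", 8.5), ("B2", 9.5)],
    "B1": [("ARR", 7.0)],
    "B2": [("ARR", 6.0)],
    "ARR": [],
}
topo_order = ["DEP", "A1", "A2", "B1", "B2", "ARR"]  # any topological order works

dist = defaultdict(lambda: math.inf)
pred = {}
dist["DEP"] = 0.0
for u in topo_order:                     # a single relaxation pass suffices on a DAG
    for v, w in edges[u]:
        if dist[u] + w < dist[v]:
            dist[v] = dist[u] + w
            pred[v] = u

path, node = [], "ARR"
while node != "DEP":
    path.append(node)
    node = pred[node]
path.append("DEP")
print("optimal cost:", dist["ARR"], "path:", " -> ".join(reversed(path)))
```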
Discrete homotopy analysis for optimal trading execution with nonlinear transient market impact
NASA Astrophysics Data System (ADS)
Curato, Gianbiagio; Gatheral, Jim; Lillo, Fabrizio
2016-10-01
Optimal execution in financial markets is the problem of how to trade a large quantity of shares incrementally in time in order to minimize the expected cost. In this paper, we study the problem of optimal execution in the presence of nonlinear transient market impact. Mathematically, this problem is equivalent to solving a strongly nonlinear integral equation, which in our model is a weakly singular Urysohn equation of the first kind. We propose an approach based on the Homotopy Analysis Method (HAM), whereby a well-behaved initial trading strategy is continuously deformed to lower the expected execution cost. Specifically, we propose a discrete version of the HAM, i.e. the DHAM approach, in order to use the method when the integrals to compute have no closed form solution. We find that the optimal solution is front loaded for concave instantaneous impact even when the investor is risk neutral. More importantly, we find that the expected cost of the DHAM strategy is significantly smaller than the cost of conventional strategies.
Quasi-Optimal Elimination Trees for 2D Grids with Singularities
Paszyńska, A.; Paszyński, M.; Jopek, K.; ...
2015-01-01
We construct quasi-optimal elimination trees for 2D finite element meshes with singularities. These trees minimize the complexity of the solution of the discrete system. The computational cost estimates of the elimination process model the execution of the multifrontal algorithms in serial and in parallel shared-memory executions. Since the meshes considered are a subspace of all possible mesh partitions, we call these minimizers quasi-optimal. We minimize the cost functionals using dynamic programming. Finding these minimizers is more computationally expensive than solving the original algebraic system. Nevertheless, from the insights provided by the analysis of the dynamic programming minima, we propose a heuristic construction of the elimination trees that has cost O(Ne log Ne), where Ne is the number of elements in the mesh. We show that this heuristic ordering has similar computational cost to the quasi-optimal elimination trees found with dynamic programming and outperforms state-of-the-art alternatives in our numerical experiments.
Surrogate Based Uni/Multi-Objective Optimization and Distribution Estimation Methods
NASA Astrophysics Data System (ADS)
Gong, W.; Duan, Q.; Huo, X.
2017-12-01
Parameter calibration has been demonstrated as an effective way to improve the performance of dynamic models, such as hydrological models, land surface models, weather and climate models, etc. Traditional optimization algorithms usually require a huge number of model evaluations, making dynamic model calibration very difficult, or even computationally prohibitive. With the help of a series of recently developed adaptive surrogate-modelling based optimization methods (the uni-objective optimization method ASMO, the multi-objective optimization method MO-ASMO, and the probability distribution estimation method ASMO-PODE), the number of model evaluations can be significantly reduced to several hundred, making it possible to calibrate very expensive dynamic models, such as regional high resolution land surface models, weather forecast models such as WRF, and intermediate complexity earth system models such as LOVECLIM. This presentation provides a brief introduction to the common framework of the adaptive surrogate-based optimization algorithms ASMO, MO-ASMO and ASMO-PODE, a case study of Common Land Model (CoLM) calibration in the Heihe river basin in Northwest China, and an outlook of the potential applications of the surrogate-based optimization methods.
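A minimal sketch of the generic adaptive surrogate-based loop (not the ASMO code itself): sample the expensive model, fit a cheap surrogate, search the surrogate for a promising point, evaluate the expensive model there, and refit. The Gaussian RBF surrogate, the test function, and all tuning constants below are illustrative assumptions.

```python
import numpy as np

def expensive_model(x):                 # stand-in for a costly dynamic-model run
    return np.sum((x - 0.3) ** 2) + 0.1 * np.sin(10 * np.sum(x))

def fit_rbf(X, y, eps=2.0):             # simple Gaussian RBF surrogate
    K = np.exp(-eps * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    w = np.linalg.solve(K + 1e-8 * np.eye(len(X)), y)
    return lambda x: np.exp(-eps * ((X - x) ** 2).sum(-1)) @ w

rng = np.random.default_rng(0)
dim, n_init, n_iter = 2, 10, 20
X = rng.uniform(0, 1, (n_init, dim))
y = np.array([expensive_model(x) for x in X])

for _ in range(n_iter):                  # adaptive sampling loop
    surrogate = fit_rbf(X, y)
    cand = rng.uniform(0, 1, (2000, dim))          # cheap search on the surrogate
    x_new = cand[np.argmin([surrogate(c) for c in cand])]
    X = np.vstack([X, x_new])
    y = np.append(y, expensive_model(x_new))        # one expensive evaluation per iteration

print("best point:", X[np.argmin(y)], "best value:", y.min())
```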
Integrated Arrival and Departure Schedule Optimization Under Uncertainty
NASA Technical Reports Server (NTRS)
Xue, Min; Zelinski, Shannon
2014-01-01
In terminal airspace, integrating arrivals and departures with shared waypoints provides the potential of improving operational efficiency by allowing direct routes when possible. Incorporating stochastic evaluation as a post-analysis process of deterministic optimization, and imposing a safety buffer in deterministic optimization, are two ways to learn and alleviate the impact of uncertainty and to avoid unexpected outcomes. This work presents a third and direct way to take uncertainty into consideration during the optimization. The impact of uncertainty was incorporated into cost evaluations when searching for the optimal solutions. The controller intervention count was computed using a heuristic model and served as another stochastic cost besides total delay. Costs under uncertainty were evaluated using Monte Carlo simulations. The Pareto fronts that contain a set of solutions were identified and the trade-off between delays and controller intervention count was shown. Solutions that shared similar delays but had different intervention counts were investigated. The results showed that optimization under uncertainty could identify compromise solutions on Pareto fronts, which is better than deterministic optimization with extra safety buffers. It helps decision-makers reduce controller intervention while achieving low delays.
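In the same spirit (a sketch, not the paper's scheduler), a candidate schedule can be scored by Monte Carlo draws of the uncertain crossing times, giving two stochastic costs per solution, after which the non-dominated solutions form the Pareto front. The delay and intervention models below are illustrative stand-ins for the paper's terminal-airspace simulation and heuristic intervention model.

```python
import numpy as np

rng = np.random.default_rng(1)

def evaluate(schedule, n_mc=500):
    """Return (expected delay, expected intervention count) under uncertainty."""
    jitter = rng.normal(0.0, 1.0, (n_mc, len(schedule)))       # uncertain arrival deviations
    times = schedule + jitter
    delay = np.maximum(times - schedule, 0).sum(axis=1).mean()
    gaps = np.diff(np.sort(times, axis=1), axis=1)
    interventions = (gaps < 0.8).sum(axis=1).mean()            # conflicts needing controller action
    return delay, interventions

# random candidate schedules over a shared waypoint
candidates = [np.sort(rng.uniform(0, 30, 10)) for _ in range(200)]
costs = np.array([evaluate(s) for s in candidates])

# keep non-dominated points (the Pareto front)
front = [i for i, c in enumerate(costs)
         if not any((costs[j] <= c).all() and (costs[j] < c).any() for j in range(len(costs)))]
print(costs[front])
```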
NASA Astrophysics Data System (ADS)
McGarity, A. E.
2009-12-01
Recent progress has been made developing decision-support models for optimal deployment of best management practices (BMPs) in an urban watershed to achieve water quality goals. One example is the high-level screening model StormWISE, developed by the author (McGarity, 2006), which uses linear and nonlinear programming to narrow the search for optimal solutions to certain land use categories and drainage zones. Another example is the model SUSTAIN, developed by USEPA and Tetra Tech (Lai et al., 2006) and building on the work of Yu et al. (2002), which uses a detailed, computationally intensive simulation model driven by a genetic solver to select optimal BMP sites. However, a model that deals only with best management practice (BMP) site selections may fail to consider solutions that avoid future nonpoint pollutant loadings by preserving undeveloped land. This paper presents results of a recently completed research project in which water resource engineers partnered with experienced professionals at a land conservation trust to develop a multiobjective model for watershed management. The result is a revised version of StormWISE that can be used to identify optimal, cost-effective combinations of easements and similar land preservation tools for undeveloped sites along with low impact development (LID) and BMP technologies for developed sites. The goal is to achieve the watershed-wide limits on runoff volume and pollutant loads that are necessary to meet water quality goals as well as ecological benefits associated with habitat preservation and enhancement. A nonlinear programming formulation is presented for the extended StormWISE model that achieves desired levels of environmental benefits at minimum cost. Tradeoffs between different environmental benefits are generated by multiple runs of the model while varying the levels of each environmental benefit obtained. The model is solved using piecewise linearization of the environmental benefit functions, where each linear segment represents a different option for reducing stormwater runoff volumes and pollutant loadings. The solution space comprises optimal levels of expenditure for categories of BMPs by land use category and optimal land preservation expenditures by drainage zone. To demonstrate the usefulness of the model, results from its application to the Little Crum Creek watershed in suburban Philadelphia are presented. The model has been used to assist a watershed association and four municipalities to develop an action plan for restoration of water quality on this impaired stream. References: Lai, F., J. Zhen, J. Riverson, and L. Shoemaker (2006). "SUSTAIN - An Evaluation and Cost-Optimization Tool for Placement of BMPs," ASCE World Environmental and Water Resource Congress 2006. McGarity, A.E. (2006). A Cost Minimization Model to Prioritize Urban Catchments for Stormwater BMP Implementation Projects. American Water Resources Association National Meeting, Baltimore, MD, November, 2006. Yu, S., J. X. Zhen, and S.Y. Zhai (2002). Development of Stormwater Best Management Practice Placement Strategy for the Virginia Department of Transportation. Final Contract Report, VTRC 04-CR9, Virginia Transportation Research Council.
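A toy version of the piecewise-linearized cost minimization (illustrative only; the StormWISE formulation is far more detailed): each BMP or land-preservation option contributes a linear segment with a unit cost and an upper bound on the load reduction it can supply, and an LP chooses segment levels to meet a watershed-wide reduction target at minimum cost. The segment costs, capacities, and target below are made-up numbers.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative segments: (unit cost $/kg removed, maximum kg removable)
segments = [(50.0, 120.0),    # e.g. cheap BMPs in one land-use category
            (90.0, 100.0),    # more expensive BMPs
            (140.0, 80.0),    # land preservation in a drainage zone
            (220.0, 60.0)]    # last-resort, highest marginal cost
target_reduction = 250.0      # required watershed-wide pollutant load reduction (kg)

c = np.array([s[0] for s in segments])                 # minimize total cost
A_ub = -np.ones((1, len(segments)))                    # sum of reductions >= target
b_ub = np.array([-target_reduction])
bounds = [(0.0, s[1]) for s in segments]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print("minimum cost:", res.fun)
print("reduction supplied by each segment:", res.x)
```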
NASA Astrophysics Data System (ADS)
Girard, C.; Rinaudo, J. D.; Caballero, Y.; Pulido-Velazquez, M.
2012-04-01
This article presents a case study which illustrates how an integrated hydro-economic model can be applied to optimize a program of measures (PoM) at the river basin level. By allowing the integration of hydrological, environmental and economic aspects at a local scale, this model is useful for assisting water policy decision-making processes. The model identifies the least-cost PoM to satisfy the predicted 2030 urban and agricultural water demands while meeting the in-stream flow constraints. The PoM mainly consists of water saving and conservation measures at the different demands. It also includes measures mobilizing additional water resources from groundwater, inter-basin transfers and improvements in reservoir operating rules. The flow constraints are defined to ensure a good status of the surface water bodies, as defined by the EU Water Framework Directive (WFD). The case study is conducted in the Orb river basin, a coastal basin in Southern France. It faces significant population growth, changes in agricultural patterns and limited water resources. It is classified at risk of not meeting the good status by 2015. Urban demand is calculated by type of water user at municipality level in 2006 and projected to 2030 with user-specific scenarios. Agricultural water demand is estimated at irrigation district (canton) level in 2000 and projected to 2030 under three agricultural development scenarios. The total annual cost of each measure has been calculated taking into account operation and maintenance costs as well as investment costs. A first optimization model was developed using GAMS, the General Algebraic Modeling System, applying Mixed Integer Linear Programming. The optimization is run to select the set of measures that minimizes the objective function, defined as the total cost of the applied measures, while meeting the demands and environmental constraints (minimum in-stream flows) for the 2030 time horizon. The first result is an optimized PoM for a drought year with a return period of five years, taken as a baseline scenario. A second step takes into account the impact of climate change on water demands and available resources. This allows decision makers to assess how the cost of the PoM evolves when the level of environmental constraints is tightened or loosened, and thus provides them with valuable input for understanding the opportunity costs and trade-offs when defining environmental objectives for the long term, including climate as a major factor of change. Finally, the model will be used on an extended hydrological time series to study costs and impacts of the PoM on the allocation of water resources. This will also allow the investigation of the uncertainties and the effect of risk aversion of decision makers and users on the system management, as well as the influence of the perfect foresight of deterministic optimization. ACKNOWLEDGEMENTS The study has been partially supported by the BRGM project Ouest-Hérault, the European Community 7th Framework Project GENESIS (n. 226536) on groundwater systems, and the Plan Nacional I+D+I 2008-2011 of the Spanish Ministry of Science and Innovation (sub-projects CGL2009-13238-C02-01 and CGL2009-13238-C02-02).
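A toy illustration of the measure-selection step (brute-force enumeration rather than the GAMS MILP used in the study): choose the cheapest subset of candidate measures whose combined savings or mobilized resources close a projected 2030 deficit. The measure names, costs, and volumes below are invented for the example.

```python
from itertools import combinations

# Illustrative candidate measures: (name, annual cost in M EUR, water saved or mobilized in hm3/yr)
measures = [("leak reduction",         1.2, 3.0),
            ("drip irrigation",        2.0, 5.0),
            ("reservoir rule change",  0.8, 2.0),
            ("inter-basin transfer",   4.5, 7.0),
            ("groundwater wells",      3.0, 4.0)]
deficit = 9.0   # hm3/yr gap between 2030 demand and resource available under flow constraints

best = None
for r in range(1, len(measures) + 1):
    for combo in combinations(measures, r):
        saved = sum(m[2] for m in combo)
        cost = sum(m[1] for m in combo)
        if saved >= deficit and (best is None or cost < best[0]):
            best = (cost, [m[0] for m in combo])

print("least-cost programme of measures:", best)
```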
HOMER® Micropower Optimization Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lilienthal, P.
2005-01-01
NREL has developed the HOMER micropower optimization model. The model can analyze all of the available small power technologies individually and in hybrid configurations to identify least-cost solutions to energy requirements. This capability is valuable to a diverse set of energy professionals and applications. NREL has actively supported its growing user base and developed training programs around the model. These activities are helping to grow the global market for solar technologies.
Capitanescu, F; Rege, S; Marvuglia, A; Benetto, E; Ahmadi, A; Gutiérrez, T Navarrete; Tiruta-Barna, L
2016-07-15
Empowering decision makers with cost-effective solutions for reducing the environmental burden of industrial processes, at both design and operation stages, is nowadays a major worldwide concern. The paper addresses this issue for the sector of drinking water production plants (DWPPs), seeking optimal solutions that trade off operating cost and life cycle assessment (LCA)-based environmental impact while satisfying outlet water quality criteria. This leads to a challenging bi-objective constrained optimization problem, which relies on a computationally expensive intricate process-modelling simulator of the DWPP and has to be solved with a limited computational budget. Since mathematical programming methods are unusable in this case, the paper examines the performance of six off-the-shelf state-of-the-art global meta-heuristic optimization algorithms suitable for such simulation-based optimization, namely the Strength Pareto Evolutionary Algorithm (SPEA2), Non-dominated Sorting Genetic Algorithm (NSGA-II), Indicator-based Evolutionary Algorithm (IBEA), Multi-Objective Evolutionary Algorithm based on Decomposition (MOEA/D), Differential Evolution (DE), and Particle Swarm Optimization (PSO), in tackling these challenges. The results of optimization reveal that a good reduction in both operating cost and environmental impact of the DWPP can be obtained. Furthermore, NSGA-II outperforms the other competing algorithms while MOEA/D and DE perform unexpectedly poorly. Copyright © 2016 Elsevier Ltd. All rights reserved.
Optimal design of reverse osmosis module networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maskan, F.; Wiley, D.E.; Johnston, L.P.M.
2000-05-01
The structure of individual reverse osmosis modules, the configuration of the module network, and the operating conditions were optimized for seawater and brackish water desalination. The system model included simple mathematical equations to predict the performance of the reverse osmosis modules. The optimization problem was formulated as a constrained multivariable nonlinear optimization. The objective function was the annual profit for the system, consisting of the profit obtained from the permeate, capital cost for the process units, and operating costs associated with energy consumption and maintenance. The optimization of several dual-stage reverse osmosis systems was investigated and compared. It was found that optimal network designs are the ones that produce the most permeate. It may be possible to achieve economic improvements by refining current membrane module designs and their operating pressures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krishnan, Venkat; Ho, Jonathan; Hobbs, Benjamin F.
2016-05-01
The recognition of transmission's interaction with other resources has motivated the development of co-optimization methods to optimize transmission investment while simultaneously considering tradeoffs with investments in electricity supply, demand, and storage resources. For a given set of constraints, co-optimized planning models provide solutions that have lower costs than solutions obtained from decoupled optimization (transmission-only, generation-only, or iterations between them). This paper describes co-optimization and provides an overview of approaches to co-optimizing transmission options, supply-side resources, demand-side resources, and natural gas pipelines. In particular, the paper provides an up-to-date assessment of the present and potential capabilities of existing co-optimization tools, and it discusses needs and challenges for developing advanced co-optimization models.
Stochastic Optimization in The Power Management of Bottled Water Production Planning
NASA Astrophysics Data System (ADS)
Antoro, Budi; Nababan, Esther; Mawengkang, Herman
2018-01-01
This paper reviews a model developed to minimize production costs in bottled water production planning through stochastic optimization. Planning is a management means of achieving the goals that have been set, since each management level in an organization needs planning activities. The model built is a two-stage stochastic model that aims to minimize the cost of bottled water production, taking into account whether or not power supply interruptions occur during the production process. The model was developed to minimize production cost, assuming that the availability of the packaging raw materials used is sufficient for each kind of bottle. The minimum cost for each kind of bottled water production is expressed as the expectation over production scenarios, each weighted by its scenario probability. The uncertainty probabilities represent the number of production runs and the timing of power supply interruptions. This ensures that the number of interruptions that occur does not exceed the limit of the contract agreement that the company has made with its power suppliers.
An effective model for ergonomic optimization applied to a new automotive assembly line
NASA Astrophysics Data System (ADS)
Duraccio, Vincenzo; Elia, Valerio; Forcina, Antonio
2016-06-01
An efficient ergonomic optimization can lead to a significant improvement in production performance and a considerable reduction of costs. In the present paper, a new model for ergonomic optimization is proposed. The new approach is based on the criteria defined by the National Institute for Occupational Safety and Health, adapted to Italian legislation. The proposed model provides an ergonomic optimization by analyzing the ergonomic aspects of manual work performed under correct conditions. The model includes a schematic and systematic analysis method for the operations and identifies all the ergonomic aspects to be evaluated. The proposed approach has been applied to an automotive assembly line, where the repeatability of operations makes the optimization fundamental. The application clearly demonstrates the effectiveness of the new approach.
Kessler, Jason; Myers, Julie E.; Nucifora, Kimberly A.; Mensah, Nana; Kowalski, Alexis; Sweeney, Monica; Toohey, Christopher; Khademi, Amin; Shepard, Colin; Cutler, Blayne; Braithwaite, R. Scott
2013-01-01
Background New York City (NYC) remains an epicenter of the HIV epidemic in the United States. Given the variety of evidence-based HIV prevention strategies available and the significant resources required to implement each of them, comparative studies are needed to identify how to maximize the number of HIV cases prevented most economically. Methods A new model of HIV disease transmission was developed integrating information from a previously validated micro-simulation HIV disease progression model. Specification and parameterization of the model and its inputs, including the intervention portfolio, intervention effects and costs were conducted through a collaborative process between the academic modeling team and the NYC Department of Health and Mental Hygiene. The model projects the impact of different prevention strategies, or portfolios of prevention strategies, on the HIV epidemic in NYC. Results Ten unique interventions were able to provide a prevention benefit at an annual program cost of less than $360,000, the threshold for consideration as a cost-saving intervention (because of offsets by future HIV treatment costs averted). An optimized portfolio of these specific interventions could result in up to a 34% reduction in new HIV infections over the next 20 years. The cost-per-infection averted of the portfolio was estimated to be $106,378; the total cost was in excess of $2 billion (over the 20 year period, or approximately $100 million per year, on average). The cost-savings of prevented infections was estimated at more than $5 billion (or approximately $250 million per year, on average). Conclusions Optimal implementation of a portfolio of evidence-based interventions can have a substantial, favorable impact on the ongoing HIV epidemic in NYC and provide future cost-saving despite significant initial costs. PMID:24058465
Optimal pricing policies for services with consideration of facility maintenance costs
NASA Astrophysics Data System (ADS)
Yeh, Ruey Huei; Lin, Yi-Fang
2012-06-01
For survival and success, pricing is an essential issue for service firms. This article deals with the pricing strategies for services with substantial facility maintenance costs. For this purpose, a mathematical framework that incorporates service demand and facility deterioration is proposed to address the problem. The facility and customers constitute a service system driven by Poisson arrivals and exponential service times. A service demand with increasing price elasticity and a facility lifetime with strictly increasing failure rate are also adopted in modelling. By examining the bidirectional relationship between customer demand and facility deterioration in the profit model, the pricing policies of the service are investigated. Then analytical conditions of customer demand and facility lifetime are derived to achieve a unique optimal pricing policy. The comparative statics properties of the optimal policy are also explored. Finally, numerical examples are presented to illustrate the effects of parameter variations on the optimal pricing policy.
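As a stylized illustration (not the authors' queueing model), suppose demand arrives at a price-dependent Poisson rate and the facility's maintenance cost rises with utilization; the profit-maximizing price can then be located numerically. The demand curve, maintenance cost function, and service rate below are assumed forms chosen only for the example.

```python
import numpy as np
from scipy.optimize import minimize_scalar

mu = 10.0                      # service rate (illustrative)

def demand(p):                 # price-elastic Poisson arrival rate (assumed form)
    return 8.0 * np.exp(-0.15 * p)

def maintenance_cost(rho):     # deterioration cost rising with utilization (assumed form)
    return 5.0 + 40.0 * rho ** 2

def neg_profit(p):
    lam = demand(p)
    rho = min(lam / mu, 0.999)           # facility utilization
    return -(p * lam - maintenance_cost(rho))

res = minimize_scalar(neg_profit, bounds=(0.1, 60.0), method="bounded")
print("optimal price:", res.x, "profit:", -res.fun)
```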
Ren, Jingzheng; Dong, Liang; Sun, Lu; Goodsite, Michael Evan; Tan, Shiyu; Dong, Lichun
2015-01-01
The aim of this work was to develop a model for optimizing the life cycle cost of a biofuel supply chain under uncertainties. Multiple agriculture zones, multiple transportation modes for the transport of grain and biofuel, multiple biofuel plants, and multiple market centers were considered in this model, and the price of the resources, the yield of grain and the market demands were regarded as interval numbers instead of constants. An interval linear programming model was developed, and a method for solving the interval linear program was presented. An illustrative case was studied using the proposed model, and the results showed that the proposed model is feasible for designing biofuel supply chains under uncertainties. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Bhattacharjya, D.; Mukerji, T.; Mascarenhas, O.; Weyant, J.
2005-12-01
Designing a cost-effective and reliable monitoring program is crucial to the success of any geological CO2 storage project. Effective design entails determining both the optimal measurement modality and the frequency of monitoring the site. Time-lapse seismic provides the best spatial coverage and resolution for reservoir monitoring. Initial results from Sleipner (Norway) have demonstrated effective monitoring of CO2 plume movement. However, time-lapse seismic is an expensive monitoring technique, especially over the long-term life of a storage project, and should be used judiciously. We present a mathematical model based on dynamic programming that can be used to estimate the site-specific optimal frequency of time-lapse surveys. The dynamics of the CO2 sequestration process are simplified and modeled as a four-state Markov process with transition probabilities. The states are M: injected CO2 safely migrating within the target zone; L: leakage from the target zone to the adjacent geosphere; R: safe migration after recovery from the leakage state; and S: seepage from the geosphere to the biosphere. The states are observed only when a monitoring survey is performed. We assume that the system may go to state S only from state L. We also assume that once observed to be in state L, remedial measures are always taken to bring it back to state R. Remediation benefits are captured by calculating the expected penalty if CO2 seeped into the biosphere. There is a trade-off between the conflicting objectives of minimum discounted costs of performing the next time-lapse survey and minimum risk of seepage and its associated costly consequences. A survey performed earlier would spot the leakage earlier. Remediation methods would have been utilized earlier, resulting in savings in costs attributed to excessive seepage. On the other hand, there are also costs for the survey and remedial measures. The problem is solved numerically using Bellman's optimality principle of dynamic programming to optimize over the entire finite time horizon. We use a Monte Carlo approach to explore trade-offs between survey costs, remediation costs, and survey frequency and to analyze the sensitivity to leakage probabilities and carbon tax. The model can be useful in determining a monitoring regime appropriate to a specific site's risk and set of remediation options, rather than a generic one based on a maximum downside risk threshold for CO2 storage as a whole. This may have implications for the overall costs associated with deploying carbon capture and storage on a large scale.
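A simplified Monte Carlo sketch of the survey-frequency trade-off (the transition probabilities, costs, discount factor, and horizon below are illustrative, not values from the study): simulate the four-state chain, pay a survey cost every k years, remediate when leakage is observed, and penalize seepage that is caught only after it has occurred.

```python
import numpy as np

rng = np.random.default_rng(2)
M, L, R, S = 0, 1, 2, 3                      # migrating, leaking, recovered, seeping
P = np.array([[0.96, 0.04, 0.00, 0.00],      # illustrative one-year transition matrix
              [0.00, 0.90, 0.00, 0.10],
              [0.97, 0.03, 0.00, 0.00],
              [0.00, 0.00, 0.00, 1.00]])
survey_cost, remediation_cost, seepage_penalty = 1.0, 10.0, 500.0
discount, horizon = 0.95, 50

def expected_cost(interval, n_sims=2000):
    total = 0.0
    for _ in range(n_sims):
        state, cost = M, 0.0
        for t in range(horizon):
            state = rng.choice(4, p=P[state])
            if (t + 1) % interval == 0:               # monitoring survey this year
                cost += survey_cost * discount ** t
                if state == L:                        # leak detected -> remediate
                    cost += remediation_cost * discount ** t
                    state = R
                elif state == S:                      # seepage found too late
                    cost += seepage_penalty * discount ** t
                    break
        total += cost
    return total / n_sims

for k in (1, 2, 5, 10):
    print(f"survey every {k} years: expected discounted cost {expected_cost(k):.1f}")
```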
A comparison of GaAs and Si hybrid solar power systems
NASA Technical Reports Server (NTRS)
Heinbockel, J. H.; Roberts, A. S., Jr.
1977-01-01
Five different hybrid solar power systems using silicon solar cells to produce thermal and electric power are modeled and compared with a hybrid system using a GaAs cell. Among the indices determined are capital cost per unit electric power plus mechanical power, annual cost per unit electric energy, and annual cost per unit electric plus mechanical work. Current costs are taken to be $35,000/sq m for GaAs cells with an efficiency of 15% and $1000/sq m for Si cells with an efficiency of 10%. It is shown that hybrid systems can be competitive with existing methods of practical energy conversion. Limiting values for annual costs of Si and GaAs cells are calculated to be 10.3 cents/kWh and 6.8 cents/kWh, respectively. Results for both systems indicate that for a given flow rate there is an optimal operating condition for minimum-cost photovoltaic output. For Si cell costs of $50/sq m, optimal performance can be achieved at concentrations of about 10; for GaAs cells costing $1000/sq m, optimal performance can be obtained at concentrations of around 100. High-concentration hybrid systems offer a distinct cost advantage over flat systems.
HIV Treatment and Prevention: A Simple Model to Determine Optimal Investment.
Juusola, Jessie L; Brandeau, Margaret L
2016-04-01
To create a simple model to help public health decision makers determine how to best invest limited resources in HIV treatment scale-up and prevention. A linear model was developed for determining the optimal mix of investment in HIV treatment and prevention, given a fixed budget. The model incorporates estimates of secondary health benefits accruing from HIV treatment and prevention and allows for diseconomies of scale in program costs and subadditive benefits from concurrent program implementation. Data sources were published literature. The target population was individuals infected with HIV or at risk of acquiring it. Illustrative examples of interventions include preexposure prophylaxis (PrEP), community-based education (CBE), and antiretroviral therapy (ART) for men who have sex with men (MSM) in the US. Outcome measures were incremental cost, quality-adjusted life-years gained, and HIV infections averted. Base case analysis indicated that it is optimal to invest in ART before PrEP and to invest in CBE before scaling up ART. Diseconomies of scale reduced the optimal investment level. Subadditivity of benefits did not affect the optimal allocation for relatively low implementation levels. The sensitivity analysis indicated that investment in ART before PrEP was optimal in all scenarios tested. Investment in ART before CBE became optimal when CBE reduced risky behavior by 4% or less. Limitations of the study are that dynamic effects are approximated with a static model. Our model provides a simple yet accurate means of determining optimal investment in HIV prevention and treatment. For MSM in the US, HIV control funds should be prioritized on inexpensive, effective programs like CBE, then on ART scale-up, with only minimal investment in PrEP. © The Author(s) 2015.
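A minimal sketch of the kind of linear budget-allocation logic described (the per-program figures are invented, not the paper's parameters): with linear benefits and per-program spending caps, funding programs in decreasing order of benefit per dollar is optimal, and the caps stand in crudely for diseconomies of scale.

```python
# Illustrative programs: (name, infections averted per $1M, maximum spend in $M)
programs = [("community-based education", 12.0, 20.0),
            ("ART scale-up",               7.0, 60.0),
            ("PrEP",                       1.5, 40.0)]
budget = 70.0   # total budget in $M

allocation, averted = {}, 0.0
for name, rate, cap in sorted(programs, key=lambda p: p[1], reverse=True):
    spend = min(cap, budget)          # fund the most cost-effective program first
    allocation[name] = spend
    averted += rate * spend
    budget -= spend
    if budget <= 0:
        break

print(allocation, "infections averted:", averted)
```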
Optimization, Monotonicity and the Determination of Nash Equilibria — An Algorithmic Analysis
NASA Astrophysics Data System (ADS)
Lozovanu, D.; Pickl, S. W.; Weber, G.-W.
2004-08-01
This paper is concerned with the optimization of a nonlinear time-discrete model exploiting the special structure of the underlying cost game and the properties of inverse matrices. The costs are interlinked by a system of linear inequalities. It is shown that, if the players cooperate, i.e., minimize the sum of all the costs, they achieve a Nash equilibrium. In order to determine Nash equilibria, the simplex method can be applied with respect to the dual problem. An introduction to the TEM model and its relationship to an economic Joint Implementation program is given. The equivalence problem is presented. The construction of the emission cost game and the allocation problem is explained. The assumption of inverse monotonicity for the matrices leads to a new result in the area of such allocation problems. A generalization of such problems is presented.
Impact of treatment heterogeneity on drug resistance and supply chain costs.
Spiliotopoulou, Eirini; Boni, Maciej F; Yadav, Prashant
2013-09-01
The efficacy of scarce drugs for many infectious diseases is threatened by the emergence and spread of resistance. Multiple studies show that available drugs should be used in a socially optimal way to contain drug resistance. This paper studies the tradeoff between risk of drug resistance and operational costs when using multiple drugs for a specific disease. Using a model for disease transmission and resistance spread, we show that treatment with multiple drugs, on a population level, results in better resistance-related health outcomes, but more interestingly, the marginal benefit decreases as the number of drugs used increases. We compare this benefit with the corresponding change in procurement and safety stock holding costs that result from higher drug variety in the supply chain. Using a large-scale simulation based on malaria transmission dynamics, we show that disease prevalence seems to be a less important factor when deciding the optimal width of drug assortment, compared to the duration of one episode of the disease and the price of the drug(s) used. Our analysis shows that under a wide variety of scenarios for disease prevalence and drug cost, it is optimal to simultaneously deploy multiple drugs in the population. If the drug price is high, large volume purchasing discounts are available, and disease prevalence is high, it may be optimal to use only one drug. Our model lends insights to policy makers into the socially optimal size of drug assortment for a given context.
Permanent magnet design for magnetic heat pumps using total cost minimization
NASA Astrophysics Data System (ADS)
Teyber, R.; Trevizoli, P. V.; Christiaanse, T. V.; Govindappa, P.; Niknia, I.; Rowe, A.
2017-11-01
The active magnetic regenerator (AMR) is an attractive technology for efficient heat pumps and cooling systems. The costs associated with a permanent magnet for near room temperature applications are a central issue which must be solved for broad market implementation. To address this problem, we present a permanent magnet topology optimization to minimize the total cost of cooling using a thermoeconomic cost-rate balance coupled with an AMR model. A genetic algorithm identifies cost-minimizing magnet topologies. For a fixed temperature span of 15 K and 4.2 kg of gadolinium, the optimal magnet configuration provides 3.3 kW of cooling power with a second law efficiency (ηII) of 0.33 using 16.3 kg of permanent magnet material.
Wang, Tiancai; He, Xing; Huang, Tingwen; Li, Chuandong; Zhang, Wei
2017-09-01
The economic emission dispatch (EED) problem aims to control generation cost and reduce the impact of waste gas on the environment. It has multiple constraints and nonconvex objectives. To solve it, the collective neurodynamic optimization (CNO) method, which combines a heuristic approach and a projection neural network (PNN), is applied to optimize the scheduling of an electrical microgrid with ten thermal generators and minimize the sum of generation and emission costs. As the objective function has non-differentiable points when the valve point effect (VPE) is considered, a differential inclusion approach is employed in the introduced PNN model to deal with them. Under certain conditions, the local optimality and convergence of the dynamic model for the optimization problem are analyzed. The capability of the algorithm is verified in a complicated situation, where transmission loss and prohibited operating zones are considered. In addition, the dynamic variation of load power at the demand side is considered and the optimal scheduling of generators within 24 h is described. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Matott, L. S.; Hymiak, B.; Reslink, C. F.; Baxter, C.; Aziz, S.
2012-12-01
As part of the NSF-sponsored 'URGE (Undergraduate Research Group Experiences) to Compute' program, Dr. Matott has been collaborating with talented Math majors to explore the design of cost-effective systems to safeguard groundwater supplies from contaminated sites. Such activity is aided by a combination of groundwater modeling, simulation-based optimization, and high-performance computing - disciplines largely unfamiliar to the students at the outset of the program. To help train and engage the students, a number of interactive and graphical software packages were utilized. Examples include: (1) a tutorial for exploring the behavior of evolutionary algorithms and other heuristic optimizers commonly used in simulation-based optimization; (2) an interactive groundwater modeling package for exploring alternative pump-and-treat containment scenarios at a contaminated site in Billings, Montana; (3) the R software package for visualizing various concepts related to subsurface hydrology; and (4) a job visualization tool for exploring the behavior of numerical experiments run on a large distributed computing cluster. Further engagement and excitement in the program was fostered by entering (and winning) a computer art competition run by the Coalition for Academic Scientific Computation (CASC). The winning submission visualizes an exhaustively mapped optimization cost surface and dramatically illustrates the phenomena of artificial minima - valley locations that correspond to designs whose costs are only partially optimal.
Simulation and optimization of pressure swing adsorption systems using reduced-order modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agarwal, A.; Biegler, L.; Zitney, S.
2009-01-01
Over the past three decades, pressure swing adsorption (PSA) processes have been widely used as energy-efficient gas separation techniques, especially for high purity hydrogen purification from refinery gases. Models for PSA processes are multiple instances of partial differential equations (PDEs) in time and space with periodic boundary conditions that link the processing steps together. The solution of this coupled stiff PDE system is governed by steep fronts moving with time. As a result, the optimization of such systems represents a significant computational challenge to current differential algebraic equation (DAE) optimization techniques and nonlinear programming algorithms. Model reduction is one approach to generate cost-efficient low-order models which can be used as surrogate models in the optimization problems. This study develops a reduced-order model (ROM) based on proper orthogonal decomposition (POD), which is a low-dimensional approximation to a dynamic PDE-based model. The proposed method leads to a DAE system of significantly lower order, thus replacing the one obtained from spatial discretization and making the optimization problem computationally efficient. The method has been applied to the dynamic coupled PDE-based model of a two-bed four-step PSA process for separation of hydrogen from methane. Separate ROMs have been developed for each operating step with different POD modes for each of them. A significant reduction in the order of the number of states has been achieved. The reduced-order model has been successfully used to maximize hydrogen recovery by manipulating operating pressures, step times and feed and regeneration velocities, while meeting product purity and tight bounds on these parameters. Current results indicate the proposed ROM methodology as a promising surrogate modeling technique for cost-effective optimization purposes.
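The core of snapshot POD can be sketched in a few lines (a generic illustration, not the authors' PSA-specific ROM): collect state snapshots from full-order simulations, take their SVD, keep the leading modes that capture most of the energy, and project the state onto that basis. The synthetic snapshot data below simply mimic a moving front.

```python
import numpy as np

rng = np.random.default_rng(3)

# Snapshot matrix: each column is the spatially discretized state at one time
# (here synthetic data standing in for full PSA-model simulations).
x = np.linspace(0, 1, 200)
snapshots = np.column_stack([np.tanh(20 * (x - 0.3 - 0.02 * k)) + 0.01 * rng.normal(size=x.size)
                             for k in range(50)])

U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s ** 2) / np.sum(s ** 2)
r = int(np.searchsorted(energy, 0.999)) + 1          # modes capturing 99.9% of the energy
basis = U[:, :r]

# Project a full-order state onto the reduced basis and reconstruct it
full_state = snapshots[:, 25]
reduced_coords = basis.T @ full_state                 # r coefficients instead of 200 states
reconstruction = basis @ reduced_coords
print("modes kept:", r, "reconstruction error:",
      np.linalg.norm(full_state - reconstruction) / np.linalg.norm(full_state))
```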
A Dynamic Process Model for Optimizing the Hospital Environment Cash-Flow
NASA Astrophysics Data System (ADS)
Pater, Flavius; Rosu, Serban
2011-09-01
This article presents a new approach to some fundamental techniques for solving dynamic programming problems with the use of functional equations. We analyze the problem of minimizing the cost of treatment in a hospital environment. Mathematical modeling of this process leads to an optimal control problem with a finite horizon.
Foraging optimally for home ranges
Mitchell, Michael S.; Powell, Roger A.
2012-01-01
Economic models predict behavior of animals based on the presumption that natural selection has shaped behaviors important to an animal's fitness to maximize benefits over costs. Economic analyses have shown that territories of animals are structured by trade-offs between benefits gained from resources and costs of defending them. Intuitively, home ranges should be similarly structured, but trade-offs are difficult to assess because there are no costs of defense, thus economic models of home-range behavior are rare. We present economic models that predict how home ranges can be efficient with respect to spatially distributed resources, discounted for travel costs, under 2 strategies of optimization, resource maximization and area minimization. We show how constraints such as competitors can influence structure of home ranges through resource depression, ultimately structuring density of animals within a population and their distribution on a landscape. We present simulations based on these models to show how they can be generally predictive of home-range behavior and the mechanisms that structure the spatial distribution of animals. We also show how contiguous home ranges estimated statistically from location data can be misleading for animals that optimize home ranges on landscapes with patchily distributed resources. We conclude with a summary of how we applied our models to nonterritorial black bears (Ursus americanus) living in the mountains of North Carolina, where we found their home ranges were best predicted by an area-minimization strategy constrained by intraspecific competition within a social hierarchy. Economic models can provide strong inference about home-range behavior and the resources that structure home ranges by offering falsifiable, a priori hypotheses that can be tested with field observations.
OPTIMIZING BMP PLACEMENT AT WATERSHED-SCALE USING SUSTAIN
Watershed and stormwater managers need modeling tools to evaluate alternative plans for environmental quality restoration and protection needs in urban and developing areas. A watershed-scale decision-support system, based on cost optimization, provides an essential tool to suppo...
Use of the Collaborative Optimization Architecture for Launch Vehicle Design
NASA Technical Reports Server (NTRS)
Braun, R. D.; Moore, A. A.; Kroo, I. M.
1996-01-01
Collaborative optimization is a new design architecture specifically created for large-scale distributed-analysis applications. In this approach, the problem is decomposed into a user-defined number of subspace optimization problems that are driven towards interdisciplinary compatibility and the appropriate solution by a system-level coordination process. This decentralized design strategy allows domain-specific issues to be accommodated by disciplinary analysts, while requiring interdisciplinary decisions to be reached by consensus. The present investigation focuses on application of the collaborative optimization architecture to the multidisciplinary design of a single-stage-to-orbit launch vehicle. Vehicle design, trajectory, and cost issues are directly modeled. Posed to suit the collaborative architecture, the design problem is characterized by 5 design variables and 16 constraints. Numerous collaborative solutions are obtained. Comparison of these solutions demonstrates the influence which an a priori ascent-abort criterion has on development cost. Similarly, objective-function selection is discussed, demonstrating the difference between minimum weight and minimum cost concepts. The operational advantages of the collaborative optimization
NASA Astrophysics Data System (ADS)
Najafi, Amir Abbas; Pourahmadi, Zahra
2016-04-01
Selecting the optimal combination of assets in a portfolio is one of the most important decisions in investment management. As investment is a long-term activity, looking at a portfolio optimization problem over just a single period may cause the loss of opportunities that could be exploited with a long-term view. Hence, we extend the problem from a single-period to a multi-period model. We include trading costs and uncertain conditions in this model, which makes it more realistic and complex. We therefore propose an efficient heuristic method to tackle this problem. The efficiency of the method is examined and compared with the results of rolling single-period optimization and the buy-and-hold method, which shows the superiority of the proposed method.
NASA Astrophysics Data System (ADS)
Li, Peng; Wu, Di
2018-01-01
Two competing approaches have been developed over the years for multi-echelon inventory system optimization, the stochastic-service approach (SSA) and the guaranteed-service approach (GSA). Although they solve the same inventory policy optimization problem at their core, they make different assumptions with regard to the role of safety stock. This paper provides a detailed comparison of the two approaches by considering operating flexibility costs in the optimization of (R, Q) policies for a continuous review serial inventory system. The results indicate that the GSA model is more efficient in solving the complicated inventory problem in terms of computation time, and that the cost difference between the two approaches is quite small.
Cocoa based agroforestry: An economic perspective in resource scarcity conflict era
NASA Astrophysics Data System (ADS)
Jumiyati, S.; Arsyad, M.; Rajindra; Pulubuhu, D. A. T.; Hadid, A.
2018-05-01
Agricultural development towards food self-sufficiency based on increasing production alone has caused environmental disasters that result from the exploitation of natural resources and lead to resource scarcity. This paper describes the optimization of land area, revenue, cost (production inputs), income and use of production inputs based on economic and ecological aspects. Sustainable farming that integrates environmental and economic considerations can be achieved through farmers' decision making, with the goal of optimizing revenue based on cost optimization through a cocoa-based agroforestry model, in order to cope with resource conflicts.
NASA Technical Reports Server (NTRS)
Mukhopadhyay, V.
1988-01-01
A generic procedure for the parameter optimization of a digital control law for a large-order flexible flight vehicle or large space structure modeled as a sampled data system is presented. A linear quadratic Gaussian type cost function was minimized, while satisfying a set of constraints on the steady-state rms values of selected design responses, using a constrained optimization technique to meet multiple design requirements. Analytical expressions for the gradients of the cost function and the design constraints on mean square responses with respect to the control law design variables are presented.
Postaudit of optimal conjunctive use policies
Nishikawa, Tracy; Martin, Peter; ,
1998-01-01
A simulation-optimization model was developed for the optimal management of the city of Santa Barbara's water resources during a drought; however, this model addressed only groundwater flow and not the advective-dispersive, density-dependent transport of seawater. Zero-m freshwater head constraints at the coastal boundary were used as surrogates for the control of seawater intrusion. In this study, the strategies derived from the simulation-optimization model using two surface water supply scenarios are evaluated using a two-dimensional, density-dependent groundwater flow and transport model. Comparisons of simulated chloride mass fractions are made between maintaining the actual pumping policies of the 1987-91 drought and implementing the optimal pumping strategies for each scenario. The results indicate that using 0-m freshwater head constraints allowed no more seawater intrusion than under actual 1987-91 drought conditions and that the simulation-optimization model yields least-cost strategies that deliver more water than under actual drought conditions while controlling seawater intrusion.
Solving lot-sizing problem with quantity discount and transportation cost
NASA Astrophysics Data System (ADS)
Lee, Amy H. I.; Kang, He-Yau; Lai, Chun-Mei
2013-04-01
Owing to today's increasingly competitive market and ever-changing manufacturing environment, the inventory problem is becoming more complicated to solve. The incorporation of heuristic methods has become a new trend for tackling such complex problems in the past decade. This article considers a lot-sizing problem, and the objective is to minimise total costs, where the costs include ordering, holding, purchase and transportation costs, under the requirement that no inventory shortage is allowed in the system. We first formulate the lot-sizing problem as a mixed integer programming (MIP) model. Next, an efficient genetic algorithm (GA) model is constructed for solving large-scale lot-sizing problems. An illustrative example with two cases in a touch panel manufacturer is used to illustrate the practicality of these models, and a sensitivity analysis is applied to understand the impact of changes in parameters on the outcomes. The results demonstrate that both the MIP model and the GA model are effective and relatively accurate tools for determining the replenishment for touch panel manufacturing over multiple periods with quantity discounts and batch transportation. The contributions of this article are to construct an MIP model to obtain an optimal solution when the problem is not too complicated and to present a GA model to find a near-optimal solution efficiently when the problem is complicated.
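A compact genetic-algorithm sketch for a simplified lot-sizing variant (not the article's full MIP/GA with quantity discounts and batch transportation): a chromosome marks the periods in which an order is placed, each order covers demand until the next order, and fitness is the sum of ordering and holding costs with no shortages allowed. The demand and cost figures are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
demand = np.array([40, 60, 30, 80, 20, 50, 70, 40])   # illustrative per-period demand
order_cost, hold_cost = 100.0, 1.0                     # per order / per unit per period

def total_cost(plan):
    plan = plan.copy()
    plan[0] = 1                                        # must order in the first period
    cost, inventory = 0.0, 0.0
    order_periods = np.flatnonzero(plan)
    for t in range(len(demand)):
        if plan[t]:                                    # order enough to cover demand until next order
            nxt = order_periods[order_periods > t]
            end = nxt[0] if len(nxt) else len(demand)
            inventory += demand[t:end].sum()
            cost += order_cost
        inventory -= demand[t]
        cost += hold_cost * inventory                  # end-of-period holding cost
    return cost

pop = rng.integers(0, 2, (40, len(demand)))
for _ in range(200):                                    # simple GA loop
    fitness = np.array([total_cost(ind) for ind in pop])
    parents = pop[np.argsort(fitness)[:20]]             # truncation selection
    children = []
    for _ in range(20):
        a, b = parents[rng.integers(20)], parents[rng.integers(20)]
        cut = rng.integers(1, len(demand))
        child = np.concatenate([a[:cut], b[cut:]])       # one-point crossover
        flip = rng.random(len(demand)) < 0.1             # bit-flip mutation
        child[flip] ^= 1
        children.append(child)
    pop = np.vstack([parents, children])

best = pop[np.argmin([total_cost(ind) for ind in pop])].copy()
best[0] = 1
print("order periods:", np.flatnonzero(best), "cost:", total_cost(best))
```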
Dynamic remapping of parallel computations with varying resource demands
NASA Technical Reports Server (NTRS)
Nicol, D. M.; Saltz, J. H.
1986-01-01
A large class of computational problems is characterized by frequent synchronization, and computational requirements which change as a function of time. When such a problem must be solved on a message passing multiprocessor machine, the combination of these characteristics leads to system performance which decreases in time. Performance can be improved with periodic redistribution of computational load; however, redistribution can exact a sometimes large delay cost. We study the issue of deciding when to invoke a global load remapping mechanism. Such a decision policy must effectively weigh the costs of remapping against the performance benefits. We treat this problem by constructing two analytic models which exhibit stochastically decreasing performance. One model is quite tractable; we are able to describe the optimal remapping algorithm, and the optimal decision policy governing when to invoke that algorithm. However, computational complexity prohibits the use of the optimal remapping decision policy. We then study the performance of a general remapping policy on both analytic models. This policy attempts to minimize a statistic W(n) which measures the system degradation (including the cost of remapping) per computation step over a period of n steps. We show that as a function of time, the expected value of W(n) has at most one minimum, and that when this minimum exists it defines the optimal fixed-interval remapping policy. Our decision policy appeals to this result by remapping when it estimates that W(n) is minimized. Our performance data suggests that this policy effectively finds the natural frequency of remapping. We also use the analytic models to express the relationship between performance and remapping cost, number of processors, and the computation's stochastic activity.
A thermal vacuum test optimization procedure
NASA Technical Reports Server (NTRS)
Kruger, R.; Norris, H. P.
1979-01-01
An analytical model was developed that can be used to establish certain parameters of a thermal vacuum environmental test program based on an optimization of program costs. This model is in the form of a computer program that interacts with the user for the input of certain parameters. The program provides the user with a list of pertinent information regarding an optimized test program and graphs of some of the parameters. The model is a first attempt in this area and includes numerous simplifications. The model appears useful as a general guide and provides a way of extrapolating past performance to future missions.
Optimal Energy Extraction From a Hot Water Geothermal Reservoir
NASA Astrophysics Data System (ADS)
Golabi, Kamal; Scherer, Charles R.; Tsang, Chin Fu; Mozumder, Sashi
1981-01-01
An analytical decision model is presented for determining optimal energy extraction rates from hot water geothermal reservoirs when cooled brine is reinjected into the hot water aquifer. This applied economic management model computes the optimal fluid pumping rate and reinjection temperature and the project (reservoir) life consistent with maximum present worth of the net revenues from sales of energy for space heating. The real value of product energy is assumed to increase with time, as is the cost of energy used in pumping the aquifer. The economic model is implemented by using a hydrothermal model that relates hydraulic pumping rate to the quality (temperature) of remaining heat energy in the aquifer. The results of a numerical application to space heating show that profit-maximizing extraction rate increases with interest (discount) rate and decreases as the rate of rise of real energy value increases. The economic life of the reservoir generally varies inversely with extraction rate. Results were shown to be sensitive to permeability, initial equilibrium temperature, well cost, and well life.
Biofuel supply chain considering depreciation cost of installed plants
NASA Astrophysics Data System (ADS)
Rabbani, Masoud; Ramezankhani, Farshad; Giahi, Ramin; Farshbaf-Geranmayeh, Amir
2016-06-01
Due to the depletion of fossil fuels and major concerns about future energy security, the importance of utilizing renewable energy to produce fuels is widely recognized. Nowadays there is a growing interest in biofuels. Thus, this paper presents a general optimization model which enables the selection of preprocessing centers for the biomass, biofuel plants, and warehouses to store the biofuels. The objective of this model is to maximize the total benefits. The costs in the model consist of the setup costs of preprocessing centers, plants and warehouses, transportation costs, production costs, emission costs and the depreciation cost. First, the depreciation cost of the centers is calculated by means of three methods. The model chooses the best depreciation method in each period by switching between them. A numerical example is presented and solved with the CPLEX solver in GAMS, and finally, sensitivity analyses are carried out.
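For illustration, the per-period charges of three common depreciation methods (straight-line, double-declining balance, and sum-of-years'-digits) can be tabulated so that a model may switch to whichever method is most favorable in each period, as the abstract describes; the cost, salvage value, and asset life below are illustrative, not figures from the paper.

```python
def depreciation_schedules(cost, salvage, life):
    base = cost - salvage
    straight = [base / life] * life                          # straight-line
    syd_total = sum(range(1, life + 1))
    sum_of_years = [base * (life - t) / syd_total for t in range(life)]
    ddb, book = [], cost
    for _ in range(life):                                    # double-declining balance, floored at salvage
        charge = min(2.0 / life * book, book - salvage)
        ddb.append(max(charge, 0.0))
        book -= ddb[-1]
    return straight, sum_of_years, ddb

straight, syd, ddb = depreciation_schedules(cost=100000.0, salvage=10000.0, life=5)
for t, charges in enumerate(zip(straight, syd, ddb), start=1):
    best = max(charges)   # e.g. pick the largest allowable charge in each period
    print(f"period {t}: straight={charges[0]:.0f}  SYD={charges[1]:.0f}  DDB={charges[2]:.0f}  chosen={best:.0f}")
```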
Cost-Effectiveness of Screening Individuals With Cystic Fibrosis for Colorectal Cancer.
Gini, Andrea; Zauber, Ann G; Cenin, Dayna R; Omidvari, Amir-Houshang; Hempstead, Sarah E; Fink, Aliza K; Lowenfels, Albert B; Lansdorp-Vogelaar, Iris
2017-12-27
Individuals with cystic fibrosis are at increased risk of colorectal cancer (CRC) compared to the general population, and risk is higher among those who received an organ transplant. We performed a cost-effectiveness analysis to determine optimal CRC screening strategies for patients with cystic fibrosis. We adjusted the existing Microsimulation Screening Analysis-Colon microsimulation model to reflect increased CRC risk and lower life expectancy in patients with cystic fibrosis. Modeling was performed separately for individuals who never received an organ transplant and patients who had received an organ transplant. We modeled 76 colonoscopy screening strategies that varied the age range and screening interval. The optimal screening strategy was determined based on a willingness to pay threshold of $100,000 per life-year gained. Sensitivity and supplementary analyses were performed, including fecal immunochemical test (FIT) as an alternative test, earlier ages of transplantation, and increased rates of colonoscopy complications, to assess whether optimal screening strategies would change. Colonoscopy every 5 years, starting at age 40 years, was the optimal colonoscopy strategy for patients with cystic fibrosis who never received an organ transplant; this strategy prevented 79% of deaths from CRC. Among patients with cystic fibrosis who had received an organ transplant, optimal colonoscopy screening should start at an age of 30 or 35 years, depending on the patient's age at time of transplantation. Annual FIT screening was predicted to be cost-effective for patients with cystic fibrosis. However, the level of accuracy of the FIT in population is not clear. Using a Microsimulation Screening Analysis-Colon microsimulation model, we found screening of patients with cystic fibrosis for CRC to be cost-effective. Due to the higher risk in these patients for CRC, screening should start at an earlier age with a shorter screening interval. The findings of this study (especially those on FIT screening) may be limited by restricted evidence available for patients with cystic fibrosis. Copyright © 2017 AGA Institute. Published by Elsevier Inc. All rights reserved.
Cost Effectiveness of Screening Individuals With Cystic Fibrosis for Colorectal Cancer.
Gini, Andrea; Zauber, Ann G; Cenin, Dayna R; Omidvari, Amir-Houshang; Hempstead, Sarah E; Fink, Aliza K; Lowenfels, Albert B; Lansdorp-Vogelaar, Iris
2018-02-01
Individuals with cystic fibrosis are at increased risk of colorectal cancer (CRC) compared with the general population, and risk is higher among those who received an organ transplant. We performed a cost-effectiveness analysis to determine optimal CRC screening strategies for patients with cystic fibrosis. We adjusted the existing Microsimulation Screening Analysis-Colon model to reflect increased CRC risk and lower life expectancy in patients with cystic fibrosis. Modeling was performed separately for individuals who never received an organ transplant and patients who had received an organ transplant. We modeled 76 colonoscopy screening strategies that varied the age range and screening interval. The optimal screening strategy was determined based on a willingness to pay threshold of $100,000 per life-year gained. Sensitivity and supplementary analyses were performed, including fecal immunochemical test (FIT) as an alternative test, earlier ages of transplantation, and increased rates of colonoscopy complications, to assess if optimal screening strategies would change. Colonoscopy every 5 years, starting at an age of 40 years, was the optimal colonoscopy strategy for patients with cystic fibrosis who never received an organ transplant; this strategy prevented 79% of deaths from CRC. Among patients with cystic fibrosis who had received an organ transplant, optimal colonoscopy screening should start at an age of 30 or 35 years, depending on the patient's age at time of transplantation. Annual FIT screening was predicted to be cost-effective for patients with cystic fibrosis. However, the level of accuracy of the FIT in this population is not clear. Using a Microsimulation Screening Analysis-Colon model, we found screening of patients with cystic fibrosis for CRC to be cost effective. Because of the higher risk of CRC in these patients, screening should start at an earlier age with a shorter screening interval. The findings of this study (especially those on FIT screening) may be limited by restricted evidence available for patients with cystic fibrosis. Copyright © 2018 AGA Institute. Published by Elsevier Inc. All rights reserved.
Optimization Model for Reducing Emissions of Greenhouse ...
The EPA Vehicle Greenhouse Gas (VGHG) model is used to apply various technologies to a defined set of vehicles in order to meet a specified GHG emission target, and to then calculate the costs and benefits of doing so. The model facilitates EPA's analysis of the costs and benefits of controlling GHG emissions from cars and trucks.
A random optimization approach for inherent optic properties of nearshore waters
NASA Astrophysics Data System (ADS)
Zhou, Aijun; Hao, Yongshuai; Xu, Kuo; Zhou, Heng
2016-10-01
The traditional method of water quality sampling is time-consuming and costly. It cannot meet the needs of social development. Hyperspectral remote sensing technology has good temporal resolution, spatial coverage and rich spectral information. It has good potential for water quality supervision. Via a semi-analytical method, remote sensing information can be related to water quality. The inherent optical properties are used to quantify the water quality, and an optical model inside the water is established to analyze the features of the water. Using the stochastic optimization algorithm Threshold Accepting, a global optimum of the unknown model parameters can be determined to obtain the distribution of chlorophyll, organic solutes and suspended particles in the water. By improving the search step of the optimization algorithm, the processing time is markedly reduced, which creates room for increasing the number of parameters. With a refined definition of the optimization steps and acceptance criterion, the whole inversion process becomes more targeted, thus improving the accuracy of the inversion. Based on application results for simulated data provided by the IOCCG and field data provided by NASA, the approach and model are continuously improved and enhanced. Finally, a low-cost, effective retrieval model of water quality from hyperspectral remote sensing can be achieved.
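A generic Threshold Accepting sketch (the objective below is a stand-in for the misfit between measured and modelled water-leaving spectra, and the bounds, step sizes, and threshold schedule are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(5)

def misfit(params):
    """Stand-in objective: distance between modelled and 'measured' spectra.

    In the inversion described above this would compare the semi-analytical
    optical model's spectrum with the remote-sensing observation.
    """
    chl, cdom, tsm = params
    return (chl - 2.0) ** 2 + (cdom - 0.5) ** 2 + (tsm - 5.0) ** 2 + 0.1 * np.sin(5 * chl)

lower = np.array([0.01, 0.01, 0.01])     # plausible bounds on chlorophyll, CDOM, suspended matter
upper = np.array([30.0, 5.0, 50.0])

x = rng.uniform(lower, upper)
best, best_f = x.copy(), misfit(x)
thresholds = np.linspace(1.0, 0.0, 200)   # decreasing acceptance thresholds

for thr in thresholds:
    for _ in range(50):
        cand = np.clip(x + rng.normal(0, 0.05, size=3) * (upper - lower), lower, upper)
        if misfit(cand) - misfit(x) < thr:   # accept any move not worse than the threshold
            x = cand
            if misfit(x) < best_f:
                best, best_f = x.copy(), misfit(x)

print("retrieved parameters:", best, "misfit:", best_f)
```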
Optimization of wastewater treatment plant operation for greenhouse gas mitigation.
Kim, Dongwook; Bowen, James D; Ozelkan, Ertunga C
2015-11-01
This study deals with the determination of optimal operation of a wastewater treatment system for minimizing greenhouse gas emissions, operating costs, and pollution loads in the effluent. To do this, an integrated performance index that includes the three objectives was established to assess system performance. The ASMN_G model was used to perform system optimization aimed at determining a set of operational parameters that can satisfy the three different objectives. The complex nonlinear optimization problem was solved using the Nelder-Mead simplex optimization algorithm. A sensitivity analysis was performed to identify the operational parameters most influential on system performance. The results obtained from the optimization simulations for six scenarios demonstrated that there are apparent trade-offs among the three conflicting objectives. The best optimized system simultaneously reduced greenhouse gas emissions by 31%, reduced operating cost by 11%, and improved effluent quality by 2% compared to the base case operation. Copyright © 2015 Elsevier Ltd. All rights reserved.
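The optimization described above reduces to minimizing a single integrated performance index, a weighted combination of greenhouse gas emissions, operating cost and effluent quality, over a few operational parameters. A minimal sketch using SciPy's Nelder-Mead simplex follows; the plant_performance stand-in, weights and bounds are hypothetical placeholders for the ASMN_G simulation, not the study's model.

```python
# Hedged sketch of the simplex-based search: plant_performance() stands in for
# a full ASMN_G simulation and returns toy (GHG, cost, effluent quality) values.
import numpy as np
from scipy.optimize import minimize

BOUNDS = [(0.5, 4.0), (0.5, 3.0), (5.0, 25.0)]   # DO setpoint (mg/L), recycle ratio, SRT (d)

def plant_performance(x):
    do_setpoint, recycle_ratio, srt = x
    ghg = 100 + 8 * do_setpoint + 0.5 * srt               # kg CO2-eq/d (toy)
    cost = 500 + 30 * do_setpoint + 12 * recycle_ratio    # $/d (toy)
    effluent = 25 / (1 + 0.2 * do_setpoint + 0.05 * srt)  # mg N/L (toy)
    return ghg, cost, effluent

def integrated_index(x, weights=(0.4, 0.3, 0.3), base=(130.0, 600.0, 15.0)):
    # Penalize parameter vectors outside the physically meaningful range.
    if any(v < lo or v > hi for v, (lo, hi) in zip(x, BOUNDS)):
        return 1e6
    # Normalize each objective by a base-case value and combine with weights.
    return sum(w * v / b for w, v, b in zip(weights, plant_performance(x), base))

x0 = np.array([2.0, 1.0, 12.0])
res = minimize(integrated_index, x0, method="Nelder-Mead",
               options={"xatol": 1e-3, "fatol": 1e-4})
print("optimized operational parameters:", res.x, "index:", res.fun)
```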
NASA Astrophysics Data System (ADS)
Ghasemi, Nahid; Aghayari, Reza; Maddah, Heydar
2018-07-01
The present study aims at optimizing heat transfer parameters such as the Nusselt number and friction factor in a small double-pipe heat exchanger equipped with rotating spiral tapes cut as triangles and filled with aluminum oxide nanofluid. The effects of Reynolds number, twist ratio (y/w), rotating twisted tape and concentration (w%) on the Nusselt number and friction factor are also investigated. Central composite design and response surface methodology are used to evaluate the responses necessary for the optimization. According to the optimal curves, the best values obtained for the Nusselt number and friction factor were 146.6675 and 0.06020, respectively. Finally, an appropriate correlation is also provided to achieve the optimal model at minimum cost. The optimization results showed that the cost decreased in the best case.
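The response surface methodology step can be illustrated compactly: fit quadratic models of Nusselt number and friction factor to designed experiments and then search the fitted surfaces for a favorable trade-off. The sketch below assumes invented data-generating functions and factor ranges; it is not the study's data or its exact desirability formulation.

```python
# Illustrative response-surface sketch: fit full quadratic surrogates for the
# Nusselt number and friction factor over (Reynolds number, twist ratio, w%),
# then scan a grid for a favorable trade-off between the two responses.
import numpy as np
from itertools import combinations_with_replacement
from numpy.linalg import lstsq

def quadratic_features(X):
    cols = [np.ones(len(X))] + [X[:, i] for i in range(X.shape[1])]
    cols += [X[:, i] * X[:, j] for i, j in combinations_with_replacement(range(X.shape[1]), 2)]
    return np.column_stack(cols)

rng = np.random.default_rng(0)
X = rng.uniform([5000, 2.0, 0.1], [20000, 6.0, 0.5], size=(30, 3))   # Re, y/w, w%
nusselt = 0.02 * X[:, 0] ** 0.8 * (1 + 0.3 * X[:, 2]) / X[:, 1] ** 0.2 + rng.normal(0, 2, 30)
friction = 0.3 * X[:, 0] ** -0.25 * (1 + 0.1 * X[:, 2]) + rng.normal(0, 1e-3, 30)

A = quadratic_features(X)
beta_nu, *_ = lstsq(A, nusselt, rcond=None)     # response surface for Nu
beta_f, *_ = lstsq(A, friction, rcond=None)     # response surface for f

# Desirability-style grid search: favor high Nusselt number, low friction factor.
grid = np.array(np.meshgrid(np.linspace(5000, 20000, 25),
                            np.linspace(2.0, 6.0, 25),
                            np.linspace(0.1, 0.5, 25))).reshape(3, -1).T
G = quadratic_features(grid)
score = (G @ beta_nu) / (G @ beta_nu).max() - (G @ beta_f) / (G @ beta_f).max()
best = grid[np.argmax(score)]
print("suggested (Re, y/w, w%):", best)
```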
REopt: A Platform for Energy System Integration and Optimization: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simpkins, T.; Cutler, D.; Anderson, K.
2014-08-01
REopt is NREL's energy planning platform offering concurrent, multi-technology integration and optimization capabilities to help clients meet their cost savings and energy performance goals. The REopt platform provides techno-economic decision-support analysis throughout the energy planning process, from agency-level screening and macro planning to project development to energy asset operation. REopt employs an integrated approach to optimizing a site's energy costs by considering electricity and thermal consumption, resource availability, complex tariff structures including time-of-use, demand and sell-back rates, incentives, net-metering, and interconnection limits. Formulated as a mixed integer linear program, REopt recommends an optimally-sized mix of conventional and renewable energy, and energy storage technologies; estimates the net present value associated with implementing those technologies; and provides the cost-optimal dispatch strategy for operating them at maximum economic efficiency. The REopt platform can be customized to address a variety of energy optimization scenarios including policy, microgrid, and operational energy applications. This paper presents the REopt techno-economic model along with two examples of recently completed analysis projects.
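A toy version of the kind of mixed-integer linear program sketched above (technology sizing plus hourly dispatch under a time-of-use tariff) is shown below using the PuLP modeling library; the technology set, cost coefficients, load and solar profiles are invented for illustration and are far simpler than REopt's actual formulation.

```python
# Toy MILP in the spirit of the formulation described above (not NREL's model):
# choose PV and battery sizes and an hourly dispatch that meet a load at minimum
# daily-equivalent capital plus utility cost. All numbers are illustrative.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, value

hours = range(24)
load = [40 + 20 * (8 <= h <= 18) for h in hours]          # kW, toy profile
solar = [max(0.0, 1 - abs(h - 12) / 6) for h in hours]    # per-kW PV output
tariff = [0.10 + 0.15 * (14 <= h <= 19) for h in hours]   # $/kWh time-of-use

prob = LpProblem("toy_energy_sizing", LpMinimize)
build_pv = LpVariable("build_pv", cat="Binary")           # the integer decision
pv_kw = LpVariable("pv_kw", 0, 200)
batt_kwh = LpVariable("batt_kwh", 0, 400)
grid = [LpVariable(f"grid_{h}", 0) for h in hours]
charge = [LpVariable(f"charge_{h}", 0) for h in hours]
discharge = [LpVariable(f"discharge_{h}", 0) for h in hours]
curtail = [LpVariable(f"curtail_{h}", 0) for h in hours]
soc = [LpVariable(f"soc_{h}", 0) for h in hours]

# Objective: fixed cost if PV is built, size-proportional capital, energy purchases.
prob += (5.0 * build_pv + 0.30 * pv_kw + 0.15 * batt_kwh
         + lpSum(tariff[h] * grid[h] for h in hours))

prob += pv_kw <= 200 * build_pv                           # PV only if the fixed cost is paid
for h in hours:
    prob += grid[h] + pv_kw * solar[h] + discharge[h] == load[h] + charge[h] + curtail[h]
    prob += soc[h] <= batt_kwh
    prev = soc[h - 1] if h > 0 else 0
    prob += soc[h] == prev + 0.95 * charge[h] - (1 / 0.95) * discharge[h]

prob.solve()
print("build PV:", value(build_pv), "PV kW:", value(pv_kw),
      "battery kWh:", value(batt_kwh), "daily cost:", value(prob.objective))
```

The binary build decision with its fixed cost is what makes this a mixed-integer rather than a pure linear program; real formulations add many more technologies, tariffs and constraints.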
de Mello-Sampayo, Felipa
2014-03-01
Cost fluctuations render the outcome of any treatment switch uncertain, so that decision makers might have to wait for more information before optimally switching treatments, especially when the incremental cost per quality-adjusted life year (QALY) gained cannot be fully recovered later on. The objective is to analyze the timing of a treatment switch under cost uncertainty. A dynamic stochastic model for the optimal timing of a treatment switch is developed and applied to a problem in medical decision making, i.e. to patients with unresectable gastrointestinal stromal tumour (GIST). The theoretical model suggests that cost uncertainty reduces expected net benefit. In addition, cost volatility discourages switching treatments. The stochastic model also illustrates that as technologies become less cost competitive, the cost uncertainty becomes more dominant. With limited substitutability, higher-quality technologies will increase the demand for those technologies regardless of the cost uncertainty. The results of the empirical application suggest that the first-line treatment may be the better choice when considering lifetime welfare. Under uncertainty and irreversibility, low-risk patients must begin the second-line treatment as soon as possible, which is precisely when the second-line treatment is least valuable. As the costs of reversing current treatment impacts fall, it becomes more feasible to provide the option-preserving treatment to these low-risk individuals later on. Copyright © 2014 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
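The central intuition, that cost volatility creates an option value of waiting before an irreversible switch, can be reproduced with a small Monte Carlo sketch. The willingness-to-pay, QALY gain, cost level, volatility and discount factor below are arbitrary illustrative numbers, not parameters from the GIST application.

```python
# Toy illustration of the option value of waiting: the incremental cost of an
# irreversible treatment switch evolves stochastically. Compare the expected
# net benefit of switching now with waiting one period and switching only if
# it is then worthwhile. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(42)
wtp = 50_000          # willingness to pay per QALY ($)
qaly_gain = 0.8       # incremental QALYs from the switch
cost_now = 35_000     # current incremental cost of switching ($)
sigma = 0.4           # volatility of the incremental cost
discount = 0.97       # one-period discount factor

benefit = wtp * qaly_gain
switch_now = benefit - cost_now

# Simulate next-period cost as a lognormal shock around today's cost.
future_cost = cost_now * np.exp(rng.normal(-0.5 * sigma**2, sigma, 100_000))
wait_value = discount * np.mean(np.maximum(benefit - future_cost, 0.0))

print(f"net benefit of switching now : {switch_now:,.0f}")
print(f"expected value of waiting    : {wait_value:,.0f}")
print("optimal decision:", "wait" if wait_value > switch_now else "switch now")
```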
Economics of social trade-off: Balancing wastewater treatment cost and ecosystem damage.
Jiang, Yu; Dinar, Ariel; Hellegers, Petra
2018-04-01
We have developed a social optimization model that integrates the financial and ecological costs associated with wastewater treatment and ecosystem damage. The social optimal abatement level of water pollution is determined by finding the trade-off between the cost of pollution control and its resulting ecosystem damage. The model is applied to data from the Lake Taihu region in China to demonstrate this trade-off. A wastewater treatment cost function is estimated with a sizable sample from China, and an ecological damage cost function is estimated following an ecosystem service valuation framework. Results show that the wastewater treatment cost function has economies of scale in facility capacity, and diseconomies in pollutant removal efficiency. Results also show that a low value of the ecosystem service will lead to serious ecological damage. One important policy implication is that the assimilative capacity of the lake should be enhanced by forbidding over extraction of water from the lake. It is also suggested that more work should be done to improve the accuracy of the economic valuation. Copyright © 2018 Elsevier Ltd. All rights reserved.
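The social optimum described above is simply the abatement level at which the marginal treatment cost equals the marginal ecosystem damage avoided. A minimal sketch of that trade-off follows; the cost and damage functional forms and coefficients are illustrative, not the estimates for Lake Taihu.

```python
# Minimal sketch of the trade-off described above: choose the pollutant removal
# (abatement) level that minimizes treatment cost plus the monetized ecosystem
# damage from the residual discharge. Functional forms and coefficients are toys.
from scipy.optimize import minimize_scalar

LOAD = 1000.0  # tonnes of pollutant generated per year (illustrative)

def treatment_cost(removal):           # rises steeply as removal approaches 100%
    return 2.0 * LOAD * removal / (1.001 - removal)

def ecosystem_damage(removal):         # damage from the pollution that remains
    residual = LOAD * (1 - removal)
    return 15.0 * residual ** 1.2

total = lambda r: treatment_cost(r) + ecosystem_damage(r)
res = minimize_scalar(total, bounds=(0.0, 1.0), method="bounded")
print(f"socially optimal removal: {res.x:.1%}, total cost: {res.fun:,.0f}")
```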
NASA Astrophysics Data System (ADS)
Nuh, M. Z.; Nasir, N. F.
2017-08-01
Biodiesel is a fuel comprising mono-alkyl esters of long-chain fatty acids derived from renewable lipid feedstocks such as vegetable oils and animal fats. Biodiesel production is a complex process that needs systematic design and optimization, yet few case studies have applied process system engineering (PSE) elements such as superstructure optimization of the batch process, which involves complex problems and uses mixed-integer nonlinear programming (MINLP). PSE offers a solution to complex engineering systems by enabling the use of viable tools and techniques to better manage and comprehend the complexity of the system. This study aims to apply PSE tools to the simulation and optimization of the biodiesel process and to develop mathematical models for the plant components of cases A, B and C using published kinetic data; secondly, to carry out an economic analysis of biodiesel production, focusing on heterogeneous catalysts; and finally, to develop the superstructure for biodiesel production using a heterogeneous catalyst. The mathematical models are developed from the superstructure, the resulting mixed-integer nonlinear model is solved, and the economic analysis is estimated using MATLAB software. With the objective function of minimizing the annual production cost of the batch process, the optimized cost for case C is 23.2587 million USD. Overall, the application of process system engineering (PSE) has streamlined the modelling, design and cost estimation, and helps resolve the complexity of batch biodiesel production and processing.
Mwanga, Gasper G; Haario, Heikki; Capasso, Vicenzo
2015-03-01
The main scope of this paper is to study optimal control practices for malaria by discussing the implementation of a catalog of optimal control strategies in the presence of parameter uncertainties, which is typical of infectious disease data. In this study we focus on a deterministic mathematical model for the transmission of malaria, including in particular asymptomatic carriers and two age classes in the human population. A partial qualitative analysis of the relevant ODE system has been carried out, leading to a realistic threshold parameter. For the deterministic model under consideration, four possible control strategies have been analyzed: the use of long-lasting treated mosquito nets, indoor residual spraying, screening, and treatment of symptomatic and asymptomatic individuals. The numerical results show that using optimal control, the disease can be brought to a stable disease-free equilibrium when all four controls are used. The Incremental Cost-Effectiveness Ratio (ICER) for all possible combinations of the disease-control measures is determined. The numerical simulations of the optimal control in the presence of parameter uncertainty demonstrate the robustness of the optimal control: the main conclusions of the optimal control remain unchanged, even if inevitable variability remains in the control profiles. The results provide a promising framework for the design of cost-effective strategies for disease control with multiple interventions, even under considerable uncertainty of model parameters. Copyright © 2014 Elsevier Inc. All rights reserved.
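The ICER ranking mentioned above compares each strategy with the next-less-effective non-dominated alternative. A simplified sketch (strong dominance only, no extended dominance) is shown below; the strategy names, costs and effects are made-up numbers, not results from the malaria model.

```python
# Hedged sketch of an incremental cost-effectiveness ratio (ICER) ranking:
# strategies are sorted by effectiveness, strongly dominated options are
# discarded, and ICERs are computed between successive non-dominated options.
def icer_table(strategies):
    # strategies: list of (name, cost, effect), e.g. effect = cases averted
    ranked = sorted(strategies, key=lambda s: s[2])
    frontier, result = [], []
    for name, cost, effect in ranked:
        # drop previously kept options that cost at least as much but avert fewer cases
        while frontier and cost <= frontier[-1][1]:
            frontier.pop()
        frontier.append((name, cost, effect))
    for (n0, c0, e0), (n1, c1, e1) in zip(frontier, frontier[1:]):
        result.append((n1, (c1 - c0) / (e1 - e0)))
    return result

strategies = [                                  # hypothetical numbers
    ("nets only", 1.2e6, 4000),
    ("nets + spraying", 2.0e6, 6500),
    ("nets + spraying + screening", 2.8e6, 7000),
    ("all four controls", 4.5e6, 7600),
]
for name, icer in icer_table(strategies):
    print(f"{name}: ICER = {icer:,.0f} per additional case averted")
```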
Shale gas wastewater management under uncertainty.
Zhang, Xiaodong; Sun, Alexander Y; Duncan, Ian J
2016-01-01
This work presents an optimization framework for evaluating different wastewater treatment/disposal options for water management during hydraulic fracturing (HF) operations. This framework takes into account both cost-effectiveness and system uncertainty. HF has enabled rapid development of shale gas resources. However, wastewater management has been one of the most contentious and widely publicized issues in shale gas production. The flowback and produced water (known as FP water) generated by HF may pose a serious risk to the surrounding environment and public health because this wastewater usually contains many toxic chemicals and high levels of total dissolved solids (TDS). Various treatment/disposal options are available for FP water management, such as underground injection, hazardous wastewater treatment plants, and/or reuse. In order to cost-effectively plan FP water management practices, including allocating FP water to different options and planning treatment facility capacity expansion, an optimization model named UO-FPW is developed in this study. The UO-FPW model can handle the uncertain information expressed in the form of fuzzy membership functions and probability density functions in the modeling parameters. The UO-FPW model is applied to a representative hypothetical case study to demonstrate its applicability in practice. The modeling results reflect the tradeoffs between economic objective (i.e., minimizing total-system cost) and system reliability (i.e., risk of violating fuzzy and/or random constraints, and meeting FP water treatment/disposal requirements). Using the developed optimization model, decision makers can make and adjust appropriate FP water management strategies through refining the values of feasibility degrees for fuzzy constraints and the probability levels for random constraints if the solutions are not satisfactory. The optimization model can be easily integrated into decision support systems for shale oil/gas lifecycle management. Copyright © 2015 Elsevier Ltd. All rights reserved.
Aoki, Kagari; Sato, Katsufumi; Isojunno, Saana; Narazaki, Tomoko; Miller, Patrick J O
2017-10-15
To maximize foraging duration at depth, diving mammals are expected to use the lowest-cost optimal speed during descent and ascent transit and to minimize the cost of transport by achieving neutral buoyancy. Here, we outfitted 18 deep-diving long-finned pilot whales with multi-sensor data loggers and found indications that their diving strategy is associated with higher costs than those of other deep-diving toothed whales. Theoretical models predict that optimal speed is proportional to (basal metabolic rate/drag)^(1/3) and therefore to body mass^0.05. The transit speed of tagged animals (2.7±0.3 m s^-1) was substantially higher than the optimal speed predicted from body mass (1.4-1.7 m s^-1). According to the theoretical models, this choice of high transit speed, given a similar drag coefficient (median, 0.0035) to that in other cetaceans, indicated greater basal metabolic costs during diving than for other cetaceans. This could explain the comparatively short duration (8.9±1.5 min) of their deep dives (maximum depth, 444±85 m). Hydrodynamic gliding models indicated negative buoyancy of tissue body density (1038.8±1.6 kg m^-3, ±95% credible interval, CI) and similar diving gas volume (34.6±0.6 ml kg^-1, ±95% CI) to those in other deep-diving toothed whales. High diving metabolic rate and costly negative buoyancy imply a 'spend more, gain more' strategy of long-finned pilot whales, differing from that in other deep-diving toothed whales, which limits the costs of locomotion during foraging. We also found that net buoyancy affected the optimal speed: high transit speeds gradually decreased during ascent as the whales approached neutral buoyancy owing to gas expansion. © 2017. Published by The Company of Biologists Ltd.
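The scaling argument in the abstract, optimal transit speed proportional to (basal metabolic rate/drag)^(1/3) and hence to body mass^0.05, implies a very weak mass dependence; the tiny sketch below shows this, using an assumed reference speed and mass purely for illustration.

```python
# If optimal transit speed scales as body mass^0.05 (as stated in the abstract),
# doubling or quadrupling body mass barely changes the predicted speed.
# The reference mass and speed below are illustrative assumptions.
def predicted_speed(mass_kg, ref_mass_kg=1000.0, ref_speed_ms=1.5):
    return ref_speed_ms * (mass_kg / ref_mass_kg) ** 0.05

for m in (500, 1000, 2000, 4000):
    print(f"{m:5d} kg -> predicted optimal speed {predicted_speed(m):.2f} m/s")
```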
Dunnett, Alex J; Adjiman, Claire S; Shah, Nilay
2008-01-01
Background: Lignocellulosic bioethanol technologies exhibit significant capacity for performance improvement across the supply chain through the development of high-yielding energy crops, integrated pretreatment, hydrolysis and fermentation technologies and the application of dedicated ethanol pipelines. The impact of such developments on cost-optimal plant location, scale and process composition within multiple plant infrastructures is poorly understood. A combined production and logistics model has been developed to investigate cost-optimal system configurations for a range of technological, system scale, biomass supply and ethanol demand distribution scenarios specific to European agricultural land and population densities. Results: Ethanol production costs for current technologies decrease significantly from $0.71 to $0.58 per litre with increasing economies of scale, up to a maximum single-plant capacity of 550 × 10^6 l year^-1. The development of high-yielding energy crops and consolidated bio-processing realises significant cost reductions, with production costs ranging from $0.33 to $0.36 per litre. Increased feedstock yields result in systems of eight fully integrated plants operating within a 500 × 500 km^2 region, each producing between 1.24 and 2.38 × 10^9 l year^-1 of pure ethanol. A limited potential for distributed processing and centralised purification systems is identified, requiring developments in modular, ambient pretreatment and fermentation technologies and the pipeline transport of pure ethanol. Conclusion: The conceptual and mathematical modelling framework developed provides a valuable tool for the assessment and optimisation of the lignocellulosic bioethanol supply chain. In particular, it can provide insight into the optimal configuration of multiple plant systems. This information is invaluable in ensuring (near-)cost-optimal strategic development within the sector at the regional and national scale. The framework is flexible and can thus accommodate a range of processing tasks, logistical modes, by-product markets and impacting policy constraints. Significant scope for application to real-world case studies through dynamic extensions of the formulation has been identified. PMID:18662392
Watershed Management Optimization Support Tool v3
The Watershed Management Optimization Support Tool (WMOST) is a decision support tool that facilitates integrated water management at the local or small watershed scale. WMOST models the environmental effects and costs of management decisions in a watershed context that is, accou...
An effective model for ergonomic optimization applied to a new automotive assembly line
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duraccio, Vincenzo; Elia, Valerio; Forcina, Antonio
2016-06-08
An efficient ergonomic optimization can lead to a significant improvement in production performance and a considerable reduction of costs. In the present paper a new model for ergonomic optimization is proposed. The new approach is based on the criteria defined by the National Institute for Occupational Safety and Health, adapted to Italian legislation. The proposed model provides an ergonomic optimization by analyzing the ergonomic relations involved in manual work under correct conditions. The model includes a schematic and systematic method for analyzing the operations and identifies all possible ergonomic aspects to be evaluated. The proposed approach has been applied to an automotive assembly line, where the repeatability of operations makes the optimization fundamental. The application clearly demonstrates the effectiveness of the new approach.
AN OPTIMAL MAINTENANCE MANAGEMENT MODEL FOR AIRPORT CONCRETE PAVEMENT
NASA Astrophysics Data System (ADS)
Shimomura, Taizo; Fujimori, Yuji; Kaito, Kiyoyuki; Obama, Kengo; Kobayashi, Kiyoshi
In this paper, an optimal management model is formulated for performance-based rehabilitation/maintenance contracts for airport concrete pavement, in which two types of life cycle cost risk, i.e., ground consolidation risk and concrete depreciation risk, are explicitly considered. A non-homogeneous Markov chain model is formulated to represent the deterioration processes of the concrete pavement, which are conditional upon the ground consolidation processes. An optimal non-homogeneous Markov decision model with multiple types of risk is presented to design the optimal rehabilitation/maintenance plans, together with a methodology for revising those plans based upon monitoring data using Bayesian updating rules. The validity of the methodology presented in this paper is examined through case studies carried out for the H airport.
The Protein Cost of Metabolic Fluxes: Prediction from Enzymatic Rate Laws and Cost Minimization.
Noor, Elad; Flamholz, Avi; Bar-Even, Arren; Davidi, Dan; Milo, Ron; Liebermeister, Wolfram
2016-11-01
Bacterial growth depends crucially on metabolic fluxes, which are limited by the cell's capacity to maintain metabolic enzymes. The necessary enzyme amount per unit flux is a major determinant of metabolic strategies both in evolution and bioengineering. It depends on enzyme parameters (such as kcat and KM constants), but also on metabolite concentrations. Moreover, similar amounts of different enzymes might incur different costs for the cell, depending on enzyme-specific properties such as protein size and half-life. Here, we developed enzyme cost minimization (ECM), a scalable method for computing enzyme amounts that support a given metabolic flux at a minimal protein cost. The complex interplay of enzyme and metabolite concentrations, e.g. through thermodynamic driving forces and enzyme saturation, would make it hard to solve this optimization problem directly. By treating enzyme cost as a function of metabolite levels, we formulated ECM as a numerically tractable, convex optimization problem. Its tiered approach allows for building models at different levels of detail, depending on the amount of available data. Validating our method with measured metabolite and protein levels in E. coli central metabolism, we found typical prediction fold errors of 4.1 and 2.6, respectively, for the two kinds of data. This result from the cost-optimized metabolic state is significantly better than randomly sampled metabolite profiles, supporting the hypothesis that enzyme cost is important for the fitness of E. coli. ECM can be used to predict enzyme levels and protein cost in natural and engineered pathways, and could be a valuable computational tool to assist metabolic engineering projects. Furthermore, it establishes a direct connection between protein cost and thermodynamics, and provides a physically plausible and computationally tractable way to include enzyme kinetics into constraint-based metabolic models, where kinetics have usually been ignored or oversimplified.
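The key computational idea of ECM is that, once metabolite levels are fixed, the enzyme amount needed to carry a given flux follows directly from the rate law, so the search can be carried out over (log) metabolite concentrations alone. The toy two-reaction sketch below shows that structure with a simplified saturation-times-thermodynamics rate factorization; the kinetic constants, weights and boundary concentrations are invented, and the convexity-preserving formulation of the published method is not reproduced.

```python
# Toy enzyme cost minimization: for a fixed pathway flux v, the enzyme demand of
# each reaction is E_i = v / rate_per_enzyme_i(metabolites). Minimize the total
# (weight-scaled) enzyme amount over log-metabolite concentrations.
import numpy as np
from scipy.optimize import minimize

V_FLUX = 1.0                     # required steady-state flux (arbitrary units)
KCAT = np.array([10.0, 5.0])     # turnover numbers of the two enzymes
KM_S = np.array([0.1, 0.2])      # K_M of each enzyme for its substrate (mM)
KEQ = np.array([20.0, 50.0])     # equilibrium constants
WEIGHT = np.array([1.0, 2.0])    # relative protein cost per enzyme (e.g. size)

def enzyme_cost(log_c):
    # metabolite chain: c0 (fixed substrate) -> c1 -> c2 (fixed product)
    c = np.exp(np.concatenate(([np.log(5.0)], log_c, [np.log(0.05)])))
    cost = 0.0
    for i in range(2):
        s, p = c[i], c[i + 1]
        thermo = 1.0 - p / (s * KEQ[i])          # thermodynamic driving-force factor
        if thermo <= 0:                          # infeasible: reaction would run backwards
            return 1e9
        saturation = (s / KM_S[i]) / (1.0 + s / KM_S[i])
        rate_per_enzyme = KCAT[i] * saturation * thermo
        cost += WEIGHT[i] * V_FLUX / rate_per_enzyme
    return cost

res = minimize(enzyme_cost, x0=[np.log(1.0)], method="Nelder-Mead")
print("optimal intermediate concentration (mM):", float(np.exp(res.x[0])))
print("minimal enzyme cost (arbitrary units):", res.fun)
```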
NASA Astrophysics Data System (ADS)
Javed, Hassan; Armstrong, Peter
2015-08-01
The efficiency bar for a Minimum Equipment Performance Standard (MEPS) generally aims to minimize energy consumption and life cycle cost of a given chiller type and size category serving a typical load profile. Compressor type has a significant impact on chiller performance. Performance of screw and reciprocating compressors is expressed in terms of pressure ratio and speed for a given refrigerant and suction density. Isentropic efficiency for a screw compressor is strongly affected by under- and over-compression (UOC) processes. The simple theoretical physical UOC model involves a compressor-specific (but sometimes unknown) volume index parameter and the real gas properties of the refrigerant used. Isentropic efficiency is estimated by the UOC model together with a bi-cubic correction used to account for flow, friction and electrical losses. The unknown volume index, a smoothing parameter (to flatten the UOC model peak) and the bi-cubic coefficients are identified by curve fitting to minimize an appropriate residual norm. Chiller performance maps are produced for each compressor type by selecting optimized sub-cooling and condenser fan speed options in a generic component-based chiller model. SEER is computed from the hourly loads (from a typical building in the climate of interest) and the specific power for the same hourly conditions. An empirical UAE cooling load model, scalable to any equipment capacity, is used to establish proposed UAE MEPS. Annual electricity use and cost, determined from SEER and annual cooling load, and chiller component cost data are used to find optimal chiller designs and perform a life-cycle cost comparison between screw and reciprocating compressor-based chillers. This process may be applied to any climate/load model in order to establish optimized MEPS for any country and/or region.
2017-03-21
Energy and Water Projects, March 21, 2017. ...included reduced system energy use and cost as well as improved performance driven by autonomous commissioning and optimized system control. In the end...improve system performance and reduce energy use and cost. However, implementing these solutions into the extremely heterogeneous and often
Malikopoulos, Andreas
2015-01-01
The increasing urgency to extract additional efficiency from hybrid propulsion systems has led to the development of advanced power management control algorithms. In this paper we address the problem of online optimization of the supervisory power management control in parallel hybrid electric vehicles (HEVs). We model HEV operation as a controlled Markov chain and we show that the control policy yielding the Pareto optimal solution minimizes online the long-run expected average cost per unit time criterion. The effectiveness of the proposed solution is validated through simulation and compared to the solution derived with dynamic programming using the average cost criterion. Both solutions achieved the same cumulative fuel consumption, demonstrating that the online Pareto control policy is an optimal control policy.
Namazi-Rad, Mohammad-Reza; Dunbar, Michelle; Ghaderi, Hadi; Mokhtarian, Payam
2015-01-01
To achieve greater transit-time reduction and improvement in reliability of transport services, there is an increasing need to assist transport planners in understanding the value of punctuality; i.e. the potential improvements, not only to service quality and the consumer but also to the actual profitability of the service. In order for this to be achieved, it is important to understand the network-specific aspects that affect both the ability to decrease transit-time, and the associated cost-benefit of doing so. In this paper, we outline a framework for evaluating the effectiveness of proposed changes to average transit-time, so as to determine the optimal choice of average arrival time subject to desired punctuality levels whilst simultaneously minimizing operational costs. We model the service transit-time variability using a truncated probability density function, and simultaneously compare the trade-off between potential gains and increased service costs, for several commonly employed cost-benefit functions of general form. We formulate this problem as a constrained optimization problem to determine the optimal choice of average transit time, so as to increase the level of service punctuality, whilst simultaneously ensuring a minimum level of cost-benefit to the service operator. PMID:25992902
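The framework described above chooses a scheduled time margin so that operating cost plus expected lateness penalties are minimized while a punctuality target is respected, with transit-time variability described by a truncated distribution. A small sketch with a truncated normal follows; the cost coefficients, truncation bounds and punctuality target are invented.

```python
# Sketch: pick the scheduled transit-time buffer that minimizes operating cost
# plus expected lateness penalty, subject to a minimum punctuality level.
# Running-time variability follows a truncated normal; all numbers are toy values.
import numpy as np
from scipy.stats import truncnorm
from scipy.optimize import minimize_scalar

MEAN_RUN, SD_RUN = 60.0, 5.0          # minutes; actual running-time distribution
LOW, HIGH = 50.0, 80.0                # physical truncation bounds of running time
a, b = (LOW - MEAN_RUN) / SD_RUN, (HIGH - MEAN_RUN) / SD_RUN
runtime = truncnorm(a, b, loc=MEAN_RUN, scale=SD_RUN)

PUNCTUALITY_TARGET = 0.95             # required P(arrive within schedule)
LATE_PENALTY = 40.0                   # $ per expected late minute
BUFFER_COST = 3.0                     # $ per scheduled buffer minute (slower service)

def expected_cost(buffer):
    sched = MEAN_RUN + buffer
    t = np.linspace(LOW, HIGH, 2001)
    late = np.trapz(np.maximum(t - sched, 0) * runtime.pdf(t), t)  # E[minutes late]
    cost = BUFFER_COST * buffer + LATE_PENALTY * late
    if runtime.cdf(sched) < PUNCTUALITY_TARGET:   # punctuality constraint as a penalty
        cost += 1e6
    return cost

res = minimize_scalar(expected_cost, bounds=(0.0, 20.0), method="bounded")
print(f"optimal schedule buffer: {res.x:.1f} min, cost: {expected_cost(res.x):.1f}")
```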
High Level Rule Modeling Language for Airline Crew Pairing
NASA Astrophysics Data System (ADS)
Mutlu, Erdal; Birbil, Ş. Ilker; Bülbül, Kerem; Yenigün, Hüsnü
2011-09-01
The crew pairing problem is an airline optimization problem where a set of least costly pairings (consecutive flights to be flown by a single crew) that covers every flight in a given flight network is sought. A pairing is defined by using a very complex set of feasibility rules imposed by international and national regulatory agencies, and also by the airline itself. The cost of a pairing is also defined by using complicated rules. When an optimization engine generates a sequence of flights from a given flight network, it has to check all these feasibility rules to ensure whether the sequence forms a valid pairing. Likewise, the engine needs to calculate the cost of the pairing by using certain rules. However, the rules used for checking the feasibility and calculating the costs are usually not static. Furthermore, the airline companies carry out what-if-type analyses through testing several alternate scenarios in each planning period. Therefore, embedding the implementation of feasibility checking and cost calculation rules into the source code of the optimization engine is not a practical approach. In this work, a high level language called ARUS is introduced for describing the feasibility and cost calculation rules. A compiler for ARUS is also implemented in this work to generate a dynamic link library to be used by crew pairing optimization engines.
NASA Astrophysics Data System (ADS)
Wang, S. Y.; Ho, C. C.; Chang, L. C.
2017-12-01
Public water supply in Hsinchu comes mainly from Baoshan Reservoir, Second Baoshan Reservoir, Yongheshan Reservoir and Longen Weir. However, the increasing water demand caused by development of the Hsinchu Science and Industrial Park makes a stable water supply more difficult to maintain. To stabilize the water supply in Hsinchu, the study applies long-term and short-term plans to cover the water shortage. Developing an efficient methodology to define a cost-effective action portfolio is an important task; hence, the study develops a novel decision model, the Stochastic Programming with Recourse Decision Model (SPRDM), to estimate a cost-effective action portfolio. The first stage of SPRDM determines the long-term action portfolio and the recourse information accompanying that portfolio (the probability of water shortage events). The second stage of SPRDM optimizes the cost-effective action portfolio in response to the recourse information. To account for the uncertainty of reservoir sedimentation and demand growth, the study sets nine scenarios comprising optimistic, most likely, and pessimistic reservoir sedimentation and demand growth. The results show that the optimal action portfolio consists of the FengTain Lake and Panlon Weir, Hsinchu Desalination Plant, and Domestic and Industrial Water long-term plans, together with the Emergency Backup Well, Irrigation Water Transference, Preliminary Water Rationing, Advanced Water Rationing and Water Transport from Other Districts short-term plans. The minimum expected cost of the optimal action portfolio is NT$1.1002 billion. The results can serve as a reference for decision making because they account for the uncertainty of hydrology, reservoir sedimentation, and water demand growth.
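The two-stage structure can be illustrated compactly: long-term (first-stage) investments are fixed before the hydrologic scenario is known, and short-term (second-stage) recourse actions cover whatever shortage remains in each scenario, with the expected total cost minimized over both. The plans, capacities, costs and scenario probabilities below are hypothetical, not the Hsinchu data.

```python
# Toy two-stage stochastic program with recourse, solved by enumerating the
# small set of first-stage portfolios. All plans, capacities, costs and
# scenario shortages/probabilities are hypothetical illustrations.
from itertools import combinations

LONG_TERM = {          # plan: (annual cost, shortage reduction in units)
    "new weir": (30.0, 40.0),
    "desalination plant": (45.0, 60.0),
    "industrial reuse": (20.0, 25.0),
}
SHORT_TERM_COST = 1.5  # cost per unit of shortage covered by emergency measures
SCENARIOS = [          # (probability, shortage before any action)
    (0.3, 30.0),       # optimistic
    (0.5, 70.0),       # most likely
    (0.2, 120.0),      # pessimistic
]

def expected_cost(portfolio):
    invest = sum(LONG_TERM[p][0] for p in portfolio)
    capacity = sum(LONG_TERM[p][1] for p in portfolio)
    recourse = sum(prob * SHORT_TERM_COST * max(shortage - capacity, 0.0)
                   for prob, shortage in SCENARIOS)
    return invest + recourse

portfolios = [set(c) for r in range(len(LONG_TERM) + 1)
              for c in combinations(LONG_TERM, r)]
best = min(portfolios, key=expected_cost)
print("optimal long-term portfolio:", best or {"none"},
      "expected cost:", expected_cost(best))
```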
Zhou, Hui Jun; Dan, Yock Young; Naidoo, Nasheen; Li, Shu Chuen; Yeoh, Khay Guan
2013-01-01
Gastric cancer (GC) surveillance based on oesophagogastroduodenoscopy (OGD) appears to be a promising strategy for GC prevention. By evaluating the cost-effectiveness of endoscopic surveillance in Singaporean Chinese, this study aimed to inform the implementation of such a program in a population with a low to intermediate GC risk. Using a reference strategy of no OGD intervention, we evaluated four strategies: 2-yearly OGD surveillance, annual OGD surveillance, 2-yearly OGD screening, and 2-yearly screening plus annual surveillance in Singaporean Chinese aged 50-69 years. From the perspective of the healthcare system, Markov models were built to simulate the life experience of the target population. The models projected discounted lifetime costs ($), quality-adjusted life years (QALYs), and incremental cost-effectiveness ratios (ICERs) indicating the cost-effectiveness of each strategy against a Singapore willingness-to-pay of $46,200/QALY. Deterministic and probabilistic sensitivity analyses were used to identify the influential variables and their associated thresholds, and to quantify the influence of parameter uncertainties, respectively. With an ICER of $44,098/QALY, annual OGD surveillance was the optimal strategy, while 2-yearly surveillance was the most cost-effective strategy (ICER = $25,949/QALY). The screening-based strategies were either extendedly dominated or cost-ineffective. Cost-effectiveness heterogeneity among the four strategies was observed across age-gender subgroups. Eight influential parameters were identified, each with specific thresholds defining the choice of optimal strategy. Accounting for the model uncertainties, the probability that annual surveillance is the optimal strategy in Singapore was 44.5%. Endoscopic surveillance is potentially cost-effective in the prevention of GC for populations at low to intermediate risk. Regarding program implementation, a detailed analysis of influential factors and their associated thresholds is necessary. Multiple strategies should be considered in order to recommend the right strategy for the right population.
Optimal Energy Management for Microgrids
NASA Astrophysics Data System (ADS)
Zhao, Zheng
The microgrid is a recent concept that has emerged as part of the development of the smart grid. A microgrid is a low-voltage, small-scale network containing both distributed energy resources (DERs) and load demands. The use of clean energy in a microgrid is encouraged for economic and sustainability reasons. A microgrid can operate in two modes, stand-alone and grid-connected. In this research, day-ahead optimal energy management for a microgrid under both operational modes is studied. The objective of the optimization model is to minimize fuel cost, improve energy utilization efficiency and reduce gas emissions by scheduling the generation of DERs for each hour of the next day. Considering the dynamic performance of the battery as the Energy Storage System (ESS), the model is formulated as a multi-objective, multi-parametric program constrained by dynamic programming, which is solved using the Advanced Dynamic Programming (ADP) method. Factors influencing battery life are then studied and included in the model to obtain an optimal battery usage pattern and reduce the associated cost. Moreover, since wind and solar generation is a stochastic process affected by weather changes, the proposed optimization model is run hourly to track those changes. Simulation results are compared with the day-ahead energy management model. Finally, conclusions are presented and future research in microgrid energy management is discussed.
Predictive Optimal Control of Active and Passive Building Thermal Storage Inventory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gregor P. Henze; Moncef Krarti
2005-09-30
Cooling of commercial buildings contributes significantly to the peak demand placed on an electrical utility grid. Time-of-use electricity rates encourage shifting of electrical loads to off-peak periods at night and on weekends. Buildings can respond to these pricing signals by shifting cooling-related thermal loads, either by precooling the building's massive structure or by using active thermal energy storage systems such as ice storage. While these two thermal batteries have been engaged separately in the past, this project investigated the merits of harnessing both storage media concurrently in the context of predictive optimal control. To pursue the analysis, modeling, and simulation research of Phase 1, two separate simulation environments were developed. Based on the new dynamic building simulation program EnergyPlus, a utility rate module and two thermal energy storage models were added. Also, a sequential optimization approach to the cost minimization problem using direct search, gradient-based, and dynamic programming methods was incorporated. The objective function was the total utility bill including the cost of reheat and a time-of-use electricity rate either with or without demand charges. An alternative simulation environment based on TRNSYS and Matlab was developed to allow for comparison and cross-validation with EnergyPlus. The initial evaluation of the theoretical potential of the combined optimal control assumed perfect weather prediction and a match between the building model and the actual building counterpart. The analysis showed that the combined utilization leads to cost savings that are significantly greater than with either storage alone but less than the sum of the individual savings. The findings reveal that the cooling-related on-peak electrical demand of commercial buildings can be considerably reduced. A subsequent analysis of the impact of forecasting uncertainty in the required short-term weather forecasts determined that it takes only very simple short-term prediction models to realize almost all of the theoretical potential of this control strategy. Further work evaluated the impact of modeling accuracy on the model-based closed-loop predictive optimal controller to minimize utility cost. The following guidelines have been derived: For an internal heat gain dominated commercial building, reasonable geometry simplifications are acceptable without a loss of cost savings potential. In fact, zoning simplification may improve optimizer performance and save computation time. The mass of the internal structure did not show a strong effect on the optimization. Building construction characteristics were found to impact building passive thermal storage capacity. It is thus advisable to make sure the construction material is well modeled. Zone temperature setpoint profiles and TES performance are strongly affected by mismatches in internal heat gains, especially when they are underestimated. Since they are a key factor in determining the building cooling load, efforts should be made to keep the internal gain mismatch as small as possible. Efficiencies of the building energy systems affect both zone temperature setpoints and active TES operation because of the coupling of the base chiller for building precooling and the icemaking TES chiller. Relative efficiencies of the base and TES chillers will determine the balance of operation of the two chillers. The impact of mismatch in this category may be significant.
Next, a parametric analysis was conducted to assess the effects of building mass, utility rate, building location and season, thermal comfort, central plant capacities, and an economizer on the cost saving performance of optimal control for active and passive building thermal storage inventory. The key findings are: (1) Heavy-mass buildings, strong-incentive time-of-use electrical utility rates, and large on-peak cooling loads will likely lead to attractive savings resulting from optimal combined thermal storage control. (2) By using an economizer to take advantage of the cool fresh air during the night, the building electrical cost can be reduced by using less mechanical cooling. (3) Larger base chiller and active thermal storage capacities have the potential of shifting more cooling loads to off-peak hours, and thus higher savings can be achieved. (4) Optimal combined thermal storage control with a thermal comfort penalty included in the objective function can improve the thermal comfort levels of building occupants when compared to the non-optimized base case. Lab testing conducted in the Larson HVAC Laboratory during Phase 2 showed that the EnergyPlus-based simulation was a surprisingly accurate prediction of the experiment. Therefore, actual savings of building energy costs can be expected by applying optimal controls from simulation results.
Pavement maintenance optimization model using Markov Decision Processes
NASA Astrophysics Data System (ADS)
Mandiartha, P.; Duffield, C. F.; Razelan, I. S. b. M.; Ismail, A. b. H.
2017-09-01
This paper presents an optimization model for the selection of pavement maintenance interventions using the theory of Markov Decision Processes (MDP). The MDP developed in this paper has particular characteristics that distinguish it from other similar studies or optimization models intended for pavement maintenance policy development: the direct inclusion of constraints into the MDP formulation, the use of an average-cost MDP, and a policy development process based on the dual linear programming solution. The limited information and discussion available on these matters for stochastic optimization models in road network management motivates this study. The paper uses a data set acquired from the road authorities of the state of Victoria, Australia, to test the model and recommends steps for computing the MDP-based stochastic optimization model, leading to the development of an optimum pavement maintenance policy.
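The dual linear programming route mentioned above solves the average-cost MDP over stationary state-action frequencies: minimize expected per-period cost subject to flow-balance and normalization constraints, then read the policy off the optimal frequencies. A hedged three-state sketch follows; the condition states, transition matrices and costs are invented, not calibrated pavement data.

```python
# Dual-LP solution of an average-cost MDP for pavement maintenance.
# States: 0 good, 1 fair, 2 poor. Actions: 0 do-nothing, 1 rehabilitate.
# Transition probabilities and costs are illustrative, not calibrated data.
import numpy as np
from scipy.optimize import linprog

P = np.array([  # P[a, s, s'] transition matrices for each action
    [[0.80, 0.15, 0.05],     # do nothing: pavement deteriorates
     [0.00, 0.70, 0.30],
     [0.00, 0.00, 1.00]],
    [[0.95, 0.05, 0.00],     # rehabilitate: condition is largely restored
     [0.80, 0.20, 0.00],
     [0.70, 0.30, 0.00]],
])
COST = np.array([            # COST[s, a]: user cost plus intervention cost
    [0.0, 5.0],
    [2.0, 6.0],
    [8.0, 9.0],
])
nS, nA = 3, 2
c = COST.flatten()           # decision variables x[s, a] >= 0, flattened as s*nA + a

# Flow balance: sum_a x[s,a] - sum_{s',a} P(s | s',a) x[s',a] = 0 for each state s.
A_eq = np.zeros((nS + 1, nS * nA))
for s in range(nS):
    for s2 in range(nS):
        for a in range(nA):
            A_eq[s, s2 * nA + a] += (s2 == s) - P[a, s2, s]
A_eq[nS, :] = 1.0            # frequencies sum to one
b_eq = np.zeros(nS + 1)
b_eq[nS] = 1.0

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * (nS * nA), method="highs")
x = res.x.reshape(nS, nA)
policy = x.argmax(axis=1)    # action with positive frequency (unvisited states default to 0)
print("average cost per period:", res.fun)
print("optimal policy (state -> action):", dict(enumerate(policy)))
```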
Optimal distribution of borehole geophones for monitoring CO2-injection-induced seismicity
NASA Astrophysics Data System (ADS)
Huang, L.; Chen, T.; Foxall, W.; Wagoner, J. L.
2016-12-01
The U.S. DOE initiative, the National Risk Assessment Partnership (NRAP), aims to develop quantitative risk assessment methodologies for carbon capture, utilization and storage (CCUS). As part of the tasks of the Strategic Monitoring Group of NRAP, we develop a tool for the optimal design of a borehole geophone distribution for monitoring CO2-injection-induced seismicity. The tool consists of a number of steps, including building a geophysical model for a given CO2 injection site, defining target monitoring regions within CO2-injection/migration zones, generating synthetic seismic data, specifying acceptable uncertainties in the input data, and determining the optimal distribution of borehole geophones. We use a synthetic geophysical model as an example to demonstrate the capability of our new tool to design an optimal, cost-effective passive seismic monitoring network using borehole geophones. The model is built based on the geologic features found at the Kimberlina CCUS pilot site located in the southern San Joaquin Valley, California. The tool can provide CCUS operators with a guideline for cost-effective microseismic monitoring of geologic carbon storage and utilization.
NASA Astrophysics Data System (ADS)
Koziel, Slawomir; Bekasiewicz, Adrian
2016-10-01
Multi-objective optimization of antenna structures is a challenging task owing to the high computational cost of evaluating the design objectives as well as the large number of adjustable parameters. Design speed-up can be achieved by means of surrogate-based optimization techniques. In particular, a combination of variable-fidelity electromagnetic (EM) simulations, design space reduction techniques, response surface approximation models and design refinement methods permits identification of the Pareto-optimal set of designs within a reasonable timeframe. Here, a study concerning the scalability of surrogate-assisted multi-objective antenna design is carried out based on a set of benchmark problems, with the dimensionality of the design space ranging from six to 24 and a CPU cost of the EM antenna model from 10 to 20 min per simulation. Numerical results indicate that the computational overhead of the design process increases more or less quadratically with the number of adjustable geometric parameters of the antenna structure at hand, which is a promising result from the point of view of handling even more complex problems.
Probabilistic framework for product design optimization and risk management
NASA Astrophysics Data System (ADS)
Keski-Rahkonen, J. K.
2018-05-01
Probabilistic methods have gradually gained ground within engineering practice, but it is still the industry standard to use deterministic safety-margin approaches for dimensioning components and qualitative methods for managing product risks. These methods are suitable for baseline design work, but quantitative risk management and product reliability optimization require more advanced predictive approaches. Ample research has been published on how to predict failure probabilities for mechanical components and, furthermore, on optimizing reliability through life cycle cost analysis. This paper reviews the literature for existing methods and tries to harness their best features and simplify the process so that it is applicable in practical engineering work. The recommended process applies the Monte Carlo method on top of load-resistance models to estimate failure probabilities. Furthermore, it adds to the existing literature by introducing a practical framework for using probabilistic models in quantitative risk management and product life-cycle cost optimization. The main focus is on mechanical failure modes because of the well-developed methods for predicting these types of failures. However, the same framework can be applied to any type of failure mode as long as predictive models can be developed.
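The recommended core step, estimating a failure probability from a load-resistance model by Monte Carlo sampling and feeding it into an expected life-cycle cost, can be sketched in a few lines; the distributions, their parameters and the cost figures below are illustrative assumptions.

```python
# Minimal sketch: estimate a component's failure probability by Monte Carlo
# sampling of a load-resistance model, then use it in a simple expected
# life-cycle cost. Distribution parameters and costs are invented.
import numpy as np

rng = np.random.default_rng(7)
N = 1_000_000
load = rng.gumbel(loc=300.0, scale=40.0, size=N)                  # demand (e.g. MPa)
resistance = rng.lognormal(mean=np.log(500.0), sigma=0.10, size=N)  # capacity

p_fail = np.mean(load > resistance)
failure_cost, unit_cost = 250_000.0, 1_200.0
expected_cost = unit_cost + p_fail * failure_cost                 # simple risk-based cost
print(f"estimated failure probability: {p_fail:.2e}")
print(f"expected life-cycle cost per unit: {expected_cost:,.0f}")
```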
Optimal adaptation to extreme rainfalls in current and future climate
NASA Astrophysics Data System (ADS)
Rosbjerg, Dan
2017-01-01
More intense and frequent rainfalls have increased the number of urban flooding events in recent years, prompting adaptation efforts. Economic optimization is considered an efficient tool to decide on the design level for adaptation. The costs associated with a flooding to the T-year level and the annual capital and operational costs of adapting to this level are described with log-linear relations. The total flooding costs are developed as the expected annual damage of flooding above the T-year level plus the annual capital and operational costs for ensuring no flooding below the T-year level. The value of the return period T that corresponds to the minimum of the sum of these costs will then be the optimal adaptation level. The change in climate, however, is expected to continue in the next century, which calls for expansion of the above model. The change can be expressed in terms of a climate factor (the ratio between the future and the current design level) which is assumed to increase in time. This implies increasing costs of flooding in the future for many places in the world. The optimal adaptation level is found for immediate as well as for delayed adaptation. In these cases, the optimum is determined by considering the net present value of the incurred costs during a sufficiently long time-span. Immediate as well as delayed adaptation is considered.
Optimal adaptation to extreme rainfalls under climate change
NASA Astrophysics Data System (ADS)
Rosbjerg, Dan
2017-04-01
More intense and frequent rainfalls have increased the number of urban flooding events in recent years, prompting adaptation efforts. Economic optimization is considered an efficient tool to decide on the design level for adaptation. The costs associated with a flooding to the T-year level and the annual capital and operational costs of adapting to this level are described with log-linear relations. The total flooding costs are developed as the expected annual damage of flooding above the T-year level plus the annual capital and operational costs for ensuring no flooding below the T-year level. The value of the return period T that corresponds to the minimum of the sum of these costs will then be the optimal adaptation level. The change in climate, however, is expected to continue in the next century, which calls for expansion of the above model. The change can be expressed in terms of a climate factor (the ratio between the future and the current design level) which is assumed to increase in time. This implies increasing costs of flooding in the future for many places in the world. The optimal adaptation level is found for immediate as well as for delayed adaptation. In these cases the optimum is determined by considering the net present value of the incurred costs during a sufficiently long time span. Immediate as well as delayed adaptation is considered.
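With log-linear cost relations and a climate factor that grows over time, the optimal design return period can be found by a one-dimensional search over T on the net present value of total costs. The sketch below uses invented coefficients, climate factor growth, discount rate and horizon.

```python
# Toy version of the optimization described above: choose the design return
# period T that minimizes the net present value of expected annual flood damage
# above the T-year level plus the annualized cost of adapting to that level.
# All coefficients, the climate factor growth and the horizon are invented.
import numpy as np
from scipy.optimize import minimize_scalar

RATE = 0.03           # discount rate
HORIZON = 100         # years considered

def expected_damage(T, year):
    climate_factor = 1.0 + 0.004 * year        # design levels grow over time
    return 5.0e6 * climate_factor / T          # expected annual damage above the T-year level

def adaptation_cost(T):
    return 2.0e5 * np.log(T) + 5.0e4           # annual capital and operational cost

def npv_total_cost(T):
    years = np.arange(HORIZON)
    discount = (1 + RATE) ** -years
    return np.sum((expected_damage(T, years) + adaptation_cost(T)) * discount)

res = minimize_scalar(npv_total_cost, bounds=(2, 1000), method="bounded")
print(f"optimal design return period: {res.x:.0f} years")
```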
Guevara, V R
2004-02-01
A nonlinear programming optimization model was developed to maximize margin over feed cost in broiler feed formulation and is described in this paper. The model identifies the optimal feed mix that maximizes profit margin. Optimum metabolizable energy level and performance were found by using Excel Solver nonlinear programming. Data from an energy density study with broilers were fitted to quadratic equations to express weight gain, feed consumption, and the objective function income over feed cost in terms of energy density. Nutrient:energy ratio constraints were transformed into equivalent linear constraints. National Research Council nutrient requirements and feeding program were used for examining changes in variables. The nonlinear programming feed formulation method was used to illustrate the effects of changes in different variables on the optimum energy density, performance, and profitability and was compared with conventional linear programming. To demonstrate the capabilities of the model, I determined the impact of variation in prices. Prices for broiler, corn, fish meal, and soybean meal were increased and decreased by 25%. Formulations were identical in all other respects. Energy density, margin, and diet cost changed compared with conventional linear programming formulation. This study suggests that nonlinear programming can be more useful than conventional linear programming to optimize performance response to energy density in broiler feed formulation because an energy level does not need to be set.
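The essential step is treating income over feed cost as a (quadratic) function of dietary energy density and maximizing it, rather than fixing the energy level before least-cost formulation. A small sketch of that idea follows; the regression coefficients and prices are made up, not the study's data.

```python
# Sketch of the margin-maximization idea: weight gain and feed intake are
# quadratic functions of dietary energy density, and income over feed cost is
# maximized over that density. Coefficients and prices are illustrative only.
from scipy.optimize import minimize_scalar

BROILER_PRICE = 1.10      # $ per kg live weight
BASE_FEED_COST = 0.22     # $ per kg of a reference diet
ENERGY_COST = 0.00009     # extra $ per kg feed per kcal/kg of added energy

def weight_gain(me):      # kg gain over the period vs energy density (kcal/kg)
    return -6.0 + 5.0e-3 * me - 7.6e-7 * me ** 2

def feed_intake(me):      # kg feed consumed over the period
    return 9.0 - 2.8e-3 * me + 3.6e-7 * me ** 2

def margin(me):           # income over feed cost per bird
    feed_price = BASE_FEED_COST + ENERGY_COST * (me - 2800)
    return BROILER_PRICE * weight_gain(me) - feed_price * feed_intake(me)

res = minimize_scalar(lambda me: -margin(me), bounds=(2800, 3400), method="bounded")
print(f"optimal ME density: {res.x:.0f} kcal/kg, margin: {margin(res.x):.3f} $/bird")
```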
Joshi, Varun; Srinivasan, Manoj
2015-02-08
Understanding how humans walk on a surface that can move might provide insights into, for instance, whether walking humans prioritize energy use or stability. Here, motivated by the famous human-driven oscillations observed in the London Millennium Bridge, we introduce a minimal mathematical model of a biped, walking on a platform (bridge or treadmill) capable of lateral movement. This biped model consists of a point-mass upper body with legs that can exert force and perform mechanical work on the upper body. Using numerical optimization, we obtain energy-optimal walking motions for this biped, deriving the periodic body and platform motions that minimize a simple metabolic energy cost. When the platform has an externally imposed sinusoidal displacement of appropriate frequency and amplitude, we predict that body motion entrained to platform motion consumes less energy than walking on a fixed surface. When the platform has finite inertia, a mass-spring-damper with similar parameters to the Millennium Bridge, we show that the optimal biped walking motion sustains a large lateral platform oscillation when sufficiently many people walk on the bridge. Here, the biped model reduces walking metabolic cost by storing and recovering energy from the platform, demonstrating energy benefits for two features observed for walking on the Millennium Bridge: crowd synchrony and large lateral oscillations.
Kerl, Paul Y; Zhang, Wenxian; Moreno-Cruz, Juan B; Nenes, Athanasios; Realff, Matthew J; Russell, Armistead G; Sokol, Joel; Thomas, Valerie M
2015-09-01
Integrating accurate air quality modeling with decision making is hampered by complex atmospheric physics and chemistry and its coupling with atmospheric transport. Existing approaches to model the physics and chemistry accurately lead to significant computational burdens in computing the response of atmospheric concentrations to changes in emissions profiles. By integrating a reduced form of a fully coupled atmospheric model within a unit commitment optimization model, we allow, for the first time to our knowledge, a fully dynamical approach toward electricity planning that accurately and rapidly minimizes both cost and health impacts. The reduced-form model captures the response of spatially resolved air pollutant concentrations to changes in electricity-generating plant emissions on an hourly basis with accuracy comparable to a comprehensive air quality model. The integrated model allows for the inclusion of human health impacts into cost-based decisions for power plant operation. We use the new capability in a case study of the state of Georgia over the years of 2004-2011, and show that a shift in utilization among existing power plants during selected hourly periods could have provided a health cost savings of $175.9 million dollars for an additional electricity generation cost of $83.6 million in 2007 US dollars (USD2007). The case study illustrates how air pollutant health impacts can be cost-effectively minimized by intelligently modulating power plant operations over multihour periods, without implementing additional emissions control technologies.
Computer-aided communication satellite system analysis and optimization
NASA Technical Reports Server (NTRS)
Stagl, T. W.; Morgan, N. H.; Morley, R. E.; Singh, J. P.
1973-01-01
The capabilities and limitations of the various published computer programs for fixed/broadcast communication satellite system synthesis and optimization are discussed. A Satellite Telecommunication Analysis and Modeling Program (STAMP) for costing and sensitivity analysis in the application of communication satellites to educational development is presented. The modifications made to STAMP include: extension of the six-beam capability to eight; addition of the generation of multiple beams from a single reflector system with an array of feeds; improved system costing that reflects the time value of money and the growth in the earth-terminal population over time, and accounts for various measures of system reliability; inclusion of a model for scintillation at microwave frequencies in the communication link loss model; and an updated technological environment.
Watershed Management Optimization Support Tool (WMOST) v3: User Guide
The Watershed Management Optimization Support Tool (WMOST) is a decision support tool that facilitates integrated water management at the local or small watershed scale. WMOST models the environmental effects and costs of management decisions in a watershed context that is, accou...
Watershed Management Optimization Support Tool (WMOST) v3: Theoretical Documentation
The Watershed Management Optimization Support Tool (WMOST) is a decision support tool that facilitates integrated water management at the local or small watershed scale. WMOST models the environmental effects and costs of management decisions in a watershed context, accounting fo...
Distributed Optimization for a Class of Nonlinear Multiagent Systems With Disturbance Rejection.
Wang, Xinghu; Hong, Yiguang; Ji, Haibo
2016-07-01
The paper studies the distributed optimization problem for a class of nonlinear multiagent systems in the presence of external disturbances. To solve the problem, we need to achieve the optimal multiagent consensus based on local cost function information and neighboring information while simultaneously rejecting local disturbance signals modeled by an exogenous system. With convex analysis and the internal model approach, we propose a distributed optimization controller for heterogeneous and nonlinear agents in the form of continuous-time minimum-phase systems with unity relative degree. We prove that the proposed design solves the exact optimization problem while rejecting disturbances.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simpkins, Travis; Cutler, Dylan; Hirsch, Brian
There are thousands of isolated, diesel-powered microgrids that deliver energy to remote communities around the world at very high energy costs. The Remote Communities Renewable Energy program aims to help these communities reduce their fuel consumption and lower their energy costs through the use of high penetration renewable energy. As part of this program, the REopt modeling platform for energy system integration and optimization was used to analyze cost-optimal pathways toward achieving a combined 75% reduction in diesel fuel and fuel oil consumption in a select Alaskan village. In addition to the existing diesel generator and fuel oil heating technologies, the model was able to select from among wind, battery storage, and dispatchable electric heaters to meet the electrical and thermal loads. The model results indicate that while 75% fuel reduction appears to be technically feasible, it may not be economically viable at this time. When the fuel reduction target was relaxed, the results indicate that by installing high-penetration renewable energy, the community could lower their energy costs by 21% while still reducing their fuel consumption by 54%.
Two-stage collaborative global optimization design model of the CHPG microgrid
NASA Astrophysics Data System (ADS)
Liao, Qingfen; Xu, Yeyan; Tang, Fei; Peng, Sicheng; Yang, Zheng
2017-06-01
With continuously developing technology and declining investment costs, the proportion of renewable energy in the power grid keeps rising because of its clean, environmentally friendly characteristics; this may require larger-capacity energy storage devices, increasing the cost. A two-stage collaborative global optimization design model of the combined-heat-power-and-gas (abbreviated as CHPG) microgrid is proposed in this paper, to minimize the cost by using virtual storage without extending the existing storage system. P2G technology is used as virtual multi-energy storage in CHPG, which can coordinate the operation of the electric energy network and the natural gas network at the same time. Demand response also serves as a form of virtual storage, including economic guidance for distributed generators and heat pumps on the demand side and priority scheduling of controllable loads. The two kinds of storage coordinate to smooth the high-frequency and low-frequency fluctuations of renewable energy, respectively, while achieving a lower-cost operation scheme. Finally, the feasibility and superiority of the proposed design model are demonstrated in a simulation of a CHPG microgrid.
Solving bi-level optimization problems in engineering design using kriging models
NASA Astrophysics Data System (ADS)
Xia, Yi; Liu, Xiaojie; Du, Gang
2018-05-01
Stackelberg game-theoretic approaches are applied extensively in engineering design to handle distributed collaboration decisions. Bi-level genetic algorithms (BLGAs) and response surfaces have been used to solve the corresponding bi-level programming models. However, the computational costs for BLGAs often increase rapidly with the complexity of lower-level programs, and optimal solution functions sometimes cannot be approximated by response surfaces. This article proposes a new method, namely the optimal solution function approximation by kriging model (OSFAKM), in which kriging models are used to approximate the optimal solution functions. A detailed example demonstrates that OSFAKM can obtain better solutions than BLGAs and response surface-based methods while substantially reducing the computational workload. Five benchmark problems and a case study of the optimal design of a thin-walled pressure vessel are also presented to illustrate the feasibility and potential of the proposed method for bi-level optimization in engineering design.
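The following is a toy sketch of the surrogate idea behind OSFAKM: a kriging (Gaussian-process) model is fitted to the lower-level optimal response sampled at a few upper-level designs, and the upper-level objective is then optimized over the cheap surrogate. The follower problem, leader cost, and scikit-learn kriging settings are illustrative assumptions, not the article's formulation.

```python
# Minimal surrogate sketch: fit a kriging model to the lower-level optimum y*(x)
# sampled at a few leader designs x, then optimize the leader objective over it.
# The lower-level problem here is a toy stand-in, not the article's models.
import numpy as np
from scipy.optimize import minimize_scalar
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def lower_level_optimum(x):
    # toy follower problem: y*(x) = argmin_y (y - sin(3x))^2  ->  y* = sin(3x)
    return np.sin(3.0 * x)

X_train = np.linspace(0.0, 2.0, 8).reshape(-1, 1)        # sampled leader designs
y_train = lower_level_optimum(X_train).ravel()             # exact follower responses

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), normalize_y=True)
gp.fit(X_train, y_train)                                    # kriging model of y*(x)

def upper_level_objective(x):
    y_star = gp.predict(np.array([[x]]))[0]                 # surrogate instead of re-solving
    return (x - 1.0) ** 2 + y_star ** 2                     # toy leader cost

res = minimize_scalar(upper_level_objective, bounds=(0.0, 2.0), method="bounded")
print("approx. optimal leader design:", res.x)
```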
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ding, Tao; Li, Cheng; Huang, Can
2017-01-09
Here, to solve the reactive power optimization problem for joint transmission and distribution networks, a hierarchical modeling method is proposed in this paper. It allows the reactive power optimization of transmission and distribution networks to be performed separately, leading to a master–slave structure, and it improves on traditional centralized modeling methods by alleviating the big-data problem in a control center. Specifically, the transmission–distribution-network coordination issue of the hierarchical modeling method is investigated. First, a curve-fitting approach is developed to provide a cost function of the slave model for the master model, which reflects the impacts of each slave model. Second, the transmission and distribution networks are decoupled at feeder buses, and all the distribution networks are coordinated by the master reactive power optimization model to achieve global optimality. Finally, numerical results on two test systems verify the effectiveness of the proposed hierarchical modeling and curve-fitting methods.
Model-as-a-service (MaaS) using the cloud service innovation platform (CSIP)
USDA-ARS?s Scientific Manuscript database
Cloud infrastructures for modelling activities such as data processing, performing environmental simulations, or conducting model calibrations/optimizations provide a cost effective alternative to traditional high performance computing approaches. Cloud-based modelling examples emerged into the more...
Socially optimal replacement of conventional with electric vehicles for the US household fleet
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kontou, Eleftheria; Yin, Yafeng; Lin, Zhenhong
2017-04-05
In this study, a framework is proposed for minimizing the societal cost of replacing gas-powered household passenger cars with battery electric vehicles (BEVs). The societal cost consists of the operational costs of cars with heterogeneous driving patterns, government investment in charging infrastructure deployment, and monetized environmental externalities. The optimization framework determines the timeframe needed for conventional vehicles to be replaced with BEVs. It also determines the BEVs' driving range during the planning timeframe, as well as the density of public chargers deployed on a linear transportation network over time. We leverage datasets that represent U.S. household driving patterns, as well as the automobile and energy markets, to apply the model. Results indicate that it takes 8 years for 80% of our conventional vehicle sample to be replaced with electric vehicles, under the base case scenario. The socially optimal all-electric driving range is 204 miles, with chargers placed every 172 miles on a linear corridor. All of the public chargers should be deployed at the beginning of the planning horizon to achieve greater savings over the years. Sensitivity analysis reveals that the timeframe for the socially optimal conversion of 80% of the sample varies from 6 to 12 years. The optimal decision variables are sensitive to battery pack and vehicle body cost, gasoline cost, the discount rate, and conventional vehicles' fuel economy. In conclusion, faster conventional vehicle replacement is achieved when the gasoline cost increases, electricity cost decreases, and battery packs become cheaper over the years.
Using Cotton Model Simulations to Estimate Optimally Profitable Irrigation Strategies
NASA Astrophysics Data System (ADS)
Mauget, S. A.; Leiker, G.; Sapkota, P.; Johnson, J.; Maas, S.
2011-12-01
In recent decades, irrigation pumping from the Ogallala Aquifer has caused declines in saturated thickness that have not been compensated for by natural recharge, raising questions about the long-term viability of agriculture in the cotton-producing areas of west Texas. Adopting irrigation management strategies that optimize profitability while reducing irrigation waste is one way of conserving the aquifer's water resource. Here, a database of modeled cotton yields generated under drip-irrigated, center-pivot-irrigated, and dryland production scenarios is used in a stochastic dominance analysis that identifies such strategies under varying commodity price and pumping cost conditions. This database and analysis approach will serve as the foundation for a web-based decision support tool that will help producers identify optimal irrigation treatments under specified cotton price, electricity cost, and depth-to-water-table conditions.
Bustillo-Lecompte, Ciro Fernando; Mehrvar, Mehrab
2016-11-01
Biological and advanced oxidation processes are combined to treat an actual slaughterhouse wastewater (SWW) by a sequence of an anaerobic baffled reactor, an aerobic activated sludge reactor, and a UV/H2O2 photoreactor with recycle in continuous mode at laboratory scale. In the first part of this study, quadratic modeling along with response surface methodology are used for the statistical analysis and optimization of the combined process. The effects of the influent total organic carbon (TOC) concentration, the flow rate, the pH, the inlet H2O2 concentration, and their interaction on the overall treatment efficiency, CH4 yield, and H2O2 residual in the effluent of the photoreactor are investigated. The models are validated at different operating conditions using experimental data. Maximum TOC and total nitrogen (TN) removals of 91.29 and 86.05%, respectively, maximum CH4 yield of 55.72%, and minimum H2O2 residual of 1.45% in the photoreactor effluent were found at optimal operating conditions. In the second part of this study, continuous distribution kinetics is applied to establish a mathematical model for the degradation of SWW as a function of time. The agreement between model predictions and experimental values indicates that the proposed model could describe the performance of the combined anaerobic-aerobic-UV/H2O2 processes for the treatment of SWW. In the final part of the study, the optimized combined anaerobic-aerobic-UV/H2O2 processes with recycle were evaluated using a cost-effectiveness analysis to minimize the retention time, the electrical energy consumption, and the overall incurred treatment costs required for the efficient treatment of slaughterhouse wastewater effluents. Copyright © 2016 Elsevier Ltd. All rights reserved.
A Mathematical Model for Allocation of School Resources to Optimize a Selected Output.
ERIC Educational Resources Information Center
McAfee, Jackson K.
The methodology of costing an education program by identifying the resources it utilizes places all costs within the framework of staff, equipment, materials, facilities, and services. This paper suggests that this methodology is much stronger than the more traditional budgetary and cost per pupil approach. The techniques of data collection are…
Asset Prices and Trading Volume under Fixed Transactions Costs.
ERIC Educational Resources Information Center
Lo, Andrew W.; Mamaysky, Harry; Wang, Jiang
2004-01-01
We propose a dynamic equilibrium model of asset prices and trading volume when agents face fixed transactions costs. We show that even small fixed costs can give rise to large "no-trade" regions for each agent's optimal trading policy. The inability to trade more frequently reduces the agents' asset demand and in equilibrium gives rise to a…
NASA Astrophysics Data System (ADS)
Rodriguez, Hector German; Popp, Jennie; Maringanti, Chetan; Chaubey, Indrajeet
2011-01-01
An increased loss of agricultural nutrients is a growing concern for water quality in Arkansas. Several studies have shown that best management practices (BMPs) are effective in controlling water pollution. However, those affected by water quality issues need water management plans that take into consideration BMP selection, placement, and affordability. This study used a nondominated sorting genetic algorithm (NSGA-II). This multiobjective algorithm selects and locates BMPs that minimize nutrient pollution cost-effectively by providing trade-off curves (optimal fronts) between pollutant reduction and total net cost increase. The usefulness of this optimization framework was evaluated in the Lincoln Lake watershed. The final NSGA-II optimization model generated a number of near-optimal solutions by selecting from 35 BMPs (combinations of pasture management, buffer zones, and poultry litter application practices). Selection and placement of BMPs were analyzed under various cost solutions. The NSGA-II provides multiple solutions that could fit the water management plan for the watershed. For instance, by implementing all the BMP combinations recommended in the lowest-cost solution, total phosphorus (TP) could be reduced by at least 76% while increasing cost by less than 2% in the entire watershed. This value represents an increase in cost of $5.49 ha-1 when compared to the baseline. Implementing all the BMP combinations proposed with the medium- and the highest-cost solutions could decrease TP drastically but will increase cost by $24,282 (7%) and $82,306 (25%), respectively.
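A minimal sketch of the non-dominated (Pareto) filtering that underlies NSGA-II selection is shown below, applied to hypothetical (cost increase, phosphorus load) pairs; the numbers and scenario list are invented for illustration and do not come from the study.

```python
# Sketch of the non-dominated (Pareto) filtering at the heart of NSGA-II selection,
# applied to hypothetical BMP scenarios scored by (annual cost increase, TP load).
# Both objectives are minimized; the real study evaluates 35 BMP combinations.
def dominates(a, b):
    """True if solution a is no worse than b in all objectives and better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    front = []
    for i, s in enumerate(solutions):
        if not any(dominates(other, s) for j, other in enumerate(solutions) if j != i):
            front.append(s)
    return front

bmp_scenarios = [
    (5.49, 0.24),   # ($/ha cost increase, relative TP load) -- illustrative numbers
    (12.0, 0.20),
    (3.0, 0.55),
    (12.5, 0.22),   # dominated by (12.0, 0.20)
    (25.0, 0.10),
]
print(pareto_front(bmp_scenarios))
```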
Fey, Nicholas P; Klute, Glenn K; Neptune, Richard R
2012-11-01
Unilateral below-knee amputees develop abnormal gait characteristics that include bilateral asymmetries and an elevated metabolic cost relative to non-amputees. In addition, long-term prosthesis use has been linked to an increased prevalence of joint pain and osteoarthritis in the intact leg knee. To improve amputee mobility, prosthetic feet that utilize elastic energy storage and return (ESAR) have been designed, which perform important biomechanical functions such as providing body support and forward propulsion. However, the prescription of appropriate design characteristics (e.g., stiffness) is not well-defined since its influence on foot function and important in vivo biomechanical quantities such as metabolic cost and joint loading remain unclear. The design of feet that improve these quantities could provide considerable advancements in amputee care. Therefore, the purpose of this study was to couple design optimization with dynamic simulations of amputee walking to identify the optimal foot stiffness that minimizes metabolic cost and intact knee joint loading. A musculoskeletal model and distributed stiffness ESAR prosthetic foot model were developed to generate muscle-actuated forward dynamics simulations of amputee walking. Dynamic optimization was used to solve for the optimal muscle excitation patterns and foot stiffness profile that produced simulations that tracked experimental amputee walking data while minimizing metabolic cost and intact leg internal knee contact forces. Muscle and foot function were evaluated by calculating their contributions to the important walking subtasks of body support, forward propulsion and leg swing. The analyses showed that altering a nominal prosthetic foot stiffness distribution by stiffening the toe and mid-foot while making the ankle and heel less stiff improved ESAR foot performance by offloading the intact knee during early to mid-stance of the intact leg and reducing metabolic cost. The optimal design also provided moderate braking and body support during the first half of residual leg stance, while increasing the prosthesis contributions to forward propulsion and body support during the second half of residual leg stance. Future work will be directed at experimentally validating these results, which have important implications for future designs of prosthetic feet that could significantly improve amputee care.
The admissible portfolio selection problem with transaction costs and an improved PSO algorithm
NASA Astrophysics Data System (ADS)
Chen, Wei; Zhang, Wei-Guo
2010-05-01
In this paper, we discuss the portfolio selection problem with transaction costs under the assumption that there exist admissible errors on expected returns and risks of assets. We propose a new admissible efficient portfolio selection model and design an improved particle swarm optimization (PSO) algorithm because traditional optimization algorithms fail to work efficiently for our proposed problem. Finally, we offer a numerical example to illustrate the proposed effective approaches and compare the admissible portfolio efficient frontiers under different constraints.
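As a rough illustration, a bare-bones particle swarm optimization loop for a toy mean-variance portfolio with a proportional transaction-cost penalty is sketched below. The returns, covariance, cost rate, and PSO coefficients are assumptions; this is the generic PSO loop, not the improved PSO or the admissible-error model proposed in the paper.

```python
# Bare-bones PSO for a toy mean-variance portfolio with a proportional
# transaction-cost penalty. Generic PSO only; data are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([0.08, 0.12, 0.10])                     # expected returns (hypothetical)
cov = np.diag([0.02, 0.05, 0.03])                     # covariance (hypothetical)
w_old = np.array([1/3, 1/3, 1/3])                     # current holdings
k = 0.01                                              # proportional transaction cost

def cost(w):
    w = np.abs(w) / (np.abs(w).sum() + 1e-12)         # project onto the simplex
    risk = w @ cov @ w
    penalty = k * np.abs(w - w_old).sum()             # cost of rebalancing
    return risk - 0.5 * (mu @ w) + penalty            # risk - weighted return + cost

n_particles, n_iter, dim = 30, 200, 3
x = rng.random((n_particles, dim)); v = np.zeros_like(x)
pbest, pbest_val = x.copy(), np.array([cost(p) for p in x])
gbest = pbest[pbest_val.argmin()]

for _ in range(n_iter):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = x + v
    vals = np.array([cost(p) for p in x])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = x[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()]

print("optimal weights:", np.abs(gbest) / np.abs(gbest).sum())
```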
NASA Astrophysics Data System (ADS)
Braun, Robert Joseph
The advent of maturing fuel cell technologies presents an opportunity to achieve significant improvements in energy conversion efficiencies at many scales; thereby, simultaneously extending our finite resources and reducing "harmful" energy-related emissions to levels well below that of near-future regulatory standards. However, before realization of the advantages of fuel cells can take place, systems-level design issues regarding their application must be addressed. Using modeling and simulation, the present work offers optimal system design and operation strategies for stationary solid oxide fuel cell systems applied to single-family detached dwellings. A one-dimensional, steady-state finite-difference model of a solid oxide fuel cell (SOFC) is generated and verified against other mathematical SOFC models in the literature. Fuel cell system balance-of-plant components and costs are also modeled and used to provide an estimate of system capital and life cycle costs. The models are used to evaluate optimal cell-stack power output, the impact of cell operating and design parameters, fuel type, thermal energy recovery, system process design, and operating strategy on overall system energetic and economic performance. Optimal cell design voltage, fuel utilization, and operating temperature parameters are found using minimization of the life cycle costs. System design evaluations reveal that hydrogen-fueled SOFC systems demonstrate lower system efficiencies than methane-fueled systems. The use of recycled cell exhaust gases in process design in the stack periphery are found to produce the highest system electric and cogeneration efficiencies while achieving the lowest capital costs. Annual simulations reveal that efficiencies of 45% electric (LHV basis), 85% cogenerative, and simple economic paybacks of 5--8 years are feasible for 1--2 kW SOFC systems in residential-scale applications. Design guidelines that offer additional suggestions related to fuel cell-stack sizing and operating strategy (base-load or load-following and cogeneration or electric-only) are also presented.
Systems and methods for energy cost optimization in a building system
Turney, Robert D.; Wenzel, Michael J.
2016-09-06
Methods and systems to minimize energy cost in response to time-varying energy prices are presented for a variety of different pricing scenarios. A cascaded model predictive control system is disclosed comprising an inner controller and an outer controller. The inner controller controls power use using a derivative of a temperature setpoint and the outer controller controls temperature via a power setpoint or power deferral. An optimization procedure is used to minimize a cost function within a time horizon subject to temperature constraints, equality constraints, and demand charge constraints. Equality constraints are formulated using system model information and system state information whereas demand charge constraints are formulated using system state information and pricing information. A masking procedure is used to invalidate demand charge constraints for inactive pricing periods including peak, partial-peak, off-peak, critical-peak, and real-time.
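A toy, non-cascaded version of the horizon optimization idea (minimize purchase cost under time-varying prices subject to simple state constraints) might look as follows; the prices, load, and storage parameters are hypothetical, and the plain linear program stands in for the patented cascaded model predictive controller.

```python
# Toy horizon optimization: buy power p_t at a time-varying price to serve a fixed
# load from a small storage device, keeping the stored energy within bounds.
# A plain LP sketch, not the cascaded MPC described above.
import numpy as np
from scipy.optimize import linprog

price = np.array([0.10, 0.10, 0.35, 0.35, 0.12, 0.12])   # $/kWh, peak in hours 2-3
load = np.array([2.0, 2.0, 3.0, 3.0, 2.0, 2.0])           # kWh per hour
T = len(price)
p_max, s_max, s0 = 5.0, 6.0, 3.0                           # purchase cap, storage cap, initial storage

L = np.tril(np.ones((T, T)))                               # cumulative-sum operator
# Storage state: s_t = s0 + (L @ p)_t - (L @ load)_t, required to stay in [0, s_max].
A_ub = np.vstack([-L, L])
b_ub = np.concatenate([s0 - L @ load, s_max - s0 + L @ load])
res = linprog(price, A_ub=A_ub, b_ub=b_ub, bounds=[(0.0, p_max)] * T, method="highs")
print("hourly purchases:", np.round(res.x, 2), "energy cost: $", round(res.fun, 2))
```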
NASA Astrophysics Data System (ADS)
Bogoljubova, M. N.; Afonasov, A. I.; Kozlov, B. N.; Shavdurov, D. E.
2018-05-01
A predictive simulation technique for optimal cutting modes in the turning of workpieces made of nickel-based heat-resistant alloys, differing from well-known approaches, is proposed. The impact of various factors on the cutting process is analyzed to determine optimal machining parameters according to selected effectiveness criteria. A mathematical optimization model, algorithms and computer programs, and visual graphical forms reflecting the dependence of the effectiveness criteria (productivity, net cost, and tool life) on the parameters of the technological process have been developed. A nonlinear model for multidimensional functions, solution of equations with multiple unknowns, a coordinate descent method, and heuristic algorithms are adopted to solve the cutting-mode parameter optimization problem. Research shows that in machining of workpieces made from heat-resistant alloy AISI N07263, the highest possible productivity is achieved with the following parameters: cutting speed v = 22.1 m/min, feed rate s = 0.26 mm/rev, tool life T = 18 min, and net cost of 2.45 per hour.
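A generic coordinate-descent sketch in the spirit of the solution approach is given below: cutting speed and feed rate are optimized alternately over a made-up per-part cost. The cost function, coefficients, and bounds are illustrative assumptions, not the article's alloy-specific model.

```python
# Generic coordinate descent: alternately optimize cutting speed v and feed s
# over a toy per-part cost trading machining time, tool wear, and surface quality.
from scipy.optimize import minimize_scalar

def part_cost(v, s):
    time_cost = 250.0 / (v * s)        # machining-time cost falls with speed and feed (toy)
    tool_cost = 0.05 * v ** 2          # tool wear grows with cutting speed (toy)
    quality_cost = 400.0 * s ** 2      # surface-quality/rework penalty grows with feed (toy)
    return time_cost + tool_cost + quality_cost

bounds_v, bounds_s = (10.0, 60.0), (0.05, 0.5)   # m/min, mm/rev
v, s = 30.0, 0.2                                  # starting point

for _ in range(20):                               # alternate 1-D minimizations
    v = minimize_scalar(lambda vv: part_cost(vv, s), bounds=bounds_v, method="bounded").x
    s = minimize_scalar(lambda ss: part_cost(v, ss), bounds=bounds_s, method="bounded").x

print(f"v = {v:.1f} m/min, s = {s:.3f} mm/rev, cost = {part_cost(v, s):.2f}")
```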
Economic and environmental optimization of a multi-site utility network for an industrial complex.
Kim, Sang Hun; Yoon, Sung-Geun; Chae, Song Hwa; Park, Sunwon
2010-01-01
Most chemical companies consume a lot of steam, water and electrical resources in the production process. Given recent record fuel costs, utility networks must be optimized to reduce the overall cost of production. Environmental concerns must also be considered when preparing modifications to satisfy the requirements for industrial utilities, since wastes discharged from the utility networks are restricted by environmental regulations. Construction of Eco-Industrial Parks (EIPs) has drawn attention as a promising approach for retrofitting existing industrial parks to improve energy efficiency. The optimization of the utility network within an industrial complex is one of the most important undertakings to minimize energy consumption and waste loads in the EIP. In this work, a systematic approach to optimize the utility network of an industrial complex is presented. An important issue in the optimization of a utility network is the desire of the companies to achieve high profits while complying with the environmental regulations. Therefore, the proposed optimization was performed with consideration of both economic and environmental factors. The proposed approach consists of unit modeling using thermodynamic principles, mass and energy balances, development of a multi-period Mixed Integer Linear Programming (MILP) model for the integration of utility systems in an industrial complex, and an economic/environmental analysis of the results. This approach is applied to the Yeosu Industrial Complex, considering seasonal utility demands. The results show that both the total utility cost and waste load are reduced by optimizing the utility network of an industrial complex. 2009 Elsevier Ltd. All rights reserved.
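A toy single-period version of such a utility-network MILP, written with the PuLP modeling package, is sketched below: two hypothetical boilers, a steam demand, and an emissions cap standing in for the environmental factor. The data and the tiny model are assumptions for illustration only, not the Yeosu multi-period formulation.

```python
# Toy utility-network MILP: decide which boilers run (binary) and how much steam
# each raises (continuous) to meet demand at minimum cost under an emissions cap.
import pulp

demand = 120.0                            # t/h steam required
cap = {"B1": 100.0, "B2": 80.0}           # boiler capacities, t/h
fuel_cost = {"B1": 18.0, "B2": 25.0}      # $ per t steam
startup_cost = {"B1": 200.0, "B2": 120.0}
emission = {"B1": 0.9, "B2": 0.4}         # t CO2 per t steam
emission_cap = 95.0                       # t CO2/h allowed

prob = pulp.LpProblem("utility_network", pulp.LpMinimize)
steam = {b: pulp.LpVariable(f"steam_{b}", lowBound=0, upBound=cap[b]) for b in cap}
on = {b: pulp.LpVariable(f"on_{b}", cat="Binary") for b in cap}

prob += pulp.lpSum(fuel_cost[b] * steam[b] + startup_cost[b] * on[b] for b in cap)
prob += pulp.lpSum(steam[b] for b in cap) >= demand, "meet_demand"
prob += pulp.lpSum(emission[b] * steam[b] for b in cap) <= emission_cap, "emission_cap"
for b in cap:
    prob += steam[b] <= cap[b] * on[b]    # a boiler can raise steam only if it is on

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({b: steam[b].value() for b in cap}, "cost:", pulp.value(prob.objective))
```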
Li, Tian-Jiao; Li, Sai; Yuan, Yuan; Liu, Yu-Dong; Xu, Chuan-Long; Shuai, Yong; Tan, He-Ping
2017-04-03
Plenoptic cameras are used for capturing flames in studies of high-temperature phenomena. However, simulations of plenoptic camera models can be used prior to the experiment to improve experimental efficiency and reduce cost. In this work, microlens arrays, which are based on the established light field camera model, are optimized into a hexagonal structure with three types of microlenses. With this improved plenoptic camera model, light field imaging of static objects and flames is simulated using the calibrated parameters of the Raytrix camera (R29). The optimized models improve the image resolution, imaging screen utilization, and shooting range of depth of field.
Water-resources optimization model for Santa Barbara, California
Nishikawa, Tracy
1998-01-01
A simulation-optimization model has been developed for the optimal management of the city of Santa Barbara's water resources during a drought. The model, which links groundwater simulation with linear programming, has a planning horizon of 5 years. The objective is to minimize the cost of water supply subject to: water demand constraints, hydraulic head constraints to control seawater intrusion, and water capacity constraints. The decision variables are monthly water deliveries from surface water and groundwater. The state variables are hydraulic heads. The drought of 1947-51 is the city's worst drought on record, and simulated surface-water supplies for this period were used as a basis for testing optimal management of current water resources under drought conditions. The simulation-optimization model was applied using three reservoir operation rules. In addition, the model's sensitivity to demand, carryover [the storage of water in one year for use in a later year(s)], head constraints, and capacity constraints was tested.
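A toy single-month version of this kind of least-cost delivery allocation is sketched below; the three sources, their unit costs and capacities, and the pumping cap used as a stand-in for the head constraint are all hypothetical.

```python
# Toy single-month least-cost water-delivery LP: meet demand from three sources
# with different unit costs and capacities, with a groundwater pumping cap
# standing in for the head (seawater-intrusion) constraint. Data are hypothetical.
from scipy.optimize import linprog

cost = [120.0, 260.0, 900.0]        # $/acre-ft: surface, groundwater, desalinated
capacity = [800.0, 500.0, 600.0]    # acre-ft/month deliverable from each source
demand = 1250.0                     # acre-ft this month
gw_pump_limit = 300.0               # acre-ft/month proxy for the head constraint

A_ub = [[0.0, 1.0, 0.0]]            # groundwater pumping limit
b_ub = [gw_pump_limit]
A_eq = [[1.0, 1.0, 1.0]]            # deliveries must meet demand
b_eq = [demand]
bounds = [(0.0, c) for c in capacity]

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
print("deliveries (acre-ft):", res.x, "cost: $", round(res.fun, 2))
```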
Assessing recovery feasibility for piping plovers using optimization and simulation
Larson, M.A.; Ryan, M.R.; Murphy, R.K.
2003-01-01
Optimization and simulation modeling can be used to account for demographic and economic factors simultaneously in a comprehensive analysis of endangered-species population recovery. This is a powerful approach that is broadly applicable but underutilized in conservation biology. We applied the approach to a population recovery analysis of threatened and endangered piping plovers (Charadrius melodus) in the Great Plains of North America. Predator exclusion increases the reproductive success of piping plovers, but the most cost-efficient strategy of applying predator exclusion and the number of protected breeding pairs necessary to prevent further population declines were unknown. We developed a linear programming model to define strategies that would either maximize fledging rates or minimize financial costs by allocating plover pairs to 1 of 6 types of protection. We evaluated the optimal strategies using a stochastic population simulation model. The minimum cost to achieve a 20% chance of stabilizing simulated populations was approximately $1-11 million over 50 years. Increasing reproductive success to 1.24 fledglings/pair at minimal cost in any given area required fencing 85% of pairs at managed sites but cost 23% less than the current approach. Maximum fledging rates resulted in >20% of simulated populations reaching recovery goals in 30-50 years at cumulative costs of <$16 million. Protecting plover pairs within 50 km of natural resource agency field offices was sufficient to increase simulated populations to established recovery goals. A range-wide management plan needs to be developed and implemented to foster the involvement and cooperation among managers that will be necessary for recovery efforts to be successful. We also discuss how our approach can be applied to a variety of wildlife management issues.
Cost and benefits design optimization model for fault tolerant flight control systems
NASA Technical Reports Server (NTRS)
Rose, J.
1982-01-01
Requirements and specifications for a method of optimizing the design of fault-tolerant flight control systems are provided. Algorithms that could be used for developing new and modifying existing computer programs are also provided, with recommendations for follow-on work.
An auto-adaptive optimization approach for targeting nonpoint source pollution control practices.
Chen, Lei; Wei, Guoyuan; Shen, Zhenyao
2015-10-21
To address the computationally intensive and technically complex control of nonpoint source pollution, the traditional genetic algorithm was modified into an auto-adaptive pattern, and a new framework was proposed by integrating this new algorithm with a watershed model and an economic module. Although conceptually simple and comprehensive, the proposed algorithm searches automatically for Pareto-optimal solutions without a complex calibration of optimization parameters. The model was applied in a case study in a typical watershed of the Three Gorges Reservoir area, China. The results indicated that the evolutionary process of optimization was improved due to the incorporation of auto-adaptive parameters. In addition, the proposed algorithm outperformed state-of-the-art existing algorithms in terms of convergence ability and computational efficiency. At the same cost level, solutions with greater pollutant reductions could be identified. From a scientific viewpoint, the proposed algorithm could be extended to other watersheds to provide cost-effective configurations of BMPs.
NASA Astrophysics Data System (ADS)
Pathak, Savita; Mondal, Seema Sarkar
2010-10-01
A multi-objective inventory model of a deteriorating item has been developed with a Weibull rate of decay, time-dependent demand, demand-dependent production, and time-varying holding cost, allowing shortages, in fuzzy environments for non-integrated and integrated businesses. Here the objective is to maximize the profit from different deteriorating items subject to a space constraint. The impreciseness of inventory parameters and goals for the non-integrated business is expressed by linear membership functions. The compromise solutions are obtained by different fuzzy optimization methods. To incorporate the relative importance of the objectives, different crisp/fuzzy cardinal weights are assigned. The models are illustrated with numerical examples, and the results of the models with crisp and fuzzy weights are compared. The result for the model assuming an integrated business is obtained by using the Generalized Reduced Gradient (GRG) method. The fuzzy integrated model with imprecise inventory cost is formulated to optimize the possibility/necessity measure of the fuzzy goal of the objective function, using a credibility measure of the fuzzy event and taking the fuzzy expectation. The results of the crisp/fuzzy integrated model are illustrated with numerical examples and compared.
Cha, E; Kristensen, A R; Hertl, J A; Schukken, Y H; Tauer, L W; Welcome, F L; Gröhn, Y T
2014-01-01
Mastitis is a serious production-limiting disease, with effects on milk yield, milk quality, and conception rate, and an increase in the risk of mortality and culling. The objective of this study was 2-fold: (1) to develop an economic optimization model that incorporates all the different types of pathogens that cause clinical mastitis (CM) categorized into 8 classes of culture results, and account for whether the CM was a first, second, or third case in the current lactation and whether the cow had a previous case or cases of CM in the preceding lactation; and (2) to develop this decision model to be versatile enough to add additional pathogens, diseases, or other cow characteristics as more information becomes available without significant alterations to the basic structure of the model. The model provides economically optimal decisions depending on the individual characteristics of the cow and the specific pathogen causing CM. The net returns for the basic herd scenario (with all CM included) were $507/cow per year, where the incidence of CM (cases per 100 cow-years) was 35.6, of which 91.8% of cases were recommended for treatment under an optimal replacement policy. The cost per case of CM was $216.11. The CM cases comprised (incidences, %) Staphylococcus spp. (1.6), Staphylococcus aureus (1.8), Streptococcus spp. (6.9), Escherichia coli (8.1), Klebsiella spp. (2.2), other treated cases (e.g., Pseudomonas; 1.1), other not treated cases (e.g., Trueperella pyogenes; 1.2), and negative culture cases (12.7). The average cost per case, even under optimal decisions, was greatest for Klebsiella spp. ($477), followed by E. coli ($361), other treated cases ($297), and other not treated cases ($280). This was followed by the gram-positive pathogens; among these, the greatest cost per case was due to Staph. aureus ($266), followed by Streptococcus spp. ($174) and Staphylococcus spp. ($135); negative culture had the lowest cost ($115). The model recommended treatment for most CM cases (>85%); the range was 86.2% (Klebsiella spp.) to 98.5% (Staphylococcus spp.). In general, the optimal recommended time for replacement was up to 5 mo earlier for cows with CM compared with cows without CM. Furthermore, although the parameter estimates implemented in this model are applicable to the dairy farms in this study, the parameters may be altered to be specific to other dairy farms. Cow rankings and values based on disease status, pregnancy status, and milk production can be extracted; these provide guidance when determining which cows to keep or cull. Copyright © 2014 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Dietze, M. C.; Davidson, C. D.; Desai, A. R.; Feng, X.; Kelly, R.; Kooper, R.; LeBauer, D. S.; Mantooth, J.; McHenry, K.; Serbin, S. P.; Wang, D.
2012-12-01
Ecosystem models are designed to synthesize our current understanding of how ecosystems function and to predict responses to novel conditions, such as climate change. Reducing uncertainties in such models can thus improve both basic scientific understanding and our predictive capacity, but rarely have the models themselves been employed in the design of field campaigns. In the first part of this paper we provide a synthesis of uncertainty analyses conducted using the Predictive Ecosystem Analyzer (PEcAn) ecoinformatics workflow on the Ecosystem Demography model v2 (ED2). This work spans a number of projects synthesizing trait databases and using Bayesian data assimilation techniques to incorporate field data across temperate forests, grasslands, agriculture, short rotation forestry, boreal forests, and tundra. We report on a number of data needs that span a wide array of diverse biomes, such as the need for better constraint on growth respiration. We also identify other data needs that are biome specific, such as reproductive allocation in tundra, leaf dark respiration in forestry and early-successional trees, and root allocation and turnover in mid- and late-successional trees. Future data collection needs to balance the unequal distribution of past measurements across biomes (temperate biased) and processes (aboveground biased) with the sensitivities of different processes. In the second part we present the development of a power analysis and sampling optimization module for the PEcAn system. This module uses the results of variance decomposition analyses to estimate the further reduction in model predictive uncertainty for different sample sizes of different variables. By assigning a cost to each measurement type, we apply basic economic theory to optimize the reduction in model uncertainty for any total expenditure, or to determine the cost required to reduce uncertainty to a given threshold. Using this system we find that sampling switches among multiple measurement types but favors those with no prior measurements due to the need to integrate over prior uncertainty in within- and among-site variability. When starting from scratch in a new system, the optimal design favors initial measurements of SLA due to high sensitivity and low cost. The value of many data types, such as photosynthetic response curves, depends strongly on whether one includes initial equipment costs or just per-sample costs. Similarly, sampling at previously measured locations is favored when infrastructure costs are high, otherwise across-site sampling is favored over intensive sampling except when within-site variability strongly dominates.
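As a rough sketch of the budget-allocation idea, the snippet below repeatedly "buys" the measurement with the largest marginal variance reduction per dollar until a budget is exhausted. The measurement list, costs, and the simple variance-shrinkage rule are illustrative assumptions, not PEcAn's actual variance decomposition.

```python
# Toy value-of-information-per-dollar allocation: repeatedly buy the sample whose
# marginal reduction in contributed variance per unit cost is largest, until a
# budget is spent. Costs and the 1/(n0+n) shrinkage rule are illustrative only.
measurements = {            # name: (prior variance contribution, cost per sample, effective prior n)
    "SLA":           (0.30, 10.0, 1.0),
    "growth_resp":   (0.25, 40.0, 1.0),
    "A-Ci_curve":    (0.20, 150.0, 1.0),
    "root_turnover": (0.25, 60.0, 1.0),
}
budget = 1000.0
n_bought = {k: 0 for k in measurements}

def contributed_var(name, n):
    v0, _, n0 = measurements[name]
    return v0 * n0 / (n0 + n)         # variance shrinks as samples accumulate

while True:
    best, best_gain = None, 0.0
    for name, (_, cost, _) in measurements.items():
        if cost > budget:
            continue
        gain = (contributed_var(name, n_bought[name]) - contributed_var(name, n_bought[name] + 1)) / cost
        if gain > best_gain:
            best, best_gain = name, gain
    if best is None:
        break
    n_bought[best] += 1
    budget -= measurements[best][1]

print("samples purchased:", n_bought, "remaining budget:", budget)
```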
NASA Astrophysics Data System (ADS)
Kolosionis, Konstantinos; Papadopoulou, Maria P.
2017-04-01
Monitoring networks provide essential information for water resources management, especially in areas with significant groundwater exploitation due to extensive agricultural activities. In this work, a simulation-optimization framework is developed based on heuristic optimization methodologies and geostatistical modeling approaches to obtain an optimal design for a groundwater quality monitoring network. Groundwater quantity and quality data obtained from 43 existing observation locations at 3 different hydrological periods in the Mires basin in Crete, Greece are used in the proposed framework, via regression kriging, to develop the spatial distribution of nitrate concentration in the aquifer of interest. Based on the existing groundwater quality mapping, the proposed optimization tool determines a cost-effective observation-well network that contributes significant information to water managers and authorities. Observation wells that add little or no beneficial information to the groundwater level and quality mapping of the area can be eliminated using estimation uncertainty and statistical error metrics without affecting the assessment of groundwater quality. Given the high maintenance cost of groundwater monitoring networks, the proposed tool could be used by water regulators in the decision-making process to obtain an efficient network design.
Power system modeling and optimization methods vis-a-vis integrated resource planning (IRP)
NASA Astrophysics Data System (ADS)
Arsali, Mohammad H.
1998-12-01
The ongoing restructuring of power industries is changing the fundamental nature of the retail electricity business. As a result, the Integrated Resource Planning (IRP) strategies implemented by electric utilities are also undergoing modifications. Such modifications stem from the need to minimize revenue requirements and maximize electrical system reliability vis-a-vis capacity additions (viewed as potential investments). IRP modifications also provide service-design bases to meet customer needs profitably. The purpose of the research deliberated in this dissertation is to propose procedures for optimal IRP intended to expand the generation facilities of a power system over an extended period of time. Relevant topics addressed in this research towards IRP optimization are as follows: (1) historical perspective and evolutionary aspects of power system production-costing models and optimization techniques; (2) a survey of major U.S. electric utilities adopting IRP under a changing socioeconomic environment; (3) a new technique, designated the Segmentation Method, for production costing via IRP optimization; (4) construction of a fuzzy relational database of a typical electric power utility system for IRP purposes; and (5) a genetic algorithm based approach for IRP optimization using the fuzzy relational database.
NASA Astrophysics Data System (ADS)
Sidibe, Souleymane
The implementation and monitoring of operational flight plans is a major task for commercial flight crews. Its purpose is to set the vertical and lateral trajectories followed by the airplane during the phases of flight: climb, cruise, descent, etc. These trajectories are subject to conflicting economic constraints (minimization of flight time and of fuel consumed) and to environmental constraints. In its mission-planning task, the crew is assisted by the Flight Management System (FMS), which is used to construct the path to follow and to predict the behaviour of the aircraft along the flight plan. The FMS considered in our research includes an optimization model that only calculates the optimal speed profile minimizing the overall cost of the flight, synthesized by a cost-index criterion, at a fixed cruising altitude. However, a model based solely on optimization of the speed profile is not sufficient. The current optimization must be expanded to simultaneous optimization of speed and altitude in order to determine an optimum cruise altitude that minimizes the overall cost when the path is flown with the optimal speed profile. A new program was therefore developed, based on Bellman's dynamic programming method for solving optimal-path problems. In addition, the improvement involves investigating new trajectory patterns that integrate step (ascending) cruises and use the lateral plane with the effect of weather: wind and temperature. Finally, for better optimization, the program takes into account the flight-envelope constraints of the aircraft that use the FMS.
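A tiny Bellman-style dynamic program over a discretized cruise profile is sketched below to illustrate the recursion; the flight levels, stage costs, step-climb penalty, and wind bonus are invented and bear no relation to actual FMS performance data.

```python
# Tiny Bellman-style DP: at each along-track segment the aircraft holds its flight
# level or steps one level up/down; the stage cost combines a toy fuel flow and a
# cost-index-weighted time term. All numbers are illustrative, not FMS data.
import math

levels = [330, 350, 370, 390]                 # flight levels (hundreds of ft)
n_segments = 8
cost_index_time = 5.0                         # $ per segment (time term, same for all levels)

def stage_cost(level_idx, segment):
    fuel = 14.0 - 1.5 * level_idx             # higher levels burn less fuel (toy)
    wind = 2.0 if (segment >= 4 and level_idx == 1) else 0.0   # favorable wind band (toy)
    return fuel + cost_index_time - wind

climb_cost = 3.0                              # extra fuel for a step climb/descent (toy)

# Backward recursion: V[k][i] = min cost to finish from segment k at level i.
V = [[0.0] * len(levels) for _ in range(n_segments + 1)]
policy = [[0] * len(levels) for _ in range(n_segments)]
for k in range(n_segments - 1, -1, -1):
    for i in range(len(levels)):
        best = math.inf
        for j in (i - 1, i, i + 1):           # hold, or step one level up/down
            if 0 <= j < len(levels):
                c = stage_cost(i, k) + (climb_cost if j != i else 0.0) + V[k + 1][j]
                if c < best:
                    best, policy[k][i] = c, j
        V[k][i] = best

# Roll the optimal profile forward from FL330.
i, profile = 0, []
for k in range(n_segments):
    profile.append(levels[i])
    i = policy[k][i]
print("optimal level per segment:", profile, "cost:", round(V[0][0], 1))
```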
Integrated modeling approach for optimal management of water, energy and food security nexus
NASA Astrophysics Data System (ADS)
Zhang, Xiaodong; Vesselinov, Velimir V.
2017-03-01
Water, energy and food (WEF) are inextricably interrelated. Effective planning and management of limited WEF resources to meet current and future socioeconomic demands for sustainable development is challenging. WEF production/delivery may also produce environmental impacts; as a result, greenhouse-gas emission control will impact WEF nexus management as well. Nexus management for WEF security necessitates integrated tools for predictive analysis that are capable of identifying the tradeoffs among various sectors, generating cost-effective planning and management strategies and policies. To address these needs, we have developed an integrated model analysis framework and tool called WEFO. WEFO provides a multi-period socioeconomic model for predicting how to satisfy WEF demands based on model inputs representing production costs, socioeconomic demands, and environmental controls. WEFO is applied to quantitatively analyze the interrelationships and trade-offs among system components, including energy supply, electricity generation, water supply and demand, and food production, as well as mitigation of environmental impacts. WEFO is demonstrated by solving a hypothetical nexus management problem consistent with real-world management scenarios. Model parameters are analyzed using global sensitivity analysis and their effects on total system cost are quantified. The obtained results demonstrate how these types of analyses can be helpful for decision-makers and stakeholders to make cost-effective decisions for optimal WEF management.
Optimal resource allocation for defense of targets based on differing measures of attractiveness.
Bier, Vicki M; Haphuriwat, Naraphorn; Menoyo, Jaime; Zimmerman, Rae; Culpen, Alison M
2008-06-01
This article describes the results of applying a rigorous computational model to the problem of the optimal defensive resource allocation among potential terrorist targets. In particular, our study explores how the optimal budget allocation depends on the cost effectiveness of security investments, the defender's valuations of the various targets, and the extent of the defender's uncertainty about the attacker's target valuations. We use expected property damage, expected fatalities, and two metrics of critical infrastructure (airports and bridges) as our measures of target attractiveness. Our results show that the cost effectiveness of security investment has a large impact on the optimal budget allocation. Also, different measures of target attractiveness yield different optimal budget allocations, emphasizing the importance of developing more realistic terrorist objective functions for use in budget allocation decisions for homeland security.
Robust optimization-based DC optimal power flow for managing wind generation uncertainty
NASA Astrophysics Data System (ADS)
Boonchuay, Chanwit; Tomsovic, Kevin; Li, Fangxing; Ongsakul, Weerakorn
2012-11-01
Integrating wind generation into the wider grid causes a number of challenges to traditional power system operation. Given the relatively large wind forecast errors, congestion management tools based on optimal power flow (OPF) need to be improved. In this paper, a robust optimization (RO)-based DCOPF is proposed to determine the optimal generation dispatch and locational marginal prices (LMPs) for a day-ahead competitive electricity market considering the risk of dispatch cost variation. The basic concept is to use the dispatch to hedge against the possibility of reduced or increased wind generation. The proposed RO-based DCOPF is compared with a stochastic non-linear programming (SNP) approach on a modified PJM 5-bus system. Primary test results show that the proposed DCOPF model can provide lower dispatch cost than the SNP approach.
Piezoresistive Cantilever Performance—Part II: Optimization
Park, Sung-Jin; Doll, Joseph C.; Rastegar, Ali J.; Pruitt, Beth L.
2010-01-01
Piezoresistive silicon cantilevers fabricated by ion implantation are frequently used for force, displacement, and chemical sensors due to their low cost and electronic readout. However, the design of piezoresistive cantilevers is not a straightforward problem due to coupling between the design parameters, constraints, process conditions, and performance. We systematically analyzed the effect of design and process parameters on force resolution and then developed an optimization approach to improve force resolution while satisfying various design constraints using simulation results. The combined simulation and optimization approach is extensible to other doping methods beyond ion implantation in principle. The optimization results were validated by fabricating cantilevers with the optimized conditions and characterizing their performance. The measurement results demonstrate that the analytical model accurately predicts force and displacement resolution, and the sensitivity-noise tradeoff in optimal cantilever performance. We also performed a comparison between our optimization technique and existing models and demonstrated an eightfold improvement in force resolution over simplified models. PMID:20333323
A model for interprovincial air pollution control based on futures prices.
Zhao, Laijun; Xue, Jian; Gao, Huaizhu Oliver; Li, Changmin; Huang, Rongbing
2014-05-01
Based on the current status of research on tradable emission rights futures, this paper introduces basic market-related assumptions for China's interprovincial air pollution control problem. The authors construct an interprovincial air pollution control model based on futures prices: the model calculated the spot price of emission rights using a classic futures pricing formula, and determined the identities of buyers and sellers for various provinces according to a partitioning criterion, thereby revealing five trading markets. To ensure interprovincial cooperation, a rational allocation result for the benefits from this model was achieved using the Shapley value method to construct an optimal reduction program and to determine the optimal annual decisions for each province. Finally, the Beijing-Tianjin-Hebei region was used as a case study, as this region has recently experienced serious pollution. It was found that the model reduced the overall cost of reducing SO2 pollution. Moreover, each province can lower its cost for air pollution reduction, resulting in a win-win solution. Adopting the model would therefore enhance regional cooperation and promote the control of China's air pollution. The authors construct an interprovincial air pollution control model based on futures prices. The Shapley value method is used to rationally allocate the cooperation benefit. Interprovincial pollution control reduces the overall reduction cost of SO2. Each province can lower its cost for air pollution reduction by cooperation.
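For illustration, a generic Shapley-value computation of the kind used to split cooperative abatement savings is sketched below; the three-province coalition values are made-up numbers, not results from the paper.

```python
# Generic Shapley-value allocation of cooperative savings among provinces.
# The coalition values v(S) are hypothetical, for illustration only.
from itertools import permutations

players = ["Beijing", "Tianjin", "Hebei"]
savings = {
    frozenset(): 0.0,
    frozenset({"Beijing"}): 0.0, frozenset({"Tianjin"}): 0.0, frozenset({"Hebei"}): 0.0,
    frozenset({"Beijing", "Tianjin"}): 30.0,
    frozenset({"Beijing", "Hebei"}): 80.0,
    frozenset({"Tianjin", "Hebei"}): 50.0,
    frozenset({"Beijing", "Tianjin", "Hebei"}): 120.0,
}

def shapley(players, v):
    phi = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:                       # average marginal contribution over all join orders
        coalition = frozenset()
        for p in order:
            phi[p] += v[coalition | {p}] - v[coalition]
            coalition = coalition | {p}
    return {p: phi[p] / len(orders) for p in players}

print(shapley(players, savings))
```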
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Qingda; Gao, Xiaoyang; Krishnamoorthy, Sriram
Empirical optimizers like ATLAS have been very effective in optimizing computational kernels in libraries. The best choice of parameters such as tile size and degree of loop unrolling is determined by executing different versions of the computation. In contrast, optimizing compilers use a model-driven approach to program transformation. While the model-driven approach of optimizing compilers is generally orders of magnitude faster than ATLAS-like library generators, its effectiveness can be limited by the accuracy of the performance models used. In this paper, we describe an approach where a class of computations is modeled in terms of constituent operations that are empirically measured, thereby allowing modeling of the overall execution time. The performance model with empirically determined cost components is used to perform data layout optimization together with the selection of library calls and layout transformations in the context of the Tensor Contraction Engine, a compiler for a high-level domain-specific language for expressing computational models in quantum chemistry. The effectiveness of the approach is demonstrated through experimental measurements on representative computations from quantum chemistry.
A model of optimal voluntary muscular control.
FitzHugh, R
1977-07-19
In the absence of detailed knowledge of how the CNS controls a muscle through its motor fibers, a reasonable hypothesis is that of optimal control. This hypothesis is studied using a simplified mathematical model of a single muscle, based on A.V. Hill's equations, with series elastic element omitted, and with the motor signal represented by a single input variable. Two cost functions were used. The first was total energy expended by the muscle (work plus heat). If the load is a constant force, with no inertia, Hill's optimal velocity of shortening results. If the load includes a mass, analysis by optimal control theory shows that the motor signal to the muscle consists of three phases: (1) maximal stimulation to accelerate the mass to the optimal velocity as quickly as possible, (2) an intermediate level of stimulation to hold the velocity at its optimal value, once reached, and (3) zero stimulation, to permit the mass to slow down, as quickly as possible, to zero velocity at the specified distance shortened. If the latter distance is too small, or the mass too large, the optimal velocity is not reached, and phase (2) is absent. For lengthening, there is no optimal velocity; there are only two phases, zero stimulation followed by maximal stimulation. The second cost function was total time. The optimal control for shortening consists of only phases (1) and (3) above, and is identical to the minimal energy control whenever phase (2) is absent from the latter. Generalization of this model to include viscous loads and a series elastic element are discussed.
A simulation-optimization model for water-resources management, Santa Barbara, California
Nishikawa, Tracy
1998-01-01
In times of drought, the local water supplies of the city of Santa Barbara, California, are insufficient to satisfy water demand. In response, the city has built a seawater desalination plant and gained access to imported water in 1997. Of primary concern to the city is delivering water from the various sources at a minimum cost while satisfying water demand and controlling seawater intrusion that might result from the overpumping of ground water. A simulation-optimization model has been developed for the optimal management of Santa Barbara's water resources. The objective is to minimize the cost of water supply while satisfying various physical and institutional constraints such as meeting water demand, maintaining minimum hydraulic heads at selected sites, and not exceeding water-delivery or pumping capacities. The model is formulated as a linear programming problem with monthly management periods and a total planning horizon of 5 years. The decision variables are water deliveries from surface water (Gibraltar Reservoir, Cachuma Reservoir, Cachuma Reservoir cumulative annual carryover, Mission Tunnel, State Water Project, and desalinated seawater) and ground water (13 production wells). The state variables are hydraulic heads. Basic assumptions for all simulations are that (1) the cost of water varies with source but is fixed over time, and (2) only existing or planned city wells are considered; that is, the construction of new wells is not allowed. The drought of 1947–51 is Santa Barbara's worst drought on record, and simulated surface-water supplies for this period were used as a basis for testing optimal management of current water resources under drought conditions. Assumptions that were made for this base case include a head constraint equal to sea level at the coastal nodes; Cachuma Reservoir carryover of 3,000 acre-feet per year, with a maximum carryover of 8,277 acre-feet; a maximum annual demand of 15,000 acre-feet; and average monthly capacities for the Cachuma and the Gibraltar Reservoirs. The base-case results indicate that water demands can be met, with little water required from the most expensive water source (desalinated seawater), at a total cost of $5.56 million over the 5-year planning horizon. The simulation model has drains, which operate as nonlinear functions of heads and could affect the model solutions. However, numerical tests show that the drains have little effect on the optimal solution. Sensitivity analyses on the base case yield the following results: If allowable Cachuma Reservoir carryover is decreased by about 50 percent, then costs increase by about 14 percent; if the peak demand is decreased by 7 percent, then costs will decrease by about 14 percent; if the head constraints are loosened to -30 feet, then the costs decrease by about 18 percent; if the heads are constrained such that a zero hydraulic gradient condition occurs at the ocean boundary, then the optimization problem does not have a solution; if the capacity of the desalination plant is constrained to zero acre-feet, then the cost increases by about 2 percent; and if the carryover of State Water Project water is implemented, then the cost decreases by about 0.5 percent.
Four additional monthly diversion distribution scenarios for the reservoirs were tested: average monthly Cachuma Reservoir deliveries with the actual (scenario 1) and proposed (scenario 2) monthly distributions of Gibraltar Reservoir water, and variable monthly Cachuma Reservoir deliveries with the actual (scenario 3) and proposed (scenario 4) monthly distributions of Gibraltar Reservoir water. Scenario 1 resulted in a total cost of about $7.55 million, scenario 2 resulted in a total cost of about $5.07 million, and scenarios 3 and 4 resulted in a total cost of about $4.53 million. Sensitivities of the scenarios 1 and 2 to desalination-plant capacity and State Water Project water carryover were tested. The scenario 1 sensitivity analysis indicated that incorpo
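As a rough illustration of the kind of monthly linear program described above (not the actual Santa Barbara model), the sketch below uses scipy.optimize.linprog with invented unit costs, source capacities, and a single month's demand; only the structure, minimizing delivery cost subject to meeting demand and per-source capacity limits, mirrors the abstract.

from scipy.optimize import linprog

# All numbers below are assumptions for illustration, not data from the study.
sources = ["Gibraltar", "Cachuma", "Mission Tunnel", "State Water Project", "Groundwater", "Desalination"]
unit_cost = [50, 80, 60, 400, 150, 1900]      # $ per acre-foot (assumed)
capacity = [250, 700, 120, 250, 500, 300]     # acre-feet deliverable this month (assumed)
demand = 1250                                  # acre-feet of demand this month (assumed)

# minimize c^T x  subject to  sum(x) >= demand  and  0 <= x_i <= capacity_i
res = linprog(c=unit_cost,
              A_ub=[[-1.0] * len(sources)], b_ub=[-demand],
              bounds=list(zip([0.0] * len(sources), capacity)),
              method="highs")

for name, x in zip(sources, res.x):
    print(f"{name:20s} {x:7.1f} acre-ft")
print(f"minimum monthly cost: ${res.fun:,.0f}")

In the full model the same structure repeats for every month of the 5-year horizon, with head constraints supplied by the ground-water simulation.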
An intelligent emissions controller for fuel lean gas reburn in coal-fired power plants.
Reifman, J; Feldman, E E; Wei, T Y; Glickert, R W
2000-02-01
The application of artificial intelligence techniques for performance optimization of the fuel lean gas reburn (FLGR) system is investigated. A multilayer, feedforward artificial neural network is applied to model static nonlinear relationships between the distribution of injected natural gas into the upper region of the furnace of a coal-fired boiler and the corresponding oxides of nitrogen (NOx) emissions exiting the furnace. Based on this model, optimal distributions of injected gas are determined such that the largest NOx reduction is achieved for each value of total injected gas. This optimization is accomplished through the development of a new optimization method based on neural networks. This new optimal control algorithm, which can be used as an alternative generic tool for solving multidimensional nonlinear constrained optimization problems, is described and its results are successfully validated against an off-the-shelf tool for solving mathematical programming problems. Encouraging results obtained using plant data from one of Commonwealth Edison's coal-fired electric power plants demonstrate the feasibility of the overall approach. Preliminary results show that the use of this intelligent controller will also enable the determination of the most cost-effective operating conditions of the FLGR system by considering, along with the optimal distribution of the injected gas, the cost differential between natural gas and coal and the open-market price of NOx emission credits. Further study, however, is necessary, including the construction of a more comprehensive database, needed to develop high-fidelity process models and to add carbon monoxide (CO) emissions to the model of the gas reburn system.
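A minimal sketch of the two ingredients described above, a feedforward network as a static surrogate and a search over gas distributions for the largest NOx reduction, is given below. The three injection zones, the synthetic training data, and the coarse grid search are all assumptions; the real controller uses plant data and a purpose-built optimizer rather than this brute-force loop.

import itertools
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.dirichlet(alpha=[1, 1, 1], size=400)            # gas split among 3 zones (synthetic)
nox = (300 - 80 * X[:, 0] - 120 * X[:, 1] - 60 * X[:, 2]
       + 40 * X[:, 1] ** 2 + rng.normal(0, 3, size=400))  # synthetic NOx response (ppm)

surrogate = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
surrogate.fit(X, nox)

# candidate splits on a coarse grid that sum to 1 (total injected gas held fixed)
grid = np.arange(0, 1.01, 0.05)
candidates = [(a, b, 1 - a - b) for a, b in itertools.product(grid, grid) if a + b <= 1.0]
pred = surrogate.predict(np.array(candidates))
best = candidates[int(np.argmin(pred))]
print("best split over zones:", np.round(best, 2), "predicted NOx:", round(pred.min(), 1))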
Cost-effectiveness on a local level: whether and when to adopt a new technology.
Woertman, Willem H; Van De Wetering, Gijs; Adang, Eddy M M
2014-04-01
Cost-effectiveness analysis has become a widely accepted tool for decision making in health care. The standard textbook cost-effectiveness analysis focuses on whether to make the switch from an old or common practice technology to an innovative technology, and in doing so, it takes a global perspective. In this article, we are interested in a local perspective, and we look at the questions of whether and when the switch from old to new should be made. A new approach to cost-effectiveness from a local (e.g., a hospital) perspective, by means of a mathematical model for cost-effectiveness that explicitly incorporates time, is proposed. A decision rule is derived for establishing whether a new technology should be adopted, as well as a general rule for establishing when it pays to postpone adoption by 1 more period, and a set of decision rules that can be used to determine the optimal timing of adoption. Finally, a simple example is presented to illustrate our model and how it leads to optimal decision making in a number of cases.
The HDAC Inhibitor TSA Ameliorates a Zebrafish Model of Duchenne Muscular Dystrophy.
Johnson, Nathan M; Farr, Gist H; Maves, Lisa
2013-09-17
Zebrafish are an excellent model for Duchenne muscular dystrophy. In particular, zebrafish provide a system for rapid, easy, and low-cost screening of small molecules that can ameliorate muscle damage in dystrophic larvae. Here we identify an optimal anti-sense morpholino cocktail that robustly knocks down zebrafish Dystrophin (dmd-MO). We use two approaches, muscle birefringence and muscle actin expression, to quantify muscle damage and show that the dmd-MO dystrophic phenotype closely resembles the zebrafish dmd mutant phenotype. We then show that the histone deacetylase (HDAC) inhibitor TSA, which has been shown to ameliorate the mdx mouse Duchenne model, can rescue muscle fiber damage in both dmd-MO and dmd mutant larvae. Our study identifies optimal morpholino and phenotypic scoring approaches for dystrophic zebrafish, further enhancing the zebrafish dmd model for rapid and cost-effective small molecule screening.
NASA Astrophysics Data System (ADS)
Ortiz-Matos, L.; Aguila-Tellez, A.; Hincapié-Reyes, R. C.; González-Sanchez, J. W.
2017-07-01
Recent mathematical models for designing electrification systems solve the problems of component location, the type of electrification components, and the design of possible distribution microgrids. However, as the number of points to be electrified increases, solving these models requires high computational times, making them impractical. This study proposes a new heuristic method for the electrification of rural areas to overcome this problem. The heuristic algorithm addresses the deployment of rural electrification microgrids by finding routes for the optimal placement of lines and transformers in transmission and distribution microgrids. The challenge is to obtain a deployment with equitable losses, considering the capacity constraints of the devices and the topology of the terrain, at minimal economic cost. An optimal scenario ensures the electrification of all neighbourhoods at a minimum investment cost in terms of the distance between electric conductors and the number of transformation devices.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paul, Prokash; Bhattacharyya, Debangsu; Turton, Richard
Here, a novel sensor network design (SND) algorithm is developed for maximizing process efficiency while minimizing sensor network cost for a nonlinear dynamic process with an estimator-based control system. The multiobjective optimization problem is solved following a lexicographic approach where the process efficiency is maximized first followed by minimization of the sensor network cost. The partial net present value, which combines the capital cost due to the sensor network and the operating cost due to deviation from the optimal efficiency, is proposed as an alternative objective. The unscented Kalman filter is considered as the nonlinear estimator. The large-scale combinatorial optimization problem is solved using a genetic algorithm. The developed SND algorithm is applied to an acid gas removal (AGR) unit as part of an integrated gasification combined cycle (IGCC) power plant with CO2 capture. Due to the computational expense, a reduced order nonlinear model of the AGR process is identified and parallel computation is performed during implementation.
On the optimal sizing of batteries for electric vehicles and the influence of fast charge
NASA Astrophysics Data System (ADS)
Verbrugge, Mark W.; Wampler, Charles W.
2018-04-01
We provide a brief summary of advanced battery technologies and a framework (i.e., a simple model) for assessing electric-vehicle (EV) architectures and associated costs to the customer. The end result is a qualitative model that can be used to calculate the optimal EV range (which maps back to the battery size and performance), including the influence of fast charge. We are seeing two technological pathways emerging: fast-charge-capable batteries versus batteries with much higher energy densities (and specific energies) but without the capability to fast charge. How do we compare and contrast the two alternatives? This work seeks to shed light on the question. We consider costs associated with the cells, added mass due to the use of larger batteries, and charging, three factors common in such analyses. In addition, we consider a new cost input, namely, the cost of adaption, corresponding to the days a customer would need an alternative form of transportation, as the EV would not have sufficient range on those days.
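The following toy calculation illustrates the shape of such a cost trade-off; every parameter (consumption, cell cost, mass penalty, days of insufficient range, adaptation cost) is an assumption for illustration and not a value from the paper.

import numpy as np

ranges_km = np.arange(150, 651, 10)             # candidate EV ranges
kwh_per_km = 0.18                               # assumed energy consumption
cell_cost_per_kwh = 120.0                       # assumed $/kWh
mass_penalty_per_kwh = 15.0                     # assumed lifetime $ per extra kWh carried
adaptation_cost_per_day = 60.0                  # assumed cost of alternative transport per day
years = 10

def days_short_per_year(r):
    # assumed: days per year the range is insufficient, falling off with range
    return 80 * np.exp(-r / 150)

battery_kwh = ranges_km * kwh_per_km
total_cost = (battery_kwh * (cell_cost_per_kwh + mass_penalty_per_kwh)
              + years * adaptation_cost_per_day * days_short_per_year(ranges_km))

best = ranges_km[np.argmin(total_cost)]
print(f"lowest-total-cost range under these assumptions: {best} km")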
Modeling the Economic Feasibility of Large-Scale Net-Zero Water Management: A Case Study.
Guo, Tianjiao; Englehardt, James D; Fallon, Howard J
While municipal direct potable water reuse (DPR) has been recommended for consideration by the U.S. National Research Council, it is unclear how to size new closed-loop DPR plants, termed "net-zero water (NZW) plants", to minimize cost and energy demand assuming upgradient water distribution. Based on a recent model optimizing the economics of plant scale for generalized conditions, the authors evaluated the feasibility and optimal scale of NZW plants for treatment capacity expansion in Miami-Dade County, Florida. Local data on population distribution and topography were input to compare projected costs for NZW vs the current plan. Total cost was minimized at a scale of 49 NZW plants for the service population of 671,823. Total unit cost for NZW systems, which mineralize chemical oxygen demand to below normal detection limits, is projected at ~$10.83 / 1000 gal, approximately 13% above the current plan and less than rates reported for several significant U.S. cities.
Optimized bioregenerative space diet selection with crew choice
NASA Technical Reports Server (NTRS)
Vicens, Carrie; Wang, Carolyn; Olabi, Ammar; Jackson, Peter; Hunter, Jean
2003-01-01
Previous studies on optimization of crew diets have not accounted for choice. A diet selection model with crew choice was developed. Scenario analyses were conducted to assess the feasibility and cost of certain crew preferences, such as preferences for numerous-desserts, high-salt, and high-acceptability foods. For comparison purposes, a no-choice and a random-choice scenario were considered. The model was found to be feasible in terms of food variety and overall costs. The numerous-desserts, high-acceptability, and random-choice scenarios all resulted in feasible solutions costing between 13.2 and 17.3 kg ESM/person-day. Only the high-sodium scenario yielded an infeasible solution. This occurred when the foods highest in salt content were selected for the crew-choice portion of the diet. This infeasibility can be avoided by limiting the total sodium content in the crew-choice portion of the diet. Cost savings were found by reducing food variety in scenarios where the preference bias strongly affected nutritional content.
Gregório, João; Russo, Giuliano; Lapão, Luís Velez
2016-01-01
The current financial crisis is pressing health systems to reduce costs while seeking to improve service standards. In this context, optimizing health care systems management has become an imperative. However, little research has been conducted on health care and pharmaceutical services cost management. Optimizing pharmaceutical services requires a comprehensive understanding of resource usage and its costs. This study explores the development of a time-driven activity-based costing (TDABC) model, with the objective of calculating the cost of pharmaceutical services to help inform policy-making. Pharmaceutical service supply patterns were studied in three pharmacies during a weekday through an observational study. Details of each activity's execution were recorded, including the time spent per activity performed by pharmacists. Data on pharmacy costs were obtained from the pharmacies' accounting records. The calculated cost of a dispensing service in these pharmacies ranged from €3.16 to €4.29. The cost of a counseling service when no medicine was supplied ranged from €1.24 to €1.46. The cost of health screening services ranged from €2.86 to €4.55. The presented TDABC model gives new insights into the management and costs of community pharmacies. This study shows the importance of cost analysis for health care services, specifically pharmaceutical services, in order to better inform pharmacy management and the elaboration of pharmaceutical policies. Copyright © 2016 Elsevier Inc. All rights reserved.
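For orientation, a minimal time-driven activity-based costing calculation looks like the sketch below; the capacity cost, staffing, and activity times are invented placeholders, whereas the per-service costs quoted above were derived from the observed data.

# Minimal TDABC sketch with assumed numbers (not the pharmacies' actual figures).
monthly_capacity_cost = 12000.0                        # staff + overhead, EUR (assumed)
practical_capacity_min = 2 * 22 * 8 * 60 * 0.8         # 2 pharmacists, 22 days, 80% practical capacity

cost_per_minute = monthly_capacity_cost / practical_capacity_min

activity_minutes = {                                   # observed time per activity (assumed)
    "dispensing": 5.5,
    "counseling_no_medicine": 2.0,
    "health_screening": 6.0,
}
for activity, minutes in activity_minutes.items():
    print(f"{activity:25s} {cost_per_minute * minutes:5.2f} EUR")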
Constraint Optimization Problem For The Cutting Of A Cobalt Chrome Refractory Material
NASA Astrophysics Data System (ADS)
Lebaal, Nadhir; Schlegel, Daniel; Folea, Milena
2011-05-01
This paper shows a complete approach to solving a given problem, from experimentation to the optimization of different cutting parameters. In response to an industrial problem of slotting FSX 414, a cobalt-based refractory material, we implemented a design of experiments to determine the most influential parameters on tool life, surface roughness, and cutting forces. After these trials, an optimization approach was implemented to find the lowest manufacturing cost while respecting the roughness and cutting force limitation constraints. The optimization approach is based on the Response Surface Method (RSM) using the Sequential Quadratic Programming (SQP) algorithm for a constrained problem. To avoid a local optimum and to obtain an accurate solution at low cost, an efficient strategy, which improves the RSM accuracy in the vicinity of the global optimum, is presented. With these models and these trials, we could apply and compare our optimization methods in order to obtain the lowest cost for the best quality, i.e. a satisfying surface roughness and limited cutting forces.
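A compact sketch of this kind of constrained response-surface minimization, using SciPy's SLSQP rather than the authors' implementation, is shown below; the fitted cost, roughness, and force models and all limits are invented placeholders.

import numpy as np
from scipy.optimize import minimize

def cost(x):           # x = [cutting speed (m/min), feed (mm/rev)]; assumed RSM fit
    v, f = x
    return 120 - 0.9 * v - 45 * f + 0.004 * v**2 + 60 * f**2 + 0.05 * v * f

def roughness_margin(x):   # assumed Ra model; must stay below 1.6 um (>= 0 means feasible)
    v, f = x
    return 1.6 - (0.8 + 8.0 * f - 0.002 * v)

def force_margin(x):       # assumed cutting-force model; must stay below 900 N
    v, f = x
    return 900 - (200 + 2500 * f + 0.5 * v)

res = minimize(cost, x0=[60.0, 0.15], method="SLSQP",
               bounds=[(30, 120), (0.05, 0.3)],
               constraints=[{"type": "ineq", "fun": roughness_margin},
                            {"type": "ineq", "fun": force_margin}])
print("optimal speed/feed:", np.round(res.x, 3), "cost:", round(res.fun, 2))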
Liu, Ping; Li, Guodong; Liu, Xinggao; Xiao, Long; Wang, Yalin; Yang, Chunhua; Gui, Weihua
2018-02-01
High quality control method is essential for the implementation of aircraft autopilot system. An optimal control problem model considering the safe aerodynamic envelop is therefore established to improve the control quality of aircraft flight level tracking. A novel non-uniform control vector parameterization (CVP) method with time grid refinement is then proposed for solving the optimal control problem. By introducing the Hilbert-Huang transform (HHT) analysis, an efficient time grid refinement approach is presented and an adaptive time grid is automatically obtained. With this refinement, the proposed method needs fewer optimization parameters to achieve better control quality when compared with uniform refinement CVP method, whereas the computational cost is lower. Two well-known flight level altitude tracking problems and one minimum time cost problem are tested as illustrations and the uniform refinement control vector parameterization method is adopted as the comparative base. Numerical results show that the proposed method achieves better performances in terms of optimization accuracy and computation cost; meanwhile, the control quality is efficiently improved. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Peralta, Richard C.; Forghani, Ali; Fayad, Hala
2014-04-01
Many real water resources optimization problems involve conflicting objectives for which the main goal is to find a set of optimal solutions on, or near to the Pareto front. E-constraint and weighting multiobjective optimization techniques have shortcomings, especially as the number of objectives increases. Multiobjective Genetic Algorithms (MGA) have been previously proposed to overcome these difficulties. Here, an MGA derives a set of optimal solutions for multiobjective multiuser conjunctive use of reservoir, stream, and (un)confined groundwater resources. The proposed methodology is applied to a hydraulically and economically nonlinear system in which all significant flows, including stream-aquifer-reservoir-diversion-return flow interactions, are simulated and optimized simultaneously for multiple periods. Neural networks represent constrained state variables. The addressed objectives that can be optimized simultaneously in the coupled simulation-optimization model are: (1) maximizing water provided from sources, (2) maximizing hydropower production, and (3) minimizing operation costs of transporting water from sources to destinations. Results show the efficiency of multiobjective genetic algorithms for generating Pareto optimal sets for complex nonlinear multiobjective optimization problems.
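Independently of the GA used in the paper, the Pareto filtering step it relies on can be sketched as follows; the objective columns and candidate plans are random placeholders, with all objectives expressed as quantities to minimize.

import numpy as np

def pareto_front(points):
    """Return the non-dominated rows of an (n, m) array of objective values to minimize."""
    points = np.asarray(points, dtype=float)
    keep = np.ones(len(points), dtype=bool)
    for i, p in enumerate(points):
        dominated_by = np.all(points <= p, axis=1) & np.any(points < p, axis=1)
        keep[i] = not dominated_by.any()
    return points[keep]

# columns could be: -water supplied, -hydropower produced, operating cost (all minimized)
candidates = np.random.default_rng(1).uniform(size=(200, 3))
print(len(pareto_front(candidates)), "non-dominated plans out of 200")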
Multi-objective possibilistic model for portfolio selection with transaction cost
NASA Astrophysics Data System (ADS)
Jana, P.; Roy, T. K.; Mazumder, S. K.
2009-06-01
In this paper, we introduce the possibilistic mean value and variance of continuous distributions, rather than probability distributions. We propose a multi-objective portfolio-based model and add an entropy objective function to generate a well-diversified asset portfolio within the optimal asset allocation. To quantify potential return and risk, portfolio liquidity is taken into account, and a multi-objective non-linear programming model for portfolio rebalancing with transaction cost is proposed. The models are illustrated with numerical examples.
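For reference, the possibilistic mean and variance of a fuzzy number $A$ with $\gamma$-level sets $[a_1(\gamma), a_2(\gamma)]$ are commonly defined (following Carlsson and Fullér; the paper's exact definitions may differ) as

$$\bar M(A)=\int_0^1 \gamma\,\big[a_1(\gamma)+a_2(\gamma)\big]\,d\gamma, \qquad \operatorname{Var}(A)=\frac{1}{2}\int_0^1 \gamma\,\big[a_2(\gamma)-a_1(\gamma)\big]^2\,d\gamma .$$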
Ekwunife, Obinna I.
2017-01-01
Background: Diarrhoea is a leading cause of death in Nigerian children under 5 years. Implementing the most cost-effective approach to diarrhoea management in Nigeria will help optimize health care resource allocation. This study evaluated the cost-effectiveness of various approaches to diarrhoea management, namely: the 'no treatment' approach (NT); the preventive approach with rotavirus vaccine; the integrated management of childhood illness for diarrhoea approach (IMCI); and rotavirus vaccine plus integrated management of childhood illness for diarrhoea approach (rotavirus vaccine + IMCI). Methods: A Markov cohort model conducted from the payer's perspective was used to calculate the cost-effectiveness of the four interventions. The Markov model simulated a life cycle of 260 weeks for 33 million children under five years at risk of having diarrhoea (well state). Disability-adjusted life years (DALYs) averted were used to quantify the clinical outcome. The incremental cost-effectiveness ratio (ICER) served as the measure of cost-effectiveness. Results: Based on a cost-effectiveness threshold of $2,177.99 (representing the Nigerian GDP per capita), all the approaches were very cost-effective, but the rotavirus vaccine approach was dominated. While IMCI had the lowest ICER of $4.6/DALY averted, the addition of rotavirus vaccine was cost-effective with an ICER of $80.1/DALY averted. Rotavirus vaccine alone was less efficient in optimizing health care resource allocation. Conclusion: The rotavirus vaccine + IMCI approach was the most cost-effective approach to childhood diarrhoea management. Its awareness and practice should be promoted in Nigeria, and the addition of rotavirus vaccine should be considered for inclusion in the national programme of immunization. Although our findings suggest that adding rotavirus vaccine to IMCI for diarrhoea is cost-effective, further vaccine demonstration or real-life studies may be needed to establish the cost-effectiveness of the vaccine in Nigeria. PMID:29261649
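The cost-effectiveness measure used here is the standard incremental cost-effectiveness ratio, with effects expressed as DALYs averted:

$$\mathrm{ICER}=\frac{C_{\text{intervention}}-C_{\text{comparator}}}{\mathrm{DALYs\ averted}_{\text{intervention}}-\mathrm{DALYs\ averted}_{\text{comparator}}},$$

an intervention being judged cost-effective when its ICER falls below the chosen threshold (here the Nigerian GDP per capita).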
A multi-period optimization model for energy planning with CO(2) emission consideration.
Mirzaesmaeeli, H; Elkamel, A; Douglas, P L; Croiset, E; Gupta, M
2010-05-01
A novel deterministic multi-period mixed-integer linear programming (MILP) model for the power generation planning of electric systems is described and evaluated in this paper. The model is developed with the objective of determining the optimal mix of energy supply sources and pollutant mitigation options that meet a specified electricity demand and CO(2) emission targets at minimum cost. Several time-dependent parameters are included in the model formulation; they include forecasted energy demand, fuel price variability, construction lead time, conservation initiatives, and increase in fixed operational and maintenance costs over time. The developed model is applied to two case studies. The objective of the case studies is to examine the economical, structural, and environmental effects that would result if the electricity sector was required to reduce its CO(2) emissions to a specified limit. Copyright 2009 Elsevier Ltd. All rights reserved.
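A miniature of this kind of multi-period MILP, with build and dispatch decisions for a few technologies under an emissions cap, can be written with PuLP as below; the technologies, costs, demands, and cap are invented, and the real model carries many more time-dependent parameters (fuel price paths, construction lead times, conservation initiatives, O&M escalation).

import pulp

techs = {        # unit size (MW), capital ($/unit), fuel ($/MWh), CO2 (t/MWh) - all assumed
    "coal":    dict(size=500, capex=500e6, opex=25, co2=0.95),
    "gas_ccs": dict(size=400, capex=720e6, opex=45, co2=0.10),
    "wind":    dict(size=100, capex=150e6, opex=0,  co2=0.0),
}
demand = {1: 800.0, 2: 950.0}     # MWh per period (assumed)
co2_cap = 600.0                    # tonnes over the horizon (assumed)

m = pulp.LpProblem("energy_plan", pulp.LpMinimize)
units = {t: pulp.LpVariable(f"units_{t}", lowBound=0, cat="Integer") for t in techs}
gen = {(t, p): pulp.LpVariable(f"gen_{t}_{p}", lowBound=0) for t in techs for p in demand}

# objective: capital plus operating cost
m += pulp.lpSum(techs[t]["capex"] * units[t] for t in techs) \
   + pulp.lpSum(techs[t]["opex"] * gen[t, p] for t in techs for p in demand)
for p in demand:
    m += pulp.lpSum(gen[t, p] for t in techs) >= demand[p]          # meet demand
    for t in techs:
        m += gen[t, p] <= techs[t]["size"] * units[t]               # capacity limit
m += pulp.lpSum(techs[t]["co2"] * gen[t, p]
                for t in techs for p in demand) <= co2_cap          # emissions cap

m.solve(pulp.PULP_CBC_CMD(msg=False))
print({t: int(units[t].value()) for t in techs}, "total cost:", pulp.value(m.objective))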
NASA Astrophysics Data System (ADS)
Bensaida, K.; Alie, Colin; Elkamel, A.; Almansoori, A.
2017-08-01
This paper presents a novel techno-economic optimization model for assessing the effectiveness of CO2 mitigation options for the electricity generation sub-sector that includes renewable energy generation. The optimization problem was formulated as a MINLP model using the GAMS modeling system. The model seeks to minimize power generation costs under CO2 emission constraints by dispatching power from low CO2 emission-intensity units. The model considers the detailed operation of the electricity system to effectively assess the performance of GHG mitigation strategies and integrates load balancing, carbon capture, and carbon taxes as methods for reducing CO2 emissions. Two case studies are discussed to analyze the benefits and challenges of the CO2 reduction methods in the electricity system. The proposed mitigation options would not only benefit the environment but would also improve the marginal cost of producing energy, which represents an advantage for stakeholders.
Walsh, Colin G; Sharman, Kavya; Hripcsak, George
2017-12-01
Prior to implementing predictive models in novel settings, analyses of calibration and clinical usefulness remain as important as discrimination, but they are not frequently discussed. Calibration is a model's reflection of actual outcome prevalence in its predictions. Clinical usefulness refers to the utilities, costs, and harms of using a predictive model in practice. A decision analytic approach to calibrating and selecting an optimal intervention threshold may help maximize the impact of readmission risk and other preventive interventions. To select a pragmatic means of calibrating predictive models that requires a minimum amount of validation data and that performs well in practice. To evaluate the impact of miscalibration on utility and cost via clinical usefulness analyses. Observational, retrospective cohort study with electronic health record data from 120,000 inpatient admissions at an urban, academic center in Manhattan. The primary outcome was thirty-day readmission for three causes: all-cause, congestive heart failure, and chronic coronary atherosclerotic disease. Predictive modeling was performed via L1-regularized logistic regression. Calibration methods were compared including Platt Scaling, Logistic Calibration, and Prevalence Adjustment. Performance of predictive modeling and calibration was assessed via discrimination (c-statistic), calibration (Spiegelhalter Z-statistic, Root Mean Square Error [RMSE] of binned predictions, Sanders and Murphy Resolutions of the Brier Score, Calibration Slope and Intercept), and clinical usefulness (utility terms represented as costs). The amount of validation data necessary to apply each calibration algorithm was also assessed. C-statistics by diagnosis ranged from 0.7 for all-cause readmission to 0.86 (0.78-0.93) for congestive heart failure. Logistic Calibration and Platt Scaling performed best and this difference required analyzing multiple metrics of calibration simultaneously, in particular Calibration Slopes and Intercepts. Clinical usefulness analyses provided optimal risk thresholds, which varied by reason for readmission, outcome prevalence, and calibration algorithm. Utility analyses also suggested maximum tolerable intervention costs, e.g., $1720 for all-cause readmissions based on a published cost of readmission of $11,862. Choice of calibration method depends on availability of validation data and on performance. Improperly calibrated models may contribute to higher costs of intervention as measured via clinical usefulness. Decision-makers must understand underlying utilities or costs inherent in the use-case at hand to assess usefulness and will obtain the optimal risk threshold to trigger intervention with intervention cost limits as a result. Copyright © 2017 Elsevier Inc. All rights reserved.
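Of the calibration methods compared, Platt scaling is the simplest to sketch: a one-dimensional logistic regression fit on held-out scores maps raw predictions to calibrated probabilities. The snippet below uses simulated scores and outcomes, not the study's EHR data.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
raw_scores = rng.uniform(0, 1, 2000)                  # uncalibrated risk scores (simulated)
y = rng.binomial(1, raw_scores ** 2)                  # outcomes: scores are deliberately miscalibrated

calibrator = LogisticRegression()                     # Platt scaling = sigmoid fit on held-out scores
calibrator.fit(raw_scores.reshape(-1, 1), y)
calibrated = calibrator.predict_proba(raw_scores.reshape(-1, 1))[:, 1]

print("mean raw score:", round(raw_scores.mean(), 3),
      "mean calibrated:", round(calibrated.mean(), 3),
      "event rate:", round(y.mean(), 3))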
NASA Astrophysics Data System (ADS)
Jonrinaldi, Hadiguna, Rika Ampuh; Salastino, Rades
2017-11-01
Environmental consciousness has received much attention in recent years. It is not only about how to recycle, remanufacture, or reuse end-of-use products, but also about how to optimize the operations of the reverse system. A previous study proposed a design for a reverse supply chain network producing biodiesel from used cooking oil. However, that research focused on supply chain strategy design rather than on supply chain operations: it only decided the structure of the supply chain over the next few years and, in general terms, the processes to be conducted at each stage. It did not consider the operational policies to be followed by the companies in the supply chain. Companies need a policy for each stage of supply chain operations so as to produce an optimal supply chain system, including how to use all the resources that have been designed in order to achieve the objectives of the supply chain system. Therefore, this paper proposes a model to optimize the operational planning of a biodiesel supply chain network based on used cooking oil. A mixed integer linear programming model is developed for the operational planning of the biodiesel supply chain so as to minimize the total operational cost of the supply chain. Based on the implementation of the developed model, the total operational cost incurred by the supply chain over seven days of operational planning is about 2,743,470.00, or 0.186%, lower than that obtained in the previous research. Production costs contributed 74.6% of the total operational cost and the cost of purchasing used cooking oil contributed 24.1%, so the system should pay particular attention to these two aspects, as changes in their values will cause significant changes in the total operational cost of the supply chain.
NASA Astrophysics Data System (ADS)
Al-Kuhali, K.; Hussain M., I.; Zain Z., M.; Mullenix, P.
2015-05-01
Aim: This paper contributes to the flat panel display industry in terms of aggregate production planning. Methodology: To minimize the total production cost of LCD manufacturing, linear programming was applied. The decision variables are general production costs, additional costs incurred for overtime production, additional costs incurred for subcontracting, inventory carrying costs, backorder costs, and adjustments for changes in labour levels. The model considers a manufacturer with up to N product types over a total planning horizon of T periods. Results: An industrial case study from Malaysia is presented to test and validate the developed linear programming model for aggregate production planning. Conclusion: The developed model is suitable under stable environmental conditions. Overall, the proven linear programming model can be recommended for production planning in the Malaysian flat panel display industry.
NASA Technical Reports Server (NTRS)
Metschan, Stephen L.; Wilden, Kurtis S.; Sharpless, Garrett C.; Andelman, Rich M.
1993-01-01
Textile manufacturing processes offer potential cost and weight advantages over traditional composite materials and processes for transport fuselage elements. In the current study, design cost modeling relationships between textile processes and element design details were developed. Such relationships are expected to help future aircraft designers to make timely decisions on the effect of design details and overall configurations on textile fabrication costs. The fundamental advantage of a design cost model is to insure that the element design is cost effective for the intended process. Trade studies on the effects of processing parameters also help to optimize the manufacturing steps for a particular structural element. Two methods of analyzing design detail/process cost relationships developed for the design cost model were pursued in the current study. The first makes use of existing databases and alternative cost modeling methods (e.g. detailed estimating). The second compares design cost model predictions with data collected during the fabrication of seven foot circumferential frames for ATCAS crown test panels. The process used in this case involves 2D dry braiding and resin transfer molding of curved 'J' cross section frame members having design details characteristic of the baseline ATCAS crown design.
Modified optimal control pilot model for computer-aided design and analysis
NASA Technical Reports Server (NTRS)
Davidson, John B.; Schmidt, David K.
1992-01-01
This paper presents the theoretical development of a modified optimal control pilot model based upon the optimal control model (OCM) of the human operator developed by Kleinman, Baron, and Levison. This model is input compatible with the OCM and retains other key aspects of the OCM, such as a linear quadratic solution for the pilot gains with inclusion of control rate in the cost function, a Kalman estimator, and the ability to account for attention allocation and perception threshold effects. An algorithm designed for each implementation in current dynamic systems analysis and design software is presented. Example results based upon the analysis of a tracking task using three basic dynamic systems are compared with measured results and with similar analyses performed with the OCM and two previously proposed simplified optimal pilot models. The pilot frequency responses and error statistics obtained with this modified optimal control model are shown to compare more favorably to the measured experimental results than the other previously proposed simplified models evaluated.
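The two building blocks this model shares with the OCM, a linear-quadratic feedback gain and a Kalman estimator, can be computed directly with SciPy's Riccati solvers, as in the placeholder sketch below; the plant matrices and weights are invented and do not represent any particular pilot-vehicle system.

import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [0.0, -2.0]])    # simple second-order plant (assumed)
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
Q = np.diag([10.0, 1.0])                    # state weighting (assumed)
R = np.array([[0.5]])                       # control (or control-rate) weighting (assumed)
W = np.diag([0.01, 0.01])                   # process noise intensity (assumed)
V = np.array([[0.001]])                     # measurement noise intensity (assumed)

P = solve_continuous_are(A, B, Q, R)        # regulator Riccati equation
K = np.linalg.solve(R, B.T @ P)             # optimal feedback gain, u = -K x_hat

S = solve_continuous_are(A.T, C.T, W, V)    # estimator (Kalman filter) Riccati equation
L = S @ C.T @ np.linalg.inv(V)              # Kalman gain

print("feedback gain K:", np.round(K, 3))
print("Kalman gain L:", np.round(L.ravel(), 3))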
Computer-Aided Communication Satellite System Analysis and Optimization.
ERIC Educational Resources Information Center
Stagl, Thomas W.; And Others
Various published computer programs for fixed/broadcast communication satellite system synthesis and optimization are discussed. The rationale for selecting General Dynamics/Convair's Satellite Telecommunication Analysis and Modeling Program (STAMP) in modified form to aid in the system costing and sensitivity analysis work in the Program on…
Microfiltration of thin stillage: Process simulation and economic analyses
USDA-ARS?s Scientific Manuscript database
In plant scale operations, multistage membrane systems have been adopted for cost minimization. We considered design optimization and operation of a continuous microfiltration (MF) system for the corn dry grind process. The objectives were to develop a model to simulate a multistage MF system, optim...
Approaches to eliminate waste and reduce cost for recycling glass.
Chao, Chien-Wen; Liao, Ching-Jong
2011-12-01
In recent years, the issue of environmental protection has received considerable attention. This paper adds to the literature by investigating a scheduling problem in the manufacturing of a glass recycling factory in Taiwan. The objective is to minimize the sum of the total holding cost and loss cost. We first represent the problem as an integer programming (IP) model, and then develop two heuristics based on the IP model to find near-optimal solutions for the problem. To validate the proposed heuristics, comparisons between optimal solutions from the IP model and solutions from the current method are conducted. The comparisons involve two problem sizes, small and large, where the small problems range from 15 to 45 jobs, and the large problems from 50 to 100 jobs. Finally, a genetic algorithm is applied to evaluate the proposed heuristics. Computational experiments show that the proposed heuristics can find good solutions in a reasonable time for the considered problem. Copyright © 2011 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Eder, D.
1992-01-01
Parametric models were constructed for Earth-based laser powered electric orbit transfer from low Earth orbit to geosynchronous orbit. These models were used to carry out performance, cost/benefit, and sensitivity analyses of laser-powered transfer systems including end-to-end life cycle cost analyses for complete systems. Comparisons with conventional orbit transfer systems were made indicating large potential cost savings for laser-powered transfer. Approximate optimization was done to determine best parameter values for the systems. Orbit transfer flights simulations were conducted to explore effects of parameters not practical to model with a spreadsheet. The simulations considered view factors that determine when power can be transferred from ground stations to an orbit transfer vehicle and conducted sensitivity analyses for numbers of ground stations, Isp including dual-Isp transfers, and plane change profiles. Optimal steering laws were used for simultaneous altitude and plane change. Viewing geometry and low-thrust orbit raising were simultaneously simulated. A very preliminary investigation of relay mirrors was made.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jonkman, Jason; Annoni, Jennifer; Hayman, Greg
This paper presents the development of FAST.Farm, a new multiphysics tool applicable to engineering problems in research and industry involving wind farm performance and cost optimization that is needed to address the current underperformance, failures, and expenses plaguing the wind industry. Achieving wind cost-of-energy targets - which requires improvements in wind farm performance and reliability, together with reduced uncertainty and expenditures - has been eluded by the complicated nature of the wind farm design problem, especially the sophisticated interaction between atmospheric phenomena and wake dynamics and array effects. FAST.Farm aims to balance the need for accurate modeling of the relevant physics for predicting power performance and loads while maintaining low computational cost to support a highly iterative and probabilistic design process and system-wide optimization. FAST.Farm makes use of FAST to model the aero-hydro-servo-elastics of distinct turbines in the wind farm, and it is based on some of the principles of the Dynamic Wake Meandering (DWM) model, but avoids many of the limitations of existing DWM implementations.
HOMER: The Micropower Optimization Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
2004-03-01
HOMER, the micropower optimization model, helps users to design micropower systems for off-grid and grid-connected power applications. HOMER models micropower systems with one or more power sources including wind turbines, photovoltaics, biomass power, hydropower, cogeneration, diesel engines, batteries, fuel cells, and electrolyzers. Users can explore a range of design questions, such as which technologies are most effective, what size components should be, how project economics are affected by changes in loads or costs, and whether the renewable resource is adequate.
2014-04-30
cost to acquire systems, as design maturity could be verified incrementally as the system was developed, vice waiting for specific large "big bang" ... Framework (MBAF) be applied to simulate or optimize process variations on programs? LSI Roles and Responsibilities: a review of the roles and ... the model/process optimization process. It is the current intent that NAVAIR will use the model to run simulations on process changes in an attempt to
Simultaneous prediction of muscle and contact forces in the knee during gait.
Lin, Yi-Chung; Walter, Jonathan P; Banks, Scott A; Pandy, Marcus G; Fregly, Benjamin J
2010-03-22
Musculoskeletal models are currently the primary means for estimating in vivo muscle and contact forces in the knee during gait. These models typically couple a dynamic skeletal model with individual muscle models but rarely include articular contact models due to their high computational cost. This study evaluates a novel method for predicting muscle and contact forces simultaneously in the knee during gait. The method utilizes a 12 degree-of-freedom knee model (femur, tibia, and patella) combining muscle, articular contact, and dynamic skeletal models. Eight static optimization problems were formulated using two cost functions (one based on muscle activations and one based on contact forces) and four constraints sets (each composed of different combinations of inverse dynamic loads). The estimated muscle and contact forces were evaluated using in vivo tibial contact force data collected from a patient with a force-measuring knee implant. When the eight optimization problems were solved with added constraints to match the in vivo contact force measurements, root-mean-square errors in predicted contact forces were less than 10 N. Furthermore, muscle and patellar contact forces predicted by the two cost functions became more similar as more inverse dynamic loads were used as constraints. When the contact force constraints were removed, estimated medial contact forces were similar and lateral contact forces lower in magnitude compared to measured contact forces, with estimated muscle forces being sensitive and estimated patellar contact forces relatively insensitive to the choice of cost function and constraint set. These results suggest that optimization problem formulation coupled with knee model complexity can significantly affect predicted muscle and contact forces in the knee during gait. Further research using a complete lower limb model is needed to assess the importance of this finding to the muscle and contact force estimation process. Copyright (c) 2009 Elsevier Ltd. All rights reserved.
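A stripped-down version of the activation-based static optimization formulation (one joint, four invented muscles, a single inverse-dynamics moment constraint) is sketched below; the study's actual problems additionally include articular contact models and the full 12 degree-of-freedom knee mechanics.

import numpy as np
from scipy.optimize import minimize

f_max = np.array([3000.0, 2500.0, 1500.0, 1000.0])   # max isometric forces (N), assumed
moment_arm = np.array([0.05, 0.04, -0.03, -0.025])   # m; extensors +, flexors -, assumed
target_moment = 60.0                                  # knee moment from inverse dynamics (N m), assumed

def objective(a):                  # activation-based cost function
    return np.sum(a ** 2)

constraints = [{"type": "eq",
                "fun": lambda a: moment_arm @ (a * f_max) - target_moment}]

res = minimize(objective, x0=np.full(4, 0.1), bounds=[(0, 1)] * 4,
               constraints=constraints, method="SLSQP")
activations = res.x
muscle_forces = activations * f_max
print("activations:", np.round(activations, 3))
print("muscle forces (N):", np.round(muscle_forces, 1))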
NASA Astrophysics Data System (ADS)
Kim, Min-Suk; Won, Hwa-Yeon; Jeong, Jong-Mun; Böcker, Paul; Vergaij-Huizer, Lydia; Kupers, Michiel; Jovanović, Milenko; Sochal, Inez; Ryan, Kevin; Sun, Kyu-Tae; Lim, Young-Wan; Byun, Jin-Moo; Kim, Gwang-Gon; Suh, Jung-Joon
2016-03-01
In order to optimize yield in DRAM semiconductor manufacturing for 2x nodes and beyond, the (processing induced) overlay fingerprint towards the edge of the wafer needs to be reduced. Traditionally, this is achieved by acquiring denser overlay metrology at the edge of the wafer, to feed field-by-field corrections. Although field-by-field corrections can be effective in reducing localized overlay errors, the requirement for dense metrology to determine the corrections can become a limiting factor due to a significant increase of metrology time and cost. In this study, a more cost-effective solution has been found in extending the regular correction model with an edge-specific component. This new overlay correction model can be driven by an optimized, sparser sampling especially at the wafer edge area, and also allows for a reduction of noise propagation. Lithography correction potential has been maximized, with significantly less metrology needs. Evaluations have been performed, demonstrating the benefit of edge models in terms of on-product overlay performance, as well as cell based overlay performance based on metrology-to-cell matching improvements. Performance can be increased compared to POR modeling and sampling, which can contribute to (overlay based) yield improvement. Based on advanced modeling including edge components, metrology requirements have been optimized, enabling integrated metrology which drives down overall metrology fab footprint and lithography cycle time.
Optimizing the response to surveillance alerts in automated surveillance systems.
Izadi, Masoumeh; Buckeridge, David L
2011-02-28
Although much research effort has been directed toward refining algorithms for disease outbreak alerting, considerably less attention has been given to the response to alerts generated from statistical detection algorithms. Given the inherent inaccuracy in alerting, it is imperative to develop methods that help public health personnel identify optimal policies in response to alerts. This study evaluates the application of dynamic decision making models to the problem of responding to outbreak detection methods, using anthrax surveillance as an example. Adaptive optimization through approximate dynamic programming is used to generate a policy for decision making following outbreak detection. We investigate the degree to which the model can tolerate noise theoretically, in order to maintain near-optimal behavior. We also evaluate the policy from our model empirically and compare it with current approaches in routine public health practice for investigating alerts. Timeliness of outbreak confirmation and total costs associated with the decisions made are used as performance measures. Using our approach, on average, 80 per cent of outbreaks were confirmed prior to the fifth day post-attack, with considerably less cost compared to response strategies currently in use. Experimental results are also provided to illustrate the robustness of the adaptive optimization approach and to show the realization of the derived error bounds in practice. Copyright © 2011 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Yang, Huanhuan; Gunzburger, Max
2017-06-01
Simulation-based optimization of acoustic liner design in a turbofan engine nacelle for noise reduction purposes can dramatically reduce the cost and time needed for experimental designs. Because uncertainties are inevitable in the design process, a stochastic optimization algorithm is posed based on the conditional value-at-risk measure so that an ideal acoustic liner impedance is determined that is robust in the presence of uncertainties. A parallel reduced-order modeling framework is developed that dramatically improves the computational efficiency of the stochastic optimization solver for a realistic nacelle geometry. The reduced stochastic optimization solver takes less than 500 seconds to execute. In addition, well-posedness and finite element error analyses of the state system and optimization problem are provided.
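The risk measure referred to here is typically written in the Rockafellar-Uryasev form (shown for orientation; the paper's notation may differ), where $Z$ is the uncertain cost or noise metric and $\alpha$ the confidence level:

$$\mathrm{CVaR}_{\alpha}(Z)=\min_{t\in\mathbb{R}}\left\{\,t+\frac{1}{1-\alpha}\,\mathbb{E}\big[(Z-t)^{+}\big]\right\},\qquad (x)^{+}=\max(x,0).$$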
Management of a stage-structured insect pest: an application of approximate optimization.
Hackett, Sean C; Bonsall, Michael B
2018-06-01
Ecological decision problems frequently require the optimization of a sequence of actions over time where actions may have both immediate and downstream effects. Dynamic programming can solve such problems only if the dimensionality is sufficiently low. Approximate dynamic programming (ADP) provides a suite of methods applicable to problems of arbitrary complexity at the expense of guaranteed optimality. The most easily generalized method is the look-ahead policy: a brute-force algorithm that identifies reasonable actions by constructing and solving a series of temporally truncated approximations of the full problem over a defined planning horizon. We develop and apply this approach to a pest management problem inspired by the Mediterranean fruit fly, Ceratitis capitata. The model aims to minimize the cumulative costs of management actions and medfly-induced losses over a single 16-week season. The medfly population is stage-structured and grows continuously while management decisions are made at discrete, weekly intervals. For each week, the model chooses between inaction, insecticide application, or one of six sterile insect release ratios. Look-ahead policy performance is evaluated over a range of planning horizons, two levels of crop susceptibility to medfly and three levels of pesticide persistence. In all cases, the actions proposed by the look-ahead policy are contrasted to those of a myopic policy that minimizes costs over only the current week. We find that look-ahead policies always out-performed a myopic policy and decision quality is sensitive to the temporal distribution of costs relative to the planning horizon: it is beneficial to extend the planning horizon when it excludes pertinent costs. However, longer planning horizons may reduce decision quality when major costs are resolved imminently. ADP methods such as the look-ahead-policy-based approach developed here render questions intractable to dynamic programming amenable to inference but should be applied carefully as their flexibility comes at the expense of guaranteed optimality. However, given the complexity of many ecological management problems, the capacity to propose a strategy that is "good enough" using a more representative problem formulation may be preferable to an optimal strategy derived from a simplified model. © 2018 by the Ecological Society of America.
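The look-ahead idea itself is easy to sketch: enumerate action sequences over a short horizon on a forecast model, commit only the first action of the cheapest sequence, and roll the horizon forward. The toy below uses an invented scalar pest model, not the stage-structured medfly model of the paper.

import itertools

# each action: (weekly population growth multiplier, action cost) - all assumed
ACTIONS = {"none": (1.6, 0.0), "spray": (0.4, 50.0), "sterile": (0.8, 30.0)}
DAMAGE_PER_PEST = 0.2                                   # crop-loss cost per pest (assumed)

def rollout(pop, sequence):
    """Total cost of applying a sequence of actions starting from population `pop`."""
    cost = 0.0
    for a in sequence:
        growth, action_cost = ACTIONS[a]
        pop = pop * growth
        cost += action_cost + DAMAGE_PER_PEST * pop
    return cost

def lookahead_action(pop, horizon=3):
    best = min(itertools.product(ACTIONS, repeat=horizon),
               key=lambda seq: rollout(pop, seq))
    return best[0]                                      # only the first action is committed

pop, total = 100.0, 0.0
for week in range(16):
    a = lookahead_action(pop)
    growth, action_cost = ACTIONS[a]
    pop *= growth
    total += action_cost + DAMAGE_PER_PEST * pop
    print(f"week {week + 1:2d}: action={a:7s} pop={pop:7.1f}")
print("season cost:", round(total, 1))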
Optimum Tolerance Design Using Component-Amount and Mixture-Amount Experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Piepel, Gregory F.; Ozler, Cenk; Sehirlioglu, Ali Kemal
2013-08-01
One type of tolerance design problem involves optimizing component and assembly tolerances to minimize the total cost (sum of manufacturing cost and quality loss). Previous literature recommended using traditional response surface (RS) designs and models to solve this type of tolerance design problem. In this article, component-amount (CA) and mixture-amount (MA) approaches are proposed as more appropriate for solving this type of tolerance design problem. The advantages of the CA and MA approaches over the RS approach are discussed. Reasons for choosing between the CA and MA approaches are also discussed. The CA and MA approaches (experimental design, response modeling, and optimization) are illustrated using real examples.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Langrish, T.A.G.; Harvey, A.C.
2000-01-01
A model of a well-mixed fluidized-bed dryer within a process flowsheeting package (SPEEDUP{trademark}) has been developed and applied to a parameter sensitivity study, a steady-state controllability analysis and an optimization study. This approach is more general and would be more easily applied to a complex flowsheet than one which relied on stand-alone dryer modeling packages. The simulation has shown that industrial data may be fitted to the model outputs with sensible values of unknown parameters. For this case study, the parameter sensitivity study has found that the heat loss from the dryer and the critical moisture content of the material have the greatest impact on the dryer operation at the current operating point. An optimization study has demonstrated the dominant effect of the heat loss from the dryer on the current operating cost and the current operating conditions, and substantial cost savings (around 50%) could be achieved with a well-insulated and airtight dryer, for the specific case studied here.
Analysis of Seasonal Chlorophyll-a Using An Adjoint Three-Dimensional Ocean Carbon Cycle Model
NASA Astrophysics Data System (ADS)
Tjiputra, J.; Winguth, A.; Polzin, D.
2004-12-01
The misfit between a numerical ocean model and observations can be reduced using data assimilation. This can be achieved by optimizing the model parameter values using an adjoint model. The adjoint model minimizes the model-data misfit by estimating the sensitivity or gradient of the cost function with respect to initial conditions, boundary conditions, or parameters. The adjoint technique was used to assimilate seasonal chlorophyll-a data from the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) satellite into the marine biogeochemical model HAMOCC5.1. An Identical Twin Experiment (ITE) was conducted to test the robustness of the model and the non-linearity level of the forward model. The ITE successfully recovered most of the perturbed parameters to their initial values and identified the most sensitive ecosystem parameters, which contribute significantly to model-data bias. The regional assimilations of SeaWiFS chlorophyll-a data into the model were able to reduce the model-data misfit (i.e. the cost function) significantly. The cost function reduction mostly occurred in the high latitudes (e.g. the model-data misfit in the northern region during the summer season was reduced by 54%). On the other hand, the equatorial regions appear to be relatively stable, with no strong reduction in the cost function. The optimized parameter set is used to forecast the carbon fluxes between marine ecosystem compartments (e.g. phytoplankton, zooplankton, nutrients, particulate organic carbon, and dissolved organic carbon). The a posteriori model run using the regional best-fit parameterization yields approximately 36 PgC/yr of global net primary production in the euphotic zone.
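The cost function minimized in such an assimilation is generically a weighted least-squares misfit whose gradient is supplied by the adjoint model (the exact weighting used with HAMOCC5.1 may differ):

$$J(\mathbf{p})=\frac{1}{2}\sum_{i}\frac{\big(y_i^{\mathrm{model}}(\mathbf{p})-y_i^{\mathrm{obs}}\big)^{2}}{\sigma_i^{2}},\qquad \nabla_{\mathbf{p}}J\ \text{evaluated by the adjoint model}.$$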
An Efficient Framework Model for Optimizing Routing Performance in VANETs.
Al-Kharasani, Nori M; Zulkarnain, Zuriati Ahmad; Subramaniam, Shamala; Hanapi, Zurina Mohd
2018-02-15
Routing in Vehicular Ad hoc Networks (VANET) is a bit complicated because of the nature of the high dynamic mobility. The efficiency of routing protocol is influenced by a number of factors such as network density, bandwidth constraints, traffic load, and mobility patterns resulting in frequency changes in network topology. Therefore, Quality of Service (QoS) is strongly needed to enhance the capability of the routing protocol and improve the overall network performance. In this paper, we introduce a statistical framework model to address the problem of optimizing routing configuration parameters in Vehicle-to-Vehicle (V2V) communication. Our framework solution is based on the utilization of the network resources to further reflect the current state of the network and to balance the trade-off between frequent changes in network topology and the QoS requirements. It consists of three stages: simulation network stage used to execute different urban scenarios, the function stage used as a competitive approach to aggregate the weighted cost of the factors in a single value, and optimization stage used to evaluate the communication cost and to obtain the optimal configuration based on the competitive cost. The simulation results show significant performance improvement in terms of the Packet Delivery Ratio (PDR), Normalized Routing Load (NRL), Packet loss (PL), and End-to-End Delay (E2ED).
NASA Astrophysics Data System (ADS)
Zheng, Yingying
The growing energy demands and the need for reducing carbon emissions are calling increasing attention to the development of renewable energy technologies and management strategies. Microgrids have been developed around the world as a means to address the high penetration of renewable generation and reduce greenhouse gas emissions while attempting to balance supply and demand at a more local level. This dissertation presents a model developed to optimize the design of a biomass-integrated renewable energy microgrid employing combined heat and power with energy storage. A receding horizon optimization with Monte Carlo simulation was used to evaluate optimal microgrid design and dispatch under uncertainties in the renewable energy and utility grid energy supplies, the energy demands, and the economic assumptions, so as to generate a probability density function for the cost of energy. Case studies were examined for a conceptual utility grid-connected microgrid application in Davis, California. The results provide the most cost-effective design based on the assumed energy load profile, local climate data, utility tariff structure, and technical and financial performance of the various components of the microgrid. Sensitivity and uncertainty analyses are carried out to illuminate the key parameters that influence the energy costs. The model application provides a means to determine major risk factors associated with alternative design integration and operating strategies.
Optimal speech motor control and token-to-token variability: a Bayesian modeling approach.
Patri, Jean-François; Diard, Julien; Perrier, Pascal
2015-12-01
The remarkable capacity of the speech motor system to adapt to various speech conditions is due to an excess of degrees of freedom, which enables producing similar acoustical properties with different sets of control strategies. To explain how the central nervous system selects one of the possible strategies, a common approach, in line with optimal motor control theories, is to model speech motor planning as the solution of an optimality problem based on cost functions. Despite the success of this approach, one of its drawbacks is the intrinsic contradiction between the concept of optimality and the observed experimental intra-speaker token-to-token variability. The present paper proposes an alternative approach by formulating feedforward optimal control in a probabilistic Bayesian modeling framework. This is illustrated by controlling a biomechanical model of the vocal tract for speech production and by comparing it with an existing optimal control model (GEPPETO). The essential elements of this optimal control model are presented first. From them the Bayesian model is constructed in a progressive way. Performance of the Bayesian model is evaluated based on computer simulations and compared to the optimal control model. This approach is shown to be appropriate for solving the speech planning problem while accounting for variability in a principled way.
Optimization of Large-Scale Daily Hydrothermal System Operations With Multiple Objectives
NASA Astrophysics Data System (ADS)
Wang, Jian; Cheng, Chuntian; Shen, Jianjian; Cao, Rui; Yeh, William W.-G.
2018-04-01
This paper proposes a practical procedure for optimizing the daily operation of a large-scale hydrothermal system. The overall procedure optimizes a monthly model over a period of 1 year and a daily model over a period of up to 1 month. The outputs from the monthly model are used as inputs and boundary conditions for the daily model. The models iterate and update when new information becomes available. The monthly hydrothermal model uses nonlinear programming (NLP) to minimize fuel costs while maximizing hydropower production. The daily model consists of a hydro model, a thermal model, and a combined hydrothermal model. The hydro model and thermal model generate the initial feasible solutions for the hydrothermal model. The two competing objectives considered in the daily hydrothermal model are minimizing fuel costs and minimizing thermal emissions. We use the constraint method to develop the trade-off curve (Pareto front) between these two objectives. We apply the proposed methodology to the Yunnan hydrothermal system in China. The system consists of 163 individual hydropower plants with an installed capacity of 48,477 MW and 11 individual thermal plants with an installed capacity of 12,400 MW. We use historical operational records to verify the correctness of the model and to test the robustness of the methodology. The results demonstrate the practicability and validity of the proposed procedure.
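The constraint method referred to above traces the Pareto front by minimizing one objective while bounding the other and sweeping the bound (sketched here in generic form; the paper's exact statement may differ):

$$\min_{\mathbf{x}\in\mathcal{X}}\ f_{\mathrm{fuel}}(\mathbf{x})\quad\text{subject to}\quad f_{\mathrm{emissions}}(\mathbf{x})\le\varepsilon,$$

with the trade-off curve obtained by varying $\varepsilon$ over the attainable range of emissions.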
Simulation Propulsion System and Trajectory Optimization
NASA Technical Reports Server (NTRS)
Hendricks, Eric S.; Falck, Robert D.; Gray, Justin S.
2017-01-01
A number of new aircraft concepts have recently been proposed which tightly couple the propulsion system design and operation with the overall vehicle design and performance characteristics. These concepts include propulsion technology such as boundary layer ingestion, hybrid electric propulsion systems, distributed propulsion systems, and variable cycle engines. Initial studies examining these concepts have typically used a traditional decoupled approach to aircraft design where the aerodynamics and propulsion designs are done a priori and tabular data is used to provide inexpensive look-ups for the trajectory analysis. However, the cost of generating the tabular data begins to grow exponentially when newer aircraft concepts require consideration of additional operational parameters such as multiple throttle settings, angle-of-attack effects on the propulsion system, or propulsion throttle setting effects on aerodynamics. This paper proposes a new modeling approach that eliminates the need to generate tabular data, instead allowing an expensive propulsion or aerodynamic analysis to be directly integrated into the trajectory analysis model and the entire design problem optimized in a fully coupled manner. The new method is demonstrated by implementing a canonical optimal control problem, the F-4 minimum time-to-climb trajectory optimization, using three relatively new analysis tools: OpenMDAO, PyCycle, and Pointer. PyCycle and Pointer both provide analytic derivatives, and OpenMDAO enables the two tools to be combined into a coupled model that can be run in an efficient parallel manner, which helps to offset the increased cost of the more expensive propulsion analysis. Results generated with this model serve as a validation of the tightly coupled design method and guide future studies to examine aircraft concepts with more complex operational dependencies for the aerodynamic and propulsion models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reber, Timothy J; Chartan, Erol Kevin; Brinkman, Gregory L
Wind and solar power contract prices have recently become cheaper than many conventional new-build alternatives in South Africa, and trends suggest a continued increase in the share of variable renewable energy (vRE) on South Africa's power system, with coal technology seeing the greatest reduction in capacity; see 'Figure 6: Percentage share by Installed Capacity (MW)' in [1]. Hence it is essential to perform a state-of-the-art grid integration study examining the effects of these high penetrations of vRE on South Africa's power system. Under the 21st Century Power Partnership (21CPP), funded by the U.S. Department of Energy, the National Renewable Energy Laboratory (NREL) has significantly augmented existing models of the South African power system to investigate future vRE scenarios. NREL, in collaboration with Eskom's Planning Department, further developed, tested and ran a combined capacity expansion and operational model of the South African power system including spatially disaggregated detail and geographical representation of system resources. New software to visualize and interpret modelling outputs has been developed, and scenario analysis of stepwise vRE build targets reveals new insight into associated planning and operational impacts and costs. The model, built using PLEXOS, is split into two components: firstly a capacity expansion model and secondly a unit commitment and economic dispatch model. The capacity expansion model optimizes new generation decisions to achieve the lowest cost, with a full understanding of capital cost and an approximated understanding of operational costs. The operational model has a greater set of detailed operational constraints and is run at daily resolution. Both are run from 2017 through 2050. This investigation suggests that running both models in tandem may be the most effective means to plan the least-cost South African power system, because build plans that appear more expensive than optimal to the capacity expansion model can produce greater operational cost savings that are visible only in the operational model.
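A toy sketch of the two-component structure described above (a capacity expansion stage with approximate operating costs, followed by dispatch of the chosen fleet) is given below. It is not PLEXOS, and all capacities, costs and profiles are invented assumptions.

```python
# Toy two-stage structure: (1) capacity expansion with approximate operating cost,
# (2) hourly dispatch of the chosen fleet. All data are assumptions.
import numpy as np
from scipy.optimize import linprog

hours = np.arange(24)
demand = 30.0 + 8.0 * np.sin(np.pi * hours / 12)                  # GW, assumed profile
wind_cf = np.clip(0.35 + 0.25 * np.cos(np.pi * hours / 12), 0.0, 1.0)  # capacity factors

# Stage 1: capacity expansion. Decisions: [coal_GW, wind_GW]. Meet peak demand with a
# 20 % wind capacity credit and supply at least 30 % of energy from wind (all assumed).
capex = np.array([180.0, 110.0])                      # $M per GW-year, assumed
approx_fuel = np.array([0.03 * demand.sum(), 0.0])    # rough fuel-cost proxy, assumed
c = capex + approx_fuel
A_ub = [[-1.0, -0.2],                                 # coal + 0.2*wind >= peak demand
        [0.0, -wind_cf.sum()]]                        # wind energy >= 30 % of demand
b_ub = [-demand.max(), -0.3 * demand.sum()]
plan = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2)
coal_cap, wind_cap = plan.x

# Stage 2: hourly dispatch of the built fleet with a more detailed fuel price.
wind_out = np.minimum(wind_cap * wind_cf, demand)
coal_out = np.clip(demand - wind_out, 0.0, coal_cap)
unserved = np.clip(demand - wind_out - coal_out, 0.0, None)
fuel_price = 35.0                                     # $/MWh, stage-2 detail (assumed)
print(f"build: coal {coal_cap:.1f} GW, wind {wind_cap:.1f} GW")
print(f"coal energy {coal_out.sum():.0f} GWh/day, unserved {unserved.sum():.2f} GWh/day")
print(f"stage-2 fuel cost: ${fuel_price * coal_out.sum() * 1e3:,.0f}/day")
```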
Li, Zheng; Qi, Rong; Wang, Bo; Zou, Zhe; Wei, Guohong; Yang, Min
2013-01-01
A full-scale oxidation ditch process for treating sewage was simulated with the ASM2d model and optimized for minimal cost with acceptable performance in terms of ammonium and phosphorus removal. A unified index was introduced by integrating operational costs (aeration energy and sludge production) with effluent violations for performance evaluation. Scenario analysis showed that, in comparison with the baseline (all of the 9 aerators activated), the strategy of activating 5 aerators could save aeration energy significantly with an ammonium violation below 10%. Sludge discharge scenario analysis showed that a sludge discharge flow of 250-300 m3/day (solid retention time (SRT), 13-15 days) was appropriate for the enhancement of phosphorus removal without excessive sludge production. The proposed optimal control strategy was: activating 5 rotating disks operated with a mode of "111100100" ("1" represents activation and "0" represents inactivation) for aeration and sludge discharge flow of 200 m3/day (SRT, 19 days). Compared with the baseline, this strategy could achieve ammonium violation below 10% and TP violation below 30% with substantial reduction of aeration energy cost (46%) and minimal increment of sludge production (< 2%). This study provides a useful approach for the optimization of process operation and control.
Data-driven modeling of solar-powered urban microgrids
Halu, Arda; Scala, Antonio; Khiyami, Abdulaziz; González, Marta C.
2016-01-01
Distributed generation takes center stage in today’s rapidly changing energy landscape. Particularly, locally matching demand and generation in the form of microgrids is becoming a promising alternative to the central distribution paradigm. Infrastructure networks have long been a major focus of complex networks research with their spatial considerations. We present a systemic study of solar-powered microgrids in the urban context, obeying real hourly consumption patterns and spatial constraints of the city. We propose a microgrid model and study its citywide implementation, identifying the self-sufficiency and temporal properties of microgrids. Using a simple optimization scheme, we find microgrid configurations that result in increased resilience under cost constraints. We characterize load-related failures solving power flows in the networks, and we show the robustness behavior of urban microgrids with respect to optimization using percolation methods. Our findings hint at the existence of an optimal balance between cost and robustness in urban microgrids. PMID:26824071
Review: Optimization methods for groundwater modeling and management
NASA Astrophysics Data System (ADS)
Yeh, William W.-G.
2015-09-01
Optimization methods have been used in groundwater modeling as well as for the planning and management of groundwater systems. This paper reviews and evaluates the various optimization methods that have been used for solving the inverse problem of parameter identification (estimation), experimental design, and groundwater planning and management. Various model selection criteria are discussed, as well as criteria used for model discrimination. The inverse problem of parameter identification concerns the optimal determination of model parameters using water-level observations. In general, the optimal experimental design seeks to find sampling strategies for the purpose of estimating the unknown model parameters. A typical objective of optimal conjunctive-use planning of surface water and groundwater is to minimize the operational costs of meeting water demand. The optimization methods include mathematical programming techniques such as linear programming, quadratic programming, dynamic programming, stochastic programming, nonlinear programming, and the global search algorithms such as genetic algorithms, simulated annealing, and tabu search. Emphasis is placed on groundwater flow problems as opposed to contaminant transport problems. A typical two-dimensional groundwater flow problem is used to explain the basic formulations and algorithms that have been used to solve the formulated optimization problems.
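A minimal sketch of the inverse problem of parameter identification mentioned above: estimate two transmissivity zones of a toy one-dimensional steady-state flow model from noisy head observations by nonlinear least squares. The model, grid and parameter values are assumptions for illustration only.

```python
# Minimal sketch of parameter identification (the inverse problem): estimate two
# transmissivity zones of a toy 1-D steady flow model from noisy head observations.
import numpy as np
from scipy.optimize import least_squares

n, L, recharge = 50, 1000.0, 1e-3            # grid cells, domain length [m], W [m/d]
dx = L / n
x = (np.arange(n) + 0.5) * dx

def simulate_heads(T_zones):
    """Solve -d/dx(T dh/dx) = W with h ~ 0 at both ends (finite differences)."""
    T = np.where(x < L / 2, T_zones[0], T_zones[1])
    Tface = 0.5 * (np.r_[T[0], T] + np.r_[T, T[-1]])     # simple face averaging
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = (Tface[i] + Tface[i + 1]) / dx**2
        if i > 0:
            A[i, i - 1] = -Tface[i] / dx**2
        if i < n - 1:
            A[i, i + 1] = -Tface[i + 1] / dx**2
    return np.linalg.solve(A, np.full(n, recharge))

rng = np.random.default_rng(0)
true_T = np.array([50.0, 200.0])                          # m^2/d, assumed "truth"
obs = simulate_heads(true_T) + rng.normal(0, 0.02, n)     # noisy water-level data

fit = least_squares(lambda p: simulate_heads(p) - obs, x0=[100.0, 100.0],
                    bounds=([1.0, 1.0], [1e3, 1e3]))
print("estimated transmissivities:", np.round(fit.x, 1))
```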
Optimizing bulk milk dioxin monitoring based on costs and effectiveness.
Lascano-Alcoser, V H; Velthuis, A G J; van der Fels-Klerx, H J; Hoogenboom, L A P; Oude Lansink, A G J M
2013-07-01
Dioxins are environmental pollutants, potentially present in milk products, which have negative consequences for human health and for the firms and farms involved in the dairy chain. Dioxin monitoring in feed and food has been implemented to detect their presence and estimate their levels in food chains. However, the costs and effectiveness of such programs have not been evaluated. In this study, the costs and effectiveness of bulk milk dioxin monitoring in milk trucks were estimated to optimize the sampling and pooling monitoring strategies aimed at detecting at least 1 contaminated dairy farm out of 20,000 at a target dioxin concentration level. Incidents of different proportions, in terms of the number of contaminated farms, and concentrations were simulated. A combined testing strategy, consisting of screening and confirmatory methods, was assumed, as well as testing of pooled samples. Two optimization models were built using linear programming. The first model aimed to minimize monitoring costs subject to a minimum required effectiveness of finding an incident, whereas the second model aimed to maximize the effectiveness for a given monitoring budget. Our results show that a high level of effectiveness is possible, but at high costs. Given specific assumptions, monitoring with 95% effectiveness to detect an incident of 1 contaminated farm at a dioxin concentration of 2 pg of toxic equivalents/g of fat [European Commission's (EC) action level] costs €2.6 million per month. At the same level of effectiveness, a 73% cost reduction is possible when aiming to detect an incident where 2 farms are contaminated at a dioxin concentration of 3 pg of toxic equivalents/g of fat (EC maximum level). With a fixed budget of €40,000 per month, the probability of detecting an incident with a single contaminated farm at a dioxin concentration equal to the EC action level is 4.4%. This probability almost doubled (8.0%) when aiming to detect the same incident but with a dioxin concentration equal to the EC maximum level. This study shows that the effectiveness of finding an incident depends not only on the ratio at which collected truck samples are mixed into a pooled sample for testing (aimed at detecting a certain concentration), but also on the number of truck samples collected. In conclusion, the optimal cost-effective monitoring depends on the number of contaminated farms and the concentration aimed at detection. The models and study results offer quantitative support to risk managers of food industries and food safety authorities. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
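The two linear programs have roughly the following structure. The sketch below uses an invented linear "effectiveness" proxy and assumed unit costs, so the numbers are illustrative and not the study's simulation results.

```python
# Two toy LPs mirroring the structure above (data and the linear "effectiveness"
# proxy are assumptions, not the study's simulation model).
from scipy.optimize import linprog

c_sample, c_test = 15.0, 500.0      # euro per truck sample / per pooled analysis (assumed)
pool_size = 20                      # truck samples mixed into one pooled sample (assumed)
eff_per_sample = 2.0e-5             # detection probability gained per truck sample (proxy)

# Decision variables: x = [n_truck_samples, n_pooled_tests] per month.
cost = [c_sample, c_test]

# Model 1: minimize cost subject to effectiveness >= 95 % and pooling consistency.
res1 = linprog(c=cost,
               A_ub=[[-eff_per_sample, 0.0],            # effectiveness >= 0.95
                     [1.0, -pool_size]],                # samples <= pool_size * tests
               b_ub=[-0.95, 0.0],
               bounds=[(0, None), (0, None)])
print("min cost for 95% effectiveness: EUR", round(res1.fun))

# Model 2: maximize effectiveness for a 40,000 euro/month budget.
res2 = linprog(c=[-eff_per_sample, 0.0],                # maximize -> minimize negative
               A_ub=[[c_sample, c_test],                # budget constraint
                     [1.0, -pool_size]],
               b_ub=[40000.0, 0.0],
               bounds=[(0, None), (0, None)])
print("effectiveness at EUR 40k budget:", round(-res2.fun, 3))
```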
DOE Office of Scientific and Technical Information (OSTI.GOV)
DuPont, Bryony; Cagan, Jonathan; Moriarty, Patrick
This paper presents a system of modeling advances that can be applied in the computational optimization of wind plants. These modeling advances include accurate cost and power modeling, partial wake interaction, and the effects of varying atmospheric stability. To validate the use of this advanced modeling system, it is employed within an Extended Pattern Search (EPS)-Multi-Agent System (MAS) optimization approach for multiple wind scenarios. The wind farm layout optimization problem involves optimizing the position and size of wind turbines such that the aerodynamic effects of upstream turbines are reduced, which increases the effective wind speed and resultant power at each turbine. The EPS-MAS optimization algorithm employs a profit objective, and an overarching search determines individual turbine positions, with a concurrent EPS-MAS determining the optimal hub height and rotor diameter for each turbine. Two wind cases are considered: (1) constant, unidirectional wind, and (2) three discrete wind speeds and varying wind directions, each of which has a probability of occurrence. Results show the advantages of applying this series of advanced models relative to previous applications of an EPS with less advanced models to wind farm layout optimization, and they suggest best practices for computational optimization of wind farms with improved accuracy.
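For orientation, a minimal top-hat (Jensen-type) wake calculation for a single row of turbines is sketched below; the paper's partial-wake and atmospheric-stability models are not reproduced, and all turbine parameters are assumed.

```python
# Minimal Jensen-type (top-hat) wake sketch for a single row of aligned turbines.
import numpy as np

u0, rho = 8.0, 1.225                    # freestream speed [m/s], air density [kg/m^3]
D, Ct, Cp, k = 80.0, 0.8, 0.45, 0.05    # rotor diameter, thrust/power coeffs, decay (assumed)
spacing = 7 * D                         # streamwise spacing between turbines (assumed)

def waked_speed(u_up, x):
    """Velocity at distance x directly downstream of a turbine seeing u_up."""
    r0 = D / 2
    deficit = (1.0 - np.sqrt(1.0 - Ct)) / (1.0 + k * x / r0) ** 2
    return u_up * (1.0 - deficit)

area = np.pi * (D / 2) ** 2
u = u0
for i in range(4):                      # four turbines in a row
    power = 0.5 * rho * area * Cp * u ** 3 / 1e6
    print(f"turbine {i + 1}: wind {u:4.2f} m/s, power {power:4.2f} MW")
    u = waked_speed(u, spacing)         # next turbine sees the waked speed
```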
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tan, Sirui, E-mail: siruitan@hotmail.com; Huang, Lianjie, E-mail: ljh@lanl.gov
For modeling scalar-wave propagation in geophysical problems using finite-difference schemes, optimizing the coefficients of the finite-difference operators can reduce numerical dispersion. Most optimized finite-difference schemes for modeling seismic-wave propagation suppress only spatial but not temporal dispersion errors. We develop a novel optimized finite-difference scheme for numerical scalar-wave modeling to control dispersion errors not only in space but also in time. Our optimized scheme is based on a new stencil that contains a few more grid points than the standard stencil. We design an objective function for minimizing relative errors of phase velocities of waves propagating in all directions within a given range of wavenumbers. Dispersion analysis and numerical examples demonstrate that our optimized finite-difference scheme is computationally up to 2.5 times faster than the optimized schemes using the standard stencil to achieve similar modeling accuracy for a given 2D or 3D problem. Compared with the high-order finite-difference scheme using the same new stencil, our optimized scheme reduces the computational cost by 50 percent to achieve similar modeling accuracy. This new optimized finite-difference scheme is particularly useful for large-scale 3D scalar-wave modeling and inversion.
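The general idea of dispersion-optimized coefficients can be illustrated on the standard one-dimensional second-derivative stencil (not the authors' new space-time stencil): choose the coefficients by least squares so the discrete symbol matches -k^2 over a target wavenumber band.

```python
# Dispersion-optimized FD coefficients for a standard 1-D second-derivative stencil:
# fit the coefficients so the discrete symbol matches -k^2 over a wavenumber band.
import numpy as np

h, M = 1.0, 4                                   # grid spacing and stencil half-width
k = np.linspace(0.1, 0.85 * np.pi / h, 200)     # target wavenumber band (assumed)

# Discrete second derivative: (1/h^2) * [c0*f_i + sum_m cm*(f_{i+m}+f_{i-m})].
# Its Fourier symbol is c0 + sum_m 2*cm*cos(m*k*h), which should equal -(k*h)^2.
A = np.column_stack([2.0 * np.cos(m * k * h) for m in range(1, M + 1)])
A = np.column_stack([np.ones_like(k), A])       # column for c0
target = -(k ** 2) * h ** 2

coef, *_ = np.linalg.lstsq(A, target, rcond=None)
print("optimized coefficients c0..c4:", np.round(coef, 5))

# Compare the symbol error against the standard 8th-order Taylor coefficients.
taylor = np.array([-205 / 72, 8 / 5, -1 / 5, 8 / 315, -1 / 560])
for name, c in [("optimized", coef), ("taylor-8", taylor)]:
    err = np.max(np.abs(A @ c - target))
    print(f"{name:10s} max symbol error over band: {err:.2e}")
```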
Gómez, Pablo; Patel, Rita R.; Alexiou, Christoph; Bohr, Christopher; Schützenberger, Anne
2017-01-01
Motivation: Human voice is generated in the larynx by the two oscillating vocal folds. Owing to the limited space and accessibility of the larynx, endoscopic investigation of the actual phonatory process in detail is challenging. Hence the biomechanics of the human phonatory process are still not fully understood. Therefore, we adapt a mathematical model of the vocal folds towards vocal fold oscillations to quantify gender- and age-related differences expressed by computed biomechanical model parameters. Methods: The vocal fold dynamics are visualized by laryngeal high-speed videoendoscopy (4000 fps). A total of 33 healthy young subjects (16 females, 17 males) and 11 elderly subjects (5 females, 6 males) were recorded. A numerical two-mass model is adapted to the recorded vocal fold oscillations by varying model masses, stiffness and subglottal pressure. For adapting the model towards the recorded vocal fold dynamics, three different optimization algorithms (Nelder–Mead, Particle Swarm Optimization and Simulated Bee Colony) in combination with three cost functions were considered for applicability. Gender differences and age-related kinematic differences reflected by the model parameters were analyzed. Results and conclusion: The biomechanical model in combination with numerical optimization techniques allowed phonatory behavior to be simulated and the laryngeal parameters involved to be quantified. All three optimization algorithms showed promising results. However, only one cost function seems to be suitable for this optimization task. The obtained model parameters reflect the phonatory biomechanics for men and women well and show quantitative age- and gender-specific differences. The model parameters for younger females and males showed lower subglottal pressures, lower stiffness and higher masses than the corresponding elderly groups. Females exhibited higher subglottal pressures, smaller oscillation masses and larger stiffness than the corresponding similarly aged male groups. Optimizing numerical models towards vocal fold oscillations is useful to identify underlying laryngeal components controlling the phonatory process. PMID:29121085
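The fitting workflow (simulate, compare with the recording through a cost function, update parameters with a derivative-free optimizer) can be sketched with a toy damped oscillator standing in for the two-mass model; the parameter values, noise level and time base below are assumptions.

```python
# Toy stand-in for model adaptation: fit an oscillator's stiffness, damping and driving
# pressure to a "recorded" trajectory by minimizing an MSE cost with Nelder-Mead.
# (The actual study uses a two-mass vocal fold model and high-speed video data.)
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

t = np.linspace(0.0, 0.05, 200)                     # 50 ms of "video frames" (assumed)

def simulate(params):
    stiffness, damping, pressure = params
    def rhs(_, y):
        x, v = y
        return [v, pressure - stiffness * x - damping * v]
    return solve_ivp(rhs, (t[0], t[-1]), [0.0, 0.0], t_eval=t).y[0]

true_params = np.array([4.0e5, 60.0, 800.0])        # assumed "subject" values
recorded = simulate(true_params) + np.random.default_rng(1).normal(0, 1e-5, t.size)

def cost(params):
    return np.mean((simulate(params) - recorded) ** 2)   # MSE cost function

fit = minimize(cost, x0=[2.0e5, 30.0, 500.0], method="Nelder-Mead",
               options={"maxiter": 2000})
print("recovered stiffness, damping, pressure:", np.round(fit.x, 1))
```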
Dynamic analysis and optimal control for a model of hepatitis C with treatment
NASA Astrophysics Data System (ADS)
Zhang, Suxia; Xu, Xiaxia
2017-05-01
A model for hepatitis C is formulated to study the effects of treatment and public concern on HCV transmission dynamics. The stability of equilibria and the persistence of the model are analyzed, and an optimal control strategy is derived to limit the spread of HCV with a minimal number of infected individuals and minimal cost. The dynamical analysis reveals that the disease-free equilibrium of the model is asymptotically stable if the basic reproductive number R0 is less than unity. On the other hand, if R0 > 1, the disease is uniformly persistent. Numerical simulations are conducted to investigate the influence of different vital parameters on R0. For the corresponding optimality system, the optimal solution is characterized using Pontryagin's Maximum Principle, and comparisons of model-predicted outcomes with and without control are presented.
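As a purely illustrative companion, the sketch below compares a generic SIR-type epidemic with and without a constant treatment rate; it is not the paper's HCV model, and it does not implement the Pontryagin-based optimal control.

```python
# Illustrative comparison of epidemic outcomes with and without a constant treatment
# rate (generic SIR-with-treatment sketch; parameters assumed).
import numpy as np
from scipy.integrate import solve_ivp

beta, gamma = 0.4, 0.1                    # transmission and natural recovery rates (assumed)

def rhs(_, y, u):
    s, i, r = y
    return [-beta * s * i,
            beta * s * i - (gamma + u) * i,   # u = extra recovery via treatment
            (gamma + u) * i]

t_span, y0 = (0.0, 160.0), [0.99, 0.01, 0.0]
for u, label in [(0.0, "no control"), (0.15, "constant treatment")]:
    sol = solve_ivp(rhs, t_span, y0, args=(u,))
    print(f"{label:20s} R0={beta / (gamma + u):.2f}  "
          f"peak infected={sol.y[1].max():.3f}  total ever infected={sol.y[2, -1]:.3f}")
```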
A genetic algorithm for solving supply chain network design model
NASA Astrophysics Data System (ADS)
Firoozi, Z.; Ismail, N.; Ariafar, S. H.; Tang, S. H.; Ariffin, M. K. M. A.
2013-09-01
Network design is by nature costly, and optimization models play a significant role in reducing the unnecessary cost components of a distribution network. This study proposes a genetic algorithm to solve a distribution network design model. The structure of the chromosome in the proposed algorithm is defined in a novel way so that, in addition to producing feasible solutions, it also reduces the computational complexity of the algorithm. Computational results are presented to show the algorithm's performance.
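One common way to obtain such a feasibility-preserving chromosome is a random-key encoding; the sketch below applies it to a small, invented facility-location instance and is not the paper's specific encoding.

```python
# Random-key GA sketch for a small facility-location / network design problem. The
# chromosome is a vector of keys in [0,1]; decoding always yields a feasible design
# (at least one facility open, customers assigned to the nearest open facility).
import numpy as np

rng = np.random.default_rng(2)
n_fac, n_cust = 5, 30
fac_xy = rng.uniform(0, 100, (n_fac, 2))
cust_xy = rng.uniform(0, 100, (n_cust, 2))
fixed_cost = rng.uniform(200, 400, n_fac)
dist = np.linalg.norm(cust_xy[:, None, :] - fac_xy[None, :, :], axis=2)

def decode_and_cost(keys):
    open_fac = keys > 0.5
    if not open_fac.any():                      # feasibility repair: open best-keyed site
        open_fac[np.argmax(keys)] = True
    d = np.where(open_fac[None, :], dist, np.inf)
    return fixed_cost[open_fac].sum() + d.min(axis=1).sum()

pop = rng.uniform(0, 1, (40, n_fac))
for gen in range(60):
    fitness = np.array([decode_and_cost(ind) for ind in pop])
    parents = pop[np.argsort(fitness)[:20]]     # truncation selection
    children = []
    for _ in range(20):
        a, b = parents[rng.integers(0, 20, 2)]
        child = np.where(rng.uniform(size=n_fac) < 0.5, a, b)   # uniform crossover
        child = np.clip(child + rng.normal(0, 0.1, n_fac), 0, 1)  # mutation
        children.append(child)
    pop = np.vstack([parents, np.array(children)])

print("best network cost:", round(fitness.min(), 1))
```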
Cost effectiveness of meniscal allograft for torn discoid lateral meniscus in young women.
Ramme, Austin J; Strauss, Eric J; Jazrawi, Laith; Gold, Heather T
2016-09-01
A discoid meniscus is more prone to tears than a normal meniscus. Patients with a torn discoid lateral meniscus are at increased risk for early-onset osteoarthritis requiring total knee arthroplasty (TKA). Optimal management for this condition is controversial given the up-front cost difference between the two treatment options: the more expensive meniscal allograft transplantation compared with standard partial meniscectomy. We hypothesize that meniscal allograft transplantation following excision of a torn discoid lateral meniscus is more cost-effective compared with partial meniscectomy alone because allografts will extend the time to TKA. A decision analytic Markov model was created to compare the cost effectiveness of two treatments for symptomatic, torn discoid lateral meniscus: meniscal allograft and partial meniscectomy. Probability estimates and event rates were derived from the scientific literature, and costs and benefits were discounted by 3%. One-way sensitivity analyses were performed to test model robustness. Over 25 years, the partial meniscectomy strategy cost $10,430, whereas meniscal allograft cost on average $4040 more, at $14,470. Partial meniscectomy postponed TKA an average of 12.5 years, compared with 17.3 years for meniscal allograft, an increase of 4.8 years. Allograft cost $842 per year gained in time to TKA. Meniscal allografts have been shown to reduce pain and improve function in patients with discoid lateral meniscus tears. Though more costly, meniscal allografts may be more effective than partial meniscectomy in delaying TKA in this model. Future long-term clinical studies will provide more insight into optimal surgical options.
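The decision-analytic structure can be sketched as a simple discounted Markov cohort comparison; the transition probabilities and costs below are placeholders chosen only to show the mechanics, not the study's inputs.

```python
# Simplified Markov cohort sketch comparing two strategies by discounted cost and mean
# time to TKA. Transition probabilities and costs are placeholders, not the study's data.
def run_strategy(upfront_cost, annual_tka_prob, horizon=25, disc=0.03, tka_cost=30000.0):
    alive_pre_tka = 1.0                     # cohort fraction still without TKA
    total_cost, years_to_tka = upfront_cost, 0.0
    for year in range(1, horizon + 1):
        converting = alive_pre_tka * annual_tka_prob
        total_cost += converting * tka_cost / (1 + disc) ** year   # discounted TKA cost
        years_to_tka += alive_pre_tka       # expected person-years before TKA
        alive_pre_tka -= converting
    return total_cost, years_to_tka

for name, upfront, p_tka in [("partial meniscectomy", 4000.0, 0.055),
                             ("meniscal allograft",   8000.0, 0.035)]:
    cost, years = run_strategy(upfront, p_tka)
    print(f"{name:22s} discounted cost ${cost:,.0f}, mean years to TKA {years:.1f}")
```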
Using a genetic algorithm to optimize a water-monitoring network for accuracy and cost effectiveness
NASA Astrophysics Data System (ADS)
Julich, R. J.
2004-05-01
The purpose of this project is to determine the optimal spatial distribution of water-monitoring wells to maximize important data collection and to minimize the cost of managing the network. We have employed a genetic algorithm (GA) towards this goal. The GA uses a simple fitness measure with two parts: the first part awards a maximal score to those combinations of hydraulic head observations whose net uncertainty is closest to the value representing all observations present, thereby maximizing accuracy; the second part applies a penalty function to minimize the number of observations, thereby minimizing the overall cost of the monitoring network. We used the linear statistical inference equation to calculate standard deviations on predictions from a numerical model generated for the 501-observation Death Valley Regional Flow System as the basis for our uncertainty calculations. We have organized the results to address the following three questions: 1) what is the optimal design strategy for a genetic algorithm to optimize this problem domain; 2) what is the consistency of solutions over several optimization runs; and 3) how do these results compare to what is known about the conceptual hydrogeology? Our results indicate that genetic algorithms are a more efficient and robust method for solving this class of optimization problems than traditional optimization approaches.
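The two-part fitness measure can be sketched in isolation (the Death Valley model itself is not reproduced): a candidate well subset is scored by how closely its prediction uncertainty matches the all-wells value, minus a penalty per retained well. The "uncertainty" below is the standard linear-regression prediction standard deviation on a synthetic sensitivity matrix.

```python
# Sketch of the two-part GA fitness only: reward well subsets whose prediction
# uncertainty stays close to the all-wells value, and penalize each retained well.
import numpy as np

rng = np.random.default_rng(3)
n_obs, n_par = 60, 6
X = rng.normal(size=(n_obs, n_par))          # synthetic sensitivities of obs to parameters
x_pred = rng.normal(size=n_par)              # sensitivity of the prediction of interest
noise_var = 0.05 ** 2

def prediction_std(mask):
    Xs = X[mask]
    if Xs.shape[0] < n_par:                  # too few observations: effectively unbounded
        return np.inf
    cov = noise_var * np.linalg.inv(Xs.T @ Xs)
    return float(np.sqrt(x_pred @ cov @ x_pred))

sd_all = prediction_std(np.ones(n_obs, dtype=bool))

def fitness(mask, penalty_per_well=0.002):
    accuracy_score = -abs(prediction_std(mask) - sd_all)   # best when it matches all wells
    return accuracy_score - penalty_per_well * mask.sum()  # fewer wells = lower cost

subset = rng.uniform(size=n_obs) < 0.4       # an example candidate from a GA population
print(f"all wells sd={sd_all:.4f}, subset sd={prediction_std(subset):.4f}, "
      f"fitness={fitness(subset):.4f}")
```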
Data analytics and optimization of an ice-based energy storage system for commercial buildings
Luo, Na; Hong, Tianzhen; Li, Hui; ...
2017-07-25
Ice-based thermal energy storage (TES) systems can shift peak cooling demand and reduce operational energy costs (with time-of-use rates) in commercial buildings. The accurate prediction of the cooling load, and the optimal control strategy for managing the charging and discharging of a TES system, are two critical elements to improving system performance and achieving energy cost savings. This study utilizes data-driven analytics and modeling to holistically understand the operation of an ice-based TES system in a shopping mall, calculating the system's performance using actual measured data from installed meters and sensors. Results show that there is significant savings potential when the current operating strategy is improved by appropriately scheduling the operation of each piece of equipment of the TES system, as well as by determining the amount of charging and discharging for each day. A novel optimal control strategy, determined by a Sequential Quadratic Programming optimization algorithm, was developed to minimize the TES system's operating costs. Three heuristic strategies were also investigated for comparison with our proposed strategy, and the results demonstrate the superiority of our method to the heuristic strategies in terms of total energy cost savings. Specifically, the optimal strategy yields energy cost savings of up to 11.3% per day and 9.3% per month compared with current operational strategies. A one-day-ahead hourly load prediction was also developed using machine learning algorithms, which facilitates the adoption of the developed data analytics and optimization of the control strategy in a real TES system operation.
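A compact sketch of the scheduling idea, using SciPy's SLSQP on an assumed load profile, time-of-use tariff and tank size (not the studied shopping-mall system), is shown below.

```python
# SLSQP sketch of scheduling TES charging/discharging against time-of-use prices,
# with a simple state-of-charge balance; all parameters are assumptions.
import numpy as np
from scipy.optimize import minimize

hours = np.arange(24)
cooling_load = 2.0 + 1.5 * np.exp(-((hours - 14) ** 2) / 18.0)     # MW_th, assumed
price = np.where((hours >= 8) & (hours < 22), 0.18, 0.07)          # $/kWh TOU, assumed
cop, cap, p_max = 4.0, 10.0, 3.0     # chiller COP, tank size [MWh_th], charge rate [MW_th]

def cost(x):
    """x[t] > 0 charges the tank, x[t] < 0 discharges; the chiller covers the rest."""
    chiller_th = cooling_load + x                       # thermal output required of chiller
    elec = np.clip(chiller_th, 0.0, None) / cop * 1e3   # kWh of electricity each hour
    return float(elec @ price)

def soc(x):                                             # state-of-charge trajectory
    return np.cumsum(x)

cons = [{"type": "ineq", "fun": lambda x: soc(x)},           # SOC >= 0
        {"type": "ineq", "fun": lambda x: cap - soc(x)},     # SOC <= tank capacity
        {"type": "eq",   "fun": lambda x: soc(x)[-1]}]       # end the day empty
res = minimize(cost, np.zeros(24), method="SLSQP",
               bounds=[(-p_max, p_max)] * 24, constraints=cons)
baseline = float((cooling_load / cop * 1e3) @ price)
print(f"baseline ${baseline:.0f}/day -> optimized ${res.fun:.0f}/day "
      f"({100 * (1 - res.fun / baseline):.1f}% saving)")
```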
NASA Astrophysics Data System (ADS)
Bryson, Dean Edward
A model's level of fidelity may be defined as its accuracy in faithfully reproducing a quantity or behavior of interest of a real system. Increasing the fidelity of a model often goes hand in hand with increasing its cost in terms of time, money, or computing resources. The traditional aircraft design process relies upon low-fidelity models for expedience and resource savings. However, the reduced accuracy and reliability of low-fidelity tools often lead to the discovery of design defects or inadequacies late in the design process. These deficiencies result either in costly changes or the acceptance of a configuration that does not meet expectations. The unknown opportunity cost is the discovery of superior vehicles that leverage phenomena unknown to the designer and not illuminated by low-fidelity tools. Multifidelity methods attempt to blend the increased accuracy and reliability of high-fidelity models with the reduced cost of low-fidelity models. In building surrogate models, where mathematical expressions are used to cheaply approximate the behavior of costly data, low-fidelity models may be sampled extensively to resolve the underlying trend, while high-fidelity data are reserved to correct inaccuracies at key locations. Similarly, in design optimization a low-fidelity model may be queried many times in the search for new, better designs, with a high-fidelity model being exercised only once per iteration to evaluate the candidate design. In this dissertation, a new multifidelity, gradient-based optimization algorithm is proposed. It differs from the standard trust region approach in several ways, stemming from the new method maintaining an approximation of the inverse Hessian, that is the underlying curvature of the design problem. Whereas the typical trust region approach performs a full sub-optimization using the low-fidelity model at every iteration, the new technique finds a suitable descent direction and focuses the search along it, reducing the number of low-fidelity evaluations required. This narrowing of the search domain also alleviates the burden on the surrogate model corrections between the low- and high-fidelity data. Rather than requiring the surrogate to be accurate in a hyper-volume bounded by the trust region, the model needs only to be accurate along the forward-looking search direction. Maintaining the approximate inverse Hessian also allows the multifidelity algorithm to revert to high-fidelity optimization at any time. In contrast, the standard approach has no memory of the previously-computed high-fidelity data. The primary disadvantage of the proposed algorithm is that it may require modifications to the optimization software, whereas standard optimizers may be used as black-box drivers in the typical trust region method. A multifidelity, multidisciplinary simulation of aeroelastic vehicle performance is developed to demonstrate the optimization method. The numerical physics models include body-fitted Euler computational fluid dynamics; linear, panel aerodynamics; linear, finite-element computational structural mechanics; and reduced, modal structural bases. A central element of the multifidelity, multidisciplinary framework is a shared parametric, attributed geometric representation that ensures the analysis inputs are consistent between disciplines and fidelities. The attributed geometry also enables the transfer of data between disciplines. 
The new optimization algorithm, a standard trust region approach, and a single-fidelity quasi-Newton method are compared for a series of analytic test functions, using both polynomial chaos expansions and kriging to correct discrepancies between fidelity levels of data. In the aggregate, the new method requires fewer high-fidelity evaluations than the trust region approach in 51% of cases, and the same number of evaluations in 18%. The new approach also requires fewer low-fidelity evaluations, by up to an order of magnitude, in almost all cases. The efficacy of both multifidelity methods compared to single-fidelity optimization depends significantly on the behavior of the high-fidelity model and the quality of the low-fidelity approximation, though savings are realized in a large number of cases. The multifidelity algorithm is also compared to the single-fidelity quasi-Newton method for complex aeroelastic simulations. The vehicle design problem includes variables for planform shape, structural sizing, and cruise condition with constraints on trim and structural stresses. Considering the objective function reduction versus computational expenditure, the multifidelity process performs better in three of four cases in early iterations. However, the enforcement of a contracting trust region slows the multifidelity progress. Even so, leveraging the approximate inverse Hessian, the optimization can be seamlessly continued using high-fidelity data alone. Ultimately, the proposed new algorithm produced better designs in all four cases. Investigating the return on investment in terms of design improvement per computational hour confirms that the multifidelity advantage is greatest in early iterations, and managing the transition to high-fidelity optimization is critical.
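The mechanism the dissertation leans on, an approximate inverse Hessian carried between iterations, is the standard BFGS update; a minimal sketch on a quadratic test objective is given below (it is not the author's full multifidelity algorithm).

```python
# Minimal sketch of maintaining an approximate inverse Hessian with BFGS updates, so
# curvature learned from earlier steps carries over when optimization continues.
import numpy as np

def bfgs_inverse_hessian_update(H, s, y):
    """Standard BFGS update of the inverse Hessian (s: step, y: gradient change)."""
    rho = 1.0 / float(y @ s)
    I = np.eye(len(s))
    V = I - rho * np.outer(s, y)
    return V @ H @ V.T + rho * np.outer(s, s)

# Demonstrate on a 2-D quadratic test objective f(x) = 0.5 x^T A x.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
x, H = np.array([2.0, -1.5]), np.eye(2)
for _ in range(5):
    g = A @ x
    if np.linalg.norm(g) < 1e-10:
        break
    d = -H @ g                                   # quasi-Newton search direction
    alpha = (g @ H @ g) / (d @ A @ d)            # exact line search for a quadratic
    s = alpha * d
    H = bfgs_inverse_hessian_update(H, s, A @ (x + s) - g)
    x = x + s

print("minimizer:", np.round(x, 8))
print("learned inverse Hessian ~ A^{-1}:\n", np.round(H, 4))
print("true A^{-1}:\n", np.round(np.linalg.inv(A), 4))
```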
NASA Astrophysics Data System (ADS)
Davidsen, Claus; Liu, Suxia; Mo, Xingguo; Rosbjerg, Dan; Bauer-Gottwein, Peter
2014-05-01
Optimal management of conjunctive use of surface water and groundwater has been attempted with different algorithms in the literature. In this study, a hydro-economic modelling approach to optimize conjunctive use of scarce surface water and groundwater resources under uncertainty is presented. A stochastic dynamic programming (SDP) approach is used to minimize the basin-wide total costs arising from water allocations and water curtailments. Dynamic allocation problems with inclusion of groundwater resources proved to be more complex to solve with SDP than pure surface water allocation problems due to head-dependent pumping costs. These dynamic pumping costs strongly affect the total costs and can lead to non-convexity of the future cost function. The water user groups (agriculture, industry, domestic) are characterized by inelastic demands and fixed water allocation and water supply curtailment costs. As in traditional SDP approaches, one step-ahead sub-problems are solved to find the optimal management at any time knowing the inflow scenario and reservoir/aquifer storage levels. These non-linear sub-problems are solved using a genetic algorithm (GA) that minimizes the sum of the immediate and future costs for given surface water reservoir and groundwater aquifer end storages. The immediate cost is found by solving a simple linear allocation sub-problem, and the future costs are assessed by interpolation in the total cost matrix from the following time step. Total costs for all stages, reservoir states, and inflow scenarios are used as future costs to drive a forward moving simulation under uncertain water availability. The use of a GA to solve the sub-problems is computationally more costly than a traditional SDP approach with linearly interpolated future costs. However, in a two-reservoir system the future cost function would have to be represented by a set of planes, and strict convexity in both the surface water and groundwater dimension cannot be maintained. The optimization framework based on the GA is still computationally feasible and represents a clean and customizable method. The method has been applied to the Ziya River basin, China. The basin is located on the North China Plain and is subject to severe water scarcity, which includes surface water droughts and groundwater over-pumping. The head-dependent groundwater pumping costs will enable assessment of the long-term effects of increased electricity prices on the groundwater pumping. The coupled optimization framework is used to assess realistic alternative development scenarios for the basin. In particular the potential for using electricity pricing policies to reach sustainable groundwater pumping is investigated.
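The "immediate cost plus interpolated future cost" recursion can be sketched for a single reservoir with discrete storage states and an expectation over inflow scenarios; the sketch below omits groundwater, head-dependent pumping costs and the GA sub-problem solver, and all numbers are assumed.

```python
# Compact single-reservoir SDP sketch: backward recursion over discrete storage states,
# expectation over inflow scenarios, minimizing immediate curtailment cost plus the
# interpolated future cost from the following stage.
import numpy as np

storages = np.linspace(0, 100, 21)                      # discrete reservoir states [Mm3]
inflows, probs = np.array([10.0, 30.0, 60.0]), np.array([0.3, 0.5, 0.2])
demand, curtail_cost = 40.0, 2.0                        # Mm3/stage and $/Mm3 unmet demand
n_stages = 12

future = np.zeros(storages.size)                        # terminal future-cost function
for stage in reversed(range(n_stages)):
    new_future = np.zeros_like(future)
    for i, s in enumerate(storages):
        exp_cost = 0.0
        for q, p in zip(inflows, probs):                # inflow known at decision time
            best = np.inf
            for release in np.linspace(0, min(s + q, demand + 20), 15):
                end_storage = np.clip(s + q - release, 0, storages[-1])
                immediate = curtail_cost * max(demand - release, 0.0)
                best = min(best, immediate + np.interp(end_storage, storages, future))
            exp_cost += p * best
        new_future[i] = exp_cost
    future = new_future

print("expected 12-stage cost starting empty vs full:",
      round(future[0], 1), "vs", round(future[-1], 1))
```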
NASA Astrophysics Data System (ADS)
Zhiying, Chen; Ping, Zhou
2017-11-01
To balance computational precision and efficiency in the robust optimization of complex mechanical assembly relationships such as turbine blade-tip radial running clearance, a hierarchical response surface robust optimization algorithm is proposed. The distributed collaborative response surface method is used to generate an assembly system-level approximation model relating the overall parameters to blade-tip clearance, and then a set of samples of the design parameters and the objective response mean and/or standard deviation is generated by using the system approximation model and a design-of-experiments method. Finally, a new response surface approximation model is constructed from those samples, and this approximation model is used in the robust optimization process. The results demonstrate that the proposed method can dramatically reduce the computational cost while ensuring computational precision. The presented research offers an effective way to perform robust optimization design of turbine blade-tip radial running clearance.
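A minimal two-level response-surface sketch of this workflow is given below: a quadratic surrogate is fitted over the design and random inputs of an invented "expensive" clearance function, Monte Carlo on the surrogate supplies mean and standard deviation samples, and a second surface over the robust objective (mean plus three standard deviations) is then minimized. It is not the distributed collaborative response surface method itself.

```python
# Two-level response-surface sketch: (1) quadratic surrogate over (design, random input),
# (2) Monte Carlo on the surrogate to get mean/std at sample designs, (3) fit and minimize
# a second surface of the robust objective. Polynomial surrogates only; data assumed.
import numpy as np

rng = np.random.default_rng(4)

def expensive_clearance(d, xi):
    """Hypothetical stand-in for the clearance analysis (d: design, xi: random input)."""
    return 1.5 + 0.8 * (d - 0.6) ** 2 + 0.3 * xi * (1.0 + d) + 0.05 * xi ** 2

# Level 1: system-level surrogate over (d, xi).
d_s, xi_s = rng.uniform(0.0, 1.0, 80), rng.normal(0.0, 1.0, 80)
basis = lambda d, xi: np.column_stack([np.ones_like(d), d, xi, d * xi, d**2, xi**2])
coef1, *_ = np.linalg.lstsq(basis(d_s, xi_s), expensive_clearance(d_s, xi_s), rcond=None)

# Level 2: mean/std of the surrogate response at sample designs via Monte Carlo.
designs = np.linspace(0.0, 1.0, 15)
xi_mc = rng.normal(0.0, 1.0, 2000)
mean, std = [], []
for d in designs:
    y = basis(np.full_like(xi_mc, d), xi_mc) @ coef1
    mean.append(y.mean())
    std.append(y.std())
robust = np.array(mean) + 3.0 * np.array(std)           # robust objective samples

# Fit and minimize a second (1-D quadratic) response surface of the robust objective.
p = np.polyfit(designs, robust, 2)
d_opt = np.clip(-p[1] / (2 * p[0]), 0.0, 1.0)
print(f"robust optimum design ~ {d_opt:.3f}, predicted objective {np.polyval(p, d_opt):.3f}")
```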