Sample records for general cost optimization

  1. Procedure for minimizing the cost per watt of photovoltaic systems

    NASA Technical Reports Server (NTRS)

    Redfield, D.

    1977-01-01

    A general analytic procedure is developed that provides a quantitative method for optimizing any element or process in the fabrication of a photovoltaic energy conversion system by minimizing its impact on the cost per watt of the complete system. By determining the effective value of any power loss associated with each element of the system, this procedure furnishes the design specifications that optimize the cost-performance tradeoffs for each element. A general equation is derived that optimizes the properties of any part of the system in terms of appropriate cost and performance functions, although the power-handling components are found to have a different character from the cell and array steps. Another principal result is that a fractional performance loss occurring at any cell- or array-fabrication step produces that same fractional increase in the cost per watt of the complete array. It also follows that no element or process step can be optimized correctly by considering only its own cost and performance.
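
    The paper's central scaling result can be checked numerically: if delivered power is the nominal power times the product of (1 - f_i) over the fabrication-step loss fractions, a fractional loss f at any one step raises the cost per watt by the factor 1/(1 - f), i.e. by approximately the same fraction. A minimal sketch (all numbers and names below are illustrative assumptions, not from the paper):

    ```python
    # Illustration (not from the paper): cost per watt of a system whose output
    # passes through several fabrication steps, each with a fractional power loss.
    def cost_per_watt(total_cost, nominal_power, losses):
        delivered = nominal_power
        for f in losses:
            delivered *= (1.0 - f)
        return total_cost / delivered

    base = cost_per_watt(total_cost=1000.0, nominal_power=500.0, losses=[0.05, 0.02])
    bump = cost_per_watt(total_cost=1000.0, nominal_power=500.0, losses=[0.05, 0.02, 0.01])
    print(base, bump, bump / base)  # ratio = 1/(1 - 0.01) ~ 1.0101: a 1% loss, ~1% higher $/W
    ```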

  2. Theory and applications for optimization of every part of a photovoltaic system

    NASA Technical Reports Server (NTRS)

    Redfield, D.

    1978-01-01

    A general method is presented for quantitatively optimizing the design of every part and fabrication step of an entire photovoltaic system, based on the criterion of minimum cost/Watt for the system output power. It is shown that no element or process step can be optimized properly by considering only its own cost and performance. Moreover, a fractional performance loss at any fabrication step within the cell or array produces the same fractional increase in the cost/Watt of the entire array, but not of the full system. One general equation is found to be capable of optimizing all parts of a system, although the cell and array steps are basically different from the power-handling elements. Applications of this analysis are given to show (1) when Si wafers should be cut to increase their packing fraction; and (2) what the optimum dimensions for solar cell metallizations are. The optimum shadow fraction of the fine grid is shown to be independent of metal cost and resistivity as well as cell size. The optimum thicknesses of both the fine grid and the bus bar are substantially greater than the values in general use, and the total array cost has a major effect on these values. By analogy, this analysis is adaptable to other solar energy systems.

  3. Taxation of United States general aviation

    NASA Astrophysics Data System (ADS)

    Sobieralski, Joseph Bernard

    General aviation in the United States has been an important part of the economy and American life. General aviation is defined as all flying excluding military and scheduled airline operations, and is utilized in many areas of our society. The majority of aircraft operations and airports in the United States are categorized as general aviation, and general aviation contributes more than one percent to the United States gross domestic product each year. Despite the many benefits of general aviation, the lead emissions from aviation gasoline consumption are of great concern. General aviation emits over half the lead emissions in the United States, over 630 tons in 2005. The other significant negative externality attributed to general aviation usage is aircraft accidents. General aviation accidents have caused over 8000 fatalities over the period 1994-2006. A recent Federal Aviation Administration proposed increase in the aviation gasoline tax from 19.4 to 70.1 cents per gallon has renewed interest in better understanding the implications of such a tax increase as well as the possible optimal rate of taxation. Few studies have examined aviation fuel elasticities, and all have failed to study general aviation fuel elasticities. Chapter one fills that gap and examines the elasticity of aviation gasoline consumption in United States general aviation. Utilizing aggregate time series and dynamic panel data, the price and income elasticities of demand are estimated. The price elasticity of demand for aviation gasoline is estimated to range from -0.093 to -0.185 in the short run and from -0.132 to -0.303 in the long run. These results prove to be similar in magnitude to automobile gasoline elasticities, and therefore tax policies could more closely mirror those of automobile tax policies. The second chapter examines the costs associated with general aviation accidents. Given the large number of general aviation operations as well as the large number of fatalities and injuries attributed to general aviation accidents in the United States, understanding the costs to society is of great importance. This chapter estimates the direct and indirect costs associated with general aviation accidents in the United States. The indirect costs are estimated via the human capital approach in addition to the willingness-to-pay approach. The average annual accident costs attributed to general aviation are found to be $2.32 billion and $3.81 billion (2006 US$) utilizing the human capital approach and willingness-to-pay approach, respectively. These values appear to be fairly robust when subjected to a sensitivity analysis. These costs highlight the large societal benefits from accident and fatality reduction. The final chapter derives a second-best optimal aviation gasoline tax developed from previous general equilibrium frameworks. This optimal tax reflects both the lead pollution and accident externalities, as well as the balance between excise taxes and labor taxes to finance government spending. The calculated optimal tax rate is $4.07 per gallon, which is over 20 times greater than the current tax rate and 5 times greater than the Federal Aviation Administration proposed tax rate. The calculated optimal tax rate is also over 3 times greater than automobile gasoline optimal tax rates calculated by previous studies. The Pigovian component is $1.36, and we observe that the accident externality is taxed more severely than the pollution externality. The largest component of the optimal tax rate is the Ramsey component. At $2.70, the Ramsey component reflects the ability of the government to raise revenue from aviation gasoline, which is price inelastic. The calculated optimal tax is estimated to reduce lead emissions by over 10 percent and reduce accidents by 20 percent. Although such a tax is unlikely to be adopted by policy makers, its benefits are apparent, and it sheds light on the need to reduce these negative externalities via policy changes.
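
    As a quick consistency check of the figures quoted above, the two reported components of the optimal tax sum to the stated rate up to rounding:

    ```python
    # Components quoted in the abstract (dollars per gallon, 2006 US$).
    pigovian = 1.36   # lead-pollution + accident externality component
    ramsey = 2.70     # revenue-raising (Ramsey) component
    print(pigovian + ramsey)  # 4.06, matching the reported ~$4.07 up to rounding
    ```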

  4. Efficient Coding and Energy Efficiency Are Promoted by Balanced Excitatory and Inhibitory Synaptic Currents in Neuronal Network

    PubMed Central

    Yu, Lianchun; Shen, Zhou; Wang, Chen; Yu, Yuguo

    2018-01-01

    Selective pressure may drive neural systems to process as much information as possible at the lowest energy cost. Recent experimental evidence revealed that the ratio between synaptic excitation and inhibition (E/I) in local cortex is generally maintained at a certain value, which may influence the efficiency of energy consumption and information transmission in neural networks. To understand this issue more deeply, we constructed a typical recurrent Hodgkin-Huxley network model and studied the general principles that govern the relationship among the E/I synaptic current ratio, the energy cost, and the total amount of information transmission. We observed that there exists an optimal E/I synaptic current ratio in such a network at which information transmission achieves its maximum with relatively low energy cost. The coding energy efficiency, defined as the mutual information divided by the energy cost, achieves its maximum with balanced synaptic currents. Although background noise degrades information transmission and imposes an additional energy cost, we find an optimal noise intensity that yields the largest information transmission and energy efficiency at this optimal E/I synaptic transmission ratio. The maximization of energy efficiency also requires that a certain part of the energy cost be associated with spontaneous spiking and synaptic activities. We further proved this finding with an analytical solution based on the response function of bistable neurons, and demonstrated that optimal net synaptic currents are capable of maximizing both the mutual information and the energy efficiency. These results revealed that the development of E/I synaptic current balance could lead a cortical network to operate at a highly efficient information transmission rate at a relatively low energy cost. The generality of the neuronal models and the recurrent network configuration used here suggests that the existence of an optimal E/I cell ratio for highly efficient energy costs and information maximization is a potential principle for cortical circuit networks. Summary: We conducted numerical simulations and mathematical analysis to examine the energy efficiency of neural information transmission in a recurrent network as a function of the ratio of excitatory and inhibitory synaptic connections. We obtained a general solution showing that there exists an optimal E/I synaptic ratio in a recurrent network at which the information transmission as well as the energy efficiency of this network achieves a global maximum. These results reflect general mechanisms for sensory coding processes, which may give insight into the energy efficiency of neural communication and coding. PMID:29773979
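
    The qualitative finding, an interior optimum of information per unit energy as the E/I current ratio varies, can be reproduced in a far cruder caricature than the paper's Hodgkin-Huxley network. The sketch below is entirely my construction (the functional forms are assumptions, not the authors' model): it treats the net current as a Gaussian channel and sweeps the E/I ratio.

    ```python
    import numpy as np

    total_drive = 10.0                   # E + I synaptic currents (arbitrary units)
    ratios = np.linspace(1.1, 10, 200)   # candidate E/I current ratios
    E = total_drive * ratios / (1 + ratios)
    I = total_drive - E
    net = E - I                          # net depolarizing current
    noise = 1.0 + 0.3 * net**2           # stronger drive -> noisier, saturating response
    info = 0.5 * np.log2(1 + net**2 / noise)   # Gaussian-channel information proxy (bits)
    energy = total_drive + 0.5 * net**2        # synaptic + spiking cost proxy
    efficiency = info / energy
    print("optimal E/I ratio ~", round(float(ratios[np.argmax(efficiency)]), 2))
    ```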

  5. Efficient Coding and Energy Efficiency Are Promoted by Balanced Excitatory and Inhibitory Synaptic Currents in Neuronal Network.

    PubMed

    Yu, Lianchun; Shen, Zhou; Wang, Chen; Yu, Yuguo

    2018-01-01

    Selective pressure may drive neural systems to process as much information as possible at the lowest energy cost. Recent experimental evidence revealed that the ratio between synaptic excitation and inhibition (E/I) in local cortex is generally maintained at a certain value, which may influence the efficiency of energy consumption and information transmission in neural networks. To understand this issue more deeply, we constructed a typical recurrent Hodgkin-Huxley network model and studied the general principles that govern the relationship among the E/I synaptic current ratio, the energy cost, and the total amount of information transmission. We observed that there exists an optimal E/I synaptic current ratio in such a network at which information transmission achieves its maximum with relatively low energy cost. The coding energy efficiency, defined as the mutual information divided by the energy cost, achieves its maximum with balanced synaptic currents. Although background noise degrades information transmission and imposes an additional energy cost, we find an optimal noise intensity that yields the largest information transmission and energy efficiency at this optimal E/I synaptic transmission ratio. The maximization of energy efficiency also requires that a certain part of the energy cost be associated with spontaneous spiking and synaptic activities. We further proved this finding with an analytical solution based on the response function of bistable neurons, and demonstrated that optimal net synaptic currents are capable of maximizing both the mutual information and the energy efficiency. These results revealed that the development of E/I synaptic current balance could lead a cortical network to operate at a highly efficient information transmission rate at a relatively low energy cost. The generality of the neuronal models and the recurrent network configuration used here suggests that the existence of an optimal E/I cell ratio for highly efficient energy costs and information maximization is a potential principle for cortical circuit networks. We conducted numerical simulations and mathematical analysis to examine the energy efficiency of neural information transmission in a recurrent network as a function of the ratio of excitatory and inhibitory synaptic connections. We obtained a general solution showing that there exists an optimal E/I synaptic ratio in a recurrent network at which the information transmission as well as the energy efficiency of this network achieves a global maximum. These results reflect general mechanisms for sensory coding processes, which may give insight into the energy efficiency of neural communication and coding.

  6. A Decision-making Model for a Two-stage Production-delivery System in SCM Environment

    NASA Astrophysics Data System (ADS)

    Feng, Ding-Zhong; Yamashiro, Mitsuo

    A decision-making model is developed for an optimal production policy in a two-stage production-delivery system that incorporates a fixed-quantity supply of finished goods to a buyer at a fixed interval of time. First, a general cost model is formulated considering both the supplier (of raw materials) and buyer (of finished products) sides. Then an optimal solution to the problem is derived on the basis of the cost model. Using the proposed model and its optimal solution, one can determine the optimal production lot size for each stage, the optimal number of transportations of semi-finished goods, and the optimal quantity of semi-finished goods transported each time to meet the lumpy demand of consumers. Also, we examine the sensitivity of raw-materials ordering and production lot size to changes in ordering cost, transportation cost and manufacturing setup cost. A pragmatic computational approach is proposed to obtain integer approximate solutions in operational situations. Finally, we give some numerical examples.
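
    For readers who want to see the shape of such a model, here is a toy grid search over the decision variables the abstract names (stage lot size and number of semi-finished shipments). The cost structure and all parameter values are my assumptions for illustration, not the authors' formulation, which derives the optimum analytically:

    ```python
    # Toy sketch: minimize setup + ordering + transport + holding cost per period.
    demand = 1200.0                  # finished units per period (fixed buyer offtake)
    setup1, setup2 = 400.0, 250.0    # manufacturing setup cost per lot, stages 1 and 2
    order_cost = 120.0               # raw-material ordering cost per lot
    trans_cost = 60.0                # cost per shipment of semi-finished goods
    hold1, hold2 = 0.4, 0.9          # holding cost per unit per period, by stage

    def total_cost(q1, n):
        q2 = q1 / n                  # semi-finished quantity moved per shipment
        return (demand / q1 * (setup1 + setup2 + order_cost + n * trans_cost)
                + hold1 * q1 / 2 + hold2 * q2 / 2)

    best = min(((total_cost(q1, n), q1, n)
                for q1 in range(50, 2001, 10) for n in range(1, 11)))
    print("cost %.1f at lot size %d with %d shipments" % best)
    ```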

  7. Behavior of Machine Learning Algorithms in Adversarial Environments

    DTIC Science & Technology

    2010-11-23

    handwriting recognition [cf., Plamondon and Srihari, 2000], they also have potentially far-reaching utility for many applications in security, networking... cost of the largest ℓp cost ball that fits entirely within their convex hull; let's say this cost is C† ≤ C_0^+. To achieve ε-multiplicative optimality... optimal on F_convex,+ for ℓ2 costs. The proof of this result is in Appendix C.4. This result says that there is no algorithm that can generally achieve ε...

  8. Multi-Constraint Multi-Variable Optimization of Source-Driven Nuclear Systems

    NASA Astrophysics Data System (ADS)

    Watkins, Edward Francis

    1995-01-01

    A novel approach to the search for optimal designs of source-driven nuclear systems is investigated. Such systems include radiation shields, fusion reactor blankets and various neutron spectrum-shaping assemblies. The novel approach involves the replacement of the steepest-descents optimization algorithm incorporated in the code SWAN by a significantly more general and efficient sequential quadratic programming optimization algorithm provided by the code NPSOL. The resulting SWAN/NPSOL code system can be applied to more general, multi-variable, multi-constraint shield optimization problems. The constraints it accounts for may include simple bounds on variables, linear constraints, and smooth nonlinear constraints. It may also be applied to unconstrained, bound-constrained and linearly constrained optimization. The shield optimization capabilities of the SWAN/NPSOL code system are tested and verified in a variety of optimization problems: dose minimization at constant cost, cost minimization at constant dose, and multiple-nonlinear-constraint optimization. The replacement of the optimization part of SWAN with NPSOL is found to be feasible and leads to a very substantial increase in the complexity of the optimization problems that can be handled efficiently.
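
    The "dose minimization at constant cost" problem class maps directly onto modern SQP solvers. A minimal sketch with SciPy's SLSQP solver (a toy two-layer shield of my own devising, not the SWAN/NPSOL code):

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # x = thicknesses of two shield layers; toy dose and cost models (assumptions).
    dose = lambda x: 5.0 * np.exp(-0.8 * x[0]) + 3.0 * np.exp(-0.5 * x[1])
    cost = lambda x: 2.0 * x[0] + 1.0 * x[1]

    res = minimize(dose, x0=[1.0, 1.0], method="SLSQP",
                   bounds=[(0, 10), (0, 10)],
                   constraints=[{"type": "eq", "fun": lambda x: cost(x) - 8.0}])
    print(res.x, dose(res.x))  # layer thicknesses minimizing dose at fixed cost
    ```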

  9. Cost analysis of an electricity supply chain using modification of price based dynamic economic dispatch in wheeling transaction scheme

    NASA Astrophysics Data System (ADS)

    Wahyuda; Santosa, Budi; Rusdiansyah, Ahmad

    2018-04-01

    Deregulation of the electricity market requires coordination between parties to synchronize optimization on the production side (power stations) and the transport side (transmission). The electricity supply chain presented in this article is designed to facilitate that coordination. Generally, the production side is optimized with a price-based dynamic economic dispatch (PBDED) model, while the transmission side is optimized with a multi-echelon distribution model, and the two optimizations are done separately. This article proposes a joint model of PBDED and multi-echelon distribution for the combined optimization of production and transmission. This combined optimization is important because changes in electricity demand on the customer side will cause changes on the production side that automatically also alter the transmission path. The transmission gives rise to two cost components: first, the cost of losses; second, the cost of using the transmission network (wheeling transaction). Costs due to losses are calculated based on ohmic losses, while the cost of using transmission lines is calculated using the MW-mile method. As a result, this method is able to provide the best allocation analysis for electrical transactions, as well as emission levels in power generation and cost analysis. As for the calculation of transmission costs, the Reverse MW-mile method produces a cheaper cost than the Absolute MW-mile method.
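
    The two pricing rules compared in the article differ only in how counter-flows are charged. A sketch under simplified assumptions (line lengths, induced flows, and the $/MW-mile rate are made up for illustration):

    ```python
    # Each line has a length (miles) and the MW flow a transaction induces on it
    # (negative = counter-flow against the prevailing direction).
    def absolute_mw_mile(flows, lengths, rate):
        # charge every MW of induced flow, regardless of direction
        return rate * sum(abs(f) * L for f, L in zip(flows, lengths))

    def reverse_mw_mile(flows, lengths, rate):
        # charge only flows in the prevailing direction; counter-flows go free
        return rate * sum(f * L for f, L in zip(flows, lengths) if f > 0)

    flows, lengths = [30.0, -12.0, 8.0], [50.0, 20.0, 80.0]
    print(absolute_mw_mile(flows, lengths, 1.0))  # 2380.0
    print(reverse_mw_mile(flows, lengths, 1.0))   # 2140.0 -- cheaper, as reported
    ```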

  10. A Scheme to Optimize Flow Routing and Polling Switch Selection of Software Defined Networks.

    PubMed

    Chen, Huan; Li, Lemin; Ren, Jing; Wang, Yang; Zhao, Yangming; Wang, Xiong; Wang, Sheng; Xu, Shizhong

    2015-01-01

    This paper aims at minimizing the communication cost for collecting flow information in Software Defined Networks (SDN). Since the flow-based information collection method requires too much communication cost, and the recently proposed switch-based method cannot benefit from controlling flow routing, we propose jointly optimizing flow routing and polling switch selection to reduce the communication cost. To this end, the joint optimization problem is first formulated as an Integer Linear Programming (ILP) model. Since the ILP model is intractable in large networks, we also design an optimal algorithm for the multi-rooted tree topology and an efficient heuristic algorithm for general topologies. Extensive simulations show that our method can save up to 55.76% of the communication cost compared with the state-of-the-art switch-based scheme.
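
    The flavor of the joint problem can be seen in a toy brute-force version: choose a set of polling switches that observes every flow at minimum cost. This sketch (my simplification) fixes the flow routes, whereas the paper's ILP also optimizes them:

    ```python
    from itertools import combinations

    # Toy instance: each flow's route is the set of switches it traverses.
    flows = {"f1": {"s1", "s2"}, "f2": {"s2", "s3"}, "f3": {"s3", "s4"}}
    switches = ["s1", "s2", "s3", "s4"]
    poll_cost, report_cost = 5.0, 1.0   # per polled switch / per flow report

    def comm_cost(polled):
        if any(not (route & polled) for route in flows.values()):
            return float("inf")          # some flow is never observed
        return poll_cost * len(polled) + report_cost * len(flows)

    best = min(((comm_cost(set(c)), c)
                for r in range(1, len(switches) + 1)
                for c in combinations(switches, r)), key=lambda t: t[0])
    print(best)  # (13.0, ('s1', 's3')): two polled switches cover all flows
    ```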

  11. A Scheme to Optimize Flow Routing and Polling Switch Selection of Software Defined Networks

    PubMed Central

    Chen, Huan; Li, Lemin; Ren, Jing; Wang, Yang; Zhao, Yangming; Wang, Xiong; Wang, Sheng; Xu, Shizhong

    2015-01-01

    This paper aims at minimizing the communication cost for collecting flow information in Software Defined Networks (SDN). Since the flow-based information collection method requires too much communication cost, and the recently proposed switch-based method cannot benefit from controlling flow routing, we propose jointly optimizing flow routing and polling switch selection to reduce the communication cost. To this end, the joint optimization problem is first formulated as an Integer Linear Programming (ILP) model. Since the ILP model is intractable in large networks, we also design an optimal algorithm for the multi-rooted tree topology and an efficient heuristic algorithm for general topologies. Extensive simulations show that our method can save up to 55.76% of the communication cost compared with the state-of-the-art switch-based scheme. PMID:26690571

  12. Optimizing Experimental Designs Relative to Costs and Effect Sizes.

    ERIC Educational Resources Information Center

    Headrick, Todd C.; Zumbo, Bruno D.

    A general model is derived for the purpose of efficiently allocating integral numbers of units in multi-level designs given prespecified power levels. The derivation of the model is based on a constrained optimization problem that maximizes a general form of a ratio of expected mean squares subject to a budget constraint. This model provides more…

  13. Optimization Under Uncertainty of Site-Specific Turbine Configurations

    NASA Astrophysics Data System (ADS)

    Quick, J.; Dykes, K.; Graf, P.; Zahle, F.

    2016-09-01

    Uncertainty affects many aspects of wind energy plant performance and cost. In this study, we explore opportunities for site-specific turbine configuration optimization that accounts for uncertainty in the wind resource. As a demonstration, a simple empirical model for wind plant cost of energy is used in an optimization under uncertainty to examine how different risk appetites affect the optimal selection of a turbine configuration for sites of different wind resource profiles. If there is unusually high uncertainty in the site wind resource, the optimal turbine configuration diverges from the deterministic case and a generally more conservative design is obtained with increasing risk aversion on the part of the designer.
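
    A toy version of this optimization under uncertainty (my own model; the power curve, cost coefficients, and risk measure are illustrative assumptions) shows the reported effect, with the selected rating tending to shrink as risk aversion grows:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    wind = rng.normal(8.0, 1.5, 10_000)     # uncertain site resource (m/s)

    def lcoe(rating, wind):
        capital = 1.0 + 0.35 * rating                   # bigger machine, higher cost
        energy = np.minimum(0.005 * wind**3, rating)    # cubic power curve, capped
        return capital / energy

    ratings = np.linspace(1.0, 5.0, 41)
    for lam in (0.0, 0.5, 2.0):                         # increasing risk aversion
        scores = [lcoe(r, wind).mean() + lam * lcoe(r, wind).std() for r in ratings]
        print(lam, "-> chosen rating", ratings[int(np.argmin(scores))])
    ```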

  14. Optimizing cost-efficiency in mean exposure assessment - cost functions reconsidered

    PubMed Central

    2011-01-01

    Background Reliable exposure data is a vital concern in medical epidemiology and intervention studies. The present study addresses the needs of the medical researcher to spend monetary resources devoted to exposure assessment with an optimal cost-efficiency, i.e. obtain the best possible statistical performance at a specified budget. A few previous studies have suggested mathematical optimization procedures based on very simple cost models; this study extends the methodology to cover even non-linear cost scenarios. Methods Statistical performance, i.e. efficiency, was assessed in terms of the precision of an exposure mean value, as determined in a hierarchical, nested measurement model with three stages. Total costs were assessed using a corresponding three-stage cost model, allowing costs at each stage to vary non-linearly with the number of measurements according to a power function. Using these models, procedures for identifying the optimally cost-efficient allocation of measurements under a constrained budget were developed, and applied to 225 scenarios combining different sizes of unit costs, cost function exponents, and exposure variance components. Results Explicit mathematical rules for identifying optimal allocation could be developed when cost functions were linear, while non-linear cost functions implied that parts of or the entire optimization procedure had to be carried out using numerical methods. For many of the 225 scenarios, the optimal strategy consisted of measuring on only one occasion from each of as many subjects as allowed by the budget. Significant deviations from this principle occurred if costs for recruiting subjects were large compared to costs for setting up measurement occasions, and, at the same time, the between-subjects to within-subject variance ratio was small. In these cases, non-linearities had a profound influence on the optimal allocation and on the eventual size of the exposure data set. Conclusions The analysis procedures developed in the present study can be used for informed design of exposure assessment strategies, provided that data are available on exposure variability and the costs of collecting and processing data. The present shortage of empirical evidence on costs and appropriate cost functions, however, impedes general conclusions on optimal exposure measurement strategies in different epidemiologic scenarios. PMID:21600023
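
    A numerical version of the allocation problem is easy to set up and, for variance ratios like the one assumed here, reproduces the headline finding (one measurement on as many subjects as the budget allows). All parameter values below are illustrative assumptions:

    ```python
    # Choose subjects n and measurements-per-subject k to minimize the variance of
    # the estimated mean, var = sb2/n + sw2/(n*k), under a non-linear budget.
    sb2, sw2 = 1.0, 4.0            # between- and within-subject variance components
    c_subj, c_meas = 50.0, 10.0    # unit costs: recruit a subject / take one measurement
    a, b = 1.0, 0.8                # cost-function exponents (b < 1: volume discount)
    budget = 2000.0

    feasible = ((sb2 / n + sw2 / (n * k), n, k)
                for n in range(1, 200) for k in range(1, 20)
                if c_subj * n**a + c_meas * (n * k)**b <= budget)
    var, n, k = min(feasible)
    print("measure %d subject(s) %d time(s) each; var(mean) = %.4f" % (n, k, var))
    ```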

  15. Optimizing cost-efficiency in mean exposure assessment--cost functions reconsidered.

    PubMed

    Mathiassen, Svend Erik; Bolin, Kristian

    2011-05-21

    Reliable exposure data is a vital concern in medical epidemiology and intervention studies. The present study addresses the needs of the medical researcher to spend monetary resources devoted to exposure assessment with an optimal cost-efficiency, i.e. obtain the best possible statistical performance at a specified budget. A few previous studies have suggested mathematical optimization procedures based on very simple cost models; this study extends the methodology to cover even non-linear cost scenarios. Statistical performance, i.e. efficiency, was assessed in terms of the precision of an exposure mean value, as determined in a hierarchical, nested measurement model with three stages. Total costs were assessed using a corresponding three-stage cost model, allowing costs at each stage to vary non-linearly with the number of measurements according to a power function. Using these models, procedures for identifying the optimally cost-efficient allocation of measurements under a constrained budget were developed, and applied to 225 scenarios combining different sizes of unit costs, cost function exponents, and exposure variance components. Explicit mathematical rules for identifying optimal allocation could be developed when cost functions were linear, while non-linear cost functions implied that parts of or the entire optimization procedure had to be carried out using numerical methods. For many of the 225 scenarios, the optimal strategy consisted of measuring on only one occasion from each of as many subjects as allowed by the budget. Significant deviations from this principle occurred if costs for recruiting subjects were large compared to costs for setting up measurement occasions, and, at the same time, the between-subjects to within-subject variance ratio was small. In these cases, non-linearities had a profound influence on the optimal allocation and on the eventual size of the exposure data set. The analysis procedures developed in the present study can be used for informed design of exposure assessment strategies, provided that data are available on exposure variability and the costs of collecting and processing data. The present shortage of empirical evidence on costs and appropriate cost functions, however, impedes general conclusions on optimal exposure measurement strategies in different epidemiologic scenarios.

  16. A Rapid Aerodynamic Design Procedure Based on Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Rai, Man Mohan

    2001-01-01

    An aerodynamic design procedure that uses neural networks to model the functional behavior of the objective function in design space has been developed. This method incorporates several improvements to an earlier method that employed a strategy called parameter-based partitioning of the design space in order to reduce the computational costs associated with design optimization. As with the earlier method, the current method uses a sequence of response surfaces to traverse the design space in search of the optimal solution. The new method yields significant reductions in computational costs by using composite response surfaces with better generalization capabilities and by exploiting synergies between the optimization method and the simulation codes used to generate the training data. These reductions in design optimization costs are demonstrated for a turbine airfoil design study where a generic shape is evolved into an optimal airfoil.
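
    The response-surface idea itself is compact: fit a cheap surrogate to a few expensive objective evaluations, optimize the surrogate, evaluate the true objective at the surrogate optimum, and refit. A schematic sketch (my construction; a polynomial stands in for the paper's neural network, and the objective is a stand-in for a flow solve):

    ```python
    import numpy as np

    expensive = lambda x: (x - 0.7)**2 + 0.1 * np.sin(12 * x)   # stand-in for a CFD run

    x_train = list(np.linspace(0.0, 1.0, 5))
    y_train = [expensive(x) for x in x_train]
    grid = np.linspace(0.0, 1.0, 401)

    for _ in range(6):                          # a few refinement cycles
        coeffs = np.polyfit(x_train, y_train, deg=min(4, len(x_train) - 1))
        x_new = grid[int(np.argmin(np.polyval(coeffs, grid)))]   # surrogate optimum
        x_train.append(float(x_new)); y_train.append(expensive(x_new))

    print("best design found:", x_train[int(np.argmin(y_train))])
    ```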

  17. Using optimal transport theory to estimate transition probabilities in metapopulation dynamics

    USGS Publications Warehouse

    Nichols, Jonathan M.; Spendelow, Jeffrey A.; Nichols, James D.

    2017-01-01

    This work considers the estimation of transition probabilities associated with populations moving among multiple spatial locations based on numbers of individuals at each location at two points in time. The problem is generally underdetermined as there exists an extremely large number of ways in which individuals can move from one set of locations to another. A unique solution therefore requires a constraint. The theory of optimal transport provides such a constraint in the form of a cost function, to be minimized in expectation over the space of possible transition matrices. We demonstrate the optimal transport approach on marked bird data and compare to the probabilities obtained via maximum likelihood estimation based on marked individuals. It is shown that by choosing the squared Euclidean distance as the cost, the estimated transition probabilities compare favorably to those obtained via maximum likelihood with marked individuals. Other implications of this cost are discussed, including the ability to accurately interpolate the population's spatial distribution at unobserved points in time and the more general relationship between the cost and minimum transport energy.
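
    The estimation reduces to a classic transportation linear program: choose nonnegative flows between sites that match the two sets of counts and minimize total squared-Euclidean cost. A sketch with SciPy's LP solver (site coordinates and counts are made-up illustrations):

    ```python
    import numpy as np
    from scipy.optimize import linprog

    sites = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])   # site coordinates
    n1 = np.array([100.0, 50.0, 50.0])                       # counts at time 1
    n2 = np.array([60.0, 80.0, 60.0])                        # counts at time 2
    cost = ((sites[:, None, :] - sites[None, :, :])**2).sum(-1).ravel()

    k = len(sites)
    A_eq, b_eq = [], []
    for i in range(k):   # row sums: everyone at site i goes somewhere
        row = np.zeros(k * k); row[i * k:(i + 1) * k] = 1; A_eq.append(row); b_eq.append(n1[i])
    for j in range(k):   # column sums: arrivals must match time-2 counts
        col = np.zeros(k * k); col[j::k] = 1; A_eq.append(col); b_eq.append(n2[j])

    res = linprog(cost, A_eq=np.array(A_eq), b_eq=b_eq, bounds=(0, None))
    T = res.x.reshape(k, k) / n1[:, None]     # transition probabilities
    print(np.round(T, 3))
    ```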

  18. GMOtrack: generator of cost-effective GMO testing strategies.

    PubMed

    Novak, Petra Kralj; Gruden, Kristina; Morisset, Dany; Lavrač, Nada; Štebih, Dejan; Rotter, Ana; Žel, Jana

    2009-01-01

    Commercialization of numerous genetically modified organisms (GMOs) has already been approved worldwide, and several additional GMOs are in the approval process. Many countries have adopted legislation to deal with GMO-related issues such as food safety, environmental concerns, and consumers' right of choice, making GMO traceability a necessity. The growing extent of GMO testing makes it important to study optimal GMO detection and identification strategies. This paper formally defines the problem of routine laboratory-level GMO tracking as a cost optimization problem, thus proposing a shift from "the same strategy for all samples" to "sample-centered GMO testing strategies." An algorithm (GMOtrack) for finding optimal two-phase (screening-identification) testing strategies is proposed. The advantages of cost optimization with increasing GMO presence on the market are demonstrated, showing that optimization approaches to analytic GMO traceability can result in major cost reductions. The optimal testing strategies are laboratory-dependent, as the costs depend on prior probabilities of local GMO presence, which are exemplified on food and feed samples. The proposed GMOtrack approach, publicly available under the terms of the General Public License, can be extended to other domains where complex testing is involved, such as safety and quality assurance in the food supply chain.

  19. Optimization Under Uncertainty of Site-Specific Turbine Configurations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quick, J.; Dykes, K.; Graf, P.

    Uncertainty affects many aspects of wind energy plant performance and cost. In this study, we explore opportunities for site-specific turbine configuration optimization that accounts for uncertainty in the wind resource. As a demonstration, a simple empirical model for wind plant cost of energy is used in an optimization under uncertainty to examine how different risk appetites affect the optimal selection of a turbine configuration for sites of different wind resource profiles. Lastly, if there is unusually high uncertainty in the site wind resource, the optimal turbine configuration diverges from the deterministic case and a generally more conservative design is obtained with increasing risk aversion on the part of the designer.

  20. Optimization under Uncertainty of Site-Specific Turbine Configurations: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quick, Julian; Dykes, Katherine; Graf, Peter

    Uncertainty affects many aspects of wind energy plant performance and cost. In this study, we explore opportunities for site-specific turbine configuration optimization that accounts for uncertainty in the wind resource. As a demonstration, a simple empirical model for wind plant cost of energy is used in an optimization under uncertainty to examine how different risk appetites affect the optimal selection of a turbine configuration for sites of different wind resource profiles. If there is unusually high uncertainty in the site wind resource, the optimal turbine configuration diverges from the deterministic case and a generally more conservative design is obtained with increasing risk aversion on the part of the designer.

  21. Optimization Under Uncertainty of Site-Specific Turbine Configurations

    DOE PAGES

    Quick, J.; Dykes, K.; Graf, P.; ...

    2016-10-03

    Uncertainty affects many aspects of wind energy plant performance and cost. In this study, we explore opportunities for site-specific turbine configuration optimization that accounts for uncertainty in the wind resource. As a demonstration, a simple empirical model for wind plant cost of energy is used in an optimization under uncertainty to examine how different risk appetites affect the optimal selection of a turbine configuration for sites of different wind resource profiles. Lastly, if there is unusually high uncertainty in the site wind resource, the optimal turbine configuration diverges from the deterministic case and a generally more conservative design is obtained with increasing risk aversion on the part of the designer.

  22. Estimation of optimal educational cost per medical student.

    PubMed

    Yang, Eunbae B; Lee, Seunghee

    2009-09-01

    This study aims to estimate the optimal educational cost per medical student. A private medical college in Seoul was targeted by the study, and its 2006 learning environment and data from the 2003~2006 budget and settlement were carefully analyzed. Through interviews with 3 medical professors and 2 experts in the economics of education, the study attempted to establish the educational cost estimation model, which yields an empirically computed estimate of the optimal cost per student in medical college. The estimation model was based primarily upon the educational cost which consisted of direct educational costs (47.25%), support costs (36.44%), fixed asset purchases (11.18%) and costs for student affairs (5.14%). These results indicate that the optimal cost per student is approximately 20,367,000 won each semester; thus, training a doctor costs 162,936,000 won over 4 years. Consequently, we inferred that the tuition levels of a local medical college or professional medical graduate school cover one quarter or one-half of the per-student cost. The findings of this study do not necessarily imply an increase in medical college tuition; the estimation of the per-student cost for training to be a doctor is one matter, and the issue of who should bear this burden is another. For further study, we should consider the college type and its location for general application of the estimation method, in addition to living expenses and opportunity costs.

  23. Sedimentation and the Economics of Selecting an Optimum Reservoir Size

    NASA Astrophysics Data System (ADS)

    Miltz, David; White, David C.

    1987-08-01

    This paper attempts to develop an easily reproducible methodology for the economic selection of an optimal reservoir size given an annual sedimentation rate. The optimal capacity is that at which the marginal cost of constructing additional storage capacity is equal to the dredging costs avoided by having that additional capacity available to store sediment. The cost implications of misestimating dredging costs, construction costs, and sediment delivery rates are investigated. In general, it is shown that oversizing is a rational response to uncertainty in the estimation of parameters. The sensitivity of the results to alternative discount rates is also discussed. The theoretical discussion is illustrated with a case study drawn from Highland Silver Lake in southwestern Illinois.

  24. Stochastic optimal control as non-equilibrium statistical mechanics: calculus of variations over density and current

    NASA Astrophysics Data System (ADS)

    Chernyak, Vladimir Y.; Chertkov, Michael; Bierkens, Joris; Kappen, Hilbert J.

    2014-01-01

    In stochastic optimal control (SOC) one minimizes the average cost-to-go, which consists of the cost-of-control (amount of effort), the cost-of-space (where one wants the system to be) and the target cost (where one wants the system to arrive), for a system participating in forced and controlled Langevin dynamics. We extend the SOC problem by introducing an additional cost-of-dynamics, characterized by a vector potential. We propose a derivation of the generalized gauge-invariant Hamilton-Jacobi-Bellman equation as a variation over density and current, suggest a hydrodynamic interpretation, and discuss examples, e.g., ergodic control of a particle-within-a-circle, illustrating non-equilibrium space-time complexity.

  25. Rethinking FCV/BEV Vehicle Range: A Consumer Value Trade-off Perspective

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Zhenhong; Greene, David L

    2010-01-01

    The driving range of FCV and BEV is often analyzed by simple analogy to conventional vehicles without proper consideration of differences in energy storage technology, infrastructure, and market context. This study proposes a coherent framework to optimize the driving range by minimizing costs associated with range, including upfront storage cost, fuel availability cost for FCV and range anxiety cost for BEV. It is shown that the conventional assumption of FCV range can lead to overestimation of FCV market barrier by over $8000 per vehicle in the near-term market. Such exaggeration of FCV market barrier can be avoided with range optimization. Compared to the optimal BEV range, the 100-mile range chosen by automakers appears to be near optimal for modest drivers, but far less than optimal for frequent drivers. With range optimization, the probability that the BEV is unable to serve a long-trip day is generally less than 5%, depending on driving intensity. Range optimization can help diversify BEV products for different consumers. It is also demonstrated and argued that the FCV/BEV range should adapt to the technology and infrastructure developments.

  26. Optimal short-range trajectories for helicopters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Slater, G.L.; Erzberger, H.

    1982-12-01

    An optimal flight path algorithm using a simplified altitude state model and an a priori climb-cruise-descent flight profile was developed and applied to determine minimum-fuel and minimum-cost trajectories for a helicopter flying a fixed-range trajectory. In addition, a method was developed for obtaining a performance model in simplified form which is based on standard flight manual data and which is applicable to the computation of optimal trajectories. The entire performance optimization algorithm is simple enough that on-line trajectory optimization is feasible with a relatively small computer. The helicopter model used is the Sikorsky S-61N. The results show that for this vehicle the optimal flight path and optimal cruise altitude can represent a 10% fuel saving on a minimum-fuel trajectory. The optimal trajectories show considerable variability because of helicopter weight, ambient winds, and the relative cost trade-off between time and fuel. In general, reasonable variations from the optimal velocities and cruise altitudes do not significantly degrade the optimal cost. For fuel-optimal trajectories, the optimum cruise altitude varies from the maximum (12,000 ft) to the minimum (0 ft) depending on helicopter weight.

  27. Optimal design of solidification processes

    NASA Technical Reports Server (NTRS)

    Dantzig, Jonathan A.; Tortorelli, Daniel A.

    1991-01-01

    An optimal design algorithm is presented for the analysis of general solidification processes, and is demonstrated for the growth of GaAs crystals in a Bridgman furnace. The system is optimal in the sense that the prespecified temperature distribution in the solidifying materials is obtained to maximize product quality. The optimization uses traditional numerical programming techniques which require the evaluation of cost and constraint functions and their sensitivities. The finite element method is incorporated to analyze the crystal solidification problem, evaluate the cost and constraint functions, and compute the sensitivities. These techniques are demonstrated in the crystal growth application by determining an optimal furnace wall temperature distribution to obtain the desired temperature profile in the crystal, and hence to maximize the crystal's quality. Several numerical optimization algorithms are studied to determine the proper convergence criteria, effective 1-D search strategies, appropriate forms of the cost and constraint functions, etc. In particular, we incorporate the conjugate gradient and quasi-Newton methods for unconstrained problems. The efficiency and effectiveness of each algorithm is presented in the example problem.

  28. Premium cost optimization of operational and maintenance of green building in Indonesia using life cycle assessment method

    NASA Astrophysics Data System (ADS)

    Latief, Yusuf; Berawi, Mohammed Ali; Basten, Van; Budiman, Rachmat; Riswanto

    2017-06-01

    Buildings have a big impact on environmental development. There are three general motives in building: economy, society, and environment. Total completed building construction in Indonesia increased by 116% during 2009 to 2011, and energy consumption increased by 11% within the last three years. In fact, 70% of energy consumption is used for electricity needs in commercial buildings, which has led to an increase in greenhouse gas emissions of 25%. The green building life cycle is known for its high upfront cost in Indonesia. The optimization in this research aims to improve building performance through several green concept alternatives. The research methodology is a mixed method of qualitative and quantitative approaches through questionnaire surveys and a case study. The success of the optimization functions in the existing green building is assessed for the operational and maintenance phase with the Life Cycle Assessment Method. Optimization results were chosen based on the largest life cycle efficiency of the building and the most cost-effective payback.

  29. Optimizing Teleportation Cost in Distributed Quantum Circuits

    NASA Astrophysics Data System (ADS)

    Zomorodi-Moghadam, Mariam; Houshmand, Mahboobeh; Houshmand, Monireh

    2018-03-01

    The presented work provides a procedure for optimizing the communication cost of a distributed quantum circuit (DQC) in terms of the number of qubit teleportations. Because of technology limitations which do not allow large quantum computers to work as a single processing element, distributed quantum computation is an appropriate solution to overcome this difficulty. Previous studies have applied ad-hoc solutions to distribute a quantum system for special cases and applications. In this study, a general approach is proposed to optimize the number of teleportations for a DQC consisting of two spatially separated and long-distance quantum subsystems. To this end, different configurations of locations for executing gates whose qubits are in distinct subsystems are considered and for each of these configurations, the proposed algorithm is run to find the minimum number of required teleportations. Finally, the configuration which leads to the minimum number of teleportations is reported. The proposed method can be used as an automated procedure to find the configuration with the optimal communication cost for the DQC. This cost can be used as a basic measure of the communication cost for future works in the distributed quantum circuits.

  30. Optimal dual-fuel propulsion for minimum inert weight or minimum fuel cost

    NASA Technical Reports Server (NTRS)

    Martin, J. A.

    1973-01-01

    An analytical investigation of single-stage vehicles with multiple propulsion phases has been conducted with the phasing optimized to minimize a general cost function. Some results are presented for linearized sizing relationships which indicate that single-stage-to-orbit, dual-fuel rocket vehicles can have lower inert weight than similar single-fuel rocket vehicles and that the advantage of dual-fuel vehicles can be increased if a dual-fuel engine is developed. The results also indicate that the optimum split can vary considerably with the choice of cost function to be minimized.

  31. Computer-Aided Communication Satellite System Analysis and Optimization.

    ERIC Educational Resources Information Center

    Stagl, Thomas W.; And Others

    Various published computer programs for fixed/broadcast communication satellite system synthesis and optimization are discussed. The rationale for selecting General Dynamics/Convair's Satellite Telecommunication Analysis and Modeling Program (STAMP) in modified form to aid in the system costing and sensitivity analysis work in the Program on…

  32. Ranked set sampling: cost and optimal set size.

    PubMed

    Nahhas, Ramzi W; Wolfe, Douglas A; Chen, Haiying

    2002-12-01

    McIntyre (1952, Australian Journal of Agricultural Research 3, 385-390) introduced ranked set sampling (RSS) as a method for improving estimation of a population mean in settings where sampling and ranking of units from the population are inexpensive when compared with actual measurement of the units. Two of the major factors in the usefulness of RSS are the set size and the relative costs of the various operations of sampling, ranking, and measurement. In this article, we consider ranking error models and cost models that enable us to assess the effect of different cost structures on the optimal set size for RSS. For reasonable cost structures, we find that the optimal RSS set sizes are generally larger than had been anticipated previously. These results will provide a useful tool for determining whether RSS is likely to lead to an improvement over simple random sampling in a given setting and, if so, what RSS set size is best to use in this case.
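
    The precision gain that makes RSS worth its ranking cost is easy to demonstrate by simulation. This Monte Carlo sketch (my construction, with perfect ranking and ranking cost ignored) compares the variance of the RSS and SRS mean estimators at an equal number of measured units:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    m, cycles, reps = 4, 5, 20_000          # set size, cycles, simulation replicates

    def rss_mean():
        # each cycle: draw m sets of m units, measure the i-th judged order statistic
        obs = [np.sort(rng.normal(0, 1, m))[i] for _ in range(cycles) for i in range(m)]
        return np.mean(obs)

    def srs_mean():
        return rng.normal(0, 1, m * cycles).mean()   # same number of measured units

    print("var RSS:", np.var([rss_mean() for _ in range(reps)]))
    print("var SRS:", np.var([srs_mean() for _ in range(reps)]))
    ```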

  33. RANKED SET SAMPLING FOR ECOLOGICAL RESEARCH: ACCOUNTING FOR THE TOTAL COSTS OF SAMPLING

    EPA Science Inventory

    Researchers aim to design environmental studies that optimize precision and allow for generalization of results, while keeping the costs of associated field and laboratory work at a reasonable level. Ranked set sampling is one method to potentially increase precision and reduce ...

  34. Mathematical model of highways network optimization

    NASA Astrophysics Data System (ADS)

    Sakhapov, R. L.; Nikolaeva, R. V.; Gatiyatullin, M. H.; Makhmutov, M. M.

    2017-12-01

    The article deals with the issue of highway network design. Studies show that the main requirement road transport places on the road network is to ensure the realization of all the transport links it serves, at the least possible cost. The goal of optimizing the network of highways is to increase the efficiency of transport. It is necessary to take into account a large number of factors, which makes it difficult to quantify and qualify their impact on the road network. In this paper, we propose building an optimal variant for locating the road network on the basis of a mathematical model. The article defines the criteria for optimality and objective functions that reflect the requirements for the road network. The condition most fully satisfying optimality is the minimization of road and transport costs. We adopted this indicator as the criterion of optimality in the economic-mathematical model of a network of highways. Studies have shown that each settlement point in the optimal road network is connected with all other corresponding points in the directions providing the least financial cost necessary to move passengers and cargo from this point to the other corresponding points. The article presents general principles for constructing an optimal network of roads.

  35. Two-phase strategy of controlling motor coordination determined by task performance optimality.

    PubMed

    Shimansky, Yury P; Rand, Miya K

    2013-02-01

    A quantitative model of optimal coordination between hand transport and grip aperture has been derived in our previous studies of reach-to-grasp movements without utilizing explicit knowledge of the optimality criterion or motor plant dynamics. The model's utility for experimental data analysis has been demonstrated. Here we show how to generalize this model for a broad class of reaching-type, goal-directed movements. The model allows for measuring the variability of motor coordination and studying its dependence on movement phase. The experimentally found characteristics of that dependence imply that execution noise is low and does not affect motor coordination significantly. From those characteristics it is inferred that the cost of neural computations required for information acquisition and processing is included in the criterion of task performance optimality as a function of precision demand for state estimation and decision making. The precision demand is an additional optimized control variable that regulates the amount of neurocomputational resources activated dynamically. It is shown that an optimal control strategy in this case comprises two different phases. During the initial phase, the cost of neural computations is significantly reduced at the expense of reducing the demand for their precision, which results in speed-accuracy tradeoff violation and significant inter-trial variability of motor coordination. During the final phase, neural computations and thus motor coordination are considerably more precise to reduce the cost of errors in making a contact with the target object. The generality of the optimal coordination model and the two-phase control strategy is illustrated on several diverse examples.

  36. Optimal structural design of the midship of a VLCC based on the strategy integrating SVM and GA

    NASA Astrophysics Data System (ADS)

    Sun, Li; Wang, Deyu

    2012-03-01

    In this paper a hybrid process of modeling and optimization, which integrates a support vector machine (SVM) and genetic algorithm (GA), was introduced to reduce the high time cost in structural optimization of ships. SVM, which is rooted in statistical learning theory and an approximate implementation of the method of structural risk minimization, can provide a good generalization performance in metamodeling the input-output relationship of real problems and consequently cuts down on high time cost in the analysis of real problems, such as FEM analysis. The GA, as a powerful optimization technique, possesses remarkable advantages for the problems that can hardly be optimized with common gradient-based optimization methods, which makes it suitable for optimizing models built by SVM. Based on the SVM-GA strategy, optimization of structural scantlings in the midship of a very large crude carrier (VLCC) ship was carried out according to the direct strength assessment method in common structural rules (CSR), which eventually demonstrates the high efficiency of SVM-GA in optimizing the ship structural scantlings under heavy computational complexity. The time cost of this optimization with SVM-GA has been sharply reduced, many more loops have been processed within a small amount of time and the design has been improved remarkably.
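
    The SVM-plus-GA strategy can be sketched in a few lines: train an SVM regressor once on expensive evaluations, then let a GA search the cheap surrogate. Everything below is an illustrative toy (sklearn's SVR standing in for the paper's metamodel, a quadratic standing in for FEM analysis, and a deliberately minimal GA with selection and mutation only):

    ```python
    import numpy as np
    from sklearn.svm import SVR

    fem = lambda X: ((X - 0.3)**2).sum(axis=1)      # stand-in for a FEM strength analysis
    rng = np.random.default_rng(0)

    X = rng.uniform(0, 1, (40, 3)); y = fem(X)      # one batch of expensive samples
    surrogate = SVR(kernel="rbf", C=100.0).fit(X, y)

    pop = rng.uniform(0, 1, (30, 3))                # GA searches the cheap surrogate
    for _ in range(50):
        fit = surrogate.predict(pop)
        parents = pop[np.argsort(fit)[:10]]                      # selection
        pop = np.clip(parents[rng.integers(0, 10, 30)] +
                      rng.normal(0, 0.05, (30, 3)), 0, 1)        # mutation (no crossover)

    best = pop[np.argmin(surrogate.predict(pop))]
    print(best, fem(best[None, :]))                 # check the pick against the true model
    ```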

  37. Optimal cost-effective designs of Phase II proof of concept trials and associated go-no go decisions.

    PubMed

    Chen, Cong; Beckman, Robert A

    2009-01-01

    This manuscript discusses optimal cost-effective designs for Phase II proof of concept (PoC) trials. Unlike a confirmatory registration trial, a PoC trial is exploratory in nature, and sponsors of such trials have the liberty to choose the type I error rate and the power. The decision is largely driven by the perceived probability of having a truly active treatment per patient exposure (a surrogate measure to development cost), which is naturally captured in an efficiency score to be defined in this manuscript. Optimization of the score function leads to type I error rate and power (and therefore sample size) for the trial that is most cost-effective. This in turn leads to cost-effective go-no go criteria for development decisions. The idea is applied to derive optimal trial-level, program-level, and franchise-level design strategies. The study is not meant to provide any general conclusion because the settings used are largely simplified for illustrative purposes. However, through the examples provided herein, a reader should be able to gain useful insight into these design problems and apply them to the design of their own PoC trials.

  38. Constrained Optimization of Average Arrival Time via a Probabilistic Approach to Transport Reliability

    PubMed Central

    Namazi-Rad, Mohammad-Reza; Dunbar, Michelle; Ghaderi, Hadi; Mokhtarian, Payam

    2015-01-01

    To achieve greater transit-time reduction and improvement in reliability of transport services, there is an increasing need to assist transport planners in understanding the value of punctuality; i.e. the potential improvements, not only to service quality and the consumer but also to the actual profitability of the service. In order for this to be achieved, it is important to understand the network-specific aspects that affect both the ability to decrease transit-time, and the associated cost-benefit of doing so. In this paper, we outline a framework for evaluating the effectiveness of proposed changes to average transit-time, so as to determine the optimal choice of average arrival time subject to desired punctuality levels whilst simultaneously minimizing operational costs. We model the service transit-time variability using a truncated probability density function, and simultaneously compare the trade-off between potential gains and increased service costs, for several commonly employed cost-benefit functions of general form. We formulate this problem as a constrained optimization problem to determine the optimal choice of average transit time, so as to increase the level of service punctuality, whilst simultaneously ensuring a minimum level of cost-benefit to the service operator. PMID:25992902
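
    A stripped-down version of the constrained choice (my formulation, using a plain rather than truncated normal transit-time distribution) picks the cheapest average transit time that still meets a punctuality floor:

    ```python
    import numpy as np
    from scipy.stats import norm

    deadline, sigma, p = 60.0, 5.0, 0.95     # minutes; transit spread; punctuality target
    means = np.linspace(40.0, 60.0, 201)     # candidate average transit times
    costs = 100.0 / means                    # a faster average service costs more to run
    punctual = norm.cdf(deadline, loc=means, scale=sigma)   # P(arrive by the deadline)

    mask = punctual >= p
    best = means[mask][np.argmin(costs[mask])]
    print("cheapest feasible mean transit:", best)   # ~ deadline - 1.645 * sigma
    ```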

  39. Search for a new economic optimum in the management of household waste in Tiaret city (western Algeria).

    PubMed

    Asnoune, M; Abdelmalek, F; Djelloul, A; Mesghouni, K; Addou, A

    2016-11-01

    In household waste matters, the objective is always to conceive an optimal integrated system of management, where the terms 'optimal' and 'integrated' refer generally to a combination between the waste and the techniques of treatment, valorization and elimination, which often aim at the lowest possible cost. The management optimization of household waste using operational methodologies has not yet been applied in any Algerian district. We proposed an optimization of the valorization of household waste in Tiaret city in order to lower the total management cost. The methodology is modelled by non-linear mathematical equations using 28 decision variables and aims to optimally assign the seven components of household waste (i.e. plastic, cardboard paper, glass, metals, textiles, organic matter and others) among four centres of treatment [i.e. waste to energy (WTE) or incineration, composting (CM), anaerobic digestion (ANB) or methanization, and landfilling (LF)]. The analysis of the obtained results shows that the variation of total cost is mainly due to the assignment of waste among the treatment centres and that certain treatments cannot be applied to household waste in Tiaret city. On the other hand, certain techniques of valorization have been favoured by the optimization. In this work, four scenarios have been proposed to optimize the system cost, where the modelling shows that the mixed scenario (the three treatment centres CM, ANB, LF) suggests a better combination of technologies of waste treatment, with an optimal solution for the system (cost and profit).

  40. A novel medical information management and decision model for uncertain demand optimization.

    PubMed

    Bi, Ya

    2015-01-01

    Accurately planning the procurement volume is an effective measure for controlling the medicine inventory cost, but under uncertain demand it is difficult to make accurate decisions on procurement volume. For biomedicines sensitive to time and seasonal demand, uncertain demand fitted by the fuzzy mathematics method is clearly better handled than by general random distribution functions. This study aims to establish a novel medical information management and decision model for uncertain demand optimization. A novel optimal management and decision model under uncertain demand is presented based on fuzzy mathematics and a new comprehensively improved particle swarm algorithm. The optimal management and decision model can effectively reduce the medicine inventory cost. The proposed improved particle swarm optimization is a simple and effective algorithm to improve the fuzzy inference and hence effectively reduce the calculation complexity of the optimal management and decision model. Therefore the new model can be used for accurate decisions on procurement volume under uncertain demand.

  1. XY vs X Mixer in Quantum Alternating Operator Ansatz for Optimization Problems with Constraints

    NASA Technical Reports Server (NTRS)

    Wang, Zhihui; Rubin, Nicholas; Rieffel, Eleanor G.

    2018-01-01

    The Quantum Approximate Optimization Algorithm, further generalized as the Quantum Alternating Operator Ansatz (QAOA), is a family of algorithms for combinatorial optimization problems. It is a leading candidate to run on emerging universal quantum computers to gain insight into quantum heuristics. In constrained optimization, penalties are often introduced so that the ground state of the cost Hamiltonian encodes the solution (a standard practice in quantum annealing). An alternative is to choose a mixing Hamiltonian such that the constraint corresponds to a constant of motion and the quantum evolution stays in the feasible subspace. Better algorithm performance is expected because the search space is much smaller. We consider problems with a constant Hamming weight as the constraint. We also compare different methods of generating the generalized W-state, which serves as a natural initial state for the Hamming-weight constraint. Using graph coloring as an example, we compare the performance of the XY model as a mixer that preserves the Hamming weight against that of adding a penalty term to the cost Hamiltonian.
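
    The Hamming-weight-preserving property of the XY mixer is easy to check numerically; a small sketch (ring mixer on four qubits, assumed illustration rather than the paper's circuits):

```python
import numpy as np
from functools import reduce

# Check that the XY ring mixer commutes with the Hamming-weight operator,
# while the transverse-field X mixer does not.
I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Y = np.array([[0., -1j], [1j, 0.]])
Z = np.diag([1., -1.])

def embed(single, site, n):
    """Single-qubit operator acting on `site` of an n-qubit register."""
    return reduce(np.kron, [single if k == site else I2 for k in range(n)])

n = 4
xy_mixer = sum(0.5 * (embed(X, i, n) @ embed(X, (i + 1) % n, n)
                      + embed(Y, i, n) @ embed(Y, (i + 1) % n, n))
               for i in range(n))
x_mixer = sum(embed(X, i, n) for i in range(n))
# Hamming-weight (number) operator: sum_i (I - Z_i) / 2
weight = sum(0.5 * (np.eye(2 ** n) - embed(Z, i, n)) for i in range(n))

print("||[XY, W]|| =", np.linalg.norm(xy_mixer @ weight - weight @ xy_mixer))
print("||[X,  W]|| =", np.linalg.norm(x_mixer @ weight - weight @ x_mixer))
# The first norm is ~0 (weight preserved); the second is clearly nonzero.
```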

  2. An Optimization Principle for Deriving Nonequilibrium Statistical Models of Hamiltonian Dynamics

    NASA Astrophysics Data System (ADS)

    Turkington, Bruce

    2013-08-01

    A general method for deriving closed reduced models of Hamiltonian dynamical systems is developed using techniques from optimization and statistical estimation. Given a vector of resolved variables, selected to describe the macroscopic state of the system, a family of quasi-equilibrium probability densities on phase space corresponding to the resolved variables is employed as a statistical model, and the evolution of the mean resolved vector is estimated by optimizing over paths of these densities. Specifically, a cost function is constructed to quantify the lack-of-fit to the microscopic dynamics of any feasible path of densities from the statistical model; it is an ensemble-averaged, weighted, squared-norm of the residual that results from submitting the path of densities to the Liouville equation. The path that minimizes the time integral of the cost function determines the best-fit evolution of the mean resolved vector. The closed reduced equations satisfied by the optimal path are derived by Hamilton-Jacobi theory. When expressed in terms of the macroscopic variables, these equations have the generic structure of governing equations for nonequilibrium thermodynamics. In particular, the value function for the optimization principle coincides with the dissipation potential that defines the relation between thermodynamic forces and fluxes. The adjustable closure parameters in the best-fit reduced equations depend explicitly on the arbitrary weights that enter into the lack-of-fit cost function. Two particular model reductions are outlined to illustrate the general method. In each example the set of weights in the optimization principle contracts into a single effective closure parameter.
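
    A hedged transcription of the lack-of-fit functional described above (notation assumed: \( \rho_t \) a path in the quasi-equilibrium family, \( \{\cdot,\cdot\} \) the Poisson bracket, \( \|\cdot\|_W \) a weighted norm and \( \langle\cdot\rangle \) the ensemble average):

    \[
      \Sigma[\rho] \;=\; \int_0^T \Big\langle \big\| \, \partial_t \rho_t - \{H, \rho_t\} \, \big\|_W^2 \Big\rangle \, dt ,
    \]

    with the best-fit evolution of the mean resolved vector obtained by minimizing \( \Sigma \) over admissible paths \( \rho_t \), and the value function of this minimization playing the role of the dissipation potential.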

  3. Improving Learning Performance Through Rational Resource Allocation

    NASA Technical Reports Server (NTRS)

    Gratch, J.; Chien, S.; DeJong, G.

    1994-01-01

    This article shows how rational analysis can be used to minimize learning cost for a general class of statistical learning problems. We discuss the factors that influence learning cost and show that the problem of efficient learning can be cast as a resource optimization problem. Solutions found in this way can be significantly more efficient than the best solutions that do not account for these factors. We introduce a heuristic learning algorithm that approximately solves this optimization problem and document its performance improvements on synthetic and real-world problems.

  4. Green roof adoption in atlanta, georgia: the effects of building characteristics and subsidies on net private, public, and social benefits.

    PubMed

    Mullen, Jeffrey D; Lamsal, Madhur; Colson, Greg

    2013-10-01

    This research draws on and expands previous studies that have quantified the costs and benefits associated with conventional roofs versus green roofs. Using parameters from those studies to define alternative scenarios, we estimate from a private, public, and social perspective the costs and benefits of installing and maintaining an extensive green roof in Atlanta, GA. Results indicate net private benefits are a decreasing function of roof size and vary considerably across scenarios. In contrast, net public benefits are highly stable across scenarios, ranging from $32.49 to $32.90 m⁻². In addition, we evaluate two alternative subsidy regimes: (i) a general subsidy provided to every building that adopts a green roof and (ii) a targeted subsidy provided only to buildings for which net private benefits are negative but net public benefits are positive. In 6 of the 12 general subsidy scenarios the optimal public policy is not to offer a subsidy; in 5 scenarios the optimal subsidy rate is between $20 and $27 m⁻²; and in 1 scenario the optimal rate is $5 m⁻². The optimal rate with a targeted subsidy is between $20 and $27 m⁻² in 11 scenarios and no subsidy is optimal in the twelfth. In most scenarios, a significant portion of net public benefits are generated by buildings for which net private benefits are positive. This suggests a policy focused on information dissemination and technical assistance may be more cost-effective than direct subsidy payments.

  5. A Cost-Effectiveness Analysis of Tactical Satellites, High-Altitude Long-Endurance Airships, and High and Medium Altitude Unmanned Aerial Systems for ISR and Communication Missions

    DTIC Science & Technology

    2008-09-01

    Coverage rates will vary depending on the type of orbit. Orbits are generally optimized for the spacecraft mission.

  6. Real Time Optimal Control of Supercapacitor Operation for Frequency Response

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luo, Yusheng; Panwar, Mayank; Mohanpurkar, Manish

    2016-07-01

    Supercapacitors are gaining wider application in power systems due to their fast dynamic response. Utilizing supercapacitors through power electronics interfaces for power compensation is a proven, effective technique. For applications such as frequency restoration, however, the cost of supercapacitor maintenance as well as the energy loss in the power electronics interfaces must be addressed, and it is infeasible to use traditional optimization control methods to mitigate the impacts of frequent cycling. This paper proposes a Front End Controller (FEC) using Generalized Predictive Control featuring real-time receding optimization. The optimization constraints are based on cost and thermal management to enhance the utilization efficiency of supercapacitors. A rigorous mathematical derivation is conducted, and test results acquired from a Digital Real Time Simulator are provided to demonstrate effectiveness.

  7. Optimal cooperative time-fixed impulsive rendezvous

    NASA Technical Reports Server (NTRS)

    Mirfakhraie, Koorosh; Conway, Bruce A.; Prussing, John E.

    1988-01-01

    A method has been developed for determining optimal, i.e., minimum fuel, trajectories for the fixed-time cooperative rendezvous of two spacecraft. The method presently assumes that the vehicles perform a total of three impulsive maneuvers with each vehicle being active, that is, making at least one maneuver. The cost of a feasible 'reference' trajectory is improved by an optimizer which uses an analytical gradient developed using primer vector theory and a new solution for the optimal terminal (rendezvous) maneuver. Results are presented for a large number of cases in which the initial orbits of both vehicles are circular but in which the initial positions of the vehicles and the allotted time for rendezvous are varied. In general, the cost of the cooperative rendezvous is less than that of rendezvous with one vehicle passive. Further improvement in cost may be obtained in the future when additional, i.e., midcourse, impulses are allowed and inserted as indicated for some cases by the primer vector histories which are generated by the program.

  8. Parameter Optimization of Pseudo-Rigid-Body Models of MRI-Actuated Catheters

    PubMed Central

    Greigarn, Tipakorn; Liu, Taoming; Çavuşoğlu, M. Cenk

    2016-01-01

    Simulation and control of a system containing compliant mechanisms, such as cardiac catheters, often incur high computational costs. One way to reduce the costs is to approximate the mechanisms with Pseudo-Rigid-Body Models (PRBMs). A PRBM generally consists of rigid links connected by spring-loaded revolute joints. The lengths of the rigid links and the stiffnesses of the springs are usually chosen to minimize the tip deflection differences between the PRBM and the compliant mechanism. In most applications, only the relationship between end load and tip deflection is considered. This is not applicable to MRI-actuated catheters, which are actuated by coils attached along the body. This paper generalizes PRBM parameter optimization to include loading and reference points along the body. PMID:28261009

  9. Allogeneic cell therapy bioprocess economics and optimization: downstream processing decisions.

    PubMed

    Hassan, Sally; Simaria, Ana S; Varadaraju, Hemanthram; Gupta, Siddharth; Warren, Kim; Farid, Suzanne S

    2015-01-01

    The aim was to develop a decisional tool to identify the most cost-effective process flowsheets for allogeneic cell therapies across a range of production scales. A bioprocess economics and optimization tool was built to assess competing cell expansion and downstream processing (DSP) technologies. Tangential flow filtration was generally more cost-effective for the lower cell numbers per lot achieved with planar technologies, and fluidized bed centrifugation became the only feasible option for handling large bioreactor outputs. DSP bottlenecks were observed at large commercial lot sizes requiring multiple large bioreactors. The DSP contribution to the cost of goods per dose ranged between 20-55% and 50-80% for planar and bioreactor flowsheets, respectively. This analysis can facilitate early decision-making during process development.

  10. Resilience-based optimal design of water distribution network

    NASA Astrophysics Data System (ADS)

    Suribabu, C. R.

    2017-11-01

    Optimal design of a water distribution network generally aims to minimize the capital cost of investments in tanks, pipes, pumps, and other appurtenances. Minimizing the cost of pipes is usually considered the prime objective, as its proportion of the capital cost of a water distribution system project is very high. However, minimizing the capital cost of the pipeline alone may result in an economical network configuration, but not necessarily a promising solution from a resilience point of view. Resilience of the water distribution network has been considered one of the popular surrogate measures to address the ability of the network to withstand failure scenarios. To improve the resilience of the network, the pipe network optimization can be performed with two objectives, namely minimizing the capital cost as the first objective and maximizing a resilience measure of the configuration as the second objective. In the present work, these two objectives are combined into a single objective and the optimization problem is solved by a differential evolution technique. The paper illustrates a procedure for normalizing objective functions having distinct metrics; a plausible form of the combined objective is sketched below. Two of the existing resilience indices and power efficiency are considered for the optimal design of the water distribution network. The proposed normalized objective function is found to be efficient under the weighted method of handling the multi-objective water distribution design problem. The numerical results of the design indicate the importance of sizing pipes telescopically along the shortest path of flow to enhance the resilience indices.
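
    One plausible form of the normalized weighted-sum objective (the paper's exact normalization is not reproduced here; \( C \) is capital cost, \( I_r \) a resilience index, \( w \) the weight and \( \mathbf{d} \) the vector of pipe diameters):

    \[
      F(\mathbf{d}) \;=\; w \, \frac{C(\mathbf{d}) - C_{\min}}{C_{\max} - C_{\min}}
      \;+\; (1 - w) \, \frac{I_{\max} - I_r(\mathbf{d})}{I_{\max} - I_{\min}} .
    \]

    Both terms are dimensionless, which is the point of the normalization: cost (in currency) and resilience (a dimensionless index) can then be traded off on a common scale before handing \( F \) to the differential evolution solver.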

  11. Application of Particle Swarm Optimization Algorithm in the Heating System Planning Problem

    PubMed Central

    Ma, Rong-Jiang; Yu, Nan-Yang; Hu, Jun-Yi

    2013-01-01

    Based on the life cycle cost (LCC) approach, this paper presents an integral mathematical model and particle swarm optimization (PSO) algorithm for the heating system planning (HSP) problem. The proposed mathematical model minimizes the cost of the heating system as the objective for a given life cycle time. For the particularities of the HSP problem, the general particle swarm optimization algorithm was improved. An actual case study was calculated to check its feasibility in practical use. The results show that the improved particle swarm optimization (IPSO) algorithm solves the HSP problem better than the standard PSO algorithm. Moreover, the results also present the potential to provide useful information when making decisions in the practical planning process. Therefore, it is believed that if this approach is applied correctly and in combination with other elements, it can become a powerful and effective optimization tool for the HSP problem. PMID:23935429

  12. Payment mechanism and GP self-selection: capitation versus fee for service.

    PubMed

    Allard, Marie; Jelovac, Izabela; Léger, Pierre-Thomas

    2014-06-01

    This paper analyzes the consequences of allowing gatekeeping general practitioners (GPs) to select their payment mechanism. We model GPs' behavior under the most common payment schemes (capitation and fee for service) and when GPs can select one among them. Our analysis considers GP heterogeneity in terms of both ability and concern for their patients' health. We show that when the costs of wasteful referrals to costly specialized care are relatively high, fee for service payments are optimal to maximize the expected patients' health net of treatment costs. Conversely, when the losses associated with failed referrals of severely ill patients are relatively high, we show that either GPs' self-selection of a payment form or capitation is optimal. Last, we extend our analysis to endogenous effort and to competition among GPs. In both cases, we show that self-selection is never optimal.

  13. General Algebraic Modeling System Tutorial | High-Performance Computing |

    Science.gov Websites

    Here's a basic tutorial for modeling optimization problems with the General Algebraic Modeling System (GAMS). The GAMS package is essentially a compiler for algebraic optimization models. The tutorial's example models power generation from two different fuels, where the goal is to minimize the cost for one of the fuels.
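
    The two-fuel cost-minimization example is easy to mirror outside GAMS; a hedged Python sketch with assumed coefficients:

```python
from scipy.optimize import linprog

# Hypothetical two-fuel dispatch in the spirit of the tutorial example
# (all coefficients assumed): minimize fuel cost subject to covering a
# power demand, with a supply cap on the cheaper fuel.
cost = [30.0, 20.0]        # $/MWh of generation from fuel 1 and fuel 2
A_ub = [[-1.0, -1.0],      # -(g1 + g2) <= -demand  (demand is covered)
        [0.0, 1.0]]        # g2 <= cap              (fuel 2 is limited)
b_ub = [-100.0, 40.0]      # 100 MWh demand, 40 MWh cap on fuel 2
res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
print(res.x, res.fun)      # -> [60. 40.] 2600.0
```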

  14. Optimization of solar cell contacts by system cost-per-watt minimization

    NASA Technical Reports Server (NTRS)

    Redfield, D.

    1977-01-01

    New, and considerably altered, optimum dimensions for solar-cell metallization patterns are found using the recently developed procedure whose optimization criterion is the minimum cost-per-watt effect on the entire photovoltaic system. It is also found that the optimum shadow fraction by the fine grid is independent of metal cost and resistivity as well as cell size. The optimum thickness of the fine grid metal depends on all these factors, and in familiar cases it should be appreciably greater than that found by less complete analyses. The optimum bus bar thickness is much greater than those generally used. The cost-per-watt penalty due to the need for increased amounts of metal per unit area on larger cells is determined quantitatively and thereby provides a criterion for the minimum benefits that must be obtained in other process steps to make larger cells cost effective.

  15. Ultimate open pit stochastic optimization

    NASA Astrophysics Data System (ADS)

    Marcotte, Denis; Caron, Josiane

    2013-02-01

    Classical open pit optimization (the maximum closure problem) is performed on block estimates, without directly considering the uncertainty of the block grades. We propose an alternative approach of stochastic optimization, in which the optimal pit is computed on the block expected profits, rather than expected grades, computed from a series of conditional simulations. The stochastic optimization generates, by construction, larger ore and waste tonnages than the classical optimization. Contrary to the classical approach, the stochastic optimization is conditionally unbiased for the realized profit given the predicted profit. A series of simulated deposits with different variograms are used to compare the stochastic approach, the classical approach, and the simulated approach that maximizes expected profit among simulated designs. Profits obtained with the stochastic optimization are generally larger than with the classical or simulated pit. The main factor controlling the relative gain of the stochastic optimization over the classical approach and the simulated pit is shown to be the information level, as measured by the borehole spacing/range ratio. The relative gains of the stochastic approach over the classical approach increase with the treatment costs but decrease with mining costs. The relative gains of the stochastic approach over the simulated pit approach increase with both the treatment and mining costs. At early stages of an open pit project, when uncertainty is large, the stochastic optimization approach appears preferable to the classical approach or the simulated pit approach for fair comparison of the values of alternative projects and for the initial design and planning of the open pit.
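
    The reason expected profits and expected grades give different pits is Jensen's inequality: block profit is a truncated (non-linear) function of grade. A small sketch with assumed prices and costs:

```python
import numpy as np

rng = np.random.default_rng(2)

# profit(E[grade]) != E[profit(grade)] because profit is a truncated,
# non-linear function of grade.  All prices and costs assumed.
price, treatment_cost, mining_cost = 50.0, 12.0, 4.0

def block_profit(grade):
    revenue = price * grade - treatment_cost       # value if sent to the mill
    return np.maximum(revenue, 0.0) - mining_cost  # else dumped as waste

grades = rng.lognormal(mean=-1.6, sigma=0.8, size=10_000)  # simulated grades
print("profit of expected grade:", float(block_profit(grades.mean())))
print("expected profit:         ", float(block_profit(grades).mean()))
# The second number is larger (Jensen's inequality for the convex max),
# which is why the stochastic pit is conditionally unbiased while the
# classical pit is not.
```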

  16. Use of optimization to predict the effect of selected parameters on commuter aircraft performance

    NASA Technical Reports Server (NTRS)

    Wells, V. L.; Shevell, R. S.

    1982-01-01

    An optimizing computer program determined the turboprop aircraft with lowest direct operating cost for various sets of cruise speed and field length constraints. External variables included wing area, wing aspect ratio and engine sea level static horsepower; tail sizes, climb speed and cruise altitude were varied within the function evaluation program. Direct operating cost was minimized for a 150 n.mi typical mission. Generally, DOC increased with increasing speed and decreasing field length but not by a large amount. Ride roughness, however, increased considerably as speed became higher and field length became shorter.

  17. Gain optimization with non-linear controls

    NASA Technical Reports Server (NTRS)

    Slater, G. L.; Kandadai, R. D.

    1984-01-01

    An algorithm has been developed for the analysis and design of controls for non-linear systems. The technical approach is to use statistical linearization to model the non-linear dynamics of a system by a quasi-Gaussian model. A covariance analysis is performed to determine the behavior of the dynamical system and a quadratic cost function. Expressions for the cost function and its derivatives are determined so that numerical optimization techniques can be applied to determine optimal feedback laws. The primary application for this paper is centered about the design of controls for nominally linear systems but where the controls are saturated or limited by fixed constraints. The analysis is general, however, and numerical computation requires only that the specific non-linearity be considered in the analysis.

  18. On the Convergence Analysis of the Optimized Gradient Method.

    PubMed

    Kim, Donghwan; Fessler, Jeffrey A

    2017-01-01

    This paper considers the problem of unconstrained minimization of smooth convex functions having Lipschitz continuous gradients with known Lipschitz constant. We recently proposed the optimized gradient method for this problem and showed that it has a worst-case convergence bound for the cost function decrease that is twice as small as that of Nesterov's fast gradient method, yet has a similarly efficient practical implementation. Drori showed recently that the optimized gradient method has optimal complexity for the cost function decrease over the general class of first-order methods. This optimality makes it important to study fully the convergence properties of the optimized gradient method. The previous worst-case convergence bound for the optimized gradient method was derived for only the last iterate of a secondary sequence. This paper provides an analytic convergence bound for the primary sequence generated by the optimized gradient method. We then discuss additional convergence properties of the optimized gradient method, including the interesting fact that the optimized gradient method has two types of worst-case functions: a piecewise affine-quadratic function and a quadratic function. These results help complete the theory of an optimal first-order method for smooth convex minimization.

  20. Using parallel banded linear system solvers in generalized eigenvalue problems

    NASA Technical Reports Server (NTRS)

    Zhang, Hong; Moss, William F.

    1993-01-01

    Subspace iteration is a reliable and cost effective method for solving positive definite banded symmetric generalized eigenproblems, especially in the case of large scale problems. This paper discusses an algorithm that makes use of two parallel banded solvers in subspace iteration. A shift is introduced to decompose the banded linear systems into relatively independent subsystems and to accelerate the iterations. With this shift, an eigenproblem is mapped efficiently into the memories of a multiprocessor and a high speed-up is obtained for parallel implementations. An optimal shift is a shift that balances total computation and communication costs. Under certain conditions, we show how to estimate an optimal shift analytically using the decay rate for the inverse of a banded matrix, and how to improve this estimate. Computational results on iPSC/2 and iPSC/860 multiprocessors are presented.

  1. Welfare implications of energy and environmental policies: A general equilibrium approach

    NASA Astrophysics Data System (ADS)

    Iqbal, Mohammad Qamar

    Government intervention and implementation of policies can impose financial and social costs. To achieve a desired goal there may be several alternative policies or routes, and government would like to choose the one that imposes the least social cost and/or generates the greatest social benefit. Applied welfare economics therefore plays a vital role in public decision making. This paper recasts welfare measures, such as equivalent variation, in terms of the prices of factors of production rather than product prices. This is made possible by using duality theory within a general equilibrium framework and by deriving alternative forms of indirect utility functions and expenditure functions in factor prices. Not only are we able to recast existing welfare measures in factor prices, but we are also able to perform a true cost-benefit analysis of government policies using comparative static analysis of different equilibria, breaking the monetary measure of welfare change, such as equivalent variation, into its components. A further advantage of our research is demonstrated by incorporating externalities and public goods in the utility function. Interestingly, under a general equilibrium framework an optimal income tax tends to reduce inequalities. Results show that imposition of taxes at socially optimal rates brings a net gain to society. It was also seen that even though a pollution tax may reduce GDP, it leads to an increase in the welfare of society if it is imposed at an optimal rate.

  2. Cost-efficiency trade-off and the design of thermoelectric power generators.

    PubMed

    Yazawa, Kazuaki; Shakouri, Ali

    2011-09-01

    The energy conversion efficiency of today's thermoelectric generators is significantly lower than that of conventional mechanical engines, and almost all of the existing research is focused on materials to improve the conversion efficiency. Here we propose a general framework to study the cost-efficiency trade-off for thermoelectric power generation. A key factor is the optimization of thermoelectric modules together with their heat source and heat sinks. Full electrical and thermal co-optimization yields a simple analytical expression for the optimum design. Based on this model, power output per unit mass can be maximized. We show that the fractional area coverage of thermoelectric elements in a module can play a significant role in reducing the cost of power generation systems.

  3. Costs optimization in anaesthesia.

    PubMed

    Martelli, Alessandra

    2015-04-27

    The aim of this study is to analyze the direct cost of different anaesthetic techniques used within the Author's hospital setting and to compare them with costs reported in the literature. The mean cost of drugs and devices used in the local Department of Anaesthesia was considered; all drugs were supplied by the in-house Pharmacy Service of Parma's General Hospital, and all calculations were made for a hypothetical ASA 1 patient weighing 70 kg. The consumption and cost of inhalation anaesthesia with sevoflurane or desflurane at different fresh gas flows were analyzed, as were the costs of total intravenous anaesthesia (TIVA) using propofol and remifentanil and of balanced anaesthesia. In addition, the direct costs of general, spinal and sciatic-femoral nerve block anaesthesia used for common plastic surgery procedures were assessed. The results of our study show that the cost of inhalational anaesthesia decreases with fresh gas flows below 1 L/min, and that the use of desflurane is more expensive. In our Hospital, the cost of TIVA is roughly equivalent to the cost of balanced anaesthesia with sevoflurane in surgical procedures lasting more than five hours. The direct cost was lower for spinal anaesthesia compared with general anaesthesia and sciatic-femoral nerve block for some surgical procedures. (www.actabiomedica.it).

  4. Bioreactor performance: a more scientific approach for practice.

    PubMed

    Lübbert, A; Bay Jørgensen, S

    2001-02-13

    In practice, the performance of a biochemical conversion process, i.e. the bioreactor performance, is essentially determined by the benefit/cost ratio. The benefit is generally defined in terms of the amount of the desired product produced and its market price. Cost reduction is the major objective in biochemical engineering. There are two essential engineering approaches to minimizing the cost of creating a particular product in an existing plant. One is to find a control path or operational procedure that optimally uses the dynamics of the process and copes with the many constraints restricting production. The other is to remove or lower the constraints by constructive improvements of the equipment and/or the microorganisms. This paper focuses on the first approach, dealing with optimization of the operational procedure and the measures by which one can ensure that the process adheres to the predetermined path. In practice, feedforward control is the predominant control mode applied. However, as it is frequently inadequate for optimal performance, feedback control may also be employed. Relevant aspects of such performance optimization are discussed.

  5. The Inactivation Principle: Mathematical Solutions Minimizing the Absolute Work and Biological Implications for the Planning of Arm Movements

    PubMed Central

    Berret, Bastien; Darlot, Christian; Jean, Frédéric; Pozzo, Thierry; Papaxanthis, Charalambos; Gauthier, Jean Paul

    2008-01-01

    An important question in the literature focusing on motor control is to determine which laws drive biological limb movements. This question has prompted numerous investigations analyzing arm movements in both humans and monkeys. Many theories assume that among all possible movements the one actually performed satisfies an optimality criterion. In the framework of optimal control theory, a first approach is to choose a cost function and test whether the proposed model fits with experimental data. A second approach (generally considered as the more difficult) is to infer the cost function from behavioral data. The cost proposed here includes a term called the absolute work of forces, reflecting the mechanical energy expenditure. Contrary to most investigations studying optimality principles of arm movements, this model has the particularity of using a cost function that is not smooth. First, a mathematical theory related to both direct and inverse optimal control approaches is presented. The first theoretical result is the Inactivation Principle, according to which minimizing a term similar to the absolute work implies simultaneous inactivation of agonistic and antagonistic muscles acting on a single joint, near the time of peak velocity. The second theoretical result is that, conversely, the presence of non-smoothness in the cost function is a necessary condition for the existence of such inactivation. Second, during an experimental study, participants were asked to perform fast vertical arm movements with one, two, and three degrees of freedom. Observed trajectories, velocity profiles, and final postures were accurately simulated by the model. In accordance, electromyographic signals showed brief simultaneous inactivation of opposing muscles during movements. Thus, assuming that human movements are optimal with respect to a certain integral cost, the minimization of an absolute-work-like cost is supported by experimental observations. Such types of optimality criteria may be applied to a large range of biological movements. PMID:18949023
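
    The absolute-work term can be written explicitly; a hedged transcription (with \( \tau_i \) the net torque and \( \dot q_i \) the angular velocity at joint \( i \)):

    \[
      A \;=\; \int_0^T \sum_i \big| \tau_i(t) \, \dot q_i(t) \big| \, dt .
    \]

    The absolute value makes \( A \) non-smooth wherever a joint power \( \tau_i \dot q_i \) crosses zero, and it is precisely this non-smoothness that the Inactivation Principle ties to the simultaneous silencing of opposing muscles near peak velocity.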

  6. Starship Sails Propelled by Cost-Optimized Directed Energy

    NASA Astrophysics Data System (ADS)

    Benford, J.

    Microwave and laser-propelled sails are a new class of spacecraft using photon acceleration. It is the only method of interstellar flight that has no physics issues. Laboratory demonstrations of basic features of beam-driven propulsion, flight, stability (`beam-riding'), and induced spin, have been completed in the last decade, primarily in the microwave. It offers much lower cost probes after a substantial investment in the launcher. Engineering issues are being addressed by other applications: fusion (microwave, millimeter and laser sources) and astronomy (large aperture antennas). There are many candidate sail materials: carbon nanotubes and microtrusses, beryllium, graphene, etc. For acceleration of a sail, what is the cost-optimum high power system? Here the cost is used to constrain design parameters to estimate system power, aperture and elements of capital and operating cost. From general relations for cost-optimal transmitter aperture and power, system cost scales with kinetic energy and inversely with sail diameter and frequency. So optimal sails will be larger, lower in mass and driven by higher frequency beams. Estimated costs include economies of scale. We present several starship point concepts. Systems based on microwave, millimeter wave and laser technologies are of equal cost at today's costs. The frequency advantage of lasers is cancelled by the high cost of both the laser and the radiating optic. Cost of interstellar sailships is very high, driven by current costs for radiation source, antennas and especially electrical power. The high speeds necessary for fast interstellar missions make the operating cost exceed the capital cost. Such sailcraft will not be flown until the cost of electrical power in space is reduced orders of magnitude below current levels.
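
    The stated scaling can be summarized compactly (a schematic reading of the abstract, with \( E_k = \tfrac{1}{2} m v^2 \) the sail kinetic energy, \( D \) the sail diameter and \( f \) the beam frequency):

    \[
      C_{\text{system}} \;\propto\; \frac{E_k}{D f} \;=\; \frac{m v^2}{2 D f} ,
    \]

    so, all else being equal, cost-optimal sails are larger, lower in mass and driven by higher-frequency beams.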

  7. The option value of delay in health technology assessment.

    PubMed

    Eckermann, Simon; Willan, Andrew R

    2008-01-01

    Processes of health technology assessment (HTA) inform decisions under uncertainty about whether to invest in new technologies based on evidence of incremental effects, incremental cost, and incremental net monetary benefit (INMB). An option value to delaying such decisions to wait for further evidence is suggested in the usual case of interest, in which the prior distribution of INMB is positive but uncertain. Methods of estimating the option value of delaying decisions to invest have previously been developed for settings in which investments are irreversible with an uncertain payoff over time and information is assumed fixed. In HTA, however, decision uncertainty relates to information (evidence) on the distribution of INMB. This article demonstrates that the option value of delaying decisions to allow collection of further evidence can be estimated as the expected value of sample information (EVSI). For irreversible decisions, delay and trial (DT) is demonstrated to be preferred to adopt and no trial (AN) when the EVSI exceeds the expected costs of information, including the expected opportunity costs of not treating patients with the new therapy. For reversible decisions, adopt and trial (AT) becomes a potentially optimal strategy, but costs of reversal are shown to reduce the EVSI of this strategy due to both a lower probability of reversal being optimal and lower payoffs when reversal is optimal. Hence, decision makers generally face joint research and reimbursement decisions (AN, DT and AT), with the optimal choice dependent on the costs of reversal as well as the opportunity costs of delay and the distribution of prior INMB.

  8. Incorporation of Fixed Installation Costs into Optimization of Groundwater Remediation with a New Efficient Surrogate Nonlinear Mixed Integer Optimization Algorithm

    NASA Astrophysics Data System (ADS)

    Shoemaker, Christine; Wan, Ying

    2016-04-01

    Optimization of nonlinear water resources management problems that have a mixture of fixed (e.g. construction cost for a well) and variable (e.g. cost per gallon of water pumped) costs has not been well addressed, because prior algorithms for the resulting nonlinear mixed integer problems require many groundwater simulations (with different configurations of the decision variables), especially when the solution space is multimodal. In particular, heuristic methods like genetic algorithms have often been used in the water resources area, but they require so many groundwater simulations that only small systems have been solved. Hence there is a need for a method that reduces the number of expensive groundwater simulations. A recently published algorithm for nonlinear mixed integer programming using surrogates was shown in this study to greatly reduce the computational effort for obtaining accurate answers to problems involving fixed costs for well construction as well as variable costs for pumping, owing to a substantial reduction in the number of groundwater simulations required. Results are presented for a US EPA hazardous waste site. The nonlinear mixed integer surrogate algorithm is general and can be used on other problems arising in hydrology, with open source codes in Matlab and Python ("pySOT" on Bitbucket).

  9. Efficient Robust Optimization of Metal Forming Processes using a Sequential Metamodel Based Strategy

    NASA Astrophysics Data System (ADS)

    Wiebenga, J. H.; Klaseboer, G.; van den Boogaard, A. H.

    2011-08-01

    The coupling of Finite Element (FE) simulations to mathematical optimization techniques has contributed significantly to product improvements and cost reductions in the metal forming industries. The next challenge is to bridge the gap between deterministic optimization techniques and the industrial need for robustness. This paper introduces a new and generally applicable structured methodology for modeling and solving robust optimization problems. Stochastic design variables or noise variables are taken into account explicitly in the optimization procedure. The metamodel-based strategy is combined with a sequential improvement algorithm to efficiently increase the accuracy of the objective function prediction. This is only done at regions of interest containing the optimal robust design. Application of the methodology to an industrial V-bending process resulted in valuable process insights and an improved robust process design. Moreover, a significant improvement of the robustness (>2σ) was obtained by minimizing the deteriorating effects of several noise variables. The robust optimization results demonstrate the general applicability of the robust optimization strategy and underline the importance of including uncertainty and robustness explicitly in the numerical optimization procedure.

  10. Management of unmanned moving sensors through human decision layers: a bi-level optimization process with calls to costly sub-processes

    NASA Astrophysics Data System (ADS)

    Dambreville, Frédéric

    2013-10-01

    While there is a variety of approaches and algorithms for optimizing the mission of an unmanned moving sensor, there is much less work dealing with the implementation of several sensors within a human organization. In this case, the management of the sensors is done through at least one human decision layer, and the sensor management as a whole arises as a bi-level optimization process. In this work, the following hypotheses are considered realistic: sensor handlers at the first level plan their sensors by means of elaborate algorithmic tools based on accurate modelling of the environment; the higher level plans the handled sensors according to a global observation mission and on the basis of an approximate model of the environment and of the first-level sub-processes. This problem is formalized very generally as the maximization of an unknown function, defined a priori by sampling a known random function (law of model error). In such a case, each actual evaluation of the function increases the knowledge about the function, and subsequently the efficiency of the maximization. The issue is to optimize the sequence of values to be evaluated, with regard to the evaluation costs. There is a fundamental link here with the domain of experimental design. Jones, Schonlau and Welch proposed a general method, Efficient Global Optimization (EGO), for solving this problem in the case of an additive functional Gaussian law. In our work, a generalization of EGO is proposed, based on a rare-event simulation approach. It is applied to the aforementioned bi-level sensor planning.
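
    The EGO machinery referenced above selects the next costly evaluation by maximizing expected improvement under the surrogate's Gaussian posterior. A minimal sketch of that acquisition step (posterior values assumed; the paper's rare-event generalization is not reproduced):

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, best):
    """EI for maximization given a Gaussian posterior N(mu, sigma^2) at each
    candidate and the best value observed so far (Jones et al. form)."""
    sigma = np.maximum(sigma, 1e-12)          # guard against zero variance
    z = (mu - best) / sigma
    return (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)

# Assumed surrogate posterior at three candidate sensor plans after a few
# costly evaluations of the lower-level planners:
mu = np.array([0.80, 1.05, 0.90])       # predicted mission payoff
sigma = np.array([0.05, 0.10, 0.60])    # predictive uncertainty
best_so_far = 1.00
ei = expected_improvement(mu, sigma, best_so_far)
print("evaluate candidate", int(ei.argmax()), "next; EI =", ei.round(3))
# High uncertainty can beat a slightly higher mean: exploration is priced in.
```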

  11. Organization of an optimal adaptive immune system

    NASA Astrophysics Data System (ADS)

    Walczak, Aleksandra; Mayer, Andreas; Balasubramanian, Vijay; Mora, Thierry

    The repertoire of lymphocyte receptors in the adaptive immune system protects organisms from a diverse set of pathogens. A well-adapted repertoire should be tuned to the pathogenic environment to reduce the cost of infections. I will discuss a general framework for predicting the optimal repertoire that minimizes the cost of infections contracted from a given distribution of pathogens. The theory predicts that the immune system will have more receptors for rare antigens than expected from the frequency of encounters and individuals exposed to the same infections will have sparse repertoires that are largely different, but nevertheless exploit cross-reactivity to provide the same coverage of antigens. I will show that the optimal repertoires can be reached by dynamics that describes the competitive binding of antigens by receptors, and selective amplification of stimulated receptors.

  12. Energy aware swarm optimization with intercluster search for wireless sensor network.

    PubMed

    Thilagavathi, Shanmugasundaram; Geetha, Bhavani Gnanasambandan

    2015-01-01

    Wireless sensor networks (WSNs) are emerging as a low-cost, popular solution for many real-world challenges. The low cost allows deployment of large sensor arrays to perform military and civilian tasks. Generally, WSNs are power constrained due to their unique deployment method, which makes replacement of the battery source difficult. Challenges in WSNs include providing a well-organized communication platform for the network with negligible power utilization. In this work, an improved binary particle swarm optimization (PSO) algorithm with a modified connected dominating set (CDS) based on residual energy is proposed for discovering the optimal number of clusters and cluster heads (CHs). Simulations show that the proposed BPSO-T and BPSO-EADS perform better than LEACH- and PSO-based systems in terms of energy savings and QoS.

  13. Optimization, Monotonicity and the Determination of Nash Equilibria — An Algorithmic Analysis

    NASA Astrophysics Data System (ADS)

    Lozovanu, D.; Pickl, S. W.; Weber, G.-W.

    2004-08-01

    This paper is concerned with the optimization of a nonlinear time-discrete model exploiting the special structure of the underlying cost game and the property of inverse matrices. The costs are interlinked by a system of linear inequalities. It is shown that, if the players cooperate, i.e., minimize the sum of all the costs, they achieve a Nash equilibrium. In order to determine Nash equilibria, the simplex method can be applied with respect to the dual problem. An introduction into the TEM model and its relationship to an economic Joint Implementation program is given. The equivalence problem is presented. The construction of the emission cost game and the allocation problem is explained. The assumption of inverse monotony for the matrices leads to a new result in the area of such allocation problems. A generalization of such problems is presented.

  14. The cost of quality: Implementing generalization and suppression for anonymizing biomedical data with minimal information loss.

    PubMed

    Kohlmayer, Florian; Prasser, Fabian; Kuhn, Klaus A

    2015-12-01

    With the ARX data anonymization tool, structured biomedical data can be de-identified using syntactic privacy models, such as k-anonymity. Data is transformed with two methods: (a) generalization of attribute values, followed by (b) suppression of data records. The former method results in data that is well suited for analyses by epidemiologists, while the latter method significantly reduces loss of information. Our tool uses an optimal anonymization algorithm that maximizes output utility according to a given measure. To achieve scalability, existing optimal anonymization algorithms exclude parts of the search space by predicting the outcome of data transformations regarding privacy and utility without explicitly applying them to the input dataset. These optimizations cannot be used if data is transformed with generalization and suppression. As optimal data utility and scalability are important for anonymizing biomedical data, we had to develop a novel method. In this article, we first confirm experimentally that combining generalization with suppression significantly increases data utility. Next, we prove that, within this coding model, the outcome of data transformations regarding privacy and utility cannot be predicted. As a consequence, existing algorithms fail to deliver optimal data utility; we confirm this finding experimentally. The limitation of previous work can be overcome at the cost of increased computational complexity. However, scalability is important for anonymizing data with user feedback. Consequently, we identify properties of datasets that can be predicted in our context and propose a novel and efficient algorithm. Finally, we evaluate our solution with multiple datasets and privacy models. This work presents the first thorough investigation of which properties of datasets can be predicted when data is anonymized with generalization and suppression. Our novel approach adapts existing optimization strategies to our context and combines different search methods. The experiments show that our method is able to efficiently solve a broad spectrum of anonymization problems. Our work shows that implementing syntactic privacy models is challenging and that existing algorithms are not well suited for anonymizing data with transformation models that are more complex than generalization alone. As such models have been recommended for use in the biomedical domain, our results are of general relevance for de-identifying structured biomedical data. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
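
    For readers unfamiliar with the privacy model, a toy sketch of the k-anonymity criterion that generalization and suppression are meant to satisfy (hypothetical records; this is not ARX's algorithm):

```python
from collections import Counter

def smallest_class(records, quasi_identifiers):
    """Size of the smallest equivalence class over the quasi-identifiers;
    the table is k-anonymous for every k up to this value."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return min(groups.values())

# Toy table after generalizing age to decades and zip to a prefix
# (hypothetical records, for illustration only):
table = [
    {"age": "30-39", "zip": "537**", "diagnosis": "flu"},
    {"age": "30-39", "zip": "537**", "diagnosis": "asthma"},
    {"age": "40-49", "zip": "537**", "diagnosis": "flu"},
]
print("k =", smallest_class(table, ["age", "zip"]))
# k = 1 here: suppressing the third record would raise the table to k = 2,
# illustrating the interplay of generalization and suppression noted above.
```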

  15. Dynamic Programming and Error Estimates for Stochastic Control Problems with Maximum Cost

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bokanowski, Olivier, E-mail: boka@math.jussieu.fr; Picarelli, Athena, E-mail: athena.picarelli@inria.fr; Zidani, Hasnaa, E-mail: hasnaa.zidani@ensta.fr

    2015-02-15

    This work is concerned with stochastic optimal control for a running maximum cost. A direct approach based on dynamic programming techniques is studied, leading to the characterization of the value function as the unique viscosity solution of a second order Hamilton–Jacobi–Bellman (HJB) equation with an oblique derivative boundary condition. A general numerical scheme is proposed and a convergence result is provided. Error estimates are obtained for the semi-Lagrangian scheme. These results can apply to the case of lookback options in finance. Moreover, optimal control problems with maximum cost arise in the characterization of the reachable sets for a system of controlled stochastic differential equations. Some numerical simulations on examples of reachable analysis are included to illustrate our approach.

  16. A Case Study on the Application of a Structured Experimental Method for Optimal Parameter Design of a Complex Control System

    NASA Technical Reports Server (NTRS)

    Torres-Pomales, Wilfredo

    2015-01-01

    This report documents a case study on the application of Reliability Engineering techniques to achieve an optimal balance between performance and robustness by tuning the functional parameters of a complex non-linear control system. For complex systems with intricate and non-linear patterns of interaction between system components, analytical derivation of a mathematical model of system performance and robustness in terms of functional parameters may not be feasible or cost-effective. The demonstrated approach is simple, structured, effective, repeatable, and cost and time efficient. This general approach is suitable for a wide range of systems.

  17. Cost effective campaigning in social networks

    NASA Astrophysics Data System (ADS)

    Kotnis, Bhushan; Kuri, Joy

    2016-05-01

    Campaigners are increasingly using online social networking platforms for promoting products, ideas and information. A popular method of promoting a product or even an idea is incentivizing individuals to evangelize the idea vigorously by providing them with referral rewards in the form of discounts, cash backs, or social recognition. Due to budget constraints on scarce resources such as money and manpower, it may not be possible to provide incentives for the entire population, and hence incentives need to be allocated judiciously to appropriate individuals to ensure the highest possible outreach size. We aim to do the same by formulating and solving an optimization problem using percolation theory. In particular, we compute the set of individuals that are provided incentives for minimizing the expected cost while ensuring a given outreach size. We also solve the problem of computing the set of individuals to be incentivized for maximizing the outreach size for a given cost budget. The optimization problem turns out to be nontrivial: it involves quantities that must be computed by numerically solving a fixed point equation. Our primary contribution is to show that, for a fairly general cost structure, the optimization problems can be solved by solving a simple linear program. We believe that our approach of using percolation theory to formulate an optimization problem is the first of its kind.

  18. Estimation of in-situ bioremediation system cost using a hybrid Extreme Learning Machine (ELM)-particle swarm optimization approach

    NASA Astrophysics Data System (ADS)

    Yadav, Basant; Ch, Sudheer; Mathur, Shashi; Adamowski, Jan

    2016-12-01

    In-situ bioremediation is the most common groundwater remediation procedure used for treating organically contaminated sites. A simulation-optimization approach, which incorporates a simulation model for groundwater flow and transport processes within an optimization program, can help engineers design a remediation system that best satisfies management objectives as well as regulatory constraints. In-situ bioremediation is a highly complex, non-linear process, and modelling such a complex system requires considerable computational effort. Soft computing techniques have a flexible mathematical structure which can generalize complex nonlinear processes. In in-situ bioremediation management, a physically-based model is used for the simulation and the simulated data are utilized by the optimization model to optimize the remediation cost. Repeatedly calling the simulator to satisfy the constraints is an extremely tedious and time-consuming process, so there is a need for a simulator that can reduce the computational burden. This study presents a simulation-optimization approach to achieve an accurate and cost-effective in-situ bioremediation system design for groundwater contaminated with BTEX (benzene, toluene, ethylbenzene, and xylenes) compounds. In this study, the Extreme Learning Machine (ELM) is used as a proxy simulator to replace BIOPLUME III for the simulation. The selection of ELM is based on a comparative analysis with Artificial Neural Network (ANN) and Support Vector Machine (SVM) models, as these were successfully used in previous studies of in-situ bioremediation system design. Further, a single-objective optimization problem is solved by a coupled Extreme Learning Machine (ELM)-Particle Swarm Optimization (PSO) technique to achieve the minimum cost for the in-situ bioremediation system design. The results indicate that ELM is a faster and more accurate proxy simulator than ANN and SVM. The total cost obtained by the ELM-PSO approach is held to a minimum while successfully satisfying all the regulatory constraints of the contaminated site.
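
    An ELM is simple enough to sketch in a few lines: the input weights are random and fixed, and only the output weights are fit by least squares, which is why it is fast as a proxy simulator. A generic sketch with an assumed stand-in response (not BIOPLUME III output):

```python
import numpy as np

rng = np.random.default_rng(1)

# Minimal Extreme Learning Machine: random input weights, closed-form
# least-squares output weights (a generic ELM, not the paper's exact setup).
def elm_fit(X, y, n_hidden=50):
    W = rng.normal(size=(X.shape[1], n_hidden))   # random, never trained
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Assumed stand-in for simulator output: contaminant response vs. pumping.
X = rng.uniform(0, 1, size=(200, 3))              # e.g. well pumping rates
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 - 0.5 * X[:, 2]   # assumed response
W, b, beta = elm_fit(X, y)
print("train RMSE:", np.sqrt(np.mean((elm_predict(X, W, b, beta) - y) ** 2)))
```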

  19. Assessing Cost-effectiveness of Green Infrastructures in response to Large Storm Events at Household Scale

    NASA Astrophysics Data System (ADS)

    Chui, T. F. M.; Liu, X.; Zhan, W.

    2015-12-01

    Green infrastructures (GI) are becoming more important for urban stormwater control worldwide. However, relatively few studies focus on researching the specific designs of GI at household scale. This study assesses the hydrological performance and cost-effectiveness of different GI designs, namely green roofs, bioretention systems and porous pavements. It aims to generate generic insights by comparing the optimal designs of each GI in 2-year and 50-year storms of Hong Kong, China and Seattle, US. EPA SWMM is first used to simulate the hydrologic performance, in particular, the peak runoff reduction of thousands of GI designs. Then, life cycle costs of the designs are computed and their effectiveness, in terms of peak runoff reduction percentage per thousand dollars, is compared. The peak runoff reduction increases almost linearly with costs for green roofs. However, for bioretention systems and porous pavements, peak runoff reduction only increases significantly with costs in the mid values. For achieving the same peak runoff reduction percentage, the optimal soil depth of green roofs increases with the design storm, while surface area does not change significantly. On the other hand, for bioretention systems and porous pavements, the optimal surface area increases with the design storm, while thickness does not change significantly. In general, the cost effectiveness of porous pavements is highest, followed by bioretention systems and then green roofs. The cost effectiveness is higher for a smaller storm, and is thus higher for 2-year storm than 50-year storm, and is also higher for Seattle when compared to Hong Kong. This study allows us to better understand the hydrological performance and cost-effectiveness of different GI designs. It facilitates the implementation of optimal choice and design of each specific GI for stormwater mitigation.

  20. Culture Moderates Biases in Search Decisions.

    PubMed

    Pattaratanakun, Jake A; Mak, Vincent

    2015-08-01

    Prior studies suggest that people often search insufficiently in sequential-search tasks compared with the predictions of benchmark optimal strategies that maximize expected payoff. However, those studies were mostly conducted in individualist Western cultures; Easterners from collectivist cultures, with their higher susceptibility to escalation of commitment induced by sunk search costs, could exhibit a reversal of this undersearch bias by searching more than optimally, but only when search costs are high. We tested our theory in four experiments. In our pilot experiment, participants generally undersearched when search cost was low, but only Eastern participants oversearched when search cost was high. In Experiments 1 and 2, we obtained evidence for our hypothesized effects via a cultural-priming manipulation on bicultural participants in which we manipulated the language used in the program interface. We obtained further process evidence for our theory in Experiment 3, in which we made sunk costs non-salient in the search task; as expected, cross-cultural effects were largely mitigated. © The Author(s) 2015.

  1. Generalized networking engineering: optimal pricing and routing in multiservice networks

    NASA Astrophysics Data System (ADS)

    Mitra, Debasis; Wang, Qiong

    2002-07-01

    One of the functions of network engineering is to allocate resources optimally to forecasted demand. We generalize the mechanism by incorporating price-demand relationships into the problem formulation, and optimizing pricing and routing jointly to maximize total revenue. We consider a network, with fixed topology and link bandwidths, that offers multiple services, such as voice and data, each having characteristic price elasticity of demand, and quality of service and policy requirements on routing. Prices, which depend on service type and origin-destination, determine demands, that are routed, subject to their constraints, so as to maximize revenue. We study the basic properties of the optimal solution and prove that link shadow costs provide the basis for both optimal prices and optimal routing policies. We investigate the impact of input parameters, such as link capacities and price elasticities, on prices, demand growth, and routing policies. Asymptotic analyses, in which network bandwidth is scaled to grow, give results that are noteworthy for their qualitative insights. Several numerical examples illustrate the analyses.
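
    A schematic of the joint problem (notation assumed: \( s \) a service, \( \sigma \) an origin-destination pair, \( r \) an admissible route, \( \ell \) a link with capacity \( C_\ell \), and \( D_{s\sigma}(\cdot) \) the price-demand function):

    \[
    \max_{p,\, x \ge 0} \;\; \sum_{s,\sigma} p_{s\sigma}\, D_{s\sigma}(p_{s\sigma})
    \quad \text{s.t.} \quad
    \sum_r x_{s\sigma}^r = D_{s\sigma}(p_{s\sigma}) \;\; \forall s,\sigma ,
    \qquad
    \sum_{s,\sigma}\, \sum_{r \ni \ell} x_{s\sigma}^r \le C_\ell \;\; \forall \ell .
    \]

    The Lagrange multipliers on the capacity constraints are the link shadow costs that, as the paper shows, underlie both the optimal prices and the optimal routing policies.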

  2. Optimization of storage tank locations in an urban stormwater drainage system using a two-stage approach.

    PubMed

    Wang, Mingming; Sun, Yuanxiang; Sweetapple, Chris

    2017-12-15

    Storage is important for flood mitigation and non-point source pollution control. However, seeking a cost-effective design scheme for storage tanks is very complex. This paper presents a two-stage optimization framework to find an optimal scheme for storage tanks using the storm water management model (SWMM). The objectives are to minimize flooding, total suspended solids (TSS) load and storage cost. The framework includes two modules: (i) the analytical module, which evaluates and ranks the flooding nodes with the analytic hierarchy process (AHP) using two indicators (flood depth and flood duration), and then obtains a preliminary scheme by calculating two efficiency indicators (flood reduction efficiency and TSS reduction efficiency); (ii) the iteration module, which obtains an optimal scheme using a generalized pattern search (GPS) method, starting from the preliminary scheme generated by the analytical module. The proposed approach was applied to a catchment in CZ city, China, to test its capability in choosing among design alternatives. Different rainfall scenarios are considered to test its robustness. The results demonstrate that the optimization framework is feasible, and that the optimization is fast when started from the preliminary scheme. The optimized scheme is better than the preliminary scheme in reducing runoff and pollutant loads under a given storage cost. The multi-objective optimization framework presented in this paper may be useful in finding the best scheme of storage tanks or low impact development (LID) controls. Copyright © 2017 Elsevier Ltd. All rights reserved.
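
    A minimal sketch of the analytical module's node-ranking step, under stated assumptions: the AHP weights for the two indicators come from a 2x2 pairwise comparison matrix (here flood depth is judged three times as important as duration, an illustrative judgment, not the paper's), and nodes are ranked by the weighted sum of their normalized indicator values.

    comparison = [[1.0, 3.0],
                  [1.0 / 3.0, 1.0]]     # depth vs duration

    # Priority weights by the row geometric-mean method (equivalent to the
    # principal eigenvector for a consistent 2x2 matrix).
    gm = [(row[0] * row[1]) ** 0.5 for row in comparison]
    weights = [g / sum(gm) for g in gm]  # -> [0.75, 0.25]

    nodes = {  # node: (flood depth in m, flood duration in h), illustrative data
        "J1": (0.8, 2.0), "J2": (0.3, 5.0), "J3": (0.6, 1.0),
    }
    max_depth = max(d for d, _ in nodes.values())
    max_dur = max(t for _, t in nodes.values())

    scores = {n: weights[0] * d / max_depth + weights[1] * t / max_dur
              for n, (d, t) in nodes.items()}
    for n in sorted(scores, key=scores.get, reverse=True):
        print(n, round(scores[n], 3))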

  3. Economic optimization of natural hazard protection - conceptual study of existing approaches

    NASA Astrophysics Data System (ADS)

    Spackova, Olga; Straub, Daniel

    2013-04-01

    Risk-based planning of protection measures against natural hazards has become common practice in many countries. The selection procedure aims at identifying an economically efficient strategy with regard to the estimated costs and risk (i.e. expected damage). A correct setting of the evaluation methodology and decision criteria should ensure an optimal selection of the portfolio of risk protection measures under a limited state budget. To demonstrate the efficiency of investments, indicators such as the Benefit-Cost Ratio (BCR), Marginal Costs (MC) or Net Present Value (NPV) are commonly used. However, the methodologies for efficiency evaluation differ amongst countries and hazard types (floods, earthquakes etc.). Additionally, several inconsistencies can be found in the application of the indicators in practice. This is likely to lead to a suboptimal selection of protection strategies. This study provides a general formulation for the optimization of natural hazard protection measures from a socio-economic perspective. It assumes that all costs and risks can be expressed in monetary values. The study regards the problem as a discrete hierarchical optimization, where the state level sets the criteria and constraints, while the actual optimization is made at the regional level (towns, catchments) when designing particular protection measures and selecting the optimal protection level. The study shows that in the case of an unlimited budget, the task is quite trivial, as it is sufficient to optimize the protection measures in individual regions independently (by minimizing the sum of risk and cost). However, if the budget is limited, the need for an optimal allocation of resources amongst the regions arises. To ensure this, minimum values of BCR or MC can be required by the state, which must be achieved in each region. The study investigates the meaning of these indicators in the optimization task at the conceptual level and compares their suitability. To illustrate the theoretical findings, the indicators are tested on a hypothetical example of five regions with different risk levels. Last but not least, political and societal aspects and limitations in the use of the risk-based optimization framework are discussed.
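
    As a concrete illustration of the efficiency indicators named above, the sketch below computes the NPV and BCR of a single hypothetical protection measure; the horizon, discount rate, and cost and benefit figures are assumptions.

    def npv(cashflows, rate):
        """Net present value of a list of annual cashflows, year 0 first."""
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

    horizon, rate = 50, 0.02
    cost0 = 4.0e6                    # up-front construction cost
    annual_benefit = 0.35e6          # expected annual damage avoided (risk reduction)

    benefit_pv = npv([0.0] + [annual_benefit] * horizon, rate)
    bcr = benefit_pv / cost0
    net = benefit_pv - cost0
    print(f"PV of benefits {benefit_pv / 1e6:.2f} M, BCR {bcr:.2f}, NPV {net / 1e6:.2f} M")
    # With an unlimited budget the measure is worthwhile whenever NPV > 0; under
    # a limited budget the state can instead require a minimum BCR in every region.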

  4. Performance Analysis and Optimization on the UCLA Parallel Atmospheric General Circulation Model Code

    NASA Technical Reports Server (NTRS)

    Lou, John; Ferraro, Robert; Farrara, John; Mechoso, Carlos

    1996-01-01

    An analysis is presented of several factors influencing the performance of a parallel implementation of the UCLA atmospheric general circulation model (AGCM) on massively parallel computer systems. Several modifications to the original parallel AGCM code, aimed at improving its numerical efficiency, interprocessor communication cost, load balance, and single-node code performance, are discussed.

  5. Ordinal optimization and its application to complex deterministic problems

    NASA Astrophysics Data System (ADS)

    Yang, Mike Shang-Yu

    1998-10-01

    We present in this thesis a new perspective for approaching a general class of optimization problems characterized by large deterministic complexities. Many real-world problems lack analyzable structure and almost always involve a high level of difficulty and complexity in the evaluation process. Advances in computer technology allow us to build computer models to simulate the evaluation process through numerical means, but high complexity remains, taxing the simulation with an exorbitant computing cost for each evaluation. Such a resource requirement makes local fine-tuning of a known design difficult under most circumstances, let alone global optimization. The Kolmogorov equivalence of complexity and randomness in computation theory is introduced to resolve this difficulty by converting the complex deterministic model to a stochastic pseudo-model composed of a simple deterministic component and a white-noise-like stochastic term. The resulting randomness is then dealt with by a noise-robust approach called Ordinal Optimization. Ordinal Optimization utilizes Goal Softening and Ordinal Comparison to achieve an efficient and quantifiable selection of designs in the initial search process. The approach is substantiated by a case study of the turbine blade manufacturing process: the optimization of the manufacturing of integrally bladed rotors in the turbine engines of U.S. Air Force fighter jets. The intertwining interactions among material, thermomechanical, and geometrical changes make the current FEM approach prohibitively uneconomical in the optimization process. The generalized OO approach to complex deterministic problems is applied here with great success; empirical results indicate a saving of nearly 95% in computing cost.
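
    A minimal sketch of the ordinal-comparison and goal-softening steps, under simple assumptions not taken from the thesis (uniform true performances, Gaussian evaluation noise): designs are ranked on a cheap noisy estimate, the observed top-s set is accepted, and the selection is checked against the true top-g.

    import random

    rng = random.Random(1)
    N, g, s = 1000, 20, 20            # designs, true "good enough" set, selected set
    true_J = sorted(rng.uniform(0, 100) for _ in range(N))   # lower is better
    # Stochastic pseudo-model: true performance plus a white-noise-like error.
    noisy = [(J + rng.gauss(0, 15), i) for i, J in enumerate(true_J)]

    selected = {i for _, i in sorted(noisy)[:s]}
    good = set(range(g))              # since true_J is sorted, 0..g-1 are the best
    print(f"selected set contains {len(selected & good)} of the true top-{g} designs")
    # Repeating this over many seeds estimates the alignment probability, i.e.
    # the chance the softened goal still captures truly good designs.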

  6. Optimizing value utilizing Toyota Kata methodology in a multidisciplinary clinic.

    PubMed

    Merguerian, Paul A; Grady, Richard; Waldhausen, John; Libby, Arlene; Murphy, Whitney; Melzer, Lilah; Avansino, Jeffrey

    2015-08-01

    Value in healthcare is measured in terms of patient outcomes achieved per dollar expended. Outcomes and cost must be measured at the patient level to optimize value. Multidisciplinary clinics have been shown to be effective in providing coordinated and comprehensive care with improved outcomes, yet they tend to have higher costs than typical clinics. We sought to lower individual patient cost and optimize value in a pediatric multidisciplinary reconstructive pelvic medicine (RPM) clinic. The RPM clinic is a multidisciplinary clinic that takes care of patients with anomalies of the pelvic organs. The specialties involved include Urology, General Surgery, Gynecology, and Gastroenterology/Motility. From May 2012 to November 2014 we performed time-driven activity-based costing (TDABC) analysis by measuring provider time for each step in the patient flow. Using observed times and the estimated hourly cost of each of the providers, we calculated the final cost at the individual patient level, targeting clinic preparation. We utilized Toyota Kata methodology to enhance operational efficiency in an effort to optimize value. Variables measured included cost, time to perform a task, number of patients seen in clinic, percent value-added time (VAT) to patients (face-to-face time) and family experience scores (FES). At the beginning of the study period, clinic costs were $619 per patient. We reduced conference time from 6 min per patient to 1 min per patient and physician preparation time from 8 min to 6 min, and increased Medical Assistant (MA) preparation time from 9.5 min to 20 min, achieving a cost reduction of 41% to $366 per patient. Continued improvements further reduced the MA preparation time to 14 min and the MD preparation time to 5 min, with a further cost reduction to $194 (69%). During this study period, we increased the number of appointments per clinic. We demonstrated sustained improvement in FES with regard to the families' overall experience with their providers. Value-added time increased from 60% to 78%, but this was not significant. Time-based cost analysis effectively measures individualized patient cost. We achieved a 69% reduction in clinic preparation costs. Despite this reduction in costs, we were able to maintain VAT and sustain improvements in family experience. In caring for complex patients, lean management methodology enables optimization of value in a multidisciplinary clinic. Copyright © 2015. Published by Elsevier Ltd.
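
    The TDABC arithmetic described above reduces to multiplying each step's provider time by that provider's cost rate and summing over the steps of one patient's flow. The sketch below applies it to the preparation times quoted in the abstract; the hourly rates are assumptions, and the quoted dollar totals include additional flow steps not modeled here.

    hourly_rate = {"MD": 250.0, "MA": 45.0}        # assumed provider costs per hour

    def prep_cost(steps):
        """steps: list of (provider, minutes) tuples for one patient."""
        return sum(minutes * hourly_rate[provider] / 60.0
                   for provider, minutes in steps)

    before = [("MD", 6), ("MD", 8), ("MA", 9.5)]   # conference, MD prep, MA prep
    after = [("MD", 1), ("MD", 5), ("MA", 14)]     # after the Toyota Kata cycles
    print(f"before: ${prep_cost(before):.2f} per patient")
    print(f"after:  ${prep_cost(after):.2f} per patient")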

  7. Globally optimal trial design for local decision making.

    PubMed

    Eckermann, Simon; Willan, Andrew R

    2009-02-01

    Value of information methods allow decision makers to identify efficient trial designs following a principle of maximizing the expected value to decision makers of information from potential trial designs relative to their expected cost. However, in health technology assessment (HTA) the restrictive assumption has been made that, prospectively, there is expected value of sample information only from research commissioned within a jurisdiction. This paper extends the framework for optimal trial design and decision making within a jurisdiction to allow for optimal trial design across jurisdictions. This is illustrated in identifying an optimal trial design for decision making across the US, the UK and Australia for early versus late external cephalic version for pregnant women presenting in the breech position. The expected net gain from locally optimal trial designs of US$0.72M is shown to increase to US$1.14M with a globally optimal trial design. In general, the proposed method of globally optimal trial design improves on optimal trial design within jurisdictions by: (i) reflecting the global value of non-rival information; (ii) allowing optimal allocation of the trial sample across jurisdictions; (iii) avoiding the market failure associated with free-rider effects, sub-optimal spreading of fixed costs and heterogeneity of trial information with multiple trials. Copyright (c) 2008 John Wiley & Sons, Ltd.

  8. 1L Mark-IV Target Design Review

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koehler, Paul E.

    This presentation includes General Design Considerations; Current (Mark-III) Lower Tier; Mark-III Upper Tier; Performance Metrics; General Improvements for Material Science; General Improvements for Nuclear Science; Improving FOM for Nuclear Science; General Design Considerations Summary; Design Optimization Studies; Expected Mark-IV Performance: Material Science; Expected Mark-IV Performance: Nuclear Science (Disk); Mark IV Enables Much Wider Range of Nuclear-Science FOM Gains than Mark III; Mark-IV Performance Summary; Rod or Disk? Center or Real FOV?; and Project Cost and Schedule.

  9. Exploration cost-cutting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huttrer, J.

    This presentation by Jerry Huttrer, President, Geothermal Management Company, discusses the general state of exploration in the geothermal industry today, and mentions some ways to economize and perhaps reduce the costs of geothermal exploration in the future. He suggests increased use of satellite imagery in the mapping of geothermal resources and the identification of hot spots. He also suggests that, by coordinating with oil and gas exploration efforts, the efficiency of the exploration task could be optimized.

  10. A systematic approach to designing statistically powerful heteroscedastic 2 × 2 factorial studies while minimizing financial costs.

    PubMed

    Jan, Show-Li; Shieh, Gwowen

    2016-08-31

    The 2 × 2 factorial design is widely used for assessing the existence of interaction and the extent of generalizability of two factors where each factor has only two levels. Accordingly, research problems associated with the main effects and interaction effects can be analyzed with selected linear contrasts. To correct for potential heterogeneity of the variance structure, the Welch-Satterthwaite test is commonly used as an alternative to the t test for detecting the substantive significance of a linear combination of mean effects. This study concerns the optimal allocation of group sizes for the Welch-Satterthwaite test in order to minimize total cost while maintaining adequate power. The existing method suggests that the optimal ratio of sample sizes is proportional to the ratio of the population standard deviations divided by the square root of the ratio of the unit sampling costs. Instead, a systematic approach using optimization techniques and a screening search is presented to find the optimal solution. Numerical assessments revealed that the current allocation scheme generally does not give the optimal solution. Alternatively, the suggested approaches to power and sample size calculations give accurate and superior results under various treatment and cost configurations. The proposed approach improves upon the current method in both its methodological soundness and overall performance. Supplementary algorithms are also developed to aid the usefulness and implementation of the recommended technique in planning 2 × 2 factorial designs.
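
    The existing rule quoted above sets the group-size ratio to n1/n2 = (sigma1/sigma2) / sqrt(c1/c2). The sketch below illustrates the alternative screening-search idea: brute-force the cheapest (n1, n2) that meets a power target. It uses a normal approximation to the Welch test's power rather than the paper's exact Welch-Satterthwaite calculation, so it is only indicative.

    from math import sqrt
    from statistics import NormalDist

    z = NormalDist().inv_cdf
    phi = NormalDist().cdf

    def power(n1, n2, delta, s1, s2, alpha=0.05):
        """Approximate power of the two-sided Welch test (normal approximation)."""
        se = sqrt(s1 ** 2 / n1 + s2 ** 2 / n2)
        return phi(delta / se - z(1 - alpha / 2))

    def cheapest(delta, s1, s2, c1, c2, target=0.9, n_max=500):
        best = None
        for n1 in range(2, n_max):
            for n2 in range(2, n_max):
                if power(n1, n2, delta, s1, s2) >= target:
                    cost = c1 * n1 + c2 * n2
                    if best is None or cost < best[0]:
                        best = (cost, n1, n2)
                    break              # any larger n2 only adds cost at this n1
        return best

    # The classical ratio here is (4/2) / sqrt(1/4) = 4; under this normal
    # approximation the screen lands near that ratio, while the paper shows the
    # exact Welch test can deviate from it.
    print("screened optimum (cost, n1, n2):", cheapest(1.5, 4.0, 2.0, 1.0, 4.0))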

  11. [A program for optimizing the use of antimicrobials (PROA): experience in a regional hospital].

    PubMed

    Ugalde-Espiñeira, J; Bilbao-Aguirregomezcorta, J; Sanjuan-López, A Z; Floristán-Imízcoz, C; Elorduy-Otazua, L; Viciola-García, M

    2016-08-01

    Programs for optimizing the use of antibiotics (PROA), or antimicrobial stewardship programs, are multidisciplinary programs developed in response to the increase in antibiotic-resistant bacteria, the objectives of which are to improve clinical results, minimize adverse events and reduce costs associated with the use of antimicrobials. The implementation of a PROA in a 128-bed general hospital and the results obtained at 6 months are reported here. A quasi-experimental intervention study with a historical control group was designed with the objective of assessing the impact of a PROA with a non-restrictive, direct and bidirectional intervention model to support prescription. The basis of the program is an audit of antimicrobial use with personalized, non-mandatory recommendations, together with the use of information technologies applied to this setting. The impact on pharmaceutical consumption and costs, cost per process, mean hospital stay, and percentage of hospital readmissions is described. A total of 307 audits were performed. In 65.8% of cases, treatment was discontinued between the 7th and the 10th day. The main reasons for treatment discontinuation were completion of treatment (43.6%) and lack of indication (14.7%). The reduction in pharmaceutical expenditure was 8.59% (P=0.049), with a 5.61% reduction in consumption in DDD/100 stays (P=0.180). Costs per process in general surgery decreased by 3.14% (P<0.001). The results obtained support the efficiency of these programs in small hospitals with limited resources.

  12. The metabolic cost of changing walking speeds is significant, implies lower optimal speeds for shorter distances, and increases daily energy estimates.

    PubMed

    Seethapathi, Nidhi; Srinivasan, Manoj

    2015-09-01

    Humans do not generally walk at constant speed, except perhaps on a treadmill. Normal walking involves starting, stopping and changing speeds, in addition to roughly steady locomotion. Here, we measure the metabolic energy cost of walking when changing speed. Subjects (healthy adults) walked with oscillating speeds on a constant-speed treadmill, alternating between walking slower and faster than the treadmill belt, moving back and forth in the laboratory frame. The metabolic rate for oscillating-speed walking was significantly higher than that for constant-speed walking (6-20% cost increase for ±0.13-0.27 m s−1 speed fluctuations). The metabolic rate increase was correlated with two models: a model based on kinetic energy fluctuations and an inverted pendulum walking model, optimized for oscillating-speed constraints. The cost of changing speeds may have behavioural implications: we predicted that the energy-optimal walking speed is lower for shorter distances. We measured preferred human walking speeds for different walking distances and found people preferred lower walking speeds for shorter distances as predicted. Further, analysing published daily walking-bout distributions, we estimate that the cost of changing speeds is 4-8% of daily walking energy budget. © 2015 The Author(s).

  13. The metabolic cost of changing walking speeds is significant, implies lower optimal speeds for shorter distances, and increases daily energy estimates

    PubMed Central

    Seethapathi, Nidhi; Srinivasan, Manoj

    2015-01-01

    Humans do not generally walk at constant speed, except perhaps on a treadmill. Normal walking involves starting, stopping and changing speeds, in addition to roughly steady locomotion. Here, we measure the metabolic energy cost of walking when changing speed. Subjects (healthy adults) walked with oscillating speeds on a constant-speed treadmill, alternating between walking slower and faster than the treadmill belt, moving back and forth in the laboratory frame. The metabolic rate for oscillating-speed walking was significantly higher than that for constant-speed walking (6–20% cost increase for ±0.13–0.27 m s−1 speed fluctuations). The metabolic rate increase was correlated with two models: a model based on kinetic energy fluctuations and an inverted pendulum walking model, optimized for oscillating-speed constraints. The cost of changing speeds may have behavioural implications: we predicted that the energy-optimal walking speed is lower for shorter distances. We measured preferred human walking speeds for different walking distances and found people preferred lower walking speeds for shorter distances as predicted. Further, analysing published daily walking-bout distributions, we estimate that the cost of changing speeds is 4–8% of daily walking energy budget. PMID:26382072

  14. Optimization of municipal solid waste collection and transportation routes.

    PubMed

    Das, Swapan; Bhattacharyya, Bidyut Kr

    2015-09-01

    Optimization of municipal solid waste (MSW) collection and transportation through source separation has become one of the major concerns in MSW management system design, because existing MSW management systems suffer from high collection and transportation costs. Generally, waste sources are scattered throughout a city in a heterogeneous way, which increases collection and transportation costs in the waste management system. A shortest-route waste collection and transportation strategy can therefore effectively reduce these costs. In this paper, we propose an optimal MSW collection and transportation scheme that focuses on the problem of minimizing the length of each waste collection and transportation route. We first formulate the MSW collection and transportation problem as a mixed integer program. Moreover, we propose a heuristic solution for the problem that provides an optimal way to collect and transport waste. Extensive simulations and real testbed results show that the proposed solution can significantly improve MSW system performance. Results show that the proposed scheme is able to reduce the total waste collection path length by more than 30%. Copyright © 2015 Elsevier Ltd. All rights reserved.
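
    The paper's mixed integer program is not reproduced in the abstract; as a stand-in for the route-length minimization it describes, the sketch below compares a simple nearest-neighbour collection tour against visiting waste sources in their listed order. The coordinates are illustrative.

    from math import hypot

    def tour_length(points, order):
        return sum(hypot(points[a][0] - points[b][0], points[a][1] - points[b][1])
                   for a, b in zip(order, order[1:]))

    def nearest_neighbour(points, start=0):
        unvisited = set(range(len(points))) - {start}
        order = [start]
        while unvisited:
            last = points[order[-1]]
            nxt = min(unvisited, key=lambda i: hypot(points[i][0] - last[0],
                                                     points[i][1] - last[1]))
            order.append(nxt)
            unvisited.remove(nxt)
        return order

    sources = [(0, 0), (8, 1), (1, 2), (7, 7), (2, 6), (9, 4)]   # waste sources
    print(f"listed order:      {tour_length(sources, list(range(len(sources)))):.1f}")
    print(f"nearest-neighbour: {tour_length(sources, nearest_neighbour(sources)):.1f}")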

  15. Development of Activity-based Cost Functions for Cellulase, Invertase, and Other Enzymes

    NASA Astrophysics Data System (ADS)

    Stowers, Chris C.; Ferguson, Elizabeth M.; Tanner, Robert D.

    As enzyme chemistry plays an increasingly important role in the chemical industry, cost analysis of these enzymes becomes a necessity. In this paper, we examine the factors that affect the cost of enzymes based upon enzyme activity. This study builds on a previously developed objective function that quantifies the tradeoffs in enzyme purification via the foam fractionation process (Cherry et al., Braz J Chem Eng 17:233-238, 2000). A generalized cost function is developed from our results that can be used to aid both industrial- and lab-scale chemical processing. The generalized cost function shows several nonobvious results that could lead to significant savings. Additionally, the parameters involved in the operation and scale-up of enzyme processing can be optimized to minimize costs. We show that there are typically three regimes in the enzyme cost function: a low-activity pre-linear region, a moderate-activity linear region, and a high-activity power-law region. Overall, the cost function appears to robustly fit the power-law form.
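
    A minimal sketch of the three-regime shape described above, written as a piecewise function of enzyme activity; the breakpoints and coefficients are illustrative, not fitted to the paper's data.

    def enzyme_cost(a, a1=10.0, a2=100.0):
        """Cost as a function of activity a: pre-linear, linear, then power-law."""
        if a <= a1:                          # low activity: cost rises slowly
            return 2.0 * a ** 0.5
        if a <= a2:                          # moderate activity: linear, matched at a1
            return enzyme_cost(a1) + 0.8 * (a - a1)
        return enzyme_cost(a2) * (a / a2) ** 1.6   # high activity: power law, matched at a2

    for a in (5, 50, 500):
        print(f"activity {a:>4}: cost {enzyme_cost(a):8.1f}")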

  16. Use of optimization to predict the effect of selected parameters on commuter aircraft performance

    NASA Technical Reports Server (NTRS)

    Wells, V. L.; Shevell, R. S.

    1982-01-01

    The relationships between field length and cruise speed and aircraft direct operating cost were determined. A gradient optimizing computer program was developed to minimize direct operating cost (DOC) as a function of airplane geometry. In this way, the best airplane operating under one set of constraints can be compared with the best operating under another. A constant 30-passenger fuselage and rubberized engines based on the General Electric CT-7 were used as a baseline. All aircraft had to have a 600 nautical mile maximum range and were designed to FAR Part 25 structural integrity and climb gradient regulations. Direct operating cost was minimized for a typical design mission of 150 nautical miles. For purposes of calculating the maximum lift coefficient, CL,max, all aircraft had double-slotted flaps but with no Fowler action.

  17. Optimization of municipal solid waste collection and transportation routes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Das, Swapan, E-mail: swapan2009sajal@gmail.com; Bhattacharyya, Bidyut Kr., E-mail: bidyut53@yahoo.co.in

    2015-09-15

    Highlights: • Profitable integrated solid waste management system. • Optimal municipal waste collection scheme between the sources and waste collection centres. • Optimal path calculation between waste collection centres and transfer stations. • Optimal waste routing between the transfer stations and processing plants. - Abstract: Optimization of municipal solid waste (MSW) collection and transportation through source separation has become one of the major concerns in MSW management system design, because existing MSW management systems suffer from high collection and transportation costs. Generally, waste sources are scattered throughout a city in a heterogeneous way, which increases collection and transportation costs in the waste management system. A shortest-route waste collection and transportation strategy can therefore effectively reduce these costs. In this paper, we propose an optimal MSW collection and transportation scheme that focuses on the problem of minimizing the length of each waste collection and transportation route. We first formulate the MSW collection and transportation problem as a mixed integer program. Moreover, we propose a heuristic solution for the problem that provides an optimal way to collect and transport waste. Extensive simulations and real testbed results show that the proposed solution can significantly improve MSW system performance. Results show that the proposed scheme is able to reduce the total waste collection path length by more than 30%.

  18. Computer programs for generation and evaluation of near-optimum vertical flight profiles

    NASA Technical Reports Server (NTRS)

    Sorensen, J. A.; Waters, M. H.; Patmore, L. C.

    1983-01-01

    Two extensive computer programs were developed. The first, called OPTIM, generates a reference near-optimum vertical profile, and it contains control options so that the effects of various flight constraints on cost performance can be examined. The second, called TRAGEN, is used to simulate an aircraft flying along an optimum or any other vertical reference profile. TRAGEN is used to verify OPTIM's output, examine the effects of uncertainty in the values of parameters (such as prevailing wind) which govern the optimum profile, or compare the cost performance of profiles generated by different techniques. A general description of these programs, the efforts to add special features to them, and sample results of their usage are presented.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yazawa, Kazuaki; Shakouri, Ali

    The energy conversion efficiency of today’s thermoelectric generators is significantly lower than that of conventional mechanical engines. Almost all of the existing research is focused on materials to improve the conversion efficiency. Here we propose a general framework to study the cost-efficiency trade-off for thermoelectric power generation. A key factor is the optimization of thermoelectric modules together with their heat source and heat sinks. Full electrical and thermal co-optimization yields a simple analytical expression for the optimum design. Based on this model, power output per unit mass can be maximized. We show that the fractional area coverage of thermoelectric elements in a module could play a significant role in reducing the cost of power generation systems.

  20. Remediation System Design Optimization: Field Demonstration at the Umatilla Army Depot

    NASA Astrophysics Data System (ADS)

    Zheng, C.; Wang, P. P.

    2002-05-01

    Since the early 1980s, many researchers have shown that the simulation-optimization (S/O) approach is superior to the traditional trial-and-error method for designing cost-effective groundwater pump-and-treat systems. However, the application of the S/O approach to real field problems has remained limited. This paper describes the application of a new general simulation-optimization code to optimize an existing pump-and-treat system at the Umatilla Army Depot in Oregon, as part of a field demonstration project supported by the Environmental Security Technology Certification Program (ESTCP). Two optimization formulations were developed to minimize the total capital and operational costs under the current and possibly expanded treatment plant capacities. A third formulation was developed to minimize the total contaminant mass of RDX and TNT remaining in the shallow aquifer by the end of the project duration. For the first two formulations, this study produced an optimal pumping strategy that would achieve the cleanup goal in 4 years with a total cost of 1.66 million US dollars in net present value. For comparison, the existing design in operation was calculated to require 17 years for cleanup with a total cost of 3.83 million US dollars in net present value. Thus, the optimal pumping strategy represents a reduction of 13 years in cleanup time and a reduction of 56.6 percent in the expected total expenditure. For the third formulation, this study identified an optimal dynamic pumping strategy that would reduce the total mass remaining in the shallow aquifer by 89.5 percent compared with that calculated for the existing design. Despite their intensive computational requirements, this study shows that global optimization techniques, including tabu search and genetic algorithms, can be applied successfully to large-scale field problems involving multiple contaminants and complex hydrogeological conditions.

  1. Optimizing resource and energy recovery for materials and waste management

    EPA Science Inventory

    Decisions affecting materials management today are generally based on cost and a presumption of favorable outcomes without an understanding of the environmental tradeoffs. However, there is a growing demand to better understand and quantify the net environmental and energy trade-...

  2. Optimization of a vacuum chamber for vibration measurements.

    PubMed

    Danyluk, Mike; Dhingra, Anoop

    2011-10-01

    A 200 °C high vacuum chamber has been built to improve vibration measurement sensitivity. The optimized design addresses two significant issues: (i) vibration measurements under high vacuum conditions and (ii) use of design optimization tools to reduce operating costs. A test rig consisting of a cylindrical vessel with one access port has been constructed with a welded-bellows assembly used to seal the vessel and enable vibration measurements in high vacuum that are comparable with measurements in air. The welded-bellows assembly provides a force transmissibility of 0.1 or better at 15 Hz excitation under high vacuum conditions. Numerical results based on design optimization of a larger diameter chamber are presented. The general constraints on the new design include material yield stress, chamber first natural frequency, vibration isolation performance, and forced convection heat transfer capabilities over the exterior of the vessel access ports. Operating costs of the new chamber are reduced by 50% compared to a preexisting chamber of similar size and function.

  3. Design and analysis of a flight plan optimization system for aircraft

    NASA Astrophysics Data System (ADS)

    Maazoun, Wissem

    The main objective of this thesis is to develop an optimization method for the preparation of flight plans for aircraft. The flight plan minimizes all costs associated with the flight. We determine an optimal path for an airplane from a departure airport to a destination airport. The optimal path minimizes the sum of all costs, i.e. the cost of fuel added to the cost of time (wages, rental of the aircraft, arrival delays, etc.). The optimal trajectory is obtained by considering all possible trajectories on a 3D graph (longitude, latitude and altitude), where the altitude levels are separated by 2,000 feet, and by applying a shortest path algorithm. The main task was to accurately compute fuel consumption on each edge of the graph, making sure that each arc has a minimal cost and is covered in a realistic way from the point of view of control, i.e. in accordance with the rules of navigation. To compute the cost of an arc, we take into account weather conditions (temperature, pressure, wind components, etc.). The optimization of each arc is done via the evaluation of an optimum speed that takes all costs into account. Each arc of the graph typically includes several sub-phases of the flight, e.g. altitude change, speed change, and constant speed and altitude. In the initial climb and the final descent phases, the costs are determined by considering altitude changes at constant CAS (Calibrated Air Speed) or constant Mach number. CAS and Mach number are adjusted to minimize cost. The aerodynamic model used is the one proposed by Eurocontrol, which uses the BADA (Base of Aircraft Data) tables. This model is based on the total energy equation that determines the instantaneous fuel consumption. Calculations on each arc are done by solving a system of differential equations that systematically takes all costs into account. To compute the cost of an arc, we must know the time to go through it, which is generally unknown. To have well-posed boundary conditions, we use the horizontal displacement as the independent variable of the system of differential equations. We consider the velocity components of the wind in a 3D system of coordinates to compute the instantaneous ground speed of the aircraft. To consider the cost of time, we use the cost index. The cost of an arc depends on the aircraft mass at the beginning of this arc, and this mass depends on the path. As we consider all possible paths, the cost of an arc must be computed for each trajectory to which it belongs. For a long-distance flight, the number of arcs to be considered in the graph is large, and therefore the cost of an arc is typically computed many times. Our algorithm computes the costs of one million arcs in seconds with high accuracy. The determination of the optimal trajectory can therefore be done in a short time. To get the optimal path, the mass of the aircraft at the departure point must also be optimal. It is therefore necessary to know the optimal amount of fuel for the journey. The aircraft mass is known only at the arrival point: it is the mass of the aircraft including passengers, cargo and reserve fuel. The optimal path is determined by calculating backwards, i.e. from the arrival point to the departure point. For the determination of the optimal trajectory, we use an elliptical grid that has focal points at the departure and arrival points. The use of this grid is essential for the construction of a directed acyclic graph.
We use the Bellman-Ford algorithm on a DAG to determine the shortest path. This algorithm is easy to implement and results in short computation times. Our algorithm computes an optimal trajectory with an optimal cost for each arc. Altitude changes are done optimally with respect to the mass of the aircraft and the cost of time. Our algorithm gives the mass, speed, altitude and total cost at any point of the trajectory as well as the optimal profiles of climb and descent. A prototype has been implemented in C. We made simulations of all types of possible arcs and of several complete trajectories to illustrate the behaviour of the algorithm.
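
    A minimal sketch of the shortest-path core described above: single-source relaxation over a DAG whose node labels 0, 1, ..., n-1 already form a topological order (the thesis applies Bellman-Ford on a DAG). Here arc costs are plain numbers; in the thesis each arc cost comes from integrating fuel and time costs along the arc. The graph is illustrative.

    def dag_shortest_path(n, arcs, source):
        """arcs: dict {(u, v): cost} with u < v, so 0..n-1 is a topological order."""
        adj = [[] for _ in range(n)]
        for (u, v), c in arcs.items():
            adj[u].append((v, c))
        INF = float("inf")
        dist, pred = [INF] * n, [None] * n
        dist[source] = 0.0
        for u in range(n):                   # relax nodes in topological order
            if dist[u] < INF:
                for v, c in adj[u]:
                    if dist[u] + c < dist[v]:
                        dist[v], pred[v] = dist[u] + c, u
        return dist, pred

    arcs = {(0, 1): 2.5, (1, 2): 1.0, (0, 2): 4.0, (2, 3): 3.0, (1, 3): 5.0}
    dist, pred = dag_shortest_path(4, arcs, source=0)

    path, v = [], 3                          # walk predecessors back from node 3
    while v is not None:
        path.append(v)
        v = pred[v]
    print("cost", dist[3], "path", path[::-1])   # -> cost 6.5 path [0, 1, 2, 3]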

  4. General purpose optimization software for engineering design

    NASA Technical Reports Server (NTRS)

    Vanderplaats, G. N.

    1990-01-01

    The author has developed several general purpose optimization programs over the past twenty years. The earlier programs were developed as research codes and served that purpose reasonably well. However, in taking the formal step from research to industrial application programs, several important lessons have been learned. Among these are the importance of clear documentation, immediate user support, and consistent maintenance. Most important has been the issue of providing software that gives a good, or at least acceptable, design at minimum computational cost. Here, the basic issues in developing optimization software for industrial applications are outlined, and issues of convergence rate, reliability, and relative minima are discussed. Considerable feedback has been received from users, and new software is being developed to respond to identified needs. The basic capabilities of this software are outlined. A major motivation for the development of commercial grade software is ease of use and flexibility, and these issues are discussed with reference to general multidisciplinary applications. It is concluded that design productivity can be significantly enhanced by the more widespread use of optimization as an everyday design tool.

  5. Modeling the Economic Feasibility of Large-Scale Net-Zero Water Management: A Case Study.

    PubMed

    Guo, Tianjiao; Englehardt, James D; Fallon, Howard J

      While municipal direct potable water reuse (DPR) has been recommended for consideration by the U.S. National Research Council, it is unclear how to size new closed-loop DPR plants, termed "net-zero water (NZW) plants", to minimize cost and energy demand assuming upgradient water distribution. Based on a recent model optimizing the economics of plant scale for generalized conditions, the authors evaluated the feasibility and optimal scale of NZW plants for treatment capacity expansion in Miami-Dade County, Florida. Local data on population distribution and topography were input to compare projected costs for NZW vs the current plan. Total cost was minimized at a scale of 49 NZW plants for the service population of 671,823. Total unit cost for NZW systems, which mineralize chemical oxygen demand to below normal detection limits, is projected at ~$10.83 / 1000 gal, approximately 13% above the current plan and less than rates reported for several significant U.S. cities.

  6. Minimizing Project Cost by Integrating Subcontractor Selection Decisions with Scheduling

    NASA Astrophysics Data System (ADS)

    Biruk, Sławomir; Jaśkowski, Piotr; Czarnigowska, Agata

    2017-10-01

    Subcontracting has been a worldwide practice in the construction industry. It enables construction enterprises to focus on their core competences and, at the same time, makes complex projects possible to deliver. Since general contractors bear full responsibility for the works carried out by their subcontractors, it is their task and their risk to select the right subcontractor for a particular work. Although subcontractor management is acknowledged to significantly affect construction project performance, current practices and past research treat subcontractor management and scheduling separately. The proposed model aims to support subcontracting decisions by integrating subcontractor selection with scheduling, enabling the general contractor to select the optimal combination of subcontractors and own crews for all work packages of the project. The model allows for the interactions between the subcontractors and their impacts on overall project performance in terms of cost and, indirectly, time and quality. The model is intended to be used at the general contractor’s bid preparation stage. The authors claim that subcontracting decisions should be taken in a two-stage process. The first stage is prequalification - provision of a short list of capable and reliable subcontractors; this stage is not the focus of the paper. The resulting pool of available resources is divided into two subsets: subcontractors, and the general contractor’s in-house crews. Once it has been defined, the next stage is to assign these resources to the work packages that, bound by fixed precedence constraints, form the project’s network diagram. Each package can be delivered by the general contractor’s crew or one of the potential subcontractors, at a specific time and cost. Particular crews and subcontractors can be contracted for more than one package, but not at the same time. Other constraints include the predefined project completion date (the project is not allowed to take longer) and the maximum total value of subcontracted work. The problem is modelled as a mixed binary linear program that minimizes project cost. It can be solved using universal solvers (e.g. LINGO, AIMMS, CPLEX, MATLAB and Optimization Toolbox, etc.). However, developing a dedicated decision-support tool would facilitate practical applications. To illustrate the idea of the model, the authors present a numerical example that finds the optimal set of resources allocated to a project; a simplified sketch of this assignment decision follows.
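
    The sketch below is a brute-force stand-in for that mixed binary linear program, under stated assumptions (a three-package project, serial work by each resource, and illustrative durations and costs): every package gets one resource, packages respect precedence, and the cheapest assignment meeting the deadline wins.

    from itertools import product

    packages = ["A", "B", "C"]               # B and C both depend on A
    preds = {"A": [], "B": ["A"], "C": ["A"]}
    options = {  # resource: {package: (duration in days, cost)}
        "own crew": {"A": (10, 8.0), "B": (12, 9.0), "C": (11, 8.5)},
        "sub 1": {"A": (8, 10.0), "B": (9, 12.0), "C": (9, 11.0)},
        "sub 2": {"A": (9, 9.0), "B": (10, 10.5), "C": (10, 10.0)},
    }
    deadline = 28.0

    def schedule(assign):
        """Each package starts when its predecessors are done and its assigned
        resource is free; returns (makespan, total cost)."""
        finish, free = {}, {r: 0.0 for r in options}
        for p in packages:                   # packages listed in topological order
            r = assign[p]
            start = max([free[r]] + [finish[q] for q in preds[p]])
            finish[p] = start + options[r][p][0]
            free[r] = finish[p]
        return max(finish.values()), sum(options[assign[p]][p][1] for p in packages)

    best = None
    for combo in product(options, repeat=len(packages)):
        assign = dict(zip(packages, combo))
        makespan, cost = schedule(assign)
        if makespan <= deadline and (best is None or cost < best[0]):
            best = (cost, assign, makespan)
    print(best)   # cheapest assignment that still meets the deadline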

  7. DORCA computer program. Volume 1: User's guide

    NASA Technical Reports Server (NTRS)

    Wray, S. T., Jr.

    1971-01-01

    The Dynamic Operational Requirements and Cost Analysis Program (DORCA) was written to provide a top level analysis tool for NASA. DORCA relies on man-machine interaction to optimize results based on external criteria. DORCA relies heavily on outside sources for cost information and vehicle performance parameters, as the program does not determine these quantities but rather uses them. Given data describing missions, vehicles, payloads, containers, space facilities, schedules, cost values and costing procedures, the program computes flight schedules, cargo manifests, vehicle fleet requirements, acquisition schedules and cost summaries. The program is designed to consider the Earth Orbit, Lunar, Interplanetary and Automated Satellite Programs. A general outline of the capabilities of the program is provided.

  8. Tort reform and "smart" highways : are liability concerns impeding the development of cost-effective intelligent vehicle-highway systems? : final report.

    DOT National Transportation Integrated Search

    1994-01-01

    Highly automated vehicles and highways--which permit higher travel speeds, narrower lanes, smaller headways between vehicles, and optimized routing (collectively called intelligent vehicle-highway systems or IVHS)-- have been generally conceded to be...

  9. Cost-effectiveness on a local level: whether and when to adopt a new technology.

    PubMed

    Woertman, Willem H; Van De Wetering, Gijs; Adang, Eddy M M

    2014-04-01

    Cost-effectiveness analysis has become a widely accepted tool for decision making in health care. The standard textbook cost-effectiveness analysis focuses on whether to make the switch from an old or common practice technology to an innovative technology, and in doing so, it takes a global perspective. In this article, we are interested in a local perspective, and we look at the questions of whether and when the switch from old to new should be made. A new approach to cost-effectiveness from a local (e.g., a hospital) perspective, by means of a mathematical model for cost-effectiveness that explicitly incorporates time, is proposed. A decision rule is derived for establishing whether a new technology should be adopted, as well as a general rule for establishing when it pays to postpone adoption by 1 more period, and a set of decision rules that can be used to determine the optimal timing of adoption. Finally, a simple example is presented to illustrate our model and how it leads to optimal decision making in a number of cases.

  10. Optimization of wastewater treatment alternative selection by hierarchy grey relational analysis.

    PubMed

    Zeng, Guangming; Jiang, Ru; Huang, Guohe; Xu, Min; Li, Jianbing

    2007-01-01

    This paper describes an innovative systematic approach, namely hierarchy grey relational analysis, for the optimal selection of wastewater treatment alternatives, based on the application of the analytic hierarchy process (AHP) and grey relational analysis (GRA). It can be applied to complicated multicriteria decision-making to obtain scientific and reasonable results. The effectiveness of this approach was verified through a real case study. Four wastewater treatment alternatives (A2/O, triple oxidation ditch, anaerobic single oxidation ditch and SBR) were evaluated and compared against multiple economic, technical and administrative performance criteria, including capital cost, operation and maintenance (O and M) cost, land area, removal of nitrogenous and phosphorous pollutants, sludge disposal effect, stability of plant operation, maturity of technology and professional skills required for O and M. The result illustrated that the anaerobic single oxidation ditch was the optimal scheme and would obtain the maximum general benefits for the wastewater treatment plant to be constructed.
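
    A minimal sketch of the grey relational analysis step in its usual textbook form (the paper couples it with AHP-derived weights, omitted here): normalize criteria so larger is better, take the per-criterion ideal as the reference series, and score each alternative by its mean grey relational coefficient. The criteria values below are contrived for illustration.

    def gra_scores(matrix, rho=0.5):
        """matrix[i][k]: normalized value (larger better) of alternative i on criterion k."""
        ref = [max(col) for col in zip(*matrix)]      # ideal reference series
        deltas = [[abs(r - x) for r, x in zip(ref, row)] for row in matrix]
        dmin = min(min(row) for row in deltas)
        dmax = max(max(row) for row in deltas)
        coeff = [[(dmin + rho * dmax) / (d + rho * dmax) for d in row] for row in deltas]
        return [sum(row) / len(row) for row in coeff]

    alts = ["A2/O", "triple oxidation ditch", "anaerobic single oxidation ditch", "SBR"]
    data = [  # cost score, nutrient removal, operational stability (0-1, larger better)
        [0.55, 0.90, 0.70],
        [0.70, 0.75, 0.85],
        [0.90, 0.80, 0.80],
        [0.60, 0.85, 0.65],
    ]
    for name, score in sorted(zip(alts, gra_scores(data)), key=lambda t: -t[1]):
        print(f"{name:34s} {score:.3f}")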

  11. Systematic Sensor Selection Strategy (S4) User Guide

    NASA Technical Reports Server (NTRS)

    Sowers, T. Shane

    2012-01-01

    This paper describes a User Guide for the Systematic Sensor Selection Strategy (S4). S4 was developed to optimally select a sensor suite from a larger pool of candidate sensors based on their performance in a diagnostic system. For aerospace systems, selecting the proper sensors is important for ensuring adequate measurement coverage to satisfy operational, maintenance, performance, and system diagnostic criteria. S4 optimizes the selection of sensors based on the system fault diagnostic approach while taking conflicting objectives such as cost, weight and reliability into consideration. S4 can be described as a general architecture structured to accommodate application-specific components and requirements. It performs combinatorial optimization with a user-defined merit or cost function to identify optimum or near-optimum sensor suite solutions. The S4 User Guide describes the sensor selection procedure and presents an example problem using an open source turbofan engine simulation to demonstrate its application.
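
    S4 itself couples a user-defined merit function with combinatorial search; the sketch below shows a simpler greedy variant of the same down-selection idea, with the rule, fault set, sensor data, and budget all being illustrative assumptions: repeatedly add the sensor with the best coverage gain per unit cost until the budget is exhausted.

    faults = {"f1", "f2", "f3", "f4"}
    sensors = {  # sensor: (faults it helps discriminate, cost)
        "s1": ({"f1", "f2"}, 2.0),
        "s2": ({"f2", "f3"}, 1.0),
        "s3": ({"f3", "f4"}, 1.5),
        "s4": ({"f1", "f2", "f3"}, 3.5),
    }
    budget = 5.0

    chosen, covered, spent = [], set(), 0.0
    while True:
        def gain(s):   # newly covered faults per unit cost
            new, cost = sensors[s]
            return len(new - covered) / cost
        candidates = [s for s in sensors if s not in chosen
                      and spent + sensors[s][1] <= budget and gain(s) > 0]
        if not candidates:
            break
        pick = max(candidates, key=gain)
        chosen.append(pick)
        covered |= sensors[pick][0]
        spent += sensors[pick][1]
    print(chosen, covered, spent)   # -> ['s2', 's3', 's1'], full coverage, cost 4.5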

  12. A Bayesian multi-stage cost-effectiveness design for animal studies in stroke research

    PubMed Central

    Cai, Chunyan; Ning, Jing; Huang, Xuelin

    2017-01-01

    Much progress has been made in the area of adaptive designs for clinical trials. However, little has been done regarding adaptive designs to identify optimal treatment strategies in animal studies. Motivated by an animal study of a novel strategy for treating strokes, we propose a Bayesian multi-stage cost-effectiveness design to simultaneously identify the optimal dose and determine the therapeutic treatment window for administrating the experimental agent. We consider a non-monotonic pattern for the dose-schedule-efficacy relationship and develop an adaptive shrinkage algorithm to assign more cohorts to admissible strategies. We conduct simulation studies to evaluate the performance of the proposed design by comparing it with two standard designs. These simulation studies show that the proposed design yields a significantly higher probability of selecting the optimal strategy, while it is generally more efficient and practical in terms of resource usage. PMID:27405325

  13. Optimized Quasi-Interpolators for Image Reconstruction.

    PubMed

    Sacht, Leonardo; Nehab, Diego

    2015-12-01

    We propose new quasi-interpolators for the continuous reconstruction of sampled images, combining a narrowly supported piecewise-polynomial kernel and an efficient digital filter. In other words, our quasi-interpolators fit within the generalized sampling framework and are straightforward to use. We go against standard practice and optimize for approximation quality over the entire Nyquist range, rather than focusing exclusively on the asymptotic behavior as the sample spacing goes to zero. In contrast to previous work, we jointly optimize with respect to all degrees of freedom available in both the kernel and the digital filter. We consider linear, quadratic, and cubic schemes, offering different tradeoffs between quality and computational cost. Experiments with compounded rotations and translations over a range of input images confirm that, due to the additional degrees of freedom and the more realistic objective function, our new quasi-interpolators perform better than the state of the art, at a similar computational cost.

  14. Towards a hierarchical optimization modeling framework for ...

    EPA Pesticide Factsheets

    Background: Bilevel optimization has been recognized as a 2-player Stackelberg game where players are represented as leaders and followers and each pursue their own set of objectives. Hierarchical optimization problems, which are a generalization of bilevel problems, are especially difficult because the optimization is nested, meaning that the objectives of one level depend on solutions to the other levels. We introduce a hierarchical optimization framework for spatially targeting multiobjective green infrastructure (GI) incentive policies under uncertainties related to policy budget, compliance, and GI effectiveness. We demonstrate the utility of the framework using a hypothetical urban watershed, where the levels are characterized by multiple levels of policy makers (e.g., local, regional, national) and policy followers (e.g., landowners, communities), and objectives include minimization of policy cost, implementation cost, and risk; reduction of combined sewer overflow (CSO) events; and improvement in environmental benefits such as reduced nutrient run-off and water availability. Conclusions: While computationally expensive, this hierarchical optimization framework explicitly simulates the interaction between multiple levels of policy makers and policy followers and is especially useful for constructing and evaluating environmental and ecological policy.

  15. Design and analysis of electricity markets

    NASA Astrophysics Data System (ADS)

    Sioshansi, Ramteen Mehr

    Restructured competitive electricity markets rely on designing market-based mechanisms which can efficiently coordinate the power system and minimize the exercise of market power. This dissertation is a series of essays which develop and analyze models of restructured electricity markets. Chapter 2 studies the incentive properties of a co-optimized market for energy and reserves that pays reserved generators their implied opportunity cost, which is the difference between their stated energy cost and the market-clearing price for energy. By analyzing the market as a competitive direct revelation mechanism we examine the properties of efficient equilibria and demonstrate that generators have incentives to shade their stated costs below actual costs. We further demonstrate that the expected energy payments of our mechanism are less than those in a disjoint market for energy only. Chapter 3 is an empirical validation of a supply function equilibrium (SFE) model. By comparing theoretically optimal supply functions and actual generation offers into the Texas spot balancing market, we show the SFE to fit the actual behavior of the largest generators in the market. This not only serves to validate the model, but also demonstrates the extent to which firms exercise market power. Chapters 4 and 5 examine equity, incentive, and efficiency issues in the design of non-convex commitment auctions. We demonstrate that different near-optimal solutions to a central unit commitment problem which have similar-sized optimality gaps will generally yield vastly different energy prices and payoffs to individual generators. Although solving the mixed integer program to optimality will overcome such issues, we show that this relies on achieving optimality of the commitment, which may not be tractable for large-scale problems within the allotted timeframe. We then simulate and compare a competitive benchmark for a market with centralized and self commitment in order to bound the efficiency losses stemming from coordination losses (cost of anarchy) in a decentralized market.

  16. Risk-based planning analysis for a single levee

    NASA Astrophysics Data System (ADS)

    Hui, Rui; Jachens, Elizabeth; Lund, Jay

    2016-04-01

    Traditional risk-based analysis for levee planning focuses primarily on overtopping failure. Although many levees fail before overtopping, few planning studies explicitly include intermediate geotechnical failures in flood risk analysis. This study develops a risk-based model for two simplified levee failure modes: overtopping failure and overall intermediate geotechnical failure from through-seepage, determined by the levee cross section represented by levee height and crown width. Overtopping failure is based only on water level and levee height, while through-seepage failure depends on many geotechnical factors as well, mathematically represented here as a function of levee crown width using levee fragility curves developed from professional judgment or analysis. These levee planning decisions are optimized to minimize the annual expected total cost, which sums expected (residual) annual flood damage and annualized construction costs. Applicability of this optimization approach to planning new levees or upgrading existing levees is demonstrated preliminarily for a levee on a small river protecting agricultural land, and a major levee on a large river protecting a more valuable urban area. Optimized results show higher likelihood of intermediate geotechnical failure than overtopping failure. The effects of uncertainty in levee fragility curves, economic damage potential, construction costs, and hydrology (changing climate) are explored. Optimal levee crown width is more sensitive to these uncertainties than height, while the derived general principles and guidelines for risk-based optimal levee planning remain the same.
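
    A minimal sketch of the trade-off described above: expected annual total cost, the sum of annualized construction cost and residual flood risk, minimized over a grid of levee heights and crown widths. The exponential failure-probability curves are toy stand-ins for the paper's fragility curves, and all constants are assumptions.

    from math import exp

    def annual_total_cost(h, w, damage=50e6):
        p_overtop = exp(-1.2 * h)            # falls with levee height (toy model)
        p_seepage = exp(-0.6 * w)            # falls with crown width (toy model)
        p_fail = p_overtop + (1 - p_overtop) * p_seepage
        construction = 8e5 * h + 3e5 * w     # annualized construction cost
        return construction + p_fail * damage

    best = min(((annual_total_cost(h / 2, w / 2), h / 2, w / 2)
                for h in range(2, 17) for w in range(2, 25)), key=lambda t: t[0])
    print(f"cost ${best[0] / 1e6:.2f} M/yr at height {best[1]} m, crown width {best[2]} m")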

  17. Effect of land tenure and stakeholders attitudes on optimization of conservation practices in agricultural watersheds

    NASA Astrophysics Data System (ADS)

    Piemonti, A. D.; Babbar-Sebens, M.; Luzar, E. J.

    2012-12-01

    Modeled watershed management plans have become valuable tools for evaluating the effectiveness and impacts of conservation practices on hydrologic processes in watersheds. In multi-objective optimization approaches, several studies have focused on maximizing physical, ecological, or economic benefits of practices in a specific location, without considering the relationship between social systems and social attitudes on the overall optimality of the practice at that location. For example, objectives that have been commonly used in spatial optimization of practices are economic costs, sediment loads, nutrient loads and pesticide loads. Though the benefits derived from these objectives are generally oriented towards community preferences, they do not represent attitudes of landowners who might operate their land differently than their neighbors (e.g. farm their own land or rent the land to someone else) and might have different social/personal drivers that motivate them to adopt the practices. In addition, a distribution of such landowners could exist in the watershed, leading to spatially varying preferences for practices. In this study we evaluated the effect of three different land tenure types on the spatial optimization of conservation practices. To perform the optimization, we used a uniform distribution of land tenure type and a spatially varying distribution of land tenure type. Our results show that for a typical Midwestern agricultural watershed, the best solutions found (i.e. highest benefits at minimum economic cost) were for a uniform distribution of landowners who operate their own land. When a different land-tenure distribution was used for the watershed, the optimized alternatives did not change significantly for nitrate reduction benefits and sediment reduction benefits, but were attained at economic costs much higher than the costs of the landowner who farms her/his own land. For example, landowners who rent to cash-renters would incur costs ~120% higher than landowners who operate their own land, to attain the same benefits. We also tested the effect of different social attitudes on the final preferences of the optimized alternatives and their consequences for the total effectiveness of the standard optimization approaches. The results suggest, for example, that when practices were removed from the system due to landowners' attitudes driven by economic profit, the modified alternatives experienced decreases in nitrate reduction of 2-50%, in peak flow reduction of 11-98%, and in sediment reduction of 20-77%.

  18. Has it become increasingly expensive to follow a nutritious diet? Insights from a new price index for nutritious diets in Sweden 1980-2012.

    PubMed

    Håkansson, Andreas

    2015-01-01

    Diet-related illnesses such as obesity and diabetes continue to increase, particularly in groups of low socioeconomic status. The increasing cost of nutritious food has been suggested as an explanation. To construct a price index describing the cost of a diet adhering to nutritional recommendations for a rational and knowledgeable consumer and, furthermore, to investigate which nutrients have become more expensive to obtain over time. Linear programming and goal programming were used to calculate two optimal and nutritious diets for each year in the period under different assumptions. The first model describes the rational choice of a cost-minimizing consumer; the second, the choice of a consumer trying to deviate as little as possible from average consumption. Shadow price analysis was used to investigate how nutrients contribute to the diet cost. The cost of a diet adhering to nutritional recommendations has not increased more than general food prices in Sweden between 1980 and 2012. However, following nutrient recommendations increases the diet cost even for a rational consumer, particularly for vitamin D, iron, and selenium. The cost of adhering to the vitamin D recommendation has increased faster than general food prices. Not adhering to recommendations (especially those for vitamin D) offers an opportunity for consumers to lower the diet cost. However, the cost of nutritious diets has not increased more than the cost of food in general between 1980 and 2012 in Sweden.
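
    A minimal sketch of the linear-programming step behind such an index, assuming scipy is available: find the least-cost basket that meets nutrient floors. The foods, prices, nutrient contents and daily requirements below are illustrative, not the Swedish data.

    from scipy.optimize import linprog

    foods = ["oats", "milk", "herring", "carrots"]
    price = [0.4, 0.9, 2.5, 0.6]                 # cost per 100 g
    nutrients = [  # protein (g), iron (mg), vitamin D (ug) per 100 g
        [13.0, 4.3, 0.0],                        # oats
        [3.4, 0.0, 1.0],                         # milk
        [18.0, 1.0, 11.0],                       # herring
        [0.9, 0.3, 0.0],                         # carrots
    ]
    need = [60.0, 9.0, 10.0]                     # daily floors for the 3 nutrients

    # linprog minimizes price @ x subject to A_ub @ x <= b_ub, so the nutrient
    # floors become -nutrients^T @ x <= -need; x[i] is in units of 100 g.
    A_ub = [[-nutrients[i][k] for i in range(len(foods))] for k in range(len(need))]
    res = linprog(c=price, A_ub=A_ub, b_ub=[-n for n in need],
                  bounds=[(0, None)] * len(foods))
    for f, x in zip(foods, res.x):
        print(f"{f:8s} {100 * x:6.0f} g")
    print(f"minimum daily diet cost: {res.fun:.2f}")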

  19. The Michigan Surgical Home and Optimization Program is a scalable model to improve care and reduce costs.

    PubMed

    Englesbe, Michael J; Grenda, Dane R; Sullivan, June A; Derstine, Brian A; Kenney, Brooke N; Sheetz, Kyle H; Palazzolo, William C; Wang, Nicholas C; Goulson, Rebecca L; Lee, Jay S; Wang, Stewart C

    2017-06-01

    The Michigan Surgical Home and Optimization Program is a structured, home-based, preoperative training program providing physical, nutritional, and psychological guidance. The purpose of this study was to determine if participation in this program was associated with reduced hospital duration of stay and health care costs. We conducted a retrospective, single center, cohort study evaluating patients who participated in the Michigan Surgical Home and Optimization Program and subsequently underwent major elective general and thoracic operative care between June 2014 and December 2015. Propensity score matching was used to match program participants to a control group who underwent operative care prior to program implementation. Primary outcome measures were hospital duration of stay and payer costs. Multivariate regression was used to determine the covariate-adjusted effect of program participation. A total of 641 patients participated in the program; 82% were actively engaged in the program, recording physical activity at least 3 times per week for the majority of the program; 182 patients were propensity matched to patients who underwent operative care prior to program implementation. Multivariate analysis demonstrated that participation in the Michigan Surgical Home and Optimization Program was associated with a 31% reduction in hospital duration of stay (P < .001) and 28% lower total costs (P < .001) after adjusting for covariates. A home-based, preoperative training program decreased hospital duration of stay, lowered costs of care, and was well accepted by patients. Further efforts will focus on broader implementation and linking participation to postoperative complications and rigorous patient-reported outcomes.

  20. A thermal vacuum test optimization procedure

    NASA Technical Reports Server (NTRS)

    Kruger, R.; Norris, H. P.

    1979-01-01

    An analytical model was developed that can be used to establish certain parameters of a thermal vacuum environmental test program based on an optimization of program costs. The model takes the form of a computer program that interacts with the user for the input of certain parameters, and it provides the user with a list of pertinent information about the optimized test program along with graphs of some of the parameters. The model is a first attempt in this area and includes numerous simplifications; it nevertheless appears useful as a general guide and provides a way of extrapolating past performance to future missions.

  1. Pareto-optimal phylogenetic tree reconciliation

    PubMed Central

    Libeskind-Hadas, Ran; Wu, Yi-Chieh; Bansal, Mukul S.; Kellis, Manolis

    2014-01-01

    Motivation: Phylogenetic tree reconciliation is a widely used method for reconstructing the evolutionary histories of gene families and species, hosts and parasites and other dependent pairs of entities. Reconciliation is typically performed using maximum parsimony, in which each evolutionary event type is assigned a cost and the objective is to find a reconciliation of minimum total cost. It is generally understood that reconciliations are sensitive to event costs, but little is understood about the relationship between event costs and solutions. Moreover, choosing appropriate event costs is a notoriously difficult problem. Results: We address this problem by giving an efficient algorithm for computing Pareto-optimal sets of reconciliations, thus providing the first systematic method for understanding the relationship between event costs and reconciliations. This, in turn, results in new techniques for computing event support values and, for cophylogenetic analyses, performing robust statistical tests. We provide new software tools and demonstrate their use on a number of datasets from evolutionary genomic and cophylogenetic studies. Availability and implementation: Our Python tools are freely available at www.cs.hmc.edu/~hadas/xscape. Contact: mukul@engr.uconn.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24932009
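
    The notion of a Pareto-optimal set of reconciliations can be illustrated without any tree machinery: if each candidate reconciliation is summarized by its event counts (duplications, transfers, losses), only count vectors that are not component-wise dominated can achieve minimum total cost for some choice of positive event costs. The sketch below uses invented counts and a brute-force filter; it is not the xscape algorithm, which computes the set efficiently over the whole cost space:

        # Hypothetical reconciliations and their (duplication, transfer, loss) counts.
        RECONCILIATIONS = {
            "r1": (3, 0, 4),
            "r2": (1, 2, 3),
            "r3": (2, 2, 5),
            "r4": (0, 3, 6),
        }

        def dominated(u, v):
            """True if v is at least as good as u for every event and better for one."""
            return (all(b <= a for a, b in zip(u, v))
                    and any(b < a for a, b in zip(u, v)))

        pareto = {name: counts for name, counts in RECONCILIATIONS.items()
                  if not any(dominated(counts, other)
                             for other in RECONCILIATIONS.values())}
        print(pareto)  # r3 is dominated by r2; r1, r2, r4 form the Pareto set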

  2. Report: Improving Nationwide Effectiveness of Pump-and-Treat Remedies Requires Sustained and Focused Action to Realize Benefits

    EPA Pesticide Factsheets

    Report #2003-P-000006, March 27, 2003. We found that, generally, the pump-and-treat optimization project has produced valuable information and recommendations for improvement regarding the cost and performance of Superfund-financed pump-and-treat systems.

  3. Ideal AFROC and FROC observers.

    PubMed

    Khurd, Parmeshwar; Liu, Bin; Gindi, Gene

    2010-02-01

    Detection of multiple lesions in images is a medically important task, and free-response receiver operating characteristic (FROC) analysis and its variants, such as alternative FROC (AFROC) analysis, are commonly used to quantify performance in such tasks. However, ideal observers that optimize FROC or AFROC performance metrics have not yet been formulated in the general case. If available, such ideal observers may prove valuable for imaging system optimization and in the design of computer-aided diagnosis techniques for lesion detection in medical images. In this paper, we derive ideal AFROC and FROC observers. They are ideal in that they maximize, amongst all decision strategies, the area, or any partial area, under the associated AFROC or FROC curve. Calculation of observer performance for these ideal observers is computationally quite complex. We can reduce this complexity by considering forms of these observers that use false positive reports derived from signal-absent images only. We also consider a Bayes risk analysis for the multiple-signal detection task with an appropriate definition of costs. A general decision strategy that minimizes Bayes risk is derived. With particular cost constraints, this general decision strategy reduces to the decision strategy associated with the ideal AFROC or FROC observer.

  4. Application of the stepwise focusing method to optimize the cost-effectiveness of genome-wide association studies with limited research budgets for genotyping and phenotyping.

    PubMed

    Ohashi, J; Clark, A G

    2005-05-01

    The recent cataloguing of a large number of SNPs enables us to perform genome-wide association studies for detecting common genetic variants associated with disease. Such studies, however, generally have limited research budgets for genotyping and phenotyping. It is therefore necessary to optimize the study design by determining the most cost-effective numbers of SNPs and individuals to analyze. In this report we applied the stepwise focusing method, with a two-stage design, developed by Satagopan et al. (2002) and Saito & Kamatani (2002), to optimize the cost-effectiveness of a genome-wide direct association study using a transmission/disequilibrium test (TDT). The stepwise focusing method consists of two steps: a large number of SNPs are examined in the first focusing step, and then all the SNPs showing a significant P-value are tested again using a larger set of individuals in the second focusing step. In the framework of optimization, the numbers of SNPs and families and the significance levels in the first and second steps were regarded as variables to be considered. Our results showed that the stepwise focusing method achieves a distinct gain in power compared to a conventional method with the same research budget.
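
    The budget arithmetic behind the two-step design is easy to sketch. The fragment below compares total genotyping cost for a one-stage design against a two-stage design with the same total sample size, where under the null roughly a fraction alpha1 of SNPs survives stage 1; all counts and prices are invented, and power, the other half of the optimization, is not modeled:

        M = 100_000               # SNPs screened genome-wide (hypothetical)
        N1, N2 = 300, 700         # families genotyped in stage 1 / stage 2
        ALPHA1 = 0.01             # stage-1 significance level (fraction carried forward)
        COST_PER_GENOTYPE = 0.05  # $ per SNP per family (hypothetical)

        one_stage = M * (N1 + N2) * COST_PER_GENOTYPE
        two_stage = (M * N1 + M * ALPHA1 * N2) * COST_PER_GENOTYPE
        print(f"one-stage: ${one_stage:,.0f}")
        print(f"two-stage: ${two_stage:,.0f} "
              f"({one_stage / two_stage:.1f}x cheaper at equal sample size)")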

  5. Optimal Cut-Off Points of Fasting Plasma Glucose for Two-Step Strategy in Estimating Prevalence and Screening Undiagnosed Diabetes and Pre-Diabetes in Harbin, China

    PubMed Central

    Sun, Bo; Lan, Li; Cui, Wenxiu; Xu, Guohua; Sui, Conglan; Wang, Yibaina; Zhao, Yashuang; Wang, Jian; Li, Hongyuan

    2015-01-01

    To identify optimal cut-off points of fasting plasma glucose (FPG) for a two-step strategy in screening abnormal glucose metabolism and estimating prevalence in a general Chinese population, a population-based cross-sectional study was conducted on 7913 people aged 20 to 74 years in Harbin. Diabetes and pre-diabetes were determined by fasting and 2 hour post-load glucose from the oral glucose tolerance test in all participants. Screening potential of FPG, cost per case identified by the two-step strategy, and optimal FPG cut-off points were described. The prevalence of diabetes was 12.7%, of which 65.2% was undiagnosed. Using the 2003 ADA criteria or the 1999 WHO criteria, 12.0% or 9.0% of participants, respectively, were diagnosed with pre-diabetes. The optimal FPG cut-off points for the two-step strategy were 5.6 mmol/l for previously undiagnosed diabetes (area under the receiver-operating characteristic curve of FPG 0.93; sensitivity 82.0%; cost per case identified by two-step strategy ¥261), 5.3 mmol/l for both diabetes and pre-diabetes or pre-diabetes alone using the 2003 ADA criteria (0.89 or 0.85; 72.4% or 62.9%; ¥110 or ¥258), 5.0 mmol/l for pre-diabetes using the 1999 WHO criteria (0.78; 66.8%; ¥399), and 4.9 mmol/l for IGT alone (0.74; 62.2%; ¥502). Using the two-step strategy, the underestimates of prevalence were reduced to nearly 38% for pre-diabetes and 18.7% for undiagnosed diabetes. Approximately a quarter of the general population in Harbin was in a hyperglycemic condition. Using optimal FPG cut-off points for the two-step strategy in a Chinese population may be more effective and less costly for reducing the missed diagnosis of hyperglycemic conditions. PMID:25785585
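
    The cut-off analysis can be mimicked on synthetic data: everyone receives the cheap FPG test, only participants at or above the cut-off receive the confirmatory OGTT, and each cut-off trades sensitivity against cost per case identified. The distributions, prevalence, and unit costs below are invented rather than the Harbin values:

        import numpy as np

        rng = np.random.default_rng(0)
        n = 10_000
        diabetic = rng.random(n) < 0.08          # hypothetical prevalence
        fpg = np.where(diabetic,
                       rng.normal(6.8, 1.0, n),  # diabetic FPG, mmol/l (made up)
                       rng.normal(5.0, 0.5, n))  # non-diabetic FPG (made up)

        COST_FPG, COST_OGTT = 5.0, 30.0          # hypothetical unit costs
        for cutoff in (4.9, 5.3, 5.6, 6.1):
            referred = fpg >= cutoff             # sent on to the confirmatory OGTT
            true_pos = (referred & diabetic).sum()
            sensitivity = true_pos / diabetic.sum()
            total_cost = n * COST_FPG + referred.sum() * COST_OGTT
            print(f"cutoff {cutoff}: sensitivity {sensitivity:.0%}, "
                  f"cost/case {total_cost / max(true_pos, 1):.0f}")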

  6. Impacts of Valuing Resilience on Cost-Optimal PV and Storage Systems for Commercial Buildings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Laws, Nicholas D; Anderson, Katherine H; DiOrio, Nicholas A

    Decreasing electric grid reliability in the US, along with increasing severe weather events, has greatly increased interest in resilient energy systems. Few studies have included the value of resilience when sizing PV and Battery Energy Storage Systems (BESS), and none have included the cost to island a PV and BESS, grid-connected costs and benefits, and the value of resilience. This work presents a novel method for incorporating the value of resilience provided by a PV and BESS into a techno-economic optimization model. Including the value of resilience in the design of a cost-optimal PV and BESS generally increases the system capacities, and in some cases makes a system economical where it was not before. For example, for a large hotel in Anaheim, CA, no system is economical without resilience valued; however, with a $5317/hr value of resilience, a 363 kW PV and 60 kWh BESS provides a net present value of $50,000. Lastly, we discuss the effect of the 'islandable premium', which must be balanced against the benefits from serving critical loads during outages. Case studies show that the islandable premium can vary widely, which highlights the necessity for case-by-case solutions in a rapidly developing market.
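
    A back-of-envelope version of the valuation logic, with every figure hypothetical (the study's model also optimizes sizing and dispatch): crediting a per-hour value for critical load served during outages raises the annual benefit stream and can flip a project's net present value from negative to positive.

        def npv(annual_benefit, capital_cost, rate=0.06, years=20):
            """Net present value of a uniform annual benefit stream."""
            pv = sum(annual_benefit / (1 + rate) ** t for t in range(1, years + 1))
            return pv - capital_cost

        capital = 900_000.0            # installed PV + BESS cost, $ (hypothetical)
        grid_savings = 60_000.0        # annual bill savings, $/yr (hypothetical)
        outage_hours = 4               # expected outage hours ridden through per year
        value_of_resilience = 5_000.0  # $ per hour of critical load served (assumed)

        print("NPV without resilience:", round(npv(grid_savings, capital)))
        print("NPV with resilience:   ",
              round(npv(grid_savings + outage_hours * value_of_resilience, capital)))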

  7. SimWIND: A Geospatial Infrastructure Model for Wind Energy Production and Transmission

    NASA Astrophysics Data System (ADS)

    Middleton, R. S.; Phillips, B. R.; Bielicki, J. M.

    2009-12-01

    Wind is a clean, enduring energy resource with a capacity to satisfy 20% or more of the electricity needs in the United States. A chief obstacle to realizing this potential is the general paucity of electrical transmission lines between promising wind resources and primary load centers. Successful exploitation of this resource will therefore require carefully planned enhancements to the electric grid. To this end, we present the model SimWIND for self-consistent optimization of the geospatial arrangement and cost of wind energy production and transmission infrastructure. Given a set of wind farm sites that satisfy meteorological viability and stakeholder interest, our model simultaneously determines where and how much electricity to produce, where to build new transmission infrastructure and with what capacity, and where to use existing infrastructure in order to minimize the cost for delivering a given amount of electricity to key markets. Costs and routing of transmission line construction take into account geographic and social factors, as well as connection and delivery expenses (transformers, substations, etc.). We apply our model to Texas and consider how findings complement the 2008 Electric Reliability Council of Texas (ERCOT) Competitive Renewable Energy Zones (CREZ) Transmission Optimization Study. Results suggest that integrated optimization of wind energy infrastructure and cost using SimWIND could play a critical role in wind energy planning efforts.

  8. A new approach to approximating the linear quadratic optimal control law for hereditary systems with control delays

    NASA Technical Reports Server (NTRS)

    Milman, M. H.

    1985-01-01

    A factorization approach is presented for deriving approximations to the optimal feedback gain for the linear regulator-quadratic cost problem associated with time-varying functional differential equations with control delays. The approach is based on a discretization of the state penalty which leads to a simple structure for the feedback control law. General properties of the Volterra factors of Hilbert-Schmidt operators are then used to obtain convergence results for the feedback kernels.

  9. Studying Treatment Intensity: Lessons from Two Preliminary Studies

    ERIC Educational Resources Information Center

    Neil, Nicole; Jones, Emily A.

    2015-01-01

    Determining how best to meet the needs of learners with Down syndrome requires an approach to intervention delivered at some level of intensity. How treatment intensity affects learner acquisition, maintenance, and generalization of skills can help optimize the efficiency and cost-effectiveness of interventions. There is a growing body of research…

  10. Variable Complexity Structural Optimization of Shells

    NASA Technical Reports Server (NTRS)

    Haftka, Raphael T.; Venkataraman, Satchi

    1999-01-01

    Structural designers today face both opportunities and challenges in a vast array of available analysis and optimization programs. Some programs, such as NASTRAN, are very general, permitting the designer to model any structure to any degree of accuracy, but often at a higher computational cost. Additionally, such general procedures often do not allow easy implementation of all constraints of interest to the designer. Other programs, based on algebraic expressions used by designers one generation ago, have limited applicability for general structures with modern materials. However, when applicable, they provide easy understanding of design trade-offs. Finally, designers can also use specialized programs suitable for efficiently designing a subset of structural problems. For example, PASCO and PANDA2 are panel design codes, which calculate response and estimate failure much more efficiently than general-purpose codes, but are narrowly applicable in terms of geometry and loading. Therefore, the problem of optimizing structures based on simultaneous use of several models and computer programs is a subject of considerable interest. The problem of using several levels of models in optimization has been dubbed variable complexity modeling. Work under NASA grant NAG1-2110 has been concerned with the development of variable complexity modeling strategies with special emphasis on response surface techniques. In addition, several modeling issues for the design of shells of revolution were studied.

  11. Variable Complexity Structural Optimization of Shells

    NASA Technical Reports Server (NTRS)

    Haftka, Raphael T.; Venkataraman, Satchi

    1998-01-01

    Structural designers today face both opportunities and challenges in a vast array of available analysis and optimization programs. Some programs, such as NASTRAN, are very general, permitting the designer to model any structure to any degree of accuracy, but often at a higher computational cost. Additionally, such general procedures often do not allow easy implementation of all constraints of interest to the designer. Other programs, based on algebraic expressions used by designers one generation ago, have limited applicability for general structures with modern materials. However, when applicable, they provide easy understanding of design trade-offs. Finally, designers can also use specialized programs suitable for efficiently designing a subset of structural problems. For example, PASCO and PANDA2 are panel design codes, which calculate response and estimate failure much more efficiently than general-purpose codes, but are narrowly applicable in terms of geometry and loading. Therefore, the problem of optimizing structures based on simultaneous use of several models and computer programs is a subject of considerable interest. The problem of using several levels of models in optimization has been dubbed variable complexity modeling. Work under NASA grant NAG1-1808 has been concerned with the development of variable complexity modeling strategies with special emphasis on response surface techniques. In addition, several modeling issues for the design of shells of revolution were studied.

  12. Feasibility of Floating Platform Systems for Wind Turbines: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Musial, W.; Butterfield, S.; Boone, A.

    This paper provides a general technical description of several types of floating platforms for wind turbines. Platform topologies are classified into multiple- or single-turbine floaters and by mooring method. Platforms using catenary mooring systems are contrasted to vertical mooring systems and the advantages and disadvantages are discussed. Specific anchor types are described in detail. A rough cost comparison is performed for two different platform architectures using a generic 5-MW wind turbine. One platform is a Dutch study of a tri-floater platform using a catenary mooring system, and the other is a mono-column tension-leg platform developed at the National Renewable Energy Laboratory. Cost estimates showed that single unit production cost is $7.1 M for the Dutch tri-floater, and $6.5 M for the NREL TLP concept. However, value engineering, multiple unit series production, and platform/turbine system optimization can lower the unit platform costs to $4.26 M and $2.88 M, respectively, with significant potential to reduce cost further with system optimization. These foundation costs are within the range necessary to bring the cost of energy down to the DOE target range of $0.05/kWh for large-scale deployment of offshore floating wind turbines.

  13. Lq-Lp optimization for multigrid fluorescence tomography of small animals using simplified spherical harmonics

    NASA Astrophysics Data System (ADS)

    Edjlali, Ehsan; Bérubé-Lauzière, Yves

    2018-01-01

    We present the first Lq-Lp optimization scheme for fluorescence tomographic imaging, applied here to small animal imaging. Fluorescence tomography is an ill-posed and, in full generality, nonlinear problem that seeks to image the 3D concentration distribution of a fluorescent agent inside a biological tissue. Standard candidates for regularization to deal with the ill-posedness of the image reconstruction problem include L1 and L2 regularization. In this work, a general Lq-Lp regularization framework (Lq discrepancy function, Lp regularization term) is introduced for fluorescence tomographic imaging. A method to calculate the gradient for this general framework is developed, which allows evaluating the performance of different cost functions/regularization schemes in solving the fluorescence tomographic problem. The simplified spherical harmonics approximation is used to accurately model light propagation inside the tissue. Furthermore, a multigrid mesh is utilized to decrease the dimension of the inverse problem and reduce the computational cost of the solution. The inverse problem is solved iteratively using a limited-memory BFGS quasi-Newton optimization method. The simulations are performed under different scenarios of noisy measurements. These are carried out on the Digimouse numerical mouse model with the kidney being the target organ. The evaluation of the reconstructed images is performed both qualitatively and quantitatively using several metrics, including QR, RMSE, CNR, and TVE, under rigorous conditions. The best reconstruction results under different scenarios are obtained with an L1.5-L1 scheme with premature termination of the optimization process. This is in contrast to approaches commonly found in the literature relying on L2-L2 schemes.

  14. Unpredictable food supply modifies costs of reproduction and hampers individual optimization.

    PubMed

    Török, János; Hegyi, Gergely; Tóth, László; Könczey, Réka

    2004-11-01

    Investment into the current reproductive attempt is thought to be at the expense of survival and/or future reproduction. Individuals are therefore expected to adjust their decisions to their physiological state and predictable aspects of environmental quality. The main predictions of the individual optimization hypothesis for bird clutch sizes are: (1) an increase in the number of recruits with an increasing number of eggs in natural broods, with no corresponding impairment of parental survival or future reproduction, and (2) a decrease in the fitness of parents in response to both negative and positive brood size manipulation, as a result of a low number of recruits, poor future reproduction of parents, or both. We analysed environmental influences on the costs and optimization of reproduction using 6 years of natural and experimentally manipulated broods in a Central European population of the collared flycatcher. Based on dramatic differences in caterpillar availability, we classified breeding seasons as average and rich food years. The categorization was substantiated by the majority of present and future fitness components of adults and offspring. Neither observational nor experimental data supported the individual optimization hypothesis, in contrast to a Scandinavian population of the species. The quality of fledglings deteriorated, and the number of recruits did not increase, with natural clutch size. Manipulation revealed significant costs of reproduction to female parents in terms of future reproductive potential. However, the influence of manipulation on recruitment was linear, with no significant polynomial effect. The number of recruits increased with manipulation in rich food years and tended to decrease in average years, so control broods did not recruit more young than manipulated broods in either year type. This indicates that females did not optimize their clutch size, and that they generally laid fewer eggs than optimal in rich food years. Mean yearly clutch size did not follow food availability, which suggests that females cannot predict the food supply of the brood-rearing period at the beginning of the season. This lack of information on future food conditions seems to prevent them from accurately estimating their optimal clutch size for each season. Our results suggest that individual optimization may not be a general pattern even within a species, and alternative mechanisms are needed to explain clutch size variation.

  15. Optimizing the construction of devices to control inaccessible surfaces - case study

    NASA Astrophysics Data System (ADS)

    Niţu, E. L.; Costea, A.; Iordache, M. D.; Rizea, A. D.; Babă, Al

    2017-10-01

    The modern concept for the evolution of manufacturing systems requires multi-criteria optimization of technological processes and equipment, prioritizing the associated criteria according to their importance. Technological preparation of manufacturing can be developed, depending on the volume of production, up to the limit of favourable economic effects related to recovering the costs of designing and building the technological equipment. Devices, as subsystems of the technological system, are made in a multitude of constructive variants in the general context of modernization and diversification of machines, tools, semi-finished products, and drives, and in many cases this multitude does not allow their systematic identification, study, and improvement. This paper presents a case study in which a multi-criteria analysis of candidate structures, based on a novel general optimization method, is used to determine the optimal construction variant of a control device. The rational construction of the resulting control device confirms that the optimization method and the proposed calculation methods are correct, and it yields a different system configuration, new features and functions, and a specific method of working to control inaccessible surfaces.

  16. Work–Life Balance: History, Costs, and Budgeting for Balance

    PubMed Central

    Raja, Siva; Stein, Sharon L.

    2014-01-01

    The concept and difficulties of work–life balance are not unique to surgeons, but professional responsibilities make maintaining a work–life balance difficult. Consequences of being exclusively career focused include burnout and physical and mental ailments. In addition, physician burnout may hinder optimal patient care and incur significant costs on health care in general. Assessing current uses of time, allocating goals catered to an individual surgeon, and continual self-assessment may help balance time, and ideally will help prevent burnout. PMID:25067921

  17. Guidebook for solar process-heat applications

    NASA Astrophysics Data System (ADS)

    Fazzolare, R.; Mignon, G.; Campoy, L.; Luttmann, F.

    1981-01-01

    The potential for solar process heat in Arizona and some of the general technical aspects of solar energy, such as insolation, siting, and process analysis, are explored. Major aspects of a solar plant design are presented. Collectors, storage, and heat exchange are discussed. Reducing hardware costs to annual dollar benefits is also discussed. Rate of return, cash flow, and payback are discussed as they relate to solar systems. Design analysis procedures are presented. Design cost optimization techniques using a yearly computer simulation of a solar process operation are demonstrated.

  18. Work-life balance: history, costs, and budgeting for balance.

    PubMed

    Raja, Siva; Stein, Sharon L

    2014-06-01

    The concept and difficulties of work-life balance are not unique to surgeons, but professional responsibilities make maintaining a work-life balance difficult. Consequences of being exclusively career focused include burnout and physical and mental ailments. In addition, physician burnout may hinder optimal patient care and incur significant costs on health care in general. Assessing current uses of time, allocating goals catered to an individual surgeon, and continual self-assessment may help balance time, and ideally will help prevent burnout.

  19. Dikin-type algorithms for dextrous grasping force optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buss, M.; Faybusovich, L.; Moore, J.B.

    1998-08-01

    One of the central issues in dextrous robotic hand grasping is to balance external forces acting on the object and at the same time achieve grasp stability and minimum grasping effort. A companion paper shows that the nonlinear friction-force limit constraints on grasping forces are equivalent to the positive definiteness of a certain matrix subject to linear constraints. Further, compensation of the external object force is also a linear constraint on this matrix. Consequently, the task of grasping force optimization can be formulated as a problem with semidefinite constraints. In this paper, two versions of strictly convex cost functions, one of them self-concordant, are considered. These are twice-continuously differentiable functions that tend to infinity at the boundary of positive definiteness. For the general class of such cost functions, Dikin-type algorithms are presented. It is shown that the proposed algorithms guarantee convergence to the unique solution of the semidefinite programming problem associated with dextrous grasping force optimization. Numerical examples demonstrate the simplicity of implementation, the good numerical properties, and the optimality of the approach.

  20. Implementation and Operational Research: A Cost-Effective, Clinically Actionable Strategy for Targeting HIV Preexposure Prophylaxis to High-Risk Men Who Have Sex With Men.

    PubMed

    Ross, Eric L; Cinti, Sandro K; Hutton, David W

    2016-07-01

    Preexposure prophylaxis (PrEP) is effective at preventing HIV infection among men who have sex with men (MSM), but there is uncertainty about how to identify high-risk MSM who should receive PrEP. We used a mathematical model to assess the cost-effectiveness of using the HIV Incidence Risk Index for MSM (HIRI-MSM) questionnaire to target PrEP to high-risk MSM. We simulated strategies of no PrEP, PrEP available to all MSM, and eligibility thresholds set to HIRI-MSM scores between 5 and 45, in increments of 5 (where a higher score predicts greater HIV risk). Based on the iPrEx, IPERGAY, and PROUD trials, we evaluated PrEP efficacies from 44% to 86% and annual costs from $5900 to $8700. We designate strategies with incremental cost-effectiveness ratio (ICER) ≤$100,000/quality-adjusted life-year (QALY) as "cost-effective." Over 20 years, making PrEP available to all MSM is projected to prevent 33.5% of new HIV infections, with an ICER of $1,474,000/QALY. Increasing the HIRI-MSM score threshold reduces the prevented infections, but improves cost-effectiveness. A threshold score of 25 is projected to be optimal (most QALYs gained while still being cost-effective) over a wide range of realistic PrEP efficacies and costs. At low cost and high efficacy (IPERGAY), thresholds of 15 or 20 are optimal across a range of other input assumptions; at high cost and low efficacy (iPrEx), 25 or 30 are generally optimal. The HIRI-MSM provides a clinically actionable means of guiding PrEP use. Using a score of 25 to determine PrEP eligibility could facilitate cost-effective use of PrEP among high-risk MSM who will benefit from it most.
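
    The threshold comparison is a standard incremental cost-effectiveness sweep. The sketch below uses invented cohort costs and QALYs (not the paper's model outputs) to show the selection rule: walk up the strategies in order of effectiveness and adopt each one whose ICER against the last adopted strategy stays under the willingness-to-pay threshold.

        WTP = 100_000.0  # willingness to pay, $/QALY

        # (strategy, total cost in $, total QALYs gained) for a cohort; invented numbers.
        strategies = [
            ("no PrEP", 0.0, 0.0),
            ("score >= 30", 40e6, 500.0),
            ("score >= 25", 75e6, 900.0),
            ("score >= 20", 200e6, 1400.0),
            ("all MSM", 800e6, 1800.0),
        ]

        best = strategies[0]
        for name, cost, qalys in strategies[1:]:
            icer = (cost - best[1]) / (qalys - best[2])
            print(f"{name}: ICER vs {best[0]} = ${icer:,.0f}/QALY")
            if icer <= WTP:
                best = (name, cost, qalys)
        print("optimal threshold:", best[0])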

  1. Airfoil optimization for unsteady flows with application to high-lift noise reduction

    NASA Astrophysics Data System (ADS)

    Rumpfkeil, Markus Peer

    The use of steady-state aerodynamic optimization methods in the computational fluid dynamics (CFD) community is fairly well established. In particular, the use of adjoint methods has proven to be very beneficial because their cost is independent of the number of design variables. The application of numerical optimization to airframe-generated noise, however, has not received as much attention, but with the significant quieting of modern engines, airframe noise now competes with engine noise. Optimal control techniques for unsteady flows are needed in order to be able to reduce airframe-generated noise. In this thesis, a general framework is formulated to calculate the gradient of a cost function in a nonlinear unsteady flow environment via the discrete adjoint method. The unsteady optimization algorithm developed in this work follows a Newton-Krylov approach: the gradient-based optimizer uses the quasi-Newton method BFGS; Newton's method is applied to the nonlinear flow problem; GMRES is used to solve the resulting linear problem inexactly; and the linear adjoint problem is solved using Bi-CGSTAB. The flow is governed by the unsteady two-dimensional compressible Navier-Stokes equations in conjunction with a one-equation turbulence model, which are discretized using structured grids and a finite difference approach. The effectiveness of the unsteady optimization algorithm is demonstrated by applying it to several problems of interest including shocktubes, pulses in converging-diverging nozzles, rotating cylinders, transonic buffeting, and an unsteady trailing-edge flow. In order to address radiated far-field noise, an acoustic wave propagation program based on the Ffowcs Williams and Hawkings (FW-H) formulation is implemented and validated. The general framework is then used to derive the adjoint equations for a novel hybrid URANS/FW-H optimization algorithm in order to be able to optimize the shape of airfoils based on their calculated far-field pressure fluctuations. Validation and application results for this novel hybrid URANS/FW-H optimization algorithm show that it is possible to optimize the shape of an airfoil in an unsteady flow environment to minimize its radiated far-field noise while maintaining good aerodynamic performance.

  2. An effective and comprehensive model for optimal rehabilitation of separate sanitary sewer systems.

    PubMed

    Diogo, António Freire; Barros, Luís Tiago; Santos, Joana; Temido, Jorge Santos

    2018-01-15

    In the field of rehabilitation of separate sanitary sewer systems, a large number of technical, environmental, and economic aspects are often relevant in the decision-making process, which may be modelled as a multi-objective optimization problem. Examples are those related with the operation and assessment of networks, optimization of structural, hydraulic, sanitary, and environmental performance, rehabilitation programmes, and execution works. In particular, the cost of investment, operation and maintenance needed to reduce or eliminate Infiltration from the underground water table and Inflows of storm water surface runoff (I/I) using rehabilitation techniques or related methods can be significantly lower than the cost of transporting and treating these flows throughout the lifespan of the systems or period studied. This paper presents a comprehensive I/I cost-benefit approach for rehabilitation that explicitly considers all elements of the systems and shows how the approximation is incorporated as an objective function in a general evolutionary multi-objective optimization model. It takes into account network performance and wastewater treatment costs, average values of several input variables, and rates that can reflect the adoption of different predictable or limiting scenarios. The approach can be used as a practical and fast tool to support decision-making in sewer network rehabilitation in any phase of a project. The fundamental aspects, modelling, implementation details and preliminary results of a two-objective optimization rehabilitation model using a genetic algorithm, with a second objective function related to the structural condition of the network and the service failure risk, are presented. The basic approach is applied to three real-world case studies of sanitary sewerage systems in Coimbra and the results show the simplicity, suitability, effectiveness, and usefulness of the approximation implemented and of the objective function proposed.

  3. Blind Channel Equalization Using Constrained Generalized Pattern Search Optimization and Reinitialization Strategy

    NASA Astrophysics Data System (ADS)

    Zaouche, Abdelouahib; Dayoub, Iyad; Rouvaen, Jean Michel; Tatkeu, Charles

    2008-12-01

    In this paper we propose a baud-spaced blind equalization method with global convergence. The method is based on the application of both generalized pattern search optimization and channel surfing reinitialization. The unimodal cost function used relies on higher-order statistics, and its optimization is achieved using a pattern search algorithm. Since convergence to the global minimum is not unconditionally guaranteed, we make use of a channel surfing reinitialization (CSR) strategy to find the right global minimum. The proposed algorithm is analyzed, and simulation results using a severely frequency-selective propagation channel are given. Detailed comparisons with the constant modulus algorithm (CMA) are highlighted. The proposed algorithm's performance is evaluated in terms of intersymbol interference, normalized received signal constellations, and root mean square error vector magnitude. In the case of nonconstant modulus input signals, our algorithm significantly outperforms the CMA algorithm with a full channel surfing reinitialization strategy. However, comparable performance is obtained for constant modulus signals.

  4. Body Condition Scores and Evaluation of Feeding Habits of Dogs and Cats at a Low Cost Veterinary Clinic and a General Practice

    PubMed Central

    2016-01-01

    This study assessed body condition scores (BCS) and feeding habits for dogs and cats. Eighty-six cats and 229 dogs (and their owners) were enrolled from 2 clinics: a low-cost clinic (n = 149) and a general practice (n = 166). BCS and body weight were recorded. Owners completed a survey which included animal age, sex, and breed; owner demographics; and feeding practices (e.g., diet, rationale for feeding practices). Owners from the low-cost clinic had a significantly lower income (P < 0.001) and education (P < 0.001) compared to those from the general practice. Animals from the low-cost clinic were younger (P < 0.001) and dogs were less likely to be neutered (P < 0.001). Overweight prevalence was 55% overall (P = 0.083), with a significantly higher prevalence in the general practice for cats (44% versus 66%; P = 0.046), but not for dogs (58% versus 53%; P = 0.230). Multivariate analysis showed that only neuter status was significantly associated with BCS (P = 0.004). Veterinarians were the most common source of nutritional information, though lack of accurate nutrition knowledge was common among all participants. These findings support the need for enhanced communication about optimal BCS and nutrition regardless of socioeconomic status. PMID:27722198

  5. Designing Industrial Networks Using Ecological Food Web Metrics.

    PubMed

    Layton, Astrid; Bras, Bert; Weissburg, Marc

    2016-10-18

    Biologically Inspired Design (biomimicry) and Industrial Ecology both look to natural systems to enhance the sustainability and performance of engineered products, systems and industries. Bioinspired design (BID) traditionally has focused on a unit operation and single product level. In contrast, this paper describes how principles of network organization derived from analysis of ecosystem properties can be applied to industrial system networks. Specifically, this paper examines the applicability of particular food web matrix properties as design rules for economically and biologically sustainable industrial networks, using an optimization model developed for a carpet recycling network. Carpet recycling network designs based on traditional cost and emissions based optimization are compared to designs obtained using optimizations based solely on ecological food web metrics. The analysis suggests that networks optimized using food web metrics also were superior from a traditional cost and emissions perspective; correlations between optimization using ecological metrics and traditional optimization ranged generally from 0.70 to 0.96, with flow-based metrics being superior to structural parameters. Four structural food web parameters provided correlations nearly the same as those obtained using all structural parameters, but individual structural parameters provided much less satisfactory correlations. The analysis indicates that bioinspired design principles from ecosystems can lead to both environmentally and economically sustainable industrial resource networks, and represent guidelines for designing sustainable industry networks.

  6. Socially optimal drainage system and agricultural biodiversity: a case study for Finnish landscape.

    PubMed

    Saikkonen, Liisa; Herzon, Irina; Ollikainen, Markku; Lankoski, Jussi

    2014-12-15

    This paper examines the socially optimal drainage choice (surface/subsurface) for agricultural crop cultivation in a landscape with different land qualities (fertilities) when private profits and nutrient runoff damages are taken into account. We also study the measurable social costs to increase biodiversity by surface drainage when the locations of the surface-drained areas in a landscape affect the provided biodiversity. We develop a general theoretical model and apply it to empirical data from Finnish agriculture. We find that for low land qualities the measurable social returns are higher to surface drainage than to subsurface drainage, and that the profitability of subsurface drainage increases along with land quality. The measurable social costs to increase biodiversity by surface drainage under low land qualities are negative. For higher land qualities, these costs depend on the land quality and on the biodiversity impacts. Biodiversity conservation plans for agricultural landscapes should focus on supporting surface drainage systems in areas where the measurable social costs to increase biodiversity are negative or lowest.

  7. Optimization of a Future RLV Business Case using Multiple Strategic Market Prices

    NASA Astrophysics Data System (ADS)

    Charania, A.; Olds, J. R.

    2002-01-01

    There is a lack of depth in the current paradigm of conceptual-level economic models used to evaluate the value and viability of future capital projects such as a commercial reusable launch vehicle (RLV). Current modeling methods assume a single price is charged to all customers, public or private, in order to optimize the economic metrics of interest. This assumption may not be valid given the different utility functions for space services of public and private entities. The government's requirements are generally more inflexible than those of its commercial counterparts: launch schedules are more rigid, choices of international launch services are restricted, and launch specifications are more stringent as well as more numerous. These requirements generally make the government's demand curve more inelastic. Consequently, a launch vehicle provider will charge a higher price (launch price per kg) to the government and may obtain a higher financial profit compared to an equivalent commercial payload. This profit is not by itself sufficient to enable RLV development, but it can improve the financial situation. An RLV can potentially address multiple payload markets, and each market has a different price elasticity of demand for both commercial and government customers. Thus, a more robust examination of the economic landscape requires optimization of multiple prices, where each price affects a different demand curve. Such an examination is performed here using the Cost and Business Analysis Module (CABAM), an MS-Excel spreadsheet-based model that attempts to couple both the demand and supply for space transportation services in the future. The demand takes the form of market assumptions (both near-term and far-term) and the supply comes from user-defined vehicles that are placed into the model. CABAM represents RLV projects as commercial endeavors, with the ability to model the effects of government contributions, tax breaks, loan guarantees, etc. The optimization performed here is for a 3rd Generation RLV program. The economic metric being optimized (maximized) is Net Present Value (NPV), based upon a given company financial structure and cost-of-capital assumptions. Such an optimization process demands more sophisticated optimizers and can result in non-unique solutions/local minima when using gradient-based optimization. Domain-spanning evolutionary algorithms are used to obtain the optimized solution in the design space. These capabilities generally increase model calculation time but incorporate more realistic pricing portfolios than assuming one unified price for all launch markets. This analysis is conducted with CABAM running in Phoenix Integration's ModelCenter 4.0 collaborative design environment using the SpaceWorks Engineering, Inc. (SEI) OptWorks suite of optimization components.

  8. Optimal numbers of matings: the conditional balance between benefits and costs of mating for females of a nuptial gift-giving spider.

    PubMed

    Toft, S; Albo, M J

    2015-02-01

    In species where females gain a nutritious nuptial gift during mating, the balance between benefits and costs of mating may depend on access to food. This means that there is not one optimal number of matings for the female but a range of optimal mating numbers. With increasing food availability, the optimal number of matings for a female should vary from the number necessary only for fertilization of her eggs to the number needed also for producing these eggs. In three experimental series, the average number of matings for females of the nuptial gift-giving spider Pisaura mirabilis before egg sac construction varied from 2 to 16 with food-limited females generally accepting more matings than well-fed females. Minimal level of optimal mating number for females at satiation feeding conditions was predicted to be 2-3; in an experimental test, the median number was 2 (range 0-4). Multiple mating gave benefits in terms of increased fecundity and increased egg hatching success up to the third mating, and it had costs in terms of reduced fecundity, reduced egg hatching success after the third mating, and lower offspring size. The level of polyandry seems to vary with the female optimum, regulated by a satiation-dependent resistance to mating, potentially leaving satiated females in lifelong virginity.

  9. Two additional principles for determining which species to monitor.

    PubMed

    Wilson, Howard B; Rhodes, Jonathan R; Possingham, Hugh P

    2015-11-01

    Monitoring to detect population declines is widespread, but also costly. There is, consequently, a need to optimize monitoring to maximize cost-effectiveness. Here we develop a quantitative decision analysis framework for how to optimally allocate resources for monitoring among species. By keeping the framework simple, we analytically establish two new principles about which species are optimal to monitor for detecting declines: (1) those that lie on the boundary between species being allocated resources for conservation action and species that are not, and (2) those with the greatest uncertainty in whether they are declining. These two principles are in addition to other factors that are important in monitoring decisions, such as complementarity. We demonstrate the efficacy of these principles when other factors are not present, and show how the two principles can be combined. This analysis demonstrates that the most cost-effective species to monitor are ones where the information gained from monitoring is most likely to change the allocation of funds for action, not necessarily the most vulnerable or endangered. We suggest these results are general and apply to all ecological monitoring, not just of biological species: monitoring and information are only valuable when they are likely to change how people act.

  10. Maximizing phylogenetic diversity in biodiversity conservation: Greedy solutions to the Noah's Ark problem.

    PubMed

    Hartmann, Klaas; Steel, Mike

    2006-08-01

    The Noah's Ark Problem (NAP) is a comprehensive cost-effectiveness methodology for biodiversity conservation that was introduced by Weitzman (1998) and utilizes the phylogenetic tree containing the taxa of interest to assess biodiversity. Given a set of taxa, each of which has a particular survival probability that can be increased at some cost, the NAP seeks to allocate limited funds to conserving these taxa so that the future expected biodiversity is maximized. Finding optimal solutions using this framework is a computationally difficult problem for which a simple and efficient "greedy" algorithm has been proposed in the literature and applied to conservation problems. We show that, although algorithms of this type cannot produce optimal solutions for the general NAP, there are two restricted scenarios of the NAP for which a greedy algorithm is guaranteed to produce optimal solutions. The first scenario requires the taxa to have equal conservation cost; the second scenario requires an ultrametric tree. The NAP assumes a linear relationship between the funding allocated to conservation of a taxon and the increased survival probability of that taxon. This relationship is briefly investigated and one variation is suggested that can also be solved using a greedy algorithm.
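
    A toy version of the equal-cost scenario (invented tree and probabilities, not Weitzman's full formulation): each unit of budget raises one taxon's survival probability by a fixed step, expected phylogenetic diversity (PD) is the branch-length-weighted probability that each branch survives in at least one descendant, and the greedy rule funds the taxon with the largest marginal gain.

        EDGES = {  # branch -> (length, set of descendant leaves); a 3-taxon toy tree
            "e1": (3.0, {"A", "B"}),
            "e2": (2.0, {"A"}),
            "e3": (2.0, {"B"}),
            "e4": (4.0, {"C"}),
        }
        surv = {"A": 0.2, "B": 0.5, "C": 0.4}  # current survival probabilities
        BOOST, BUDGET = 0.3, 3                 # +0.3 survival per unit of budget spent

        def expected_pd(p):
            """Sum over branches of length times P(at least one descendant survives)."""
            total = 0.0
            for length, leaves in EDGES.values():
                miss = 1.0
                for leaf in leaves:
                    miss *= 1.0 - p[leaf]
                total += length * (1.0 - miss)
            return total

        for _ in range(BUDGET):
            def gain(taxon):  # expected-PD gain from funding this taxon once
                boosted = dict(surv)
                boosted[taxon] = min(1.0, boosted[taxon] + BOOST)
                return expected_pd(boosted) - expected_pd(surv)
            best = max(surv, key=gain)
            surv[best] = min(1.0, surv[best] + BOOST)
            print("fund", best, "-> expected PD", round(expected_pd(surv), 3))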

  11. Biofuel supply chain considering depreciation cost of installed plants

    NASA Astrophysics Data System (ADS)

    Rabbani, Masoud; Ramezankhani, Farshad; Giahi, Ramin; Farshbaf-Geranmayeh, Amir

    2016-06-01

    Due to the depletion of fossil fuels and major concerns about future energy security, the importance of utilizing renewable energies for fuel production is clear, and there has been growing interest in biofuels. This paper presents a general optimization model that enables the selection of preprocessing centers for the biomass, biofuel plants, and warehouses to store the biofuels. The objective of this model is to maximize the total benefits. The costs in the model consist of the setup costs of preprocessing centers, plants, and warehouses, transportation costs, production costs, emission costs, and the depreciation cost. The depreciation cost of the installed plants is calculated by means of three methods, and the model chooses the best depreciation method in each period by switching between them. A numerical example is presented and solved with the CPLEX solver in GAMS, and sensitivity analyses are performed.
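
    The depreciation-switching idea can be sketched on its own: compute each period's charge under three standard schedules and take the largest in each period. The figures below are hypothetical, and the paper embeds this choice inside the full optimization model rather than computing it separately:

        COST, SALVAGE, LIFE = 100_000.0, 10_000.0, 5  # hypothetical plant

        def straight_line(t):
            return (COST - SALVAGE) / LIFE

        def sum_of_years_digits(t):
            s = LIFE * (LIFE + 1) / 2
            return (COST - SALVAGE) * (LIFE - t + 1) / s

        def double_declining(t, rate=2.0 / LIFE):
            book = COST * (1 - rate) ** (t - 1)  # salvage floor ignored for brevity
            return book * rate

        for t in range(1, LIFE + 1):
            charges = {"SL": straight_line(t), "SYD": sum_of_years_digits(t),
                       "DDB": double_declining(t)}
            method = max(charges, key=charges.get)
            print(f"period {t}: take {method} = {charges[method]:,.0f}")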

  12. Design and Field Test of a WSN Platform Prototype for Long-Term Environmental Monitoring

    PubMed Central

    Lazarescu, Mihai T.

    2015-01-01

    Long-term wildfire monitoring using distributed in situ temperature sensors is an accurate, yet demanding environmental monitoring application, which requires long-life, low-maintenance, low-cost sensors and a simple, fast, error-proof deployment procedure. We present in this paper the most important design considerations and optimizations of all elements of a low-cost WSN platform prototype for long-term, low-maintenance pervasive wildfire monitoring, its preparation for a nearly three-month field test, the analysis of the causes of failure during the test and the lessons learned for platform improvement. The main components of the total cost of the platform (nodes, deployment and maintenance) are carefully analyzed and optimized for this application. The gateways are designed to operate with resources that are generally used for sensor nodes, while the requirements and cost of the sensor nodes are significantly lower. We define and test in simulation and in the field experiment a simple, but effective communication protocol for this application. It helps to lower the cost of the nodes and field deployment procedure, while extending the theoretical lifetime of the sensor nodes to over 16 years on a single 1 Ah lithium battery. PMID:25912349

  13. Space tourism optimized reusable spaceplane design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Penn, J.P.; Lindley, C.A.

    Market surveys suggest that a viable space tourism industry will require flight rates about two orders of magnitude higher than those required for conventional spacelift. Although enabling round-trip cost goals for a viable space tourism business are about $240 per pound ($529/kg), or $72,000 per passenger round-trip, goals should be about $50 per pound ($110/kg) or approximately $15,000 for a typical passenger and baggage. The lower price will probably open space tourism to the general population. Vehicle reliabilities must approach those of commercial aircraft as closely as possible. This paper addresses the development of spaceplanes optimized for the ultra-high flight rate and high reliability demands of the space tourism mission. It addresses the fundamental operability, reliability, and cost drivers needed to satisfy this mission need. Figures of merit similar to those used to evaluate the economic viability of conventional commercial aircraft are developed, including items such as payload/vehicle dry weight, turnaround time, propellant cost per passenger, and insurance and depreciation costs, which show that infrastructure can be developed for a viable space tourism industry. A reference spaceplane design optimized for space tourism is described. Subsystem allocations for reliability, operability, and costs are made and a route to developing such a capability is discussed. The vehicle's ability to also satisfy the traditional spacelift market is shown.

  14. Hitting the Optimal Vaccination Percentage and the Risks of Error: Why to Miss Right.

    PubMed

    Harvey, Michael J; Prosser, Lisa A; Messonnier, Mark L; Hutton, David W

    2016-01-01

    To determine the optimal level of vaccination coverage, defined as the level that minimizes total costs, and to explore how economic results change with marginal changes to this level of coverage, a susceptible-infected-recovered-vaccinated model designed to represent theoretical infectious diseases was created to simulate disease spread. Parameter inputs were defined to include ranges that could represent a variety of possible vaccine-preventable conditions. Costs included vaccine costs and disease costs. Health benefits were quantified as monetized quality-adjusted life years lost from disease. Primary outcomes were the number of infected people and the total costs of vaccination. Optimization methods were used to determine the population vaccination coverage that achieved a minimum cost given disease and vaccine characteristics. Sensitivity analyses explored the effects of changes in reproductive rates, costs, and vaccine efficacies on primary outcomes. Further analysis examined the additional cost incurred if the optimal coverage levels were not achieved. Results indicate that the relationship between vaccine and disease cost is the main driver of the optimal vaccination level. Under a wide range of assumptions, vaccination beyond the optimal level is less expensive than vaccination below the optimal level. This observation did not hold when the vaccine cost became approximately equal to the cost of disease. These results suggest that vaccination below the optimal level of coverage is more costly than vaccinating beyond the optimal level. This work helps provide information for assessing the impact of changes in vaccination coverage at a societal level.
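
    A minimal sketch of the underlying cost minimization, assuming the standard SIR final-size relation in place of the paper's full simulation; every parameter value is invented. The cost-minimizing coverage lands a little above the critical threshold 1 - 1/R0, echoing the paper's advice to "miss right":

        import math

        R0, N = 3.0, 1_000_000
        C_VAX, C_DISEASE = 50.0, 2_000.0  # $ per vaccination / per infection (assumed)

        def attack_rate(v, iters=200):
            """Fraction of the whole population infected at coverage v (final size)."""
            z = 0.5
            for _ in range(iters):  # fixed-point iteration of z = (1-v)(1 - e^(-R0 z))
                z = (1 - v) * (1 - math.exp(-R0 * z))
            return z

        def total_cost(v):
            return v * N * C_VAX + attack_rate(v) * N * C_DISEASE

        best = min((v / 100 for v in range(101)), key=total_cost)
        print("cost-minimizing coverage:", best)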

  15. Optimal Energy Extraction From a Hot Water Geothermal Reservoir

    NASA Astrophysics Data System (ADS)

    Golabi, Kamal; Scherer, Charles R.; Tsang, Chin Fu; Mozumder, Sashi

    1981-01-01

    An analytical decision model is presented for determining optimal energy extraction rates from hot water geothermal reservoirs when cooled brine is reinjected into the hot water aquifer. This applied economic management model computes the optimal fluid pumping rate and reinjection temperature and the project (reservoir) life consistent with maximum present worth of the net revenues from sales of energy for space heating. The real value of product energy is assumed to increase with time, as is the cost of energy used in pumping the aquifer. The economic model is implemented by using a hydrothermal model that relates hydraulic pumping rate to the quality (temperature) of remaining heat energy in the aquifer. The results of a numerical application to space heating show that profit-maximizing extraction rate increases with interest (discount) rate and decreases as the rate of rise of real energy value increases. The economic life of the reservoir generally varies inversely with extraction rate. Results were shown to be sensitive to permeability, initial equilibrium temperature, well cost, and well life.

  16. Environmental statistics and optimal regulation

    NASA Astrophysics Data System (ADS)

    Sivak, David; Thomson, Matt

    2015-03-01

    The precision with which an organism can detect its environment, and the timescale for and statistics of environmental change, will affect the suitability of different strategies for regulating protein levels in response to environmental inputs. We propose a general framework--here applied to the enzymatic regulation of metabolism in response to changing nutrient concentrations--to predict the optimal regulatory strategy given the statistics of fluctuations in the environment and measurement apparatus, and the costs associated with enzyme production. We find: (i) relative convexity of enzyme expression cost and benefit influences the fitness of thresholding or graded responses; (ii) intermediate levels of measurement uncertainty call for a sophisticated Bayesian decision rule; and (iii) in dynamic contexts, intermediate levels of uncertainty call for retaining memory of the past. Statistical properties of the environment, such as variability and correlation times, set optimal biochemical parameters, such as thresholds and decay rates in signaling pathways. Our framework provides a theoretical basis for interpreting molecular signal processing algorithms and a classification scheme that organizes known regulatory strategies and may help conceptualize heretofore unknown ones.

  17. Mini-batch optimized full waveform inversion with geological constrained gradient filtering

    NASA Astrophysics Data System (ADS)

    Yang, Hui; Jia, Junxiong; Wu, Bangyu; Gao, Jinghuai

    2018-05-01

    High computation cost and generating solutions without geological sense have hindered the wide application of Full Waveform Inversion (FWI). The source encoding technique can dramatically reduce the cost of FWI, but it requires a fixed-spread acquisition setup and converges slowly owing to the suppression of cross-talk. Traditionally, gradient regularization or preconditioning is applied to mitigate the ill-posedness. An isotropic smoothing filter applied to gradients generally gives non-geological inversion results and can also introduce artifacts. In this work, we propose to address both the efficiency and the ill-posedness of FWI by a geologically constrained mini-batch gradient optimization method. Mini-batch gradient descent is adopted to reduce the computation time by choosing a subset of all shots for each iteration. By jointly applying structure-oriented smoothing to the mini-batch gradient, the inversion converges faster and gives results with more geological meaning. The stylized Marmousi model is used to show the performance of the proposed method on a realistic synthetic model.
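
    A minimal sketch of the mini-batch step only (not the authors' FWI code): each iteration stacks gradients from a random subset of shots and smooths the result before updating the model. Here grad_for_shot is a hypothetical stand-in for an adjoint-state gradient routine, and a plain Gaussian filter stands in for the structure-oriented smoothing described above:

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def minibatch_fwi_step(model, shots, grad_for_shot, batch_size=8, step=1e-3):
            # a random subset of shots keeps the per-iteration cost low
            batch = np.random.choice(len(shots), size=batch_size, replace=False)
            grad = sum(grad_for_shot(model, shots[i]) for i in batch) / batch_size
            # placeholder for the paper's structure-oriented filter
            grad = gaussian_filter(grad, sigma=2.0)
            return model - step * grad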

  18. Computerized systems analysis and optimization of aircraft engine performance, weight, and life cycle costs

    NASA Technical Reports Server (NTRS)

    Fishbach, L. H.

    1979-01-01

    The computational techniques utilized to determine the optimum propulsion systems for future aircraft applications and to identify system tradeoffs and technology requirements are described. The characteristics and use of the following computer codes are discussed: (1) NNEP - a very general cycle analysis code that can assemble an arbitrary matrix of fans, turbines, ducts, shafts, etc., into a complete gas turbine engine and compute on- and off-design thermodynamic performance; (2) WATE - a preliminary design procedure for calculating engine weight using the component characteristics determined by NNEP; (3) POD DRG - a table look-up program to calculate wave and friction drag of nacelles; (4) LIFCYC - a computer code developed to calculate life cycle costs of engines based on the output from WATE; and (5) INSTAL - a computer code developed to calculate installation effects, inlet performance and inlet weight. Examples are given to illustrate how these computer techniques can be applied to analyze and optimize propulsion system fuel consumption, weight, and cost for representative types of aircraft and missions.

  19. Cost effective simulation-based multiobjective optimization in the performance of an internal combustion engine

    NASA Astrophysics Data System (ADS)

    Aittokoski, Timo; Miettinen, Kaisa

    2008-07-01

    Solving real-life engineering problems can be difficult because they often have multiple conflicting objectives, the objective functions involved are highly nonlinear and they contain multiple local minima. Furthermore, function values are often produced via a time-consuming simulation process. These facts suggest the need for an automated optimization tool that is efficient (in terms of number of objective function evaluations) and capable of solving global and multiobjective optimization problems. In this article, the requirements on a general simulation-based optimization system are discussed and such a system is applied to optimize the performance of a two-stroke combustion engine. In the example of a simulation-based optimization problem, the dimensions and shape of the exhaust pipe of a two-stroke engine are altered, and values of three conflicting objective functions are optimized. These values are derived from power output characteristics of the engine. The optimization approach involves interactive multiobjective optimization and provides a convenient tool to balance between conflicting objectives and to find good solutions.

  20. Cost-effectiveness of interventions to prevent alcohol-related disease and injury in Australia.

    PubMed

    Cobiac, Linda; Vos, Theo; Doran, Christopher; Wallace, Angela

    2009-10-01

    To evaluate cost-effectiveness of eight interventions for reducing alcohol-attributable harm and determine the optimal intervention mix. Interventions include volumetric taxation, advertising bans, an increase in minimum legal drinking age, licensing controls on operating hours, brief intervention (with and without general practitioner telemarketing and support), drink driving campaigns, random breath testing and residential treatment for alcohol dependence (with and without naltrexone). Cost-effectiveness is modelled over the life-time of the Australian population in 2003, with all costs and health outcomes evaluated from an Australian health sector perspective. Each intervention is compared with current practice, and the most cost-effective options are then combined to determine the optimal intervention mix. Cost-effectiveness is measured in 2003 Australian dollars per disability adjusted life year averted. Although current alcohol intervention in Australia (random breath testing) is cost-effective, if the current spending of $71 million could be invested in a more cost-effective combination of interventions, more than 10 times the amount of health gain could be achieved. Taken as a package of interventions, all seven preventive interventions would be a cost-effective investment that could lead to substantial improvement in population health; only residential treatment is not cost-effective. Based on current evidence, interventions to reduce harm from alcohol are highly recommended. The potential reduction in costs of treating alcohol-related diseases and injuries mean that substantial improvements in population health can be achieved at a relatively low cost to the health sector. © 2009 The Authors. Journal compilation © 2009 Society for the Study of Addiction.

  1. Optimizing conceptual aircraft designs for minimum life cycle cost

    NASA Technical Reports Server (NTRS)

    Johnson, Vicki S.

    1989-01-01

    A life cycle cost (LCC) module has been added to the FLight Optimization System (FLOPS), allowing the additional optimization variables of life cycle cost, direct operating cost, and acquisition cost. Extensive use of the methodology on short-, medium-, and medium-to-long range aircraft has demonstrated that the system works well. Results from the study show that the choice of optimization parameter has a definite effect on the aircraft, and that optimizing an aircraft for minimum LCC results in a different airplane than when optimizing for minimum take-off gross weight (TOGW), fuel burned, direct operating cost (DOC), or acquisition cost. Additionally, the economic assumptions can have a strong impact on the configurations optimized for minimum LCC or DOC. Also, results show that advanced technology can be worthwhile, even if it results in higher manufacturing and operating costs. Examining the number of engines a configuration should have demonstrated a real payoff of including life cycle cost in the conceptual design process: the minimum-TOGW or minimum-fuel aircraft did not always have the lowest life cycle cost once the number of engines was considered.

  2. An algorithm for control system design via parameter optimization. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Sinha, P. K.

    1972-01-01

    An algorithm for design via parameter optimization has been developed for linear time-invariant control systems based on the model reference adaptive control concept. A cost functional is defined to evaluate the system response relative to nominal; in general it involves the error between the system and nominal responses, its derivatives, and the control signals. A program for the practical implementation of this algorithm has been developed, with the computational scheme for the evaluation of the performance index based on Lyapunov's theorem for stability of linear time-invariant systems.

  3. Optimal trajectory generation for mechanical arms. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Iemenschot, J. A.

    1972-01-01

    A general method of generating optimal trajectories between an initial and a final position of an n degree of freedom manipulator arm with nonlinear equations of motion is proposed. The method is based on the assumption that the time history of each of the coordinates can be expanded in a series of simple time functions. By searching over the coefficients of the terms in the expansion, trajectories which minimize the value of a given cost function can be obtained. The method has been applied to a planar three degree of freedom arm.
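
    A minimal sketch of the expansion idea (illustrative, not the thesis implementation): each joint angle is expanded in simple time functions that satisfy the boundary positions for any coefficients, and the coefficients are searched to minimize an assumed cost, here an effort term plus a penalty keeping one joint away from a hypothetical obstacle region:

        import numpy as np
        from scipy.optimize import minimize

        T, n_joints = 1.0, 3
        q0 = np.zeros(n_joints)           # initial joint angles
        qf = np.array([1.0, 0.5, -0.8])   # final joint angles
        t = np.linspace(0.0, T, 200)

        def trajectory(c):
            # q(t) = straight line + c1*t*(T-t) + c2*t^2*(T-t); endpoints hold for any c
            c = c.reshape(n_joints, 2)
            base = q0[:, None] + (qf - q0)[:, None] * (t / T)
            return base + c[:, [0]] * t * (T - t) + c[:, [1]] * t**2 * (T - t)

        def cost(c):
            q = trajectory(c)
            dq = np.gradient(q, t, axis=1)
            effort = np.trapz((dq**2).sum(axis=0), t)
            obstacle = np.trapz(np.exp(-((q[0] - 0.5) ** 2) / 0.01), t)
            return effort + 5.0 * obstacle

        res = minimize(cost, np.zeros(n_joints * 2))  # search over the coefficients
        print("coefficients:", res.x.round(3))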

  4. Interplanetary Program to Optimize Simulated Trajectories (IPOST). Volume 2: Analytic manual

    NASA Technical Reports Server (NTRS)

    Hong, P. E.; Kent, P. D.; Olson, D. W.; Vallado, C. A.

    1992-01-01

    The Interplanetary Program to Optimize Space Trajectories (IPOST) is intended to support many analysis phases, from early interplanetary feasibility studies through spacecraft development and operations. The IPOST output provides information for sizing and understanding mission impacts related to propulsion, guidance, communications, sensor/actuators, payload, and other dynamic and geometric environments. IPOST models three degree of freedom trajectory events, such as launch/ascent, orbital coast, propulsive maneuvering (impulsive and finite burn), gravity assist, and atmospheric entry. Trajectory propagation is performed using a choice of Cowell, Encke, Multiconic, Onestep, or Conic methods. The user identifies a desired sequence of trajectory events, and selects which parameters are independent (controls) and dependent (targets), as well as other constraints and the cost function. Targeting and optimization are performed using the Stanford NPSOL algorithm. The IPOST structure allows subproblems within a master optimization problem to aid in the general constrained parameter optimization solution. An alternate optimization method uses implicit simulation and collocation techniques.

  5. Design of shared unit-dose drug distribution network using multi-level particle swarm optimization.

    PubMed

    Chen, Linjie; Monteiro, Thibaud; Wang, Tao; Marcon, Eric

    2018-03-01

    Unit-dose drug distribution systems provide optimal choices in terms of medication security and efficiency for organizing the drug-use process in large hospitals. As small hospitals have to share such automatic systems for economic reasons, the structure of their logistic organization becomes a very sensitive issue. In the research reported here, we develop a generalized multi-level optimization method - multi-level particle swarm optimization (MLPSO) - to design a shared unit-dose drug distribution network. Structurally, the problem studied can be considered as a type of capacitated location-routing problem (CLRP) with new constraints related to specific production planning. This kind of problem implies that a multi-level optimization should be performed in order to minimize logistic operating costs. Our results show that the proposed algorithm yields a more suitable modeling framework, computational time savings, and better optimization performance than those reported in the literature on this subject.

  6. Optimal periodic proof test based on cost-effective and reliability criteria

    NASA Technical Reports Server (NTRS)

    Yang, J.-N.

    1976-01-01

    An exploratory study for the optimization of periodic proof tests for fatigue-critical structures is presented. The optimal proof load level and the optimal number of periodic proof tests are determined by minimizing the total expected (statistical average) cost, while the constraint on the allowable level of structural reliability is satisfied. The total expected cost consists of the expected cost of proof tests, the expected cost of structures destroyed by proof tests, and the expected cost of structural failure in service. It is demonstrated by numerical examples that significant cost saving and reliability improvement for fatigue-critical structures can be achieved by the application of the optimal periodic proof test. The present study is relevant to the establishment of optimal maintenance procedures for fatigue-critical structures.
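
    A minimal sketch of the trade-off with assumed (not the paper's) cost and probability forms: search over the proof-load level and the number of tests, keep only combinations meeting a reliability floor, and take the cheapest in expectation:

        import numpy as np

        c_test, c_destroy, c_fail = 1.0, 50.0, 5000.0   # illustrative costs
        R_min = 0.999                                    # reliability constraint

        def p_destroy(load):          # chance a proof test destroys the structure
            return 0.002 * load**2

        def p_service_fail(load, n):  # proof tests weed out flaws (assumed form)
            return 0.01 * np.exp(-0.8 * load * n)

        best = None
        for n in range(1, 11):
            for load in np.linspace(1.0, 1.5, 26):       # proof load / limit load
                pf = p_service_fail(load, n)
                if 1.0 - pf < R_min:
                    continue                              # violates reliability floor
                cost = n * (c_test + p_destroy(load) * c_destroy) + pf * c_fail
                if best is None or cost < best[0]:
                    best = (cost, n, load)

        print("min expected cost %.2f at n=%d, proof factor %.2f" % best)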

  7. Laboratory cost control and financial management software.

    PubMed

    Mayer, M

    1998-02-09

    Economic constraints within the health care system advocate the introduction of tighter control of costs in clinical laboratories. Detailed cost information forms the basis for cost control and financial management. Based on the cost information, proper decisions regarding priorities, procedure choices, personnel policies and investments can be made. This presentation outlines some principles of cost analysis, describes common limitations of cost analysis, and exemplifies the use of software to achieve optimized cost control. One commercially available cost analysis software package, LabCost, is described in some detail. In addition to provision of cost information, LabCost also serves as a general management tool for resource handling, accounting, inventory management and billing. The application of LabCost in the selection process of a new high throughput analyzer for a large clinical chemistry service is taken as an example for decisions that can be assisted by cost evaluation. It is concluded that laboratory management that wisely utilizes cost analysis to support the decision-making process will undoubtedly have a clear advantage over those laboratories that fail to employ cost considerations to guide their actions.

  8. Neural networks for feedback feedforward nonlinear control systems.

    PubMed

    Parisini, T; Zoppoli, R

    1994-01-01

    This paper deals with the problem of designing feedback feedforward control strategies to drive the state of a dynamic system (in general, nonlinear) so as to track any desired trajectory joining the points of given compact sets, while minimizing a certain cost function (in general, nonquadratic). Due to the generality of the problem, conventional methods are difficult to apply. Thus, an approximate solution is sought by constraining control strategies to take on the structure of multilayer feedforward neural networks. After discussing the approximation properties of neural control strategies, a particular neural architecture is presented, which is based on what has been called the "linear-structure preserving principle". The original functional problem is then reduced to a nonlinear programming one, and backpropagation is applied to derive the optimal values of the synaptic weights. Recursive equations to compute the gradient components are presented, which generalize the classical adjoint system equations of N-stage optimal control theory. Simulation results related to nonlinear nonquadratic problems show the effectiveness of the proposed method.

  9. Møller-Plesset perturbation theory gradient in the generalized hybrid orbital quantum mechanical and molecular mechanical method

    NASA Astrophysics Data System (ADS)

    Jung, Jaewoon; Sugita, Yuji; Ten-no, S.

    2010-02-01

    An analytic gradient expression is formulated and implemented for the second-order Møller-Plesset perturbation theory (MP2) based on the generalized hybrid orbital QM/MM method. The method enables us to obtain an accurate geometry at a reasonable computational cost. The performance of the method is assessed for various isomers of alanine dipeptide. We also compare the optimized structures of fumaramide-derived [2]rotaxane and cAMP-dependent protein kinase with experiment.

  10. Messaging with Cost-Optimized Interstellar Beacons

    NASA Technical Reports Server (NTRS)

    Benford, James; Benford, Gregory; Benford, Dominic

    2010-01-01

    On Earth, how would we build galactic-scale beacons to attract the attention of extraterrestrials, as some have suggested we should do? From the point of view of expense to a builder on Earth, experience shows an optimum trade-off. This emerges by minimizing the cost of producing a desired power density at long range, which determines the maximum range of detectability of a transmitted signal. We derive general relations for cost-optimal aperture and power. For linear dependence of capital cost on transmitter power and antenna area, minimum capital cost occurs when the cost is equally divided between antenna gain and radiated power. For nonlinear power-law dependence, a similar simple division occurs. This is validated in cost data for many systems; industry uses this cost optimum as a rule of thumb. Costs of pulsed cost-efficient transmitters are estimated from these relations by using current cost parameters ($/W, $/sq m) as a basis. We show the scaling and give examples of such beacons. Galactic-scale beacons can be built for a few billion dollars with our present technology. Such beacons have narrow "searchlight" beams and short "dwell times" when the beacon would be seen by an alien observer in their sky. More-powerful beacons are more efficient and have economies of scale: cost scales only linearly with range R, not as R^2, so the number of stars radiated to increases as the square of cost. On a cost basis, they will likely transmit at higher microwave frequencies, ~10 GHz. The natural corridor to broadcast is along the galactic radius or along the local spiral galactic arm we are in. A companion paper asks "If someone like us were to produce a beacon, how should we look for it?"
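
    A minimal numerical check of the equal-split result under a linear cost model (coefficients illustrative): with capital cost C = cP*P + cA*A and the delivered power density at range fixed by the product P*A, eliminating the constraint and minimizing gives cP*P = cA*A at the optimum, with closed form P* = sqrt(cA*K/cP):

        import numpy as np
        from scipy.optimize import minimize_scalar

        cP, cA, K = 3.0, 2000.0, 1e7    # $/W, $/m^2, required P*A product (assumed)

        cost = lambda P: cP * P + cA * (K / P)   # substitute A = K/P
        P_opt = minimize_scalar(cost, bounds=(1.0, 1e7), method="bounded").x

        print(cP * P_opt, cA * K / P_opt)        # the two cost shares agree
        print(np.sqrt(cA * K / cP))              # matches the closed form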

  11. Near-Optimal Re-Entry Trajectories for Reusable Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Chou, H.-C.; Ardema, M. D.; Bowles, J. V.

    1997-01-01

    A near-optimal guidance law for the descent trajectory for earth orbit re-entry of a fully reusable single-stage-to-orbit pure rocket launch vehicle is derived. A methodology is developed to investigate using both bank angle and altitude as control variables and selecting parameters that maximize various performance functions. The method is based on the energy-state model of the aircraft equations of motion. The major task of this paper is to obtain optimal re-entry trajectories under a variety of performance goals: minimum time, minimum surface temperature, minimum heating, and maximum heading change; four classes of trajectories were investigated: no banking, optimal left turn banking, optimal right turn banking, and optimal bank chattering. The cost function is in general a weighted sum of all performance goals. In particular, the trade-off between minimizing heat load into the vehicle and maximizing cross range distance is investigated. The results show that the optimization methodology can be used to derive a wide variety of near-optimal trajectories.

  12. Optimal cost design of water distribution networks using a decomposition approach

    NASA Astrophysics Data System (ADS)

    Lee, Ho Min; Yoo, Do Guen; Sadollah, Ali; Kim, Joong Hoon

    2016-12-01

    Water distribution network decomposition, which is an engineering approach, is adopted to increase the efficiency of obtaining the optimal cost design of a water distribution network using an optimization algorithm. This study applied the source tracing tool in EPANET, which is a hydraulic and water quality analysis model, to the decomposition of a network to improve the efficiency of the optimal design process. The proposed approach was tested by carrying out the optimal cost design of two water distribution networks, and the results were compared with other optimal cost designs derived from previously proposed optimization algorithms. The proposed decomposition approach using the source tracing technique enables the efficient decomposition of an actual large-scale network, and the results can be combined with the optimal cost design process using an optimization algorithm. This proves that the final design in this study is better than those obtained with other previously proposed optimization algorithms.

  13. The incremental costs of recommended therapy versus real world therapy in type 2 diabetes patients

    PubMed Central

    Crivera, C.; Suh, D. C.; Huang, E. S.; Cagliero, E.; Grant, R. W.; Vo, L.; Shin, H. C.; Meigs, J. B.

    2008-01-01

    Background The goals of diabetes management have evolved over the past decade to become the attainment of near-normal glucose and cardiovascular risk factor levels. Improved metabolic control is achieved through optimized medication regimens, but costs specifically associated with such optimization have not been examined. Objective To estimate the incremental medication cost of providing optimal therapy to reach recommended goals versus actual therapy in patients with type 2 diabetes. Methods We randomly selected the charts of 601 type 2 diabetes patients receiving care from the outpatient clinics of Massachusetts General Hospital between March 1, 1996 and August 31, 1997 and abstracted clinical and medication data. We applied treatment algorithms based on 2004 clinical practice guidelines for hyperglycemia, hyperlipidemia, and hypertension to patients’ current medication therapy to determine how current medication regimens could be improved to attain recommended treatment goals. Four clinicians and three pharmacists independently applied the algorithms and reached consensus on recommended therapies. Mean incremental medication costs, the cost differences between current and recommended therapies, per patient (expressed in 2004 dollars) were calculated with 95% bootstrap confidence intervals (CIs). Results Mean patient age was 65 years old, mean duration of diabetes was 7.7 years, 32% had ideal glucose control, 25% had ideal systolic blood pressure, and 24% had ideal low-density lipoprotein cholesterol. Care for these diabetes patients was similar to that observed in recent national studies. If treatment algorithm recommendations were applied, the average annual medication cost/patient would increase from $1525 to $2164. Annual incremental costs/patient increased by $168 (95% CI $133–$206) for antihyperglycemic medications, $75 ($57–$93) for antihypertensive medications, $392 ($354–$434) for antihyperlipidemic medications, and $3 ($3–$4) for aspirin prophylaxis. Yearly incremental cost of recommended laboratory testing ranged from $77–$189/patient. Limitations Although baseline data come from the clinics of a single academic institution, collected in 1997, the care of these diabetes patients was remarkably similar to care recently observed nationally. In addition, the data are dependent on the medical record and may not accurately reflect patients’ actual experiences. Conclusion Average yearly incremental cost of optimizing drug regimens to achieve recommended treatment goals for type 2 diabetes was approximately $600/patient. These results provide valuable input for assessing the cost-effectiveness of improving comprehensive diabetes care. PMID:17076990
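
    A minimal sketch of the percentile-bootstrap confidence interval used for the incremental costs, on synthetic per-patient cost differences (the data below are simulated, not the study's):

        import numpy as np

        rng = np.random.default_rng(0)
        delta = rng.gamma(shape=2.0, scale=84.0, size=601)  # simulated cost gaps

        boot = np.array([rng.choice(delta, size=delta.size, replace=True).mean()
                         for _ in range(5000)])
        lo, hi = np.percentile(boot, [2.5, 97.5])
        print(f"mean ${delta.mean():.0f}, 95% bootstrap CI (${lo:.0f}, ${hi:.0f})")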

  14. Stability of Solutions to Classes of Traveling Salesman Problems.

    PubMed

    Niendorf, Moritz; Kabamba, Pierre T; Girard, Anouck R

    2016-04-01

    By performing stability analysis on an optimal tour for problems belonging to classes of the traveling salesman problem (TSP), this paper derives margins of optimality for a solution with respect to disturbances in the problem data. Specifically, we consider the asymmetric sequence-dependent TSP, where the sequence dependence is driven by the dynamics of a stack. This is a generalization of the symmetric non-sequence-dependent version of the TSP. Furthermore, we also consider the symmetric sequence-dependent variant and the asymmetric non-sequence-dependent variant. Amongst others, these problems have applications in logistics and unmanned aircraft mission planning. Changing external conditions such as traffic or weather may alter task costs, which can render an initially optimal itinerary suboptimal. Instead of optimizing the itinerary every time task costs change, stability criteria allow for fast evaluation of whether itineraries remain optimal. This paper develops a method to compute stability regions for the best tour in a set of tours for the symmetric TSP and extends the results to the asymmetric problem as well as their sequence-dependent counterparts. As the TSP is NP-hard, heuristic methods are frequently used to solve it. The presented approach is also applicable to analyze stability regions for a tour obtained through application of the k-opt heuristic with respect to the k-neighborhood. A dimensionless criticality metric for edges is proposed, such that a high criticality of an edge indicates that the optimal tour is more susceptible to cost changes in that edge. Multiple examples demonstrate the application of the developed stability computation method as well as the edge criticality measure that facilitates an intuitive assessment of instances of the TSP.
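
    A minimal generic 2-opt sketch (the paper's stability regions are not computed here): a tour is 2-opt-optimal when no segment reversal shortens it, and the slack by which each candidate swap fails is the kind of margin the stability analysis quantifies:

        import numpy as np

        def tour_length(tour, D):
            return sum(D[tour[i], tour[(i + 1) % len(tour)]] for i in range(len(tour)))

        def two_opt(tour, D):
            improved = True
            while improved:
                improved = False
                for i in range(1, len(tour) - 1):
                    for j in range(i + 1, len(tour)):
                        new = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
                        if tour_length(new, D) < tour_length(tour, D) - 1e-12:
                            tour, improved = new, True
            return tour

        rng = np.random.default_rng(1)
        pts = rng.random((12, 2))
        D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
        print(tour_length(two_opt(list(range(12)), D), D))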

  15. Joint pricing, inventory, and preservation decisions for deteriorating items with stochastic demand and promotional efforts

    NASA Astrophysics Data System (ADS)

    Soni, Hardik N.; Chauhan, Ashaba D.

    2018-03-01

    This study models a joint pricing, inventory, and preservation decision-making problem for deteriorating items subject to stochastic demand and promotional effort. The generalized price-dependent stochastic demand, time proportional deterioration, and partial backlogging rates are used to model the inventory system. The objective is to find the optimal pricing, replenishment, and preservation technology investment strategies while maximizing the total profit per unit time. Based on the partial backlogging and lost sale cases, we first deduce the criterion for optimal replenishment schedules for any given price and technology investment cost. Second, we show that, respectively, total profit per time unit is concave function of price and preservation technology cost. At the end, some numerical examples and the results of a sensitivity analysis are used to illustrate the features of the proposed model.

  16. Nonlinear optimization simplified by hypersurface deformation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stillinger, F.H.; Weber, T.A.

    1988-09-01

    A general strategy is advanced for simplifying nonlinear optimization problems, the ant-lion method. This approach exploits shape modifications of the cost-function hypersurface which distend basins surrounding low-lying minima (including global minima). By intertwining hypersurface deformations with steepest-descent displacements, the search is concentrated on a small relevant subset of all minima. Specific calculations demonstrating the value of this method are reported for the partitioning of two classes of irregular but nonrandom graphs, the prime-factor graphs and the pi graphs. We also indicate how this approach can be applied to the traveling salesman problem and to design layout optimization, and that it may be useful in combination with simulated annealing strategies.

  17. Optimized random phase only holograms.

    PubMed

    Zea, Alejandro Velez; Barrera Ramirez, John Fredy; Torroba, Roberto

    2018-02-15

    We propose a simple and efficient technique capable of generating Fourier phase only holograms with a reconstruction quality similar to the results obtained with the Gerchberg-Saxton (G-S) algorithm. Our proposal is to use the traditional G-S algorithm to optimize a random phase pattern for the resolution, pixel size, and target size of the general optical system without any specific amplitude data. This produces an optimized random phase (ORAP), which is used for fast generation of phase only holograms of arbitrary amplitude targets. This ORAP needs to be generated only once for a given optical system, avoiding the need for costly iterative algorithms for each new target. We show numerical and experimental results confirming the validity of the proposal.
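
    A minimal sketch of the underlying Gerchberg-Saxton iteration for a Fourier phase-only hologram (generic; the ORAP variant described above pre-optimizes a random phase for the optical system without a specific amplitude target):

        import numpy as np

        def gs_phase_hologram(target_amplitude, n_iter=50, seed=0):
            rng = np.random.default_rng(seed)
            field = target_amplitude * np.exp(1j * rng.uniform(0, 2 * np.pi,
                                                               target_amplitude.shape))
            for _ in range(n_iter):
                holo = np.fft.ifft2(field)
                holo = np.exp(1j * np.angle(holo))   # hologram plane: phase only
                field = np.fft.fft2(holo)
                field = target_amplitude * np.exp(1j * np.angle(field))  # image plane
            return np.angle(holo)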

  18. A random optimization approach for inherent optic properties of nearshore waters

    NASA Astrophysics Data System (ADS)

    Zhou, Aijun; Hao, Yongshuai; Xu, Kuo; Zhou, Heng

    2016-10-01

    Traditional water quality sampling is time-consuming and costly, and cannot meet the needs of large-scale monitoring. Hyperspectral remote sensing offers good temporal resolution, wide spatial coverage, and rich spectral information, giving it strong potential for water quality supervision. Via a semi-analytical method, remote sensing information can be related to water quality. The inherent optical properties are used to quantify the water quality, and an optical model inside the water is established to analyze the water's features. Using the stochastic optimization algorithm Threshold Accepting, a global optimum of the unknown model parameters can be determined to obtain the distribution of chlorophyll, dissolved organic matter, and suspended particles in water. Improving the search step of the optimization algorithm markedly reduces processing time and creates room to increase the number of parameters. Redefining the optimization steps and acceptance criteria makes the whole inversion process more targeted, thus improving inversion accuracy. Applications to simulated data given by IOCCG and field data provided by NASA show continuous improvement and refinement of the model. The result is a low-cost, effective model for retrieving water quality from hyperspectral remote sensing.
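
    A minimal generic Threshold Accepting sketch (the paper's tailored steps and acceptance schedule are not reproduced): any candidate move that worsens the objective by less than a shrinking threshold is accepted, which lets the search escape local minima:

        import numpy as np

        def threshold_accepting(cost, x0, step=0.1, levels=np.linspace(1.0, 0.0, 50),
                                iters_per_level=100, seed=0):
            rng = np.random.default_rng(seed)
            x = np.asarray(x0, float)
            fx = cost(x)
            for thr in levels:
                for _ in range(iters_per_level):
                    cand = x + rng.normal(scale=step, size=x.shape)
                    fc = cost(cand)
                    if fc - fx < thr:      # accept mild deteriorations too
                        x, fx = cand, fc
            return x, fx

        # toy misfit with two unknown parameters
        best, val = threshold_accepting(lambda p: (p[0] - 1.2)**2 + (p[1] + 0.4)**2,
                                        [0.0, 0.0])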

  19. [Optimizing the financial impact of transitioning to transconjunctival vitrectomy and microincisional phacoemulsification].

    PubMed

    Cornut, P-L; Soldermann, Y; Robin, C; Barranco, R; Kerhoas, A; Burillon, C

    2013-12-01

    To report the financial impact of using modern lens and vitreoretinal surgical techniques. Bottom-up sterilization and consumables costs for new surgical techniques (microincisional coaxial phacoemulsification and transconjunctival sutureless vitrectomy) and the corresponding former techniques (phacoemulsification with 3.2 mm incision and 20G vitrectomy) were determined. These costs were compared to each other and to the target costs of the Diagnosis Related Groups for public hospitals (Groupes Homogènes de Séjours [GHS]) concerned, extracted from the analytic accounting data of the French National Cost Study (Étude Nationale des Coûts [ENC]) for 2009 (target=sum of sterilization costs posted under medical logistics, consumables, implantable medical devices, and special pharmaceuticals posted as direct expenses). For outpatient lens surgery with or without vitrectomy (GHS code: 02C05J): the ENC's target cost for 2009 was 339€ out of a total of 1432€. The cost detailed in this study was 4% higher than the target cost when the procedure was performed using the former technique (3.2 mm sutured incision) and 12% lower when the procedure was performed using the new technique (1.8 mm sutureless) after removing now unnecessary consumables and optimization of the technique. For level I retinal detachment surgeries (GHS code: 02C021): the ENC's 2009 target cost was 641€ out of a total of 3091€. The cost specified in this study was 1% lower than the target cost when the procedure was done using the former technique (20G vitrectomy) and 16% less when the procedure was performed using the new technique (transconjunctival vitrectomy) after removal of now unnecessary consumables and optimization of the technique. Contrary to generally accepted ideas, implementing modern techniques in ocular surgery can result in direct cost and sterilization savings when the operator takes advantage of the possibilities these techniques offer in terms of simplification of the procedures to do away with consumables that are no longer necessary. Copyright © 2013 Elsevier Masson SAS. All rights reserved.

  20. A simple approach to optimal control of invasive species.

    PubMed

    Hastings, Alan; Hall, Richard J; Taylor, Caz M

    2006-12-01

    The problem of invasive species and their control is one of the most pressing applied issues in ecology today. We developed simple approaches based on linear programming for determining the optimal removal strategies of different stage or age classes for control of invasive species that are still in a density-independent phase of growth. We illustrate the application of this method to the specific example of invasive Spartina alterniflora in Willapa Bay, WA. For all such systems, linear programming shows in general that the optimal strategy in any time step is to prioritize removal of a single age or stage class. The optimal strategy adjusts which class is the focus of control through time and can be much more cost effective than prioritizing removal of the same stage class each year.
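
    A minimal LP sketch with invented stage-structure numbers (not the Spartina parameters): choose removals x to minimize next-season population L @ (n - x) under a budget; because LP optima sit at vertices, the solution concentrates effort on a single stage class, as the paper's general result states:

        import numpy as np
        from scipy.optimize import linprog

        L = np.array([[0.0, 1.3, 2.0],     # fecundities (assumed)
                      [0.6, 0.0, 0.0],     # transitions/survival (assumed)
                      [0.0, 0.7, 0.9]])
        n = np.array([100.0, 50.0, 20.0])  # current stage abundances
        c = np.array([1.0, 2.0, 5.0])      # removal cost per individual
        budget = 60.0

        # minimizing 1^T L (n - x) over x is the same as maximizing (1^T L) x
        res = linprog(-(L.sum(axis=0)), A_ub=[c], b_ub=[budget],
                      bounds=list(zip([0.0] * 3, n)))
        print("removals per stage:", res.x.round(2))  # effort lands on one class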

  1. Optimal Investment in HIV Prevention Programs: More Is Not Always Better

    PubMed Central

    Brandeau, Margaret L.; Zaric, Gregory S.

    2008-01-01

    This paper develops a mathematical/economic framework to address the following question: Given a particular population, a specific HIV prevention program, and a fixed amount of funds that could be invested in the program, how much money should be invested? We consider the impact of investment in a prevention program on the HIV sufficient contact rate (defined via production functions that describe the change in the sufficient contact rate as a function of expenditure on a prevention program), and the impact of changes in the sufficient contact rate on the spread of HIV (via an epidemic model). In general, the cost per HIV infection averted is not constant as the level of investment changes, so the fact that some investment in a program is cost effective does not mean that more investment in the program is cost effective. Our framework provides a formal means for determining how the cost per infection averted changes with the level of expenditure. We can use this information as follows: When the program has decreasing marginal cost per infection averted (which occurs, for example, with a growing epidemic and a prevention program with increasing returns to scale), it is optimal either to spend nothing on the program or to spend the entire budget. When the program has increasing marginal cost per infection averted (which occurs, for example, with a shrinking epidemic and a prevention program with decreasing returns to scale), it may be optimal to spend some but not all of the budget. The amount that should be spent depends on both the rate of disease spread and the production function for the prevention program. We illustrate our ideas with two examples: that of a needle exchange program, and that of a methadone maintenance program. PMID:19938440

  2. Economic Incentives in the Socially Optimal Management of Infectious Disease: When R0 is Not Enough.

    PubMed

    Morin, B R; Kinzig, A P; Levin, S A; Perrings, C A

    2017-09-29

    Does society benefit from encouraging or discouraging private infectious disease-risk mitigation? Private individuals routinely mitigate infectious disease risks through the adoption of a range of precautions, from vaccination to changes in their contact with others. Such precautions have epidemiological consequences. Private disease-risk mitigation generally reduces both peak prevalence of symptomatic infection and the number of people who fall ill. At the same time, however, it can prolong an epidemic. A reduction in prevalence is socially beneficial. Prolongation of an epidemic is not. We find that for a large class of infectious diseases, private risk mitigation is socially suboptimal: either too low or too high. The social optimum requires either more or less private mitigation. Since private mitigation effort depends on the cost of mitigation and the cost of illness, interventions that change either of these costs may be used to alter mitigation decisions. We model the potential for instruments that affect the cost of illness to yield net social benefits. We find that where a disease is not very infectious or the duration of illness is short, it may be socially optimal to promote private mitigation effort by increasing the cost of illness. By contrast, where a disease is highly infectious or long lasting, it may be optimal to discourage private mitigation by reducing the cost of disease. Society would prefer a shorter, more intense, epidemic to a longer, less intense epidemic. There is, however, a region in parameter space where the relationship is more complicated. For moderately infectious diseases with medium infectious periods, the social optimum depends on interactions between prevalence and duration. Basic reproduction numbers are not sufficient to predict the social optimum.

  3. A novel heuristic for optimization aggregate production problem: Evidence from flat panel display in Malaysia

    NASA Astrophysics Data System (ADS)

    Al-Kuhali, K.; Hussain M., I.; Zain Z., M.; Mullenix, P.

    2015-05-01

    Aim: This paper contributes to the flat panel display industry in terms of aggregate production planning. Methodology: To minimize the total production cost of LCD manufacturing, linear programming was applied. The decision variables are general production costs, additional costs incurred for overtime production, additional costs incurred for subcontracting, inventory carrying costs, backorder costs, and adjustments for changes in labour levels. The model considers a manufacturer with up to N product types over a total time period of T. Results: An industrial case study based in Malaysia is presented to test and validate the developed linear programming model for aggregate production planning. Conclusion: The model is a good fit under stable environment conditions. Overall, the proven linear programming model can be recommended for adaptation to production planning in the Malaysian flat panel display industry.

  4. Automated divertor target design by adjoint shape sensitivity analysis and a one-shot method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dekeyser, W., E-mail: Wouter.Dekeyser@kuleuven.be; Reiter, D.; Baelmans, M.

    As magnetic confinement fusion progresses towards the development of first reactor-scale devices, computational tokamak divertor design is a topic of high priority. Presently, edge plasma codes are used in a forward approach, where magnetic field and divertor geometry are manually adjusted to meet design requirements. Due to the complex edge plasma flows and large number of design variables, this method is computationally very demanding. On the other hand, efficient optimization-based design strategies have been developed in computational aerodynamics and fluid mechanics. Such an optimization approach to divertor target shape design is elaborated in the present paper. A general formulation of the design problems is given, and conditions characterizing the optimal designs are formulated. Using a continuous adjoint framework, design sensitivities can be computed at a cost of only two edge plasma simulations, independent of the number of design variables. Furthermore, by using a one-shot method the entire optimization problem can be solved at an equivalent cost of only a few forward simulations. The methodology is applied to target shape design for uniform power load, in simplified edge plasma geometry.

  5. A model of optimal voluntary muscular control.

    PubMed

    FitzHugh, R

    1977-07-19

    In the absence of detailed knowledge of how the CNS controls a muscle through its motor fibers, a reasonable hypothesis is that of optimal control. This hypothesis is studied using a simplified mathematical model of a single muscle, based on A.V. Hill's equations, with series elastic element omitted, and with the motor signal represented by a single input variable. Two cost functions were used. The first was total energy expended by the muscle (work plus heat). If the load is a constant force, with no inertia, Hill's optimal velocity of shortening results. If the load includes a mass, analysis by optimal control theory shows that the motor signal to the muscle consists of three phases: (1) maximal stimulation to accelerate the mass to the optimal velocity as quickly as possible, (2) an intermediate level of stimulation to hold the velocity at its optimal value, once reached, and (3) zero stimulation, to permit the mass to slow down, as quickly as possible, to zero velocity at the specified distance shortened. If the latter distance is too small, or the mass too large, the optimal velocity is not reached, and phase (2) is absent. For lengthening, there is no optimal velocity; there are only two phases, zero stimulation followed by maximal stimulation. The second cost function was total time. The optimal control for shortening consists of only phases (1) and (3) above, and is identical to the minimal energy control whenever phase (2) is absent from the latter. Generalization of this model to include viscous loads and a series elastic element is discussed.

  6. Multimaterial topology optimization of contact problems using phase field regularization

    NASA Astrophysics Data System (ADS)

    Myśliński, Andrzej

    2018-01-01

    A numerical method to solve multimaterial topology optimization problems for elastic bodies in unilateral contact with Tresca friction is developed in this paper. The displacement of the elastic body in contact is governed by an elliptic equation with inequality boundary conditions. The body is assumed to consist of more than two distinct isotropic elastic materials. The material distribution function is chosen as the design variable. Since high contact stress appears during the contact phenomenon, the aim of the structural optimization problem is to find a topology of the domain occupied by the body such that the normal contact stress along the boundary of the body is minimized. The original cost functional is regularized using the multiphase volume-constrained Ginzburg-Landau energy functional rather than the perimeter functional. The first-order necessary optimality condition is recalled and used to formulate the generalized gradient flow equations of Allen-Cahn type. The optimal topology is obtained as the steady state of the phase transition governed by the generalized Allen-Cahn equation. As the interface width parameter tends to zero, the transition of the phase field model to the level set model is studied. The optimization problem is solved numerically using the operator splitting approach combined with the projection gradient method. Numerical examples confirming the applicability of the proposed method are provided and discussed.

  7. Processing Technology Selection for Municipal Sewage Treatment Based on a Multi-Objective Decision Model under Uncertainty.

    PubMed

    Chen, Xudong; Xu, Zhongwen; Yao, Liming; Ma, Ning

    2018-03-05

    This study considers the two factors of environmental protection and economic benefits to address municipal sewage treatment. Based on considerations regarding the sewage treatment plant construction site, processing technology, capital investment, operation costs, water pollutant emissions, water quality and other indicators, we establish a general multi-objective decision model for optimizing municipal sewage treatment plant construction. Using the construction of a sewage treatment plant in a suburb of Chengdu as an example, this paper tests the general model of multi-objective decision-making for the sewage treatment plant construction by implementing a genetic algorithm. The results show the applicability and effectiveness of the multi-objective decision model for the sewage treatment plant. This paper provides decision and technical support for the optimization of municipal sewage treatment.

  8. Coordinating vendor-buyer decisions for imperfect quality items considering trade credit and fully backlogged shortages

    NASA Astrophysics Data System (ADS)

    Khanna, Aditi; Gautam, Prerna; Jaggi, Chandra K.

    2016-03-01

    Supply chain management has become a critical issue for modern business environments. In today's world of cooperative decision-making, individual decisions made to reduce inventory costs may not lead to an overall optimal solution. Coordination is necessary among participants of the supply chain to achieve better performance. There are legitimate and important efforts from the vendor to enhance the relationship with the buyer; one such effort is offering trade credit, which has been a driver of growth and development of business between them. The cost of financing is a core consideration in effective financial management, in general and in the context of business. Also, due to imperfect production, a vendor may produce defective items, which results in shortages. Motivated by these aspects, an integrated vendor-buyer inventory model is developed for imperfect quality items with allowable shortages, in which the vendor offers a credit period to the buyer for payment. The objective is to minimize the total joint annual costs incurred by the vendor and the buyer by using an integrated decision-making approach. The expected total annual integrated cost is derived and a solution procedure is provided to find the optimal solution. Numerical analysis shows that the integrated model gives an impressive cost reduction in comparison to independent decision policies by the vendor and the buyer.

  9. Requirements and approach for a space tourism launch system

    NASA Astrophysics Data System (ADS)

    Penn, Jay P.; Lindley, Charles A.

    2003-01-01

    Market surveys suggest that a viable space tourism industry will require flight rates about two orders of magnitude higher than those required for conventional spacelift. Although enabling round-trip cost goals for a viable space tourism business are about $240/pound ($529/kg), or $72,000/passenger round-trip, goals should be about $50/pound ($110/kg) or approximately $15,000 for a typical passenger and baggage. The lower price will probably open space tourism to the general population. Vehicle reliabilities must approach those of commercial aircraft as closely as possible. This paper addresses the development of spaceplanes optimized for the ultra-high flight rate and high reliability demands of the space tourism mission. It addresses the fundamental operability, reliability, and cost drivers needed to satisfy this mission need. Figures of merit similar to those used to evaluate the economic viability of conventional commercial aircraft are developed, including items such as payload/vehicle dry weight, turnaround time, propellant cost per passenger, and insurance and depreciation costs, which show that infrastructure can be developed for a viable space tourism industry. A reference spaceplane design optimized for space tourism is described. Subsystem allocations for reliability, operability, and costs are made and a route to developing such a capability is discussed. The vehicle's ability to satisfy the traditional spacelift market is also shown.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soer, Wouter

    LED luminaires have seen dramatic changes in cost breakdown over the past few years. The LED component cost, which until recently was the dominant portion of luminaire cost, has fallen to a level of the same order as the other luminaire components, such as the driver, housing, optics etc. With the current state of the technology, further luminaire performance improvement and cost reduction is realized most effectively by optimization of the whole system, rather than a single component. This project focuses on improving the integration between LEDs and drivers. Lumileds has developed a light engine platform based on low-cost high-power LEDs and driver topologies optimized for integration with these LEDs on a single substrate. The integration of driver and LEDs enables an estimated luminaire cost reduction of about 25% for targeted applications, mostly due to significant reductions in driver and housing cost. The high-power LEDs are based on Lumileds’ patterned sapphire substrate flip-chip (PSS-FC) technology, affording reduced die fabrication and packaging cost compared to existing technology. Two general versions of PSS-FC die were developed in order to create the desired voltage and flux increments for driver integration: (i) small single-junction die (0.5 mm^2), optimal for distributed lighting applications, and (ii) larger multi-junction die (2 mm^2 and 4 mm^2) for high-power directional applications. Two driver topologies were developed: a tapped linear driver topology and a single-stage switch-mode topology, taking advantage of the flexible voltage configurations of the new PSS-FC die and the simplification opportunities enabled by integration of LEDs and driver on the same board. A prototype light engine was developed for an outdoor “core module” application based on the multi-junction PSS-FC die and the single-stage switch-mode driver. The light engine meets the project efficacy target of 128 lm/W at a luminous flux greater than 4100 lm, a correlated color temperature (CCT) of 4000K and a color rendering index (CRI) greater than 70.

  11. Assessing the shelf life of cost-efficient conservation plans for species at risk across gradients of agricultural land use.

    PubMed

    Robillard, Cassandra M; Kerr, Jeremy T

    2017-08-01

    High costs of land in agricultural regions warrant spatial prioritization approaches to conservation that explicitly consider land prices to produce protected-area networks that accomplish targets efficiently. However, land-use changes in such regions and delays between plan design and implementation may render optimized plans obsolete before implementation occurs. To measure the shelf life of cost-efficient conservation plans, we simulated a land-acquisition and restoration initiative aimed at conserving species at risk in Canada's farmlands. We accounted for observed changes in land-acquisition costs and in agricultural intensity based on censuses of agriculture taken from 1986 to 2011. For each year of data, we mapped costs and areas of conservation priority designated using Marxan. We compared plans to test for changes through time in the arrangement of high-priority sites and in the total cost of each plan. For acquisition costs, we measured the savings from accounting for prices during site selection. Land-acquisition costs and land-use intensity generally rose over time independent of inflation (24-78%), although rates of change were heterogeneous through space and decreased in some areas. Accounting for spatial variation in land price lowered the cost of conservation plans by 1.73-13.9%, decreased the range of costs by 19-82%, and created unique solutions from which to choose. Despite the rise in plan costs over time, the high conservation priority of particular areas remained consistent. Delaying conservation in these critical areas may compromise what optimized conservation plans can achieve. In the case of Canadian farmland, rapid conservation action is cost-effective, even with moderate levels of uncertainty in how to implement restoration goals. © 2016 Society for Conservation Biology.

  12. Game theory and risk-based leveed river system planning with noncooperation

    NASA Astrophysics Data System (ADS)

    Hui, Rui; Lund, Jay R.; Madani, Kaveh

    2016-01-01

    Optimal risk-based levee designs are usually developed for economic efficiency. However, in river systems with multiple levees, the planning and maintenance of different levees are controlled by different agencies or groups. For example, along many rivers, levees on opposite riverbanks constitute a simple leveed river system with each levee designed and controlled separately. Collaborative planning of the two levees can be economically optimal for the whole system. Independent and self-interested landholders on opposite riversides often are willing to separately determine their individual optimal levee plans, resulting in a less efficient leveed river system from an overall society-wide perspective (the tragedy of commons). We apply game theory to simple leveed river system planning where landholders on each riverside independently determine their optimal risk-based levee plans. Outcomes from noncooperative games are analyzed and compared with the overall economically optimal outcome, which minimizes net flood cost system-wide. The system-wide economically optimal solution generally transfers residual flood risk to the lower-valued side of the river, but is often impractical without compensating for flood risk transfer to improve outcomes for all individuals involved. Such compensation can be determined and implemented with landholders' agreements on collaboration to develop an economically optimal plan. By examining iterative multiple-shot noncooperative games with reversible and irreversible decisions, the costs of myopia for the future in making levee planning decisions show the significance of considering the externalities and evolution path of dynamic water resource problems to improve decision-making.
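
    A minimal sketch of the noncooperative setting with invented cost forms (not the paper's hydraulics): each riverside chooses its levee height to minimize its own annualized build cost plus expected damage, where raising the opposite levee shifts flood risk across the river; iterating best responses approximates the Nash outcome, which can then be compared against minimizing the system-wide total:

        import numpy as np
        from scipy.optimize import minimize_scalar

        def flood_prob(h_own, h_other):
            # assumed form: own levee protects, the other side's levee transfers risk
            return np.exp(-h_own) * (1.0 + 0.5 * np.tanh(h_other - h_own))

        def side_cost(h_own, h_other, damage):
            return 10.0 * h_own + damage * flood_prob(h_own, h_other)

        h = np.array([1.0, 1.0])
        damages = (300.0, 120.0)   # higher- vs lower-valued riverside (assumed)
        for _ in range(50):        # best-response iteration
            for i in (0, 1):
                h[i] = minimize_scalar(lambda x: side_cost(x, h[1 - i], damages[i]),
                                       bounds=(0.0, 10.0), method="bounded").x
        print("noncooperative levee heights:", h.round(2))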

  13. Evolutionary responses to climate change in parasitic systems.

    PubMed

    Chaianunporn, Thotsapol; Hovestadt, Thomas

    2015-08-01

    Species may respond to climate change in many ecological and evolutionary ways. In this simulation study, we focus on the concurrent evolution of three traits in response to climate change, namely dispersal probability, temperature tolerance (or niche width), and temperature preference (optimal habitat). More specifically, we consider evolutionary responses in host species involved in different types of interaction, that is parasitism or commensalism, and for low or high costs of a temperature tolerance-fertility trade-off (cost of generalization). We find that host species potentially evolve all three traits simultaneously in response to increasing temperature but that the evolutionary response interacts and may be compensatory depending on the conditions. The evolutionary adjustment of temperature preference is slower in the parasitism than in commensalism scenario. Parasitism, in turn, selects for higher temperature tolerance and increased dispersal. High costs for temperature tolerance (i.e. generalization) restrict evolution of tolerance and thus lead to a faster response in temperature preference than that observed under low costs. These results emphasize the possible role of biotic interactions and the importance of 'multidimensional' evolutionary responses to climate change. © 2015 John Wiley & Sons Ltd.

  14. Dynamic remapping of parallel computations with varying resource demands

    NASA Technical Reports Server (NTRS)

    Nicol, D. M.; Saltz, J. H.

    1986-01-01

    A large class of computational problems is characterized by frequent synchronization, and computational requirements which change as a function of time. When such a problem must be solved on a message passing multiprocessor machine, the combination of these characteristics leads to system performance which decreases in time. Performance can be improved with periodic redistribution of computational load; however, redistribution can exact a sometimes large delay cost. We study the issue of deciding when to invoke a global load remapping mechanism. Such a decision policy must effectively weigh the costs of remapping against the performance benefits. We treat this problem by constructing two analytic models which exhibit stochastically decreasing performance. One model is quite tractable; we are able to describe the optimal remapping algorithm, and the optimal decision policy governing when to invoke that algorithm. However, computational complexity prohibits the use of the optimal remapping decision policy. We then study the performance of a general remapping policy on both analytic models. This policy attempts to minimize a statistic W(n) which measures the system degradation (including the cost of remapping) per computation step over a period of n steps. We show that as a function of time, the expected value of W(n) has at most one minimum, and that when this minimum exists it defines the optimal fixed-interval remapping policy. Our decision policy appeals to this result by remapping when it estimates that W(n) is minimized. Our performance data suggests that this policy effectively finds the natural frequency of remapping. We also use the analytic models to express the relationship between performance and remapping cost, number of processors, and the computation's stochastic activity.
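
    A minimal sketch of the decision statistic with an assumed linear degradation model: W(n) averages the accumulated degradation plus the remapping delay over n steps, and its single minimum defines the fixed remapping interval:

        import numpy as np

        remap_cost = 25.0              # delay charged when remapping (assumed)

        def degradation(k):            # per-step slowdown k steps after a remap
            return 0.4 * k             # assumed linear growth

        def W(n):                      # degradation + remap cost, per step
            return (sum(degradation(k) for k in range(1, n + 1)) + remap_cost) / n

        ws = [W(n) for n in range(1, 40)]
        n_star = int(np.argmin(ws)) + 1
        print("remap every", n_star, "steps; W =", round(ws[n_star - 1], 2))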

  15. A method for determining optimum phasing of a multiphase propulsion system for a single-stage vehicle with linearized inert weight

    NASA Technical Reports Server (NTRS)

    Martin, J. A.

    1974-01-01

    A general analytical treatment is presented of a single-stage vehicle with multiple propulsion phases. A closed-form solution for the cost and for the performance and a derivation of the optimal phasing of the propulsion are included. Linearized variations in the inert weight elements are included, and the function to be minimized can be selected. The derivation of optimal phasing results in a set of nonlinear algebraic equations for optimal fuel volumes, for which a solution method is outlined. Three specific example cases are analyzed: minimum gross lift-off weight, minimum inert weight, and a minimized general function for a two-phase vehicle. The results for the two-phase vehicle are applied to the dual-fuel rocket. Comparisons with single-fuel vehicles indicate that dual-fuel vehicles can have lower inert weight either by development of a dual-fuel engine or by parallel burning of separate engines from lift-off.

  16. Evolutionary conservation of codon optimality reveals hidden signatures of cotranslational folding.

    PubMed

    Pechmann, Sebastian; Frydman, Judith

    2013-02-01

    The choice of codons can influence local translation kinetics during protein synthesis. Whether codon preference is linked to cotranslational regulation of polypeptide folding remains unclear. Here, we derive a revised translational efficiency scale that incorporates the competition between tRNA supply and demand. Applying this scale to ten closely related yeast species, we uncover the evolutionary conservation of codon optimality in eukaryotes. This analysis reveals universal patterns of conserved optimal and nonoptimal codons, often in clusters, which associate with the secondary structure of the translated polypeptides independent of the levels of expression. Our analysis suggests an evolved function for codon optimality in regulating the rhythm of elongation to facilitate cotranslational polypeptide folding, beyond its previously proposed role of adapting to the cost of expression. These findings establish how mRNA sequences are generally under selection to optimize the cotranslational folding of corresponding polypeptides.

  17. Role of the parameters involved in the plan optimization based on the generalized equivalent uniform dose and radiobiological implications

    NASA Astrophysics Data System (ADS)

    Widesott, L.; Strigari, L.; Pressello, M. C.; Benassi, M.; Landoni, V.

    2008-03-01

    We investigated the role and the weight of the parameters involved in intensity modulated radiation therapy (IMRT) optimization based on the generalized equivalent uniform dose (gEUD) method, for prostate and head-and-neck plans. We systematically varied the parameters (gEUDmax and weight) involved in the gEUD-based optimization of the rectal wall and parotid glands. We found that a properly chosen weight factor, while still guaranteeing coverage of the planning treatment volumes, produced similar organ-at-risk dose-volume (DV) histograms for different gEUDmax values with fixed a = 1. Most notably, we formulated a simple relation that links the reference gEUDmax and the associated weight factor. As a secondary objective, we compared plans obtained with gEUD-based optimization against plans based on DV criteria, using normal tissue complication probability (NTCP) models. The gEUD criteria appeared to improve sparing of the rectum and parotid glands with respect to DV-based optimization: the mean dose and the V40 and V50 values for the rectal wall decreased by about 10%, and the mean dose to the parotids decreased by about 20-30%. Even more significant than the OAR sparing was the halving of the OAR optimization time achieved with the gEUD-based cost function. Using NTCP models, we found clear differences between the two optimization criteria for the parotid glands, but not for the rectal wall.
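
    For readers unfamiliar with the gEUD, the sketch below computes the standard gEUD formula and a one-sided penalty of the kind used in gEUD-based cost functions; the penalty form and all numbers are illustrative assumptions, not the planning system's implementation.

    ```python
    # Sketch of a gEUD-based OAR penalty term; parameters are illustrative only.
    import numpy as np

    def gEUD(doses, a):
        """Generalized equivalent uniform dose of a voxel dose array.
        a = 1 gives the mean dose; large a approaches the maximum dose."""
        doses = np.asarray(doses, dtype=float)
        return (np.mean(doses ** a)) ** (1.0 / a)

    def oar_penalty(doses, gEUD_max, weight, a=1.0):
        """One-sided quadratic penalty: nonzero only when gEUD exceeds gEUD_max."""
        excess = max(gEUD(doses, a) - gEUD_max, 0.0)
        return weight * excess ** 2

    rectal_wall = np.random.default_rng(1).uniform(10, 70, size=1000)  # Gy, toy data
    print(gEUD(rectal_wall, a=1.0))            # equals the mean dose for a = 1
    print(oar_penalty(rectal_wall, gEUD_max=35.0, weight=50.0))
    ```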

  18. Epidemiological game-theory dynamics of chickenpox vaccination in the USA and Israel

    PubMed Central

    Liu, Jingzhou; Kochin, Beth F.; Tekle, Yonas I.; Galvani, Alison P.

    2012-01-01

    The general consensus from epidemiological game-theory studies is that vaccination coverage driven by self-interest (Nash vaccination) is generally lower than group-optimal coverage (utilitarian vaccination). However, diseases that become more severe with age, such as chickenpox, pose an exception to this general consensus. An individual's choice to be vaccinated against chickenpox has the potential to harm those not vaccinated by increasing the average age at infection and thus the severity of infection, as well as those already vaccinated by increasing the probability of breakthrough infection. To investigate the effects of these externalities on the relationship between Nash and utilitarian vaccination coverages for chickenpox, we developed a game-theory epidemic model that we apply to the USA and Israel, which have different vaccination programmes, vaccination and treatment costs, as well as vaccination coverage levels. We find that the increase in chickenpox severity with age can reverse the typical relationship between utilitarian and Nash vaccination coverages in both the USA and Israel. Our model suggests that to obtain herd immunity through chickenpox vaccination, subsidies or external regulation should be used if vaccination costs are high. By contrast, for low vaccination costs, improving awareness of the vaccine and the potential cost of chickenpox infection is crucial. PMID:21632611

  19. Epidemiological game-theory dynamics of chickenpox vaccination in the USA and Israel.

    PubMed

    Liu, Jingzhou; Kochin, Beth F; Tekle, Yonas I; Galvani, Alison P

    2012-01-07

    The general consensus from epidemiological game-theory studies is that vaccination coverage driven by self-interest (Nash vaccination) is generally lower than group-optimal coverage (utilitarian vaccination). However, diseases that become more severe with age, such as chickenpox, pose an exception to this general consensus. An individual's choice to be vaccinated against chickenpox has the potential to harm those not vaccinated by increasing the average age at infection and thus the severity of infection, as well as those already vaccinated by increasing the probability of breakthrough infection. To investigate the effects of these externalities on the relationship between Nash and utilitarian vaccination coverages for chickenpox, we developed a game-theory epidemic model that we apply to the USA and Israel, which have different vaccination programmes, vaccination and treatment costs, as well as vaccination coverage levels. We find that the increase in chickenpox severity with age can reverse the typical relationship between utilitarian and Nash vaccination coverages in both the USA and Israel. Our model suggests that to obtain herd immunity through chickenpox vaccination, subsidies or external regulation should be used if vaccination costs are high. By contrast, for low vaccination costs, improving awareness of the vaccine and the potential cost of chickenpox infection is crucial.
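
    The contrast between Nash and utilitarian coverage can be illustrated with a static toy game, sketched below; the infection-cost curve, herd-immunity threshold, and all parameters are invented and far simpler than the paper's dynamic transmission model.

    ```python
    # Toy static vaccination game contrasting Nash and utilitarian coverage.
    import numpy as np

    c_vac = 1.0                      # cost of vaccination

    def c_inf(p):
        """Expected infection cost for an unvaccinated individual at coverage p.
        Severity rising with coverage mimics the increasing-age-at-infection
        externality described for chickenpox (invented functional form)."""
        risk = max(1.0 - p / 0.9, 0.0)        # crude herd-immunity threshold at 90%
        severity = 3.0 + 4.0 * p              # severity grows as coverage rises
        return risk * severity

    coverages = np.linspace(0.0, 1.0, 1001)

    # Nash coverage: individuals vaccinate while infection cost exceeds vaccine cost.
    nash = next((p for p in coverages if c_inf(p) <= c_vac), 1.0)

    # Utilitarian coverage: minimize total expected cost per capita.
    total = [p * c_vac + (1 - p) * c_inf(p) for p in coverages]
    utilitarian = coverages[int(np.argmin(total))]

    print(f"Nash ~ {nash:.2f}, utilitarian ~ {utilitarian:.2f}")
    ```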

  20. Cost effectiveness of a general practice chronic disease management plan for coronary heart disease in Australia.

    PubMed

    Chew, Derek P; Carter, Robert; Rankin, Bree; Boyden, Andrew; Egan, Helen

    2010-05-01

    The cost effectiveness of a general practice-based program for managing coronary heart disease (CHD) patients in Australia remains uncertain. We have explored this through an economic model. A secondary prevention program, based on initial clinical assessment and 3-monthly review, optimization of pharmacotherapies and lifestyle modification, supported by a disease registry and financial incentives for quality of care and outcomes achieved, was assessed in terms of its incremental cost effectiveness ratio (ICER), in Australian dollars per disability adjusted life year (DALY) prevented. Based on 2006 estimates, 263,487 DALYs were attributable to CHD in Australia. The proposed program would add $115,650,000 to the annual national health expenditure. Using an estimated 15% reduction in death and disability and a 40% estimated program uptake, the program's ICER is $8081 per DALY prevented. With more conservative estimates of effectiveness and uptake, estimates of up to $38,316 per DALY are observed in sensitivity analysis. Although innovation in CHD management promises improved future patient outcomes, many therapies and strategies proven to reduce morbidity and mortality are available today. A general practice-based program for the optimal application of current therapies is likely to be cost-effective and provide substantial and sustainable benefits to the Australian community.
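
    The headline ICER can be approximately reconstructed from the figures quoted above. The sketch below shows the arithmetic under our simplifying assumption that the ICER is simply program cost divided by DALYs prevented; the published model also nets out downstream cost offsets, so its $8081 figure differs slightly.

    ```python
    # Back-of-envelope reconstruction of the ICER arithmetic (our simplification).
    dalys_attributable = 263_487     # DALYs attributable to CHD (2006)
    program_cost = 115_650_000       # added annual national health expenditure, AUD
    effectiveness = 0.15             # assumed reduction in death and disability
    uptake = 0.40                    # assumed program uptake

    dalys_prevented = dalys_attributable * effectiveness * uptake
    icer = program_cost / dalys_prevented
    print(f"DALYs prevented ~ {dalys_prevented:,.0f}; ICER ~ ${icer:,.0f}/DALY")
    # ~15,809 DALYs prevented and ~$7,300/DALY, the same order as the paper's $8081.
    ```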

  1. The public health implications of asthma.

    PubMed Central

    Bousquet, Jean; Bousquet, Philippe J.; Godard, Philippe; Daures, Jean-Pierre

    2005-01-01

    Asthma is a very common chronic disease that occurs in all age groups and is the focus of various clinical and public health interventions. Both morbidity and mortality from asthma are significant. The number of disability-adjusted life years (DALYs) lost due to asthma worldwide is similar to that for diabetes, liver cirrhosis and schizophrenia. Asthma management plans have, however, reduced mortality and severity in countries where they have been applied. Several barriers reduce the availability, affordability, dissemination and efficacy of optimal asthma management plans in both developed and developing countries. The workplace environment contributes significantly to the general burden of asthma. Patients with occupational asthma have higher rates of hospitalization and mortality than healthy workers. The surveillance of asthma as part of a global WHO programme is essential. The economic cost of asthma is considerable both in terms of direct medical costs (such as hospital admissions and the cost of pharmaceuticals) and indirect costs (such as time lost from work and premature death). Direct costs are significant in most countries. In order to reduce costs and improve quality of care, employers and health plans are exploring more precisely targeted ways of controlling rapidly rising health costs. Poor control of asthma symptoms is a major issue that can result in adverse clinical and economic outcomes. A model of asthma costs is needed to aid attempts to reduce them while permitting optimal management of the disease. This paper presents a discussion of the burden of asthma and its socioeconomic implications and proposes a model to predict the costs incurred by the disease. PMID:16175830

  2. Integrated controls design optimization

    DOEpatents

    Lou, Xinsheng; Neuschaefer, Carl H.

    2015-09-01

    A control system (207) for optimizing a chemical looping process of a power plant includes an optimizer (420), an income algorithm (230), a cost algorithm (225), and chemical looping process models. The process models are used to predict the process outputs from process input variables. Some of the process input and output variables are related to the income of the plant, and others are related to the cost of the plant operations. The income algorithm (230) provides an income input to the optimizer (420) based on a plurality of input parameters (215) of the power plant. The cost algorithm (225) provides a cost input to the optimizer (420) based on a plurality of output parameters (220) of the power plant. The optimizer (420) determines an optimized operating parameter solution based on at least one of the income input and the cost input, and supplies the optimized operating parameter solution to the power plant.

  3. Development of Miniaturized Optimized Smart Sensors (MOSS) for space plasmas

    NASA Technical Reports Server (NTRS)

    Young, D. T.

    1993-01-01

    The cost of space plasma sensors is high for several reasons: (1) Most are one-of-a-kind and state-of-the-art, (2) the cost of launch to orbit is high, (3) ruggedness and reliability requirements lead to costly development and test programs, and (4) overhead is added by overly elaborate or generalized spacecraft interface requirements. Possible approaches to reducing costs include development of small 'sensors' (defined as including all necessary optics, detectors, and related electronics) that will ultimately lead to cheaper missions by reducing (2), improving (3), and, through work with spacecraft designers, reducing (4). Despite this logical approach, there is no guarantee that smaller sensors are necessarily either better or cheaper. We have previously advocated applying analytical 'quality factors' to plasma sensors (and spacecraft) and have begun to develop miniaturized particle optical systems by applying quantitative optimization criteria. We are currently designing a Miniaturized Optimized Smart Sensor (MOSS) in which miniaturized electronics (e.g., employing new power supply topology and extensive use of gate arrays and hybrid circuits) are fully integrated with newly developed particle optics to give significant savings in volume and mass. The goal of the SwRI MOSS program is development of a fully self-contained and functional plasma sensor weighing 1 lb and requiring 1 W. MOSS will require only a typical spacecraft DC power source (e.g., 30 V) and command/data interfaces in order to be fully functional, and will provide measurement capabilities comparable in most ways to current sensors.

  4. Understanding the Benefits and Limitations of Increasing Maximum Rotor Tip Speed for Utility-Scale Wind Turbines

    NASA Astrophysics Data System (ADS)

    Ning, A.; Dykes, K.

    2014-06-01

    For utility-scale wind turbines, the maximum rotor rotation speed is generally constrained by noise considerations. Innovations in acoustics and/or siting in remote locations may enable future wind turbine designs to operate with higher tip speeds. Wind turbines designed to take advantage of higher tip speeds are expected to be able to capture more energy and utilize lighter drivetrains because of their decreased maximum torque loads. However, the magnitude of the potential cost savings is unclear, and the potential trade-offs with rotor and tower sizing are not well understood. A multidisciplinary, system-level framework was developed to facilitate wind turbine and wind plant analysis and optimization. The rotors, nacelles, and towers of wind turbines are optimized for minimum cost of energy subject to a large number of structural, manufacturing, and transportation constraints. These optimization studies suggest that allowing for higher maximum tip speeds could result in a decrease in the cost of energy of up to 5% for land-based sites and 2% for offshore sites when using current technology. Almost all of the cost savings are attributed to the decrease in gearbox mass as a consequence of the reduced maximum rotor torque. Although there is some increased energy capture, it is very minimal (less than 0.5%). Extreme increases in tip speed are unnecessary; benefits for maximum tip speeds greater than 100-110 m/s are small to nonexistent.

  5. Reduction of liquid hydrogen boiloff: Optimal reliquefaction system design and cost study

    NASA Technical Reports Server (NTRS)

    1978-01-01

    A preliminary design and economic analysis of candidate hydrogen reliquefaction systems was performed. All candidate systems are of the same general type; the differences are in size, compressor arrangement, and amount of hydrogen venting. The potential application of hydrogen reliquefaction will be to reduce the boil-off from the 850,000 gallon storage dewars at LC-39.

  6. Economic Planning for Multicounty Rural Areas: Application of a Linear Programming Model in Northwest Arkansas. Technical Bulletin No. 1653.

    ERIC Educational Resources Information Center

    Williams, Daniel G.

    Planners in multicounty rural areas can use the Rural Development, Activity Analysis Planning (RDAAP) model to try to influence the optimal growth of their areas among different general economic goals. The model implies that the best industries for rural areas have: a high proportion of imported inputs; low transportation costs; high value added/output…

  7. The Robustness of Designs for Trials with Nested Data against Incorrect Initial Intracluster Correlation Coefficient Estimates

    ERIC Educational Resources Information Center

    Korendijk, Elly J. H.; Moerbeek, Mirjam; Maas, Cora J. M.

    2010-01-01

    In the case of trials with nested data, the optimal allocation of units depends on the budget, the costs, and the intracluster correlation coefficient. In general, the intracluster correlation coefficient is unknown in advance and an initial guess has to be made based on published values or subject matter knowledge. This initial estimate is likely…

  8. The optimal outcomes of post-hospital care under medicare.

    PubMed Central

    Kane, R L; Chen, Q; Finch, M; Blewett, L; Burns, R; Moskowitz, M

    2000-01-01

    OBJECTIVE: To estimate the differences in functional outcomes attributable to discharge to one of four different venues for post-hospital care for each of five different types of illness associated with post-hospital care: stroke, chronic obstructive pulmonary disease (COPD), congestive heart failure (CHF), hip procedures, and hip fracture, and to estimate the costs and benefits associated with discharge to the type of care that was estimated to produce the greatest improvement. STUDY SETTING/DATA SOURCES: Consecutive patients with any of the target diagnoses were enrolled from 52 hospitals in three cities. Data sources included interviews with patients or their proxies, medical record reviews, and the Medicare Automated Data Retrieval System. ANALYSIS: A two-stage regression model looked first at the factors associated with discharge to each type of post-hospital care and then at the outcomes associated with each location. An instrumental variables technique was used to adjust for selection bias. A predictive model was created for each patient to estimate how that person would have fared had she or he been discharged to each type of care. The optimal discharge location was determined as that which produced the greatest improvement in function after adjusting for patients' baseline characteristics. The costs of discharge to the optimal type of care were based on the differences in mean costs for each location. DATA COLLECTION/EXTRACTION METHODS: Data were collected from patients or their proxies at discharge from hospital and at three post-discharge follow-up times: six weeks, six months, and one year. In addition, the medical records for each participant were abstracted by trained abstractors, using a modification of the Medisgroups method, and Medicare data were summarized for the years before and after the hospitalization. PRINCIPAL FINDINGS: In general, patients discharged to nursing homes fared worst and those sent home with home health care or to rehabilitation did best. Because the cost of rehabilitation is high, greater use of home care could result in improved outcomes at modest or no additional cost. CONCLUSIONS: Better decisions about where to discharge patients could improve the course of many patients. It is possible to save money by making wiser discharge planning decisions. Nursing homes are generally associated with poorer outcomes and higher costs than the other post-hospital care modalities. PMID:10966088

  9. Optimal control of information epidemics modeled as Maki Thompson rumors

    NASA Astrophysics Data System (ADS)

    Kandhway, Kundan; Kuri, Joy

    2014-12-01

    We model the spread of information in a homogeneously mixed population using the Maki Thompson rumor model. We formulate an optimal control problem, from the perspective of a single campaigner, to maximize the spread of information when the campaign budget is fixed. Control signals, such as advertising in the mass media, attempt to convert ignorants and stiflers into spreaders. We show the existence of a solution to the optimal control problem when the campaigning incurs non-linear costs under the isoperimetric budget constraint. The solution employs Pontryagin's Minimum Principle and a modified version of the forward-backward sweep technique for numerical computation to accommodate the isoperimetric budget constraint. The techniques developed in this paper are general and can be applied to similar optimal control problems in other areas. We have allowed the spreading rate of the information epidemic to vary over the campaign duration to model practical situations in which the interest level of the population in the subject of the campaign changes with time. The shape of the optimal control signal is studied for different model parameters and spreading rate profiles. We have also studied the variation of the optimal campaigning costs with respect to various model parameters. Results indicate that, for some model parameters, significant improvements can be achieved by the optimal strategy compared to the static control strategy. The static strategy respects the same budget constraint as the optimal strategy and has a constant value throughout the campaign horizon. This work finds application in election and social awareness campaigns, product advertising, movie promotion and crowdfunding campaigns.
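
    The forward-backward sweep itself is easy to demonstrate on a textbook problem. The sketch below applies it to a scalar linear-quadratic problem rather than the paper's Maki Thompson model, and omits the isoperimetric budget constraint.

    ```python
    # Generic forward-backward sweep on a textbook scalar problem:
    # minimize integral of (x^2 + c*u^2) dt subject to x' = -x + u.
    import numpy as np

    T, N, c = 5.0, 500, 0.5
    dt = T / N
    x0 = 1.0
    u = np.zeros(N + 1)                        # initial control guess

    for sweep in range(200):
        # Forward sweep: integrate the state with the current control (Euler).
        x = np.empty(N + 1); x[0] = x0
        for k in range(N):
            x[k + 1] = x[k] + dt * (-x[k] + u[k])
        # Backward sweep: costate obeys lam' = -dH/dx = -2x + lam.
        lam = np.empty(N + 1); lam[-1] = 0.0   # transversality condition
        for k in range(N, 0, -1):
            lam[k - 1] = lam[k] - dt * (-2.0 * x[k] + lam[k])
        # Optimality condition dH/du = 0 gives u = -lam/(2c); relax the update.
        u_new = -lam / (2.0 * c)
        if np.max(np.abs(u_new - u)) < 1e-6:
            break
        u = 0.5 * u + 0.5 * u_new

    print(f"converged after {sweep} sweeps; u(0) ~ {u[0]:.3f}")
    ```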

  10. Static and Dynamic Aeroelastic Tailoring With Variable Camber Control

    NASA Technical Reports Server (NTRS)

    Stanford, Bret K.

    2016-01-01

    This paper examines the use of a Variable Camber Continuous Trailing Edge Flap (VCCTEF) system for aeroservoelastic optimization of a transport wingbox. The quasisteady and unsteady motions of the flap system are utilized as design variables, along with patch-level structural variables, towards minimizing wingbox weight via maneuver load alleviation and active flutter suppression. The resulting system is, in general, very successful at removing structural weight in a feasible manner. Limitations to this success are imposed by including load cases where the VCCTEF system is not active (open-loop) in the optimization process, and also by including actuator operating cost constraints.

  11. Fourier Spectral Filter Array for Optimal Multispectral Imaging.

    PubMed

    Jia, Jie; Barnard, Kenneth J; Hirakawa, Keigo

    2016-04-01

    Limitations to existing multispectral imaging modalities include speed, cost, range, spatial resolution, and application-specific system designs that lack versatility of the hyperspectral imaging modalities. In this paper, we propose a novel general-purpose single-shot passive multispectral imaging modality. Central to this design is a new type of spectral filter array (SFA) based not on the notion of spatially multiplexing narrowband filters, but instead aimed at enabling single-shot Fourier transform spectroscopy. We refer to this new SFA pattern as Fourier SFA, and we prove that this design solves the problem of optimally sampling the hyperspectral image data.

  12. Cost-Effectiveness of Screening Individuals With Cystic Fibrosis for Colorectal Cancer.

    PubMed

    Gini, Andrea; Zauber, Ann G; Cenin, Dayna R; Omidvari, Amir-Houshang; Hempstead, Sarah E; Fink, Aliza K; Lowenfels, Albert B; Lansdorp-Vogelaar, Iris

    2017-12-27

    Individuals with cystic fibrosis are at increased risk of colorectal cancer (CRC) compared to the general population, and risk is higher among those who received an organ transplant. We performed a cost-effectiveness analysis to determine optimal CRC screening strategies for patients with cystic fibrosis. We adjusted the existing Microsimulation Screening Analysis-Colon microsimulation model to reflect increased CRC risk and lower life expectancy in patients with cystic fibrosis. Modeling was performed separately for individuals who never received an organ transplant and patients who had received an organ transplant. We modeled 76 colonoscopy screening strategies that varied the age range and screening interval. The optimal screening strategy was determined based on a willingness to pay threshold of $100,000 per life-year gained. Sensitivity and supplementary analyses were performed, including fecal immunochemical test (FIT) as an alternative test, earlier ages of transplantation, and increased rates of colonoscopy complications, to assess whether optimal screening strategies would change. Colonoscopy every 5 years, starting at age 40 years, was the optimal colonoscopy strategy for patients with cystic fibrosis who never received an organ transplant; this strategy prevented 79% of deaths from CRC. Among patients with cystic fibrosis who had received an organ transplant, optimal colonoscopy screening should start at an age of 30 or 35 years, depending on the patient's age at time of transplantation. Annual FIT screening was predicted to be cost-effective for patients with cystic fibrosis. However, the level of accuracy of the FIT in this population is not clear. Using a Microsimulation Screening Analysis-Colon microsimulation model, we found screening of patients with cystic fibrosis for CRC to be cost-effective. Due to the higher risk of CRC in these patients, screening should start at an earlier age with a shorter screening interval. The findings of this study (especially those on FIT screening) may be limited by restricted evidence available for patients with cystic fibrosis. Copyright © 2017 AGA Institute. Published by Elsevier Inc. All rights reserved.
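
    The strategy-selection logic under a willingness-to-pay threshold can be sketched as a simplified walk along cost-sorted strategies; the strategies and numbers below are invented for illustration, and a full analysis would also remove dominated and extended-dominated strategies.

    ```python
    # Simplified incremental cost-effectiveness walk; all figures are hypothetical.
    WTP = 100_000  # $ per life-year gained

    # (name, total cost $, life-years gained) per 1,000 patients -- invented.
    strategies = [
        ("no screening",             0,         0.0),
        ("colonoscopy q10y at 50",   900_000,   30.0),
        ("colonoscopy q5y at 40",    2_100_000, 48.0),
        ("colonoscopy q3y at 30",    3_900_000, 52.0),
    ]

    # Walk the cost-sorted strategies; keep each one whose incremental
    # cost-effectiveness ratio (ICER) versus the last kept strategy is under WTP.
    strategies.sort(key=lambda s: s[1])
    best = strategies[0]
    for name, cost, lyg in strategies[1:]:
        d_cost, d_lyg = cost - best[1], lyg - best[2]
        if d_lyg > 0 and d_cost / d_lyg <= WTP:
            best = (name, cost, lyg)
    print("optimal strategy:", best[0])
    ```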

  13. Demand side management in recycling and electricity retail pricing

    NASA Astrophysics Data System (ADS)

    Kazan, Osman

    This dissertation addresses several problems from the recycling industry and the electricity retail market. The first paper addresses a real-life scheduling problem faced by a national industrial recycling company. Based on the company's practices, a scheduling problem is defined, modeled, and analyzed, and a solution is approximated efficiently. The recommended application is tested on real-life data and randomly generated data. The scheduling improvements and the financial benefits are presented. The second problem comes from the electricity retail market. There are well-known hourly patterns in daily electricity usage, and these patterns change in shape and magnitude with the season and the day of the week. Generation costs are several times higher during the peak hours of the day, yet most consumers purchase electricity at flat rates. This work explores analytic pricing tools to reduce peak-load electricity demand for retailers. For that purpose, a nonlinear model that determines optimal hourly prices is established based on two major components: unit generation costs and consumers' utility. Both are analyzed and estimated empirically in the third paper. A pricing model is introduced to maximize the electric retailer's profit, and a closed-form expression for the optimal price vector is obtained. Possible scenarios are evaluated for the distribution of consumers' utility. For the general case, we provide a numerical solution methodology to obtain the optimal pricing scheme. The recommended models are tested under various scenarios that consider consumer segmentation and multiple pricing policies; the recommended model reduces the peak load significantly in most cases. Several utility companies offer hourly pricing to their customers, determining prices from historical data on unit electricity cost over time. In this dissertation we develop a nonlinear model that determines optimal hourly prices with parameter estimation. The last paper includes a regression analysis of the unit generation cost function obtained from Independent System Operators. A consumer experiment is established to replicate the peak-load behavior, from which consumers' utility function is estimated and optimal retail electricity prices are computed.
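
    As a toy version of the closed-form pricing result, the sketch below maximizes hourly retail profit under a linear demand response, where demand q = a - b*p gives the per-hour optimum p* = (a/b + c)/2; all demand and cost parameters are invented, not the dissertation's estimates.

    ```python
    # Toy peak-load retail pricing with linear demand response (invented numbers).
    import numpy as np

    hours = np.arange(24)
    base_load = 1.0 + 0.6 * np.exp(-((hours - 18) ** 2) / 8.0)   # evening peak, MW
    unit_cost = 30.0 + 60.0 * (base_load - 1.0)                  # $/MWh, peak-heavy
    elasticity = 0.01                                            # MW per $/MWh

    def profit(p):
        q = np.maximum(base_load - elasticity * (p - 50.0), 0.0) # linear response
        return np.sum((p - unit_cost) * q)

    # For linear demand q = a - b*p and unit cost c, the profit-maximizing price
    # has the closed form p* = (a/b + c) / 2 per hour (no cross-hour coupling here).
    a = base_load + elasticity * 50.0
    p_star = (a / elasticity + unit_cost) / 2.0
    print(np.round(p_star[[3, 18]], 1), "profit:", round(profit(p_star), 1))
    ```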

  14. Cost Effectiveness of Screening Individuals With Cystic Fibrosis for Colorectal Cancer.

    PubMed

    Gini, Andrea; Zauber, Ann G; Cenin, Dayna R; Omidvari, Amir-Houshang; Hempstead, Sarah E; Fink, Aliza K; Lowenfels, Albert B; Lansdorp-Vogelaar, Iris

    2018-02-01

    Individuals with cystic fibrosis are at increased risk of colorectal cancer (CRC) compared with the general population, and risk is higher among those who received an organ transplant. We performed a cost-effectiveness analysis to determine optimal CRC screening strategies for patients with cystic fibrosis. We adjusted the existing Microsimulation Screening Analysis-Colon model to reflect increased CRC risk and lower life expectancy in patients with cystic fibrosis. Modeling was performed separately for individuals who never received an organ transplant and patients who had received an organ transplant. We modeled 76 colonoscopy screening strategies that varied the age range and screening interval. The optimal screening strategy was determined based on a willingness to pay threshold of $100,000 per life-year gained. Sensitivity and supplementary analyses were performed, including fecal immunochemical test (FIT) as an alternative test, earlier ages of transplantation, and increased rates of colonoscopy complications, to assess if optimal screening strategies would change. Colonoscopy every 5 years, starting at an age of 40 years, was the optimal colonoscopy strategy for patients with cystic fibrosis who never received an organ transplant; this strategy prevented 79% of deaths from CRC. Among patients with cystic fibrosis who had received an organ transplant, optimal colonoscopy screening should start at an age of 30 or 35 years, depending on the patient's age at time of transplantation. Annual FIT screening was predicted to be cost-effective for patients with cystic fibrosis. However, the level of accuracy of the FIT in this population is not clear. Using a Microsimulation Screening Analysis-Colon model, we found screening of patients with cystic fibrosis for CRC to be cost effective. Because of the higher risk of CRC in these patients, screening should start at an earlier age with a shorter screening interval. The findings of this study (especially those on FIT screening) may be limited by restricted evidence available for patients with cystic fibrosis. Copyright © 2018 AGA Institute. Published by Elsevier Inc. All rights reserved.

  15. Conceptual design of the 6 MW Mod-5A wind turbine generator

    NASA Technical Reports Server (NTRS)

    Barton, R. S.; Lucas, W. C.

    1982-01-01

    The General Electric Company, Advanced Energy Programs Department, is designing under DOE/NASA sponsorship the MOD-5A wind turbine system, which must generate electricity for 3.75 cents/kWh (1980) or less. During the Conceptual Design Phase, completed in March 1981, the MOD-5A WTG system size and features were established as a result of tradeoff and optimization studies driven by minimizing the system cost of energy (COE). This led to a 400-ft rotor diameter. The MOD-5A system which resulted is defined in this paper along with the operational and environmental factors that drive various portions of the design. Development of weight and cost estimating relationships (WCERs) and their use in optimizing the MOD-5A are discussed. The results of major tradeoff studies are also presented. Subsystem COE contributions for the 100th unit are shown along with the method of computation. Detailed descriptions of the major subsystems are given so that the results of the various trade and optimization studies can be more readily visualized.

  16. A flowsheet model of a well-mixed fluidized bed dryer: Applications in controllability assessment and optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Langrish, T.A.G.; Harvey, A.C.

    2000-01-01

    A model of a well-mixed fluidized-bed dryer within a process flowsheeting package (SPEEDUP) has been developed and applied to a parameter sensitivity study, a steady-state controllability analysis and an optimization study. This approach is more general and would be more easily applied to a complex flowsheet than one which relied on stand-alone dryer modeling packages. The simulation has shown that industrial data may be fitted to the model outputs with sensible values of unknown parameters. For this case study, the parameter sensitivity study has found that the heat loss from the dryer and the critical moisture content of the material have the greatest impact on the dryer operation at the current operating point. An optimization study has demonstrated the dominant effect of the heat loss from the dryer on the current operating cost and the current operating conditions, and substantial cost savings (around 50%) could be achieved with a well-insulated and airtight dryer, for the specific case studied here.

  17. Riemannian geometric approach to human arm dynamics, movement optimization, and invariance

    NASA Astrophysics Data System (ADS)

    Biess, Armin; Flash, Tamar; Liebermann, Dario G.

    2011-03-01

    We present a generally covariant formulation of human arm dynamics and optimization principles in Riemannian configuration space. We extend the one-parameter family of mean-squared-derivative (MSD) cost functionals from Euclidean to Riemannian space, and we show that they are mathematically identical to the corresponding dynamic costs when formulated in a Riemannian space equipped with the kinetic energy metric. In particular, we derive the equivalence of the minimum-jerk and minimum-torque change models in this metric space. Solutions of the one-parameter family of MSD variational problems in Riemannian space are given by (reparametrized) geodesic paths, which correspond to movements with least muscular effort. Finally, movement invariants are derived from symmetries of the Riemannian manifold. We argue that the geometrical structure imposed on the arm’s configuration space may provide insights into the emerging properties of the movements generated by the motor system.
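
    As a sketch in our notation (not necessarily the paper's), the Euclidean MSD family and its Riemannian analogue can be written as:

    ```latex
    % Euclidean MSD cost family, with x(t) the hand path; n = 3 gives minimum jerk:
    \[
      C_n[x] = \frac{1}{2} \int_0^T \left\lVert \frac{d^n x}{dt^n} \right\rVert^2 dt
    \]
    % Riemannian analogue: replace time derivatives by covariant derivatives D/dt
    % along the trajectory q(t) and the norm by the kinetic-energy metric g:
    \[
      C_n[q] = \frac{1}{2} \int_0^T
        g\!\left( \frac{D^{n-1}\dot q}{dt^{n-1}}, \frac{D^{n-1}\dot q}{dt^{n-1}} \right) dt
    \]
    % For n = 1 the extremals are geodesics, i.e., paths of least muscular effort.
    ```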

  18. Management of a stage-structured insect pest: an application of approximate optimization.

    PubMed

    Hackett, Sean C; Bonsall, Michael B

    2018-06-01

    Ecological decision problems frequently require the optimization of a sequence of actions over time where actions may have both immediate and downstream effects. Dynamic programming can solve such problems only if the dimensionality is sufficiently low. Approximate dynamic programming (ADP) provides a suite of methods applicable to problems of arbitrary complexity at the expense of guaranteed optimality. The most easily generalized method is the look-ahead policy: a brute-force algorithm that identifies reasonable actions by constructing and solving a series of temporally truncated approximations of the full problem over a defined planning horizon. We develop and apply this approach to a pest management problem inspired by the Mediterranean fruit fly, Ceratitis capitata. The model aims to minimize the cumulative costs of management actions and medfly-induced losses over a single 16-week season. The medfly population is stage-structured and grows continuously while management decisions are made at discrete, weekly intervals. For each week, the model chooses between inaction, insecticide application, or one of six sterile insect release ratios. Look-ahead policy performance is evaluated over a range of planning horizons, two levels of crop susceptibility to medfly and three levels of pesticide persistence. In all cases, the actions proposed by the look-ahead policy are contrasted to those of a myopic policy that minimizes costs over only the current week. We find that look-ahead policies always outperformed a myopic policy and that decision quality is sensitive to the temporal distribution of costs relative to the planning horizon: it is beneficial to extend the planning horizon when it excludes pertinent costs. However, longer planning horizons may reduce decision quality when major costs are resolved imminently. ADP methods such as the look-ahead-policy-based approach developed here render questions that are intractable to dynamic programming amenable to inference, but should be applied carefully as their flexibility comes at the expense of guaranteed optimality. However, given the complexity of many ecological management problems, the capacity to propose a strategy that is "good enough" using a more representative problem formulation may be preferable to an optimal strategy derived from a simplified model. © 2018 by the Ecological Society of America.
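
    The brute-force look-ahead structure is compact enough to sketch directly; the pest dynamics, action set, and costs below are invented stand-ins for the stage-structured medfly model.

    ```python
    # Rolling-horizon (look-ahead) policy sketch with invented dynamics and costs.
    from itertools import product

    ACTIONS = {"none": (1.6, 0.0), "spray": (0.6, 50.0)}  # (growth factor, action cost)
    DAMAGE = 2.0          # loss per unit of pest per week (hypothetical)
    WEEKS = 16

    def simulate(pop, plan):
        """Total cost of a fixed action sequence from a starting population."""
        cost = 0.0
        for a in plan:
            growth, a_cost = ACTIONS[a]
            pop = pop * growth
            cost += a_cost + DAMAGE * pop
        return cost

    def lookahead_action(pop, horizon):
        """Enumerate all action sequences over the horizon; return the first
        action of the cheapest one (brute force, feasible for short horizons)."""
        best = min(product(ACTIONS, repeat=horizon), key=lambda pl: simulate(pop, pl))
        return best[0]

    pop, total = 1.0, 0.0
    for week in range(WEEKS):
        act = lookahead_action(pop, horizon=4)   # a myopic policy would use horizon=1
        growth, a_cost = ACTIONS[act]
        pop *= growth
        total += a_cost + DAMAGE * pop
    print(f"total cost with 4-week look-ahead: {total:.1f}")
    ```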

  19. Investigation of Cost and Energy Optimization of Drinking Water Distribution Systems.

    PubMed

    Cherchi, Carla; Badruzzaman, Mohammad; Gordon, Matthew; Bunn, Simon; Jacangelo, Joseph G

    2015-11-17

    Holistic management of water and energy resources through energy and water quality management systems (EWQMSs) has traditionally aimed at energy cost reduction with limited or no emphasis on energy efficiency or greenhouse gas minimization. This study expanded the existing EWQMS framework and determined the impact of different management strategies for energy cost and energy consumption (e.g., carbon footprint) reduction on system performance at two drinking water utilities in California (United States). The results showed that optimizing for cost led to cost reductions of 4% (Utility B, summer) to 48% (Utility A, winter). The energy optimization strategy successfully found the lowest-energy operation and achieved energy usage reductions of 3% (Utility B, summer) to 10% (Utility A, winter). The findings of this study revealed that there may be a trade-off between cost optimization (dollars) and energy use (kilowatt-hours), particularly in the summer, when optimizing the system to minimize energy use incurred cost increases of 64% and 184% compared with the cost-optimization scenario. Water age simulations through hydraulic modeling did not reveal any adverse effects on water quality in the distribution system or in tanks from pump schedule optimization targeting either cost or energy minimization.

  20. Affordable Design: A Methodology to Implement Process-Based Manufacturing Cost into the Traditional Performance-Focused Multidisciplinary Design Optimization

    NASA Technical Reports Server (NTRS)

    Bao, Han P.; Samareh, J. A.

    2000-01-01

    The primary objective of this paper is to demonstrate the use of process-based manufacturing and assembly cost models in a traditional performance-focused multidisciplinary design and optimization process. The use of automated cost-performance analysis is an enabling technology that could bring realistic process-based manufacturing and assembly cost into multidisciplinary design and optimization. In this paper, we present a new methodology for incorporating process costing into a standard multidisciplinary design optimization process. Material, manufacturing process, and assembly process costs could then be used as the objective function for the optimization method. A case study involving forty-six different configurations of a simple wing is presented, indicating that a design based on performance criteria alone may not necessarily be the most affordable as far as manufacturing and assembly cost is concerned.

  1. Construction Performance Optimization toward Green Building Premium Cost Based on Greenship Rating Tools Assessment with Value Engineering Method

    NASA Astrophysics Data System (ADS)

    Latief, Yusuf; Berawi, Mohammed Ali; Basten, Van; Riswanto; Budiman, Rachmat

    2017-07-01

    The green building concept has become important in the building life cycle as a means of mitigating environmental issues. The purpose of this paper is to optimize building construction performance with respect to the green building premium cost, achieving the targeted green building rating while optimizing life cycle cost. The study thereby helps building stakeholders select building fixtures that achieve a green building certification target. Empirically, the paper collects data on green buildings in the Indonesian construction industry, including green building fixtures, initial cost, operational and maintenance cost, and certification score achievement. The value engineering method is then used to optimize the choice of green building fixtures based on building function and cost. Findings indicate that construction performance optimization affected green building achievement by increasing energy and water efficiency factors and managing life cycle cost effectively, especially through the chosen green building fixtures.

  2. Digital robust active control law synthesis for large order systems using constrained optimization

    NASA Technical Reports Server (NTRS)

    Mukhopadhyay, Vivek

    1987-01-01

    This paper presents a direct digital control law synthesis procedure for a large order, sampled data, linear feedback system using constrained optimization techniques to meet multiple design requirements. A linear quadratic Gaussian type cost function is minimized while satisfying a set of constraints on the design loads and responses. General expressions for gradients of the cost function and constraints with respect to the digital control law design variables are derived analytically and computed by solving a set of discrete Liapunov equations. The designer can choose the structure of the control law and the design variables; hence a stable classical control law as well as an estimator-based full or reduced order control law can be used as an initial starting point. Selected design responses can be treated as constraints instead of lumping them into the cost function. This feature can be used to modify a control law to meet individual root mean square response limitations as well as minimum singular value restrictions. Low order, robust digital control laws were synthesized for gust load alleviation of a flexible remotely piloted drone aircraft.
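
    As an illustration of the Lyapunov-based evaluation step, the sketch below computes a steady-state quadratic cost for a stable discrete closed loop from a discrete Lyapunov equation; the matrices are invented, and the paper's gradient computations build on the same type of solve.

    ```python
    # Evaluate an LQG-type cost for a stable discrete closed loop via a
    # discrete Lyapunov equation (illustrative matrices only).
    import numpy as np
    from scipy.linalg import solve_discrete_lyapunov

    A_cl = np.array([[0.9, 0.1],
                     [0.0, 0.8]])        # closed-loop dynamics (must be stable)
    W = 0.01 * np.eye(2)                 # process-noise covariance
    Q = np.diag([1.0, 0.5])              # state weighting in the cost

    # Steady-state state covariance P solves P = A_cl P A_cl^T + W.
    P = solve_discrete_lyapunov(A_cl, W)

    # Steady-state quadratic cost per step: E[x^T Q x] = trace(Q P).
    J = np.trace(Q @ P)
    print(f"steady-state cost per step: {J:.4f}")
    ```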

  3. Tetrahedron Formation Control

    NASA Technical Reports Server (NTRS)

    Petruzzo, Charles; Guzman, Jose

    2004-01-01

    This paper considers the preliminary development of a general optimization procedure for tetrahedron formation control. The maneuvers are assumed to be impulsive and a multi-stage optimization method is employed. The stages include (1) targeting to a fixed tetrahedron location and orientation, and (2) rotating and translating the tetrahedron. The number of impulsive maneuvers can also be varied. As the impulse locations and times change, new arcs are computed using a differential corrections scheme that varies the impulse magnitudes and directions. The result is a continuous trajectory with velocity discontinuities. The velocity discontinuities are then used to formulate the cost function. Direct optimization techniques are employed. The procedure is applied to the NASA Goddard Magnetospheric Multi-Scale (MMS) mission to compute preliminary formation control fuel requirements.

  4. Optimal systems of geoscience surveying: A preliminary discussion

    NASA Astrophysics Data System (ADS)

    Shoji, Tetsuya

    2006-10-01

    In any geoscience survey, each survey technique must be applied effectively, and many techniques are often combined optimally. An important task is to obtain the information necessary and sufficient to meet the requirements of the survey. A prize-penalty function quantifies the effectiveness of the survey and hence can be used to determine the best survey technique. On the other hand, an information-cost function can be used to determine the optimal combination of survey techniques on the basis of the geoinformation obtained. Entropy can be used to evaluate geoinformation. A simple model suggests that low-resolvability techniques are generally best applied at early stages of a survey, and that higher-resolvability techniques should alternate with lower-resolvability ones as the survey progresses.

  5. Modeling of tool path for the CNC sheet cutting machines

    NASA Astrophysics Data System (ADS)

    Petunin, Aleksandr A.

    2015-11-01

    In this paper the problem of tool path optimization for CNC (Computer Numerical Control) cutting machines is considered. A classification of cutting techniques is offered, and we also propose a new classification of tool path problems. The tasks of cost minimization and time minimization for the standard cutting technique (Continuous Cutting Problem, CCP) and for one of the non-standard cutting techniques (Segment Continuous Cutting Problem, SCCP) are formalized. We show that these optimization tasks can be interpreted as discrete optimization problems (a generalized traveling salesman problem with additional constraints, GTSP). Formalization of some constraints for these tasks is described. To solve the GTSP we propose using the mathematical model of Prof. Chentsov, based on the concept of a megalopolis and on dynamic programming.
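
    As a minimal stand-in for the GTSP formulation (which additionally needs megalopolis grouping and the extra constraints), the sketch below solves the plain traveling salesman problem exactly with the Held-Karp dynamic program.

    ```python
    # Compact Held-Karp dynamic program for the plain TSP; O(n^2 * 2^n).
    from itertools import combinations

    def held_karp(dist):
        """Exact shortest closed tour starting and ending at point 0."""
        n = len(dist)
        # dp[(S, j)] = cheapest cost to leave 0, visit exactly set S, and end at j.
        dp = {(frozenset([j]), j): dist[0][j] for j in range(1, n)}
        for size in range(2, n):
            for S in combinations(range(1, n), size):
                fs = frozenset(S)
                for j in S:
                    dp[(fs, j)] = min(dp[(fs - {j}, k)] + dist[k][j]
                                      for k in S if k != j)
        full = frozenset(range(1, n))
        return min(dp[(full, j)] + dist[j][0] for j in range(1, n))

    # Toy symmetric distance matrix between 5 cut start points (hypothetical).
    D = [[0, 2, 9, 10, 7],
         [2, 0, 6, 4, 3],
         [9, 6, 0, 8, 5],
         [10, 4, 8, 0, 6],
         [7, 3, 5, 6, 0]]
    print("optimal tour length:", held_karp(D))
    ```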

  6. Real-time radar signal processing using GPGPU (general-purpose graphic processing unit)

    NASA Astrophysics Data System (ADS)

    Kong, Fanxing; Zhang, Yan Rockee; Cai, Jingxiao; Palmer, Robert D.

    2016-05-01

    This study introduces a practical approach to developing a real-time signal processing chain for general phased array radar on NVIDIA GPUs (Graphics Processing Units) using CUDA (Compute Unified Device Architecture) libraries such as cuBlas and cuFFT, which are adopted from open source libraries and optimized for NVIDIA GPUs. The processed results are rigorously verified against those from the CPUs. Performance, benchmarked as computation time for various input data cube sizes, is compared across GPUs and CPUs. Through this analysis, it is demonstrated that real-time GPGPU (general-purpose GPU) processing of array radar data is possible with relatively low-cost commercial GPUs.

  7. Trajectory optimization for dynamic couch rotation during volumetric modulated arc radiotherapy

    NASA Astrophysics Data System (ADS)

    Smyth, Gregory; Bamber, Jeffrey C.; Evans, Philip M.; Bedford, James L.

    2013-11-01

    Non-coplanar radiation beams are often used in three-dimensional conformal and intensity modulated radiotherapy to reduce dose to organs at risk (OAR) by geometric avoidance. In volumetric modulated arc radiotherapy (VMAT) non-coplanar geometries are generally achieved by applying patient couch rotations to single or multiple full or partial arcs. This paper presents a trajectory optimization method for a non-coplanar technique, dynamic couch rotation during VMAT (DCR-VMAT), which combines ray tracing with a graph search algorithm. Four clinical test cases (partial breast, brain, prostate only, and prostate and pelvic nodes) were used to evaluate the potential OAR sparing for trajectory-optimized DCR-VMAT plans, compared with standard coplanar VMAT. In each case, ray tracing was performed and a cost map reflecting the number of OAR voxels intersected for each potential source position was generated. The least-cost path through the cost map, corresponding to an optimal DCR-VMAT trajectory, was determined using Dijkstra’s algorithm. Results show that trajectory optimization can reduce dose to specified OARs for plans otherwise comparable to conventional coplanar VMAT techniques. For the partial breast case, the mean heart dose was reduced by 53%. In the brain case, the maximum lens doses were reduced by 61% (left) and 77% (right) and the globes by 37% (left) and 40% (right). Bowel mean dose was reduced by 15% in the prostate only case. For the prostate and pelvic nodes case, the bowel V50 Gy and V60 Gy were reduced by 9% and 45% respectively. Future work will involve further development of the algorithm and assessment of its performance over a larger number of cases in site-specific cohorts.
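
    The least-cost-path step is easy to sketch: the code below runs Dijkstra's algorithm over a toy gantry-by-couch cost map with invented OAR-intersection counts, restricting couch motion to one step per gantry step so the trajectory stays deliverable (our assumption, not necessarily the paper's exact connectivity rule).

    ```python
    # Multi-source Dijkstra over a toy couch/gantry cost map (invented values).
    import heapq

    cost_map = [            # rows: gantry angle steps; cols: couch angle steps
        [1, 4, 2, 7],
        [3, 1, 5, 2],
        [6, 2, 1, 3],
    ]
    n_g, n_c = len(cost_map), len(cost_map[0])

    def neighbors(g, c):
        # Advance one gantry step; the couch may stay or move by one step.
        for dc in (-1, 0, 1):
            if 0 <= c + dc < n_c and g + 1 < n_g:
                yield g + 1, c + dc

    # Start from every couch position in the first gantry row; stop at the last row.
    pq = [(cost_map[0][c], 0, c) for c in range(n_c)]
    heapq.heapify(pq)
    seen = set()
    while pq:
        d, g, c = heapq.heappop(pq)
        if (g, c) in seen:
            continue
        seen.add((g, c))
        if g == n_g - 1:
            print(f"least-cost trajectory ends at couch index {c}, cost {d}")
            break
        for ng, nc in neighbors(g, c):
            if (ng, nc) not in seen:
                heapq.heappush(pq, (d + cost_map[ng][nc], ng, nc))
    ```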

  8. Interplanetary program to optimize simulated trajectories (IPOST). Volume 4: Sample cases

    NASA Technical Reports Server (NTRS)

    Hong, P. E.; Kent, P. D.; Olson, D. W.; Vallado, C. A.

    1992-01-01

    The Interplanetary Program to Optimize Simulated Trajectories (IPOST) is intended to support many analysis phases, from early interplanetary feasibility studies through spacecraft development and operations. The IPOST output provides information for sizing and understanding mission impacts related to propulsion, guidance, communications, sensor/actuators, payload, and other dynamic and geometric environments. IPOST models three degree of freedom trajectory events, such as launch/ascent, orbital coast, propulsive maneuvering (impulsive and finite burn), gravity assist, and atmospheric entry. Trajectory propagation is performed using a choice of Cowell, Encke, Multiconic, Onestep, or Conic methods. The user identifies a desired sequence of trajectory events, and selects which parameters are independent (controls) and dependent (targets), as well as other constraints and the cost function. Targeting and optimization are performed using the Standard NPSOL algorithm. The IPOST structure allows sub-problems within a master optimization problem to aid in the general constrained parameter optimization solution. An alternate optimization method uses implicit simulation and collocation techniques.

  9. Gradient-Based Optimization of Wind Farms with Different Turbine Heights: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stanley, Andrew P. J.; Thomas, Jared; Ning, Andrew

    Turbine wakes reduce power production in a wind farm. Current wind farms are generally built with turbines that are all the same height, but if wind farms included turbines with different tower heights, the cost of energy (COE) may be reduced. We used gradient-based optimization to demonstrate a method to optimize wind farms with varied hub heights. Our study includes a modified version of the FLORIS wake model that accommodates three-dimensional wakes integrated with a tower structural model. Our purpose was to design a process to minimize the COE of a wind farm through layout optimization and varying turbine hub heights. Results indicate that when a farm is optimized for layout and height with two separate height groups, COE can be lowered by as much as 5%-9%, compared to a similar layout and height optimization where all the towers are the same. The COE has the best improvement in farms with high turbine density and a low wind shear exponent.

  10. Gradient-Based Optimization of Wind Farms with Different Turbine Heights

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stanley, Andrew P. J.; Thomas, Jared; Ning, Andrew

    Turbine wakes reduce power production in a wind farm. Current wind farms are generally built with turbines that are all the same height, but if wind farms included turbines with different tower heights, the cost of energy (COE) may be reduced. We used gradient-based optimization to demonstrate a method to optimize wind farms with varied hub heights. Our study includes a modified version of the FLORIS wake model that accommodates three-dimensional wakes integrated with a tower structural model. Our purpose was to design a process to minimize the COE of a wind farm through layout optimization and varying turbine hub heights. Results indicate that when a farm is optimized for layout and height with two separate height groups, COE can be lowered by as much as 5%-9%, compared to a similar layout and height optimization where all the towers are the same. The COE has the best improvement in farms with high turbine density and a low wind shear exponent.

  11. Optimal control theory (OWEM) applied to a helicopter in the hover and approach phase

    NASA Technical Reports Server (NTRS)

    Born, G. J.; Kai, T.

    1975-01-01

    A major difficulty in the practical application of linear-quadratic regulator theory is how to choose the weighting matrices in quadratic cost functions. The control system design with optimal weighting matrices was applied to a helicopter in the hover and approach phase. The weighting matrices were calculated to extremize the closed-loop total system damping subject to constraints on the determinants. The extremization is really a minimization of the effects of disturbances, and is interpreted as a compromise between the generalized system accuracy and the generalized system response speed. The trade-off between the accuracy and the response speed is adjusted by a single parameter, the ratio of determinants. By this approach an objective measure can be obtained for the design of a control system. The measure is to be determined by the system requirements.

  12. Data on cost-optimal Nearly Zero Energy Buildings (NZEBs) across Europe.

    PubMed

    D'Agostino, Delia; Parker, Danny

    2018-04-01

    This data article refers to the research paper A model for the cost-optimal design of Nearly Zero Energy Buildings (NZEBs) in representative climates across Europe [1]. The reported data deal with the design optimization of a residential building prototype located in representative European locations. The study focuses on the search for cost-optimal choices and efficiency measures in new buildings depending on the climate. The data linked within this article relate to the modelled building energy consumption, renewable production, potential energy savings, and costs. The data allow visualization of energy consumption before and after the optimization, the selected efficiency measures, costs, and renewable production. The reduction of electricity and natural gas consumption towards the NZEB target can be visualized together with incremental and cumulative costs in each location. Further data are available on building geometry, costs, CO2 emissions, envelope, materials, lighting, appliances and systems.

  13. Offshore wind farm layout optimization

    NASA Astrophysics Data System (ADS)

    Elkinton, Christopher Neil

    Offshore wind energy technology is maturing in Europe and is poised to make a significant contribution to the U.S. energy production portfolio. Building on the knowledge the wind industry has gained to date, this dissertation investigates the influences of different site conditions on offshore wind farm micrositing---the layout of individual turbines within the boundaries of a wind farm. For offshore wind farms, these conditions include, among others, the wind and wave climates, water depths, and soil conditions at the site. An analysis tool has been developed that is capable of estimating the cost of energy (COE) from offshore wind farms. For this analysis, the COE has been divided into several modeled components: major costs (e.g. turbines, electrical interconnection, maintenance, etc.), energy production, and energy losses. By treating these component models as functions of site-dependent parameters, the analysis tool can investigate the influence of these parameters on the COE. Some parameters result in simultaneous increases of both energy and cost. In these cases, the analysis tool was used to determine the value of the parameter that yielded the lowest COE and, thus, the best balance of cost and energy. The models have been validated and generally compare favorably with existing offshore wind farm data. The analysis technique was then paired with optimization algorithms to form a tool with which to design offshore wind farm layouts for which the COE was minimized. Greedy heuristic and genetic optimization algorithms have been tuned and implemented. The use of these two algorithms in series has been shown to produce the best, most consistent solutions. The influences of site conditions on the COE have been studied further by applying the analysis and optimization tools to the initial design of a small offshore wind farm near the town of Hull, Massachusetts. The results of an initial full-site analysis and optimization were used to constrain the boundaries of the farm. A more thorough optimization highlighted the features of the area that would result in a minimized COE. The results showed reasonable layout designs and COE estimates that are consistent with existing offshore wind farms.
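
    A toy version of the layout loop is sketched below: turbines are placed greedily to minimize a crude COE proxy; the wake, cable, and turbine cost numbers are invented and far simpler than the dissertation's component models.

    ```python
    # Greedy turbine placement against a crude COE proxy (all numbers invented).
    import math

    GRID = [(x, y) for x in range(0, 2000, 250) for y in range(0, 2000, 250)]
    N_TURBINES = 6
    CABLE_COST = 1.0e3      # $/m of interconnection to a substation at (0, 0)
    TURBINE_COST = 2.0e6    # $ per installed turbine
    GROSS_AEP = 10.0e6      # kWh/yr per turbine before wake losses

    def wake_loss(p, others):
        """Crude proximity-based wake penalty in lieu of a real wake model."""
        return sum(0.05 * math.exp(-math.dist(p, q) / 500.0) for q in others)

    def coe(layout):
        cost = sum(TURBINE_COST + CABLE_COST * math.dist(p, (0, 0)) for p in layout)
        energy = sum(GROSS_AEP * (1.0 - wake_loss(p, [q for q in layout if q != p]))
                     for p in layout)
        return cost / energy    # $/kWh-yr; a stand-in for a real COE model

    layout = []
    for _ in range(N_TURBINES):
        best = min((p for p in GRID if p not in layout),
                   key=lambda p: coe(layout + [p]))
        layout.append(best)
    print("layout:", layout, "COE proxy:", round(coe(layout), 4))
    ```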

  14. (abstract) Science-Project Interaction in the Low-Cost Mission

    NASA Technical Reports Server (NTRS)

    Wall, Stephen D.

    1994-01-01

    Large, complex, and highly optimized missions have performed most of the preliminary reconnaissance of the solar system. As a result we have now mapped significant fractions of its total surface (or surface-equivalent) area. Now, however, scientific exploration of the solar system is undergoing a major change in scale, and existing missions find it necessary to limit costs while fulfilling existing goals. In the future, NASA's Discovery program will continue the reconnaissance, exploration, and diagnostic phases of planetary research using lower cost missions, which will include lower cost mission operations systems (MOS). Historically, one of the more expensive functions of MOS has been its interaction with the science community. Traditional MOS elements that this interaction has embraced include mission planning, science (and engineering) event conflict resolution, sequence optimization and integration, data production (e.g., assembly, enhancement, quality assurance, documentation, archive), and other science support services. In the past, the payoff from these efforts has been that use of mission resources has been highly optimized, constraining resources have been generally completely consumed, and data products have been accurate and well documented. But because these functions are expensive we are now challenged to reduce their cost while preserving the benefits. In this paper, we will consider ways of revising the traditional MOS approach that might save project resources while retaining a high degree of service to the Projects' customers. Pre-launch, science interaction can be made simpler by limiting the number of instruments and by providing greater redundancy in mission plans. Post-launch, possibilities include prioritizing data collection into a few categories, easing requirements on real-time or quick-look data delivery, and closer integration of scientists into mission operations.

  15. A Framework for Preliminary Design of Aircraft Structures Based on Process Information. Part 1

    NASA Technical Reports Server (NTRS)

    Rais-Rohani, Masoud

    1998-01-01

    This report discusses the general framework and development of a computational tool for preliminary design of aircraft structures based on process information. The described methodology is suitable for multidisciplinary design optimization (MDO) activities associated with integrated product and process development (IPPD). The framework consists of three parts: (1) product and process definitions; (2) engineering synthesis; and (3) optimization. The product and process definitions are part of the input information provided by the design team. The backbone of the system is its ability to analyze a given structural design for performance as well as manufacturability and cost assessment. The system uses a database on material systems and manufacturing processes. Based on the identified set of design variables and an objective function, the system is capable of performing optimization subject to manufacturability, cost, and performance constraints. The accuracy of the manufacturability measures and cost models discussed here depends largely on the available data on specific methods of manufacture and assembly and the associated labor requirements. As such, our focus in this research has been on the methodology itself and not so much on its accurate implementation in an industrial setting. A three-tier approach is presented for an IPPD-MDO based design of aircraft structures. The variable-complexity cost estimation methodology and an approach for integrating manufacturing cost assessment into the design process are also discussed. This report is presented in two parts. In the first part, the design methodology is presented, and the computational design tool is described. In the second part, a prototype model of the preliminary design Tool for Aircraft Structures based on Process Information (TASPI) is described. Part two also contains an example problem that applies the methodology described here for evaluation of six different design concepts for a wing spar.

  16. Two-level optimization of composite wing structures based on panel genetic optimization

    NASA Astrophysics Data System (ADS)

    Liu, Boyang

    The design of complex composite structures used in aerospace or automotive vehicles presents a major challenge in terms of computational cost. Discrete choices for ply thicknesses and ply angles lead to a combinatorial optimization problem that is too expensive to solve with presently available computational resources. We developed the following methodology for handling this problem for wing structural design: we used a two-level optimization approach with response-surface approximations to optimize panel failure loads for the upper-level wing optimization, and we tailored efficient permutation genetic algorithms to the panel stacking-sequence design on the lower level. We also developed an approach for improving continuity of ply stacking sequences among adjacent panels. The decomposition approach led to a lower-level optimization of the stacking sequence with a given number of plies in each orientation. An efficient permutation genetic algorithm (GA) was developed for handling this problem. We demonstrated through examples that permutation GAs are more efficient for stacking-sequence optimization than a standard GA. Repair strategies for the standard GA and the permutation GAs for dealing with constraints were also developed; the repair strategies can significantly reduce computation costs for both. A two-level optimization procedure for composite wing design subject to strength and buckling constraints is presented. At the wing level, continuous optimization of ply thicknesses with orientations of 0°, 90°, and ±45° is performed to minimize weight. At the panel level, the number of plies of each orientation (rounded to integers) and in-plane loads are specified, and a permutation genetic algorithm is used to optimize the stacking sequence. The process begins with many panel genetic optimizations for a range of loads and numbers of plies of each orientation. Next, a cubic polynomial response surface is fitted to the optimum buckling load. The resulting response surface is used for wing-level optimization. In general, complex composite structures consist of several laminates. A common problem in the design of such structures is that some plies in adjacent laminates terminate at the boundary between the laminates. These discontinuities may cause stress concentrations and may increase manufacturing difficulty and cost. We developed measures of continuity of two adjacent laminates and studied tradeoffs between weight and continuity through a simple composite wing design. Finally, we compared the two-level optimization to a single-level optimization based on flexural lamination parameters. The single-level optimization is efficient and feasible for a wing consisting of unstiffened panels.
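
    As a minimal illustration of the permutation-GA idea (not the paper's implementation: the fitness below is a crude D-matrix-style surrogate with assumed weights, and selection/mutation are reduced to truncation plus ply swaps), the genome is an ordering of a fixed multiset of ply angles, so the number of plies of each orientation is preserved by construction:

    import random

    # Mutation-driven permutation GA sketch. All weights and counts assumed.
    PLIES = [0] * 4 + [90] * 4 + [45] * 8    # given plies per orientation
    WEIGHT = {0: 0.5, 90: 0.8, 45: 1.0}      # assumed buckling contributions
    T = 0.125                                 # ply thickness, mm (assumed)

    def buckling_surrogate(stack):
        """D-matrix-like score: ply between z_k and z_k+1 adds z^3 difference."""
        n = len(stack)
        z = [(-n / 2 + k) * T for k in range(n + 1)]   # interface coordinates
        return sum(WEIGHT[a] * (z[k + 1] ** 3 - z[k] ** 3)
                   for k, a in enumerate(stack))

    def mutate(stack):
        """Swap two plies: preserves the count of plies of each orientation."""
        s = stack[:]
        i, j = random.sample(range(len(s)), 2)
        s[i], s[j] = s[j], s[i]
        return s

    def permutation_ga(pop_size=30, generations=200):
        pop = [random.sample(PLIES, len(PLIES)) for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=buckling_surrogate, reverse=True)
            parents = pop[:pop_size // 2]              # truncation selection
            pop = parents + [mutate(random.choice(parents)) for _ in parents]
        return max(pop, key=buckling_surrogate)

    best = permutation_ga()
    print("best stacking sequence (through thickness):", best)

    With these assumed weights the GA pushes the ±45° plies toward the laminate faces, which matches the usual engineering intuition that outer plies dominate buckling resistance.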

  17. Learning With Mixed Hard/Soft Pointwise Constraints.

    PubMed

    Gnecco, Giorgio; Gori, Marco; Melacci, Stefano; Sanguineti, Marcello

    2015-09-01

    A learning paradigm is proposed and investigated, in which the classical framework of learning from examples is enhanced by the introduction of hard pointwise constraints, i.e., constraints imposed on a finite set of examples that cannot be violated. Such constraints arise, e.g., when requiring coherent decisions of classifiers acting on different views of the same pattern. The classical examples of supervised learning, which can be violated at the cost of some penalization (quantified by the choice of a suitable loss function) play the role of soft pointwise constraints. Constrained variational calculus is exploited to derive a representer theorem that provides a description of the functional structure of the optimal solution to the proposed learning paradigm. It is shown that such an optimal solution can be represented in terms of a set of support constraints, which generalize the concept of support vectors and open the doors to a novel learning paradigm, called support constraint machines. The general theory is applied to derive the representation of the optimal solution to the problem of learning from hard linear pointwise constraints combined with soft pointwise constraints induced by supervised examples. In some cases, closed-form optimal solutions are obtained.

  18. Scalable Heuristics for Planning, Placement and Sizing of Flexible AC Transmission System Devices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frolov, Vladmir; Backhaus, Scott N.; Chertkov, Michael

    Aiming to relieve transmission grid congestion and improve or extend the feasibility domain of operations, we build optimization heuristics, generalizing standard AC Optimal Power Flow (OPF), for placement and sizing of Flexible Alternating Current Transmission System (FACTS) devices of the Series Compensation (SC) and Static VAR Compensation (SVC) type. One use of these devices is in resolving the case when the AC OPF solution does not exist because of congestion. Another application is developing a long-term investment strategy for placement and sizing of the SC and SVC devices to reduce operational cost and improve power system operation. SC and SVC devices are represented by modification of the transmission line inductances and reactive power nodal corrections, respectively. We find one placement and sizing of FACTS devices for multiple scenarios and optimal settings for each scenario simultaneously. Our solution of the nonlinear and nonconvex generalized AC OPF consists of building a convergent sequence of convex optimizations containing only linear constraints and shows good computational scaling to larger systems. The approach is illustrated on single- and multi-scenario examples of the Matpower case-30 model.

  19. Impact of Airspace Charges on Transatlantic Aircraft Trajectories

    NASA Technical Reports Server (NTRS)

    Sridhar, Banavar; Ng, Hok K.; Linke, Florian; Chen, Neil Y.

    2015-01-01

    Aircraft flying over the airspace of different countries are subject to over-flight charges. These charges vary from country to country. Airspace charges, while necessary to support the communication, navigation and surveillance services, may lead to aircraft flying routes longer than wind-optimal routes and produce additional carbon dioxide and other gaseous emissions. This paper develops an optimal route between city pairs by modifying the cost function to include an airspace cost whenever an aircraft flies through a controlled airspace without landing or departing from that airspace. It is assumed that the aircraft will fly the trajectory at a constant cruise altitude and constant speed. The computationally efficient optimal trajectory is derived by solving a non-linear optimal control problem. The operational strategies investigated in this study for minimizing aircraft fuel burn and emissions include flying fuel-optimal routes and flying cost-optimal routes that may completely or partially reduce airspace charges en route. The results in this paper use traffic data for transatlantic flights during July 2012. The mean daily savings in over-flight charges, fuel cost and total operation cost during the period are 17.6 percent, 1.6 percent, and 2.4 percent respectively, along the cost-optimal trajectories. The transatlantic flights can potentially save $600,000 in fuel cost plus $360,000 in over-flight charges daily by flying the cost-optimal trajectories. In addition, the aircraft emissions can be potentially reduced by 2,070 metric tons each day. The airport pairs and airspace regions that have the highest potential impacts due to airspace charges are identified for possible reduction of fuel burn and aircraft emissions for the transatlantic flights. The results in the paper show that the impact of the variation in fuel price on the optimal routes is to reduce the difference between wind-optimal and cost-optimal routes as the fuel price increases. The additional fuel consumption is quantified using the 30 percent variation in fuel prices from March 2014 to March 2015.
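
    The modified cost function is easy to prototype. The sketch below uses a toy waypoint network with assumed fuel costs and over-flight charges (not the paper's wind-field trajectory optimizer): an airspace fee is added whenever the flight crosses a charging region without landing there, and the network is searched with Dijkstra's algorithm.

    import heapq

    # Toy network: edge cost = fuel cost + over-flight charge at crossed nodes.
    # All figures below are assumed illustration values.
    FUEL = {("BOS", "A"): 900.0, ("A", "B"): 800.0, ("B", "LHR"): 950.0,
            ("BOS", "C"): 1300.0, ("C", "LHR"): 1400.0}     # $ per segment
    CHARGE = {"A": 300.0, "B": 450.0, "C": 0.0}             # airspace fees, $

    def cheapest_route(src, dst):
        """Dijkstra over waypoints; charges accrue at intermediate nodes."""
        heap, seen = [(0.0, src, [src])], set()
        while heap:
            cost, node, path = heapq.heappop(heap)
            if node == dst:
                return cost, path
            if node in seen:
                continue
            seen.add(node)
            for (u, v), fuel in FUEL.items():
                if u == node:
                    fee = CHARGE.get(v, 0.0) if v != dst else 0.0
                    heapq.heappush(heap, (cost + fuel + fee, v, path + [v]))
        return float("inf"), []

    # The fuel-optimal route (via A, B) costs 2650 + 750 in fees = 3400,
    # while the longer route via C burns more fuel (2700) but avoids fees.
    print(cheapest_route("BOS", "LHR"))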

  20. Electric Propulsion System Selection Process for Interplanetary Missions

    NASA Technical Reports Server (NTRS)

    Landau, Damon; Chase, James; Kowalkowski, Theresa; Oh, David; Randolph, Thomas; Sims, Jon; Timmerman, Paul

    2008-01-01

    The disparate design problems of selecting an electric propulsion system, launch vehicle, and flight time all have a significant impact on the cost and robustness of a mission. The effects of these system choices combine into a single optimization of the total mission cost, where the design constraint is a required spacecraft neutral (non-electric propulsion) mass. Cost-optimal systems are designed for a range of mass margins to examine how the optimal design varies with mass growth. The resulting cost-optimal designs are compared with results generated via mass optimization methods. Additional optimizations with continuous system parameters address the impact on mission cost due to discrete sets of launch vehicle, power, and specific impulse. The examined mission set comprises a near-Earth asteroid sample return, multiple main belt asteroid rendezvous, comet rendezvous, comet sample return, and a mission to Saturn.

  1. Solution for a bipartite Euclidean traveling-salesman problem in one dimension

    NASA Astrophysics Data System (ADS)

    Caracciolo, Sergio; Di Gioacchino, Andrea; Gherardi, Marco; Malatesta, Enrico M.

    2018-05-01

    The traveling-salesman problem is one of the most studied combinatorial optimization problems, because of the simplicity in its statement and the difficulty in its solution. We characterize the optimal cycle for every convex and increasing cost function when the points are thrown independently and with an identical probability distribution in a compact interval. We compute the average optimal cost for every number of points when the distance function is the square of the Euclidean distance. We also show that the average optimal cost is not a self-averaging quantity by explicitly computing the variance of its distribution in the thermodynamic limit. Moreover, we prove that the cost of the optimal cycle is not smaller than twice the cost of the optimal assignment of the same set of points. Interestingly, this bound is saturated in the thermodynamic limit.
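
    The claimed bound (the optimal cycle costs at least twice the optimal assignment) can be checked by brute force on a small instance. The script below uses c(d) = d^2 and random points in [0, 1], enumerating the alternating red/blue cycles of the bipartite problem directly:

    import itertools, random

    random.seed(1)
    N = 4
    red = sorted(random.random() for _ in range(N))
    blue = sorted(random.random() for _ in range(N))
    c = lambda a, b: (a - b) ** 2

    # Optimal assignment: best red-to-blue matching.
    assign = min(sum(c(r, b) for r, b in zip(red, perm))
                 for perm in itertools.permutations(blue))

    def cycle_cost(rs, bs):
        # visit r0, b0, r1, b1, ..., back to r0 (every edge is red-blue)
        tour = [x for pair in zip(rs, bs) for x in pair]
        return sum(c(tour[i], tour[(i + 1) % len(tour)])
                   for i in range(len(tour)))

    # Fix red[0] first (cycles are rotation-invariant), try everything else.
    cycle = min(cycle_cost([red[0]] + list(rp), bp)
                for rp in itertools.permutations(red[1:])
                for bp in itertools.permutations(blue))

    print(f"cycle = {cycle:.4f} >= 2 x assignment = {2 * assign:.4f}")
    assert cycle >= 2 * assign - 1e-12

    The bound holds because the edges of an alternating cycle decompose into two perfect red-blue matchings, each costing at least the optimal assignment.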

  3. Optimal Path Determination for Flying Vehicle to Search an Object

    NASA Astrophysics Data System (ADS)

    Heru Tjahjana, R.; Heri Soelistyo U, R.; Ratnasari, L.; Irawanto, B.

    2018-01-01

    In this paper, a method to determine the optimal path for a flying vehicle searching for an object is proposed. The background of the paper is the control of an air vehicle searching for an object; optimal path determination is one of the most popular problems in optimization. The paper describes a control-design model for a flying vehicle searching for an object and focuses on the optimal path used in the search. An optimal control model is used to make the vehicle move along an optimal path; if the vehicle moves along the optimal path, then the path to reach the searched object is also optimal. The cost functional is one of the most important elements of optimal control design, and in this paper the cost functional is chosen so that the air vehicle reaches the object as soon as possible. The reference axes of the flying vehicle use the N-E-D (North-East-Down) coordinate system. The results of this paper are theorems, proved analytically, stating that the chosen cost functional makes the control optimal and makes the vehicle move along an optimal path. The paper also shows that the cost functional used is convex; this convexity guarantees the existence of the optimal control. Several simulations illustrate an optimal path for the flying vehicle to search for an object. The optimization method used to find the optimal control and optimal vehicle path in this paper is the Pontryagin Minimum Principle.

  4. Linear feasibility algorithms for treatment planning in interstitial photodynamic therapy

    NASA Astrophysics Data System (ADS)

    Rendon, A.; Beck, J. C.; Lilge, Lothar

    2008-02-01

    Interstitial photodynamic therapy (IPDT) has been under intense investigation in recent years, with multiple clinical trials underway. This effort has demanded the development of optimization strategies that determine the best locations and output powers for light sources (cylindrical or point diffusers) to achieve optimal light delivery. Furthermore, we have recently introduced cylindrical diffusers with customizable emission profiles, placing additional requirements on the optimization algorithms, particularly in terms of the stability of the inverse problem. Here, we present a general class of linear feasibility algorithms and their properties. Moreover, we compare two particular instances of these algorithms, which have been used in the context of IPDT: the Cimmino algorithm and a weighted gradient descent (WGD) algorithm. The algorithms were compared in terms of their convergence properties, the cost function they minimize in the infeasible case, their ability to regularize the inverse problem, and the resulting optimal light dose distributions. Our results show that the WGD algorithm overall performs slightly better than the Cimmino algorithm and that it converges to a minimizer of a clinically relevant cost function in the infeasible case. Interestingly, however, treatment plans resulting from either algorithm were very similar in terms of the resulting fluence maps and dose volume histograms, once the diffuser powers were adjusted to achieve equal prostate coverage.
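
    For reference, the Cimmino iteration is a simultaneous-projection scheme: at each step, the current point moves by the average of its projections onto the violated constraints. A minimal sketch for a generic linear feasibility problem Ax <= b (random toy data, not clinical fluence constraints) follows:

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.normal(size=(50, 5))
    b = A @ rng.normal(size=5) + 0.1          # feasible by construction

    def cimmino(A, b, relax=1.0, iters=500):
        """Cimmino-type simultaneous projections onto half-spaces a_i.x <= b_i."""
        m, n = A.shape
        x = np.zeros(n)
        row_norm2 = (A * A).sum(axis=1)
        for _ in range(iters):
            residual = A @ x - b
            violated = np.maximum(residual, 0.0)   # only violated rows project
            step = (violated / row_norm2) @ A      # sum of projection moves
            x -= relax * step / m                  # averaged, relaxed update
            if violated.max() <= 1e-10:
                break
        return x

    x = cimmino(A, b)
    print("max violation:", float(np.maximum(A @ x - b, 0).max()))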

  5. Optimal Coordination of Building Loads and Energy Storage for Power Grid and End User Services

    DOE PAGES

    Hao, He; Wu, Di; Lian, Jianming; ...

    2017-01-18

    Demand response and energy storage play a profound role in the smart grid. The focus of this study is to evaluate benefits of coordinating flexible loads and energy storage to provide power grid and end user services. We present a Generalized Battery Model (GBM) to describe the flexibility of building loads and energy storage. An optimization-based approach is proposed to characterize the parameters (power and energy limits) of the GBM for flexible building loads. We then develop optimal coordination algorithms to provide power grid and end user services such as energy arbitrage, frequency regulation, spinning reserve, as well as energy cost and demand charge reduction. Several case studies have been performed to demonstrate the efficacy of the GBM and coordination algorithms, and evaluate the benefits of using their flexibility for power grid and end user services. We show that optimal coordination yields significant cost savings and revenue. Moreover, the best option for power grid services is to provide energy arbitrage and frequency regulation. Finally, when coordinating flexible loads with energy storage to provide end user services, it is recommended to consider demand charge in addition to time-of-use price in order to flatten the aggregate power profile.
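
    One of the listed services, energy arbitrage, reduces to a small linear program once loads and storage are abstracted as a generalized battery with power and energy limits. The sketch below uses assumed toy prices and limits with one ideal, lossless battery (not the paper's GBM characterization):

    import numpy as np
    from scipy.optimize import linprog

    price = np.array([20, 18, 25, 40, 60, 35.])   # $/MWh over 6 hours (assumed)
    P_MAX, E_MAX, E0 = 1.0, 4.0, 2.0              # MW, MWh, initial MWh (assumed)
    T = len(price)

    # Decision variable p_t = net charging power (MW); cost = sum price_t * p_t.
    # The state of charge must stay in [0, E_MAX]: 0 <= E0 + cumsum(p) <= E_MAX.
    L = np.tril(np.ones((T, T)))                  # cumulative-sum operator
    A_ub = np.vstack([L, -L])
    b_ub = np.concatenate([(E_MAX - E0) * np.ones(T), E0 * np.ones(T)])

    res = linprog(c=price, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(-P_MAX, P_MAX)] * T, method="highs")
    print("schedule (MW, + = charge):", np.round(res.x, 2))
    print("arbitrage profit: $", round(-res.fun, 2))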

  6. A general model for techno-economic analysis of CSP plants with thermochemical energy storage systems

    NASA Astrophysics Data System (ADS)

    Peng, Xinyue; Maravelias, Christos T.; Root, Thatcher W.

    2017-06-01

    Thermochemical energy storage (TCES), with high energy density and wide operating temperature range, presents a potential solution for CSP plant energy storage. We develop a general optimization-based process model for CSP plants employing a wide range of TCES systems, which allows us to assess the plant economic feasibility and energy efficiency. The proposed model is applied to a 100 MW CSP plant employing ammonia or methane TCES systems. The methane TCES system with underground gas storage appears to be the most promising option, achieving a 14% LCOE reduction over current two-tank molten-salt CSP plants. For general TCES systems, gas storage is identified as the main cost driver, while the main energy driver is the compressor electricity consumption. The impacts of separation and different reaction parameters are also analyzed. This study demonstrates that the realization of TCES systems for CSP plants is contingent upon low storage cost and a reversible reaction with proper reaction properties.

  7. Econometrics of inventory holding and shortage costs: the case of refined gasoline

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krane, S.D.

    1985-01-01

    This thesis estimates a model of a firm's optimal inventory and production behavior in order to investigate the link between the role of inventories in the business cycle and the microeconomic incentives for holding stocks of finished goods. The goal is to estimate a set of structural cost function parameters that can be used to infer the optimal cyclical response of inventories and production to shocks in demand. To avoid problems associated with the use of value-based aggregate inventory data, an industry-level physical-unit data set for refined motor gasoline is examined. The Euler equations for a refiner's multiperiod decision problem are estimated using restrictions imposed by the rational expectations hypothesis. The model also embodies the fact that, in most periods, the level of shortages will be zero, and even when positive, the shortages are not directly observable in the data set. These two concerns lead us to use a generalized method of moments estimation technique on a functional form that resembles the formulation of a Tobit problem. The estimation results are disappointing; the model and data yield coefficient estimates incongruous with the cost function interpretations of the structural parameters. There is only superficial evidence that production smoothing is significant and that marginal inventory shortage costs increase at a faster rate than do marginal holding costs.

  8. Comparison of Optimal Design Methods in Inverse Problems

    PubMed Central

    Banks, H. T.; Holm, Kathleen; Kappel, Franz

    2011-01-01

    Typical optimal design methods for inverse or parameter estimation problems are designed to choose optimal sampling distributions through minimization of a specific cost function related to the resulting error in parameter estimates. It is hoped that the inverse problem will produce parameter estimates with increased accuracy using data collected according to the optimal sampling distribution. Here we formulate the classical optimal design problem in the context of general optimization problems over distributions of sampling times. We present a new Prohorov metric based theoretical framework that permits one to treat succinctly and rigorously any optimal design criteria based on the Fisher Information Matrix (FIM). A fundamental approximation theory is also included in this framework. A new optimal design, SE-optimal design (standard error optimal design), is then introduced in the context of this framework. We compare this new design criterion with the more traditional D-optimal and E-optimal designs. The optimal sampling distributions from each design are used to compute and compare standard errors; the standard errors for parameters are computed using asymptotic theory or bootstrapping and the optimal mesh. We use three examples to illustrate ideas: the Verhulst-Pearl logistic population model [13], the standard harmonic oscillator model [13] and a popular glucose regulation model [16, 19, 29]. PMID:21857762

  9. Theoretical and experimental researches on the operating costs of a wastewater treatment plant

    NASA Astrophysics Data System (ADS)

    Panaitescu, M.; Panaitescu, F.-V.; Anton, I.-A.

    2015-11-01

    Purpose of the work: The total cost of a sewage plant is often determined by the present-value method: all annual operating costs for each process are converted to their present-day value and added to the investment costs for each process, which yields the net present value. The costs of sewage plants are subdivided, in general, into investment and operating costs. The latter can be fixed (normal operation and maintenance, staff establishment and power) or variable (chemicals and power for sludge treatment and disposal, effluent charges). Cost functions can be used to evaluate preliminary costs so that an installation can choose between different alternatives in an early phase of a project. In this paper the operational cost is calculated under several scenarios in order to optimize it. The total operational cost (fixed and variable) depends on global parameters of the wastewater treatment plant. Research and methodology: The wastewater treatment plant costs are subdivided into investment and operating costs. Different cost functions can be used to estimate fixed and variable operating costs; in this study we used statistical formulas for the cost functions. The method applied to study the impact of the influent characteristics on the costs is economic analysis. Optimization of the plant design consists, firstly, in assessing the ability of the smallest design to treat the maximum loading rates to a given effluent quality and, secondly, in comparing the costs of the two alternatives for average and maximum loading rates. Results: We obtained statistical values for the investment cost functions and the fixed and variable operating costs of the wastewater treatment plant, with graphical representations. All costs were compared as net present values. We observe that it is more economical to build a larger plant, especially if maximum loading rates are reached. The practical aim of operational management is to implement the presented cost functions directly in a software tool in which the design of a plant and the simulation of its behaviour are evaluated simultaneously.
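
    The present-value calculation described above is straightforward; the sketch below (illustrative figures, not the paper's data) compares two plant sizes by summing investment and discounted operating costs:

    # Present-value comparison of two plant designs (assumed toy numbers).

    def present_value(investment, annual_fixed, annual_variable,
                      rate=0.05, years=20):
        """Investment plus discounted fixed and variable operating costs."""
        discount = sum(1.0 / (1.0 + rate) ** t for t in range(1, years + 1))
        return investment + (annual_fixed + annual_variable) * discount

    small = present_value(investment=8.0e6, annual_fixed=4.0e5,
                          annual_variable=3.5e5)
    large = present_value(investment=9.5e6, annual_fixed=4.2e5,
                          annual_variable=2.0e5)   # lower variable cost at scale
    print(f"small plant NPV cost: {small:,.0f}")
    print(f"large plant NPV cost: {large:,.0f}  (cheaper despite higher capex)")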

  10. Optimizations of geothermal cycle shell and tube exchangers of various configurations with variable fluid properties and site specific fouling. [SIZEHX]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pope, W.L.; Pines, H.S.; Silvester, L.F.

    1978-03-01

    A new heat exchanger program, SIZEHX, is described. This program allows single-step multiparameter cost optimizations on single-phase or supercritical exchanger arrays with variable properties and arbitrary fouling for a multitude of matrix configurations and fluids. SIZEHX uses a simplified form of Tinker's method for characterization of shell-side performance; the Starling-modified BWR equation for thermodynamic properties of hydrocarbons; and transport properties developed by NBS. Results of four-parameter cost optimizations on exchangers for specific geothermal applications are included. The relative mix of capital cost, pumping cost, and brine cost ($/Btu) is determined for geothermal exchangers, illustrating the invariant nature of the optimal cost distribution for fixed unit costs.

  11. Virus purification by CsCl density gradient using general centrifugation.

    PubMed

    Nasukawa, Tadahiro; Uchiyama, Jumpei; Taharaguchi, Satoshi; Ota, Sumire; Ujihara, Takako; Matsuzaki, Shigenobu; Murakami, Hironobu; Mizukami, Keijirou; Sakaguchi, Masahiro

    2017-11-01

    Virus purification by cesium chloride (CsCl) density gradient, which generally requires an expensive ultracentrifuge, is an essential technique in virology. Here, we optimized virus purification by CsCl density gradient using general centrifugation (40,000 × g, 2 h, 4 °C), which showed almost the same purification ability as conventional CsCl density gradient ultracentrifugation (100,000 × g, 1 h, 4 °C) using phages S13' and φEF24C. Moreover, adenovirus strain JM1/1 was also successfully purified by this method. We suggest that general centrifugation can become a less costly alternative to ultracentrifugation for virus purification by CsCl density gradient and will thus encourage research in virology.

  12. Foraging optimally for home ranges

    USGS Publications Warehouse

    Mitchell, Michael S.; Powell, Roger A.

    2012-01-01

    Economic models predict behavior of animals based on the presumption that natural selection has shaped behaviors important to an animal's fitness to maximize benefits over costs. Economic analyses have shown that territories of animals are structured by trade-offs between benefits gained from resources and costs of defending them. Intuitively, home ranges should be similarly structured, but trade-offs are difficult to assess because there are no costs of defense, thus economic models of home-range behavior are rare. We present economic models that predict how home ranges can be efficient with respect to spatially distributed resources, discounted for travel costs, under 2 strategies of optimization, resource maximization and area minimization. We show how constraints such as competitors can influence structure of home ranges through resource depression, ultimately structuring density of animals within a population and their distribution on a landscape. We present simulations based on these models to show how they can be generally predictive of home-range behavior and the mechanisms that structure the spatial distribution of animals. We also show how contiguous home ranges estimated statistically from location data can be misleading for animals that optimize home ranges on landscapes with patchily distributed resources. We conclude with a summary of how we applied our models to nonterritorial black bears (Ursus americanus) living in the mountains of North Carolina, where we found their home ranges were best predicted by an area-minimization strategy constrained by intraspecific competition within a social hierarchy. Economic models can provide strong inference about home-range behavior and the resources that structure home ranges by offering falsifiable, a priori hypotheses that can be tested with field observations.

  13. Decomposition method for zonal resource allocation problems in telecommunication networks

    NASA Astrophysics Data System (ADS)

    Konnov, I. V.; Kashuba, A. Yu

    2016-11-01

    We consider problems of optimal resource allocation in telecommunication networks. We first give an optimization formulation for the case where the network manager aims to distribute some homogeneous resource (bandwidth) among users of one region with quadratic charge and fee functions and present simple and efficient solution methods. Next, we consider a more general problem for a provider of a wireless communication network divided into zones (clusters) with common capacity constraints. We obtain a convex quadratic optimization problem involving capacity and balance constraints. By using the dual Lagrangian method with respect to the capacity constraint, we suggest to reduce the initial problem to a single-dimensional optimization problem, but calculation of the cost function value leads to independent solution of zonal problems, which coincide with the above single region problem. Some results of computational experiments confirm the applicability of the new methods.
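
    The dual reduction can be sketched in a few lines. With quadratic utility u_i(x) = a_i x - 0.5 b_i x^2 for each user (toy coefficients assumed below), the inner problems solve in closed form for a given multiplier, and the multiplier on the shared capacity constraint is found by one-dimensional bisection:

    import numpy as np

    a = np.array([10.0, 8.0, 6.0, 4.0])    # marginal utility at zero allocation
    b = np.array([1.0, 0.5, 0.8, 0.4])     # curvature of charge/fee functions
    C = 12.0                                # shared capacity (all values assumed)

    def allocation(lam):
        """Each user's optimum for price lam: maximize u_i(x) - lam*x, x >= 0."""
        return np.maximum((a - lam) / b, 0.0)

    lam = 0.0
    if allocation(lam).sum() > C:           # capacity binds: bisect on lam
        lo, hi = 0.0, a.max()
        for _ in range(60):
            lam = 0.5 * (lo + hi)
            lo, hi = (lam, hi) if allocation(lam).sum() > C else (lo, lam)
    x = allocation(lam)
    print("allocations:", np.round(x, 3), " total:", round(float(x.sum()), 3))

    Because the total allocation is monotone decreasing in the multiplier, the bisection converges; each evaluation of allocation() corresponds to solving the independent zonal problems mentioned in the abstract.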

  14. Particle swarm optimization and gravitational wave data analysis: Performance on a binary inspiral testbed

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang Yan; Mohanty, Soumya D.; Center for Gravitational Wave Astronomy, Department of Physics and Astronomy, University of Texas at Brownsville, 80 Fort Brown, Brownsville, Texas 78520

    2010-03-15

    The detection and estimation of gravitational wave signals belonging to a parameterized family of waveforms requires, in general, the numerical maximization of a data-dependent function of the signal parameters. Because of noise in the data, the function to be maximized is often highly multimodal with numerous local maxima. Searching for the global maximum then becomes computationally expensive, which in turn can limit the scientific scope of the search. Stochastic optimization is one possible approach to reducing computational costs in such applications. We report results from a first investigation of the particle swarm optimization method in this context. The method is applied to a test bed motivated by the problem of detection and estimation of a binary inspiral signal. Our results show that particle swarm optimization works well in the presence of high multimodality, making it a viable candidate method for further applications in gravitational wave data analysis.
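
    A generic global-best particle swarm optimizer is only a few lines; the sketch below uses standard inertia and acceleration constants and the Rastrigin function as a multimodal stand-in for the matched-filter fitness surface (the gravitational-wave application itself is not reproduced here):

    import numpy as np

    def rastrigin(x):
        """Highly multimodal test function; global minimum 0 at the origin."""
        return 10 * x.shape[-1] + np.sum(x**2 - 10 * np.cos(2 * np.pi * x),
                                         axis=-1)

    def pso(f, dim=4, n_particles=40, iters=300,
            w=0.72, c1=1.49, c2=1.49, seed=0):
        rng = np.random.default_rng(seed)
        x = rng.uniform(-5.12, 5.12, (n_particles, dim))
        v = np.zeros_like(x)
        pbest, pbest_f = x.copy(), f(x)
        gbest = pbest[pbest_f.argmin()].copy()
        for _ in range(iters):
            r1, r2 = rng.random((2, n_particles, dim))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = x + v
            fx = f(x)
            improved = fx < pbest_f
            pbest[improved], pbest_f[improved] = x[improved], fx[improved]
            gbest = pbest[pbest_f.argmin()].copy()
        return gbest, pbest_f.min()

    best_x, best_f = pso(rastrigin)
    print("best value found:", round(float(best_f), 4))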

  15. Solvability of some partial functional integrodifferential equations with finite delay and optimal controls in Banach spaces.

    PubMed

    Ezzinbi, Khalil; Ndambomve, Patrice

    2016-01-01

    In this work, we consider the control system governed by some partial functional integrodifferential equations with finite delay in Banach spaces. We assume that the undelayed part admits a resolvent operator in the sense of Grimmer. Firstly, some suitable conditions are established to guarantee the existence and uniqueness of mild solutions for a broad class of partial functional integrodifferential infinite dimensional control systems. Secondly, it is proved that, under generally mild conditions of cost functional, the associated Lagrange problem has an optimal solution, and that for each optimal solution there is a minimizing sequence of the problem that converges to the optimal solution with respect to the trajectory, the control, and the functional in appropriate topologies. Our results extend and complement many other important results in the literature. Finally, a concrete example of application is given to illustrate the effectiveness of our main results.

  16. Cost considerations for long-term ecological monitoring

    USGS Publications Warehouse

    Caughlan, L.; Oakley, K.L.

    2001-01-01

    For an ecological monitoring program to be successful over the long-term, the perceived benefits of the information must justify the cost. Financial limitations will always restrict the scope of a monitoring program, hence the program's focus must be carefully prioritized. Clearly identifying the costs and benefits of a program will assist in this prioritization process, but this is easier said than done. Frequently, the true costs of monitoring are not recognized and are, therefore, underestimated. Benefits are rarely evaluated, because they are difficult to quantify. The intent of this review is to assist the designers and managers of long-term ecological monitoring programs by providing a general framework for building and operating a cost-effective program. Previous considerations of monitoring costs have focused on sampling design optimization. We present cost considerations of monitoring in a broader context. We explore monitoring costs, including both budgetary costs--what dollars are spent on--and economic costs, which include opportunity costs. Often, the largest portion of a monitoring program budget is spent on data collection, and other, critical aspects of the program, such as scientific oversight, training, data management, quality assurance, and reporting, are neglected. Recognizing and budgeting for all program costs is therefore a key factor in a program's longevity. The close relationship between statistical issues and cost is discussed, highlighting the importance of sampling design, replication and power, and comparing the costs of alternative designs through pilot studies and simulation modeling. A monitoring program development process that includes explicit checkpoints for considering costs is presented. The first checkpoints occur during the setting of objectives and during sampling design optimization. The last checkpoint occurs once the basic shape of the program is known, and the costs and benefits, or alternatively the cost-effectiveness, of each program element can be evaluated. Moving into the implementation phase without careful evaluation of costs and benefits is risky because if costs are later found to exceed benefits, the program will fail. The costs of development, which can be quite high, will have been largely wasted. Realistic expectations of costs and benefits will help ensure that monitoring programs survive the early, turbulent stages of development and the challenges posed by fluctuating budgets during implementation.

  17. Flight plan optimization

    NASA Astrophysics Data System (ADS)

    Dharmaseelan, Anoop; Adistambha, Keyne D.

    2015-05-01

    Fuel cost accounts for 40 percent of the operating cost of an airline. Fuel cost can be minimized by planning flights on optimized routes. Routes can be optimized by searching for the best connections based on a cost function defined by the airline. The algorithm most commonly used for route search is Dijkstra's. Dijkstra's algorithm produces a static result, and the time taken for the search is relatively long. This paper experiments with a new algorithm for route search that combines the principles of simulated annealing and the genetic algorithm. The experimental route-search results presented are shown to be computationally fast and accurate compared with timings from the genetic algorithm alone. The new algorithm is well suited to the random routing feature that is highly sought by many regional operators.
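
    The annealing half of such a hybrid can be sketched directly. The fragment below uses a toy waypoint network and an assumed segment-cost function (the paper's method also layers genetic-algorithm operators on top): a route is perturbed by segment reversal and worse routes are accepted with the usual Metropolis probability.

    import math, random

    random.seed(0)
    WAYPOINTS = [(i, random.uniform(0, 10)) for i in range(8)]  # (id, cost factor)

    def route_cost(route):
        """Assumed airline cost function: sum of per-segment costs."""
        return sum(abs(WAYPOINTS[route[k]][1] - WAYPOINTS[route[k + 1]][1]) + 1.0
                   for k in range(len(route) - 1))

    def anneal(route, temp=5.0, cooling=0.995, iters=4000):
        best = cur = route
        for _ in range(iters):
            i, j = sorted(random.sample(range(1, len(cur) - 1), 2))
            cand = cur[:i] + cur[i:j + 1][::-1] + cur[j + 1:]  # reverse a segment
            delta = route_cost(cand) - route_cost(cur)
            if delta < 0 or random.random() < math.exp(-delta / temp):
                cur = cand
                if route_cost(cur) < route_cost(best):
                    best = cur
            temp *= cooling
        return best

    start = list(range(len(WAYPOINTS)))     # origin/destination fixed at the ends
    best = anneal(start)
    print(best, round(route_cost(best), 2))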

  18. Assessment of nursing management and utilization of nursing resources with the RAFAELA patient classification system--case study from the general wards of one central hospital.

    PubMed

    Rainio, Anna-Kaisa; Ohinmaa, Arto E

    2005-07-01

    RAFAELA is a new Finnish patient classification system (PCS) that is used in several university hospitals and central hospitals and has aroused considerable interest in hospitals in Europe. The aim of the research is, firstly, to assess the feasibility of the RAFAELA PCS in nursing staff management and, secondly, whether it can be used for transferring nursing resources between wards according to the information received from nursing care intensity classification. The material was received from the Central Hospital's 12 general wards between 2000 and 2001. The RAFAELA PCS consists of three different measures: a system measuring patient care intensity, a system recording daily nursing resources, and a system measuring the optimal nursing care intensity/nurse situation. The data were analysed in proportion to the labour costs of nursing work and, from that, we calculated the employer's loss (a situation below the optimal level) and savings (a situation above the optimal level) per ward as both costs and the number of nurses. In 2000 the wards had on average 77 days below the optimal level and 106 days above it. In 2001 the wards had on average 71 days below the optimal level and 129 above it. Converting all these days to monetary and personnel resources, the employer lost €307,745 or 9.84 nurses and saved €369,080 or 11.80 nurses in total in 2000. In 2001 the employer lost in total €242,143 or 7.58 nurses and saved €457,615 or 14.32 nurses. During the research period nursing resources seemed not to have been transferred between wards. The RAFAELA PCS is applicable to the allocation of nursing resources, but its possibilities have not been fully used in the hospital studied. The management of nursing work should actively use the information received from nursing care intensity classification and plan and implement the transfer of nursing resources in order to ensure the quality of patient care. Information on which units resources should be allocated to is needed in planning the staff resources of the whole hospital. More resources do not solve the managerial problem of the right allocation of resources; if resources are placed wrongly, the problems of daily staff management and cost control continue.

  19. Optimal Sizing of Energy Storage for Community Microgrids Considering Building Thermal Dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Guodong; Li, Zhi; Starke, Michael R.

    This paper proposes an optimization model for the optimal sizing of energy storage in community microgrids considering the building thermal dynamics and customer comfort preference. The proposed model minimizes the annualized cost of the community microgrid, including energy storage investment, purchased energy cost, demand charge, energy storage degradation cost, voluntary load shedding cost and the cost associated with customer discomfort due to room temperature deviation. The decision variables are the power and energy capacity of the invested energy storage. In particular, we assume the heating, ventilation and air-conditioning (HVAC) systems can be scheduled intelligently by the microgrid central controller while maintaining the indoor temperature in the comfort range set by customers. For this purpose, the detailed thermal dynamic characteristics of buildings have been integrated into the optimization model. Numerical simulation shows significant cost reduction by the proposed model. The impacts of various costs on the optimal solution are investigated by sensitivity analysis.

  20. Optimal use of novel agents in chronic lymphocytic leukemia.

    PubMed

    Smith, Mitchell R; Weiss, Robert F

    2018-05-07

    Novel agents are changing therapy for patients with CLL, but their optimal use remains unclear. We model the clinical situation in which CLL responds to therapy, but resistant clones, generally carrying del17p, progress and lead to relapse. Sub-clones of varying growth rates and treatment sensitivity affect predicted therapy outcomes. We explore effects of different approaches to starting a novel agent in relation to bendamustine-rituximab induction therapy: at initiation of therapy, at the end of chemo-immunotherapy, at molecular relapse, or at clinical detection of relapse. The outcomes differ depending on the underlying clonal architecture, raising the concept that personalized approaches based on clinical evaluation of each patient's clonal architecture might optimize outcomes while minimizing toxicity and cost.

  1. Discrete-time Markovian-jump linear quadratic optimal control

    NASA Technical Reports Server (NTRS)

    Chizeck, H. J.; Willsky, A. S.; Castanon, D.

    1986-01-01

    This paper is concerned with the optimal control of discrete-time linear systems that possess randomly jumping parameters described by finite-state Markov processes. For problems having quadratic costs and perfect observations, the optimal control laws and expected costs-to-go can be precomputed from a set of coupled Riccati-like matrix difference equations. Necessary and sufficient conditions are derived for the existence of optimal constant control laws which stabilize the controlled system as the time horizon becomes infinite, with finite optimal expected cost.
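
    The coupled Riccati-like recursion mentioned above can be written down directly for a two-mode example (toy matrices assumed). For each mode i, with E_i(P) = sum_j p_ij P_j, the backward recursion is K_i = (R + B_i' E_i(P) B_i)^{-1} B_i' E_i(P) A_i and P_i = Q + A_i' E_i(P) A_i - A_i' E_i(P) B_i K_i:

    import numpy as np

    # Toy two-mode Markov jump linear system (all matrices assumed).
    A = [np.array([[1.0, 0.1], [0.0, 1.0]]), np.array([[1.0, 0.3], [0.0, 0.7]])]
    B = [np.array([[0.0], [0.1]]), np.array([[0.0], [0.2]])]
    p = np.array([[0.9, 0.1], [0.3, 0.7]])    # mode transition probabilities
    Q, R = np.eye(2), np.eye(1)

    P = [np.zeros((2, 2)), np.zeros((2, 2))]  # terminal costs
    for _ in range(200):                       # iterate backward in time
        EP = [p[i, 0] * P[0] + p[i, 1] * P[1] for i in range(2)]
        K, Pn = [], []
        for i in range(2):
            G = R + B[i].T @ EP[i] @ B[i]
            Ki = np.linalg.solve(G, B[i].T @ EP[i] @ A[i])
            K.append(Ki)
            Pn.append(Q + A[i].T @ EP[i] @ A[i] - A[i].T @ EP[i] @ B[i] @ Ki)
        P = Pn

    # After enough backward steps the P_i approximate the infinite-horizon
    # cost x' P_i x, with mode-dependent feedback u = -K_i x.
    print("P_0 =\n", np.round(P[0], 3), "\nK_0 =", np.round(K[0], 3))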

  2. Pharmacist-led management of chronic pain in primary care: costs and benefits in a pilot randomised controlled trial.

    PubMed

    Neilson, Aileen R; Bruhn, Hanne; Bond, Christine M; Elliott, Alison M; Smith, Blair H; Hannaford, Philip C; Holland, Richard; Lee, Amanda J; Watson, Margaret; Wright, David; McNamee, Paul

    2015-04-01

    To explore differences in mean costs (from a UK National Health Service perspective) and effects of pharmacist-led management of chronic pain in primary care evaluated in a pilot randomised controlled trial (RCT), and to estimate optimal sample size for a definitive RCT. Regression analysis of costs and effects, using intention-to-treat and expected value of sample information analysis (EVSI). Six general practices: Grampian (3); East Anglia (3). 125 patients with complete resource use and short form-six-dimension questionnaire (SF-6D) data at baseline, 3 months and 6 months. Patients were randomised to either pharmacist medication review with face-to-face pharmacist prescribing or pharmacist medication review with feedback to general practitioner or treatment as usual (TAU). Differences in mean total costs and effects measured as quality-adjusted life years (QALYs) at 6 months and EVSI for sample size calculation. Unadjusted total mean costs per patient were £452 for prescribing (SD: £466), £570 for review (SD: £527) and £668 for TAU (SD: £1333). After controlling for baseline costs, the adjusted mean cost differences per patient relative to TAU were £77 for prescribing (95% CI -82 to 237) and £54 for review (95% CI -103 to 212). Unadjusted mean QALYs were 0.3213 for prescribing (SD: 0.0659), 0.3161 for review (SD: 0.0684) and 0.3079 for TAU (SD: 0.0606). Relative to TAU, the adjusted mean differences were 0.0069 for prescribing (95% CI -0.0091 to 0.0229) and 0.0097 for review (95% CI -0.0054 to 0.0248). The EVSI suggested the optimal future trial size was between 460 and 690, and between 540 and 780 patients per arm using a threshold of £30,000 and £20,000 per QALY gained, respectively. Compared with TAU, pharmacist-led interventions for chronic pain appear more costly and provide similar QALYs. However, these estimates are imprecise due to the small size of the pilot trial. The EVSI indicates that a larger trial is necessary to obtain more precise estimates of differences in mean effects and costs between treatment groups. ISRCTN06131530.

  3. Using Approximations to Accelerate Engineering Design Optimization

    NASA Technical Reports Server (NTRS)

    Torczon, Virginia; Trosset, Michael W.

    1998-01-01

    Optimization problems that arise in engineering design are often characterized by several features that hinder the use of standard nonlinear optimization techniques. Foremost among these features is that the functions used to define the engineering optimization problem often are computationally intensive. Within a standard nonlinear optimization algorithm, the computational expense of evaluating the functions that define the problem would necessarily be incurred for each iteration of the optimization algorithm. Faced with such prohibitive computational costs, an attractive alternative is to make use of surrogates within an optimization context since surrogates can be chosen or constructed so that they are typically much less expensive to compute. For the purposes of this paper, we will focus on the use of algebraic approximations as surrogates for the objective. In this paper we introduce the use of so-called merit functions that explicitly recognize the desirability of improving the current approximation to the objective during the course of the optimization. We define and experiment with the use of merit functions chosen to simultaneously improve both the solution to the optimization problem (the objective) and the quality of the approximation. Our goal is to further improve the effectiveness of our general approach without sacrificing any of its rigor.
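
    A toy version of the merit-function loop might look as follows (all weights and the schedule are illustrative, and a cubic polynomial stands in for the algebraic surrogate): the merit subtracts a shrinking distance-to-data bonus from the surrogate, so early iterations favor points that improve the approximation and later ones favor minimizing it.

    import numpy as np

    def expensive_objective(x):          # stand-in for a costly simulation
        return (x - 0.7) ** 2 + 0.1 * np.sin(12 * x)

    X = list(np.linspace(0.0, 1.0, 4))   # initial design sites
    Y = [expensive_objective(x) for x in X]

    for it in range(10):
        coef = np.polyfit(X, Y, deg=min(3, len(X) - 1))   # algebraic surrogate
        grid = np.linspace(0.0, 1.0, 401)
        surrogate = np.polyval(coef, grid)
        spread = np.array([min(abs(g - x) for x in X) for g in grid])
        merit = surrogate - 0.5 * (0.8 ** it) * spread    # explore, then exploit
        x_new = grid[merit.argmin()]
        X.append(x_new)                  # evaluate the true objective there
        Y.append(expensive_objective(x_new))

    best = X[int(np.argmin(Y))]
    print(f"best point ~ {best:.3f}, value ~ {min(Y):.4f}")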

  4. Optimal control of population recovery--the role of economic restoration threshold.

    PubMed

    Lampert, Adam; Hastings, Alan

    2014-01-01

    A variety of ecological systems around the world have been damaged in recent years, either by natural factors such as invasive species, storms and global change or by direct human activities such as overfishing and water pollution. Restoration of these systems to provide ecosystem services entails significant economic benefits. Thus, choosing how and when to restore in an optimal fashion is important, but has not been well studied. Here we examine a general model where population growth can be induced or accelerated by investing in active restoration. We show that the most cost-effective method to restore an ecosystem dictates investment until the population approaches an 'economic restoration threshold', a density above which the ecosystem should be left to recover naturally. Therefore, determining this threshold is a key general approach for guiding efficient restoration management, and we demonstrate how to calculate this threshold for both deterministic and stochastic ecosystems.

  5. A mixed integer linear programming model for operational planning of a biodiesel supply chain network from used cooking oil

    NASA Astrophysics Data System (ADS)

    Jonrinaldi; Hadiguna, Rika Ampuh; Salastino, Rades

    2017-11-01

    Environmental consciousness has received much attention nowadays. It concerns not only how to recycle, remanufacture or reuse used end products but also how to optimize the operations of the reverse system. A previous study proposed a design of a reverse supply chain network for biodiesel from used cooking oil. However, that research focused on the supply chain strategy, not the operations of the supply chain: it only decided how to design the structure of the supply chain over the next few years and, in general terms, the process to be conducted at each stage. The supply chain design did not consider operational policies to be conducted by the companies in the supply chain. Companies need a policy for each stage of supply chain operations so as to produce an optimal supply chain system, including how to use all the resources that have been designed in order to achieve the objectives of the system. Therefore, this paper proposes a model to optimize the operational planning of a biodiesel supply chain network from used cooking oil. A mixed integer linear programming model is developed for the operational planning of the biodiesel supply chain in order to minimize the total operational cost. Based on the implementation of the model, the total operational cost incurred by the system over seven days of operational planning is 2,743,470.00, or 0.186%, lower than the total operational cost of the supply chain based on the previous research. Production costs contributed 74.6% of the total operational cost and the cost of purchasing used cooking oil contributed 24.1%. The system should therefore pay most attention to these two aspects, as changes in their values will cause significant changes in the total operational cost of the supply chain.
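
    A drastically reduced sketch of such an operational-planning MILP (toy data and costs; the paper's network has more echelons and constraints) can be written with the PuLP modeling library, assuming it is available:

    import pulp

    days = range(7)
    demand = [120, 100, 140, 130, 110, 90, 150]      # liters/day (assumed)
    buy_cost, prod_cost, hold_cost = 3.0, 9.0, 0.5   # per liter (assumed)
    CAP = 160                                        # daily production capacity

    m = pulp.LpProblem("biodiesel_ops", pulp.LpMinimize)
    buy = pulp.LpVariable.dicts("oil_buy", days, lowBound=0)
    prod = pulp.LpVariable.dicts("produce", days, lowBound=0, upBound=CAP)
    inv = pulp.LpVariable.dicts("inventory", days, lowBound=0)
    run = pulp.LpVariable.dicts("run", days, cat="Binary")  # daily setup choice

    m += pulp.lpSum(buy_cost * buy[t] + prod_cost * prod[t]
                    + hold_cost * inv[t] + 50.0 * run[t] for t in days)
    for t in days:
        m += prod[t] <= CAP * run[t]                 # produce only if line runs
        m += buy[t] >= prod[t]                       # oil input covers output
        prev = inv[t - 1] if t > 0 else 0
        m += prev + prod[t] == demand[t] + inv[t]    # daily flow balance

    m.solve(pulp.PULP_CBC_CMD(msg=False))
    print("total cost:", pulp.value(m.objective))
    print([int(pulp.value(prod[t])) for t in days])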

  6. Bone mineral density referral for dual-energy X-ray absorptiometry using quantitative ultrasound as a prescreening tool in postmenopausal women from the general population: a cost-effectiveness analysis.

    PubMed

    Marín, F; López-Bastida, J; Díez-Pérez, A; Sacristán, J A

    2004-03-01

    The aim of our study was to assess, from the perspective of the National Health Services in Spain, the cost-effectiveness of quantitative ultrasound (QUS) as a prescreening referral method for bone mineral density (BMD) assessment by dual-energy X-ray absorptiometry (DXA) in postmenopausal women of the general population, using femoral neck DXA and heel QUS. We evaluated 267 consecutive postmenopausal women 65 years and older attending primary care physician offices for any medical reason. Subjects were classified as osteoporotic or nonosteoporotic (normal or osteopenic) using the WHO definition for DXA. Effectiveness was assessed in terms of the sensitivity and specificity of the referral decisions based on the QUS measurement. Local costs were estimated from health services and actual resource use. Cost-effectiveness was evaluated in terms of the expected cost per true positive osteoporotic case detected. Baseline prevalence of osteoporosis evaluated by DXA was 55.8%. The sensitivity and specificity for the diagnosis of osteoporosis by QUS using the optimal cutoff thresholds for the estimated heel BMD T-score were 97% and 94%, respectively. The average cost per osteoporotic case detected based on DXA measurement alone was 23.85 euros. The average cost per osteoporotic case detected using QUS as a prescreen was 22.00 euros. The incremental cost-effectiveness of DXA versus QUS was 114.00 euros per true positive case detected. Our results suggest that screening for osteoporosis with QUS while applying strict cutoff values in postmenopausal women of the general population is not substantially more cost-effective than DXA alone for the diagnosis of osteoporosis. However, the screening strategy with QUS may be an option in those circumstances where the diagnosis of osteoporosis is deficient because of the difficulty in accessing DXA equipment.

  7. Robust Adaptive Modified Newton Algorithm for Generalized Eigendecomposition and Its Application

    NASA Astrophysics Data System (ADS)

    Yang, Jian; Yang, Feng; Xi, Hong-Sheng; Guo, Wei; Sheng, Yanmin

    2007-12-01

    We propose a robust adaptive algorithm for generalized eigendecomposition problems that arise in modern signal processing applications. To that end, the generalized eigendecomposition problem is reinterpreted as an unconstrained nonlinear optimization problem. Starting from the proposed cost function and making use of an approximation of the Hessian matrix, a robust modified Newton algorithm is derived. A rigorous analysis of its convergence properties is presented by using stochastic approximation theory. We also apply this theory to solve the signal reception problem of multicarrier DS-CDMA to illustrate its practical application. The simulation results show that the proposed algorithm has fast convergence and excellent tracking capability, which are important in a practical time-varying communication environment.

  8. Investigation of storage system designs and techniques for optimizing energy conservation in integrated utility systems. Volume 3: (Assessment of technical and cost characteristics of candidate IUS energy storage devices)

    NASA Technical Reports Server (NTRS)

    1976-01-01

    Six energy storage technologies (inertial, superconducting magnetic, electrochemical, chemical, compressed air, and thermal) were assessed and evaluated for specific applicability to the IUS. To provide a perspective for the individual storage technologies, a brief outline of the general nature of energy storage and its significance to the user is presented.

  9. Dynamic Network Formation Using Ant Colony Optimization

    DTIC Science & Technology

    2009-03-01

    The report discusses vehicle routing problem (VRP) variants, including the VRP with backhauls, VRP with pick-up and delivery, VRP with satellite facilities, and VRP with time windows (Murata & Itai, 2005). In the general VRP, each customer on a given route is visited only once, and the objective of the basic problem is to minimize the total cost over the M vehicle routes, min sum_{m=1}^{M} c_m (Murata & Itai, 2005). Cited references include a VRP study based on the Ant Colony System from the Second International Workshop on Freight Transportation and Logistics, Palermo, Italy, and Murata, T., & Itai, R. (2005).

  10. Modeling energetic and theoretical costs of thermoregulatory strategy.

    PubMed

    Alford, John G; Lutterschmidt, William I

    2012-01-01

    Poikilothermic ectotherms have evolved behaviours that help them maintain or regulate their body temperature (T_b) around a preferred or 'set point' temperature (T_set). Thermoregulatory behaviours may range from body positioning to optimize heat gain to shuttling among preferred microhabitats to find appropriate environmental temperatures. We have modelled movement patterns between active and non-active shuttling behaviour within a habitat (as a biased random walk) to investigate the potential cost of two thermoregulatory strategies. Generally, small-bodied ectotherms actively thermoregulate while large-bodied ectotherms may passively thermoconform to their environment. We were interested in the potential energetic cost for a large-bodied ectotherm if it were forced to actively thermoregulate rather than thermoconform. We therefore modelled movements and the resulting comparative energetic costs of precisely maintaining T_set for a small-bodied versus a large-bodied ectotherm to study and evaluate the thermoregulatory strategies.
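
    The biased-random-walk comparison can be sketched as follows (all parameters are assumed illustration values, not the paper's model): a walker shuttles between warm and cool microhabitats with a bias toward whichever patch restores T_set, and the per-move energetic cost scales with body mass, so precise regulation is costlier for larger bodies.

    import random

    def simulate(mass_kg, t_set=30.0, steps=10_000, seed=1):
        rng = random.Random(seed)
        t_body, cost = t_set, 0.0
        move_cost = 0.05 * mass_kg          # assumed per-move energetic cost
        env = {"warm": 34.0, "cool": 26.0}  # assumed microhabitat temperatures
        patch = "warm"
        for _ in range(steps):
            # biased step: prefer the patch that corrects the T_b deviation
            want = "warm" if t_body < t_set else "cool"
            p_move = 0.8 if patch != want else 0.1
            if rng.random() < p_move:
                patch = "warm" if patch == "cool" else "cool"
                cost += move_cost
            # body temperature relaxes toward the occupied patch temperature
            t_body += 0.1 * (env[patch] - t_body)
        return cost

    for mass in (0.1, 10.0):
        print(f"{mass:5.1f} kg ectotherm: regulation cost ~ "
              f"{simulate(mass):8.1f} (arb. units)")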

  11. The Integration of Production-Distribution on Newspapers Supply Chain for Cost Minimization using Analytic Models: Case Study

    NASA Astrophysics Data System (ADS)

    Febriana Aqidawati, Era; Sutopo, Wahyudi; Hisjam, Muh.

    2018-03-01

    Newspapers are products with special characteristics: they are perishable, have a short time span between production and distribution, carry zero inventory, and lose sales value as time passes. Generally, the production and distribution problem in the newspaper supply chain is the integration of production planning and distribution to minimize the total cost. The approach used in this article to solve the problem is an analytical model. Several parameters and constraints have been considered in calculating the total cost of the integrated production and distribution of newspapers over the determined time horizon. This model can be used by production and marketing managers as decision support in determining the optimal quantities of production and distribution in order to obtain minimum cost, so that the company's competitiveness can be increased.

  12. Optimization of the Water Volume in the Buckets of Pico Hydro Overshot Waterwheel by Analytical Method

    NASA Astrophysics Data System (ADS)

    Budiarso; Adanta, Dendy; Warjito; Siswantara, A. I.; Saputra, Pradhana; Dianofitra, Reza

    2018-03-01

    Rapid economic and population growth in Indonesia leads to increased energy consumption, including electricity needs. Pico hydro is considered a suitable solution because its investment and operational costs are fairly low. Additionally, Indonesia has many remote areas with high hydro-energy potential. The overshot waterwheel is one technology that suits remote areas owing to its ease of operation and maintenance. This study attempts to optimize the bucket dimensions for the available conditions; the optimization also benefits the amount of generated power, because all available energy is utilized maximally. An analytical method is used to evaluate the volume of water contained in the buckets of an overshot waterwheel. In general, two stages are performed. First, the volume of water contained in each active bucket is calculated; if the total volume of water in the active buckets is less than the available discharge, the width of the wheel is recalculated. Second, the torque of each active bucket is calculated to determine the power output. As a result, the mechanical power generated by the waterwheel is 305 W with an efficiency of 28%.
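    The two stages reduce to elementary relations: torque from the active buckets, τ = Σ ρV_i·g·x_i; power P = τω; and efficiency measured against the available hydraulic power ρgQH. A sketch with invented bucket volumes and lever arms, not the paper's geometry:

    ```python
    from math import pi

    G = 9.81       # gravitational acceleration, m/s^2
    RHO = 1000.0   # water density, kg/m^3

    def bucket_torque(volumes, lever_arms):
        """Torque from the active buckets: sum of (rho*V_i)*g*x_i, where
        x_i is the horizontal lever arm of bucket i about the axle."""
        return sum(RHO * v * G * x for v, x in zip(volumes, lever_arms))

    def wheel_power(torque, rpm):
        """Mechanical power P = tau * omega."""
        return torque * 2 * pi * rpm / 60.0

    def efficiency(p_mech, discharge, head):
        """Fraction of the available hydraulic power rho*g*Q*H converted."""
        return p_mech / (RHO * G * discharge * head)

    # Illustrative numbers only -- not the paper's measured geometry.
    tau = bucket_torque([0.010, 0.012, 0.009], [0.45, 0.50, 0.40])  # m^3, m
    p = wheel_power(tau, rpm=12)
    print(round(p, 1), "W, efficiency",
          round(efficiency(p, discharge=0.05, head=2.2), 2))
    ```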

  13. Spatial optimization for decentralized non-potable water reuse

    NASA Astrophysics Data System (ADS)

    Kavvada, Olga; Nelson, Kara L.; Horvath, Arpad

    2018-06-01

    Decentralization has the potential to reduce the scale of the piped distribution network needed to enable non-potable water reuse (NPR) in urban areas by producing recycled water closer to its point of use. However, tradeoffs exist between the economies of scale of treatment facilities and the size of the conveyance infrastructure, including energy for upgradient distribution of recycled water. To adequately capture the impacts from distribution pipes and pumping requirements, site-specific conditions must be accounted for. In this study, a generalized framework (a heuristic modeling approach using geospatial algorithms) is developed that estimates the financial cost, the energy use, and the greenhouse gas emissions associated with NPR (for toilet flushing) as a function of scale of treatment and conveyance networks with the goal of determining the optimal degree of decentralization. A decision-support platform is developed to assess and visualize NPR system designs considering topography, economies of scale, and building size. The platform can be used for scenario development to explore the optimal system size based on the layout of current or new buildings. The model also promotes technology innovation by facilitating the systems-level comparison of options to lower costs, improve energy efficiency, and lower greenhouse gas emissions.

  14. Political Orientation Predicts Credulity Regarding Putative Hazards.

    PubMed

    Fessler, Daniel M T; Pisor, Anne C; Holbrook, Colin

    2017-05-01

    To benefit from information provided by other people, people must be somewhat credulous. However, credulity entails risks. The optimal level of credulity depends on the relative costs of believing misinformation and failing to attend to accurate information. When information concerns hazards, erroneous incredulity is often more costly than erroneous credulity, given that disregarding accurate warnings is more harmful than adopting unnecessary precautions. Because no equivalent asymmetry exists for information concerning benefits, people should generally be more credulous of hazard information than of benefit information. This adaptive negatively biased credulity is linked to negativity bias in general and is more prominent among people who believe the world to be more dangerous. Because both threat sensitivity and beliefs about the dangerousness of the world differ between conservatives and liberals, we predicted that conservatism would positively correlate with negatively biased credulity. Two online studies of Americans supported this prediction, potentially illuminating how politicians' alarmist claims affect different portions of the electorate.

  15. Cost and fuel consumption per nautical mile for two engine jet transports using OPTIM and TRAGEN

    NASA Technical Reports Server (NTRS)

    Wiggs, J. F.

    1982-01-01

    The cost and fuel consumption per nautical mile for two engine jet transports are computed using OPTIM and TRAGEN. The savings in fuel and direct operating costs per nautical mile for each of the different types of optimal trajectories over a standard profile are shown.

  16. Neural-network-based online HJB solution for optimal robust guaranteed cost control of continuous-time uncertain nonlinear systems.

    PubMed

    Liu, Derong; Wang, Ding; Wang, Fei-Yue; Li, Hongliang; Yang, Xiong

    2014-12-01

    In this paper, the infinite-horizon optimal robust guaranteed cost control of continuous-time uncertain nonlinear systems is investigated using a neural-network-based online solution of the Hamilton-Jacobi-Bellman (HJB) equation. By establishing an appropriate bounded function and defining a modified cost function, the optimal robust guaranteed cost control problem is transformed into an optimal control problem. It can be observed that the optimal cost function of the nominal system is exactly the optimal guaranteed cost of the original uncertain system. A critic neural network is constructed to facilitate the solution of the modified HJB equation corresponding to the nominal system. More importantly, an additional stabilizing term is introduced to help verify stability; it reinforces the updating process of the weight vector and relaxes the requirement for an initial stabilizing control. The uniform ultimate boundedness of the closed-loop system is analyzed using the Lyapunov approach as well. Two simulation examples are provided to verify the effectiveness of the present control approach.

  17. Multi objective optimization model for minimizing production cost and environmental impact in CNC turning process

    NASA Astrophysics Data System (ADS)

    Widhiarso, Wahyu; Rosyidi, Cucuk Nur

    2018-02-01

    Minimizing production cost in a manufacturing company increases its profit. The cutting parameters affect the total processing time, which in turn affects the production cost of the machining process. Besides affecting production cost and processing time, the cutting parameters also affect the environment. An optimization model is therefore needed to determine the optimum cutting parameters. In this paper, we develop a multi-objective optimization model to minimize production cost and environmental impact in the CNC turning process. Cutting speed and feed rate serve as the decision variables. The constraints considered are cutting speed, feed rate, cutting force, output power, and surface roughness. The environmental impact is converted from the environmental burden using eco-indicator 99. A numerical example is given to show the implementation of the model, solved using the OptQuest tool of Oracle Crystal Ball software. The optimization results indicate that the model can be used to optimize the cutting parameters so as to minimize both production cost and environmental impact.
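    A hedged sketch of such a weighted-sum, bound-constrained formulation: generic textbook expressions for machining time, Taylor-type tool life, and an eco-indicator-style impact proxy stand in for the paper's equations, and SciPy's bounded minimizer stands in for OptQuest; all coefficients and weights are invented.

    ```python
    # Weighted-sum compromise between production cost and environmental
    # impact for a turning pass; the models and numbers are illustrative.
    import numpy as np
    from scipy.optimize import minimize

    D, L = 50.0, 100.0                 # workpiece diameter and length, mm

    def machining_time(x):
        v, f = x                       # cutting speed (m/min), feed (mm/rev)
        return np.pi * D * L / (1000.0 * v * f)        # minutes per pass

    def production_cost(x):
        t = machining_time(x)
        tool_life = 4.0e8 / (x[0] ** 3 * x[1])         # Taylor-type tool life, min
        return 0.5 * t + 15.0 * (t / tool_life)        # machine time + tooling

    def environmental_impact(x):
        return 0.02 * machining_time(x) * x[0]         # eco-indicator-style proxy

    w = 0.7                                            # preference weight
    res = minimize(lambda x: w * production_cost(x)
                             + (1 - w) * environmental_impact(x),
                   x0=[150.0, 0.2],
                   bounds=[(60.0, 300.0), (0.05, 0.5)])
    print(res.x, res.fun)
    ```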

  18. Cost Scaling of a Real-World Exhaust Waste Heat Recovery Thermoelectric Generator: A Deeper Dive

    NASA Astrophysics Data System (ADS)

    Hendricks, Terry J.; Yee, Shannon; LeBlanc, Saniya

    2016-03-01

    Cost is as important as power density or efficiency for the adoption of waste heat recovery thermoelectric generators (TEGs) in many transportation and industrial energy recovery applications. In many cases, the system design that minimizes cost (e.g., the $/W value) can be very different from the design that maximizes the system's efficiency or power density, and it is important to understand the relationship between those designs to optimize TEG performance-cost compromises. Expanding on recent cost analysis work and using more detailed system modeling, an enhanced cost scaling analysis of a waste heat recovery TEG with more detailed, coupled treatment of the heat exchangers has been performed. In this analysis, the effect of the heat lost to the environment and updated relationships between the hot-side and cold-side conductances that maximize power output are considered. This coupled thermal and thermoelectric (TE) treatment of the exhaust waste heat recovery TEG yields modified cost scaling and design optimization equations, which are now strongly dependent on the heat leakage fraction, exhaust mass flow rate, and heat exchanger effectiveness. This work shows that heat exchanger costs most often dominate the overall TE system costs, that it is extremely difficult to escape this regime, and that to achieve TE system costs of $1/W it is necessary to achieve heat exchanger costs of $1/(W/K). Minimum TE system costs per watt generally coincide with maximum power points, but preferred TE design regimes are identified where there is little cost penalty for moving into regions of higher efficiency and slightly lower power output. These regimes are closely tied to previously identified low-cost design regimes. This work shows that the optimum fill factor F_opt minimizing system costs decreases as heat losses increase, and increases as exhaust mass flow rate and heat exchanger effectiveness increase. These findings have profound implications for the design and operation of various TE waste heat recovery systems. This work highlights the importance of heat exchanger costs in the overall TEG system costs, quantifies the possible TEG performance-cost domain space based on heat exchanger effects, and provides a focus for future system research and development efforts.
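    The interior optimum in fill factor can be reproduced with a toy model: TE material cost grows linearly with F while the heat delivered through the couple saturates once the couple's thermal resistance falls below the heat exchanger's, so $/W has a minimum. All coefficients below are illustrative assumptions, not the paper's data.

    ```python
    # Sweep fill factor F and locate the $/W minimum; invented coefficients.
    import numpy as np

    A = 0.1             # module footprint, m^2
    dT = 250.0          # hot-to-cold temperature difference, K
    R_hx = 0.05         # heat-exchanger thermal resistance, K/W
    k, Lel = 1.5, 2e-3  # TE thermal conductivity (W/m-K), element length (m)
    eta = 0.05          # conversion efficiency (assumed constant here)
    c_te, c_hx = 3.0e4, 400.0   # $/m^2 of TE material at F=1; $ for exchangers

    F = np.linspace(0.01, 1.0, 200)
    R_te = Lel / (k * F * A)            # TE thermal resistance vs fill factor
    P = eta * dT / (R_hx + R_te)        # electrical output, W
    dollars_per_watt = (c_hx + c_te * A * F) / P

    i = np.argmin(dollars_per_watt)
    print(f"F_opt ~ {F[i]:.2f}, min cost ~ {dollars_per_watt[i]:.2f} $/W")
    ```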

  19. Sample size calculation in cost-effectiveness cluster randomized trials: optimal and maximin approaches.

    PubMed

    Manju, Md Abu; Candel, Math J J M; Berger, Martijn P F

    2014-07-10

    In this paper, the optimal sample sizes at the cluster and person levels for each of two treatment arms are obtained for cluster randomized trials in which the cost-effectiveness of treatments on a continuous scale is studied. The optimal sample sizes maximize the efficiency or power for a given budget or minimize the budget for a given efficiency or power. Optimal sample sizes require information on the intra-cluster correlations (ICCs) for effects and costs, the correlations between costs and effects at the individual and cluster levels, the ratio of the variance of effects translated into costs to the variance of the costs (the variance ratio), sampling and measuring costs, and the budget. When planning a study, information on the model parameters is usually not available. To overcome this local optimality problem, the current paper also presents maximin sample sizes. The maximin sample sizes turn out to be rather robust against misspecifying the correlation between costs and effects at the cluster and individual levels, but may lose much efficiency when the variance ratio is misspecified. The robustness of the maximin sample sizes against misspecifying the ICCs depends on the variance ratio. The maximin sample sizes are robust under misspecification of the ICC for costs for realistic values of the variance ratio greater than one, but not under misspecification of the ICC for effects. Finally, we show how to calculate optimal or maximin sample sizes that yield sufficient power for a test on the cost-effectiveness of an intervention.
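    For orientation, the classic single-outcome special case has a closed form: with cluster-level cost c_cluster, person-level cost c_person and intra-cluster correlation ρ, the variance-minimizing cluster size is n = sqrt((c_cluster/c_person)·(1-ρ)/ρ). A sketch of that textbook result (not the paper's full cost-effectiveness model, which carries ICCs for both costs and effects):

    ```python
    from math import sqrt

    def optimal_cluster_design(budget, c_cluster, c_person, icc):
        """Budget-constrained optimum for a cluster randomized trial with a
        single outcome: minimizes the variance of the treatment effect."""
        n = sqrt((c_cluster / c_person) * (1.0 - icc) / icc)  # persons/cluster
        k = budget / (c_cluster + n * c_person)               # clusters/arm
        return round(n), int(k)

    print(optimal_cluster_design(budget=100_000, c_cluster=500,
                                 c_person=25, icc=0.05))
    ```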

  20. Implications of optimization cost for balancing exploration and exploitation in global search and for experimental optimization

    NASA Astrophysics Data System (ADS)

    Chaudhuri, Anirban

    Global optimization based on expensive and time-consuming simulations or experiments usually cannot be carried out to convergence, but must be stopped because of time constraints, or because the cost of additional function evaluations exceeds the benefit of improving the objective(s). This dissertation sets out to explore the implications of such budget and time constraints on the balance between exploration and exploitation and on the decision of when to stop. Three different aspects are considered in terms of their effects on the balance between exploration and exploitation: (1) the history of the optimization, (2) a fixed evaluation budget, and (3) cost as a part of the objective function. To this end, this research develops modifications to the surrogate-based Efficient Global Optimization algorithm that better control the balance between exploration and exploitation, along with stopping criteria facilitated by these modifications. The focus then shifts to experimental optimization, which shares the issues of cost and time constraints. Through a study on optimizing thrust and power for a small flapping wing for micro air vehicles, important differences and similarities between experimental and simulation-based optimization are identified. The most important difference is that reduction of noise in experiments becomes a major time and cost issue; a second difference is that parallelism as a way to cut cost is more challenging. The experimental optimization reveals the tendency of the surrogate to display optimistic bias near the surrogate optimum, and this tendency is then verified to also occur in simulation-based optimization.
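    The exploration-exploitation balance in Efficient Global Optimization is carried by the Expected Improvement acquisition, EI = (f_best - μ)Φ(z) + σφ(z) with z = (f_best - μ)/σ: the first term rewards exploitation of a low predicted mean, the second rewards exploration of high predictive uncertainty. The sketch below adds a weight w in the spirit of the balance-controlling modifications the dissertation describes; the specific weighting is an assumption, not the author's formula.

    ```python
    # Weighted Expected Improvement for minimization; w=0.5 recovers
    # standard EI, w>0.5 tilts the search toward exploitation.
    from math import erf, exp, pi, sqrt

    def expected_improvement(mu, sigma, f_best, w=0.5):
        """EI at a point with GP predictive mean mu and std sigma."""
        if sigma <= 0.0:
            return 0.0
        z = (f_best - mu) / sigma
        phi = exp(-0.5 * z * z) / sqrt(2.0 * pi)     # standard normal pdf
        Phi = 0.5 * (1.0 + erf(z / sqrt(2.0)))       # standard normal cdf
        return 2.0 * (w * (f_best - mu) * Phi + (1.0 - w) * sigma * phi)

    print(expected_improvement(mu=0.9, sigma=0.3, f_best=1.0))
    ```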

  1. On process optimization considering LCA methodology.

    PubMed

    Pieragostini, Carla; Mussati, Miguel C; Aguirre, Pío

    2012-04-15

    The goal of this work is to survey the state of the art in process optimization techniques and tools based on LCA, focused on the process engineering field. A collection of methods, approaches, applications, specific software packages, and insights regarding experiences and progress made in applying the LCA methodology coupled to optimization frameworks is provided, and general trends are identified. The "cradle-to-gate" concept is the most used approach in practice for defining the system boundaries, rather than the "cradle-to-grave" approach. Normally, the relationship between inventory data and impact category indicators is expressed linearly through the characterization factors; synergistic effects of the contaminants are thus neglected. Among the LCIA methods, eco-indicator 99, which is based on endpoint categories and the panel method, is the most used in practice. A single environmental impact function, resulting from the aggregation of environmental impacts, is formulated as the environmental objective in most of the analyzed cases. SimaPro is the most used software for LCA applications in the literature analyzed. Multi-objective optimization is the most used approach for dealing with this kind of problem, and the ε-constraint method is the most applied technique for generating the Pareto set. However, a renewed interest in formulating a single economic objective function in optimization frameworks can be observed, favored by the development of life cycle cost software and progress made in assessing the costs of environmental externalities. Finally, a trend toward dealing with multi-period scenarios in integrated LCA-optimization frameworks can be distinguished, providing more accurate results where data are available. Copyright © 2011 Elsevier Ltd. All rights reserved.
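    A minimal sketch of the ε-constraint method named above: minimize the economic objective while capping the environmental one, then sweep the cap to trace the Pareto set. The two quadratic objectives are toy stand-ins for real cost and LCA impact models.

    ```python
    # epsilon-constraint generation of a Pareto front (toy objectives).
    import numpy as np
    from scipy.optimize import NonlinearConstraint, minimize

    cost   = lambda x: (x[0] - 3) ** 2 + 2 * (x[1] - 1) ** 2   # economic
    impact = lambda x: x[0] ** 2 + x[1] ** 2                   # environmental

    pareto = []
    for eps in np.linspace(1.0, 10.0, 10):
        res = minimize(cost, x0=[0.0, 0.0],
                       constraints=[NonlinearConstraint(impact, -np.inf, eps)])
        pareto.append((impact(res.x), cost(res.x)))

    for e, c in pareto:
        print(f"impact <= {e:6.2f}  ->  cost {c:6.2f}")
    ```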

  2. Multi-objective and Perishable Fuzzy Inventory Models Having Weibull Life-time With Time Dependent Demand, Demand Dependent Production and Time Varying Holding Cost: A Possibility/Necessity Approach

    NASA Astrophysics Data System (ADS)

    Pathak, Savita; Mondal, Seema Sarkar

    2010-10-01

    A multi-objective inventory model for deteriorating items has been developed with a Weibull rate of decay, time-dependent demand, demand-dependent production, and time-varying holding cost, allowing shortages, in fuzzy environments for non-integrated and integrated businesses. The objective is to maximize the profit from different deteriorating items under a space constraint. The impreciseness of inventory parameters and goals for the non-integrated business has been expressed by linear membership functions. Compromise solutions are obtained by different fuzzy optimization methods. To incorporate the relative importance of the objectives, different crisp/fuzzy cardinal weights have been assigned. The models are illustrated with numerical examples, and the results of the models with crisp and fuzzy weights are compared. The result for the integrated business model is obtained using the Generalized Reduced Gradient (GRG) method. The fuzzy integrated model with imprecise inventory costs is formulated to optimize the possibility/necessity measure of the fuzzy goal of the objective function, using a credibility measure of the fuzzy event and taking the fuzzy expectation. The results of the crisp/fuzzy integrated models are illustrated with numerical examples and compared.

  3. Scheduling Multilevel Deadline-Constrained Scientific Workflows on Clouds Based on Cost Optimization

    DOE PAGES

    Malawski, Maciej; Figiela, Kamil; Bubak, Marian; ...

    2015-01-01

    This paper presents a cost optimization model for scheduling scientific workflows on IaaS clouds such as Amazon EC2 or RackSpace. We assume multiple IaaS clouds with heterogeneous virtual machine instances, with a limited number of instances per cloud and hourly billing. Input and output data are stored on a cloud object store such as Amazon S3. Applications are scientific workflows modeled as DAGs, as in the Pegasus Workflow Management System. We assume that tasks in the workflows are grouped into levels of identical tasks. Our model is specified using mathematical programming languages (AMPL and CMPL) and allows us to minimize the cost of workflow execution under deadline constraints. We present results obtained using our model and benchmark workflows representing real scientific applications in a variety of domains. The data used for evaluation come from synthetic workflows, from general-purpose cloud benchmarks, and from data measured in our own experiments with Montage, an astronomical application, executed on the Amazon EC2 cloud. We indicate how this model can be used for scenarios that require resource planning for scientific workflows and their ensembles.
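    A toy version of the level-based idea, in the spirit of the cited model rather than its AMPL/CMPL details: pick one VM type per workflow level to minimize dollar cost under a deadline, with hourly billing approximated by paying per started hour. The prices, runtimes, and deadline are invented.

    ```python
    # Tiny binary program: assign each workflow level to one VM type.
    import numpy as np
    from scipy.optimize import Bounds, LinearConstraint, milp

    hours = np.array([[4.0, 2.0, 1.0],    # runtime of level 0 on VM types 0..2
                      [6.0, 3.0, 1.5]])   # runtime of level 1
    price = np.array([[0.4, 1.0, 2.6],    # hourly price of each choice
                      [0.6, 1.5, 3.9]])
    deadline = 5.0

    n_lev, n_vm = hours.shape
    c = (price * np.ceil(hours)).ravel()  # pay per started hour

    one_type = LinearConstraint(np.kron(np.eye(n_lev), np.ones(n_vm)), 1, 1)
    in_time  = LinearConstraint(hours.ravel(), 0, deadline)

    res = milp(c, constraints=[one_type, in_time],
               integrality=np.ones(c.size), bounds=Bounds(0, 1))
    print(res.x.reshape(n_lev, n_vm), res.fun)
    ```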

  4. Cost-effective river rehabilitation planning: optimizing for morphological benefits at large spatial scales.

    PubMed

    Langhans, Simone D; Hermoso, Virgilio; Linke, Simon; Bunn, Stuart E; Possingham, Hugh P

    2014-01-01

    River rehabilitation aims to protect biodiversity or restore key ecosystem services, but the success rate is often low. This is seldom because of insufficient funding for rehabilitation works, but because trade-offs between the costs and ecological benefits of management actions are rarely incorporated in the planning, and because monitoring is often inadequate for managers to learn by doing. In this study, we demonstrate a new approach for planning cost-effective river rehabilitation at large scales. The framework is based on the use of cost functions (the relationship between the cost of rehabilitation and the expected ecological benefit) to optimize the spatial allocation of the rehabilitation actions needed to achieve given rehabilitation goals (in our case established by the Swiss water act). To demonstrate the approach with a simple example, we link the costs of the three types of management actions most commonly used in Switzerland (culvert removal, widening of one riverside buffer, and widening of both riversides) to the improvement in riparian zone quality. We then use Marxan, a widely applied conservation planning software, to identify priority areas for implementing these rehabilitation measures in two neighbouring Swiss cantons (Aargau, AG, and Zürich, ZH). The best rehabilitation plans identified for the two cantons met all the targets (i.e., restoring different types of morphological deficits with different actions), rehabilitating 80,786 m (AG) and 106,036 m (ZH) of the river network at a total cost of 106.1 million CHF (AG) and 129.3 million CHF (ZH). The best rehabilitation plan for the canton of AG consisted of more, and better connected, sub-catchments that were generally less expensive, compared with its neighbouring canton. The framework developed in this study can be used to inform river managers how and where best to spend their rehabilitation budget for a given set of actions; it ensures the cost-effective achievement of desired rehabilitation outcomes and helps towards estimating the total costs of long-term rehabilitation activities. Rehabilitation plans ready to be implemented may be based on aspects additional to the ones considered here, e.g., specific cost functions for rural and urban areas and/or for large and small rivers, which can simply be added to our approach. Optimizing investments in this way will ultimately increase the likelihood of on-ground success of rehabilitation activities. Copyright © 2013 Elsevier Ltd. All rights reserved.

  5. Type of fitness cost influences the rate of evolution of resistance to transgenic Bt crops.

    PubMed

    Hackett, Sean C; Bonsall, Michael B

    2016-10-01

    The evolution of resistance to pesticides by insect pests is a significant challenge for sustainable agriculture. For transgenic crops expressing Bacillus thuringiensis (Bt) crystalline (Cry) toxins, resistance evolution may be delayed by the high-dose/refuge strategy, in which a non-toxic refuge is planted to promote the survival of susceptible insects. The high-dose/refuge strategy may interact with fitness costs associated with resistance alleles to further delay resistance. However, while a diverse range of fitness costs is reported in the field, they are typically represented as a fixed reduction in survival or viability that is insensitive to ecological conditions such as competition. Furthermore, the potential dynamic consequences of restricting susceptible insects to a refuge representing only a fraction of the available space have rarely been considered. We present a generalized discrete-time model that utilizes dynamic programming methods to derive the optimal management decisions for the control of a theoretical insect pest population exposed to Bt crops. We consider three genotypes (susceptible homozygotes, resistant homozygotes and heterozygotes) and implement fitness costs of resistance to Bt toxins as either a decrease in the relative competitive ability of resistant insects or as a penalty on fecundity. Model analysis is repeated and contrasted for two types of density dependence: uniform density dependence, which operates equally across the landscape, and heterogeneous density dependence, where the intensity of competition scales inversely with patch size and is determined separately for the refuge and the Bt crop. When the planting of Bt is decided optimally, fitness costs to fecundity allow for the planting of larger areas of Bt crops than equivalent fitness costs that reduce the competitive ability of resistant insects. Heterogeneous competition only influenced model predictions when the proportional area of Bt planted in each season was decided optimally and resistance was not recessive. Synthesis and applications: The high-dose/refuge strategy alone is insufficient to preserve susceptibility to transgenic Bacillus thuringiensis (Bt) crops in the long term when constraints upon the evolution of resistance are not insurmountable. Fitness costs may enhance the delaying effect of the refuge, but the extent to which they do so depends upon how the cost is realized biologically. Fitness costs that apply independently of other variables may be more beneficial to resistance management than costs that are only visible to selection under a limited range of ecological conditions.

  6. Advanced Structural Optimization Under Consideration of Cost Tracking

    NASA Astrophysics Data System (ADS)

    Zell, D.; Link, T.; Bickelmaier, S.; Albinger, J.; Weikert, S.; Cremaschi, F.; Wiegand, A.

    2014-06-01

    In order to improve the design process of launcher configurations in the early development phase, the Multidisciplinary Optimization (MDO) software was developed. The tool combines several efficient software packages, such as Optimal Design Investigations (ODIN) for structural optimization and the Aerospace Trajectory Optimization Software (ASTOS) for trajectory and vehicle design optimization, for a defined payload and mission. The present paper focuses on the integration and validation of ODIN. ODIN enables the user to optimize typical axisymmetric structures by sizing the stiffening designs for strength and stability while minimizing the structural mass. In addition, a fully automatic finite element model (FEM) generator module creates ready-to-run FEM models of a complete stage or launcher assembly. Cost tracking, and future improvements concerning cost optimization, are indicated.

  7. Capacity planning of link restorable optical networks under dynamic change of traffic

    NASA Astrophysics Data System (ADS)

    Ho, Kwok Shing; Cheung, Kwok Wai

    2005-11-01

    Future backbone networks will require full survivability and support for dynamic changes of traffic demands. Generalized Survivable Networks (GSN) were proposed to meet these challenges. GSN is fully survivable under dynamic traffic demand changes, so it offers a practical and guaranteed characterization framework for ASTN/ASON survivable network planning and bandwidth-on-demand resource allocation. The basic idea of GSN is to incorporate the non-blocking network concept into survivable network models. In GSN, each network node must specify its I/O capacity bound, which is taken as a constraint on any allowable traffic demand matrix. In this paper, we consider the following generic GSN network design problem: given the I/O bounds of each network node, find a routing scheme (and the corresponding rerouting scheme under failure) and the link capacity assignment (both working and spare) that minimize the cost, such that any traffic matrix consistent with the given I/O bounds can be feasibly routed and is single-fault tolerant under the link restoration scheme. We first show how the initial, infeasible formal mixed integer programming formulation can be transformed into a more tractable problem using the duality transformation of the linear program. Then we show how the problem can be simplified using the Lagrangian relaxation approach. Previous work has outlined a two-phase approach for solving this problem, where the first phase optimizes the working capacity assignment and the second phase optimizes the spare capacity assignment. In this paper, we present a jointly optimized framework for dimensioning the survivable optical network with the GSN model. Experimental results show that the jointly optimized GSN can bring an average of 3.8% cost savings when compared with the separate, two-phase approach. Finally, we perform a cost comparison and show that GSN can be deployed at a reasonable cost.

  8. Review: Optimization methods for groundwater modeling and management

    NASA Astrophysics Data System (ADS)

    Yeh, William W.-G.

    2015-09-01

    Optimization methods have been used in groundwater modeling as well as for the planning and management of groundwater systems. This paper reviews and evaluates the various optimization methods that have been used for solving the inverse problem of parameter identification (estimation), experimental design, and groundwater planning and management. Various model selection criteria are discussed, as well as criteria used for model discrimination. The inverse problem of parameter identification concerns the optimal determination of model parameters using water-level observations. In general, the optimal experimental design seeks to find sampling strategies for the purpose of estimating the unknown model parameters. A typical objective of optimal conjunctive-use planning of surface water and groundwater is to minimize the operational costs of meeting water demand. The optimization methods include mathematical programming techniques such as linear programming, quadratic programming, dynamic programming, stochastic programming, nonlinear programming, and the global search algorithms such as genetic algorithms, simulated annealing, and tabu search. Emphasis is placed on groundwater flow problems as opposed to contaminant transport problems. A typical two-dimensional groundwater flow problem is used to explain the basic formulations and algorithms that have been used to solve the formulated optimization problems.

  9. A comparative study between control strategies for a solar sailcraft in an Earth-Mars transfer

    NASA Astrophysics Data System (ADS)

    Mainenti-Lopes, I.; Souza, L. C. Gadelha; De Sousa, Fabiano. L.

    2016-10-01

    The goal of this work was a comparative study of solar sail trajectory optimization using different control strategies. The solar sailcraft is a propulsion system of great interest in space engineering, since it uses solar radiation for propulsion. Because no propellant is needed, the propulsion system can remain active throughout the entire transfer maneuver, opening the possibility of reducing the cost of exploration missions in the solar system. In its simplest configuration, a Flat Solar Sail (FSS) consists of a large, thin structure, generally composed of a film fixed to flexible rods. The performance of these vehicles depends largely on the sail's attitude relative to the Sun. Using an FSS for propulsion, an Earth-Mars transfer optimization problem was tackled by the GEOreal1 and GEOreal2 algorithms (Generalized Extremal Optimization with real codification). These are evolutionary algorithms based on the theory of self-organized criticality. They were used to optimize the FSS attitude angle so that the sailcraft could reach Mars orbit in minimum time. It was considered that the FSS could perform up to ten attitude maneuvers during the orbital transfer, and the time between maneuvers could differ; the algorithms therefore had to optimize an objective function with 20 design variables. The results obtained in this work were compared with previous results that assumed constant times between maneuvers.

  10. The marriage problem and the fate of bachelors

    NASA Astrophysics Data System (ADS)

    Nieuwenhuizen, Th. M.

    In the marriage problem, a variant of the bipartite matching problem, each member has a "wish list" expressing his/her preference for all possible partners; this list consists of random, positive real numbers drawn from a certain distribution. One searches for the lowest cost to society, at the risk of breaking up pairs in the course of time. Minimization of a global cost function (Hamiltonian) is performed with statistical mechanics techniques at a finite fictitious temperature. The problem is generalized to include bachelors, needed in particular when the groups have different sizes, and polygamy. Exact solutions are found for the optimal solution (T=0). The entropy is found to vanish quadratically in T. Other evidence is also found that the replica symmetric solution is exact, implying at most a polynomial degeneracy of the optimal solution. Whether bachelors occur or not depends not only on their intrinsic qualities, or lack thereof, but also on global aspects of the chance for pair formation in society.
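    At T=0 the problem reduces to minimum-cost bipartite matching, which is solvable exactly; with groups of different sizes, a rectangular cost matrix leaves some members of the larger group unmatched: the bachelors. A sketch with exponentially distributed wish-list entries (the distribution and group sizes are illustrative):

    ```python
    # Optimal pairing at T=0 via the Hungarian algorithm; unmatched
    # columns of the larger group are the "bachelors".
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    rng = np.random.default_rng(1)
    wish = rng.exponential(scale=1.0, size=(6, 8))  # random positive costs

    rows, cols = linear_sum_assignment(wish)        # minimum-cost matching
    matched_cost = wish[rows, cols].sum()
    bachelors = set(range(wish.shape[1])) - set(cols)
    print(f"optimal cost {matched_cost:.3f}, unmatched: {sorted(bachelors)}")
    ```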

  11. Management of invading pathogens should be informed by epidemiology rather than administrative boundaries.

    PubMed

    Thompson, Robin N; Cobb, Richard C; Gilligan, Christopher A; Cunniffe, Nik J

    2016-03-24

    Plant and animal disease outbreaks have significant ecological and economic impacts. The spatial extent of control is often informed solely by administrative geography - for example, quarantine of an entire county or state once an invading disease is detected - with little regard for pathogen epidemiology. We present a stochastic model for the spread of a plant pathogen that couples spread in the natural environment and transmission via the nursery trade, and use it to illustrate that control deployed according to administrative boundaries is almost always sub-optimal. We use sudden oak death (caused by Phytophthora ramorum) in mixed forests in California as motivation for our study, since the decision as to whether or not to deploy plant trade quarantine is currently undertaken on a county-by-county basis for that system. However, our key conclusion is applicable more generally: basing management of any disease entirely upon administrative borders does not balance the cost of control with the possible economic and ecological costs of further spread in the optimal fashion.

  12. Primary prevention of cardiovascular diseases: a cost study in family practices.

    PubMed

    de Bekker-Grob, Esther W; van Dulmen, Sandra; van den Berg, Matthijs; Verheij, Robert A; Slobbe, Laurentius C J

    2011-07-06

    Considering the scarcity of health care resources and the high costs associated with cardiovascular diseases, we investigated the spending on cardiovascular primary preventive activities and the prescribing of primary preventive cardiovascular medication (PPCM) in Dutch family practices (FPs). A mixed-methods design was used, consisting of a questionnaire (n = 80 FPs), video recordings of hypertension- or cholesterol-related general practitioner visits (n = 56), and the database of the Netherlands Information Network of General Practice (n = 45 FPs; n = 157,137 patients). The questionnaire and video recordings were used to determine the average frequency of, and time spent on, cardiovascular primary preventive activities per FP. Taking into account the annual income and full-time equivalents of general practitioners, health care assistants, and practice nurses, as well as the practice costs, the total spending on cardiovascular primary preventive activities in Dutch FPs was calculated. The database of the Netherlands Information Network of General Practice was used to determine prescribing behaviour in Dutch FPs by fitting multilevel regression models adjusted for patient and practice characteristics. Total expenditure on cardiovascular primary preventive activities in FPs in 2009 was €38.8 million (€2.35 per capita), of which 47% was spent on blood pressure measurements, 26% on cardiovascular risk profiling, and 11% on lifestyle counselling. Fifteen percent (€11 per capita) of all cardiovascular medication prescribed in FPs was a PPCM. FPs differed greatly in their prescription of PPCM (odds ratio of 3.1). The total costs of cardiovascular primary preventive activities in FPs, such as blood pressure measurements and lifestyle counselling, are relatively low compared to the costs of PPCM. There is considerable heterogeneity in the prescribing of PPCM between FPs. Further research is needed to determine whether such large differences in prescription rates are justified. Striving for an optimal use of cardiovascular primary preventive activities might lead to similar health outcomes, but may achieve important cost savings.

  13. Empirical Performance Model-Driven Data Layout Optimization and Library Call Selection for Tensor Contraction Expressions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Qingda; Gao, Xiaoyang; Krishnamoorthy, Sriram

    Empirical optimizers like ATLAS have been very effective in optimizing computational kernels in libraries. The best choice of parameters such as tile size and degree of loop unrolling is determined by executing different versions of the computation. In contrast, optimizing compilers use a model-driven approach to program transformation. While the model-driven approach of optimizing compilers is generally orders of magnitude faster than ATLAS-like library generators, its effectiveness can be limited by the accuracy of the performance models used. In this paper, we describe an approach where a class of computations is modeled in terms of constituent operations that are empirically measured, thereby allowing modeling of the overall execution time. The performance model with empirically determined cost components is used to perform data layout optimization together with the selection of library calls and layout transformations in the context of the Tensor Contraction Engine, a compiler for a high-level domain-specific language for expressing computational models in quantum chemistry. The effectiveness of the approach is demonstrated through experimental measurements on representative computations from quantum chemistry.

  14. An optimization model for collection, haul, transfer, treatment and disposal of infectious medical waste: Application to a Greek region.

    PubMed

    Mantzaras, Gerasimos; Voudrias, Evangelos A

    2017-11-01

    The objective of this work was to develop an optimization model to minimize the cost of a collection, haul, transfer, treatment and disposal system for infectious medical waste (IMW). The model calculates the optimum locations of the treatment facilities and transfer stations, their design capacities (t/d), the number and capacities of all waste collection, transport and transfer vehicles and their optimum transport paths, and the minimum IMW management system cost. Waste production nodes (hospitals, healthcare centers, peripheral health offices, private clinics and physicians in private practice) and their IMW production rates were specified and used as model inputs. The candidate locations of the treatment facilities, transfer stations and sanitary landfills were designated using a GIS-based methodology. Specifically, MapInfo software with exclusion criteria for non-appropriate areas was used to site candidate locations for the construction of the treatment plant and to calculate the distance and travel time of all possible vehicle routes. The objective function was a non-linear equation minimizing the total collection, transport, treatment and disposal cost. Total cost comprised capital and operating costs for: (1) the treatment plant, (2) waste transfer stations, (3) waste transport and transfer vehicles and (4) waste collection bins and hospital boxes. Binary variables were used to decide whether a treatment plant and/or a transfer station should be constructed and whether a collection route between two or more nodes should be followed. Microsoft Excel was used as the installation platform for the optimization model. For the execution of the optimization routine, two completely different software packages were used and the results were compared, resulting in higher reliability and validity of the results. The first was Evolver, which is based on genetic algorithms; the second was Crystal Ball, which is based on Monte Carlo simulation. The model was applied to the Region of East Macedonia - Thrace in Greece. The optimum solution resulted in one treatment plant located in the sanitary landfill area of Chrysoupolis, required no transfer stations, and had a total management cost of 38,800 €/month or 809 €/t. If a treatment plant were sited in the most eastern part of the Region, i.e., the industrial area of Alexandroupolis, the optimum solution would result in a transfer station of 23 m³, located near Kavala General Hospital, and a total cost of 39,800 €/month or 831 €/t. A sensitivity analysis was conducted and two alternative scenarios were optimized: in the first, a 15% rise in fuel cost, and in the second, a 25% rise in IMW production. Finally, the cost in €/t/km for every type of vehicle used for haul and transfer was calculated, and the cost of the whole system was itemized in €/t/km and €/t. The results showed that the largest share of the total cost was due to the construction of the treatment plant. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. Derivation of a Levelized Cost of Coating (LCOC) metric for evaluation of solar selective absorber materials

    DOE PAGES

    Ho, C. K.; Pacheco, J. E.

    2015-06-05

    A new metric, the Levelized Cost of Coating (LCOC), is derived in this paper to evaluate and compare alternative solar selective absorber coatings against a baseline coating (Pyromark 2500). In contrast to previous metrics that focused only on the optical performance of the coating, the LCOC includes cost, durability, and optical performance for more comprehensive comparisons among candidate materials. The LCOC is defined as the annualized marginal cost of the coating to produce a baseline annual thermal energy production. Costs include the cost of materials and labor for initial application and reapplication of the coating, as well as the cost of additional or fewer heliostats to yield the same annual thermal energy production as the baseline coating. Results show that important factors impacting the LCOC include the initial solar absorptance, thermal emittance, reapplication interval, degradation rate, reapplication cost, and downtime during reapplication. The LCOC can also be used to determine the optimal reapplication interval that minimizes the levelized cost of energy production. Similar methods can be applied more generally to determine the levelized cost of a component for other applications and systems.
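    A back-of-envelope reading of the definition: annualize the coating's application and reapplication costs, then add the annualized cost of the extra heliostats needed to make up any performance shortfall (including reapplication downtime) relative to the baseline. Every number, name, and the simple capital-recovery treatment below are invented placeholders, not values or formulas from the paper.

    ```python
    # Hypothetical LCOC-style calculation; all inputs are illustrative.
    def lcoc(apply_cost, reapply_cost, reapply_interval_yr, downtime_frac,
             perf_ratio, heliostat_capex_per_field, crf=0.08):
        """perf_ratio: candidate's annual thermal output relative to the
        baseline (absorptance/emittance/degradation folded in); extra
        heliostats make up the shortfall, including reapplication downtime."""
        coating = crf * apply_cost + reapply_cost / reapply_interval_yr
        shortfall = 1.0 / (perf_ratio * (1.0 - downtime_frac)) - 1.0
        heliostats = crf * heliostat_capex_per_field * shortfall
        return coating + heliostats      # $/yr to match baseline energy

    print(lcoc(apply_cost=2.0e5, reapply_cost=5.0e4, reapply_interval_yr=5,
               downtime_frac=0.01, perf_ratio=0.97,
               heliostat_capex_per_field=1.2e7))
    ```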

  16. Patient- and family-centered care coordination: a framework for integrating care for children and youth across multiple systems.

    PubMed

    2014-05-01

    Understanding a care coordination framework, its functions, and its effects on children and families is critical for patients and families themselves, as well as for pediatricians, pediatric medical subspecialists/surgical specialists, and anyone providing services to children and families. Care coordination is an essential element of a transformed American health care delivery system that emphasizes optimal quality and cost outcomes, addresses family-centered care, and calls for partnership across various settings and communities. High-quality, cost-effective health care requires that the delivery system include elements for the provision of services supporting the coordination of care across settings and professionals. This requirement of supporting coordination of care is generally true for health systems providing care for all children and youth but especially for those with special health care needs. At the foundation of an efficient and effective system of care delivery is the patient-/family-centered medical home. From its inception, the medical home has had care coordination as a core element. In general, optimal outcomes for children and youth, especially those with special health care needs, require interfacing among multiple care systems and individuals, including the following: medical, social, and behavioral professionals; the educational system; payers; medical equipment providers; home care agencies; advocacy groups; needed supportive therapies/services; and families. Coordination of care across settings permits an integration of services that is centered on the comprehensive needs of the patient and family, leading to decreased health care costs, reduction in fragmented care, and improvement in the patient/family experience of care. Copyright © 2014 by the American Academy of Pediatrics.

  17. Optimal shielding design for minimum materials cost or mass

    DOE PAGES

    Woolley, Robert D.

    2015-12-02

    The mathematical underpinnings of cost-optimal radiation shielding designs based on an extension of optimal control theory are presented, a heuristic algorithm to iteratively solve the resulting optimal design equations is suggested, and computational results for a simple test case are discussed. A typical radiation shielding design problem can have infinitely many solutions, all satisfying the problem's specified set of radiation attenuation requirements. Each such design has its own total materials cost. For a design to be optimal, no admissible change in its deployment of shielding materials can result in a lower cost. This applies in particular to very small changes, which can be restated using the calculus of variations as the Euler-Lagrange equations. Furthermore, the associated Hamiltonian function and application of Pontryagin's theorem lead to conditions for a shield to be optimal.

  18. Optimally Stopped Optimization

    NASA Astrophysics Data System (ADS)

    Vinci, Walter; Lidar, Daniel

    We combine the fields of heuristic optimization and optimal stopping. We propose a strategy for benchmarking randomized optimization algorithms that minimizes the expected total cost of obtaining a good solution, with an optimal number of calls to the solver. To do so, rather than letting the objective function alone define a cost to be minimized, we introduce a further cost per call of the algorithm. We show that this problem can be formulated using optimal stopping theory. The expected cost is a flexible figure of merit for benchmarking probabilistic solvers that can be computed when the optimal solution is not known, and that avoids the biases and arbitrariness that affect other measures. The optimal stopping formulation of benchmarking directly leads to a real-time, optimal-utilization strategy for probabilistic optimizers with practical impact. We apply our formulation to benchmark the performance of a D-Wave 2X quantum annealer and the HFS solver, a specialized classical heuristic algorithm designed for low-tree-width graphs. On a set of frustrated-loop instances with planted solutions defined on up to N = 1098 variables, the D-Wave device is one to two orders of magnitude faster than the HFS solver.
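    The figure of merit can be estimated directly from sample runs: with a cost per call c, choose the number of repetitions R minimizing E[min objective over R runs] + R*c. The sketch below uses synthetic Gaussian run objectives and an invented c; the paper's formulation is a sequential stopping rule, of which this fixed-R version is a simplification.

    ```python
    # Expected-total-cost benchmark for a randomized solver (bootstrap
    # estimate from observed run objectives).
    import numpy as np

    rng = np.random.default_rng(0)
    samples = rng.normal(loc=10.0, scale=2.0, size=10_000)  # run objectives

    def expected_total_cost(samples, R, c_call, trials=2000, rng=rng):
        draws = rng.choice(samples, size=(trials, R))
        return draws.min(axis=1).mean() + R * c_call

    c_call = 0.05
    costs = {R: expected_total_cost(samples, R, c_call) for R in range(1, 60)}
    R_opt = min(costs, key=costs.get)
    print(R_opt, round(costs[R_opt], 3))
    ```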

  19. Community Microgrid Scheduling Considering Network Operational Constraints and Building Thermal Dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Guodong; Ollis, Thomas B.; Xiao, Bailu

    This paper proposes a Mixed Integer Conic Programming (MICP) model for community microgrids that considers network operational constraints and building thermal dynamics. The proposed model optimizes not only the operating cost, including fuel cost, purchasing cost, battery degradation cost, voluntary load shedding cost and the cost associated with customer discomfort due to room temperature deviation from the set point, but also several performance indices, including voltage deviation, network power loss and power factor at the Point of Common Coupling (PCC). In particular, a detailed thermal dynamic model of buildings is integrated into the distribution optimal power flow (D-OPF) model for the optimal operation of community microgrids. The heating, ventilation and air-conditioning (HVAC) systems can be scheduled intelligently to reduce the electricity cost while maintaining the indoor temperature in the comfort range set by customers. Numerical simulation results show the effectiveness of the proposed model, and significant savings in electricity cost can be achieved with the network operational constraints satisfied.
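    The building model folded into such a dispatch is typically a discrete-time RC relation, T[t+1] = a*T[t] + (1-a)*(T_out[t] - R*P_hvac[t]) for cooling. The sketch below pre-cools with a simple price-aware rule rather than solving the paper's MICP; all parameters, prices and the rule itself are invented.

    ```python
    # One-zone RC thermal model with a price-aware cooling schedule.
    import numpy as np

    a, R = 0.9, 4.0                    # thermal inertia; resistance, K/kW
    T_set, band = 22.0, 1.0
    T_out = np.full(24, 30.0)
    price = np.where(np.arange(24) < 16, 0.10, 0.30)  # cheap until hour 16

    T, P, bill = 24.0, [], 0.0
    for t in range(24):
        # pre-cool toward the low edge of the comfort band when cheap
        target = T_set - band if price[t] < 0.2 else T_set + band
        # cooling power that steers next-step temperature onto the target
        p = max(0.0, (a * T + (1 - a) * T_out[t] - target) / ((1 - a) * R))
        T = a * T + (1 - a) * (T_out[t] - R * p)
        bill += price[t] * p
        P.append(round(p, 2))

    print(P, round(bill, 2))
    ```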

  1. Future ultra-speed tube-flight

    NASA Astrophysics Data System (ADS)

    Salter, Robert M.

    1994-05-01

    Future long-link, ultra-speed, surface transport systems will require electromagnetically (EM) driven and restrained vehicles operating under reduced atmosphere in very straight tubes. Such tube-flight trains will be safe, energy conservative, pollution-free, and in a protected environment. Hypersonic (and even hyperballistic) speeds are theoretically achievable. Ultimate system choices will represent tradeoffs between amortized capital costs (ACC) and operating costs. For example, long coasting links might employ aerodynamic lift coupled with EM restraint and drag make-up. Optimized, combined EM lift and thrust vectors could reduce energy costs but at increased ACC. (Repulsive levitation can produce lift-over-drag ratios a decade greater than aerodynamic lift.) Alternatively, vehicle-emanated, induced-mirror fields in a conducting (aluminum-sheet) roadbed could reduce ACC but at substantial energy cost. Ultra-speed tube flight will demand fast-acting, high-precision sensors and computerized magnetic shimming. This same control system can maintain a magnetic guideway invariant in inertial space, with inertial detectors embedded in tube structures to sense and correct for earth tremors. Ultra-speed tube flight can compete with aircraft in transit time and can provide even greater passenger convenience through single-mode connections with local subways and feeder lines. Although cargo transport generally will not need to be performed at ultra speeds, such speeds may well be desirable for high throughput to optimize channel costs. Thus, a large and expensive pipeline might be replaced with small EM-driven pallets moving at high speeds.

  2. Improved predictive modeling of white LEDs with accurate luminescence simulation and practical inputs with TracePro opto-mechanical design software

    NASA Astrophysics Data System (ADS)

    Tsao, Chao-hsi; Freniere, Edward R.; Smith, Linda

    2009-02-01

    The use of white LEDs for solid-state lighting to address applications in the automotive, architectural and general illumination markets is just emerging. LEDs promise greater energy efficiency and lower maintenance costs. However, there is a significant amount of design and cost optimization still to be done while companies continue to improve semiconductor manufacturing processes and begin to apply more efficient and better color-rendering luminescent materials such as phosphor and quantum dot nanomaterials. In the last decade, accurate and predictive opto-mechanical software modeling has enabled adherence to performance, consistency, cost, and aesthetic criteria without the cost and time associated with iterative hardware prototyping. More sophisticated models that include simulation of optical phenomena, such as luminescence, promise to yield designs that are more predictive, giving design engineers and materials scientists more control over the design process to quickly reach optimum performance, manufacturability, and cost criteria. A design case study is presented in which a phosphor formulation and excitation source are first optimized for a white light; the phosphor formulation, the excitation source and the other LED components are then optically and mechanically modeled and ray-traced, and the design's performance is analyzed. The blue LED source is characterized by its relative spectral power distribution and angular intensity distribution. The YAG:Ce phosphor is characterized by relative absorption, excitation and emission spectra, quantum efficiency and bulk absorption coefficient. Bulk scatter properties are characterized by wavelength-dependent scatter coefficients, anisotropy and bulk absorption coefficient.

  4. Essays on information disclosure and the environment

    NASA Astrophysics Data System (ADS)

    Mathai, Koshy

    The essays in this dissertation study information disclosure and environmental policy. The first chapter challenges the longstanding result that firms will, in general, voluntarily disclose information about product quality, in light of the unrealism of the assumption, common to much of the literature, that consumers are identical. When this assumption is relaxed, an efficiency-enhancing role may emerge for disclosure regulation, insofar as it can improve information provision and thus help protect consumers with "moderately atypical" preferences. The paper also endogenizes firms' choice of quality and suggests that disclosure regulation may raise welfare indirectly as well, by inducing firms to improve product quality. The second chapter explores the significance of policy-induced technological change (ITC) for the design of carbon-abatement policies. The paper considers both R&D-based and learning-by-doing-based knowledge accumulation, examining each specification under both a cost-effectiveness and a benefit-cost policy criterion. We show analytically that the presence of ITC generally implies a lower profile of optimal carbon taxes, a shifting of abatement effort into the future (in the R&D scenarios), and an increase in the scale of abatement (in the benefit-cost scenarios). Numerical simulations indicate that the impact of ITC on abatement timing is very slight, but the effects on costs, optimal carbon taxes, and cumulative abatement can be large. The third chapter uses a World Bank dataset on Chinese state-owned enterprises to estimate price elasticities of industrial coal demand. A simple coal-demand equation is estimated in many forms, and significant price sensitivity is almost always found: the own-price elasticity is estimated to be roughly -0.5. A cost-function/share-equation system is also estimated, and although the function is frequently ill-behaved, indicating that firms may not be minimizing costs, the elasticity estimates again are large and significant. These findings indicate that, even though China's is not a pure market economy, coal taxation can effectively be used to reduce reliance on fossil fuels and improve environmental quality. We calculate that a thirty-percent tax on industrial coal use could reduce consumption by nearly one hundred million tons annually.

  5. Optimal subhourly electricity resource dispatch under multiple price signals with high renewable generation availability

    DOE PAGES

    Chassin, David P.; Behboodi, Sahand; Djilali, Ned

    2018-01-28

    This article proposes a system-wide optimal resource dispatch strategy that enables a shift from a primarily energy cost-based approach, to a strategy using simultaneous price signals for energy, power and ramping behavior. A formal method to compute the optimal sub-hourly power trajectory is derived for a system when the price of energy and ramping are both significant. Optimal control functions are obtained in both time and frequency domains, and a discrete-time solution suitable for periodic feedback control systems is presented. The method is applied to the North America Western Interconnection for the planning year 2024, and it is shown that an optimal dispatch strategy that simultaneously considers both the cost of energy and the cost of ramping leads to significant cost savings in systems with high levels of renewable generation: the savings exceed 25% of the total system operating cost for a 50% renewables scenario.
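
    The core of the method is a trajectory optimization in which ramping is priced alongside energy. A minimal discrete-time sketch of that idea (the quadratic weights and the demand profile are invented, and the paper's actual formulation, including its frequency-domain solution, is not reproduced):

      import numpy as np
      from scipy.optimize import minimize

      demand = 100 + 10 * np.sin(np.linspace(0, 2 * np.pi, 48))  # MW, sub-hourly steps
      c_energy, c_ramp = 1.0, 5.0   # hypothetical price weights

      def cost(p):
          mismatch = c_energy * np.sum((p - demand) ** 2)  # energy-price term
          ramping = c_ramp * np.sum(np.diff(p) ** 2)       # ramping-price term
          return mismatch + ramping

      res = minimize(cost, x0=np.full_like(demand, demand.mean()))
      print("dispatch ramps less than demand:",
            np.abs(np.diff(res.x)).max() < np.abs(np.diff(demand)).max())

    Raising c_ramp smooths the trajectory at the expense of tracking error, which is the tradeoff the dual price signal is meant to settle.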

  6. Synthesizing epidemiological and economic optima for control of immunizing infections.

    PubMed

    Klepac, Petra; Laxminarayan, Ramanan; Grenfell, Bryan T

    2011-08-23

    Epidemic theory predicts that the vaccination threshold required to interrupt local transmission of an immunizing infection like measles depends only on the basic reproductive number and hence transmission rates. When the search for optimal strategies is expanded to incorporate economic constraints, the optimum for disease control in a single population is determined by relative costs of infection and control, rather than transmission rates. Adding a spatial dimension, which precludes local elimination unless it can be achieved globally, can reduce or increase optimal vaccination levels depending on the balance of costs and benefits. For weakly coupled populations, local optimal strategies agree with the global cost-effective strategy; however, asymmetries in costs can lead to divergent control optima in more strongly coupled systems--in particular, strong regional differences in costs of vaccination can preclude local elimination even when elimination is locally optimal. Under certain conditions, it is locally optimal to share vaccination resources with other populations.
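
    For reference, the epidemiological threshold invoked in the first sentence is the classic critical vaccination fraction, while the economic optimum replaces it with a cost minimization; in schematic notation (ours, not the paper's):

      \[
        p_c = 1 - \frac{1}{R_0},
        \qquad
        p^{*} = \arg\min_{p}\,\bigl[\,c_v\,p + c_I\,i(p)\,\bigr],
      \]

    where \(R_0\) is the basic reproductive number, \(c_v\) and \(c_I\) are the unit costs of vaccination and infection, and \(i(p)\) is the equilibrium infection rate at coverage \(p\).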

  7. Optimal subhourly electricity resource dispatch under multiple price signals with high renewable generation availability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chassin, David P.; Behboodi, Sahand; Djilali, Ned

    This article proposes a system-wide optimal resource dispatch strategy that enables a shift from a primarily energy cost-based approach, to a strategy using simultaneous price signals for energy, power and ramping behavior. A formal method to compute the optimal sub-hourly power trajectory is derived for a system when the price of energy and ramping are both significant. Optimal control functions are obtained in both time and frequency domains, and a discrete-time solution suitable for periodic feedback control systems is presented. The method is applied to the North America Western Interconnection for the planning year 2024, and it is shown that an optimal dispatch strategy that simultaneously considers both the cost of energy and the cost of ramping leads to significant cost savings in systems with high levels of renewable generation: the savings exceed 25% of the total system operating cost for a 50% renewables scenario.

  8. Multiobjective generalized extremal optimization algorithm for simulation of daylight illuminants

    NASA Astrophysics Data System (ADS)

    Kumar, Srividya Ravindra; Kurian, Ciji Pearl; Gomes-Borges, Marcos Eduardo

    2017-10-01

    Daylight illuminants are widely used as references for color quality testing and optical vision testing applications. Presently used daylight simulators make use of fluorescent bulbs that are not tunable and occupy considerable space inside the quality testing chambers. By designing a spectrally tunable LED light source with an optimal number of LEDs, cost, space, and energy can be saved. This paper describes an application of the generalized extremal optimization (GEO) algorithm for selecting the appropriate quantity and quality of LEDs that compose the light source. The multiobjective approach of this algorithm seeks the best spectral simulation: minimum fitness error with respect to the target spectrum, a correlated color temperature (CCT) matching that of the target spectrum, a high color rendering index (CRI), and the luminous flux required for testing applications. GEO is a global search algorithm based on phenomena of natural evolution and is especially designed for complex optimization problems. Several simulations have been conducted to validate the performance of the algorithm. The methodology applied to model the LEDs, together with the theoretical basis for CCT and CRI calculation, is presented in this paper. A comparative analysis of the M-GEO evolutionary algorithm against the conventional deterministic Levenberg-Marquardt algorithm is also presented.
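
    For intuition, a minimal single-objective GEO sketch on a toy bit-string problem (the paper uses the multiobjective variant M-GEO on LED spectra, which this does not reproduce): each bit is ranked by how poorly adapted it is, and a bit of rank k is flipped with probability proportional to k**(-tau).

      import numpy as np

      rng = np.random.default_rng(1)

      def onemax_cost(bits):          # toy objective: number of zero bits
          return bits.size - bits.sum()

      def geo_minimize(f, n_bits=24, tau=1.5, iters=500):
          x = rng.integers(0, 2, n_bits)
          best_x, best_f = x.copy(), f(x)
          prob = np.arange(1, n_bits + 1, dtype=float) ** -tau
          prob /= prob.sum()          # P(rank k) ~ k**-tau
          for _ in range(iters):
              # cost after flipping each bit independently
              flips = np.array([f(x ^ (np.arange(n_bits) == i))
                                for i in range(n_bits)])
              order = np.argsort(flips)          # least adapted bits first
              k = rng.choice(n_bits, p=prob)     # draw a rank
              x[order[k]] ^= 1                   # flip the selected bit
              if f(x) < best_f:
                  best_x, best_f = x.copy(), f(x)
          return best_x, best_f

      print(geo_minimize(onemax_cost))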

  9. Dopaminergic Balance between Reward Maximization and Policy Complexity

    PubMed Central

    Parush, Naama; Tishby, Naftali; Bergman, Hagai

    2011-01-01

    Previous reinforcement-learning models of the basal ganglia network have highlighted the role of dopamine in encoding the mismatch between prediction and reality. Far less attention has been paid to the computational goals and algorithms of the main axis (actor). Here, we construct a top-down model of the basal ganglia with emphasis on the role of dopamine as both a reinforcement learning signal and a pseudo-temperature signal controlling the general level of basal ganglia excitability and motor vigilance of the acting agent. We argue that the basal ganglia endow the thalamic-cortical networks with the optimal dynamic tradeoff between two constraints: minimizing the policy complexity (cost) and maximizing the expected future reward (gain). We show that this multi-dimensional optimization process results in an experience-modulated version of the softmax behavioral policy. Thus, as in classical softmax behavioral policies, actions are selected with probabilities that depend on their estimated values and the pseudo-temperature, but in addition these probabilities vary with the frequency of previous choices of these actions. We conclude that the computational goal of the basal ganglia is not to maximize cumulative (positive and negative) reward. Rather, the basal ganglia aim at optimization of independent gain and cost functions. Unlike previously suggested single-variable maximization processes, this multi-dimensional optimization process leads naturally to a softmax-like behavioral policy. We suggest that beyond its role in the modulation of the efficacy of the cortico-striatal synapses, dopamine directly affects striatal excitability and thus provides a pseudo-temperature signal that modulates the tradeoff between gain and cost. The resulting experience- and dopamine-modulated softmax policy can then serve as a theoretical framework to account for the broad range of behaviors and clinical states governed by the basal ganglia and dopamine systems. PMID:21603228
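
    The policy described can be written compactly. A hedged sketch of an experience-modulated softmax (the mixing form, its sign, and the weight nu are illustrative, not the paper's exact expressions):

      import numpy as np

      def softmax_policy(q_values, counts, temperature, nu=0.1):
          # preference = value scaled by pseudo-temperature, adjusted by
          # how often each action was chosen before (illustrative form)
          pref = q_values / temperature - nu * np.log1p(counts)
          w = np.exp(pref - pref.max())   # subtract max for numerical stability
          return w / w.sum()

      q = np.array([1.0, 0.8, 0.2])       # estimated action values
      n = np.array([10, 2, 50])           # prior choice frequencies
      for T in (0.1, 1.0):                # dopamine level as pseudo-temperature
          print(T, softmax_policy(q, n, T).round(3))

    Lowering the temperature sharpens the policy toward the highest-valued action; the count term shifts probability according to past choice frequency.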

  10. Environmental statistics and optimal regulation.

    PubMed

    Sivak, David A; Thomson, Matt

    2014-09-01

    Any organism is embedded in an environment that changes over time. The timescale for and statistics of environmental change, the precision with which the organism can detect its environment, and the costs and benefits of particular protein expression levels all will affect the suitability of different strategies--such as constitutive expression or graded response--for regulating protein levels in response to environmental inputs. We propose a general framework--here specifically applied to the enzymatic regulation of metabolism in response to changing concentrations of a basic nutrient--to predict the optimal regulatory strategy given the statistics of fluctuations in the environment and in the measurement apparatus, and the costs associated with enzyme production. We use this framework to address three fundamental questions: (i) when a cell should prefer thresholding to a graded response; (ii) when there is a fitness advantage to implementing a Bayesian decision rule; and (iii) when retaining memory of the past provides a selective advantage. We specifically find that: (i) the relative convexity of enzyme expression cost and benefit influences the fitness of thresholding or graded responses; (ii) intermediate levels of measurement uncertainty call for a sophisticated Bayesian decision rule; and (iii) in dynamic contexts, intermediate levels of uncertainty call for retaining memory of the past. Statistical properties of the environment, such as variability and correlation times, set optimal biochemical parameters, such as thresholds and decay rates in signaling pathways. Our framework provides a theoretical basis for interpreting molecular signal processing algorithms and a classification scheme that organizes known regulatory strategies and may help conceptualize heretofore unknown ones.
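
    Question (ii) reduces to comparing a posterior-weighted benefit against a fixed cost. A small illustrative decision rule (priors, costs, and noise statistics all invented):

      from scipy.stats import norm

      p_high = 0.3                         # prior that the nutrient is abundant
      mu_lo, mu_hi, sigma = 0.0, 1.0, 0.7  # signal statistics, arbitrary units
      cost_enzyme, benefit = 1.0, 4.0      # expression cost vs benefit if nutrient present

      def express(measurement):
          like_hi = norm.pdf(measurement, mu_hi, sigma) * p_high
          like_lo = norm.pdf(measurement, mu_lo, sigma) * (1 - p_high)
          posterior = like_hi / (like_hi + like_lo)
          # express only when the expected benefit exceeds the fixed cost
          return posterior * benefit > cost_enzyme

      for m in (-0.5, 0.3, 1.2):
          print(m, express(m))

    As the measurement noise sigma grows, the posterior flattens toward the prior and the rule degenerates toward a constitutive, measurement-independent strategy, which is the regime distinction the framework formalizes.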

  11. Optimal control of greenhouse gas emissions and system cost for integrated municipal solid waste management with considering a hierarchical structure.

    PubMed

    Li, Jing; He, Li; Fan, Xing; Chen, Yizhong; Lu, Hongwei

    2017-08-01

    This study presents a synergic optimization of control for greenhouse gas (GHG) emissions and system cost in integrated municipal solid waste (MSW) management on the basis of bi-level programming. The bi-level program is formulated by integrating minimization of GHG emissions at the leader level and of system cost at the follower level into a general MSW framework. Different from traditional single- or multi-objective approaches, the proposed bi-level programming is capable not only of addressing the tradeoffs but also of dealing with the leader-follower relationship between decision makers who have dissimilar perspectives and interests. Placing GHG emission control at the leader level emphasizes the significant environmental concern in MSW management. A bi-level decision-making process based on satisfactory degree is then suitable for solving highly nonlinear problems with computational effectiveness. The capabilities and effectiveness of the proposed bi-level programming are illustrated by an application to a MSW management problem in Canada. Results show that the obtained optimal management strategy can bring considerable revenues, approximately 76 to 97 million dollars. With control of GHG emissions considered, priority would be given to the development of the recycling facility throughout the whole period, especially in the latter periods. In terms of capacity, the existing landfill is sufficient for the next 30 years without development of new landfills, while more attention should be paid to expanding the composting and recycling facilities.
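
    The leader-follower structure can be seen in a deliberately tiny toy (not the paper's model, which uses a satisfactory-degree solution method): the leader tightens a GHG cap, and the follower then allocates waste among facilities at minimum cost subject to that cap. All coefficients are invented.

      import numpy as np

      ghg = np.array([0.9, 0.3, 0.1])      # t CO2e per t waste: landfill, compost, recycle
      cost = np.array([30.0, 60.0, 80.0])  # $ per t waste

      def follower(ghg_cap):
          # follower: cheapest mix meeting the leader's cap (coarse grid search)
          best = None
          grid = np.linspace(0, 1, 101)
          for a in grid:
              for b in grid[grid <= 1 - a + 1e-9]:
                  x = np.array([a, b, 1 - a - b])
                  if ghg @ x <= ghg_cap:
                      c = cost @ x
                      if best is None or c < best[0]:
                          best = (c, x)
          return best

      for cap in (0.8, 0.5, 0.2):          # leader-level emission targets
          c, x = follower(cap)
          print(f"cap={cap}: cost={c:.1f}, mix={x.round(2)}")

    Tightening the cap shifts the mix toward recycling and raises cost, reproducing in miniature the tradeoff the bi-level program negotiates.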

  12. Development of cost-effective surfactant flooding technology. Quarterly report, January 1, 1994--March 31, 1994

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pope, G.A.; Sepehrnoori, K.

    1994-09-01

    The objective of this research is to develop cost-effective surfactant flooding technology by using surfactant simulation studies to evaluate and optimize alternative design strategies, taking into account reservoir characteristics, process chemistry, and process design options such as horizontal wells. Task 1 is the development of an improved numerical method for our simulator that will enable us to solve a wider class of these difficult simulation problems accurately and affordably. Task 2 is the application of this simulator to the optimization of surfactant flooding to reduce its risk and cost. The goal of Task 2 is to understand and generalize the impact of both process and reservoir characteristics on the optimal design of surfactant flooding. We have studied the effect of process parameters such as salinity gradient, surfactant adsorption, surfactant concentration, surfactant slug size, pH, polymer concentration and well constraints on surfactant floods. In this report, we show three-dimensional field-scale simulation results to illustrate the impact of one important design parameter, the salinity gradient. Although the use of a salinity gradient to improve the efficiency and robustness of surfactant flooding has been studied and applied for many years, this is the first time that we have evaluated it using stochastic simulations rather than simulations using the traditional layered reservoir description. The surfactant flooding simulations were performed using The University of Texas chemical flooding simulator, UTCHEM.

  13. The cost-effectiveness of screening for oral cancer in primary care.

    PubMed

    Speight, P M; Palmer, S; Moles, D R; Downer, M C; Smith, D H; Henriksson, M; Augustovski, F

    2006-04-01

    To use a decision-analytic model to determine the incremental costs and outcomes of alternative oral cancer screening programmes conducted in a primary care environment. The cost-effectiveness of oral cancer screening programmes in a number of primary care environments was simulated using a decision analysis model. Primary data on actual resource use and costs were collected by case note review in two hospitals. Additional data needed to inform the model were obtained from published costs, from systematic reviews and by expert opinion using the Trial Roulette approach. The value of future research was determined using expected value of perfect information (EVPI) for the decision to screen and for each of the model inputs. Hypothetical screening programmes conducted in a number of primary care settings. Eight strategies were compared: (A) no screen; (B) invitational screen--general medical practice; (C) invitational screen--general dental practice; (D) opportunistic screen--general medical practice; (E) opportunistic screen--general dental practice; (F) opportunistic high-risk screen--general medical practice; (G) opportunistic high-risk screen--general dental practice; and (H) invitational screen--specialist. A hypothetical population over the age of 40 years was studied. The main measures were mean lifetime costs and quality-adjusted life-years (QALYs) of each alternative screening scenario and incremental cost-effectiveness ratios (ICERs) to determine the additional costs and benefits of each strategy over another. No screening (strategy A) was always the cheapest option. Strategies B, C, E and H were never cost-effective and were ruled out by dominance or extended dominance. Of the remaining strategies, the ICER for the whole population (age 40-79 years) ranged from £15,790 to £25,961 per QALY. Modelling a 20% reduction in disease progression always gave the lowest ICERs. Cost-effectiveness acceptability curves showed that there is considerable uncertainty in the optimal decision identified by the ICER, depending on both the maximum amount that the NHS may be prepared to pay and the impact that treatment has on the annual malignancy transformation rate. Overall, however, high-risk opportunistic screening by a general dental or medical practitioner (strategies F and G) may be cost-effective. EVPIs were high for all parameters, with population values ranging from £8 million to £462 million. The values were significantly higher in males than females, and also varied depending on malignant transformation rate, effects of treatment and willingness to pay. Partial EVPIs showed the highest values for malignant transformation rate, disease progression, self-referral and costs of cancer treatment. Opportunistic high-risk screening, particularly in general dental practice, may be cost-effective. This screening may more effectively be targeted at younger age groups, particularly 40-60 year olds. However, there is considerable uncertainty in the parameters used in the model, particularly malignant transformation rate, disease progression, patterns of self-referral and costs. Further study is needed on malignant transformation rates of oral potentially malignant lesions and on the outcome of their treatment. Evidence has been published to suggest that intervention has no greater benefit than 'watch and wait'; hence a properly planned randomised controlled trial may be justified. Research is also needed into the rates of progression of oral cancer and on referral pathways from primary to secondary care and their effects on delay and stage of presentation.
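
    The ICER logic used throughout the comparison is simple to state. A minimal sketch with placeholder numbers (not the study's data):

      # Incremental cost-effectiveness ratio of a strategy against the
      # next-best non-dominated alternative.
      def icer(cost_new, qaly_new, cost_old, qaly_old):
          return (cost_new - cost_old) / (qaly_new - qaly_old)

      # hypothetical lifetime cost (GBP) and QALYs per person
      print(f"ICER = {icer(12000.0, 10.5, 4000.0, 10.1):.0f} GBP per QALY")

    Strategies are ruled out by dominance when another option is both cheaper and more effective, and by extended dominance when a mix of two other options would be.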

  14. Scalable large format 3D displays

    NASA Astrophysics Data System (ADS)

    Chang, Nelson L.; Damera-Venkata, Niranjan

    2010-02-01

    We present a general framework for the modeling and optimization of scalable large-format 3-D displays using multiple projectors. Based on this framework, we derive algorithms that can robustly optimize the visual quality of an arbitrary combination of projectors (e.g. tiled, superimposed, combinations of the two) without manual adjustment. The framework introduces, for the first time, a unified paradigm that is agnostic to the particular configuration of projectors yet robustly optimizes the brightness, contrast, and resolution of that configuration. In addition, we demonstrate that our algorithms support high-resolution stereoscopic video at real-time interactive frame rates on commodity graphics hardware. Through complementary polarization, the framework creates high-quality multi-projector 3-D displays at low hardware and operational cost for a variety of applications including digital cinema, visualization, and command-and-control walls.

  15. New catalysts for coal liquefaction and new nanocrystalline catalysts synthesis methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Linehan, J.C.; Matson, D.W.; Darab, J.G.

    1994-09-01

    The use of coal as a source of transportation fuel is currently economically unfavorable due to an abundant world petroleum supply and the relatively high cost of coal liquefaction. Consequently, a reduction in the cost of coal liquefaction, for example by using less and/or less costly catalysts or lower liquefaction temperatures, must be accomplished if coal is to play a significant role as a source of liquid feedstock for the petrochemical industry. The authors and others have investigated the applicability of using inexpensive iron-based catalysts in place of more costly and environmentally hazardous metal catalysts for direct coal liquefaction. Iron-based catalysts can be effective in liquefying coal and in promoting carbon-carbon bond cleavage in model compounds. The authors have been involved in an ongoing effort to develop and optimize iron-based powders for use in coal liquefaction and related petrochemical applications. Research efforts have been directed at three general areas. First, the authors have explored ways to optimize the effectiveness of catalyst precursor species through the use of nanocrystalline materials and/or finely divided powders. In this effort, the authors have developed two new nanophase material production techniques, the Modified Reverse Micelle (MRM) method and the Rapid Thermal Decomposition of precursors in Solution (RTDS). A second effort has been aimed at optimizing the effectiveness of catalysts by variations in other factors. To this end, the authors have investigated the effect that the crystalline phase has on the capacity of iron-based oxide and oxyhydroxide powders to be effectively converted to an active catalyst phase under liquefaction conditions. Finally, the authors have developed methods to produce active catalyst precursor powders in quantities sufficient for pilot-scale testing. Major results in these three areas are summarized.

  16. Optimizing resource allocation for breast cancer prevention and care among Hong Kong Chinese women.

    PubMed

    Wong, Irene O L; Tsang, Janice W H; Cowling, Benjamin J; Leung, Gabriel M

    2012-09-15

    Recommendations about funding of interventions through the full spectrum of a disease often have been made in isolation. The authors of this report optimized budgetary allocations by comparing cost-effectiveness data for different preventive and management strategies throughout the disease course for breast cancer in Hong Kong (HK) Chinese women. Nesting a state-transition Markov model within a generalized cost-effectiveness analytic framework, costs and quality-adjusted life-years (QALYs) were compared to estimate average cost-effectiveness ratios for the following interventions at the population level: biennial mass mammography (ages 40-69 years or ages 40-79 years), reduced waiting time for postoperative radiotherapy (by 15% or by 25%), adjuvant endocrine therapy (either upfront aromatase inhibitor [AI] therapy or sequentially with tamoxifen followed by AI) in postmenopausal women with estrogen receptor-positive disease, targeted immunotherapy in those with tumors that overexpress human epidermal growth factor receptor 2, and enhanced palliative services (either at home or as an inpatient). Usual care for eligible patients in the public sector was the comparator. In descending order of priority, the optimal allocation of additional resources for breast cancer would be the following: a 25% reduction in waiting time for postoperative radiotherapy (in US dollars: $5000 per QALY); enhanced, home-based palliative care ($7105 per QALY); adjuvant, sequential endocrine therapy ($17,963 per QALY); targeted immunotherapy ($62,092 per QALY); and mass mammography screening of women ages 40 to 69 years ($72,576 per QALY). Given the lower disease risk and different age profiles of patients in HK Chinese women, among other newly emergent and emerging economies with similar transitioning epidemiologic profiles, the current findings provided direct evidence to support policy decisions that may be dissimilar to current Western practice. Copyright © 2012 American Cancer Society.
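
    The ranking amounts to a league table: fund interventions in ascending order of cost per QALY until the willingness-to-pay threshold (or the budget) is exhausted. A sketch using the abstract's $/QALY figures with a hypothetical threshold:

      programs = [
          ("radiotherapy waiting time -25%",  5000),
          ("home-based palliative care",      7105),
          ("sequential endocrine therapy",   17963),
          ("targeted immunotherapy",         62092),
          ("mammography, ages 40-69",        72576),
      ]
      threshold = 50000   # hypothetical willingness-to-pay, $ per QALY

      funded = [name for name, usd_per_qaly in programs
                if usd_per_qaly <= threshold]
      print(funded)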

  17. Reciprocating and Screw Compressor semi-empirical models for establishing minimum energy performance standards

    NASA Astrophysics Data System (ADS)

    Javed, Hassan; Armstrong, Peter

    2015-08-01

    The efficiency bar for a Minimum Energy Performance Standard (MEPS) generally aims to minimize the energy consumption and life-cycle cost of a given chiller type and size category serving a typical load profile. Compressor type has a significant impact on chiller performance. Performance of screw and reciprocating compressors is expressed in terms of pressure ratio and speed for a given refrigerant and suction density. Isentropic efficiency of a screw compressor is strongly affected by under- and over-compression (UOC) processes. The theoretical simple physical UOC model involves a compressor-specific (but sometimes unknown) volume index parameter and the real gas properties of the refrigerant used. Isentropic efficiency is estimated by the UOC model and a bi-cubic, used to account for flow, friction and electrical losses. The unknown volume index, a smoothing parameter (to flatten the UOC model peak) and the bi-cubic coefficients are identified by curve fitting to minimize an appropriate residual norm. Chiller performance maps are produced for each compressor type by selecting optimized sub-cooling and condenser fan speed options in a generic component-based chiller model. SEER is computed from the hourly loads (from a typical building in the climate of interest) and the specific power at the same hourly conditions. An empirical UAE cooling load model, scalable to any equipment capacity, is used to establish the proposed UAE MEPS. Annual electricity use and cost, determined from SEER and annual cooling load, together with chiller component cost data, are used to find optimal chiller designs and perform a life-cycle cost comparison between screw and reciprocating compressor-based chillers. This process may be applied to any climate/load model in order to establish optimized MEPS for any country and/or region.
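
    The SEER arithmetic is the hinge between the compressor maps and the MEPS decision. A stand-in sketch (the load and efficiency curves below are invented, not the UAE models used in the study):

      import numpy as np

      rng = np.random.default_rng(0)
      t_out = 25 + 15 * rng.random(8760)        # hourly outdoor temperature, C
      load = np.maximum(t_out - 20, 0) * 10.0   # hourly cooling load, kWh_th
      cop = 6.0 - 0.08 * (t_out - 25)           # efficiency falls as it gets hotter
      power = load / cop                        # hourly electricity use, kWh_e

      seer = load.sum() / power.sum()           # seasonal ratio, not a design-point COP
      print(f"SEER = {seer:.2f}")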

  18. Theoretical study of network design methodologies for the aerial relay system. [energy consumption and air traffic control

    NASA Technical Reports Server (NTRS)

    Rivera, J. M.; Simpson, R. W.

    1980-01-01

    The aerial relay system network design problem is discussed. A generalized branch-and-bound-based algorithm is developed which can consider a variety of optimization criteria, such as minimum passenger travel time and minimum liner and feeder operating costs. The algorithm, although efficient, is practical mainly for small networks, because its computation time increases exponentially with the number of variables.
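
    As a reminder of why the computation time grows exponentially, here is a generic branch-and-bound sketch on a budgeted selection problem (the aerial relay formulation itself is far richer; this only shows the bounding-and-pruning mechanics):

      def branch_and_bound(costs, benefits, budget):
          # maximize total benefit subject to a cost budget; the bound
          # fills the remaining budget fractionally (LP relaxation)
          order = sorted(range(len(costs)), key=lambda i: -benefits[i] / costs[i])
          best = [0.0]

          def bound(k, cost, benefit):
              for i in order[k:]:
                  take = min(1.0, (budget - cost) / costs[i])
                  if take <= 0:
                      break
                  cost += take * costs[i]
                  benefit += take * benefits[i]
              return benefit

          def explore(k, cost, benefit):
              if cost > budget:
                  return
              best[0] = max(best[0], benefit)
              if k == len(order) or bound(k, cost, benefit) <= best[0]:
                  return                      # prune: bound cannot beat incumbent
              i = order[k]
              explore(k + 1, cost + costs[i], benefit + benefits[i])
              explore(k + 1, cost, benefit)

          explore(0, 0.0, 0.0)
          return best[0]

      print(branch_and_bound([2.0, 3.0, 4.0], [4.0, 5.0, 6.0], budget=5.0))  # -> 9.0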

  19. Generalized Newton Method for Energy Formulation in Image Processing

    DTIC Science & Technology

    2008-04-01

    The report develops a generalized Newton method for the optimization of cost functionals arising in image processing, illustrated on two representative variational problems: image deblurring and geodesic active contours for image segmentation. Keywords: Newton method, inner product, active contours, deblurring. AMS subject classifications: 35A15, 65K10, 90C53. (The record excerpt also cites: A. Brook, N. Sochen, and N. Kiryati, "Deblurring of color images corrupted by impulsive noise," IEEE Transactions on Image Processing, 16(4):1101-1111.)

  20. Cost-effectiveness analysis of the optimal threshold of an automated immunochemical test for colorectal cancer screening: performances of immunochemical colorectal cancer screening.

    PubMed

    Berchi, Célia; Guittet, Lydia; Bouvier, Véronique; Launoy, Guy

    2010-01-01

    Most industrialized countries, including France, have undertaken to generalize colorectal cancer screening using guaiac fecal occult blood tests (G-FOBT). However, recent research demonstrates that immunochemical fecal occult blood tests (I-FOBT) are more effective than G-FOBT. Moreover, new-generation I-FOBT benefits from a quantitative reading technique that allows the positivity threshold to be chosen, hence offering the best balance between effectiveness and cost. We aimed to compare the cost and the clinical performance of one round of screening using I-FOBT at different positivity thresholds with those obtained with G-FOBT, to determine the optimal cut-off for I-FOBT. Data were derived from an experiment conducted from June 2004 to December 2005 in Calvados (France), where 20,322 inhabitants aged 50-74 years performed both I-FOBT and G-FOBT. Clinical performance was assessed by the number of advanced tumors screened, including large adenomas and cancers. Costs were assessed by the French Social Security Board and included only direct costs. Screening using I-FOBT resulted in better health outcomes and lower costs than screening using G-FOBT for thresholds between 75 and 93 ng/ml. I-FOBT at 55 ng/ml also offers a satisfactory alternative to G-FOBT, because it is 1.8-fold more effective than G-FOBT, without increasing the number of unnecessary colonoscopies, at an extra cost of 2,519 euros per advanced tumor screened. The use of an automated I-FOBT at 75 ng/ml would guarantee more efficient screening than the currently used G-FOBT. Health authorities in industrialized countries should consider the replacement of G-FOBT by an automated I-FOBT test in the near future.

  1. Cost-Effectiveness of IDegLira Versus Insulin Intensification Regimens for the Treatment of Adults with Type 2 Diabetes in the Czech Republic.

    PubMed

    Kvapil, Milan; Prázný, Martin; Holik, Pavel; Rychna, Karel; Hunt, Barnaby

    2017-12-01

    The aim of this study was to evaluate the long-term cost-effectiveness of the insulin degludec/liraglutide combination (IDegLira) versus basal insulin intensification strategies for patients with type 2 diabetes mellitus (T2DM) not optimally controlled on basal insulin in the Czech Republic. Cost-effectiveness was evaluated using the QuintilesIMS Health CORE Diabetes model, an interactive internet-based model that simulates clinical outcomes and costs for cohorts of patients with diabetes. The analysis was conducted from the perspective of the Czech Republic public payer. Sensitivity analyses were conducted to explore the sensitivity of the model to plausible variations in key parameters. The use of IDegLira was associated with an improvement in the quality-adjusted life expectancy of 0.31 quality-adjusted life-years (QALYs), at an additional cost of Czech Koruna (CZK) 107,829 over a patient's lifetime compared with basal-bolus therapy, generating an incremental cost-effectiveness ratio (ICER) of CZK 345,052 per QALY gained. In a scenario analysis, IDegLira was associated with an ICER of CZK 693,763 per QALY gained compared to basal insulin + glucagon-like peptide-1 receptor agonist (GLP-1 RA). The ICERs are below the generally accepted willingness-to-pay threshold (CZK 1,100,000/QALY gained at the time of this analysis). Results from this evaluation suggest that IDegLira is a cost-effective treatment option compared with basal-bolus therapy and basal insulin + GLP-1 RA for patients with T2DM in the Czech Republic whose diabetes is not optimally controlled with basal insulin. Novo Nordisk.

  2. What is a hospital bed day worth? A contingent valuation study of hospital Chief Executive Officers.

    PubMed

    Page, Katie; Barnett, Adrian G; Graves, Nicholas

    2017-02-14

    Decreasing hospital length of stay, and so freeing up hospital beds, represents an important cost saving which is often used in economic evaluations. The savings need to be accurately quantified in order to make optimal health care resource allocation decisions. Traditionally the accounting cost of a bed is used. We argue instead that the economic cost of a bed day is the better value for making resource decisions, and we describe our valuation method and estimations for costing this important resource. We performed a contingent valuation using 37 Australian Chief Executive Officers' (CEOs) willingness to pay (WTP) to release bed days in their hospitals, both generally and using specific cases. We provide a succinct thematic analysis from qualitative interviews post survey completion, which provide insight into the decision making process. On average CEOs are willing to pay a marginal rate of $216 for a ward bed day and $436 for an Intensive Care Unit (ICU) bed day, with estimates of uncertainty being greater for ICU beds. These estimates are significantly lower (four times for ward beds and seven times for ICU beds) than the traditional accounting costs often used. Key themes to emerge from the interviews include the importance of national funding and targets, and their associated incentive structures, as well as the aversion to discuss bed days as an economic resource. This study highlights the importance for valuing bed days as an economic resource to inform cost effectiveness models and thus improve hospital decision making and resource allocation. Significantly under or over valuing the resource is very likely to result in sub-optimal decision making. We discuss the importance of recognising the opportunity costs of this resource and highlight areas for future research.

  3. The complex interface between economy and healthcare: An introductory overview for clinicians.

    PubMed

    Ottolini, Federica Liliana; Buggio, Laura; Somigliana, Edgardo; Vercellini, Paolo

    2016-12-01

    In a period of generalized economic crisis, it seems particularly appropriate to try to manage a continually growing sector such as healthcare in the best possible way. The crucial aim of optimizing available healthcare resources is to obtain the maximum possible benefit with the minimum expenditure. This has important social implications, whether individual citizens or tax-funded national health services eventually have to pay the bill. The keyword here is efficiency, which means either maximizing the benefit from a fixed sum of money or minimizing the resources required for a defined benefit. In order to achieve these objectives, economic evaluation is a helpful tool. Five types of economic evaluation exist in the healthcare field: cost-minimization, cost-benefit, cost-consequences, cost-effectiveness and cost-utility analysis. The objective of this narrative review is to provide an overview of the principal methods used for economic evaluation in healthcare. Economic evaluation represents a starting point for the allocation of resources, the choice of worthwhile investments and the division of budgets across different health programs. Moreover, economic evaluation allows the comparison of different procedures in terms of quality of life and life expectancy, bearing in mind that cost-effectiveness is only one of multiple facets in the decision-making process. Economic evaluation is important to critically evaluate clinical interventions and ensure that we are implementing the most cost-effective management protocols. Clinicians are called on to fulfill the complex task of optimizing the use of resources while, at the same time, improving the quality of healthcare assistance. Copyright © 2016 European Federation of Internal Medicine. Published by Elsevier B.V. All rights reserved.

  4. Optimization of Automobile Crush Characteristics: Technical Report

    DOT National Transportation Integrated Search

    1975-10-01

    A methodology is developed for the evaluation and optimization of societal costs of two-vehicle automobile collisions. Costs considered in a Figure of Merit include costs of injury/mortality, occupant compartment penetration, collision damage repairs...

  5. Automated optimization techniques for aircraft synthesis

    NASA Technical Reports Server (NTRS)

    Vanderplaats, G. N.

    1976-01-01

    Application of numerical optimization techniques to automated conceptual aircraft design is examined. These methods are shown to be a general and efficient way to obtain quantitative information for evaluating alternative new vehicle projects. Fully automated design is compared with traditional point design methods and time and resource requirements for automated design are given. The NASA Ames Research Center aircraft synthesis program (ACSYNT) is described with special attention to calculation of the weight of a vehicle to fly a specified mission. The ACSYNT procedures for automatically obtaining sensitivity of the design (aircraft weight, performance and cost) to various vehicle, mission, and material technology parameters are presented. Examples are used to demonstrate the efficient application of these techniques.

  6. The costs and benefits of positive illusions

    PubMed Central

    Makridakis, Spyros; Moleskis, Andreas

    2015-01-01

    Positive illusions are associated with unrealistic optimism about the future and an inflated assessment of one’s abilities. They are prevalent in normal life and are considered essential for maintaining a healthy mental state, although there are disagreements as to the extent to which people demonstrate these positive illusions and whether they are beneficial or not. Whatever the situation, it is hard to dismiss their existence and their positive and/or negative influence on human behavior and decision making in general. Prominent among illusions is that of control, that is “the tendency for people to overestimate their ability to control events.” This paper describes positive illusions and their potential benefits, and quantifies their costs in five specific fields (gambling, stock and other markets, new firms and startups, preventive medicine and wars). It is organized into three parts. First, the psychological reasons giving rise to positive illusions are described and their likely harms and benefits stated. Second, their negative consequences are presented and their costs are quantified in five seriously affected areas, with emphasis on those related to the illusion of control, which seems to dominate unrealistic optimism. The costs involved are huge, and serious efforts must be undertaken to understand their enormity and to take steps to avoid them in the future. Finally, there is a concluding section where the challenges related to positive illusions are noted and directions for future research are presented. PMID:26175698

  7. The costs and benefits of positive illusions.

    PubMed

    Makridakis, Spyros; Moleskis, Andreas

    2015-01-01

    Positive illusions are associated with unrealistic optimism about the future and an inflated assessment of one's abilities. They are prevalent in normal life and are considered essential for maintaining a healthy mental state, although there are disagreements as to the extent to which people demonstrate these positive illusions and whether they are beneficial or not. Whatever the situation, it is hard to dismiss their existence and their positive and/or negative influence on human behavior and decision making in general. Prominent among illusions is that of control, that is "the tendency for people to overestimate their ability to control events." This paper describes positive illusions and their potential benefits, and quantifies their costs in five specific fields (gambling, stock and other markets, new firms and startups, preventive medicine and wars). It is organized into three parts. First, the psychological reasons giving rise to positive illusions are described and their likely harms and benefits stated. Second, their negative consequences are presented and their costs are quantified in five seriously affected areas, with emphasis on those related to the illusion of control, which seems to dominate unrealistic optimism. The costs involved are huge, and serious efforts must be undertaken to understand their enormity and to take steps to avoid them in the future. Finally, there is a concluding section where the challenges related to positive illusions are noted and directions for future research are presented.

  8. Two-step optimization of pressure and recovery of reverse osmosis desalination process.

    PubMed

    Liang, Shuang; Liu, Cui; Song, Lianfa

    2009-05-01

    Driving pressure and recovery are two primary design variables of a reverse osmosis process that largely determine the total cost of seawater and brackish water desalination. A two-step optimization procedure was developed in this paper to determine the values of driving pressure and recovery that minimize the total cost of RO desalination. It was demonstrated that the optimal net driving pressure is solely determined by the electricity price and the membrane price index, which is a lumped parameter to collectively reflect membrane price, resistance, and service time. On the other hand, the optimal recovery is determined by the electricity price, initial osmotic pressure, and costs for pretreatment of raw water and handling of retentate. Concise equations were derived for the optimal net driving pressure and recovery. The dependences of the optimal net driving pressure and recovery on the electricity price, membrane price, and costs for raw water pretreatment and retentate handling were discussed.
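
    The structure of the tradeoff is easy to reproduce numerically. A toy total-cost surface over pressure and recovery (the paper derives closed-form optima, which are not reproduced here; every coefficient below is illustrative):

      import numpy as np

      pi0 = 25.0                                 # feed osmotic pressure, bar
      c_energy, c_membrane, c_pretreat = 1.0, 300.0, 0.5

      P = np.linspace(35, 120, 200)[:, None]     # net driving pressure, bar
      r = np.linspace(0.10, 0.70, 200)[None, :]  # recovery fraction
      net = P - pi0 / (1 - r)                    # margin above brine osmotic pressure

      cost = np.where(net > 0,
                      c_energy * P / r                            # pump energy per unit permeate
                      + c_membrane / np.where(net > 0, net, 1.0)  # membrane area term
                      + c_pretreat / r,                           # pretreatment per unit permeate
                      np.inf)                                     # infeasible below osmotic pressure

      i, j = np.unravel_index(np.argmin(cost), cost.shape)
      print(f"optimal pressure {P[i, 0]:.1f} bar, recovery {r[0, j]:.2f}")

    The two knobs that move the optimum in this toy, the electricity and membrane prices, are the same parameters the paper identifies as determining the optimal net driving pressure.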

  9. Optimal Guaranteed Cost Sliding Mode Control for Constrained-Input Nonlinear Systems With Matched and Unmatched Disturbances.

    PubMed

    Zhang, Huaguang; Qu, Qiuxia; Xiao, Geyang; Cui, Yang

    2018-06-01

    Based on integral sliding mode and approximate dynamic programming (ADP) theory, a novel optimal guaranteed cost sliding mode control is designed for constrained-input nonlinear systems with matched and unmatched disturbances. When the system moves on the sliding surface, the optimal guaranteed cost control problem of sliding mode dynamics is transformed into the optimal control problem of a reformulated auxiliary system with a modified cost function. The ADP algorithm based on single critic neural network (NN) is applied to obtain the approximate optimal control law for the auxiliary system. Lyapunov techniques are used to demonstrate the convergence of the NN weight errors. In addition, the derived approximate optimal control is verified to guarantee the sliding mode dynamics system to be stable in the sense of uniform ultimate boundedness. Some simulation results are presented to verify the feasibility of the proposed control scheme.

  10. Cost-Based Optimization of a Papermaking Wastewater Regeneration Recycling System

    NASA Astrophysics Data System (ADS)

    Huang, Long; Feng, Xiao; Chu, Khim H.

    2010-11-01

    Wastewater can be regenerated for recycling in an industrial process to reduce freshwater consumption and wastewater discharge. Such an environmentally friendly approach also leads to cost savings that accrue from reduced freshwater usage and wastewater discharge. However, the resulting cost savings are offset to varying degrees by the costs incurred for the regeneration of wastewater for recycling. Therefore, systematic procedures should be used to determine the true economic benefits of any water-using system involving wastewater regeneration recycling. In this paper, a total cost accounting procedure is employed to construct a comprehensive cost model for a paper mill. The resulting cost model is optimized by means of mathematical programming to determine the optimal regeneration flowrate and regeneration efficiency that yield the minimum total cost.

  11. Controlling for endogeneity in attributable costs of vancomycin-resistant enterococci from a Canadian hospital.

    PubMed

    Lloyd-Smith, Patrick

    2017-12-01

    Decisions regarding the optimal provision of infection prevention and control resources depend on accurate estimates of the attributable costs of health care-associated infections. Estimation is challenging given the skewed nature of health care cost data and the endogeneity of health care-associated infections. The objective of this study is to determine the hospital costs attributable to vancomycin-resistant enterococci (VRE) while accounting for endogeneity. This study builds on an attributable cost model from a retrospective cohort study of 1,292 patients admitted to an urban hospital in Vancouver, Canada. Attributable hospital costs were estimated with multivariate generalized linear models (GLMs). To account for endogeneity, a control function approach was used. The analysis sample included 217 patients with health care-associated VRE. In the standard GLM, the costs attributable to VRE are $17,949 (SEM, $2,993). Accounting for endogeneity, however, the attributable costs were estimated to range from $14,706 (SEM, $7,612) to $42,101 (SEM, $15,533). Across all model specifications, attributable costs are 76% higher on average when controlling for endogeneity. VRE was independently associated with increased hospital costs, and controlling for endogeneity led to higher attributable cost estimates. Copyright © 2017 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Elsevier Inc. All rights reserved.
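
    A control function in this setting is a two-stage recipe: model infection first, then feed the first-stage residual into the cost GLM so the infection coefficient is purged of correlation with unobserved severity. A sketch on simulated data (the instrument and all coefficients are invented; the study's actual specification is not reproduced):

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(0)
      n = 1000
      z = rng.normal(size=n)                    # instrument, e.g. ward-level exposure
      u = rng.normal(size=n)                    # unobserved severity
      vre = (0.8 * z + u + rng.normal(size=n) > 0).astype(float)
      cost = np.exp(8 + 0.6 * vre + 0.5 * u + rng.normal(scale=0.3, size=n))

      # stage 1: model VRE acquisition, keep the residual
      stage1 = sm.Logit(vre, sm.add_constant(z)).fit(disp=0)
      resid = vre - stage1.predict(sm.add_constant(z))

      # stage 2: log-link Gamma GLM of cost, with the residual as control
      X = sm.add_constant(np.column_stack([vre, resid]))
      glm = sm.GLM(cost, X, family=sm.families.Gamma(sm.families.links.Log())).fit()
      print(glm.params)   # second entry: VRE effect, corrected for endogeneity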

  12. Renewable Energy Resources Portfolio Optimization in the Presence of Demand Response

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Behboodi, Sahand; Chassin, David P.; Crawford, Curran

    In this paper we introduce a simple cost model of renewable integration and demand response that can be used to determine the optimal mix of generation and demand response resources. The model includes production cost, demand elasticity, uncertainty costs, capacity expansion costs, retirement and mothballing costs, and wind variability impacts to determine the hourly cost and revenue of electricity delivery. The model is tested on the 2024 planning case for British Columbia, and we find that cost is minimized with about 31% renewable generation. We also find that demand response does not have a significant impact on cost at the hourly level. The results suggest that the optimal level of renewable resources is not sensitive to a carbon tax or demand elasticity, but it is highly sensitive to the renewable resource installation cost.

  13. Application of a New Integrated Decision Support Tool (i-DST) for Urban Water Infrastructure: Analyzing Water Quality Compliance Pathways for Three Los Angeles Watersheds

    NASA Astrophysics Data System (ADS)

    Gallo, E. M.; Hogue, T. S.; Bell, C. D.; Spahr, K.; McCray, J. E.

    2017-12-01

    Receiving streams and waterbodies in urban watersheds are increasingly polluted by stormwater runoff. The implementation of Green Infrastructure (GI), which includes Low Impact Developments (LIDs) and Best Management Practices (BMPs), within a watershed aims to mitigate the effects of urbanization by reducing pollutant loads, runoff volume, and storm peak flow. Stormwater modeling is generally used to assess the impact of GI implemented within a watershed. These modeling tools are useful for determining the optimal suite of GI to maximize pollutant load reduction and minimize cost. However, stormwater management for most resource managers and communities also includes the implementation of grey and hybrid stormwater infrastructure. An integrated decision support tool, called i-DST, that allows for the optimization and comprehensive life-cycle cost assessment of grey, green, and hybrid stormwater infrastructure is currently being developed. The i-DST tool will evaluate optimal stormwater runoff management by taking into account the diverse economic, environmental, and societal needs associated with watersheds across the United States. Three watersheds from southern California act as a test site and assist in the development and initial application of the i-DST tool. The Ballona Creek, Dominguez Channel, and Los Angeles River watersheds are located in highly urbanized Los Angeles County. The water quality of the river channels flowing through each is impaired by heavy metals, including copper, lead, and zinc. However, despite being adjacent to one another within the same county, modeling results using the EPA System for Urban Stormwater Treatment and Analysis INtegration (SUSTAIN) found that the optimal path to compliance in each watershed differs significantly. The differences include varied costs, suites of BMPs, and ancillary benefits. This research analyzes how the economic, physical, and hydrological differences between the three watersheds shape the optimal plan for stormwater management.

  14. Dynamic optimization case studies in DYNOPT tool

    NASA Astrophysics Data System (ADS)

    Ozana, Stepan; Pies, Martin; Docekal, Tomas

    2016-06-01

    Dynamic programming is typically applied to optimization problems. Because analytical solutions are generally very difficult to obtain, software tools are widely used. These software packages are often third-party products designed to work with standard simulation software on the market. TOMLAB and DYNOPT are typical examples of such tools that can be applied effectively to problems of dynamic programming. DYNOPT is presented in this paper owing to its licensing policy (a free product under the GPL) and its simplicity of use. DYNOPT is a set of MATLAB functions for the determination of an optimal control trajectory given a description of the process, the cost to be minimized, and equality and inequality constraints, using the method of orthogonal collocation on finite elements. The actual optimal control problem is solved by complete parameterization of both the control and the state profile vectors. It is assumed that the optimized dynamic model may be described by a set of ordinary differential equations (ODEs) or differential-algebraic equations (DAEs). This collection of functions extends the capability of the MATLAB Optimization Toolbox. The paper introduces the use of DYNOPT for dynamic optimization problems by means of case studies on selected laboratory educational models.
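
    The core idea, parameterizing the control and integrating the dynamics, fits in a few lines. A direct-transcription sketch of a toy problem (plain Euler discretization rather than DYNOPT's orthogonal collocation, and Python rather than MATLAB): drive x' = u from x(0) = 0 to x(1) = 1 while minimizing the integral of u^2.

      import numpy as np
      from scipy.optimize import minimize

      N = 50
      h = 1.0 / N

      def objective(u):
          return h * np.sum(u ** 2)          # discretized integral of u^2

      def terminal_state(u):
          return h * np.sum(u) - 1.0         # Euler: x(1) = h * sum(u) must equal 1

      res = minimize(objective, x0=np.zeros(N),
                     constraints={"type": "eq", "fun": terminal_state})
      print(res.x[:5])                       # analytic optimum is u(t) = 1 everywhere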

  15. Cost optimization of reinforced concrete cantilever retaining walls under seismic loading using a biogeography-based optimization algorithm with Levy flights

    NASA Astrophysics Data System (ADS)

    Aydogdu, Ibrahim

    2017-03-01

    In this article, a new version of the biogeography-based optimization algorithm with Levy flight distribution (LFBBO) is introduced and used for the optimum design of reinforced concrete cantilever retaining walls under seismic loading. The cost of the wall is taken as the objective function, which is minimized under the constraints imposed by the American Concrete Institute (ACI 318-05) design code and geometric limitations. The influence of peak ground acceleration (PGA) on optimal cost is also investigated. The problem is solved with the LFBBO algorithm, which is developed by adding Levy flight distribution to the mutation part of the biogeography-based optimization (BBO) algorithm. Five design examples, two of which are taken from the literature, are optimized in the study. The results are compared to test the performance of the LFBBO and BBO algorithms and to determine the influence of the seismic load and PGA on the optimal cost of the wall.
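
    The Levy flight ingredient is usually generated with Mantegna's algorithm, which yields heavy-tailed step lengths whose occasional long jumps help a mutation operator escape local optima. A sketch (how LFBBO scales and applies these steps inside BBO's mutation is not reproduced here):

      import numpy as np
      from math import gamma, pi, sin

      rng = np.random.default_rng(0)

      def levy_step(beta=1.5, size=1):
          # Mantegna's algorithm for symmetric Levy-stable step lengths
          sigma = (gamma(1 + beta) * sin(pi * beta / 2)
                   / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
          u = rng.normal(0.0, sigma, size)
          v = rng.normal(0.0, 1.0, size)
          return u / np.abs(v) ** (1 / beta)

      print(np.round(levy_step(size=5), 3))   # mostly small steps, rare large ones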

  16. Heliostat cost optimization study

    NASA Astrophysics Data System (ADS)

    von Reeken, Finn; Weinrebe, Gerhard; Keck, Thomas; Balz, Markus

    2016-05-01

    This paper presents a methodology for a heliostat cost optimization study. First, different variants of small, medium-sized and large heliostats are designed. Then the respective costs, tracking and optical quality are determined. For the calculation of optical quality, a structural model of the heliostat is programmed and analyzed using finite element software. The costs are determined based on inquiries and on experience with similar structures. Eventually, the levelised electricity costs for a reference power tower plant are calculated. Before each annual simulation run, the heliostat field is optimized. Calculated LCOEs are then used to identify the most suitable option(s). Finally, the conclusions and findings of this extensive cost study are used to define the concept of a new cost-efficient heliostat called 'Stellio'.
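
    The ranking metric is the levelised cost of electricity. A minimal sketch of the standard annuity form (the discount rate and all cash-flow figures are placeholders, not the study's):

      def lcoe(capex, opex_per_yr, energy_per_yr_mwh, rate=0.07, years=25):
          # capital recovery factor turns capex into an equivalent annuity
          crf = rate * (1 + rate) ** years / ((1 + rate) ** years - 1)
          return (capex * crf + opex_per_yr) / energy_per_yr_mwh

      print(f"{lcoe(capex=700e6, opex_per_yr=12e6, energy_per_yr_mwh=500e3):.1f} $/MWh")

    A cheaper heliostat that degrades optical quality can still lose on LCOE if the annual energy term falls faster than the capital term, which is the tradeoff the annual simulations are set up to capture.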

  17. Photovoltaic design optimization for terrestrial applications

    NASA Technical Reports Server (NTRS)

    Ross, R. G., Jr.

    1978-01-01

    As part of the Jet Propulsion Laboratory's Low-Cost Solar Array Project, a comprehensive program of module cost-optimization has been carried out. The objective of these studies has been to define means of reducing the cost and improving the utility and reliability of photovoltaic modules for the broad spectrum of terrestrial applications. This paper describes one of the methods being used for module optimization, including the derivation of specific equations which allow the optimization of various module design features. The method is based on minimizing the life-cycle cost of energy for the complete system. Comparison of the life-cycle energy cost with the marginal cost of energy each year allows the logical plant lifetime to be determined. The equations derived allow the explicit inclusion of design parameters such as tracking, site variability, and module degradation with time. An example problem involving the selection of an optimum module glass substrate is presented.

  18. Optimal synthesis and design of the number of cycles in the leaching process for surimi production.

    PubMed

    Reinheimer, M Agustina; Scenna, Nicolás J; Mussati, Sergio F

    2016-12-01

    Water consumption during the leaching stage of the surimi manufacturing process depends strongly on the design and on the number and size of stages connected in series for the soluble-protein extraction target, and it is considered the main contributor to the operating costs. Therefore, the optimal synthesis and design of the leaching stage is essential to minimize the total annual cost. In this study, a mathematical optimization model for the optimal design of the leaching operation is presented. Specifically, a detailed Mixed Integer Nonlinear Programming (MINLP) model including operating and geometric constraints was developed based on our previous optimization model (an NLP model). Aspects of quality, water consumption and the main operating parameters were considered. The minimization of total annual cost, which trades off investment and operating costs, led to an optimal solution with fewer stages (2 instead of 3) and larger leaching tanks compared with previous results. An analysis was performed to investigate how the optimal solution is influenced by variations in the unit costs of fresh water, waste treatment and capital investment.

  19. Optimisation by hierarchical search

    NASA Astrophysics Data System (ADS)

    Zintchenko, Ilia; Hastings, Matthew; Troyer, Matthias

    2015-03-01

    Finding optimal values for a set of variables relative to a cost function gives rise to some of the hardest problems in physics, computer science and applied mathematics. Although often very simple in their formulation, these problems have a complex cost function landscape which prevents currently known algorithms from efficiently finding the global optimum. Countless techniques have been proposed to partially circumvent this problem, but an efficient method is yet to be found. We present a heuristic, general purpose approach to potentially improve the performance of conventional algorithms or special purpose hardware devices by optimising groups of variables in a hierarchical way. We apply this approach to problems in combinatorial optimisation, machine learning and other fields.
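
    A minimal block-coordinate version of the idea, optimising small groups of variables exactly while the rest stay frozen (the toy quadratic cost and the fixed group size are ours; the paper's hierarchical grouping is more general):

      import numpy as np

      rng = np.random.default_rng(0)
      Q = rng.normal(size=(12, 12))
      Q = Q @ Q.T + np.eye(12)              # convex toy cost: 0.5 x'Qx - b'x
      b = rng.normal(size=12)
      cost = lambda x: 0.5 * x @ Q @ x - b @ x

      x = np.zeros(12)
      for sweep in range(20):
          for g in range(0, 12, 3):         # groups of 3 variables
              idx = slice(g, g + 3)
              # exact minimisation over the group with the others fixed
              rhs = b[idx] - Q[idx, :] @ x + Q[idx, idx] @ x[idx]
              x[idx] = np.linalg.solve(Q[idx, idx], rhs)

      print(cost(x), cost(np.linalg.solve(Q, b)))   # both near the global optimum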

  20. Nanoparticle risk management and cost evaluation: a general framework

    NASA Astrophysics Data System (ADS)

    Fleury, Dominique; Bomfim, João A. S.; Metz, Sébastien; Bouillard, Jacques X.; Brignon, Jean-Marc

    2011-07-01

    Industrial production of nano-objects has been growing fast during the last decade, and a wide range of products containing nanoparticles (NPs) is offered to the public in various markets (automotive, electronics, textiles...). The difficulty of monitoring the presence of nano-objects in any medium makes it hard to control the risk associated with the production stage. It is therefore very difficult to assess the efficiency of prevention and mitigation solutions, which potentially leads to overestimating the level of the recommended protection barriers. The extra costs of adding nano-objects to the process, especially those of nanosafety, must be estimated and optimized to ensure the competitiveness of future production lines and associated products. The risk management and cost evaluation methods presented herein have been designed for application in a pilot production line of injection-moulded nanocomposites.

  1. Wing attachment position of fruit fly minimizes flight cost

    NASA Astrophysics Data System (ADS)

    Noest, Robert; Wang, Jane

    Flight is energetically costly, which means insects need to find ways to reduce their energy expenditure during sustained flight. Previous work has shown that insect muscles can recover some of the energy used for producing flapping motion. Moreover, the observed forms of flapping motion are efficient for generating the force required to balance the weight. In this talk, we show that one of the morphological parameters, the wing attachment point on a fly, is suitably located to further reduce the cost of flight while allowing the fly to remain close to stable. We investigate why this is the case and attempt to find a general rule for the optimal location of the wing hinge. Our analysis is based on computations of flapping free flight together with Floquet stability analysis of periodic flight for descending, hovering and ascending cases.

  2. Space Infrared Telescope Facility (SIRTF) - Operations concept. [decreasing development and operations cost

    NASA Technical Reports Server (NTRS)

    Miller, Richard B.

    1992-01-01

    The development and operations costs of the Space IR Telescope Facility (SIRTF) are discussed in the light of minimizing total outlays and optimizing efficiency. The development phase cannot extend into the post-launch segment, which is planned to support only system verification and calibration, followed by operations with a 70-percent efficiency goal. The importance of reducing the ground-support staff is demonstrated, and the value of the highly sensitive observations to the general astronomical community is described. The Failure Protection Algorithm for the SIRTF is designed around the 5-yr lifetime and the continuous venting of cryogen, and a science-driven ground/operations system is described. Attention is given to balancing cost and performance, prototyping during the development phase, incremental development, the utilization of standards, and the integration of ground system/operations with flight system integration and test.

  3. Design the Cost Approach in Trade-Off's for Structural Components, Illustrated on the Baseline Selection of the Engine Thrust Frame of Ariane 5 ESC-B

    NASA Astrophysics Data System (ADS)

    Appolloni, L.; Juhls, A.; Rieck, U.

    2002-01-01

    Designing for value is one of the current emerging methods for design optimization, which broke into the domain of aerospace engineering in the late 1990s. Within designing for value, two main design philosophies exist: Design For Cost and Design To Cost. Design To Cost is the iterative redesign of a project until its content meets a given budget. Design For Cost is the conscious use of engineering process technology to reduce life cycle cost while satisfying, and hopefully exceeding, customer demands. The key to understanding cost, and hence to reducing it, is the ability to measure cost accurately and to allocate it appropriately to products; only then can intelligent decisions be made. Hence the necessity of new methods such as "Design For Value" or "Design For Competitiveness", set up with a generally multidisciplinary approach to find an optimized technical solution driven by parameters that depend on the mission scenario and the customer/market needs. Very often three, and rarely more than five, parametric drivers are sufficient. The more variables exist, the higher the risk of finding only a local sub-optimum rather than the global optimum, and the less robust the resulting solution is against changes in the input parameters. Once the main optimization parameters have been identified, the system engineer has to communicate them to all design engineers, who must track these assessment variables during the entire design and decision process. The design process that led to the definition of the feasible structural concepts for the Engine Thrust Frame of the Ariane 5 Upper Cryogenic Stage ESC-B follows these current design philosophies, combining a design-for-cost approach with a design-to-cost optimization loop. Ariane 5 is the first member of a family of heavy-lift launchers. It aims to evolve into a family of launchers that responds to the space transportation challenges of the 21st century. New upper stages, along with modifications to the main cryogenic stage and solid boosters, will increase performance and meet the demands of a changing market. A two-step approach was decided for future developments of the launcher upper stage in order to increase the payload lift capability of Ariane 5. The first step, ESC-A, is scheduled for first launch in 2002. As a later step, ESC-B shall grow to 12 tons in GTO, with multiple restart capability, i.e. a re-ignitable engine. The Ariane 5 ESC-B first flight is targeted for 2006. It will be loaded with 28 metric tons of liquid oxygen and liquid hydrogen and powered by a new expander cycle engine, "Vinci". The Vinci engine will be connected to the tanks of the ESC-B stage via the structure named by its designers the ETF, or Engine Thrust Frame. In order to develop a design concept for the ETF component, a trade-off was performed based on modern system engineering methodologies. This paper describes the basis of the system engineering approach in the design-to-cost process and illustrates how this approach was applied during the trade-off for the baseline selection of the Engine Thrust Frame of Ariane 5 ESC-B.

  4. Time-driven activity-based costing to identify opportunities for cost reduction in pediatric appendectomy.

    PubMed

    Yu, Yangyang R; Abbas, Paulette I; Smith, Carolyn M; Carberry, Kathleen E; Ren, Hui; Patel, Binita; Nuchtern, Jed G; Lopez, Monica E

    2016-12-01

    As reimbursement programs shift to value-based payment models emphasizing quality and efficient healthcare delivery, there exists a need to better understand process management to unearth the true costs of patient care. We sought to identify cost-reduction opportunities in simple appendicitis management by applying a time-driven activity-based costing (TDABC) methodology to this high-volume surgical condition. Process maps were created using medical record time stamps. Labor capacity cost rates were calculated using national median physician salaries, weighted nurse-patient ratios, and hospital cost data. Consumable costs for supplies, pharmacy, laboratory, and food were derived from the hospital general ledger. Time-driven activity-based costing resulted in precise per-minute calculation of personnel costs. The highest costs were in the operating room ($747.07), hospital floor ($388.20), and emergency department ($296.21). Major contributors to length of stay were emergency department evaluation (270 min), operating room availability (395 min), and post-operative monitoring (1128 min). The TDABC model yielded $1712.16 in personnel costs and $1041.23 in consumable costs, for a total appendicitis cost of $2753.39. Inefficiencies in healthcare delivery can be identified through TDABC. Triage-based standing delegation orders, advanced practice providers, and same-day discharge protocols are proposed cost-reducing interventions to optimize value-based care for simple appendicitis. Level of evidence: II. Copyright © 2016 Elsevier Inc. All rights reserved.
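
    The core TDABC computation is simple: each process step's duration is multiplied by the per-minute capacity cost rate of the resources involved. A hedged sketch follows; the step durations are taken from the abstract, while the per-minute rates are invented placeholders (the study derives its rates from salaries, staffing ratios, and ledger data).

    ```python
    # TDABC: episode cost = sum over steps of minutes x per-minute cost rate.
    # Step durations come from the abstract; the rates are assumed.
    steps = [   # (process step, minutes, assumed $ rate per minute)
        ("emergency department evaluation", 270, 1.00),
        ("operating room",                  395, 1.80),
        ("post-operative floor monitoring", 1128, 0.35),
    ]
    personnel = sum(mins * rate for _, mins, rate in steps)
    consumables = 1041.23        # general-ledger total reported in the study
    print(f"personnel ${personnel:.2f} + consumables ${consumables:.2f} "
          f"= ${personnel + consumables:.2f}")
    ```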

  5. An innovative time-cost-quality tradeoff modeling of building construction project based on resource allocation.

    PubMed

    Hu, Wenfa; He, Xinhua

    2014-01-01

    Time, quality, and cost are three important but mutually conflicting objectives in a building construction project. Optimizing them simultaneously is a tough challenge for project managers because they are measured in different units. This paper presents a time-cost-quality optimization model that enables managers to optimize multiple objectives. The model is based on the project breakdown structure method, in which the task resources of a construction project are divided into a series of activities and further into construction labor, materials, equipment, and administration. The resources used in a construction activity ultimately determine its construction time, cost, and quality, and a complete time-cost-quality trade-off model is generated from the correlations between construction activities. A genetic algorithm is applied to solve the resulting nonlinear time-cost-quality problems. The construction of a three-storey house is used as an example to illustrate the implementation of the model, demonstrate its advantages in trading off construction time, cost, and quality, and help make winning decisions in construction practice. The computed time-cost-quality curves, presented as visual graphics for the case study, support traditional time-cost assumptions and demonstrate the sophistication of the trade-off model.
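
    A compact illustration of the optimization step, assuming a weighted-sum fitness and invented activity data: each gene selects an execution mode for one activity, and each mode carries a (time, cost, quality) triple. The paper's actual model is richer (resource-driven correlations between activities), so this is only a sketch of the GA mechanics.

    ```python
    import random
    random.seed(1)

    # Each activity can run in one of three execution modes; a mode is a
    # (duration in days, cost in $k, quality score) triple. Data invented.
    modes = [
        [(10, 8, 0.95), (7, 11, 0.90), (5, 15, 0.80)],
        [(14, 12, 0.92), (10, 16, 0.88), (8, 21, 0.78)],
        [(6, 5, 0.97), (4, 7, 0.93), (3, 10, 0.85)],
    ]
    W_T, W_C, W_Q = 1.0, 0.8, 50.0      # assumed trade-off weights

    def fitness(ch):
        t = sum(modes[i][m][0] for i, m in enumerate(ch))   # serial schedule
        c = sum(modes[i][m][1] for i, m in enumerate(ch))
        q = min(modes[i][m][2] for i, m in enumerate(ch))   # weakest link
        return -(W_T * t + W_C * c - W_Q * q)               # higher is better

    pop = [[random.randrange(3) for _ in modes] for _ in range(30)]
    for gen in range(60):
        pop.sort(key=fitness, reverse=True)
        elite = pop[:10]
        children = []
        while len(children) < 20:
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, len(modes))           # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:                       # mutation
                child[random.randrange(len(modes))] = random.randrange(3)
            children.append(child)
        pop = elite + children
    print("best mode per activity:", max(pop, key=fitness))
    ```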

  6. The cost of hybrid waste water systems: A systematic framework for specifying minimum cost-connection rates.

    PubMed

    Eggimann, Sven; Truffer, Bernhard; Maurer, Max

    2016-10-15

    Determining the optimal connection rate (CR) for regional waste water treatment is a challenge that has recently gained the attention of academia and professional circles throughout the world. We contribute to this debate by proposing a framework for a total cost assessment of sanitation infrastructures in a given region over the whole range of possible CRs. The total costs comprise the treatment and transportation costs of centralised and on-site waste water management systems relative to specific CRs. We can then identify optimal CRs that either deliver waste water services at the lowest overall regional cost or, alternatively, result from households freely choosing whether to connect. We apply the framework to a Swiss region, derive a typology for regional cost curves, and discuss whether and by how much the empirically observed CRs differ from the two optimal ones. Both optimal CRs may be reached by introducing specific regulatory incentive structures. Copyright © 2016 Elsevier Ltd. All rights reserved.
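
    The framework's central object, the regional total-cost curve over CR, can be sketched in a few lines. All numbers below are invented: connecting households gets progressively more expensive as the remaining households become more remote, while on-site treatment has a roughly constant unit cost, so an interior least-cost CR can emerge.

    ```python
    import numpy as np

    # Total regional cost as a function of the connection rate (CR). Numbers
    # invented: the average cost of connecting a household rises with CR
    # (remote households are reached last); on-site treatment is flat.
    cr = np.linspace(0.0, 1.0, 1001)
    households = 1000
    central = households * cr * (400 + 800 * cr)    # rising unit cost
    onsite = households * (1 - cr) * 950            # flat unit cost
    total = central + onsite
    print(f"least-cost CR ~ {cr[np.argmin(total)]:.2f}")   # ~0.34 here
    ```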

  7. Optimal Operation System of the Integrated District Heating System with Multiple Regional Branches

    NASA Astrophysics Data System (ADS)

    Kim, Ui Sik; Park, Tae Chang; Kim, Lae-Hyun; Yeo, Yeong Koo

    This paper presents an optimal production and distribution management system for the structural and operational optimization of an integrated district heating system (DHS) with multiple regional branches. A DHS consists of energy suppliers and consumers, a district heating pipeline network, and heat storage facilities in the covered region. The optimal management system takes into account the production of heat and electric power, regional heat demand, electric power bidding and sales, and the transport and storage of heat at each regional DHS. It is formulated as a mixed integer linear programming (MILP) problem whose objective is to minimize the overall cost of the integrated DHS while satisfying the operating constraints of heat units and networks and fulfilling heating demands from consumers. A piecewise linear formulation of the production cost function and a stairwise formulation of the start-up cost function are used to approximate the nonlinear cost functions. Evaluation of the total overall cost is based on weekly operations at each district heating branch. Numerical simulations show an increase in energy efficiency due to the introduction of the proposed optimal management system.
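
    A toy version of the MILP, assuming invented unit data and using the PuLP modelling library: commit two heat units over three periods to meet demand at minimum production plus start-up cost. The real model adds networks, storage, electricity bidding, and the piecewise-linear cost segments described above.

    ```python
    from pulp import (LpProblem, LpMinimize, LpVariable, lpSum, LpBinary,
                      PULP_CBC_CMD)

    # Commit two heat units over three periods to meet heat demand at
    # minimum production + start-up cost. All unit data are invented.
    T = range(3)
    demand = [60, 120, 90]                       # MWth per period
    units = {"cheap": dict(cap=80, c=20, start=500),
             "peak":  dict(cap=100, c=45, start=200)}

    m = LpProblem("district_heating", LpMinimize)
    q = {(u, t): LpVariable(f"q_{u}_{t}", 0, units[u]["cap"])
         for u in units for t in T}              # heat produced
    on = {(u, t): LpVariable(f"on_{u}_{t}", cat=LpBinary) for u in units for t in T}
    su = {(u, t): LpVariable(f"su_{u}_{t}", cat=LpBinary) for u in units for t in T}

    m += lpSum(units[u]["c"] * q[u, t] + units[u]["start"] * su[u, t]
               for u in units for t in T)        # objective: total cost
    for t in T:
        m += lpSum(q[u, t] for u in units) >= demand[t]
        for u in units:
            m += q[u, t] <= units[u]["cap"] * on[u, t]
            m += su[u, t] >= on[u, t] - (on[u, t - 1] if t > 0 else 0)
    m.solve(PULP_CBC_CMD(msg=False))
    print({k: v.value() for k, v in q.items()})
    ```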

  8. Cost considerations for long-term ecological monitoring

    USGS Publications Warehouse

    Caughlan, L.; Oakley, K.L.

    2001-01-01

    For an ecological monitoring program to be successful over the long term, the perceived benefits of the information must justify the cost. Financial limitations will always restrict the scope of a monitoring program, hence the program’s focus must be carefully prioritized. Clearly identifying the costs and benefits of a program will assist in this prioritization process, but this is easier said than done. Frequently, the true costs of monitoring are not recognized and are, therefore, underestimated. Benefits are rarely evaluated, because they are difficult to quantify. The intent of this review is to assist the designers and managers of long-term ecological monitoring programs by providing a general framework for building and operating a cost-effective program. Previous considerations of monitoring costs have focused on sampling design optimization. We present cost considerations of monitoring in a broader context. We explore monitoring costs, including both budgetary costs (what dollars are spent on) and economic costs (which include opportunity costs). Often, the largest portion of a monitoring program budget is spent on data collection, while other critical aspects of the program, such as scientific oversight, training, data management, quality assurance, and reporting, are neglected. Recognizing and budgeting for all program costs is therefore a key factor in a program’s longevity. The close relationship between statistical issues and cost is discussed, highlighting the importance of sampling design, replication and power, and comparing the costs of alternative designs through pilot studies and simulation modeling. A monitoring program development process that includes explicit checkpoints for considering costs is presented. The first checkpoint occurs during the setting of objectives and during sampling design optimization. The last checkpoint occurs once the basic shape of the program is known, and the costs and benefits, or alternatively the cost-effectiveness, of each program element can be evaluated. Moving into the implementation phase without careful evaluation of costs and benefits is risky because if costs are later found to exceed benefits, the program will fail. The costs of development, which can be quite high, will have been largely wasted. Realistic expectations of costs and benefits will help ensure that monitoring programs survive the early, turbulent stages of development and the challenges posed by fluctuating budgets during implementation.

  9. Multiple piece turbine engine airfoil with a structural spar

    DOEpatents

    Vance, Steven J [Orlando, FL]

    2011-10-11

    A multiple piece turbine airfoil having an outer shell with an airfoil tip that is attached to a root with an internal structural spar is disclosed. The root may be formed from first and second sections that include an internal cavity configured to receive and secure the one or more components forming the generally elongated airfoil. The internal structural spar may be attached to an airfoil tip and place the generally elongated airfoil in compression. The configuration enables each component to be formed from different materials to reduce the cost of the materials and to optimize the choice of material for each component.

  10. Game Theory and Risk-Based Levee System Design

    NASA Astrophysics Data System (ADS)

    Hui, R.; Lund, J. R.; Madani, K.

    2014-12-01

    Risk-based analysis has been developed for optimal levee design for economic efficiency. Along many rivers, two levees on opposite riverbanks act as a simple levee system. Being rational and self-interested, land owners on each river bank would tend to independently optimize their levees with risk-based analysis, resulting in a Pareto-inefficient levee system design from the social planner's perspective. Game theory is applied in this study to analyze decision making process in a simple levee system in which the land owners on each river bank develop their design strategies using risk-based economic optimization. For each land owner, the annual expected total cost includes expected annual damage cost and annualized construction cost. The non-cooperative Nash equilibrium is identified and compared to the social planner's optimal distribution of flood risk and damage cost throughout the system which results in the minimum total flood cost for the system. The social planner's optimal solution is not feasible without appropriate level of compensation for the transferred flood risk to guarantee and improve conditions for all parties. Therefore, cooperative game theory is then employed to develop an economically optimal design that can be implemented in practice. By examining the game in the reversible and irreversible decision making modes, the cost of decision making myopia is calculated to underline the significance of considering the externalities and evolution path of dynamic water resource problems for optimal decision making.
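
    A stylized version of the game, with all parameters invented: each bank picks a levee height, a lower opposite levee overtops first and transfers risk, and we compare the mutual-best-response (Nash) outcome with the planner's minimum of total system cost. With these particular numbers the planner accepts a lower levee on one bank, which is exactly the risk transfer that requires compensation in practice.

    ```python
    import itertools, math

    # Each bank chooses a levee height; annual cost = annualized construction
    # + expected flood damage. A lower opposite levee overtops first and
    # relieves the river, halving your own flood probability (the externality).
    heights = [2, 3, 4, 5]                 # m
    BUILD, DAMAGE = 60, 1000               # $k per m-year, $k if flooded

    def exceed(h):                         # annual overtopping probability
        return math.exp(-h / 1.5)

    def cost(hi, hj):
        relief = 0.5 if hj < hi else 1.0
        return BUILD * hi + DAMAGE * exceed(hi) * relief

    def best_response(hj):
        return min(heights, key=lambda hi: cost(hi, hj))

    nash = [(h1, h2) for h1, h2 in itertools.product(heights, repeat=2)
            if h1 == best_response(h2) and h2 == best_response(h1)]
    social = min(itertools.product(heights, repeat=2),
                 key=lambda p: cost(p[0], p[1]) + cost(p[1], p[0]))
    print("Nash:", nash, "planner:", social)
    ```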

  11. Affordable CZT SPECT with dose-time minimization (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Hugg, James W.; Harris, Brian W.; Radley, Ian

    2017-03-01

    PURPOSE Pixelated CdZnTe (CZT) detector arrays are used in molecular imaging applications that can enable precision medicine, including small-animal SPECT, cardiac SPECT, molecular breast imaging (MBI), and general purpose SPECT. The interplay of gamma camera, collimator, gantry motion, and image reconstruction determines image quality and dose-time-FOV tradeoffs. Both dose and exam time can be minimized without compromising diagnostic content. METHODS Integration of pixelated CZT detectors with advanced ASICs and readout electronics improves system performance. Because historically CZT was expensive, the first clinical applications were limited to small FOV. Radiation doses were initially high and exam times long. Advances have significantly improved efficiency of CZT-based molecular imaging systems and the cost has steadily declined. We have built a general purpose SPECT system using our 40 cm x 53 cm CZT gamma camera with 2 mm pixel pitch and characterized system performance. RESULTS Compared to NaI scintillator gamma cameras: intrinsic spatial resolution improved from 3.8 mm to 2.0 mm; energy resolution improved from 9.8% to <4 % at 140 keV; maximum count rate is <1.5 times higher; non-detection camera edges are reduced 3-fold. Scattered photons are greatly reduced in the photopeak energy window; image contrast is improved; and the optimal FOV is increased to the entire camera area. CONCLUSION Continual improvements in CZT detector arrays for molecular imaging, coupled with optimal collimator and image reconstruction, result in minimized dose and exam time. With CZT cost improving, affordable whole-body CZT general purpose SPECT is expected to enable precision medicine applications.

  12. An effective and optimal quality control approach for green energy manufacturing using design of experiments framework and evolutionary algorithm

    NASA Astrophysics Data System (ADS)

    Saavedra, Juan Alejandro

    Quality Control (QC) and Quality Assurance (QA) strategies vary significantly across industries in the manufacturing sector depending on the product being built. Such strategies range from simple statistical analysis and process controls to decision-making processes for reworking, repairing, or scrapping defective product. This study proposes an optimal QC methodology that introduces rework stations into the manufacturing process by identifying the number and location of these workstations. The factors considered in optimizing these stations are cost, cycle time, reworkability, and rework benefit. The goal is to minimize the cost and cycle time of the process while increasing reworkability and rework benefit. The specific objectives of this study are: (1) to propose a cost estimation model that includes energy consumption, and (2) to propose an optimal QC methodology that identifies the quantity and location of rework workstations. The cost estimation model includes energy consumption as part of the product direct cost and allows the user to recalculate the product direct cost as the quality sigma level of the process changes. This is a benefit because a complete cost estimation does not need to be performed every time the process yield changes. This cost estimation model is then used in the QC strategy optimization process. In order to propose a methodology that provides an optimal QC strategy, the possible factors that affect QC were evaluated. A screening Design of Experiments (DOE) performed on seven initial factors identified three significant ones and showed that one response variable was not required for the optimization process. A full factorial DOE was then performed to verify the significant factors obtained previously. The QC strategy optimization is performed with a Genetic Algorithm (GA), which evaluates many candidate solutions in order to obtain feasible optimal ones. The GA scores candidate solutions on cost, cycle time, reworkability, and rework benefit, and returns several possible solutions because this is a multi-objective optimization problem. The solutions are presented as chromosomes that state the number and location of the rework stations. The user analyzes these solutions and selects one by deciding which of the four factors is most important for the product being manufactured or the company's objective. The major contribution of this study is a methodology for identifying an effective and optimal QC strategy that incorporates the number and location of rework substations in order to minimize direct product cost and cycle time and to maximize reworkability and rework benefit.

  13. Hydro-economic optimization model for selecting least cost programs of measures at the river basin scale. Application to the implementation of the EU Water Framework Directive on the Orb river basin (France).

    NASA Astrophysics Data System (ADS)

    Girard, C.; Rinaudo, J. D.; Caballero, Y.; Pulido-Velazquez, M.

    2012-04-01

    This article presents a case study illustrating how an integrated hydro-economic model can be applied to optimize a program of measures (PoM) at the river basin level. By integrating hydrological, environmental, and economic aspects at a local scale, the model can usefully assist water policy decision-making processes. The model identifies the least-cost PoM that satisfies the predicted 2030 urban and agricultural water demands while meeting in-stream flow constraints. The PoM mainly consists of water saving and conservation measures at the different demands. It also includes measures mobilizing additional water resources from groundwater and inter-basin transfers, and improvements in reservoir operating rules. The flow constraints are defined to ensure a good status of the surface water bodies, as defined by the EU Water Framework Directive (WFD). The case study is conducted in the Orb river basin, a coastal basin in Southern France that faces significant population growth, changes in agricultural patterns, and limited water resources, and is classified as at risk of not meeting good status by 2015. Urban demand is calculated by type of water user at the municipality level for 2006 and projected to 2030 with user-specific scenarios. Agricultural water demand is estimated at the irrigation district (canton) level for 2000 and projected to 2030 under three agricultural development scenarios. The total annual cost of each measure has been calculated taking into account operation and maintenance costs as well as investment cost. A first optimization model was developed in GAMS (General Algebraic Modeling System) using Mixed Integer Linear Programming. The optimization selects the set of measures that minimizes the objective function, defined as the total cost of the applied measures, while meeting the demands and environmental constraints (minimum in-stream flows) for the 2030 time horizon. The first result is an optimized PoM for a drought year with a return period of five years, taken as the baseline scenario. A second step takes into account the impact of climate change on water demands and available resources. This allows decision makers to assess how the cost of the PoM evolves when the environmental constraints are tightened or relaxed, providing valuable input for understanding the opportunity costs and trade-offs in defining long-term environmental objectives, with climate as a major factor of change. Finally, the model will be used on an extended hydrological time series to study the costs and impacts of the PoM on the allocation of water resources. This will also allow investigating the uncertainties, the effect of the risk aversion of decision makers and users on system management, and the influence of the perfect foresight assumed in deterministic optimization. ACKNOWLEDGEMENTS: The study has been partially supported by the BRGM project Ouest-Hérault, the European Community 7th Framework Project GENESIS (n. 226536) on groundwater systems, and the Plan Nacional I+D+I 2008-2011 of the Spanish Ministry of Science and Innovation (sub-projects CGL2009-13238-C02-01 and CGL2009-13238-C02-02).
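
    The selection step can be illustrated with a toy least-cost search over candidate measures, all data invented: pick the subset whose combined water saving closes the demand-resource gap at minimum annualized cost. The GAMS model does this with MILP over a full hydrological network; exhaustive enumeration suffices for a handful of measures.

    ```python
    import itertools

    # Choose the subset of measures whose combined water saving closes the
    # 2030 demand-resource gap at minimum annualized cost. Data invented.
    measures = {                      # name: (annual cost kEUR, saving Mm3/yr)
        "urban leak repair":          (450, 3.0),
        "drip irrigation conversion": (700, 5.5),
        "reservoir rule update":      (120, 1.2),
        "groundwater mobilisation":   (900, 4.0),
        "inter-basin transfer":       (1500, 6.0),
    }
    GAP = 8.0                         # Mm3/yr that must be saved

    best = None
    for r in range(len(measures) + 1):
        for combo in itertools.combinations(measures, r):
            c = sum(measures[x][0] for x in combo)
            s = sum(measures[x][1] for x in combo)
            if s >= GAP and (best is None or c < best[0]):
                best = (c, combo)
    print(best)   # -> (1150, ('urban leak repair', 'drip irrigation conversion'))
    ```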

  14. Optimization in optical systems revisited: Beyond genetic algorithms

    NASA Astrophysics Data System (ADS)

    Gagnon, Denis; Dumont, Joey; Dubé, Louis

    2013-05-01

    Designing integrated photonic devices such as waveguides, beam-splitters and beam-shapers often requires optimization of a cost function over a large solution space. Metaheuristics - algorithms based on empirical rules for exploring the solution space - are specifically tailored to those problems. One of the most widely used metaheuristics is the standard genetic algorithm (SGA), based on the evolution of a population of candidate solutions. However, the stochastic nature of the SGA sometimes prevents access to the optimal solution. Our goal is to show that a parallel tabu search (PTS) algorithm is more suited to optimization problems in general, and to photonics in particular. PTS is based on several search processes using a pool of diversified initial solutions. To assess the performance of both algorithms (SGA and PTS), we consider an integrated photonics design problem, the generation of arbitrary beam profiles using a two-dimensional waveguide-based dielectric structure. The authors acknowledge financial support from the Natural Sciences and Engineering Research Council of Canada (NSERC).

  15. Economic analysis of transmission line engineering based on industrial engineering

    NASA Astrophysics Data System (ADS)

    Li, Yixuan

    2017-05-01

    Modern industrial engineering is applied to the technical and cost analysis of power transmission and transformation engineering, where it can effectively reduce investment costs. First, the power transmission project is analyzed economically. Based on a feasibility study of power transmission and transformation project investment, proposals for the company's cost management system are put forward through an economic analysis of the system's effects, and the cost management system is optimized. Then, a cost analysis of the power transmission and transformation project identifies new cost drivers arising during construction, which is of guiding significance for further improving the cost management of such projects. Finally, according to the current state of power transmission project cost management, concrete measures to reduce costs are given from the two aspects of system optimization and technology optimization.

  16. Quality and cost improvement of healthcare via complementary measurement and diagnosis of patient general health outcome using electronic health record data: research rationale and design.

    PubMed

    Stusser, Rodolfo J; Dickey, Richard A

    2013-12-01

    In this evolving 'third era of health', one of the goals of the US Health Care Reform Act is to facilitate the primary care physician's ability to better diagnose and manage the health outcomes of outpatients. That goal must include research on the complementary quantitative-qualitative assessment and rating of the patient's health status. This paper presents an overview of the rationale and design of a research program for a balanced measurement and diagnostic clinical decision support system (CDSS) for the changing general health status of the patient, including disease, using electronic health record (EHR) data. The rationale, objectives, health metric and diagnostic tools architecture, simulation-optimization, and clinical trials are outlined. The resources, time frames, costs, feasibility, healthcare benefits, and data integration of the project are delineated. The basis and components of the research program are described: an automated CDSS to complement the physician's clinical judgment, calculating a mathematical 'health equation' from each patient's EHR database and assisting physician-patient collaboration to diagnose and improve general health outcomes. The use of a multidimensional index, classification schemes, and causal-factor assessments to arrive at the EHR-based CDSS algorithm and software providing a general health level and state rating of the patient is proposed. Its application could provide a compass for the general practitioner's best choice and use of the myriad healthcare educational and technological options available, with lower costs for everyday clinical practice and research. It could advance the approaches and focus of the 'eras of diseases' toward the promising 'era of health' in an integrated, general approach to health.

  17. Optimal investment strategies and hedging of derivatives in the presence of transaction costs (Invited Paper)

    NASA Astrophysics Data System (ADS)

    Muratore-Ginanneschi, Paolo

    2005-05-01

    Investment strategies in multiplicative Markovian market models with transaction costs are defined using growth-optimal criteria. The optimal strategy is shown to consist in holding the amount of capital invested in stocks within an interval around an ideal optimal investment. The size of the holding interval is determined by the intensity of the transaction costs and the time horizon. The inclusion of financial derivatives in the models is also considered. All the results presented in this contribution were previously derived in collaboration with E. Aurell.

  18. Behavioral responses in structured populations pave the way to group optimality.

    PubMed

    Akçay, Erol; Van Cleve, Jeremy

    2012-02-01

    An unresolved controversy regarding social behaviors is whether natural selection can lead to behaviors that maximize fitness at the social-group level but are costly at the individual level. Except for the special case of groups of clones, we do not have a general understanding of how and when group-optimal behaviors evolve, especially when the behaviors in question are flexible. To address this question, we develop a general model that integrates behavioral plasticity in social interactions with the action of natural selection in structured populations. We find that group-optimal behaviors can evolve, even without clonal groups, if individuals exhibit appropriate behavioral responses to each other's actions. The evolution of such behavioral responses, in turn, is predicated on the nature of the proximate behavioral mechanisms. We model a particular class of proximate mechanisms, prosocial preferences, and find that such preferences evolve to sustain maximum group benefit under certain levels of relatedness and certain ecological conditions. Thus, our model demonstrates the fundamental interplay between behavioral responses and relatedness in determining the course of social evolution. We also highlight the crucial role of proximate mechanisms such as prosocial preferences in the evolution of behavioral responses and in facilitating evolutionary transitions in individuality.

  19. A General Iterative Shrinkage and Thresholding Algorithm for Non-convex Regularized Optimization Problems.

    PubMed

    Gong, Pinghua; Zhang, Changshui; Lu, Zhaosong; Huang, Jianhua Z; Ye, Jieping

    2013-01-01

    Non-convex sparsity-inducing penalties have recently received considerable attention in sparse learning. Recent theoretical investigations have demonstrated their superiority over convex counterparts in several sparse learning settings. However, solving the non-convex optimization problems associated with non-convex penalties remains a big challenge. A commonly used approach is Multi-Stage (MS) convex relaxation (or DC programming), which relaxes the original non-convex problem to a sequence of convex problems. This approach is usually not practical for large-scale problems because its computational cost is a multiple of solving a single convex problem. In this paper, we propose a General Iterative Shrinkage and Thresholding (GIST) algorithm to solve the non-convex optimization problem for a large class of non-convex penalties. The GIST algorithm iteratively solves a proximal operator problem, which in turn has a closed-form solution for many commonly used penalties. At each outer iteration of the algorithm, we use a line search initialized by the Barzilai-Borwein (BB) rule, which allows an appropriate step size to be found quickly. The paper also presents a detailed convergence analysis of the GIST algorithm. The efficiency of the proposed algorithm is demonstrated by extensive experiments on large-scale data sets.
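
    A minimal GIST-style loop for the special case of an L1 penalty, whose proximal operator is soft-thresholding; a non-convex penalty such as capped-L1 would swap in its own closed-form prox. The sketch uses BB step initialization and the monotone sufficient-decrease line search, on random problem data.

    ```python
    import numpy as np

    # GIST iteration for L1-regularized least squares: prox step with a
    # Barzilai-Borwein step-size guess plus a monotone line search.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((60, 100)); y = rng.standard_normal(60)
    lam, sigma = 0.1, 1e-4

    f = lambda x: 0.5 * np.sum((A @ x - y) ** 2)
    grad = lambda x: A.T @ (A @ x - y)
    F = lambda x: f(x) + lam * np.sum(np.abs(x))
    prox = lambda z, t: np.sign(z) * np.maximum(np.abs(z) - lam / t, 0.0)

    x = np.zeros(100); g = grad(x); t = 1.0
    for k in range(200):
        while True:                              # sufficient-decrease search
            x_new = prox(x - g / t, t)
            if F(x_new) <= F(x) - sigma * t / 2 * np.sum((x_new - x) ** 2):
                break
            t *= 2.0
        g_new = grad(x_new)
        dx, dg = x_new - x, g_new - g
        nrm = dx @ dx
        t = min(max((dx @ dg) / nrm, 1e-8), 1e8) if nrm > 0 else 1.0  # BB rule
        x, g = x_new, g_new
    print("objective:", round(F(x), 3), "nonzeros:", int(np.sum(x != 0)))
    ```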

  20. A Simple Label Switching Algorithm for Semisupervised Structural SVMs.

    PubMed

    Balamurugan, P; Shevade, Shirish; Sundararajan, S

    2015-10-01

    In structured output learning, obtaining labeled data for real-world applications is usually costly, while unlabeled examples are available in abundance. Semisupervised structured classification deals with a small number of labeled examples and a large number of unlabeled structured data. In this work, we consider semisupervised structural support vector machines with domain constraints. The optimization problem, which in general is not convex, contains the loss terms associated with the labeled and unlabeled examples, along with the domain constraints. We propose a simple optimization approach that alternates between solving a supervised learning problem and a constraint matching problem. Solving the constraint matching problem is difficult for structured prediction, and we propose an efficient and effective label switching method to solve it. The alternating optimization is carried out within a deterministic annealing framework, which helps in effective constraint matching and in avoiding poor local minima. The algorithm is simple and easy to implement. Further, it is suitable for any structured output learning problem where exact inference is available. Experiments on benchmark sequence labeling data sets and a natural language parsing data set show that the proposed approach, though simple, achieves comparable generalization performance.

  1. Optimization of atmospheric transport models on HPC platforms

    NASA Astrophysics Data System (ADS)

    de la Cruz, Raúl; Folch, Arnau; Farré, Pau; Cabezas, Javier; Navarro, Nacho; Cela, José María

    2016-12-01

    The performance and scalability of atmospheric transport models on high performance computing environments is often far from optimal for multiple reasons including, for example, sequential input and output, synchronous communications, work unbalance, memory access latency or lack of task overlapping. We investigate how different software optimizations and porting to non general-purpose hardware architectures improve code scalability and execution times considering, as an example, the FALL3D volcanic ash transport model. To this purpose, we implement the FALL3D model equations in the WARIS framework, a software designed from scratch to solve in a parallel and efficient way different geoscience problems on a wide variety of architectures. In addition, we consider further improvements in WARIS such as hybrid MPI-OMP parallelization, spatial blocking, auto-tuning and thread affinity. Considering all these aspects together, the FALL3D execution times for a realistic test case running on general-purpose cluster architectures (Intel Sandy Bridge) decrease by a factor between 7 and 40 depending on the grid resolution. Finally, we port the application to Intel Xeon Phi (MIC) and NVIDIA GPUs (CUDA) accelerator-based architectures and compare performance, cost and power consumption on all the architectures. Implications on time-constrained operational model configurations are discussed.

  2. A study of commuter airplane design optimization

    NASA Technical Reports Server (NTRS)

    Roskam, J.; Wyatt, R. D.; Griswold, D. A.; Hammer, J. L.

    1977-01-01

    Problems of commuter airplane configuration design were studied to effect a minimization of direct operating costs. Factors considered were the minimization of fuselage drag, methods of wing design, and the estimated drag of an airplane submerged in a propeller slipstream; all design criteria were studied under a set of fixed performance, mission, and stability constraints. Configuration design data were assembled for application by a computerized design methodology program similar to the NASA-Ames General Aviation Synthesis Program.

  3. Mathematical programming for the efficient allocation of health care resources.

    PubMed

    Stinnett, A A; Paltiel, A D

    1996-10-01

    Previous discussions of methods for the efficient allocation of health care resources subject to a budget constraint have relied on unnecessarily restrictive assumptions. This paper makes use of established optimization techniques to demonstrate that a general mathematical programming framework can accommodate much more complex information regarding returns to scale, partial and complete indivisibility and program interdependence. Methods are also presented for incorporating ethical constraints into the resource allocation process, including explicit identification of the cost of equity.

  4. [Optimal versus maximal safety of the blood transfusion chain in The Netherlands; results of a conference. College for Blood Transfusion of the Dutch Red Cross].

    PubMed

    van der Poel, C L; de Boer, J E; Reesink, H W; Sibinga, C T

    1998-02-07

    An invitational conference was held on September 11, 1996 by the Medical Advisory Commission to the Blood Transfusion Council of the Netherlands Red Cross, addressing the issue of 'maximal' versus 'optimal' safety measures for the blood supply. Invited were blood transfusion specialists, clinicians, representatives of patient interest groups, the Ministry and Inspectorate of Health, and members of parliament. Transfusion experts and clinicians were found to advocate an optimal course, following strategies of evidence-based medicine, cost-benefit analyses, and medical technology assessment. Patient groups depending on blood products, such as haemophilia patients, would rather opt for maximal safety. Insurance companies would choose likewise, preferring to exclude any risk where possible. Health care legal advisers would advise choosing optimal safety, but reserving funds to cover the difference from 'maximal safety' in case of litigation. Politicians and the general public would sooner choose maximal rather than optimal safety. The overall impression persists that however small the statistical risk may be, in the eyes of many it is unacceptable. This view is very stubborn.

  5. Algorithm For Optimal Control Of Large Structures

    NASA Technical Reports Server (NTRS)

    Salama, Moktar A.; Garba, John A.; Utku, Senol

    1989-01-01

    Cost of computation appears competitive with other methods. Problem to compute optimal control of forced response of structure with n degrees of freedom identified in terms of smaller number, r, of vibrational modes. Article begins with Hamilton-Jacobi formulation of mechanics and use of quadratic cost functional. Complexity reduced by alternative approach in which quadratic cost functional expressed in terms of control variables only. Leads to iterative solution of second-order time-integral matrix Volterra equation of second kind containing optimal control vector. Cost of algorithm, measured in terms of number of computations required, is of order of, or less than, cost of prior algorithms applied to similar problems.

  6. A system-level cost-of-energy wind farm layout optimization with landowner modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Le; MacDonald, Erin

    This work applies an enhanced levelized wind farm cost model, including landowner remittance fees, to determine optimal turbine placements under three landowner participation scenarios and two land-plot shapes. Instead of assuming a continuous piece of land is available for the wind farm construction, as in most layout optimizations, the problem formulation represents landowner participation scenarios as a binary string variable, along with the number of turbines. The cost parameters and model are a combination of models from the National Renewable Energy Laboratory (NREL), Lawrence Berkeley National Laboratory, and Windustry. The system-level cost-of-energy (COE) optimization model is also tested under two land-plot shapes: equally-sized square land plots and unequal rectangular land plots. The optimal COE results are compared to actual COE data and found to be realistic. The results show that landowner remittances account for approximately 10% of farm operating costs across all cases. Irregular land-plot shapes are easily handled by the model. We find that larger land plots do not necessarily receive higher remittance fees. The model can help site developers identify the most crucial land plots for project success and the optimal positions of turbines, with realistic estimates of costs and profitability. (C) 2013 Elsevier Ltd. All rights reserved.

  7. A Method of Dynamic Extended Reactive Power Optimization in Distribution Network Containing Photovoltaic-Storage System

    NASA Astrophysics Data System (ADS)

    Wang, Wu; Huang, Wei; Zhang, Yongjun

    2018-03-01

    The grid integration of a Photovoltaic-Storage System (PSS) introduces some uncertain factors into the network. In order to make full use of the adjusting ability of the PSS, this paper puts forward a reactive power optimization model whose objective function is based on power loss and device adjusting cost, including the energy storage adjusting cost. A Cataclysmic Genetic Algorithm is used to solve the optimization problem, and comparison with other optimization methods shows that the proposed method of dynamic extended reactive power optimization enhances the effect of reactive power optimization, reducing power loss and device adjusting cost while also taking voltage safety into consideration.

  8. Development of coordination system model on single-supplier multi-buyer for multi-item supply chain with probabilistic demand

    NASA Astrophysics Data System (ADS)

    Olivia, G.; Santoso, A.; Prayogo, D. N.

    2017-11-01

    Nowadays, the level of competition between supply chains is getting tighter, and a good coordination system between supply chain members is crucial to addressing this issue. This paper focuses on developing a coordination model between a single supplier and multiple buyers in a supply chain. The proposed optimization model determines the optimal number of deliveries from the supplier to the buyers in order to minimize the total cost over a planning horizon. The components of the total supply chain cost are transportation costs, the handling costs of the supplier and buyers, and stock-out costs. In the proposed model, the supplier can supply various types of items to retailers whose item demand patterns are probabilistic. A sensitivity analysis of the proposed model was conducted to test the effects of changes in the transportation costs, handling costs, and production capacity of the supplier. The results show that changes in the transportation costs, handling costs, and production capacity significantly influence the optimal number of deliveries of each item to the buyers.

  9. Optimal design of green and grey stormwater infrastructure for small urban catchment based on life-cycle cost-effectiveness analysis

    NASA Astrophysics Data System (ADS)

    Yang, Y.; Chui, T. F. M.

    2016-12-01

    Green infrastructure (GI) has been identified as a sustainable and environmentally friendly alternative to conventional grey stormwater infrastructure. Commonly used GI (e.g. green roofs, bioretention, porous pavement) can provide multifunctional benefits, e.g. mitigation of urban heat island effects and improvements in air quality. Therefore, to optimize the design of GI and grey drainage infrastructure, it is essential to account for their benefits together with their costs. In this study, a comprehensive simulation-optimization modelling framework that considers the economic and hydro-environmental aspects of GI and grey infrastructure for small urban catchment applications is developed. Several modelling tools (the EPA SWMM model and the WERF BMP and LID Whole Life Cycle Cost Modelling Tools) and optimization solvers are coupled to assess the life-cycle cost-effectiveness of GI and grey infrastructure and to develop optimal stormwater drainage solutions. A typical residential lot in New York City is examined as a case study. The life-cycle cost-effectiveness of various GI and grey infrastructure options is first examined at different investment levels. The results, together with the catchment parameters, are then provided to the optimization solvers to derive the optimal investment in, and contributing area of, each type of stormwater control. The relationship between investment and optimized environmental benefit is found to be nonlinear. The optimized drainage solutions demonstrate that grey infrastructure is preferred at low total investments, while more GI should be adopted at high investments. The sensitivity of the optimized solutions to the prices of the stormwater controls is evaluated and found to be highly associated with their utilization in the base optimization case. The overall simulation-optimization framework can easily be applied to other sites worldwide and further developed into a powerful decision support system.

  10. Periodic Application of Stochastic Cost Optimization Methodology to Achieve Remediation Objectives with Minimized Life Cycle Cost

    NASA Astrophysics Data System (ADS)

    Kim, U.; Parker, J.

    2016-12-01

    Many dense non-aqueous phase liquid (DNAPL) contaminated sites in the U.S. are reported as "remediation in progress" (RIP). However, the cost to complete (CTC) remediation at these sites is highly uncertain and in many cases, the current remediation plan may need to be modified or replaced to achieve remediation objectives. This study evaluates the effectiveness of iterative stochastic cost optimization that incorporates new field data for periodic parameter recalibration to incrementally reduce prediction uncertainty and implement remediation design modifications as needed to minimize the life cycle cost (i.e., CTC). This systematic approach, using the Stochastic Cost Optimization Toolkit (SCOToolkit), enables early identification and correction of problems to stay on track for completion while minimizing the expected (i.e., probability-weighted average) CTC. This study considers a hypothetical site involving multiple DNAPL sources in an unconfined aquifer using thermal treatment for source reduction and electron donor injection for dissolved plume control. The initial design is based on stochastic optimization using model parameters and their joint uncertainty based on calibration to site characterization data. The model is periodically recalibrated using new monitoring data and performance data for the operating remediation systems. Projected future performance using the current remediation plan is assessed and reoptimization of operational variables for the current system or consideration of alternative designs are considered depending on the assessment results. We compare remediation duration and cost for the stepwise re-optimization approach with single stage optimization as well as with a non-optimized design based on typical engineering practice.

  11. The real cost of training health professionals in Australia: it costs as much to build a dietician workforce as a dental workforce

    PubMed Central

    Marsh, Claire; Heyes, Rob

    2016-01-01

    Objectives We explored the real cost of training the workforce in a range of primary health care professions in Australia with a focus on the impact of retention to contribute to the debate on how best to achieve the optimal health workforce mix. Methods The cost to train an entry-level health professional across 12 disciplines was derived from university fees, payment for clinical placements and, where relevant, cost of internship, adjusted for student drop-out. Census data were used to identify the number of qualified professionals working in their profession over a working life and to model expected years of practice by discipline. Data were combined to estimate the mean cost of training a health professional per year of service in their occupation. Results General medical graduates were the most expensive to train at $451,000 per completing student and a mean cost of $18,400 per year of practice (expected 24.5 years in general practice), while dentistry also had a high training cost of $352,180 but an estimated cost of $11,140 per year of practice (based on an expected 31.6 years in practice). Training costs are similar for dieticians and podiatrists, but because of differential workforce retention (mean 14.9 vs 31.5 years), the cost of training per year of clinical practice is twice as high for dieticians ($10,300 vs. $5200), only 8% lower than that for dentistry. Conclusions Return on investment in training across professions is highly variable, with expected time in the profession as important as the direct training cost. These results can indicate where increased retention and/or attracting trained professionals to return to practice should be the focus of any supply expansion versus increasing the student cohort. PMID:28429975
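
    The headline figures follow from a single division, training cost per completing student over expected years of practice, which reproduces the paper's rounded numbers:

    ```python
    # Cost per year of service = training cost per completing student
    # divided by expected years of practice (figures from the abstract).
    profiles = {"general medicine": (451_000, 24.5),
                "dentistry":        (352_180, 31.6)}
    for name, (cost, years) in profiles.items():
        print(f"{name}: ${cost / years:,.0f} per year of practice")
    # general medicine: $18,408  (~$18,400 after rounding)
    # dentistry:        $11,145  (~$11,140 after rounding)
    ```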

  12. The price of conserving avian phylogenetic diversity: a global prioritization approach.

    PubMed

    Nunes, Laura A; Turvey, Samuel T; Rosindell, James

    2015-02-19

    The combination of rapid biodiversity loss and limited funds available for conservation represents a major global concern. While there are many approaches for conservation prioritization, few are framed as financial optimization problems. We use recently published avian data to conduct a global analysis of the financial resources required to conserve different quantities of phylogenetic diversity (PD). We introduce a new prioritization metric (ADEPD) that After Downlisting a species gives the Expected Phylogenetic Diversity at some future time. Unlike other metrics, ADEPD considers the benefits to future PD associated with downlisting a species (e.g. moving from Endangered to Vulnerable in the International Union for Conservation of Nature Red List). Combining ADEPD scores with data on the financial cost of downlisting different species provides a cost-benefit prioritization approach for conservation. We find that under worst-case spending $3915 can save 1 year of PD, while under optimal spending $1 can preserve over 16.7 years of PD. We find that current conservation spending patterns are only expected to preserve one quarter of the PD that optimal spending could achieve with the same total budget. Maximizing PD is only one approach within the wider goal of biodiversity conservation, but our analysis highlights more generally the danger involved in uninformed spending of limited resources.

  13. Improved mine blast algorithm for optimal cost design of water distribution systems

    NASA Astrophysics Data System (ADS)

    Sadollah, Ali; Guen Yoo, Do; Kim, Joong Hoon

    2015-12-01

    The design of water distribution systems is a large class of combinatorial, nonlinear optimization problems with complex constraints such as conservation of mass and energy equations. Since feasible solutions are often extremely complex, traditional optimization techniques are insufficient. Recently, metaheuristic algorithms have been applied to this class of problems because they are highly efficient. In this article, a recently developed optimizer called the mine blast algorithm (MBA) is considered. The MBA is improved and coupled with the hydraulic simulator EPANET to find the optimal cost design for water distribution systems. The performance of the improved mine blast algorithm (IMBA) is demonstrated using the well-known Hanoi, New York tunnels and Balerma benchmark networks. Optimization results obtained using IMBA are compared to those using MBA and other optimizers in terms of their minimum construction costs and convergence rates. For the complex Balerma network, IMBA offers the cheapest network design compared to other optimization algorithms.
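
    A sketch of the optimization loop with the problem-specific parts stubbed out: a plain random search stands in for the (improved) mine blast algorithm, and a closed-form Hazen-Williams head-loss check stands in for the EPANET simulation. Network data and pipe prices are invented.

    ```python
    import math, random
    random.seed(0)

    # Two pipes in series from a source to a demand node; pick commercial
    # diameters to minimize cost subject to a residual-pressure constraint.
    diam_cost = {0.10: 30, 0.15: 50, 0.20: 80, 0.25: 120, 0.30: 170}  # $/m
    pipes = [(500, 0.020), (400, 0.015)]      # (length m, flow m3/s)
    HEAD, MIN_RESIDUAL, C = 40.0, 25.0, 130   # source head, limit, HW coeff

    def headloss(L, Q, D):                    # Hazen-Williams, SI units
        return 10.67 * L * Q ** 1.852 / (C ** 1.852 * D ** 4.87)

    def evaluate(design):
        c = sum(diam_cost[d] * L for d, (L, _) in zip(design, pipes))
        hl = sum(headloss(L, Q, d) for d, (L, Q) in zip(design, pipes))
        return c, HEAD - hl >= MIN_RESIDUAL

    best = None
    for _ in range(2000):                     # crude global search
        design = [random.choice(list(diam_cost)) for _ in pipes]
        c, feasible = evaluate(design)
        if feasible and (best is None or c < best[0]):
            best = (c, design)
    print(best)    # cheapest feasible diameter combination found
    ```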

  14. Quadratic constrained mixed discrete optimization with an adiabatic quantum optimizer

    NASA Astrophysics Data System (ADS)

    Chandra, Rishabh; Jacobson, N. Tobias; Moussa, Jonathan E.; Frankel, Steven H.; Kais, Sabre

    2014-07-01

    We extend the family of problems that may be implemented on an adiabatic quantum optimizer (AQO). When a quadratic optimization problem has at least one set of discrete controls and the constraints are linear, we call this a quadratic constrained mixed discrete optimization (QCMDO) problem. QCMDO problems are NP-hard, and no efficient classical algorithm for their solution is known. Included in the class of QCMDO problems are combinatorial optimization problems constrained by a linear partial differential equation (PDE) or system of linear PDEs. An essential complication commonly encountered in solving this type of problem is that the linear constraint may introduce many intermediate continuous variables into the optimization while the computational cost grows exponentially with problem size. We resolve this difficulty by developing a constructive mapping from QCMDO to quadratic unconstrained binary optimization (QUBO) such that the size of the QUBO problem depends only on the number of discrete control variables. With a suitable embedding, taking into account the physical constraints of the realizable coupling graph, the resulting QUBO problem can be implemented on an existing AQO. The mapping itself is efficient, scaling cubically with the number of continuous variables in the general case and linearly in the PDE case if an efficient preconditioner is available.

  15. Cost-effectiveness Analysis with Influence Diagrams.

    PubMed

    Arias, M; Díez, F J

    2015-01-01

    Cost-effectiveness analysis (CEA) is used increasingly in medicine to determine whether the health benefit of an intervention is worth the economic cost. Decision trees, the standard decision modeling technique for non-temporal domains, can only perform CEA for very small problems. Our objective was to develop a method for CEA in problems involving several dozen variables. We explain how to build influence diagrams (IDs) that explicitly represent cost and effectiveness. We propose an algorithm for evaluating cost-effectiveness IDs directly, i.e., without expanding an equivalent decision tree. The evaluation of an ID returns a set of intervals for the willingness to pay, separated by cost-effectiveness thresholds, and, for each interval, the cost, the effectiveness, and the optimal intervention. The algorithm that evaluates the ID directly is in general much more efficient than the brute-force method, which is in turn more efficient than the expansion of an equivalent decision tree. Using OpenMarkov, an open-source software tool that implements this algorithm, we have been able to perform CEAs on several IDs whose equivalent decision trees contain millions of branches. IDs can perform CEA on large problems that cannot be analyzed with decision trees.
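
    The shape of the output described here, willingness-to-pay intervals separated by thresholds with an optimal intervention in each, can be reproduced for a toy problem by maximizing net monetary benefit (NMB = WTP * effectiveness - cost) over a grid of WTP values. The interventions below are invented.

    ```python
    import numpy as np

    # For each willingness-to-pay (WTP) value, pick the intervention with
    # the highest net monetary benefit; print the WTP thresholds where the
    # optimum switches. Interventions and numbers are illustrative.
    interventions = {               # name: (cost $, effectiveness QALYs)
        "no treatment": (0, 5.0),
        "drug A":       (20_000, 5.8),
        "drug B":       (70_000, 6.1),
    }
    def nmb(name, wtp):
        cost, eff = interventions[name]
        return wtp * eff - cost

    prev = None
    for wtp in np.arange(0, 200_001, 100):
        best = max(interventions, key=lambda n: nmb(n, wtp))
        if best != prev:
            print(f"WTP >= ${wtp:,.0f}: {best}")
            prev = best
    ```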

  16. Cost-effectiveness analysis of policy instruments for greenhouse gas emission mitigation in the agricultural sector.

    PubMed

    Bakam, Innocent; Balana, Bedru Babulo; Matthews, Robin

    2012-12-15

    Market-based policy instruments to reduce greenhouse gas (GHG) emissions are generally considered more appropriate than command-and-control tools. However, the omission of transaction costs from policy evaluations and decision-making processes may result in inefficient public resource allocation and sub-optimal policy choices and outcomes. This paper aims to assess the relative cost-effectiveness of market-based GHG mitigation policy instruments in the agricultural sector by incorporating transaction costs. Assuming that farmers' responses to mitigation policies are economically rational, an individual-based model is developed to study the relative performance of an emission tax, a nitrogen fertilizer tax, and a carbon trading scheme, using farm data from the Scottish farm account survey (FAS) and emissions and transaction cost data from a literature metadata survey. Model simulations show that none of the three schemes can be considered the most cost-effective in all circumstances. Cost-effectiveness depends both on the tax rate and on the amount of free permits allocated to farmers. However, the emissions trading scheme appears to outperform the other two policies in realistic scenarios. Copyright © 2012 Elsevier Ltd. All rights reserved.

  17. Optimal control of epidemic information dissemination over networks.

    PubMed

    Chen, Pin-Yu; Cheng, Shin-Ming; Chen, Kwang-Cheng

    2014-12-01

    Information dissemination control is of crucial importance to facilitate reliable and efficient data delivery, especially in networks consisting of time-varying links or heterogeneous links. Since the abstraction of information dissemination much resembles the spread of epidemics, epidemic models are utilized to characterize the collective dynamics of information dissemination over networks. From a systematic point of view, we aim to explore the optimal control policy for information dissemination given that the control capability is a function of its distribution time, which is a more realistic model in many applications. The main contributions of this paper are to provide an analytically tractable model for information dissemination over networks, to solve the optimal control signal distribution time for minimizing the accumulated network cost via dynamic programming, and to establish a parametric plug-in model for information dissemination control. In particular, we evaluate its performance in mobile and generalized social networks as typical examples.

  18. Traveling-Wave Tube Efficiency Enhancement

    NASA Technical Reports Server (NTRS)

    Dayton, James A., Jr.

    2011-01-01

    Traveling-wave tubes (TWTs) are used to amplify microwave communication signals on virtually all NASA and commercial spacecraft. Because TWTs are a primary power user, increasing their power efficiency is important for reducing spacecraft weight and cost. NASA Glenn Research Center has played a major role in increasing TWT efficiency over the last thirty years. In particular, two types of efficiency optimization algorithms have been developed for coupled-cavity TWTs. The first is the phase-adjusted taper, which was used to increase the RF power from 420 to 1000 watts and the RF efficiency from 9.6% to 22.6% for a Ka-band (29.5 GHz) TWT, a record efficiency at this frequency. The second is an optimization algorithm based on simulated annealing. This improved algorithm is more general: it can be used to optimize efficiency over a frequency bandwidth and to provide a robust design for very high frequency TWTs in which dimensional tolerance variations are significant.

  19. Microgravity vibration isolation: An optimal control law for the one-dimensional case

    NASA Technical Reports Server (NTRS)

    Hampton, Richard D.; Grodsinsky, Carlos M.; Allaire, Paul E.; Lewis, David W.; Knospe, Carl R.

    1991-01-01

    Certain experiments contemplated for space platforms must be isolated from the accelerations of the platform. An optimal active control is developed for microgravity vibration isolation, using constant state feedback gains (identical to those obtained from the Linear Quadratic Regulator (LQR) approach) along with constant feedforward gains. The quadratic cost function for this control algorithm effectively weights external accelerations of the platform disturbances by a factor proportional to (1/omega)^4. Low frequency accelerations are attenuated by greater than two orders of magnitude. The control relies on the absolute position and velocity feedback of the experiment and the absolute position and velocity feedforward of the platform, and generally derives the stability robustness characteristics guaranteed by the LQR approach to optimality. The method as derived is extendable to the case in which only the relative positions and velocities and the absolute accelerations of the experiment and space platform are available.

  20. Aerodynamic optimization by simultaneously updating flow variables and design parameters

    NASA Technical Reports Server (NTRS)

    Rizk, M. H.

    1990-01-01

    The application of conventional optimization schemes to aerodynamic design problems leads to inner-outer iterative procedures that are very costly. An alternative approach is presented based on the idea of updating the flow variable iterative solutions and the design parameter iterative solutions simultaneously. Two schemes based on this idea are applied to problems of correcting wind tunnel wall interference and optimizing advanced propeller designs. The first of these schemes is applicable to a limited class of two-design-parameter problems with an equality constraint. It requires the computation of a single flow solution. The second scheme is suitable for application to general aerodynamic problems. It requires the computation of several flow solutions in parallel. In both schemes, the design parameters are updated as the iterative flow solutions evolve. Computations are performed to test the schemes' efficiency, accuracy, and sensitivity to variations in the computational parameters.

  1. Microgravity vibration isolation: An optimal control law for the one-dimensional case

    NASA Technical Reports Server (NTRS)

    Hampton, R. D.; Grodsinsky, C. M.; Allaire, P. E.; Lewis, D. W.; Knospe, C. R.

    1991-01-01

    Certain experiments contemplated for space platforms must be isolated from the accelerations of the platforms. An optimal active control is developed for microgravity vibration isolation, using constant state feedback gains (identical to those obtained from the Linear Quadratic Regulator (LQR) approach) along with constant feedforward (preview) gains. The quadratic cost function for this control algorithm effectively weights external accelerations of the platform disturbances by a factor proportional to (1/omega)^4. Low frequency accelerations (less than 50 Hz) are attenuated by greater than two orders of magnitude. The control relies on the absolute position and velocity feedback of the experiment and the absolute position and velocity feedforward of the platform, and generally derives the stability robustness characteristics guaranteed by the LQR approach to optimality. The method as derived is extendable to the case in which only the relative positions and velocities and the absolute accelerations of the experiment and space platform are available.

  2. Optimization and Planning of Emergency Evacuation Routes Considering Traffic Control

    PubMed Central

    Zhang, Lijun; Wang, Zhaohua

    2014-01-01

    Emergencies, especially major ones, occur suddenly, randomly, and unpredictably, and generally bring great harm to people's lives and the economy. Therefore, governments and many professionals devote themselves to taking effective measures and providing optimal evacuation plans. This paper establishes two different emergency evacuation models on the basis of the maximum flow model (MFM) and the minimum-cost maximum flow model (MC-MFM), and proposes corresponding algorithms for the evacuation from one source node to one designated destination (one-to-one evacuation). We then extend our evacuation model from one source node to many designated destinations (one-to-many evacuation). Finally, we present a case analysis of evacuation optimization and planning in Beijing, and obtain the desired evacuation routes and effective traffic control measures from the perspective of sufficiency and practicability. Both analytical and numerical results support that our models are feasible and practical. PMID:24991636
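
    The one-to-one evacuation idea can be illustrated with an off-the-shelf minimum-cost maximum-flow solver. The sketch below uses networkx on a toy road network; the node names, capacities (vehicles per hour), and travel costs are invented for illustration and are not the paper's models or Beijing data.

      import networkx as nx

      # Toy evacuation network: route as many evacuees as possible from the
      # source to the shelter, then among maximum flows pick the cheapest one.
      G = nx.DiGraph()
      G.add_edge("source", "a", capacity=40, weight=5)
      G.add_edge("source", "b", capacity=30, weight=4)
      G.add_edge("a", "shelter", capacity=25, weight=6)
      G.add_edge("a", "b", capacity=20, weight=2)
      G.add_edge("b", "shelter", capacity=45, weight=7)

      flow = nx.max_flow_min_cost(G, "source", "shelter")
      print("total travel cost:", nx.cost_of_flow(G, flow))
      for u, targets in flow.items():
          for v, f in targets.items():
              if f:
                  print(f"{u} -> {v}: {f}")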

  3. Communication: Calculation of interatomic forces and optimization of molecular geometry with auxiliary-field quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Motta, Mario; Zhang, Shiwei

    2018-05-01

    We propose an algorithm for accurate, systematic, and scalable computation of interatomic forces within the auxiliary-field quantum Monte Carlo (AFQMC) method. The algorithm relies on the Hellmann-Feynman theorem and incorporates Pulay corrections in the presence of atomic orbital basis sets. We benchmark the method for small molecules by comparing the computed forces with the derivatives of the AFQMC potential energy surface and by direct comparison with other quantum chemistry methods. We then perform geometry optimizations using the steepest descent algorithm in larger molecules. With realistic basis sets, we obtain equilibrium geometries in agreement, within statistical error bars, with experimental values. The increase in computational cost for computing forces in this approach is only a small prefactor over that of calculating the total energy. This paves the way for a general and efficient approach for geometry optimization and molecular dynamics within AFQMC.

  4. Integrated, multidisciplinary care for hand eczema: design of a randomized controlled trial and cost-effectiveness study

    PubMed Central

    2009-01-01

    Background The individual and societal burden of hand eczema is high. The literature indicates that moderate to severe hand eczema is a disease with a poor prognosis. Many patients are hampered in their daily activities, including work. High costs are related to high medical consumption, productivity loss and sick leave. Usual care is suboptimal, due to a lack of optimal instruction, coordination of care, and communication with the general practitioner/occupational physician and people involved at the workplace. Therefore, an integrated, multidisciplinary intervention involving a dermatologist, a care manager, a specialized nurse and a clinical occupational physician was developed. This paper describes the design of a study to investigate the effectiveness and cost-effectiveness of integrated care for hand eczema by a multidisciplinary team, coordinated by a care manager, consisting of instruction on avoiding relevant contact factors, both in the occupational and in the private environment, optimal skin care and treatment, compared to usual, dermatologist-led care. Methods The study is a multicentre, randomized, controlled trial with an economic evaluation alongside. The study population consists of patients with chronic, moderate to severe hand eczema, who visit an outpatient clinic of one of the five participating hospitals (three university and two general). Integrated, multidisciplinary care, coordinated by a care manager, including allergo-dermatological evaluation by a dermatologist, occupational intervention by a clinical occupational physician, and counselling by a specialized nurse on optimizing topical treatment and skin care, will be compared with usual care by a dermatologist. The primary outcome measure is the cumulative difference in reduction of the clinical severity score HECSI between the groups. Secondary outcome measures are the patient's global assessment, specific quality of life with regard to the hands, generic quality of life, sick leave and patient satisfaction. An economic evaluation will be conducted alongside the RCT. Direct and indirect costs will be measured. Outcome measures will be assessed at baseline and after 4, 12, 26 and 52 weeks. All statistical analyses will be performed on the intention-to-treat principle. In addition, per-protocol analyses will be carried out. Discussion To improve the societal participation of patients with moderate to severe hand eczema, an integrated care intervention was developed involving both person-related and environmental factors. Such integrated care is expected to improve the patients' clinical signs and quality of life, and to reduce sick leave and medical costs. Results will become available in 2011. PMID:19951404

  5. Cost Scaling of a Real-World Exhaust Waste Heat Recovery Thermoelectric Generator: A Deeper Dive

    NASA Technical Reports Server (NTRS)

    Hendricks, Terry J.; Yee, Shannon; LeBlanc, Saniya

    2015-01-01

    Cost is as important as power density or efficiency for the adoption of waste heat recovery thermoelectric generators (TEG) in many transportation and industrial energy recovery applications. In many cases the system design that minimizes cost (e.g., the $/W value) can be very different from the design that maximizes the system's efficiency or power density, and it is important to understand the relationship between those designs to optimize TEG performance-cost compromises. Expanding on recent cost analysis work and using more detailed system modeling, an enhanced cost scaling analysis of a waste heat recovery thermoelectric generator with more detailed, coupled treatment of the heat exchangers has been performed. In this analysis, the effect of the heat lost to the environment and updated relationships between the hot-side and cold-side conductances that maximize power output are considered. This coupled thermal and thermoelectric treatment of the exhaust waste heat recovery thermoelectric generator yields modified cost scaling and design optimization equations, which are now strongly dependent on the heat leakage fraction, exhaust mass flow rate, and heat exchanger effectiveness. This work shows that heat exchanger costs most often dominate the overall TE system costs, that it is extremely difficult to escape this regime, and that in order to achieve TE system costs of $1/W it is necessary to achieve heat exchanger costs of $1/(W/K). Minimum TE system costs per watt generally coincide with maximum power points, but Preferred TE Design Regimes are identified where there is little cost penalty for moving into regions of higher efficiency and slightly lower power outputs. These regimes are closely tied to previously identified low-cost design regimes. This work shows that the optimum fill factor Fopt minimizing system costs decreases as heat losses increase, and increases as exhaust mass flow rate and heat exchanger effectiveness increase. These findings have profound implications for the design and operation of various thermoelectric (TE) waste heat recovery systems. This work highlights the importance of heat exchanger costs on the overall TEG system costs, quantifies the possible TEG performance-cost domain space based on heat exchanger effects, and provides a focus for future system research and development efforts.

  6. Fuzzy multi-objective optimization case study based on an anaerobic co-digestion process of food waste leachate and piggery wastewater.

    PubMed

    Choi, Angelo Earvin Sy; Park, Hung Suck

    2018-06-20

    This paper presents the development and evaluation of fuzzy multi-objective optimization for decision-making, applied to the process optimization of anaerobic digestion (AD). Operating cost criteria, a fundamental research gap in previous AD analyses, were integrated into the case study. In this study, the mixing ratio of food waste leachate (FWL) and piggery wastewater (PWW) and the calcium carbonate (CaCO3) and sodium chloride (NaCl) concentrations were optimized to enhance methane production while minimizing operating cost. The results indicated a maximum of 63.3% satisfaction for both methane production and operating cost under the following optimal conditions: mixing ratio (FWL:PWW) of 1.4, CaCO3 of 2970.5 mg/L, and NaCl of 2.7 g/L. In the multi-objective optimization, the specific methane yield (SMY) was 239.0 mL CH4/g VS added, while a 41.2% volatile solids reduction (VSR) was obtained at an operating cost of 56.9 US$/ton. In comparison, a previous optimization study that utilized the response surface methodology reported an SMY, VSR, and operating cost of 310 mL/g, 54%, and 83.2 US$/ton, respectively. The results demonstrate the potential of multi-objective fuzzy optimization for practical decision-making in the process optimization of AD. Copyright © 2018 Elsevier Ltd. All rights reserved.
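
    A hedged sketch of the max-min style of fuzzy multi-objective optimization described above: each objective receives a linear satisfaction (membership) degree between a worst and a best level, and the overall satisfaction to be maximized is the smaller of the two. The response models, bounds, and aspiration levels below are hypothetical stand-ins, not the fitted models from the paper.

      import numpy as np
      from scipy.optimize import differential_evolution

      def methane(x):   # specific methane yield, mL CH4/g VS (toy model)
          ratio, caco3, nacl = x
          return 300 - 40*(ratio - 1.5)**2 - 1e-5*(caco3 - 3000)**2 - 5*(nacl - 2.5)**2

      def cost(x):      # operating cost, US$/ton (toy model)
          ratio, caco3, nacl = x
          return 40 + 0.01*caco3 + 4*nacl

      def satisfaction(x):
          mu_ch4 = np.clip((methane(x) - 150) / (310 - 150), 0, 1)   # higher is better
          mu_cost = np.clip((90 - cost(x)) / (90 - 40), 0, 1)        # lower is better
          return min(mu_ch4, mu_cost)                                # overall degree

      res = differential_evolution(lambda x: -satisfaction(x),
                                   bounds=[(0.5, 3.0), (0, 5000), (0, 5)], seed=1)
      print("overall satisfaction:", -res.fun, "at (ratio, CaCO3, NaCl) =", res.x)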

  7. Search-based optimization

    NASA Technical Reports Server (NTRS)

    Wheeler, Ward C.

    2003-01-01

    The problem of determining the minimum-cost hypothetical ancestral sequences for a given cladogram is known to be NP-complete (Wang and Jiang, 1994). Traditionally, point estimations of hypothetical ancestral sequences have been used to gain heuristic upper bounds on cladogram cost. These include procedures with such diverse approaches as non-additive optimization of multiple sequence alignment, direct optimization (Wheeler, 1996), and fixed-state character optimization (Wheeler, 1999). A method is proposed here which, by extending fixed-state character optimization, replaces the estimation process with a search. This form of optimization examines a diversity of potential state solutions for cost-efficient hypothetical ancestral sequences and can result in substantially more parsimonious cladograms. Additionally, such an approach can be applied to other NP-complete phylogenetic optimization problems such as genomic break-point analysis. © 2003 The Willi Hennig Society. Published by Elsevier Science (USA). All rights reserved.

  8. Class-specific Error Bounds for Ensemble Classifiers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prenger, R; Lemmond, T; Varshney, K

    2009-10-06

    The generalization error, or probability of misclassification, of ensemble classifiers has been shown to be bounded above by a function of the mean correlation between the constituent (i.e., base) classifiers and their average strength. This bound suggests that increasing the strength and/or decreasing the correlation of an ensemble's base classifiers may yield improved performance under the assumption of equal error costs. However, this and other existing bounds do not directly address application spaces in which error costs are inherently unequal. For applications involving binary classification, Receiver Operating Characteristic (ROC) curves, performance curves that explicitly trade off false alarms and missed detections, are often utilized to support decision making. To address performance optimization in this context, we have developed a lower bound for the entire ROC curve that can be expressed in terms of the class-specific strength and correlation of the base classifiers. We present empirical analyses demonstrating the efficacy of these bounds in predicting relative classifier performance. In addition, we specify performance regions of the ROC curve that are naturally delineated by the class-specific strengths of the base classifiers and show that each of these regions can be associated with a unique set of guidelines for performance optimization of binary classifiers within unequal error cost regimes.

  9. OPTIMAL AIRCRAFT TRAJECTORIES FOR SPECIFIED RANGE

    NASA Technical Reports Server (NTRS)

    Lee, H.

    1994-01-01

    For an aircraft operating over a fixed range, the operating costs are basically a sum of fuel cost and time cost. While minimum fuel and minimum time trajectories are relatively easy to calculate, the determination of a minimum cost trajectory can be a complex undertaking. This computer program was developed to optimize trajectories with respect to a cost function based on a weighted sum of fuel cost and time cost. As a research tool, the program could be used to study various characteristics of optimum trajectories and their comparison to standard trajectories. It might also be used to generate a model for the development of an airborne trajectory optimization system. The program could be incorporated into an airline flight planning system, with optimum flight plans determined at takeoff time for the prevailing flight conditions. The use of trajectory optimization could significantly reduce the cost for a given aircraft mission. The algorithm incorporated in the program assumes that a trajectory consists of climb, cruise, and descent segments. The optimization of each segment is not done independently, as in classical procedures, but is performed in a manner which accounts for interaction between the segments. This is accomplished by the application of optimal control theory. The climb and descent profiles are generated by integrating a set of kinematic and dynamic equations, where the total energy of the aircraft is the independent variable. At each energy level of the climb and descent profiles, the air speed and power setting necessary for an optimal trajectory are determined. The variational Hamiltonian of the problem consists of the rate of change of cost with respect to total energy and a term dependent on the adjoint variable, which is identical to the optimum cruise cost at a specified altitude. This variable uniquely specifies the optimal cruise energy, cruise altitude, cruise Mach number, and, indirectly, the climb and descent profiles. If the optimum cruise cost is specified, an optimum trajectory can easily be generated; however, the range obtained for a particular optimum cruise cost is not known a priori. For short range flights, the program iteratively varies the optimum cruise cost until the computed range converges to the specified range. For long-range flights, iteration is unnecessary since the specified range can be divided into a cruise segment distance and full climb and descent distances. The user must supply the program with engine fuel flow rate coefficients and an aircraft aerodynamic model. The program currently includes coefficients for the Pratt-Whitney JT8D-7 engine and an aerodynamic model for the Boeing 727. Input to the program consists of the flight range to be covered and the prevailing flight conditions including pressure, temperature, and wind profiles. Information output by the program includes: optimum cruise tables at selected weights, optimal cruise quantities as a function of cruise weight and cruise distance, climb and descent profiles, and a summary of the complete synthesized optimal trajectory. This program is written in FORTRAN IV for batch execution and has been implemented on a CDC 6000 series computer with a central memory requirement of approximately 100K (octal) of 60 bit words. This aircraft trajectory optimization program was developed in 1979.
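
    The weighted fuel-plus-time cost function at the heart of the program can be illustrated with a toy cruise problem: choose the speed that minimizes cost per unit distance, (fuel flow x fuel price + time cost) / speed. The fuel-flow model and prices below are invented placeholders, not the JT8D-7/Boeing 727 data used by the program, and the full program also optimizes the climb and descent segments.

      import numpy as np

      speeds = np.linspace(150, 280, 500)      # candidate cruise speeds, m/s
      fuel_flow = 0.3 + 1.2e-7 * speeds**3     # kg/s, crude drag-based toy model
      fuel_price = 0.8                         # $/kg of fuel (hypothetical)
      time_cost = 1.0                          # $/s of crew, maintenance, ownership

      # Direct operating cost per meter flown; minimizing it trades fuel vs. time.
      cost_per_m = (fuel_flow * fuel_price + time_cost) / speeds
      best = speeds[np.argmin(cost_per_m)]
      print(f"minimum-cost cruise speed ~ {best:.0f} m/s")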

  10. An Innovative Time-Cost-Quality Tradeoff Modeling of Building Construction Project Based on Resource Allocation

    PubMed Central

    2014-01-01

    The time, quality, and cost are three important but mutually conflicting objectives in a building construction project. It is a tough challenge for project managers to optimize them, since the three are measured in different terms. This paper presents a time-cost-quality optimization model that enables managers to optimize these multiple objectives. The model is based on the project breakdown structure method, in which the task resources of a construction project are divided into a series of activities and further into construction labor, materials, equipment, and administration. The resources utilized in a construction activity ultimately determine its construction time, cost, and quality, and a complex time-cost-quality trade-off model is generated from the correlations between construction activities. A genetic algorithm is applied in the model to solve the comprehensive nonlinear time-cost-quality problem. The building of a three-storey house is used as an example to illustrate the implementation of the model, demonstrate its advantages in optimizing the trade-off among construction time, cost, and quality, and help make sound decisions in construction practice. The computational time-cost-quality curves from the case study support traditional cost-time assumptions and demonstrate the sophistication of the trade-off model. PMID:24672351

  11. Using simulation to improve wildlife surveys: Wintering mallards in Mississippi, USA

    USGS Publications Warehouse

    Pearse, A.T.; Reinecke, K.J.; Dinsmore, S.J.; Kaminski, R.M.

    2009-01-01

    Wildlife conservation plans generally require reliable data about population abundance and density. Aerial surveys often can provide these data; however, associated costs necessitate designing and conducting surveys efficiently. We developed methods to simulate population distributions of mallards (Anas platyrhynchos) wintering in western Mississippi, USA, by combining bird observations from three previous strip-transect surveys and habitat data from three sets of satellite images representing conditions when surveys were conducted. For each simulated population distribution, we compared 12 primary survey designs and two secondary design options by using coefficients of variation (CV) of population indices as the primary criterion for assessing survey performance. In all, 3 of the 12 primary designs provided the best precision (CV ≤ 11.7%) and performed equally well (difference ≤ 0.6%). Features of the designs that provided the largest gains in precision were optimal allocation of sample effort among strata and configuring the study area into five rather than four strata, to more precisely estimate mallard indices in areas of consistently high density. Of the two secondary design options, we found including a second observer to double the size of strip transects increased precision or decreased costs, whereas ratio estimation using auxiliary habitat data from satellite images did not increase precision appreciably. We recommend future surveys of mallard populations in our study area use the strata we developed, optimally allocate samples among strata, employ PPS or EPS sampling, and include two observers when qualified staff are available. More generally, the methods we developed to simulate population distributions from prior survey data provide a cost-effective method to assess performance of alternative wildlife surveys critical to informing management decisions, and could be extended to account for effects of detectability on estimates of true abundance. © 2009 CSIRO.
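
    The "optimal allocation of sample effort among strata" credited above with the largest precision gains is classically done with Neyman allocation: sample sizes proportional to stratum size times within-stratum standard deviation. The stratum figures below are hypothetical, not the Mississippi survey data.

      # Neyman allocation: n_h proportional to N_h * S_h.
      def neyman_allocation(sizes, sds, total_n):
          weights = [n * s for n, s in zip(sizes, sds)]
          scale = total_n / sum(weights)
          return [round(w * scale) for w in weights]

      sizes = [120, 80, 60, 90, 50]        # candidate transects per stratum (assumed)
      sds = [5.0, 12.0, 30.0, 8.0, 45.0]   # SD of counts per transect (assumed)
      print(neyman_allocation(sizes, sds, total_n=60))
      # High-variability strata (e.g., consistently high-density areas) get
      # proportionally more of the 60 transects, improving overall precision.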

  12. Simultaneous optimization of micro-heliostat geometry and field layout using a genetic algorithm

    NASA Astrophysics Data System (ADS)

    Lazardjani, Mani Yousefpour; Kronhardt, Valentina; Dikta, Gerhard; Göttsche, Joachim

    2016-05-01

    A new optimization tool for micro-heliostat (MH) geometry and field layout is presented. The method aims at simultaneous performance improvement and cost reduction through iteration of heliostat geometry and field layout parameters. The tool was developed primarily to optimize a novel micro-heliostat concept conceived at Solar-Institut Jülich (SIJ); however, the underlying optimization approach can be used for any heliostat type. During the optimization, performance is calculated using the ray-tracing tool SolCal, and the costs of the heliostats are calculated with a detailed cost function. A genetic algorithm changes the heliostat geometry and field layout in an iterative process. Starting from an initial setup, the optimization tool generates several configurations of heliostat geometries and field layouts, and a cost-performance ratio is calculated for each configuration. Based on that ratio, the best geometry and field layout can be selected in each optimization step. This step is repeated until no significant improvement in the results is observed.

  13. Siting and sizing of distributed generators based on improved simulated annealing particle swarm optimization.

    PubMed

    Su, Hongsheng

    2017-12-18

    Distributed power grids generally contain multiple diverse types of distributed generators (DGs). Traditional particle swarm optimization (PSO) and simulated annealing PSO (SA-PSO) algorithms have some deficiencies in the site selection and capacity determination of DGs, such as slow convergence and a tendency to become trapped in local optima. In this paper, an improved SA-PSO (ISA-PSO) algorithm is proposed by introducing the crossover and mutation operators of the genetic algorithm (GA) into SA-PSO, so that the algorithm performs well in both global search and local exploration. In addition, diverse types of DGs are made equivalent to four types of nodes in power flow calculations by the backward/forward sweep method, and reactive power sharing principles and allocation theory are applied to determine initial reactive power values and execute subsequent corrections, providing the algorithm with a better starting point and faster convergence. Finally, a mathematical model of minimum economic cost is established for the siting and sizing of DGs under the location and capacity uncertainties of each single DG. Its objective function considers the investment and operation cost of DGs, grid loss cost, annual electricity purchase cost, and environmental pollution cost, and the constraints include power flow, bus voltage, conductor current, and DG capacity. Through applications in an IEEE 33-node distribution system, it is found that the proposed method achieves better economic efficiency and a safer voltage level than traditional PSO and SA-PSO algorithms, and is thus a more effective planning method for the siting and sizing of DGs in distributed power grids.

  14. Optimally Stopped Optimization

    NASA Astrophysics Data System (ADS)

    Vinci, Walter; Lidar, Daniel A.

    2016-11-01

    We combine the fields of heuristic optimization and optimal stopping. We propose a strategy for benchmarking randomized optimization algorithms that minimizes the expected total cost for obtaining a good solution with an optimal number of calls to the solver. To do so, rather than letting the objective function alone define a cost to be minimized, we introduce a further cost-per-call of the algorithm. We show that this problem can be formulated using optimal stopping theory. The expected cost is a flexible figure of merit for benchmarking probabilistic solvers that can be computed when the optimal solution is not known and that avoids the biases and arbitrariness that affect other measures. The optimal stopping formulation of benchmarking directly leads to a real-time optimal-utilization strategy for probabilistic optimizers with practical impact. We apply our formulation to benchmark simulated annealing on a class of maximum-2-satisfiability (MAX2SAT) problems. We also compare the performance of a D-Wave 2X quantum annealer to the Hamze-Freitas-Selby (HFS) solver, a specialized classical heuristic algorithm designed for low-tree-width graphs. On a set of frustrated-loop instances with planted solutions defined on up to N = 1098 variables, the D-Wave device is 2 orders of magnitude faster than the HFS solver, and, modulo known caveats related to suboptimal annealing times, exhibits identical scaling with problem size.
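
    A minimal Monte Carlo sketch of the cost-per-call figure of merit, under the assumption that each call to a randomized solver has a fixed price and returns an objective value: the total cost after n calls is n times the cost-per-call plus the best objective value found, and the benchmark picks the n minimizing its expectation. The stand-in "solver" below is just a random number generator, not simulated annealing or a quantum annealer.

      import random

      random.seed(0)
      C_CALL = 0.05   # hypothetical price per solver call

      def solver_call():
          # Stand-in for one run of a randomized optimizer (returns objective value).
          return random.gauss(1.0, 0.3)

      def expected_total_cost(n, trials=2000):
          total = 0.0
          for _ in range(trials):
              best = min(solver_call() for _ in range(n))
              total += n * C_CALL + best
          return total / trials

      costs = {n: expected_total_cost(n) for n in range(1, 31)}
      n_opt = min(costs, key=costs.get)
      print(f"optimal number of calls: {n_opt}, expected total cost: {costs[n_opt]:.3f}")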

  15. 4E analysis and multi objective optimization of a micro gas turbine and solid oxide fuel cell hybrid combined heat and power system

    NASA Astrophysics Data System (ADS)

    Sanaye, Sepehr; Katebi, Arash

    2014-02-01

    Energy, exergy, economic and environmental (4E) analysis and optimization of a hybrid solid oxide fuel cell and micro gas turbine (SOFC-MGT) system for use in combined generation of heat and power (CHP) is investigated in this paper. The hybrid system is modeled and performance-related results are validated using available data in the literature. Then a multi-objective optimization approach based on a genetic algorithm is incorporated. Eight system design parameters are selected for the optimization procedure. System exergy efficiency and total cost rate (including capital or investment cost, operational cost, and the penalty cost of environmental emissions) are the two objectives. The effects of fuel unit cost, capital investment, and system power output on the optimum design parameters are also investigated. It is observed that the most sensitive and important design parameter in the hybrid system is the fuel cell current density, which has a significant effect on the balance between system cost and efficiency. The selected design point from the Pareto distribution of optimization results indicates a total system exergy efficiency of 60.7%, with an estimated electrical energy cost of $0.057 per kWh and a payback period of about 6.3 years for the investment.

  16. Bridging the gap between health and non-health investments: moving from cost-effectiveness analysis to a return on investment approach across sectors of economy.

    PubMed

    Sendi, Pedram

    2008-06-01

    When choosing from a menu of treatment alternatives, the optimal treatment depends on the objective function and the assumptions of the model. The classical decision rule of cost-effectiveness analysis may be formulated via two different objective functions: (i) maximising health outcomes subject to the budget constraint or (ii) maximising the net benefit of the intervention with the budget being determined ex post. We suggest a more general objective function: (iii) maximising the return on investment from available resources, with consideration of both health and non-health investments. The return-on-investment approach makes it possible to adjust the analysis for the benefits forgone by alternative non-health investments from a societal or subsocietal perspective. We show that in the presence of positive returns on non-health investments, the decision-maker's willingness to pay per unit of effect for a treatment program needs to be higher than its incremental cost-effectiveness ratio for the program to be considered cost-effective.
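
    A numeric illustration of the rules being contrasted, with hypothetical figures. The incremental cost-effectiveness ratio (ICER) and the classical funding rule are standard; the one-period (1 + r) adjustment below is only an illustrative stand-in for the paper's return-on-investment calculus, which is not reproduced here.

      delta_cost = 50000.0   # incremental cost of the new treatment, $ (assumed)
      delta_qaly = 1.25      # incremental effectiveness, QALYs (assumed)
      icer = delta_cost / delta_qaly
      print(f"ICER = {icer:.0f} $/QALY")

      wtp = 41000.0          # willingness to pay per QALY (assumed)
      r = 0.05               # return on the displaced non-health investment (assumed)
      hurdle = icer * (1 + r)  # illustrative opportunity-cost-adjusted hurdle
      print("fund under classical rule:", wtp >= icer)        # True here
      print("fund under ROI-adjusted rule:", wtp >= hurdle)   # False here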

  17. The importance of fixed costs in animal health systems.

    PubMed

    Tisdell, C A; Adamson, D

    2017-04-01

    In this paper, the authors detail the structure and optimal management of health systems as influenced by the presence and level of fixed costs. Unlike variable costs, fixed costs cannot be altered, and are thus independent of the level of veterinary activity in the short run. Their importance is illustrated by using both single-period and multi-period models. It is shown that multi-stage veterinary decision-making can often be envisaged as a sequence of fixed-cost problems. In general, it becomes clear that the higher the fixed costs, the greater the net benefit of veterinary activity must be, if such activity is to be economic. The authors also assess the extent to which it pays to reduce fixed costs and to try to compensate for this by increasing variable costs. Fixed costs have major implications for the industrial structure of the animal health products industry and for the structure of the private veterinary services industry. In the former, they favour market concentration and specialisation in the supply of products. In the latter, they foster increased specialisation. While cooperation by individual farmers may help to reduce their individual fixed costs, the organisational difficulties and costs involved in achieving this cooperation can be formidable. In such cases, the only solution is government provision of veterinary services. Moreover, international cooperation may be called for. Fixed costs also influence the nature of the provision of veterinary education.

  18. Risk-Constrained Dynamic Programming for Optimal Mars Entry, Descent, and Landing

    NASA Technical Reports Server (NTRS)

    Ono, Masahiro; Kuwata, Yoshiaki

    2013-01-01

    A chance-constrained dynamic programming algorithm was developed that is capable of making optimal sequential decisions within a user-specified risk bound. This work handles stochastic uncertainties over multiple stages in the CEMAT (Combined EDL-Mobility Analyses Tool) framework. It was demonstrated by a simulation of Mars entry, descent, and landing (EDL) using real landscape data obtained from the Mars Reconnaissance Orbiter. Although standard dynamic programming (DP) provides a general framework for optimal sequential decision-making under uncertainty, it typically achieves risk aversion by imposing an arbitrary penalty on failure states. Such a penalty-based approach cannot explicitly bound the probability of mission failure. A key idea behind the new approach is called risk allocation, which decomposes a joint chance constraint into a set of individual chance constraints and distributes risk over them. The joint chance constraint was reformulated into a constraint on the expectation of a sum of indicator functions, which can be incorporated into the cost function by dualizing the optimization problem. As a result, the chance-constrained optimization problem can be turned into an unconstrained optimization over a Lagrangian, which can be solved efficiently using a standard DP approach.

  19. Integrated strategic and tactical biomass-biofuel supply chain optimization.

    PubMed

    Lin, Tao; Rodríguez, Luis F; Shastri, Yogendra N; Hansen, Alan C; Ting, K C

    2014-03-01

    To ensure effective biomass feedstock provision for large-scale biofuel production, an integrated biomass supply chain optimization model was developed to minimize annual biomass-ethanol production costs by optimizing both strategic and tactical planning decisions simultaneously. The mixed integer linear programming model optimizes activities ranging from biomass harvesting, packing, in-field transportation, stacking, transportation, preprocessing, and storage to ethanol production and distribution. The numbers, locations, and capacities of facilities as well as biomass and ethanol distribution patterns are key strategic decisions, while biomass production, delivery, and operating schedules and inventory monitoring are key tactical decisions. The model was implemented to study a Miscanthus-ethanol supply chain in Illinois. The base case results showed unit Miscanthus-ethanol production costs of $0.72 L^-1 of ethanol. Biorefinery-related costs account for 62% of the total costs, followed by biomass procurement costs. Sensitivity analysis showed that a 50% reduction in biomass yield would increase unit production costs by 11%. Copyright © 2014 Elsevier Ltd. All rights reserved.

  20. Life-cycle cost as basis to optimize waste collection in space and time: A methodology for obtaining a detailed cost breakdown structure.

    PubMed

    Sousa, Vitor; Dias-Ferreira, Celia; Vaz, João M; Meireles, Inês

    2018-05-01

    Extensive research has been carried out on waste collection costs, mainly to differentiate the costs of distinct waste streams and to optimize waste collection services spatially (e.g., routes, number, and location of waste facilities). However, waste collection managers also face the challenge of optimizing assets in time, for instance deciding when to replace and how to maintain, or which technological solution to adopt. These issues require more detailed knowledge of the waste collection services' cost breakdown structure. The present research adapts the methodology for buildings' life-cycle cost (LCC) analysis, detailed in ISO 15686-5:2008, to waste collection assets. The proposed methodology is then applied to the waste collection assets owned and operated by a real municipality in Portugal (Cascais Ambiente - EMAC). The goal is to highlight the potential of the LCC tool in providing a baseline for time optimization of the waste collection service and assets, namely by assisting decisions regarding equipment operation and replacement.

  1. Socially optimal electric driving range of plug-in hybrid electric vehicles

    DOE PAGES

    Kontou, Eleftheria; Yin, Yafeng; Lin, Zhenhong

    2015-07-25

    Our study determines the optimal electric driving range of plug-in hybrid electric vehicles (PHEVs) that minimizes the daily cost borne by society when using this technology. An optimization framework is developed and applied to datasets representing the US market. Results indicate that the optimal range is 16 miles, with an average social cost of $3.19 per day when exclusively charging at home, compared to $3.27 per day for driving a conventional vehicle. The optimal range is found to be sensitive to the cost of battery packs and the price of gasoline. Moreover, when workplace charging is available, the optimal electric driving range surprisingly increases from 16 to 22 miles, as larger batteries allow drivers to better take advantage of the charging opportunities to achieve longer electrified travel distances, yielding social cost savings. If workplace charging is available, the optimal density is to deploy a workplace charger for every 3.66 vehicles. Finally, the diversification of the battery size, i.e., introducing a pair or triple of electric driving ranges to the market, could further decrease the average societal cost per PHEV by 7.45% and 11.5%, respectively.
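
    The range optimization can be caricatured in a few lines: the expected daily cost of a range R combines amortized battery cost (growing with R), electricity for the miles within R, and gasoline for the miles beyond it, averaged over a distribution of daily driving distances. Every parameter below (the distance distribution, prices, and battery cost) is hypothetical rather than taken from the paper's US datasets.

      import numpy as np

      rng = np.random.default_rng(0)
      daily_miles = rng.lognormal(mean=3.0, sigma=0.9, size=100_000)  # ~20 mi median

      BATTERY_PER_MILE_RANGE = 0.03   # $/day per mile of range (amortized pack cost)
      ELEC_PER_MILE = 0.04            # $/mile driven on electricity
      GAS_PER_MILE = 0.12             # $/mile driven on gasoline

      def expected_daily_cost(r):
          electric = np.minimum(daily_miles, r) * ELEC_PER_MILE
          gasoline = np.maximum(daily_miles - r, 0.0) * GAS_PER_MILE
          return BATTERY_PER_MILE_RANGE * r + (electric + gasoline).mean()

      ranges = np.arange(5, 60)
      best = ranges[np.argmin([expected_daily_cost(r) for r in ranges])]
      print(f"cost-minimizing electric range ~ {best} miles")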

  2. Analytical and numerical analysis of inverse optimization problems: conditions of uniqueness and computational methods

    PubMed Central

    Zatsiorsky, Vladimir M.

    2011-01-01

    One of the key problems of motor control is the redundancy problem, in particular how the central nervous system (CNS) chooses an action out of infinitely many possible ones. A promising way to address this question is to assume that the choice is made based on the optimization of a certain cost function. A number of cost functions have been proposed in the literature to explain performance in different motor tasks: from force sharing in grasping to path planning in walking. However, the problem of the uniqueness of the cost function(s) was not addressed until recently. In this article, we analyze two methods of finding additive cost functions in inverse optimization problems with linear constraints, so-called linear-additive inverse optimization problems. These methods are based on the Uniqueness Theorem for inverse optimization problems that we proved recently (Terekhov et al., J Math Biol 61(3):423–453, 2010). Using synthetic data, we show that both methods allow for determining the cost function. We analyze the influence of noise on both methods. Finally, we show how a violation of the conditions of the Uniqueness Theorem may lead to incorrect solutions of the inverse optimization problem. PMID:21311907

  3. Airfoil Design and Optimization by the One-Shot Method

    NASA Technical Reports Server (NTRS)

    Kuruvila, G.; Taasan, Shlomo; Salas, M. D.

    1995-01-01

    An efficient numerical approach for the design of optimal aerodynamic shapes is presented in this paper. The objective of any optimization problem is to find the optimum of a cost function subject to a certain state equation (governing equation of the flow field) and certain side constraints. As in classical optimal control methods, the present approach introduces a costate variable (Lagrange multiplier) to evaluate the gradient of the cost function. High efficiency in reaching the optimum solution is achieved by using a multigrid technique and updating the shape in a hierarchical manner such that smooth (low-frequency) changes are done separately from high-frequency changes. Thus, the design variables are changed on a grid where their changes produce nonsmooth (high-frequency) perturbations that can be damped efficiently by the multigrid. The cost of solving the optimization problem is approximately two to three times the cost of the equivalent analysis problem.
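
    The costate idea above can be seen in a linear toy problem: with state equation A(alpha) u = b and cost J = (1/2)||u - u_target||^2, solving the adjoint system A^T lambda = u - u_target gives the design gradient dJ/dalpha = -lambda^T (dA/dalpha) u without perturbing the state solve. The 2x2 system below is invented for illustration and checked against a finite difference; the paper applies the same principle to flow equations and airfoil shapes.

      import numpy as np

      A0 = np.array([[4.0, 1.0], [1.0, 3.0]])
      dA = np.array([[1.0, 0.0], [0.0, 2.0]])   # dA/dalpha, toy parameterization
      b = np.array([1.0, 2.0])
      u_target = np.array([0.2, 0.5])

      def state(alpha):
          return np.linalg.solve(A0 + alpha * dA, b)

      def adjoint_grad(alpha):
          A = A0 + alpha * dA
          u = np.linalg.solve(A, b)
          lam = np.linalg.solve(A.T, u - u_target)  # costate (adjoint) equation
          return -lam @ (dA @ u)                    # dJ/dalpha

      alpha, eps = 0.7, 1e-6
      u = state(alpha)
      fd = (0.5 * np.sum((state(alpha + eps) - u_target)**2)
            - 0.5 * np.sum((u - u_target)**2)) / eps
      print("adjoint gradient:", adjoint_grad(alpha), " finite difference:", fd)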

  4. Airfoil optimization by the one-shot method

    NASA Technical Reports Server (NTRS)

    Kuruvila, G.; Taasan, Shlomo; Salas, M. D.

    1994-01-01

    An efficient numerical approach for the design of optimal aerodynamic shapes is presented in this paper. The objective of any optimization problem is to find the optimum of a cost function subject to a certain state equation (governing equation of the flow field) and certain side constraints. As in classical optimal control methods, the present approach introduces a costate variable (Lagrange multiplier) to evaluate the gradient of the cost function. High efficiency in reaching the optimum solution is achieved by using a multigrid technique and updating the shape in a hierarchical manner such that smooth (low-frequency) changes are done separately from high-frequency changes. Thus, the design variables are changed on a grid where their changes produce nonsmooth (high-frequency) perturbations that can be damped efficiently by the multigrid. The cost of solving the optimization problem is approximately two to three times the cost of the equivalent analysis problem.

  5. Model-based optimal design of active cool thermal energy storage for maximal life-cycle cost saving from demand management in commercial buildings

    DOE PAGES

    Cui, Borui; Gao, Dian-ce; Xiao, Fu; ...

    2016-12-23

    This article provides a method for the comprehensive evaluation of the cost-saving potential of active cool thermal energy storage (CTES) integrated with an HVAC system for demand management in non-residential buildings. The active storage is beneficial in shifting peak demand for peak load management (PLM) as well as in providing longer duration and larger capacity of demand response (DR). In this research, a model-based optimal design method using a genetic algorithm is developed to optimize the capacity of active CTES, aiming to maximize the life-cycle cost saving with respect to the capital cost associated with storage capacity as well as incentives from both fast DR and PLM. In the method, the active CTES operates under a fast DR control strategy during DR events, and under the storage-priority operation mode to shift peak demand during normal days. The optimal storage capacities, maximum annual net cost saving, and corresponding power reduction set-points during DR events are obtained using the proposed optimal design method. Lastly, this research provides guidance for the comprehensive evaluation of the cost-saving potential of CTES integrated with HVAC systems for building demand management, including both fast DR and PLM.

  7. Satellite scheduling considering maximum observation coverage time and minimum orbital transfer fuel cost

    NASA Astrophysics Data System (ADS)

    Zhu, Kai-Jian; Li, Jun-Feng; Baoyin, He-Xi

    2010-01-01

    In the case of an emergency like the Wenchuan earthquake, it is impossible to observe a given target on Earth by immediately launching new satellites. There is an urgent need for efficient satellite scheduling within a limited time period, so we must find a way to reasonably utilize the existing satellites to rapidly image the affected area during a short time period. Generally, the main consideration in orbit design is satellite coverage, with the subsatellite nadir point as a standard of reference. Two factors must be taken into consideration simultaneously in orbit design: the maximum observation coverage time and the minimum orbital transfer fuel cost. The local time of visiting the given observation sites must satisfy the solar radiation requirement. Because this paper considers an operational orbit that observes the disaster area using impulsive maneuvers, we obtain the minimum objective function by comparing the results derived from primer vector theory with those derived from the Hohmann transfer when calculating the operational orbit elements as the optimal parameters to be evaluated. Primer vector theory is utilized to optimize the three-impulse transfer trajectory, and the Hohmann transfer is utilized for coplanar cases and non-coplanar cases with small inclination differences. Finally, we applied this method to a simulation of the rescue mission at Wenchuan. The results of optimizing the orbit design with a hybrid PSO and DE algorithm show that primer vector theory and the Hohmann transfer are effective methods for multi-objective orbit optimization.
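
    For the coplanar case, the Hohmann transfer fuel cost has a closed form: between circular orbits of radii r1 and r2, the two impulses are dv1 = sqrt(mu/r1)*(sqrt(2*r2/(r1+r2)) - 1) and dv2 = sqrt(mu/r2)*(1 - sqrt(2*r1/(r1+r2))). The sketch below computes them for illustrative orbit altitudes, not the actual mission orbits.

      from math import sqrt

      MU_EARTH = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2

      def hohmann_dv(r1, r2):
          dv1 = sqrt(MU_EARTH / r1) * (sqrt(2 * r2 / (r1 + r2)) - 1)
          dv2 = sqrt(MU_EARTH / r2) * (1 - sqrt(2 * r1 / (r1 + r2)))
          return abs(dv1) + abs(dv2)   # total delta-v, a proxy for fuel cost

      r_earth = 6378.137e3
      print(f"500 km -> 800 km circular: {hohmann_dv(r_earth + 500e3, r_earth + 800e3):.1f} m/s")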

  8. Network Community Detection based on the Physarum-inspired Computational Framework.

    PubMed

    Gao, Chao; Liang, Mingxin; Li, Xianghua; Zhang, Zili; Wang, Zhen; Zhou, Zhili

    2016-12-13

    Community detection is a crucial and essential problem in the structural analysis of complex networks, and it can help us understand and predict the characteristics and functions of complex networks. Many methods, ranging from optimization-based algorithms to heuristic-based algorithms, have been proposed for solving this problem. Owing to the inherent complexity of identifying network structure, how to design an effective algorithm with higher accuracy and lower computational cost remains an open problem. Inspired by the computational capability and positive feedback mechanism exhibited in the foraging process of Physarum, a large amoeba-like cell consisting of a dendritic network of tube-like pseudopodia, a general Physarum-based computational framework for community detection is proposed in this paper. Based on the proposed framework, the inter-community edges can be distinguished from the intra-community edges in a network, and the positive feedback of the solving process in an algorithm can be further enhanced; these properties are used to improve the efficiency of original optimization-based and heuristic-based community detection algorithms, respectively. Some typical algorithms (e.g., the genetic algorithm, the ant colony optimization algorithm, and the Markov clustering algorithm) and real-world datasets have been used to evaluate the efficiency of our proposed computational framework. Experiments show that the algorithms optimized by the Physarum-inspired computational framework perform better than the original ones, in terms of accuracy and computational cost. Moreover, a computational complexity analysis verifies the scalability of our framework.

  9. A genetic technique for planning a control sequence to navigate the state space with a quasi-minimum-cost output trajectory for a non-linear multi-dimnensional system

    NASA Technical Reports Server (NTRS)

    Hein, C.; Meystel, A.

    1994-01-01

    There are many multi-stage optimization problems that are not easily solved through any known direct method when the stages are coupled. For instance, we have investigated the problem of planning a vehicle's control sequence to negotiate obstacles and reach a goal in minimum time. The vehicle has a known mass, and the controlling forces have finite limits. We have developed a technique that finds admissible control trajectories which tend to minimize the vehicle's transit time through the obstacle field. The immediate application is that of a space robot which must rapidly traverse around two- or three-dimensional structures via application of a rotating thruster or non-rotating on-off thrusters; a test facility for such vehicles is located at the Marshall Space Flight Center in Huntsville, Alabama. However, it appears that the developed method is applicable to a general set of optimization problems in which the cost function and the multi-dimensional, multi-state system can be any nonlinear functions that are continuous in the operating regions. Other applications include the planning of optimal navigation pathways through a traversability graph; the planning of control inputs for underwater maneuvering vehicles which have complex control state-space relationships; the planning of control sequences for milling and manufacturing robots; the planning of control and trajectories for automated delivery vehicles; and the optimization of athletic training in slalom sports.

  10. Environmental Statistics and Optimal Regulation

    PubMed Central

    2014-01-01

    Any organism is embedded in an environment that changes over time. The timescale for and statistics of environmental change, the precision with which the organism can detect its environment, and the costs and benefits of particular protein expression levels all will affect the suitability of different strategies, such as constitutive expression or graded response, for regulating protein levels in response to environmental inputs. We propose a general framework, here specifically applied to the enzymatic regulation of metabolism in response to changing concentrations of a basic nutrient, to predict the optimal regulatory strategy given the statistics of fluctuations in the environment and measurement apparatus, respectively, and the costs associated with enzyme production. We use this framework to address three fundamental questions: (i) when a cell should prefer thresholding to a graded response; (ii) when there is a fitness advantage to implementing a Bayesian decision rule; and (iii) when retaining memory of the past provides a selective advantage. We specifically find that: (i) the relative convexity of enzyme expression cost and benefit influences the fitness of thresholding or graded responses; (ii) intermediate levels of measurement uncertainty call for a sophisticated Bayesian decision rule; and (iii) in dynamic contexts, intermediate levels of uncertainty call for retaining memory of the past. Statistical properties of the environment, such as variability and correlation times, set optimal biochemical parameters, such as thresholds and decay rates in signaling pathways. Our framework provides a theoretical basis for interpreting molecular signal processing algorithms and a classification scheme that organizes known regulatory strategies and may help conceptualize heretofore unknown ones. PMID:25254493

  11. Optimal insemination and replacement decisions to minimize the cost of pathogen-specific clinical mastitis in dairy cows.

    PubMed

    Cha, E; Kristensen, A R; Hertl, J A; Schukken, Y H; Tauer, L W; Welcome, F L; Gröhn, Y T

    2014-01-01

    Mastitis is a serious production-limiting disease, with effects on milk yield, milk quality, and conception rate, and an increase in the risk of mortality and culling. The objective of this study was 2-fold: (1) to develop an economic optimization model that incorporates all the different types of pathogens that cause clinical mastitis (CM) categorized into 8 classes of culture results, and account for whether the CM was a first, second, or third case in the current lactation and whether the cow had a previous case or cases of CM in the preceding lactation; and (2) to develop this decision model to be versatile enough to add additional pathogens, diseases, or other cow characteristics as more information becomes available without significant alterations to the basic structure of the model. The model provides economically optimal decisions depending on the individual characteristics of the cow and the specific pathogen causing CM. The net returns for the basic herd scenario (with all CM included) were $507/cow per year, where the incidence of CM (cases per 100 cow-years) was 35.6, of which 91.8% of cases were recommended for treatment under an optimal replacement policy. The cost per case of CM was $216.11. The CM cases comprised (incidences, %) Staphylococcus spp. (1.6), Staphylococcus aureus (1.8), Streptococcus spp. (6.9), Escherichia coli (8.1), Klebsiella spp. (2.2), other treated cases (e.g., Pseudomonas; 1.1), other not treated cases (e.g., Trueperella pyogenes; 1.2), and negative culture cases (12.7). The average cost per case, even under optimal decisions, was greatest for Klebsiella spp. ($477), followed by E. coli ($361), other treated cases ($297), and other not treated cases ($280). This was followed by the gram-positive pathogens; among these, the greatest cost per case was due to Staph. aureus ($266), followed by Streptococcus spp. ($174) and Staphylococcus spp. ($135); negative culture had the lowest cost ($115). The model recommended treatment for most CM cases (>85%); the range was 86.2% (Klebsiella spp.) to 98.5% (Staphylococcus spp.). In general, the optimal recommended time for replacement was up to 5 mo earlier for cows with CM compared with cows without CM. Furthermore, although the parameter estimates implemented in this model are applicable to the dairy farms in this study, the parameters may be altered to be specific to other dairy farms. Cow rankings and values based on disease status, pregnancy status, and milk production can be extracted; these provide guidance when determining which cows to keep or cull. Copyright © 2014 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  12. Application of multi-objective optimization to pooled experiments of next generation sequencing for detection of rare mutations.

    PubMed

    Zilinskas, Julius; Lančinskas, Algirdas; Guarracino, Mario Rosario

    2014-01-01

    In this paper we propose mathematical models to plan a Next Generation Sequencing (NGS) experiment to detect rare mutations in pools of patients. A mathematical optimization problem is formulated for optimal pooling, with respect to minimizing the cost of the experiment. Then, two different strategies for replicating patients across pools are proposed, which have the advantage of decreasing overall costs. Finally, a multi-objective optimization formulation is proposed, in which the trade-off between the probability of detecting a mutation and the overall cost is taken into account. The proposed solutions are devised to provide the following advantages: (i) mutations are guaranteed to be detectable in the experimental setting, and (ii) the cost of the NGS experiment and of its biological validation using Sanger sequencing is minimized. Simulations show that replicating pools can decrease the overall experimental cost, making pooling an attractive option.
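
    The detectability constraint at the heart of such pooling designs can be sketched as follows: a single heterozygous carrier in a pool of k diploid patients contributes an expected allele fraction of 1/(2k), which must stay above the variant caller's detection limit; among the feasible pool sizes, the cheapest design is then selected. The costs, detection limit, and positive-pool probability below are hypothetical, not the paper's figures.

      N_PATIENTS = 96
      DETECTION_LIMIT = 0.02      # minimum callable allele fraction (assumed)
      COST_PER_POOL = 400.0       # library prep + sequencing per pool, $ (assumed)
      COST_PER_SANGER = 8.0       # validation per patient in a positive pool, $ (assumed)
      P_POOL_POSITIVE = 0.15      # chance a pool contains a mutation (assumed)

      best = None
      for k in range(1, N_PATIENTS + 1):
          if 1.0 / (2 * k) < DETECTION_LIMIT:
              continue                   # mutation would fall below the caller's limit
          n_pools = -(-N_PATIENTS // k)  # ceiling division
          cost = n_pools * (COST_PER_POOL + P_POOL_POSITIVE * k * COST_PER_SANGER)
          if best is None or cost < best[1]:
              best = (k, cost)
      print(f"cheapest feasible pool size: {best[0]} patients, expected cost ${best[1]:.0f}")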

  13. Optimization of joint energy micro-grid with cold storage

    NASA Astrophysics Data System (ADS)

    Xu, Bin; Luo, Simin; Tian, Yan; Chen, Xianda; Xiong, Botao; Zhou, Bowen

    2018-02-01

    To reduce distributed photovoltaic (PV) curtailment, to make full use of the joint energy micro-grid with cold storage, and to reduce high operating costs, the economic dispatch of the joint energy micro-grid load is particularly important. Considering the different electricity prices during peak and valley periods, an optimization model is established that takes the minimum production costs and PV curtailment fluctuations as the objectives. The linear weighted sum method and a genetic-tabu Particle Swarm Optimization (PSO) algorithm are used to solve the optimization model and obtain the optimal power supply output. Taking the garlic market in Henan as an example, the simulation results show that, when distributed PV and time-dependent prices are considered, the optimization strategies are able to reduce the operating costs and accommodate PV power efficiently.
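
    A minimal sketch of the linear weighted sum step is shown below, with an invented 24-hour tariff, load, and PV profile, and plain random search standing in for the paper's genetic-tabu PSO; the decision variable is the hourly cold-storage charging power:

    import random

    PRICE = [0.30 if 8 <= t < 22 else 0.10 for t in range(24)]  # peak/valley tariff
    LOAD  = [50.0] * 24                                         # kW, invented
    PV    = [0, 0, 0, 0, 0, 0, 20, 60, 110, 150, 170, 170,
             160, 140, 110, 70, 30, 0, 0, 0, 0, 0, 0, 0]        # kW, invented
    S_MAX = 80.0                                 # cold-storage charging limit, kW

    def weighted_objective(s, w_cost=1.0, w_fluct=0.05):
        grid    = [max(LOAD[t] + s[t] - PV[t], 0.0) for t in range(24)]
        curtail = [max(PV[t] - LOAD[t] - s[t], 0.0) for t in range(24)]
        f_cost  = sum(PRICE[t] * grid[t] for t in range(24))    # production cost
        f_fluct = sum((curtail[t] - curtail[t - 1]) ** 2 for t in range(1, 24))
        return w_cost * f_cost + w_fluct * f_fluct              # linear weighted sum

    random.seed(0)
    candidates = [[random.uniform(0.0, S_MAX) for _ in range(24)]
                  for _ in range(5000)]
    best = min(candidates, key=weighted_objective)
    print("best weighted objective: %.1f" % weighted_objective(best))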

  14. Cost optimization for buildings with hybrid ventilation systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ji, Kun; Lu, Yan

    A method including: computing a total cost for a first zone in a building, wherein the total cost is equal to an actual energy cost of the first zone plus a thermal discomfort cost of the first zone; and heuristically optimizing the total cost to identify temperature setpoints for a mechanical heating/cooling system and a start time and an end time of the mechanical heating/cooling system, based on external weather data and occupancy data of the first zone.
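
    A toy single-zone rendering of this total-cost idea, with a hypothetical comfort band, tariff, and thermal response, and brute-force search standing in for the heuristic optimizer of the patent:

    from itertools import product

    OCCUPIED   = range(8, 18)                   # occupancy hours, assumed
    OUTDOOR    = [5] * 8 + [12] * 10 + [7] * 6  # deg C forecast per hour, invented
    PRICE      = 0.15                           # $/kWh
    UA         = 0.8                            # kW of heating per deg C lift (toy)
    DISCOMFORT = 0.5                            # $ per degree-hour below 21 C occupied

    def total_cost(setpoint, start, end):
        energy = discomfort = 0.0
        for h in range(24):
            heating_on = start <= h < end
            indoor = setpoint if heating_on else OUTDOOR[h]  # crude: no thermal mass
            if heating_on:
                energy += UA * max(setpoint - OUTDOOR[h], 0) * PRICE
            if h in OCCUPIED:
                discomfort += DISCOMFORT * max(21 - indoor, 0)
        return energy + discomfort

    best = min(product([19, 20, 21, 22], range(0, 12), range(12, 24)),
               key=lambda p: total_cost(*p))
    print("setpoint %d C, start %02d:00, end %02d:00 -> $%.2f"
          % (*best, total_cost(*best)))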

  15. The effectiveness of position- and composition-specific gap costs for protein similarity searches.

    PubMed

    Stojmirović, Aleksandar; Gertz, E Michael; Altschul, Stephen F; Yu, Yi-Kuo

    2008-07-01

    The flexibility in gap cost enjoyed by hidden Markov models (HMMs) is expected to afford them better retrieval accuracy than position-specific scoring matrices (PSSMs). We attempt to quantify the effect of more general gap parameters by separately examining the influence of position- and composition-specific gap scores, as well as by comparing the retrieval accuracy of the PSSMs constructed using an iterative procedure to that of the HMMs provided by Pfam and SUPERFAMILY, curated ensembles of multiple alignments. We found that position-specific gap penalties have an advantage over uniform gap costs. We did not explore optimizing distinct uniform gap costs for each query. For Pfam, PSSMs iteratively constructed from seeds based on HMM consensus sequences perform equivalently to HMMs that were adjusted to have constant gap transition probabilities, albeit with much greater variance. We observed no effect of composition-specific gap costs on retrieval performance. These results suggest possible improvements to the PSI-BLAST protein database search program. The scripts for performing evaluations are available upon request from the authors.

  16. Neural Generalized Predictive Control: A Newton-Raphson Implementation

    NASA Technical Reports Server (NTRS)

    Soloway, Donald; Haley, Pamela J.

    1997-01-01

    An efficient implementation of Generalized Predictive Control using a multi-layer feedforward neural network as the plant's nonlinear model is presented. By using Newton-Raphson as the optimization algorithm, the number of iterations needed for convergence is significantly reduced compared with other techniques. The main cost of the Newton-Raphson algorithm lies in the calculation of the Hessian, but even with this overhead the low iteration count makes Newton-Raphson faster than other techniques and a viable algorithm for real-time control. This paper presents a detailed derivation of the Neural Generalized Predictive Control algorithm with Newton-Raphson as the minimization algorithm. Simulation results show convergence to a good solution within two iterations, and timing data show that real-time control is possible. Comments on the algorithm's implementation are also included.
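
    A compact sketch of a Newton-Raphson iteration on a GPC-style cost: a toy nonlinear map stands in for the neural plant model, and finite differences stand in for the paper's analytic gradient and Hessian; the horizon, weights, and reference below are invented.

    import numpy as np

    N, LAM, REF = 5, 0.1, 1.0          # horizon, move-suppression weight, setpoint

    def predict(u, y0=0.0):
        y, ys = y0, []
        for uk in u:
            y = 0.8 * y + np.tanh(uk)  # stand-in for the network's one-step map
            ys.append(y)
        return np.array(ys)

    def J(u):
        du = np.diff(np.concatenate([[0.0], u]))
        return float(np.sum((predict(u) - REF) ** 2) + LAM * np.sum(du ** 2))

    def newton_step(u, eps=1e-4):
        g, H = np.zeros(N), np.zeros((N, N))
        for i in range(N):             # finite-difference gradient and Hessian
            ei = np.eye(N)[i] * eps
            g[i] = (J(u + ei) - J(u - ei)) / (2 * eps)
            for j in range(N):
                ej = np.eye(N)[j] * eps
                H[i, j] = (J(u + ei + ej) - J(u + ei - ej)
                           - J(u - ei + ej) + J(u - ei - ej)) / (4 * eps ** 2)
        return u - np.linalg.solve(H + 1e-8 * np.eye(N), g)

    u = np.zeros(N)
    for it in range(3):                # the paper reports ~2 iterations suffice
        u = newton_step(u)
        print("iteration %d  J = %.6f" % (it + 1, J(u)))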

  17. Replica Approach for Minimal Investment Risk with Cost

    NASA Astrophysics Data System (ADS)

    Shinzato, Takashi

    2018-06-01

    In the present work, the optimal portfolio minimizing the investment risk with cost is discussed analytically, where an objective function is constructed in terms of two negative aspects of investment, the risk and cost. We note the mathematical similarity between the Hamiltonian in the mean-variance model and the Hamiltonians in the Hopfield model and the Sherrington-Kirkpatrick model, show that we can analyze this portfolio optimization problem by using replica analysis, and derive the minimal investment risk with cost and the investment concentration of the optimal portfolio. Furthermore, we validate our proposed method through numerical simulations.
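
    The two-term objective can be explored numerically for one random instance; the covariance, cost vector, trade-off weight, and budget constraint below are synthetic, and a generic solver stands in for the paper's analytic replica treatment:

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    N = 50
    A = rng.normal(size=(N, N)) / np.sqrt(N)
    C = A @ A.T + 0.5 * np.eye(N)           # positive-definite return covariance
    cost = rng.uniform(0.0, 1.0, size=N)    # per-asset holding cost
    eta = 0.5                               # risk/cost trade-off weight

    H = lambda w: 0.5 * w @ C @ w + eta * cost @ w
    res = minimize(H, x0=np.ones(N),        # budget: portfolio weights sum to N
                   constraints={"type": "eq", "fun": lambda w: w.sum() - N})
    w = res.x
    concentration = (w @ w) / N             # q_w = (1/N) sum_i w_i^2
    print("minimal objective per asset: %.4f  concentration: %.4f"
          % (H(w) / N, concentration))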

  18. Choosing a reliability inspection plan for interval censored data

    DOE PAGES

    Lu, Lu; Anderson-Cook, Christine Michaela

    2017-04-19

    Reliability test plans are important for producing precise and accurate assessments of reliability characteristics. This paper explores different strategies for choosing between possible inspection plans for interval censored data given a fixed testing timeframe and budget. A new general cost structure is proposed for guiding precise quantification of the total cost of an inspection test plan. Multiple summaries of reliability are considered and compared as the criteria for choosing the best plans using an easily adapted method. Different cost structures and representative true underlying reliability curves demonstrate how to assess different strategies given the logistical constraints and nature of the problem. Results show that several general patterns exist across a wide variety of scenarios. Given a fixed total cost, plans that inspect more units less frequently, at equally spaced time points, are favored due to ease of implementation and consistently good performance across a large number of case study scenarios. Plans with inspection times chosen at equally spaced probabilities offer improved estimates of the shape of the distribution, the mean lifetime, and the failure time of a small fraction of the population, but only for applications with high infant mortality rates. The paper uses a Monte Carlo simulation-based approach in addition to the common evaluation based on the asymptotic variance, and offers comparisons and recommendations for applications with different objectives. Additionally, the paper outlines a variety of reliability metrics to use as optimization criteria, presents a general method for evaluating alternatives, and provides case study results for common scenarios.
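
    The two inspection-time strategies compared above can be written down directly once a true lifetime distribution is assumed; the Weibull parameters below are illustrative, with shape < 1 giving the high-infant-mortality case in which equally spaced probabilities concentrate inspections early:

    import numpy as np
    from scipy.stats import weibull_min

    shape, scale, horizon, n_times = 0.8, 1000.0, 2000.0, 8
    life = weibull_min(shape, scale=scale)

    # strategy 1: inspection times equally spaced over the testing horizon
    equal_time = np.linspace(horizon / n_times, horizon, n_times)

    # strategy 2: equally spaced failure probabilities up to F(horizon),
    # mapped back through the inverse CDF
    p_end = life.cdf(horizon)
    probs = np.linspace(p_end / n_times, p_end, n_times)
    equal_prob = life.ppf(probs)

    print("equal-time :", np.round(equal_time, 1))
    print("equal-prob :", np.round(equal_prob, 1))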

  20. Autonomic Closure for Turbulent Flows Using Approximate Bayesian Computation

    NASA Astrophysics Data System (ADS)

    Doronina, Olga; Christopher, Jason; Hamlington, Peter; Dahm, Werner

    2017-11-01

    Autonomic closure is a new technique for achieving fully adaptive and physically accurate closure of coarse-grained turbulent flow governing equations, such as those solved in large eddy simulations (LES). Although autonomic closure has been shown in recent a priori tests to more accurately represent unclosed terms than do dynamic versions of traditional LES models, the computational cost of the approach makes it challenging to implement for simulations of practical turbulent flows at realistically high Reynolds numbers. The optimization step used in the approach introduces large matrices that must be inverted and is highly memory intensive. In order to reduce memory requirements, here we propose to use approximate Bayesian computation (ABC) in place of the optimization step, thereby yielding a computationally-efficient implementation of autonomic closure that trades memory-intensive for processor-intensive computations. The latter challenge can be overcome as co-processors such as general purpose graphical processing units become increasingly available on current generation petascale and exascale supercomputers. In this work, we outline the formulation of ABC-enabled autonomic closure and present initial results demonstrating the accuracy and computational cost of the approach.

  1. Reliability models: the influence of model specification in generation expansion planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stremel, J.P.

    1982-10-01

    This paper is a critical evaluation of reliability methods used for generation expansion planning. It is shown that the methods for treating uncertainty are critical for determining the relative reliability value of expansion alternatives. It is also shown that the specification of the reliability model will not favor all expansion options equally. Consequently, the model is biased. In addition, reliability models should be augmented with an economic value of reliability (such as the cost of emergency procedures or energy not served). Generation expansion evaluations which ignore the economic value of excess reliability can be shown to be inconsistent. The conclusions are that, in general, a reliability model simplifies generation expansion planning evaluations. However, for a thorough analysis, the expansion options should be reviewed for candidates which may be unduly rejected because of the bias of the reliability model. And this implies that for a consistent formulation in an optimization framework, the reliability model should be replaced with a full economic optimization which includes the costs of emergency procedures and interruptions in the objective function.

  2. Analog "neuronal" networks in early vision.

    PubMed Central

    Koch, C; Marroquin, J; Yuille, A

    1986-01-01

    Many problems in early vision can be formulated in terms of minimizing a cost function. Examples are shape from shading, edge detection, motion analysis, structure from motion, and surface interpolation. As shown by Poggio and Koch [Poggio, T. & Koch, C. (1985) Proc. R. Soc. London, Ser. B 226, 303-323], quadratic variational problems, an important subset of early vision tasks, can be "solved" by linear, analog electrical, or chemical networks. However, in the presence of discontinuities, the cost function is nonquadratic, raising the question of designing efficient algorithms for computing the optimal solution. Recently, Hopfield and Tank [Hopfield, J. J. & Tank, D. W. (1985) Biol. Cybern. 52, 141-152] have shown that networks of nonlinear analog "neurons" can be effective in computing the solution of optimization problems. We show how these networks can be generalized to solve the nonconvex energy functionals of early vision. We illustrate this approach by implementing a specific analog network, solving the problem of reconstructing a smooth surface from sparse data while preserving its discontinuities. These results suggest a novel computational strategy for solving early vision problems in both biological and real-time artificial vision systems. PMID:3459172
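
    A one-dimensional sketch of such a nonconvex energy: a data term plus a "weak membrane" smoothness term truncated at alpha, so that large jumps stop being penalized. Plain gradient descent is used purely for illustration (the paper's point is that analog networks can perform this computation); all constants are invented:

    import numpy as np

    rng = np.random.default_rng(1)
    true = np.concatenate([np.zeros(50), np.ones(50)])   # step edge
    data = true + 0.1 * rng.normal(size=100)             # noisy observations

    lam, alpha, step = 4.0, 0.1, 0.05

    def grad(f):
        """(Sub)gradient of sum (f-d)^2 + sum min(lam*(df)^2, alpha)."""
        g = 2 * (f - data)
        d = np.diff(f)
        active = (lam * d ** 2) < alpha     # truncated pairs exert no force
        pull = 2 * lam * d * active
        g[:-1] -= pull
        g[1:]  += pull
        return g

    f = data.copy()
    for _ in range(2000):
        f -= step * grad(f)

    print("jump across the edge:", round(float(f[50] - f[49]), 2))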

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Segev, A.; Fang, W.

    In currency-based updates, processing a query to a materialized view has to satisfy a currency constraint which specifies the maximum time lag of the view data with respect to a transaction database. Currency-based update policies are more general than periodical, deferred, and immediate updates; they provide additional opportunities for optimization and allow updating a materialized view from other materialized views. In this paper, we present algorithms to determine the source and timing of view updates and validate the resulting cost savings through simulation results. 20 refs.

  4. New Developments in Uncertainty: Linking Risk Management, Reliability, Statistics and Stochastic Optimization

    DTIC Science & Technology

    2014-11-13

    Cm) in a given set C ⊂ IR^m. (5.7) Motivation for generalized regression comes from applications in which Y has the cost/loss orientation that we have... distribution. The corresponding probability measure on IR^m is then induced by the multivariate distribution function F_{V1,...,Vm}(v1, ..., vm) = prob{(V1... could be generated by future observations of some variables V1, ..., Vm, as above, in which case Ω would be a subset of IR^m with elements ω = (v1

  5. Advances in Highly Constrained Multi-Phase Trajectory Generation using the General Pseudospectral Optimization Software (GPOPS)

    DTIC Science & Technology

    2013-08-01

    release; distribution unlimited. PA Number 412-TW-PA-13395. Nomenclature excerpt: f, generic function; g, acceleration due to gravity; h, altitude; L, aerodynamic lift force; L, Lagrange cost; m, vehicle mass; M, Mach number; n, number of coefficients in polynomial regression; p, highest order of polynomial regression; Q, dynamic pressure; R... Method (RPM); the collocation points are defined by the roots of Legendre-Gauss-Radau (LGR) functions. GPOPS also automatically refines the "mesh" by

  6. Considerations with respect to the design of solar photovoltaic power systems for terrestrial applications

    NASA Technical Reports Server (NTRS)

    Berman, P. A.

    1972-01-01

    The various factors involved in the development of solar photovoltaic power systems for terrestrial application are discussed. The discussion covers the tradeoffs, compromises, and optimization studies which must be performed in order to develop a viable terrestrial solar array system. It is concluded that the technology now exists for the fabrication of terrestrial solar arrays but that the economics are prohibitive. Various approaches to cost reduction are presented, and the general requirements for materials and processes to be used are delineated.

  7. Efficiency of using construction machines when strengthening foundation soils

    NASA Astrophysics Data System (ADS)

    Turchin, Vadim; Yudina, Ludmila; Ivanova, Tatyana; Zhilkina, Tatyana; Sychugove, Stanislav; Mackevicius, Rimantas; Danutė, Slizyte

    2017-10-01

    The article examines the efficiency of using construction machines to strengthen foundation soils, as one way to reduce and optimize costs during construction. The analysis is based on inspection results for the soil bodies in the pile foundation base of School of General Education No. 5 in the town of Malgobek, Republic of Ingushetia. The economic efficiency achieved by shortening the construction period through the automation of production is calculated.

  8. A methodology for commonality analysis, with applications to selected space station systems

    NASA Technical Reports Server (NTRS)

    Thomas, Lawrence Dale

    1989-01-01

    The application of commonality in a system represents an attempt to reduce costs by reducing the number of unique components. A formal method for conducting commonality analysis has not been established. In this dissertation, commonality analysis is characterized as a partitioning problem. The cost impacts of commonality are quantified in an objective function, and the solution is that partition which minimizes this objective function. Clustering techniques are used to approximate a solution, and sufficient conditions are developed which can be used to verify the optimality of the solution. This method for commonality analysis is general in scope. It may be applied to the various types of commonality analysis required in the conceptual, preliminary, and detail design phases of the system development cycle.
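
    A toy rendering of commonality analysis as partitioning (all costs invented; the dissertation's objective function is richer): merging two component groups into one common design saves duplicated development cost but forces every unit to carry the dearest unit cost, and greedy agglomerative clustering approximates the minimizing partition:

    from itertools import combinations

    components = {        # name: (development cost, unit cost, units needed)
        "valve_a": (100.0, 4.0, 40),
        "valve_b": (120.0, 5.0, 25),
        "valve_c": ( 90.0, 9.0, 10),
    }

    def partition_cost(partition):
        total = 0.0
        for group in partition:
            dev  = max(components[c][0] for c in group)   # one shared design
            unit = max(components[c][1] for c in group)   # worst-case unit cost
            qty  = sum(components[c][2] for c in group)
            total += dev + unit * qty
        return total

    def greedy(partition):
        while True:
            best = min(((partition_cost([g for g in partition if g not in (a, b)]
                                        + [a | b]), a, b)
                        for a, b in combinations(partition, 2)),
                       default=None, key=lambda t: t[0])
            if best is None or best[0] >= partition_cost(partition):
                return partition
            _, a, b = best
            partition = [g for g in partition if g not in (a, b)] + [a | b]

    final = greedy([{c} for c in components])
    print("partition:", final, " cost:", partition_cost(final))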

  9. Improving multivariate Horner schemes with Monte Carlo tree search

    NASA Astrophysics Data System (ADS)

    Kuipers, J.; Plaat, A.; Vermaseren, J. A. M.; van den Herik, H. J.

    2013-11-01

    Optimizing the cost of evaluating a polynomial is a classic problem in computer science. For polynomials in one variable, Horner's method provides a scheme for producing a computationally efficient form. For multivariate polynomials it is possible to generalize Horner's method, but this leaves freedom in the order of the variables. Traditionally, greedy schemes like most-occurring variable first are used. This simple textbook algorithm has given remarkably efficient results. Finding better algorithms has proved difficult. In trying to improve upon the greedy scheme we have implemented Monte Carlo tree search, a recent search method from the field of artificial intelligence. This results in better Horner schemes and reduces the cost of evaluating polynomials, sometimes by factors up to two.
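
    The greedy most-occurring-variable-first baseline mentioned above fits in a few lines once a polynomial is stored as a map from exponent tuples to coefficients; the multiplication counting below is deliberately simplified:

    from collections import Counter

    def most_occurring(terms):
        c = Counter(i for exps in terms for i, e in enumerate(exps) if e > 0)
        return c.most_common(1)[0][0] if c else None

    def horner_mults(terms):
        """Rough count of multiplications after greedy Horner factoring."""
        v = most_occurring(terms)
        if v is None:                    # constant polynomial: nothing to do
            return 0
        inner, rest = {}, {}
        for exps, coef in terms.items():
            if exps[v] > 0:              # factor one power of x_v out
                e = list(exps); e[v] -= 1
                inner[tuple(e)] = coef
            else:
                rest[exps] = coef
        return 1 + horner_mults(inner) + horner_mults(rest)

    # x^2*y + x*y + y  ->  y*(x*(x + 1) + 1): 3 multiplications
    poly = {(2, 1): 1, (1, 1): 1, (0, 1): 1}
    naive = sum(sum(exps) for exps in poly)   # crude power-by-power count: 6
    print("naive:", naive, " horner:", horner_mults(poly))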

  10. Investigation of storage system designs and techniques for optimizing energy conservation in integrated utility systems. Volume 1: (Executive summary)

    NASA Technical Reports Server (NTRS)

    1976-01-01

    Integrated Utility Systems (IUS) have been suggested as a means of reducing the cost and conserving the nonrenewable energy resources required to supply utility services (energy, water, and waste disposal) to developments of limited size. The potential for further improving the performance and reducing the cost of IUS installations through the use of energy storage devices is examined and the results are summarized. Candidate energy storage concepts in the general areas of thermal, inertial, superconducting magnetic, electrochemical, chemical, and compressed air energy storage are assessed and the storage of thermal energy as the sensible heat of water is selected as the primary candidate for near term application to IUS.

  11. Low cost Ku-band earth terminals for voice/data/facsimile

    NASA Technical Reports Server (NTRS)

    Kelley, R. L.

    1977-01-01

    A Ku-band satellite earth terminal capable of providing two-way voice/facsimile teleconferencing, 128 kbps data, telephone, and high-speed imagery services is proposed. Optimized terminal cost and configuration are presented as a function of FDMA and TDMA approaches to multiple access. The entire terminal, from the antenna to microphones, speakers, and facsimile equipment, is considered. Component cost versus performance has been projected as a function of the size of the procurement and of predicted hardware innovations and production techniques through 1985. The lowest-cost combinations of components have been determined with a computer optimization algorithm. The system requirements, including terminal EIRP and G/T, satellite size, power per spacecraft transponder, satellite antenna characteristics, and link propagation outage, were selected using a computerized system cost/performance optimization algorithm. System cost and terminal cost and performance requirements are presented as a function of the size of a nationwide U.S. network. Service costs are compared with typical conference travel costs to show the viability of the proposed terminal.

  12. Robotic lower limb prosthesis design through simultaneous computer optimizations of human and prosthesis costs

    NASA Astrophysics Data System (ADS)

    Handford, Matthew L.; Srinivasan, Manoj

    2016-02-01

    Robotic lower limb prostheses can improve the quality of life for amputees. Development of such devices, currently dominated by long prototyping periods, could be sped up by predictive simulations. In contrast to some amputee simulations which track experimentally determined non-amputee walking kinematics, here, we explicitly model the human-prosthesis interaction to produce a prediction of the user’s walking kinematics. We obtain simulations of an amputee using an ankle-foot prosthesis by simultaneously optimizing human movements and prosthesis actuation, minimizing a weighted sum of human metabolic and prosthesis costs. The resulting Pareto optimal solutions predict that increasing prosthesis energy cost, decreasing prosthesis mass, and allowing asymmetric gaits all decrease human metabolic rate for a given speed and alter human kinematics. The metabolic rates increase monotonically with speed. Remarkably, by performing an analogous optimization for a non-amputee human, we predict that an amputee walking with an appropriately optimized robotic prosthesis can have a lower metabolic cost - even lower than assuming that the non-amputee’s ankle torques are cost-free.

  13. Optimal management of a stochastically varying population when policy adjustment is costly.

    PubMed

    Boettiger, Carl; Bode, Michael; Sanchirico, James N; Lariviere, Jacob; Hastings, Alan; Armsworth, Paul R

    2016-04-01

    Ecological systems are dynamic and policies to manage them need to respond to that variation. However, policy adjustments will sometimes be costly, which means that fine-tuning a policy to track variability in the environment very tightly will only sometimes be worthwhile. We use a classic fisheries management problem, how to manage a stochastically varying population using annually varying quotas in order to maximize profit, to examine how costs of policy adjustment change optimal management recommendations. Costs of policy adjustment (changes in fishing quotas through time) could take different forms. For example, these costs may respond to the size of the change being implemented, or there could be a fixed cost any time a quota change is made. We show how different forms of policy costs have contrasting implications for optimal policies. Though it is frequently assumed that costs to adjusting policies will dampen variation in the policy, we show that certain cost structures can actually increase variation through time. We further show that failing to account for adjustment costs has a consistently worse economic impact than would assuming these costs are present when they are not.

  14. Evidence for composite cost functions in arm movement planning: an inverse optimal control approach.

    PubMed

    Berret, Bastien; Chiovetto, Enrico; Nori, Francesco; Pozzo, Thierry

    2011-10-01

    An important issue in motor control is understanding the basic principles underlying the accomplishment of natural movements. According to optimal control theory, the problem can be stated in these terms: what cost function do we optimize to coordinate the many more degrees of freedom than necessary to fulfill a specific motor goal? This question has not received a final answer yet, since what is optimized partly depends on the requirements of the task. Many cost functions were proposed in the past, and most of them were found to be in agreement with experimental data. Therefore, the actual principles on which the brain relies to achieve a certain motor behavior are still unclear. Existing results might suggest that movements are not the result of minimizing a single cost function but rather composite cost functions. In order to better clarify this last point, we consider an innovative experimental paradigm characterized by arm reaching with target redundancy. Within this framework, we make use of an inverse optimal control technique to automatically infer the (combination of) optimality criteria that best fit the experimental data. Results show that the subjects exhibited a consistent behavior during each experimental condition, even though the target point was not prescribed in advance. Inverse and direct optimal control together reveal that the average arm trajectories were best replicated when optimizing the combination of two cost functions, namely a mix of the absolute work of torques and the integrated squared joint acceleration. Our results thus support the cost combination hypothesis and demonstrate that the recorded movements were closely linked to the combination of two complementary functions related to mechanical energy expenditure and joint-level smoothness.

  15. Selected Topics on Decision Making for Electric Vehicles

    NASA Astrophysics Data System (ADS)

    Sweda, Timothy Matthew

    Electric vehicles (EVs) are an attractive alternative to conventional gasoline-powered vehicles due to their lower emissions, fuel costs, and maintenance costs. Range anxiety, or the fear of running out of charge prior to reaching one's destination, remains a significant concern, however. In this dissertation, we address the issue of range anxiety by developing a set of decision support tools for both charging infrastructure providers and EV drivers. In Chapter 1, we present an agent-based information system for identifying patterns in residential EV ownership and driving activities to enable strategic deployment of new charging infrastructure. Driver agents consider their own driving activities within the simulated environment, in addition to the presence of charging stations and the vehicle ownership of others in their social networks, when purchasing a new vehicle. The Chicagoland area is used as a case study to demonstrate the model, and several deployment scenarios are analyzed. In Chapter 2, we address the problem of finding an optimal recharging policy for an EV along a given path. The path consists of a sequence of nodes, each representing a charging station, and the driver must decide where to stop and how much to recharge at each stop. We present efficient algorithms for finding an optimal policy in general instances with deterministic travel costs and homogeneous charging stations, and also for two specialized cases. In addition, we develop two heuristic procedures that we characterize analytically and explore empirically. We further analyze and test our solution methods on model variations that include stochastic travel costs and nonhomogeneous charging stations. In Chapter 3, we study the problem of finding an optimal routing and recharging policy for an electric vehicle in a grid network. Each node in the network represents a charging station and has an associated probability of being available at any point in time or occupied by another vehicle. We present an efficient algorithm for finding an optimal a priori route and recharging policy as well as heuristic methods for finding adaptive policies. We conduct numerical experiments to demonstrate the empirical performance of our solutions.
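
    For the deterministic case with homogeneous stations, the recharging decision along a fixed path reduces to a small dynamic program; in the sketch below (leg energies, prices, and battery size all hypothetical), charge is discretized to integer kWh:

    LEG_KWH = [8, 6, 10, 7]                    # energy to reach the next station
    PRICE   = [0.30, 0.20, 0.50, 0.25, 0.0]    # $/kWh at stations 0..4
    BATTERY = 16                               # usable capacity, kWh
    INF = float("inf")

    # best[q] = minimum cost of standing at the current station with charge q
    best = [0.0 if q == BATTERY else INF for q in range(BATTERY + 1)]  # start full

    for i, need in enumerate(LEG_KWH):
        nxt = [INF] * (BATTERY + 1)
        for q in range(BATTERY + 1):
            if best[q] == INF:
                continue
            for buy in range(BATTERY - q + 1):     # how much to recharge here
                q2 = q + buy - need
                if q2 >= 0:                        # must reach the next station
                    nxt[q2] = min(nxt[q2], best[q] + buy * PRICE[i])
        best = nxt

    print("minimum recharging cost: $%.2f" % min(best))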

  16. State estimation bias induced by optimization under uncertainty and error cost asymmetry is likely reflected in perception.

    PubMed

    Shimansky, Y P

    2011-05-01

    It is well known from numerous studies that perception can be significantly affected by intended action in many everyday situations, indicating that perception and related decision-making is not a simple, one-way sequence, but a complex iterative cognitive process. However, the underlying functional mechanisms are yet unclear. Based on an optimality approach, a quantitative computational model of one such mechanism has been developed in this study. It is assumed in the model that significant uncertainty about task-related parameters of the environment results in parameter estimation errors and an optimal control system should minimize the cost of such errors in terms of the optimality criterion. It is demonstrated that, if the cost of a parameter estimation error is significantly asymmetrical with respect to error direction, the tendency to minimize error cost creates a systematic deviation of the optimal parameter estimate from its maximum likelihood value. Consequently, optimization of parameter estimate and optimization of control action cannot be performed separately from each other under parameter uncertainty combined with asymmetry of estimation error cost, thus making the certainty equivalence principle non-applicable under those conditions. A hypothesis that not only the action, but also perception itself is biased by the above deviation of parameter estimate is supported by ample experimental evidence. The results provide important insights into the cognitive mechanisms of interaction between sensory perception and planning an action under realistic conditions. Implications for understanding related functional mechanisms of optimal control in the CNS are discussed.

  17. Schedule Matters: Understanding the Relationship between Schedule Delays and Costs on Overruns

    NASA Technical Reports Server (NTRS)

    Majerowicz, Walt; Shinn, Stephen A.

    2016-01-01

    This paper examines the relationship between schedule delays and cost overruns on complex projects. It is generally accepted by many project practitioners that cost overruns are directly related to schedule delays. But what does "directly related to" actually mean? Some reasons or root causes for schedule delays and associated cost overruns are obvious, if only in hindsight. For example, unrealistic estimates, supply chain difficulties, insufficient schedule margin, technical problems, scope changes, or the occurrence of risk events can negatively impact schedule performance. Other factors driving schedule delays and cost overruns may be less obvious and more difficult to quantify. Examples of these less obvious factors include project complexity, flawed estimating assumptions, over-optimism, political factors, "black swan" events, or even poor leadership and communication. Indeed, is it even possible the schedule itself could be a source of delay and subsequent cost overrun? Through literature review, surveys of project practitioners, and the authors' own experience on NASA programs and projects, the authors will categorize and examine the various factors affecting the relationship between project schedule delays and cost growth. The authors will also propose some ideas for organizations to consider to help create an awareness of the factors which could cause or influence schedule delays and associated cost growth on complex projects.

  18. Cost Overrun Optimism: Fact or Fiction

    DTIC Science & Technology

    2016-02-29

    Base, OH. Horngren, C. T. (1990). In G. Foster (Ed.), Cost accounting: A managerial emphasis (7th ed.). Englewood Cliffs, NJ: Prentice Hall. Morrison... Accounting Office. Gansler, J. S. (1989). Affording defense. Cambridge, MA: The MIT Press. Heise, S. R. (1991). A review of cost performance index... Cost Overrun Optimism: FACT or FICTION? Maj David D. Christensen, USAF. Program managers are advocates by

  19. Coordinated and uncoordinated optimization of networks

    NASA Astrophysics Data System (ADS)

    Brede, Markus

    2010-06-01

    In this paper, we consider spatial networks that realize a balance between an infrastructure cost (the cost of wire needed to connect the network in space) and communication efficiency, measured by average shortest path length. A global optimization procedure yields network topologies in which this balance is optimized. These are compared with network topologies generated by a competitive process in which each node strives to optimize its own cost-communication balance. Three phases are observed in globally optimal configurations for different cost-communication trade-offs: (i) regular small worlds, (ii) starlike networks, and (iii) trees with a center of interconnected hubs. In the latter regime, i.e., for very expensive wire, power laws in the link length distributions P(w) ∝ w^-α are found, which can be explained by a hierarchical organization of the networks. In contrast, in the local optimization process the presence of sharp transitions between different network regimes depends on the dimension of the underlying space. Whereas for d=∞ sharp transitions between fully connected networks, regular small worlds, and highly cliquish periphery-core networks are found, for d=1 sharp transitions are absent and the power law behavior in the link length distribution persists over a much wider range of link cost parameters. The measured power law exponents are in agreement with the hypothesis that the locally optimized networks consist of multiple overlapping suboptimal hierarchical trees.
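
    The cost-communication objective can be evaluated directly for hand-built topologies; the sketch below (weighting and geometry purely illustrative) scores a ring against a star on twelve points around a unit circle:

    import math
    import networkx as nx

    n = 12
    pos = {i: (math.cos(2 * math.pi * i / n), math.sin(2 * math.pi * i / n))
           for i in range(n)}

    def objective(G, lam):
        wire = sum(math.dist(pos[u], pos[v]) for u, v in G.edges)  # wire cost
        L = nx.average_shortest_path_length(G)                     # mean hop count
        return (1 - lam) * L + lam * wire

    ring = nx.cycle_graph(n)
    star = nx.star_graph(n - 1)        # node 0 as the hub
    for lam in (0.1, 0.5, 0.9):
        print("lambda=%.1f  ring=%.2f  star=%.2f"
              % (lam, objective(ring, lam), objective(star, lam)))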

  20. Sizing a rainwater harvesting cistern by minimizing costs

    NASA Astrophysics Data System (ADS)

    Pelak, Norman; Porporato, Amilcare

    2016-10-01

    Rainwater harvesting (RWH) has the potential to reduce water-related costs by providing an alternate source of water, in addition to relieving pressure on public water sources and reducing stormwater runoff. Existing methods for determining the optimal size of the cistern component of a RWH system have various drawbacks, such as specificity to a particular region, dependence on numerical optimization, and/or failure to consider the costs of the system. In this paper a formulation is developed for the optimal cistern volume which incorporates the fixed and distributed costs of a RWH system while also taking into account the random nature of the depth and timing of rainfall, with a focus on RWH to supply domestic, nonpotable uses. With rainfall inputs modeled as a marked Poisson process, and by comparing the costs associated with building a cistern with the costs of externally supplied water, an expression for the optimal cistern volume is found which minimizes the water-related costs. The volume is a function of the roof area, water use rate, climate parameters, and costs of the cistern and of the external water source. This analytically tractable expression makes clear the dependence of the optimal volume on the input parameters. An analysis of the rainfall partitioning also characterizes the efficiency of a particular RWH system configuration and its potential for runoff reduction. The results are compared to the RWH system at the Duke Smart Home in Durham, NC, USA to show how the method could be used in practice.
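
    A Monte Carlo counterpart of that cost comparison (the paper derives the optimum analytically; every number below is invented): rainfall arrives as a marked Poisson process with exponentially distributed depths, and the water-related cost is the amortized cistern cost plus the cost of the externally supplied shortfall:

    import numpy as np

    rng = np.random.default_rng(0)
    LAM, ALPHA   = 0.25, 10.0    # storm frequency (1/day), mean storm depth (mm)
    ROOF, DEMAND = 100.0, 150.0  # roof area (m^2), household demand (liters/day)
    C_VOL, C_FIX = 0.005, 40.0   # amortized $/liter-year and fixed $/year
    C_WATER      = 0.004         # $/liter of externally supplied water
    DAYS = 3650

    storm  = rng.random(DAYS) < LAM
    depths = rng.exponential(ALPHA, DAYS) * storm          # mm of rain per day

    def annual_cost(volume):
        store, shortfall = 0.0, 0.0
        for d in range(DAYS):
            store = min(store + depths[d] * ROOF, volume)  # 1 mm on 1 m^2 = 1 liter
            use = min(store, DEMAND)
            store -= use
            shortfall += DEMAND - use
        external = shortfall / (DAYS / 365.0)              # liters per year
        return C_FIX + C_VOL * volume + C_WATER * external

    best = min(range(500, 20001, 500), key=annual_cost)
    print("optimal cistern volume is about %d liters" % best)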

  1. Electronic decision support in general practice. What's the hold up?

    PubMed

    Liaw, S T; Schattner, P

    2003-11-01

    The uptake of computers in Australian general practice has been for administrative use and prescribing, but the development of electronic decision support (EDS) has been particularly slow. Therefore, computers are not being used to their full potential in assisting general practitioners to care for their patients. This article examines current barriers to EDS in general practice and possible strategies to increase its uptake. Barriers to the uptake of EDS include a lack of a business case, shifting of costs for data collection and management to the clinician, uncertainty about the optimal level of decision support, lack of technical and semantic standards, and resistance to EDS use by the time conscious GP. There is a need for a more strategic and attractive incentives program, greater national coordination, and more effective collaboration between government, the computer industry and the medical profession if current inertia is to be overcome.

  2. Development of the IBSAL-SimMOpt Method for the Optimization of Quality in a Corn Stover Supply Chain

    DOE PAGES

    Chavez, Hernan; Castillo-Villar, Krystel; Webb, Erin

    2017-08-01

    Variability in the physical characteristics of feedstock has a relevant effect on the reactor's reliability and operating cost. Most of the models developed to optimize biomass supply chains have failed to quantify the effect of biomass quality, and of the preprocessing operations required to meet biomass specifications, on overall cost and performance. The Integrated Biomass Supply Analysis and Logistics (IBSAL) model estimates the harvesting, collection, transportation, and storage cost while considering the stochastic behavior of the field-to-biorefinery supply chain. This paper proposes an IBSAL-SimMOpt (Simulation-based Multi-Objective Optimization) method for optimizing the biomass quality and the costs associated with the efforts needed to meet conversion technology specifications. The method is developed in two phases. In the first phase, a SimMOpt tool that interacts with the extended IBSAL is developed. In the second phase, the baseline IBSAL model is extended so that the costs of meeting specifications and/or the penalties for failing to meet them are considered. The IBSAL-SimMOpt method is designed to optimize the quality characteristics of biomass, the cost of activities intended to improve feedstock quality, and the penalization cost. A case study based on 1916 farms in Ontario, Canada is considered for testing the proposed method. Analysis of the results demonstrates that this method is able to find a high-quality set of non-dominated solutions.

  4. Optimal inventories for overhaul of repairable redundant systems - A Markov decision model

    NASA Technical Reports Server (NTRS)

    Schaefer, M. K.

    1984-01-01

    A Markovian decision model was developed to calculate the optimal inventory of repairable spare parts for an avionics control system for commercial aircraft. Total expected shortage costs, repair costs, and holding costs are minimized for a machine containing a single system of redundant parts. Transition probabilities are calculated for each repair state and repair rate, and optimal spare parts inventory and repair strategies are determined through linear programming. The linear programming solutions are given in a table.
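
    A small occupation-measure linear program in the spirit of this model (transition probabilities and costs invented): states are spares on hand, actions are normal versus expedited repair, and the optimal policy is read off from where each state's stationary probability mass sits:

    import numpy as np
    from scipy.optimize import linprog

    S, A = 3, 2                  # spares on hand 0..2; 0 = normal, 1 = expedited
    P = np.zeros((A, S, S))
    P[0] = [[0.6, 0.4, 0.0],     # normal repair: spares return slowly
            [0.3, 0.5, 0.2],
            [0.0, 0.3, 0.7]]
    P[1] = [[0.2, 0.8, 0.0],     # expedited repair: faster but costlier
            [0.1, 0.4, 0.5],
            [0.0, 0.1, 0.9]]
    cost = np.array([[50.0, 70.0],    # shortage risk dominates with 0 spares
                     [10.0, 25.0],
                     [ 2.0, 18.0]])   # holding cost dominates with 2 spares

    c = cost.flatten()                        # occupation measures x[s, a]
    A_eq = np.zeros((S + 1, S * A))
    for s2 in range(S):                       # stationarity of the chain
        for s in range(S):
            for a in range(A):
                A_eq[s2, s * A + a] = (s == s2) - P[a, s, s2]
    A_eq[S, :] = 1.0                          # total probability mass is 1
    b_eq = np.zeros(S + 1); b_eq[S] = 1.0

    res = linprog(c, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (S * A), method="highs")
    x = res.x.reshape(S, A)
    print("average cost: %.2f  policy (state -> action):" % res.fun,
          x.argmax(axis=1))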

  5. Post-treatment of molasses wastewater by electrocoagulation and process optimization through response surface analysis.

    PubMed

    Tsioptsias, C; Petridis, D; Athanasakis, N; Lemonidis, I; Deligiannis, A; Samaras, P

    2015-12-01

    Molasses wastewater is a high-strength effluent of food industries such as distilleries and sugar and yeast production plants. It is characterized by a dark brown color and a high content of recalcitrant substances such as melanoidins. In this study, electrocoagulation (EC) was studied as a post-treatment step for biologically treated molasses wastewater with high nitrogen content obtained from a baker's yeast industry. Iron and copper electrodes were used in various forms; the influence and interaction of current density, molasses wastewater dilution, and reaction time on COD, color, ammonium, and nitrate removal rates and on operating cost were studied and optimized through Box-Behnken response surface analysis. Reaction time varied from 0.5 to 4 h, current density from 5 to 40 mA/cm^2, and dilution from 0 to 90% (v/v, expressed as water concentration). pH, conductivity, and temperature measurements were also carried out during each experiment. Preliminary experiments showed that the application of aeration and sample dilution considerably influenced the kinetics of the process. The results showed that COD removal varied between 10 and 54%, corresponding to an operating cost ranging from 0.2 to 33 euro/kg COD removed. Significant removal rates were obtained for nitrogen as nitrate and ammonium (e.g., 70% ammonium removal). A linear relation of COD and ammonium to the design parameters was observed, while operating cost and nitrate removal responded as curvilinear functions. A low ratio of electrode surface to treated volume was used, associated with a low investment cost; in addition, iron wastes such as iron filings from lathes could be utilized as low-cost electrodes, keeping the operating cost of electrode replacement low. In general, electrocoagulation proved to be an effective and low-cost process for the additional removal of COD and nitrogen and for color reduction in biologically treated molasses wastewater. Treated effluent samples of good quality were produced by EC, with COD, NH4-N, and NO3-N concentrations of 180, 52, and 2 mg/l, respectively. Response surface analysis revealed that optimized conditions could be established at moderate molasses wastewater dilution (e.g., 45%), 3.5 h treatment time, and 33 mA/cm^2 current density. Copyright © 2015 Elsevier Ltd. All rights reserved.
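
    The response-surface mechanics can be sketched on synthetic data: three coded factors stand in for current density, dilution, and reaction time, and a full quadratic model is fit by least squares (the generating function and noise are invented; only the fitting machinery is generic):

    import numpy as np

    rng = np.random.default_rng(3)
    X = rng.uniform(-1, 1, size=(15, 3))            # coded factor levels
    y = (30 + 8 * X[:, 0] + 5 * X[:, 1] + 3 * X[:, 2]
         - 6 * X[:, 0] ** 2 - 2 * X[:, 1] * X[:, 2]
         + rng.normal(0, 1, 15))                    # synthetic COD removal, %

    def design_matrix(X):
        x1, x2, x3 = X.T
        return np.column_stack([np.ones(len(X)), x1, x2, x3,
                                x1 * x2, x1 * x3, x2 * x3,
                                x1 ** 2, x2 ** 2, x3 ** 2])

    beta, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)
    for name, b in zip(["1", "x1", "x2", "x3", "x1x2", "x1x3", "x2x3",
                        "x1^2", "x2^2", "x3^2"], beta):
        print("%-5s %+7.2f" % (name, b))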

  6. A Chaotic Particle Swarm Optimization-Based Heuristic for Market-Oriented Task-Level Scheduling in Cloud Workflow Systems.

    PubMed

    Li, Xuejun; Xu, Jia; Yang, Yun

    2015-01-01

    A cloud workflow system is a platform service based on cloud computing that facilitates the automation of workflow applications. Among the factors distinguishing cloud workflow systems from their counterparts, the market-oriented business model is one of the most prominent. The optimization of task-level scheduling in cloud workflow systems is a hot topic. Because the scheduling is an NP problem, Ant Colony Optimization (ACO) and Particle Swarm Optimization (PSO) have been proposed to optimize the cost. However, they are prone to premature convergence during optimization and therefore cannot effectively reduce the cost. To address these problems, a Chaotic Particle Swarm Optimization (CPSO) algorithm with a chaotic sequence and an adaptive inertia weight factor is applied to task-level scheduling. The highly random chaotic sequence improves the diversity of solutions, and its regularity assures good global convergence. The adaptive inertia weight factor depends on the estimated cost value and lets the scheduling avoid premature convergence by properly balancing global and local exploration. Experimental simulation shows that the cost obtained by our scheduling is consistently lower than that of the two representative counterparts.
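
    The two CPSO ingredients named above, a logistic-map chaotic sequence in place of uniform random draws and an inertia weight that adapts to each particle's current cost, can be sketched on a stand-in test function (all constants illustrative; the paper applies this to workflow scheduling):

    import numpy as np

    def cost(x):                       # stand-in for the scheduling cost
        return float(np.sum(x ** 2))

    rng = np.random.default_rng(4)
    D, N_PART, ITERS = 5, 20, 100
    pos = rng.uniform(-5, 5, (N_PART, D))
    vel = np.zeros((N_PART, D))
    z = rng.uniform(0.01, 0.99, (N_PART, D))   # chaotic state, avoiding fixed points

    pbest = pos.copy()
    pbest_f = np.array([cost(p) for p in pos])
    gbest = pbest[pbest_f.argmin()].copy()

    for _ in range(ITERS):
        z = 4.0 * z * (1.0 - z)                # logistic map, chaotic at r = 4
        f = np.array([cost(p) for p in pos])
        w = 0.4 + 0.5 * (f / (f.max() + 1e-12))[:, None]  # worse cost -> explore
        vel = w * vel + 2.0 * z * (pbest - pos) + 2.0 * (1 - z) * (gbest - pos)
        pos = pos + vel
        f = np.array([cost(p) for p in pos])
        better = f < pbest_f
        pbest[better] = pos[better]
        pbest_f[better] = f[better]
        gbest = pbest[pbest_f.argmin()].copy()

    print("best cost found: %.3e" % pbest_f.min())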

  7. Analysis and optimization of hybrid electric vehicle thermal management systems

    NASA Astrophysics Data System (ADS)

    Hamut, H. S.; Dincer, I.; Naterer, G. F.

    2014-02-01

    In this study, the thermal management system of a hybrid electric vehicle is optimized using single and multi-objective evolutionary algorithms in order to maximize the exergy efficiency and minimize the cost and environmental impact of the system. The objective functions are defined, and decision variables, along with their respective system constraints, are selected for the analysis. In the multi-objective optimization, a Pareto frontier is obtained and a single desirable optimal solution is selected based on the LINMAP decision-making process. The corresponding solutions are compared against the exergetic, exergoeconomic, and exergoenvironmental single-objective optimization results. The results show that the exergy efficiency, total cost rate, and environmental impact rate for the baseline system are 0.29, 28 ¢ h^-1, and 77.3 mPts h^-1, respectively. Moreover, based on the exergoeconomic optimization, 14% higher exergy efficiency and 5% lower cost can be achieved, compared to baseline parameters, at the expense of a 14% increase in the environmental impact. Based on the exergoenvironmental optimization, a 13% higher exergy efficiency and 5% lower environmental impact can be achieved at the expense of a 27% increase in the total cost.

  9. Method for Household Refrigerators Efficiency Increasing

    NASA Astrophysics Data System (ADS)

    Lebedev, V. V.; Sumzina, L. V.; Maksimov, A. V.

    2017-11-01

    This work demonstrates the relevance of optimizing working-process parameters in air conditioning systems. The research is performed using simulation modeling. The parameter optimization criteria are considered, the target functions are analyzed, and the key factors of technical and economic optimization are discussed. In the multi-objective optimization of the system, the optimal solution is found by minimizing a two-objective vector constructed, via the Pareto method of linear weighted compromises, from the target functions for total capital costs and total operating costs. The tasks are solved in the MathCAD environment. The results show that the technical and economic indicators of air conditioning systems deviate considerably from their minimum values even in regions adjacent to the optimal solutions, and that these deviations grow substantially as the technical parameters move away from the values that are optimal for both capital investment and operating costs. Producing and operating conditioners whose parameters deviate considerably from the optimal values will increase material and power costs. The research makes it possible to establish the boundaries of the optimal-value region for the technical and economic parameters in air conditioning system design.

  10. The effect of clustering on lot quality assurance sampling: a probabilistic model to calculate sample sizes for quality assessments

    PubMed Central

    2013-01-01

    Background: Traditional Lot Quality Assurance Sampling (LQAS) designs assume observations are collected using simple random sampling. Alternatively, randomly sampling clusters of observations and then individuals within clusters reduces costs but decreases the precision of the classifications. In this paper, we develop a general framework for designing the cluster(C)-LQAS system and illustrate the method with the design of data quality assessments for the community health worker program in Rwanda. Results: To determine sample size and decision rules for C-LQAS, we use the beta-binomial distribution to account for inflated risk of errors introduced by sampling clusters at the first stage. We present general theory and code for sample size calculations. The C-LQAS sample sizes provided in this paper constrain misclassification risks below user-specified limits. Multiple C-LQAS systems meet the specified risk requirements, but numerous considerations, including per-cluster versus per-individual sampling costs, help identify optimal systems for distinct applications. Conclusions: We show the utility of C-LQAS for data quality assessments, but the method generalizes to numerous applications. This paper provides the necessary technical detail and supplemental code to support the design of C-LQAS for specific programs. PMID:24160725
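
    The beta-binomial risk calculation at the core of C-LQAS can be sketched directly; the acceptance threshold, defect rates, and overdispersion values below are illustrative, not those designed for the Rwanda program:

    from scipy.stats import betabinom

    n, d = 50, 5                     # sample size and acceptance threshold
    p_good, p_bad = 0.05, 0.20       # acceptable vs unacceptable defect rates

    def beta_params(p, rho):
        """Beta parameters with mean p and intra-cluster correlation rho."""
        k = (1 - rho) / rho
        return p * k, (1 - p) * k

    for rho in (0.001, 0.05, 0.15):  # near-SRS, mild, strong clustering
        a_g, b_g = beta_params(p_good, rho)
        a_b, b_b = beta_params(p_bad, rho)
        reject_good = 1 - betabinom(n, a_g, b_g).cdf(d)   # "producer" risk
        accept_bad  = betabinom(n, a_b, b_b).cdf(d)       # "consumer" risk
        print("rho=%.3f  P(reject good)=%.3f  P(accept bad)=%.3f"
              % (rho, reject_good, accept_bad))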

  11. The effect of clustering on lot quality assurance sampling: a probabilistic model to calculate sample sizes for quality assessments.

    PubMed

    Hedt-Gauthier, Bethany L; Mitsunaga, Tisha; Hund, Lauren; Olives, Casey; Pagano, Marcello

    2013-10-26

    Traditional Lot Quality Assurance Sampling (LQAS) designs assume observations are collected using simple random sampling. Alternatively, randomly sampling clusters of observations and then individuals within clusters reduces costs but decreases the precision of the classifications. In this paper, we develop a general framework for designing the cluster(C)-LQAS system and illustrate the method with the design of data quality assessments for the community health worker program in Rwanda. To determine sample size and decision rules for C-LQAS, we use the beta-binomial distribution to account for inflated risk of errors introduced by sampling clusters at the first stage. We present general theory and code for sample size calculations. The C-LQAS sample sizes provided in this paper constrain misclassification risks below user-specified limits. Multiple C-LQAS systems meet the specified risk requirements, but numerous considerations, including per-cluster versus per-individual sampling costs, help identify optimal systems for distinct applications. We show the utility of C-LQAS for data quality assessments, but the method generalizes to numerous applications. This paper provides the necessary technical detail and supplemental code to support the design of C-LQAS for specific programs.

  12. Recursive Optimization of Digital Circuits

    DTIC Science & Technology

    1990-12-14

    Table-of-contents excerpt: Obverse-Specification (A-23); A.14 Non-MDS Optimization of SAMPLE (A-24); Appendix B, BORIS Recursive Optimization System Software (B-1): B.1 DESIGN.S File (B-2); B.2 PARSE.S File (B-11); B.3 TABULAR.S File (B-22); B.4 MDS.S File (B-28); B.5 COST.S File.

  13. "Optimal" Size and Schooling: A Relative Concept.

    ERIC Educational Resources Information Center

    Swanson, Austin D.

    Issues in economies of scale and optimal school size are discussed in this paper, which seeks to explain the curvilinear nature of the educational cost curve as a function of "transaction costs" and to establish "optimal size" as a relative concept. Based on the argument that educational consolidation has facilitated diseconomies of scale, the…

  14. Advanced in-duct sorbent injection for SO{sub 2} control. Topical report No. 2, Subtask 2.2: Design optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rosenhoover, W.A.; Stouffer, M.R.; Withum, J.A.

    1994-12-01

    The objective of this research project is to develop second-generation duct injection technology as a cost-effective SO2 control option for the 1990 Clean Air Act Amendments. Research is focused on the Advanced Coolside process, which has shown the potential for achieving the performance targets of 90% SO2 removal and 60% sorbent utilization. In Subtask 2.2, Design Optimization, process improvement was sought by optimizing sorbent recycle and by optimizing process equipment for reduced cost. The pilot plant recycle testing showed that 90% SO2 removal could be achieved at sorbent utilizations up to 75%. This testing also showed that the Advanced Coolside process has the potential to achieve very high removal efficiency (90 to greater than 99%). Two alternative contactor designs were developed, tested and optimized through pilot plant testing; the improved designs will reduce process costs significantly, while maintaining operability and performance essential to the process. Also, sorbent recycle handling equipment was optimized to reduce cost.

  15. Optimal design of the satellite constellation arrangement reconfiguration process

    NASA Astrophysics Data System (ADS)

    Fakoor, Mahdi; Bakhtiari, Majid; Soleymani, Mahshid

    2016-08-01

    In this article, a novel approach is introduced for satellite constellation reconfiguration based on Lambert's theorem. Several critical problems arise in the reconfiguration phase, such as minimizing the overall fuel cost, avoiding collisions between the satellites in the final orbital pattern, and determining the maneuvers needed to deploy the satellites at the desired positions in the target constellation. To implement the reconfiguration phase of the satellite constellation arrangement at minimal cost, the hybrid Invasive Weed Optimization/Particle Swarm Optimization (IWO/PSO) algorithm is used to design sub-optimal transfer orbits for the satellites existing in the constellation. Also, the dynamics of the problem are modeled in such a way that optimal assignment of the satellites to the initial and target orbits and optimal orbital transfer are combined in one step. Finally, we claim that our presented idea, i.e., coupled non-simultaneous flight of satellites from the initial orbital pattern, leads to minimal cost. The obtained results show that by employing the presented method, the cost of the reconfiguration process is clearly reduced.

  16. Optimal control of malaria: combining vector interventions and drug therapies.

    PubMed

    Khamis, Doran; El Mouden, Claire; Kura, Klodeta; Bonsall, Michael B

    2018-04-24

    The sterile insect technique and transgenic equivalents are considered promising tools for controlling vector-borne disease in an age of increasing insecticide and drug-resistance. Combining vector interventions with artemisinin-based therapies may achieve the twin goals of suppressing malaria endemicity while managing artemisinin resistance. While the cost-effectiveness of these controls has been investigated independently, their combined usage has not been dynamically optimized in response to ecological and epidemiological processes. An optimal control framework based on coupled models of mosquito population dynamics and malaria epidemiology is used to investigate the cost-effectiveness of combining vector control with drug therapies in homogeneous environments with and without vector migration. The costs of endemic malaria are weighed against the costs of administering artemisinin therapies and releasing modified mosquitoes using various cost structures. Larval density dependence is shown to reduce the cost-effectiveness of conventional sterile insect releases compared with transgenic mosquitoes with a late-acting lethal gene. Using drug treatments can reduce the critical vector control release ratio necessary to cause disease fadeout. Combining vector control and drug therapies is the most effective and efficient use of resources, and using optimized implementation strategies can substantially reduce costs.

  17. Launch Vehicle Propulsion Parameter Design Multiple Selection Criteria

    NASA Technical Reports Server (NTRS)

    Shelton, Joey Dewayne

    2004-01-01

    The optimization tool described herein addresses and emphasizes the use of computer tools to model a system and focuses on a concept development approach for a liquid hydrogen/liquid oxygen single-stage-to-orbit system, but more particularly the development of the optimized system using new techniques. This methodology uses new and innovative tools to run Monte Carlo simulations, genetic algorithm solvers, and statistical models in order to optimize a design concept. The concept launch vehicle and propulsion system were modeled and optimized to determine the best design for weight and cost by varying design and technology parameters. Uncertainty levels were applied using Monte Carlo simulations, and the model output was compared to the National Aeronautics and Space Administration Space Shuttle Main Engine. Several key conclusions from the model results are summarized here. First, the Gross Liftoff Weight and Dry Weight were 67% higher for the case minimizing Design, Development, Test and Evaluation cost than the weights determined by the case minimizing Gross Liftoff Weight. In turn, the Design, Development, Test and Evaluation cost was 53% higher for the optimized Gross Liftoff Weight case than the cost determined by the case minimizing Design, Development, Test and Evaluation cost. Therefore, a 53% increase in Design, Development, Test and Evaluation cost results in a 67% reduction in Gross Liftoff Weight. Secondly, the tool outputs define the sensitivity of propulsion parameters, technology and cost factors, and how these parameters differ when cost and weight are optimized separately. A key finding was that, for a Space Shuttle Main Engine thrust level, an oxidizer/fuel ratio of 6.6 resulted in the lowest Gross Liftoff Weight, rather than the ratio of 5.2 that maximizes specific impulse, demonstrating the relationships between specific impulse, engine weight, tank volume and tank weight. Lastly, the optimum chamber pressure for Gross Liftoff Weight minimization was 2713 pounds per square inch, compared to 3162 for the Design, Development, Test and Evaluation cost optimization case; this range brackets the approximately 3000 pounds per square inch of the Space Shuttle Main Engine.
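
    The abstract's Monte Carlo step can be illustrated with a toy liftoff-weight model: sample uncertain propulsion parameters, push each sample through the ideal rocket equation, and read off percentiles of the resulting Gross Liftoff Weight. The mass-fraction relation and all distributions below are placeholders, not the study's vehicle model.

      # Sketch: Monte Carlo uncertainty on single-stage Gross Liftoff
      # Weight. Parameter distributions and the payload/delta-v targets
      # are invented for illustration only.
      import numpy as np

      rng = np.random.default_rng(8)
      n = 100_000
      isp = rng.normal(452.0, 3.0, n)          # specific impulse (s)
      dry_frac = rng.normal(0.10, 0.005, n)    # dry mass per unit propellant
      payload, dv, g0 = 25_000.0, 9_200.0, 9.80665

      mr = np.exp(dv / (g0 * isp))             # ideal rocket equation
      # GLOW from payload, mass ratio and dry fraction (single stage)
      glow = payload / (1.0 / mr - dry_frac * (1.0 - 1.0 / mr))
      p5, p50, p95 = np.percentile(glow, [5, 50, 95])
      print(f"GLOW 5/50/95th pct: {p5:,.0f} / {p50:,.0f} / {p95:,.0f} kg")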

  18. Technical and economical optimization of a full-scale poultry manure treatment process: total ammonia nitrogen balance.

    PubMed

    Alejo-Alvarez, Luz; Guzmán-Fierro, Víctor; Fernández, Katherina; Roeckel, Marlene

    2016-11-01

    A full-scale process for the treatment of 80 tons per day of poultry manure was designed and optimized. A total ammonia nitrogen (TAN) balance was performed at steady state, considering the stoichiometry and the kinetic data from anaerobic digestion (AD) and anaerobic ammonia oxidation. The equipment, reactor design, investment costs, and operational costs were considered. The volume and cost objective functions optimized the process in terms of three variables: the water recycle ratio, the protein conversion during AD, and the TAN conversion in the process. The process was compared with and without water recycle; savings of 70% and 43% in the annual fresh water consumption and the heating costs, respectively, were achieved. The optimal process complies with the Chilean environmental legislation limit of 0.05 g total nitrogen/L.

  19. Coastal Adaptation Planning for Sea Level Rise and Extremes: A Global Model for Adaptation Decision-making at the Local Level Given Uncertain Climate Projections

    NASA Astrophysics Data System (ADS)

    Turner, D.

    2014-12-01

    Understanding the potential economic and physical impacts of climate change on coastal resources involves evaluating a number of distinct adaptive responses. This paper presents a tool for such analysis, a spatially-disaggregated optimization model for adaptation to sea level rise (SLR) and storm surge, the Coastal Impact and Adaptation Model (CIAM). This decision-making framework fills a gap between very detailed studies of specific locations and overly aggregate global analyses. While CIAM is global in scope, the optimal adaptation strategy is determined at the local level, evaluating over 12,000 coastal segments as described in the DIVA database (Vafeidis et al. 2006). The decision to pursue a given adaptation measure depends on local socioeconomic factors like income, population, and land values and how they develop over time, relative to the magnitude of potential coastal impacts, based on geophysical attributes like inundation zones and storm surge. For example, the model's decision to protect or retreat considers the costs of constructing and maintaining coastal defenses versus those of relocating people and capital to minimize damages from land inundation and coastal storms. Uncertain storm surge events are modeled with a generalized extreme value distribution calibrated to data on local surge extremes. Adaptation is optimized for the near-term outlook, in an "act then learn then act" framework that is repeated over the model time horizon. This framework allows the adaptation strategy to be flexibly updated, reflecting the process of iterative risk management. CIAM provides new estimates of the economic costs of SLR; moreover, these detailed results can be compactly represented in a set of adaptation and damage functions for use in integrated assessment models. Alongside the optimal result, CIAM evaluates suboptimal cases and finds that global costs could increase by an order of magnitude, illustrating the importance of adaptive capacity and coastal policy.
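
    The surge component the abstract describes amounts to fitting a generalized extreme value distribution to local annual maxima and reading off return levels. A minimal sketch with scipy follows, using synthetic annual-maximum data in place of the DIVA segment records.

      # Sketch: calibrate a GEV distribution to annual-maximum surge
      # heights and estimate a 100-year return level, per coastal segment
      # as in CIAM. The data are synthetic stand-ins for gauge extremes.
      import numpy as np
      from scipy.stats import genextreme

      annual_max_surge = genextreme.rvs(c=-0.1, loc=1.2, scale=0.3,
                                        size=60, random_state=0)

      c, loc, scale = genextreme.fit(annual_max_surge)  # max. likelihood
      surge_100yr = genextreme.ppf(1 - 1 / 100, c, loc=loc, scale=scale)
      print(f"estimated 100-year surge: {surge_100yr:.2f} m")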

  20. Framework to evaluate the worth of hydraulic conductivity data for optimal groundwater resources management in ecologically sensitive areas

    NASA Astrophysics Data System (ADS)

    Feyen, Luc; Gorelick, Steven M.

    2005-03-01

    We propose a framework that combines simulation optimization with Bayesian decision analysis to evaluate the worth of hydraulic conductivity data for optimal groundwater resources management in ecologically sensitive areas. A stochastic simulation optimization management model is employed to plan regionally distributed groundwater pumping while preserving the hydroecological balance in wetland areas. Because predictions made by an aquifer model are uncertain, groundwater supply systems operate below maximum yield. Collecting data from the groundwater system can potentially reduce predictive uncertainty and increase safe water production. The price paid for improvement in water management is the cost of collecting the additional data. Efficient data collection using Bayesian decision analysis proceeds in three stages: (1) The prior analysis determines the optimal pumping scheme and profit from water sales on the basis of known information. (2) The preposterior analysis estimates the optimal measurement locations and evaluates whether each sequential measurement will be cost-effective before it is taken. (3) The posterior analysis then revises the prior optimal pumping scheme and consequent profit, given the new information. Stochastic simulation optimization employing a multiple-realization approach is used to determine the optimal pumping scheme in each of the three stages. The cost of new data must not exceed the expected increase in benefit obtained in optimal groundwater exploitation. An example based on groundwater management practices in Florida aimed at wetland protection showed that the cost of data collection more than paid for itself by enabling a safe and reliable increase in production.

  1. Optimal joint management of a coastal aquifer and a substitute resource

    NASA Astrophysics Data System (ADS)

    Moreaux, M.; Reynaud, A.

    2004-06-01

    This article characterizes the optimal joint management of a coastal aquifer and a costly water substitute. For this purpose we use a mathematical representation of the aquifer that incorporates the displacement of the interface between the seawater and the freshwater of the aquifer. We identify the spatial cost externalities created by users on each other and we show that the optimal water supply depends on the location of users. Users located in the coastal zone exclusively use the costly substitute. Those located in the more upstream area are supplied from the aquifer. At the optimum their withdrawal must take into account the cost externalities they generate on users located downstream. Last, users located in a median zone use the aquifer with a surface transportation cost. We show that the optimum can be implemented in a decentralized economy through a very simple Pigouvian tax. Finally, the optimal and decentralized extraction policies are simulated on a very simple example.

  2. Cost Minimization for Joint Energy Management and Production Scheduling Using Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Shah, Rahul H.

    Production costs account for the largest share of the overall cost of manufacturing facilities. With the U.S. industrial sector becoming more and more competitive, manufacturers are looking for more cost- and resource-efficient working practices. Operations management and production planning have shown their capability to dramatically reduce manufacturing costs and increase system robustness. When implementing operations-related decision making and planning, the two fields that have proven most effective are maintenance and energy. Unfortunately, the current research that integrates both is limited. Additionally, these studies fail to consider parameter domains and optimization in joint energy- and maintenance-driven production planning. Accordingly, a production planning methodology that considers maintenance and energy is investigated. Two models are presented to achieve a well-rounded operating strategy. The first is a joint energy and maintenance production scheduling model. The second is a cost-per-part model considering maintenance, energy, and production. The proposed methodology involves a Time-of-Use electricity demand response program, buffer and holding capacity, station reliability, production rate, station rated power, and more. In practice, the scheduling problem can be used to determine a joint energy, maintenance, and production schedule. Meanwhile, the cost-per-part model can be used to: (1) test the sensitivity of the obtained optimal production schedule and its corresponding savings by varying key production system parameters; and (2) determine optimal system parameter combinations when using the joint energy, maintenance, and production planning model. Additionally, a factor analysis on the system parameters is conducted, and the corresponding performance of the production schedule under variable parameter conditions is evaluated. Also, parameter optimization guidelines that incorporate maintenance and energy parameter decision making in the production planning framework are discussed. A modified Particle Swarm Optimization solution technique is adopted to solve the proposed scheduling problem. The algorithm is described in detail and compared to a Genetic Algorithm. Case studies are presented to illustrate the benefits of using the proposed model and the effectiveness of the Particle Swarm Optimization approach. Numerical experiments are conducted and analyzed to test the effectiveness of the proposed model. The proposed scheduling strategy can achieve savings of around 19 to 27% in cost per part when compared to the baseline scheduling scenarios. By optimizing key production system parameters from the cost-per-part model, the baseline scenarios can obtain around 20 to 35% in savings for the cost per part. These savings further increase by 42 to 55% when system parameter optimization is integrated with the proposed scheduling problem. Using this method, the most influential parameters on the cost per part are the rated power from production, the production rate, and the initial machine reliabilities. The modified Particle Swarm Optimization algorithm allows greater diversity and exploration than the Genetic Algorithm for the proposed joint model, which makes it more computationally efficient in determining the optimal schedule. While the Genetic Algorithm achieved a solution quality of 2,279.63 at an expense of 2,300 seconds of computational effort, the proposed Particle Swarm Optimization algorithm achieved a solution quality of 2,167.26 in less than half that computational effort.

  3. Impact of Capital and Current Costs Changes of the Incineration Process of the Medical Waste on System Management Cost

    NASA Astrophysics Data System (ADS)

    Jolanta Walery, Maria

    2017-12-01

    The article describes optimization studies aimed at analysing how changes in the capital and current costs of medical waste incineration affect the cost of managing the system and its structure. The study was conducted on the example of the medical waste management system in the Podlaskie Province, in north-eastern Poland. The operational research carried out under the optimization study was divided into two stages of optimization calculations with assumed technical and economic parameters of the system. In the first stage, the lowest cost of operating the analysed system was determined, whereas in the second the influence of the input parameters of the system, i.e. the capital and current costs of medical waste incineration, on the economic efficiency index (E) and the spatial structure of the system was determined. Optimization studies were conducted for a 25% increase in the capital and current costs of the incineration process, followed by 50%, 75% and 100% increases. As a result of the calculations, the highest cost of system operation, 3143.70 PLN/t, was obtained under the assumption of a 100% increase in the capital and current costs of the incineration process. This corresponds to an increase in the economic efficiency index (E) of about 97% relative to run 1.

  4. Analyzing the Effect of Multi-fuel and Practical Constraints on Realistic Economic Load Dispatch using Novel Two-stage PSO

    NASA Astrophysics Data System (ADS)

    Chintalapudi, V. S.; Sirigiri, Sivanagaraju

    2017-04-01

    In power system restructuring, pricing electrical power plays a vital role in cost allocation between suppliers and consumers. In the optimal power dispatch problem, not only the cost of active power generation but also the cost of reactive power generated by the generators should be considered to increase the effectiveness of the formulation. As the characteristics of the reactive power cost curve are similar to those of the active power cost curve, a nonconvex reactive power cost function is formulated. In this paper, a more realistic multi-fuel total cost objective is formulated by considering both active and reactive power costs of generators. The formulated cost function is optimized subject to equality, inequality and practical constraints using the proposed uniformly distributed two-stage particle swarm optimization. The proposed algorithm combines a uniform distribution of control variables (to start the iterative process from a good initial value) with a two-stage initialization process (to reach the best final value in fewer iterations), which enhances the convergence characteristics. Results obtained for the considered standard test functions and electrical systems indicate the effectiveness of the proposed algorithm, which obtains efficient solutions compared to existing methods. Hence, the proposed method is promising and can be readily applied to optimize power system objectives.

  5. Particle swarm optimization - Genetic algorithm (PSOGA) on linear transportation problem

    NASA Astrophysics Data System (ADS)

    Rahmalia, Dinita

    2017-08-01

    The Linear Transportation Problem (LTP) is a case of constrained optimization in which we want to minimize cost subject to the balance between total supply and total demand. Exact methods such as the northwest corner, Vogel, Russell, and minimal cost methods have been applied to approach the optimal solution. In this paper, we use a heuristic, Particle Swarm Optimization (PSO), for solving the linear transportation problem with any number of decision variables. In addition, we combine the mutation operator of the Genetic Algorithm (GA) with PSO to improve the solution. This method is called Particle Swarm Optimization - Genetic Algorithm (PSOGA). The simulations show that PSOGA can improve the solutions obtained by PSO alone.
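
    A minimal sketch of the PSOGA idea follows: standard PSO velocity and position updates, plus a GA-style mutation applied to particle positions. The 2x3 transportation instance, penalty weight and PSO coefficients are illustrative choices, not the paper's settings.

      # Sketch of PSOGA on a small balanced transportation problem.
      # Supply/demand violations are handled with a simple penalty term.
      import numpy as np

      rng = np.random.default_rng(1)
      cost = np.array([[4.0, 6.0, 9.0],
                       [5.0, 3.0, 7.0]])      # unit shipping costs
      supply = np.array([30.0, 40.0])
      demand = np.array([20.0, 25.0, 25.0])

      def fitness(x):
          plan = x.reshape(2, 3)
          gap = (np.abs(plan.sum(axis=1) - supply).sum()
                 + np.abs(plan.sum(axis=0) - demand).sum())
          return (cost * plan).sum() + 1e3 * gap

      n, dim = 40, 6
      pos = rng.uniform(0, 30, (n, dim))
      vel = np.zeros((n, dim))
      pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
      gbest = pbest[pbest_f.argmin()].copy()

      for _ in range(300):
          r1, r2 = rng.random((2, n, dim))
          vel = (0.7 * vel + 1.5 * r1 * (pbest - pos)
                 + 1.5 * r2 * (gbest - pos))
          pos = np.clip(pos + vel, 0, None)   # shipments stay non-negative
          mut = rng.random((n, dim)) < 0.05   # GA mutation operator
          pos[mut] = np.clip(pos[mut] + rng.normal(0, 2.0, mut.sum()),
                             0, None)
          f = np.array([fitness(p) for p in pos])
          better = f < pbest_f
          pbest[better], pbest_f[better] = pos[better], f[better]
          gbest = pbest[pbest_f.argmin()].copy()

      print("best transportation cost found:", round(fitness(gbest), 2))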

  6. Optimizing conjunctive use of surface water and groundwater resources with stochastic dynamic programming

    NASA Astrophysics Data System (ADS)

    Davidsen, Claus; Liu, Suxia; Mo, Xingguo; Rosbjerg, Dan; Bauer-Gottwein, Peter

    2014-05-01

    Optimal management of conjunctive use of surface water and groundwater has been attempted with different algorithms in the literature. In this study, a hydro-economic modelling approach to optimize conjunctive use of scarce surface water and groundwater resources under uncertainty is presented. A stochastic dynamic programming (SDP) approach is used to minimize the basin-wide total costs arising from water allocations and water curtailments. Dynamic allocation problems with inclusion of groundwater resources proved to be more complex to solve with SDP than pure surface water allocation problems due to head-dependent pumping costs. These dynamic pumping costs strongly affect the total costs and can lead to non-convexity of the future cost function. The water user groups (agriculture, industry, domestic) are characterized by inelastic demands and fixed water allocation and water supply curtailment costs. As in traditional SDP approaches, one step-ahead sub-problems are solved to find the optimal management at any time knowing the inflow scenario and reservoir/aquifer storage levels. These non-linear sub-problems are solved using a genetic algorithm (GA) that minimizes the sum of the immediate and future costs for given surface water reservoir and groundwater aquifer end storages. The immediate cost is found by solving a simple linear allocation sub-problem, and the future costs are assessed by interpolation in the total cost matrix from the following time step. Total costs for all stages, reservoir states, and inflow scenarios are used as future costs to drive a forward moving simulation under uncertain water availability. The use of a GA to solve the sub-problems is computationally more costly than a traditional SDP approach with linearly interpolated future costs. However, in a two-reservoir system the future cost function would have to be represented by a set of planes, and strict convexity in both the surface water and groundwater dimension cannot be maintained. The optimization framework based on the GA is still computationally feasible and represents a clean and customizable method. The method has been applied to the Ziya River basin, China. The basin is located on the North China Plain and is subject to severe water scarcity, which includes surface water droughts and groundwater over-pumping. The head-dependent groundwater pumping costs will enable assessment of the long-term effects of increased electricity prices on the groundwater pumping. The coupled optimization framework is used to assess realistic alternative development scenarios for the basin. In particular the potential for using electricity pricing policies to reach sustainable groundwater pumping is investigated.
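
    The backward recursion at the heart of this SDP approach can be shown in a few lines. The toy sketch below discretizes storage, enumerates releases instead of calling a GA for the sub-problem, and uses made-up costs and inflow scenarios; it illustrates only the immediate-plus-expected-future-cost structure the abstract describes.

      # Sketch: SDP backward recursion for a single reservoir. Grids,
      # demand, costs and inflow scenarios are illustrative; the paper
      # solves the inner sub-problem with a GA rather than enumeration.
      import numpy as np

      storages = np.arange(0, 101, 10)        # storage grid
      releases = np.arange(0, 51, 5)
      inflows, probs = np.array([10.0, 30.0]), np.array([0.5, 0.5])
      demand, curtail_cost, T = 35.0, 2.0, 12

      future = np.zeros(len(storages))        # terminal future cost = 0
      for t in range(T - 1, -1, -1):
          new_future = np.empty(len(storages))
          for i, s in enumerate(storages):
              best = np.inf
              for r in releases:
                  expected = 0.0
                  for q, p in zip(inflows, probs):
                      s_next = np.clip(s + q - r, 0, storages[-1])
                      j = np.abs(storages - s_next).argmin()
                      immediate = curtail_cost * max(demand - r, 0.0)
                      expected += p * (immediate + future[j])
                  best = min(best, expected)
              new_future[i] = best
          future = new_future

      half_full = np.abs(storages - 50).argmin()
      print("expected 12-step cost from half-full storage:",
            future[half_full])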

  7. Production of Low Cost Carbon-Fiber through Energy Optimization of Stabilization Process.

    PubMed

    Golkarnarenji, Gelayol; Naebe, Minoo; Badii, Khashayar; Milani, Abbas S; Jazar, Reza N; Khayyam, Hamid

    2018-03-05

    To produce high-quality and low-cost carbon fiber-based composites, optimizing the carbon fiber production process and the resulting fiber properties is one of the main keys. The stabilization process is the most important step in carbon fiber production; it consumes a large amount of energy, and its optimization can reduce the cost to a large extent. In this study, two intelligent optimization techniques, namely Support Vector Regression (SVR) and Artificial Neural Network (ANN), were studied and compared on a limited dataset to predict a physical property (density) of oxidatively stabilized PAN fiber (OPF) in the second zone of a stabilization oven within a carbon fiber production line. The results were then used to optimize the energy consumption of the process. The case study can benefit chemical industries involved in carbon fiber manufacturing when assessing and optimizing different stabilization process conditions at large.
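
    The model comparison in this record reduces to fitting two regressors on process measurements and comparing cross-validated scores. The sketch below does that with scikit-learn on synthetic oven data (temperature and residence time standing in for the zone-2 conditions); the features, relation and hyperparameters are placeholders, not the study's dataset or settings.

      # Sketch: compare SVR and a small neural network for predicting
      # fiber density from oven conditions, on synthetic placeholder data.
      import numpy as np
      from sklearn.svm import SVR
      from sklearn.neural_network import MLPRegressor
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(2)
      X = rng.uniform([220, 1.0], [280, 6.0], (80, 2))  # temp C, time min
      y = (1.18 + 0.002 * (X[:, 0] - 220) + 0.01 * X[:, 1]
           + rng.normal(0, 0.005, 80))                  # density (g/cm^3)

      models = [("SVR", SVR(C=10.0, epsilon=0.002)),
                ("ANN", MLPRegressor(hidden_layer_sizes=(16,),
                                     max_iter=5000, random_state=0))]
      for name, model in models:
          pipe = make_pipeline(StandardScaler(), model)
          r2 = cross_val_score(pipe, X, y, cv=5, scoring="r2").mean()
          print(f"{name}: mean cross-validated R^2 = {r2:.3f}")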

  8. Production of Low Cost Carbon-Fiber through Energy Optimization of Stabilization Process

    PubMed Central

    Golkarnarenji, Gelayol; Naebe, Minoo; Badii, Khashayar; Milani, Abbas S.; Jazar, Reza N.; Khayyam, Hamid

    2018-01-01

    To produce high-quality and low-cost carbon fiber-based composites, optimizing the carbon fiber production process and the resulting fiber properties is one of the main keys. The stabilization process is the most important step in carbon fiber production; it consumes a large amount of energy, and its optimization can reduce the cost to a large extent. In this study, two intelligent optimization techniques, namely Support Vector Regression (SVR) and Artificial Neural Network (ANN), were studied and compared on a limited dataset to predict a physical property (density) of oxidatively stabilized PAN fiber (OPF) in the second zone of a stabilization oven within a carbon fiber production line. The results were then used to optimize the energy consumption of the process. The case study can benefit chemical industries involved in carbon fiber manufacturing when assessing and optimizing different stabilization process conditions at large. PMID:29510592

  9. Optimal power flow with optimal placement TCSC device on 500 kV Java-Bali electrical power system using genetic Algorithm-Taguchi method

    NASA Astrophysics Data System (ADS)

    Apribowo, Chico Hermanu Brillianto; Ibrahim, Muhammad Hamka; Wicaksono, F. X. Rian

    2018-02-01

    The growing burden of the load and the complexity of the power system have had an impact on the need for optimization of power system operation. Optimal power flow (OPF) with optimal placement and rating of a thyristor controlled series capacitor (TCSC) is an effective approach for determining the economic cost of operating the plant and regulating power flow in the power system. The purpose of this study is to minimize the total cost of generation by choosing the location and the optimal rating of TCSC devices using genetic algorithm-design of experiments techniques (GA-DOE). In simulations on the 500 kV Java-Bali system with five TCSC compensators, the proposed method reduces the generation cost by 0.89% compared to OPF without TCSC.

  10. CMOST: an open-source framework for the microsimulation of colorectal cancer screening strategies.

    PubMed

    Prakash, Meher K; Lang, Brian; Heinrich, Henriette; Valli, Piero V; Bauerfeind, Peter; Sonnenberg, Amnon; Beerenwinkel, Niko; Misselwitz, Benjamin

    2017-06-05

    Colorectal cancer (CRC) is a leading cause of cancer-related mortality. CRC incidence and mortality can be reduced by several screening strategies, including colonoscopy, but randomized CRC prevention trials face significant obstacles such as the need for large study populations with long follow-up. Therefore, CRC screening strategies will likely be designed and optimized based on computer simulations. Several computational microsimulation tools have been reported for estimating the efficiency and cost-effectiveness of CRC prevention, but none of these tools is publicly available. There is a need for an open-source framework to answer practical questions, including testing of new screening interventions and adapting findings to local conditions. We developed and implemented a new microsimulation model, the Colon Modeling Open Source Tool (CMOST), for modeling the natural history of CRC, simulating the effects of CRC screening interventions, and calculating the resulting costs. CMOST facilitates automated parameter calibration against epidemiological adenoma prevalence and CRC incidence data. Predictions of CMOST were highly similar to the outcomes of a large endoscopic CRC prevention study as well as to the predictions of existing microsimulation models. We applied CMOST to calculate the optimal timing of a screening colonoscopy. CRC incidence and mortality are reduced most efficiently by a colonoscopy between the ages of 56 and 59, while discounted life years gained (LYG) are maximal for screening at 49-50 years. With a dwell time of 13 years, the most cost-effective screening is at 59 years, at $17,211 (discounted USD) per LYG. While cost-efficiency varied with dwell time, dwell time did not influence the optimal time point of screening interventions within the tested range. Predictions of CMOST closely match those of a randomized CRC prevention trial as well as those of other microsimulation tools. This open-source tool enables health-economics analyses for various countries, health-care scenarios and CRC prevention strategies. CMOST is freely available under the GNU General Public License at https://gitlab.com/misselwb/CMOST.

  11. Decision support for the management of water resources at Sub-middle of the São Francisco river basin in Brazil using integrated hydro-economic modeling and scenarios for land use changes

    NASA Astrophysics Data System (ADS)

    Moraes, M. G. A.; Souza da Silva, G.

    2016-12-01

    Hydro-economic models can measure the economic effects of different operating rules, environmental restrictions, ecosystem services, technical constraints and institutional constraints. Furthermore, water allocation can be improved by considering economic criteria, and climate and land use change can be analyzed to provide resilience. We developed and applied a hydro-economic optimization model to determine the optimal water allocation among the main users in the Lower-middle São Francisco River Basin in Northeast (NE) Brazil. The model uses demand curves for the irrigation projects, small farmers and human supply, rather than fixed requirements for water resources. This study analyzed various constraints and operating alternatives for the installed hydropower dams in economic terms. A past seven-year period (2000-2006) with water scarcity was selected to analyze water availability and the associated optimal economic water allocation. The constraints used are technical, socioeconomic and environmental. The economic impacts of scenarios such as prioritizing human consumption, implementation of the São Francisco river transposition, human supply without high distribution losses, environmental hydrographs, forced reservoir level control, forced reduced reservoir capacity, and alteration of the lower flow restriction were analyzed. The results for this period show that scarcity costs related to ecosystem services and environmental constraints are significant and have major impacts (increases in scarcity cost) for consumptive users such as irrigation projects. In addition, institutional constraints such as prioritizing human supply, minimum release limits downstream of the reservoirs and the implementation of the transposition project affect the costs and benefits of the two main economic sectors (irrigation and power generation) in the region of the Lower-middle São Francisco river basin. Scarcity costs for irrigation users generally increase more (in percentage terms) than those for other users under environmental and institutional constraints.

  12. Memory and Energy Optimization Strategies for Multithreaded Operating System on the Resource-Constrained Wireless Sensor Node

    PubMed Central

    Liu, Xing; Hou, Kun Mean; de Vaulx, Christophe; Xu, Jun; Yang, Jianfeng; Zhou, Haiying; Shi, Hongling; Zhou, Peng

    2015-01-01

    Memory and energy optimization strategies are essential for resource-constrained wireless sensor network (WSN) nodes. In this article, a new memory-optimized and energy-optimized multithreaded WSN operating system (OS), LiveOS, is designed and implemented. The memory cost of LiveOS is optimized by using a stack-shifting hybrid scheduling approach. Different from a traditional multithreaded OS, in which thread stacks are allocated statically by pre-reservation, thread stacks in LiveOS are allocated dynamically by using the stack-shifting technique. As a result, memory waste problems caused by static pre-reservation can be avoided. In addition to the stack-shifting dynamic allocation approach, a hybrid scheduling mechanism, which decreases both the thread scheduling overhead and the number of thread stacks, is also implemented in LiveOS. With these mechanisms, the stack memory cost of LiveOS can be reduced by more than 50% compared to that of a traditional multithreaded OS. Not only is the memory cost optimized, but the energy cost is also optimized in LiveOS; this is achieved by using the multi-core "context aware" and multi-core "power-off/wakeup" energy conservation approaches. By using these approaches, the energy cost of LiveOS can be reduced by more than 30% when compared to a single-core WSN system. The memory and energy optimization strategies in LiveOS not only prolong the lifetime of WSN nodes, but also make a multithreaded OS feasible to run on memory-constrained WSN nodes. PMID:25545264

  13. Optimizing the Scientific Yield from a Randomized Controlled Trial (RCT): Evaluating Two Behavioral Interventions and Assessment Reactivity with a Single Trial

    PubMed Central

    Carey, Michael P.; Senn, Theresa E.; Coury-Doniger, Patricia; Urban, Marguerite A.; Vanable, Peter A.; Carey, Kate B.

    2013-01-01

    Randomized controlled trials (RCTs) remain the gold standard for evaluating intervention efficacy but are often costly. To optimize their scientific yield, RCTs can be designed to investigate multiple research questions. This paper describes an RCT that used a modified Solomon four-group design to simultaneously evaluate two theoretically guided health promotion interventions as well as assessment reactivity. Recruited participants (N = 1010; 56% male; 69% African American) were randomly assigned to one of four conditions formed by crossing two intervention conditions (i.e., general health promotion vs. sexual risk reduction intervention) with two assessment conditions (i.e., general health vs. sexual health survey). After completing their assigned baseline assessment, participants received the assigned intervention and returned for follow-ups at 3, 6, 9, and 12 months. In this report, we summarize baseline data, which show high levels of sexual risk behavior; alcohol, marijuana, and tobacco use; and fast food consumption. Sexual risk behaviors and substance use were correlated. Participants reported high satisfaction with both interventions, but ratings for the sexual risk reduction intervention were higher. Planned follow-up sessions and subsequent analyses will assess changes in health behaviors, including sexual risk behaviors. This study design demonstrates one way to optimize the scientific yield of an RCT. PMID:23816489

  14. Optimizing the Reliability and Performance of Service Composition Applications with Fault Tolerance in Wireless Sensor Networks

    PubMed Central

    Wu, Zhao; Xiong, Naixue; Huang, Yannong; Xu, Degang; Hu, Chunyang

    2015-01-01

    Services composition technology provides flexible methods for building service composition applications (SCAs) in wireless sensor networks (WSNs). The high reliability and high performance of SCAs help services composition technology promote the practical application of WSNs. The optimization methods for reliability and performance used for traditional software systems are mostly based on instantiations of software components, which are inapplicable and inefficient for the ever-changing SCAs in WSNs. In this paper, we consider SCAs with fault tolerance in WSNs. Based on a Universal Generating Function (UGF), we propose a reliability and performance model of SCAs in WSNs, which generalizes a redundancy optimization problem to a multi-state system. Based on this model, an efficient optimization algorithm for the reliability and performance of SCAs in WSNs is developed using a Genetic Algorithm (GA) to find the optimal structure of SCAs with fault tolerance in WSNs. To examine the feasibility of our algorithm, we have evaluated its performance. Furthermore, the interrelationships between reliability, performance and cost are investigated. In addition, a distinct approach to determine the most suitable parameters in the suggested algorithm is proposed. PMID:26561818

  15. Optimal dynamic water allocation: Irrigation extractions and environmental tradeoffs in the Murray River, Australia

    NASA Astrophysics Data System (ADS)

    Grafton, R. Quentin; Chu, Hoang Long; Stewardson, Michael; Kompas, Tom

    2011-12-01

    A key challenge in managing semiarid basins, such as in the Murray-Darling in Australia, is to balance the trade-offs between the net benefits of allocating water for irrigated agriculture, and other uses, versus the costs of reduced surface flows for the environment. Typically, water planners do not have the tools to optimally and dynamically allocate water among competing uses. We address this problem by developing a general stochastic, dynamic programming model with four state variables (the drought status, the current weather, weather correlation, and current storage) and two controls (environmental release and irrigation allocation) to optimally allocate water between extractions and in situ uses. The model is calibrated to Australia's Murray River that generates: (1) a robust qualitative result that "pulse" or artificial flood events are an optimal way to deliver environmental flows over and above conveyance of base flows; (2) from 2001 to 2009 a water reallocation that would have given less to irrigated agriculture and more to environmental flows would have generated between half a billion and over 3 billion U.S. dollars in overall economic benefits; and (3) water markets increase optimal environmental releases by reducing the losses associated with reduced water diversions.

  16. Genetic algorithm-based multi-objective optimal absorber system for three-dimensional seismic structures

    NASA Astrophysics Data System (ADS)

    Ren, Wenjie; Li, Hongnan; Song, Gangbing; Huo, Linsheng

    2009-03-01

    The problem of optimizing an absorber system for three-dimensional seismic structures is addressed. The objective is to determine the number and positions of absorbers that minimize the coupling effects of translation-torsion of structures at minimum cost. A procedure for this multi-objective optimization problem is developed by integrating a dominance-based selection operator and a dominance-based penalty function method. Based on the two-branch tournament genetic algorithm, the selection operator is constructed by evaluating individuals according to their dominance in one run. The technique guarantees that the better-performing individual wins its competition, provides a slight selection pressure toward better individuals, and maintains diversity in the population. Moreover, because the individuals in each generation are evaluated in one run, less computational effort is required. Penalty function methods are generally used to transform a constrained optimization problem into an unconstrained one. The dominance-based penalty function contains the necessary information on the non-dominated character and infeasible position of an individual, essential for success in seeking a Pareto optimal set. The proposed approach is used to obtain a set of non-dominated designs for a six-storey three-dimensional building with shape memory alloy dampers subjected to earthquake loading.
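
    The dominance relation driving both the selection operator and the penalty can be made concrete in a few lines. The sketch below implements the Pareto-dominance test and a dominance-based binary tournament over random objective values (coupling response and cost); it is a generic illustration, not the paper's exact two-branch operator.

      # Sketch: Pareto-dominance test and a dominance-based tournament for
      # two minimization objectives [coupling measure, absorber cost].
      # Objective values are random placeholders for candidate designs.
      import numpy as np

      def dominates(a, b):
          """a dominates b: no worse in every objective, better in one."""
          return np.all(a <= b) and np.any(a < b)

      rng = np.random.default_rng(3)
      objs = rng.random((20, 2))

      def tournament(objs, rng):
          i, j = rng.choice(len(objs), size=2, replace=False)
          if dominates(objs[i], objs[j]):
              return i
          if dominates(objs[j], objs[i]):
              return j
          return rng.choice([i, j])           # non-dominated pair: either

      parents = [tournament(objs, rng) for _ in range(20)]
      front = [i for i in range(len(objs))
               if not any(dominates(objs[j], objs[i])
                          for j in range(len(objs)))]
      print("non-dominated designs:", front)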

  17. Global optimization methods for engineering design

    NASA Technical Reports Server (NTRS)

    Arora, Jasbir S.

    1990-01-01

    The problem is to find a global minimum for Problem P. Necessary and sufficient conditions are available for local optimality; however, a global solution can be assured only under the assumption of convexity of the problem. If the constraint set S is compact and the cost function is continuous on it, existence of a global minimum is guaranteed. However, because no global optimality conditions are available, a global solution can be found only by an exhaustive search to satisfy Inequality. The exhaustive search can be organized in such a way that the entire design space need not be searched for the solution, which reduces the computational burden somewhat. It is concluded that the zooming algorithm for global optimization appears to be a good alternative to stochastic methods, although more testing is needed and a general, robust, and efficient local minimizer is required. IDESIGN, which is based on a sequential quadratic programming algorithm, was used in all numerical calculations; since the feasible set keeps shrinking, a good algorithm to find an initial feasible point is required. Such algorithms need to be developed and evaluated.

  18. Topology Trivialization and Large Deviations for the Minimum in the Simplest Random Optimization

    NASA Astrophysics Data System (ADS)

    Fyodorov, Yan V.; Le Doussal, Pierre

    2014-01-01

    Finding the global minimum of a cost function given by the sum of a quadratic and a linear form in N real variables over the (N-1)-dimensional sphere is one of the simplest, yet paradigmatic, problems in optimization theory, known as the "trust region subproblem" or "constrained least squares problem". When both terms in the cost function are random, this amounts to studying the ground state energy of the simplest spherical spin glass in a random magnetic field. We first identify and study two distinct large-N scaling regimes in which the linear term (magnetic field) leads to a gradual topology trivialization, i.e. a reduction in the total number N_tot of critical (stationary) points in the cost function landscape. In the first regime N_tot remains of the order of N, and the cost function (energy) generically has two almost degenerate minima with Tracy-Widom (TW) statistics. In the second regime the number of critical points is of order unity, with a finite probability of a single minimum. In that case the mean total number of extrema (minima and maxima) of the cost function is given by the Laplace transform of the TW density, and the distribution of the global minimum energy is expected to take a universal scaling form generalizing the TW law. Though the full form of that distribution is not yet known to us, one of its far tails can be inferred from large deviation theory for the global minimum. In the rest of the paper we show how to use the replica method to obtain the probability density of the minimum energy in the large-deviation approximation, by finding both the rate function and the leading pre-exponential factor.
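
    For a concrete instance, the global minimum on the sphere can be computed numerically from the secular equation: the minimizer satisfies (J + tI)x = -b with |x|^2 = N and t above the negative of the smallest eigenvalue of J. The sketch below does this for a random coupling matrix and field; the ensemble normalization is a plausible choice, not necessarily the paper's.

      # Sketch: global minimum of H(x) = x.J.x/2 + b.x on |x|^2 = N via
      # the secular equation |(J + t I)^{-1} b|^2 = N, t > -lambda_min(J).
      import numpy as np
      from scipy.optimize import brentq

      rng = np.random.default_rng(4)
      N = 50
      G = rng.normal(size=(N, N)) / np.sqrt(N)
      J = (G + G.T) / np.sqrt(2)              # GOE-like random couplings
      b = 0.1 * rng.normal(size=N)            # weak random "magnetic field"

      evals, evecs = np.linalg.eigh(J)
      c = evecs.T @ b

      def norm2(t):                           # |(J + t I)^{-1} b|^2
          return np.sum((c / (evals + t)) ** 2)

      t_star = brentq(lambda t: norm2(t) - N,
                      -evals[0] + 1e-9, -evals[0] + 1e3)
      x = -evecs @ (c / (evals + t_star))     # minimizer: (J + tI)x = -b
      energy = 0.5 * x @ J @ x + b @ x
      print(f"ground-state energy per variable: {energy / N:.4f}")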

  19. Net costs of health worker rural incentive packages: an example from the Lao People's Democratic Republic.

    PubMed

    Keuffel, Eric; Jaskiewicz, Wanda; Paphassarang, Chanthakhath; Tulenko, Kate

    2013-11-01

    Many developing countries are examining whether to institute incentive packages that increase the share of health workers who opt to locate in rural settings; however, uncertainty exists with respect to the expected net cost (or benefit) of these packages. We utilize findings from discrete choice experiment surveys of students training to be health professionals, together with costing analyses in the Lao People's Democratic Republic, to model the anticipated effect of incentive packages on new workers' location decisions and on direct costs. Incorporating evidence on health worker density and health outcomes, we then estimate the expected 5-year net cost (or benefit) of each incentive package for three health worker cadres: physicians, nurses/midwives, and medical assistants. Under base case assumptions, the optimal incentive package for each cadre produced a 5-year net benefit (maximum net benefit for physicians: US$ 44,000; nurses/midwives: US$ 5.6 million; medical assistants: US$ 485,000). After accounting for health effects, the expected net cost of select incentive packages would be substantially less than the original estimate of direct costs. In the case of the Lao People's Democratic Republic, incentive packages that do not invest in capital-intensive components generally should produce larger net benefits. Combining discrete choice experiment surveys, costing surveys and cost-benefit analysis methods may be replicated by other developing countries to calculate whether health worker incentive packages are viable policy options.

  20. Constraints on the evolution of phenotypic plasticity: limits and costs of phenotype and plasticity

    PubMed Central

    Murren, C J; Auld, J R; Callahan, H; Ghalambor, C K; Handelsman, C A; Heskel, M A; Kingsolver, J G; Maclean, H J; Masel, J; Maughan, H; Pfennig, D W; Relyea, R A; Seiter, S; Snell-Rood, E; Steiner, U K; Schlichting, C D

    2015-01-01

    Phenotypic plasticity is ubiquitous and generally regarded as a key mechanism for enabling organisms to survive in the face of environmental change. Because no organism is infinitely or ideally plastic, theory suggests that there must be limits (for example, the lack of ability to produce an optimal trait) to the evolution of phenotypic plasticity, or that plasticity may have inherent significant costs. Yet numerous experimental studies have not detected widespread costs. Explicitly differentiating plasticity costs from phenotype costs, we re-evaluate fundamental questions of the limits to the evolution of plasticity and of generalists vs specialists. We advocate for the view that relaxed selection and variable selection intensities are likely more important constraints to the evolution of plasticity than the costs of plasticity. Some forms of plasticity, such as learning, may be inherently costly. In addition, we examine opportunities to offset costs of phenotypes through ontogeny, amelioration of phenotypic costs across environments, and the condition-dependent hypothesis. We propose avenues of further inquiry in the limits of plasticity using new and classic methods of ecological parameterization, phylogenetics and omics in the context of answering questions on the constraints of plasticity. Given plasticity's key role in coping with environmental change, approaches spanning the spectrum from applied to basic will greatly enrich our understanding of the evolution of plasticity and resolve our understanding of limits. PMID:25690179

  1. Fairness in optimizing bus-crew scheduling process.

    PubMed

    Ma, Jihui; Song, Cuiying; Ceder, Avishai Avi; Liu, Tao; Guan, Wei

    2017-01-01

    This work proposes a model that considers fairness in the crew scheduling problem for bus drivers (CSP-BD) and a hybrid ant-colony optimization (HACO) algorithm to solve it. The main contributions of this work are: (a) a valid approach for cases with a special cost structure and constraints considering the fairness of working time and idle time; (b) an improved algorithm incorporating a Gamma heuristic function and selection rules. The relationships among the cost components are examined using ten bus lines collected from Beijing Public Transport Holdings (Group) Co., Ltd., one of the largest bus transit companies in the world. The results show that the unfair cost is indirectly related to the common, fixed and extra costs, and that it approaches the common and fixed costs when its coefficient is twice the common cost coefficient. Furthermore, the longest computation time for the tested bus line, with 1108 pieces and 74 blocks, is less than 30 minutes. The results indicate that the HACO-based algorithm can be a feasible and efficient optimization technique for the CSP-BD, especially for large-scale problems.
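
    The fairness term in such a cost structure can be written as a penalty on the spread of working and idle time across drivers. A minimal sketch follows, with illustrative coefficients rather than the paper's calibrated ones:

      # Sketch: schedule cost = common + fixed + unfairness penalty, where
      # unfairness is the total deviation of working and idle hours from
      # their means. All coefficients and hours are illustrative.
      import numpy as np

      work = np.array([7.5, 8.0, 9.5, 6.0])   # working hours per driver
      idle = np.array([0.5, 1.0, 0.2, 2.0])   # idle hours per driver

      common_cost = 30.0 * work.sum()         # wage per working hour
      fixed_cost = 120.0 * len(work)          # fixed cost per duty
      unfair_cost = 30.0 * (np.abs(work - work.mean()).sum()
                            + np.abs(idle - idle.mean()).sum())

      print("total cost with fairness term:",
            common_cost + fixed_cost + unfair_cost)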

  2. Genetic Algorithm Optimization of a Cost Competitive Hybrid Rocket Booster

    NASA Technical Reports Server (NTRS)

    Story, George

    2015-01-01

    Performance, reliability and cost have always been drivers in the rocket business. Hybrid rockets have been late entries into the launch business due to substantial early development work on liquid rockets and solid rockets. Slowly the technology readiness level of hybrids has been increasing due to various large scale testing and flight tests of hybrid rockets. One remaining issue is the cost of hybrids versus the existing launch propulsion systems. This paper will review the known state-of-the-art hybrid development work to date and incorporate it into a genetic algorithm to optimize the configuration based on various parameters. A cost module will be incorporated to the code based on the weights of the components. The design will be optimized on meeting the performance requirements at the lowest cost.

  3. Genetic Algorithm Optimization of a Cost Competitive Hybrid Rocket Booster

    NASA Technical Reports Server (NTRS)

    Story, George

    2014-01-01

    Performance, reliability and cost have always been drivers in the rocket business. Hybrid rockets have been late entries into the launch business due to substantial early development work on liquid rockets and later on solid rockets. Slowly the technology readiness level of hybrids has been increasing due to various large scale testing and flight tests of hybrid rockets. A remaining issue is the cost of hybrids vs the existing launch propulsion systems. This paper will review the known state of the art hybrid development work to date and incorporate it into a genetic algorithm to optimize the configuration based on various parameters. A cost module will be incorporated to the code based on the weights of the components. The design will be optimized on meeting the performance requirements at the lowest cost.

  4. Eigenmode computation of cavities with perturbed geometry using matrix perturbation methods applied on generalized eigenvalue problems

    NASA Astrophysics Data System (ADS)

    Gorgizadeh, Shahnam; Flisgen, Thomas; van Rienen, Ursula

    2018-07-01

    Generalized eigenvalue problems are standard problems in the computational sciences. In electromagnetics, they may arise from the discretization of the Helmholtz equation by, for example, the finite element method (FEM). Geometrical perturbations of the structure under concern lead to new generalized eigenvalue problems with different system matrices. Such perturbations may arise from manufacturing tolerances, harsh operating conditions or shape optimization. Directly solving the eigenvalue problem for each perturbation is computationally costly. The perturbed eigenpairs can instead be approximated using eigenpair derivatives. Two common approaches for the calculation of eigenpair derivatives, namely the modal superposition method and direct algebraic methods, are discussed in this paper. Based on the direct algebraic methods, an iterative algorithm is developed for efficiently calculating the eigenvalues and eigenvectors of the perturbed geometry from the eigenvalues and eigenvectors of the unperturbed geometry.
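
    The first-order ingredient behind such eigenpair updates is simple for symmetric pencils: with A x = lambda B x, the eigenvalue derivative under a perturbation dA (with B fixed) is x^T dA x / (x^T B x). The sketch below checks that estimate against a direct re-solve on random stand-in matrices; it shows only the eigenvalue half, not the paper's iterative eigenvector update.

      # Sketch: first-order eigenvalue update for a perturbed generalized
      # problem A x = lambda B x (A symmetric, B symmetric positive
      # definite). Matrices are random stand-ins for FEM system matrices.
      import numpy as np
      from scipy.linalg import eigh

      rng = np.random.default_rng(5)
      n = 200
      M = rng.normal(size=(n, n))
      A = M + M.T                              # "stiffness-like" matrix
      B = np.diag(1.0 + 0.1 * rng.random(n))   # lumped-mass-like matrix

      evals, evecs = eigh(A, B)                # unperturbed eigenpairs
      lam, x = evals[0], evecs[:, 0]           # lowest mode

      D = rng.normal(size=(n, n))
      dA = 1e-3 * (D + D.T)                    # small symmetric perturbation
      dlam = x @ dA @ x / (x @ B @ x)          # first-order derivative term

      exact = eigh(A + dA, B, eigvals_only=True)[0]
      print(f"first-order: {lam + dlam:.6f}   re-solved: {exact:.6f}")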

  5. Hybrid preconditioning for iterative diagonalization of ill-conditioned generalized eigenvalue problems in electronic structure calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cai, Yunfeng, E-mail: yfcai@math.pku.edu.cn; Department of Computer Science, University of California, Davis 95616; Bai, Zhaojun, E-mail: bai@cs.ucdavis.edu

    2013-12-15

    The iterative diagonalization of a sequence of large ill-conditioned generalized eigenvalue problems is a computational bottleneck in quantum mechanical methods employing a nonorthogonal basis for ab initio electronic structure calculations. We propose a hybrid preconditioning scheme to effectively combine global and locally accelerated preconditioners for rapid iterative diagonalization of such eigenvalue problems. In partition-of-unity finite-element (PUFE) pseudopotential density-functional calculations, employing a nonorthogonal basis, we show that the hybrid preconditioned block steepest descent method is a cost-effective eigensolver, outperforming current state-of-the-art global preconditioning schemes, and comparably efficient for the ill-conditioned generalized eigenvalue problems produced by PUFE as the locally optimal block preconditioned conjugate-gradient method for the well-conditioned standard eigenvalue problems produced by planewave methods.
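
    The comparison method named in this abstract, the locally optimal block preconditioned conjugate-gradient method, is available as scipy's lobpcg, which makes the setting easy to reproduce in miniature. The sketch below applies it to a toy 1-D finite-element stiffness/mass pair with a Jacobi preconditioner; these matrices are not the ill-conditioned PUFE systems from the paper.

      # Sketch: LOBPCG on a sparse generalized eigenvalue problem
      # A x = lambda B x with a simple Jacobi preconditioner.
      import numpy as np
      import scipy.sparse as sp
      from scipy.sparse.linalg import lobpcg

      n = 1000
      A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1],
                   shape=(n, n), format="csr")          # stiffness-like
      B = sp.diags([1.0, 4.0, 1.0], [-1, 0, 1],
                   shape=(n, n), format="csr") / 6.0    # mass-like

      M = sp.diags(1.0 / A.diagonal())                  # Jacobi precond.

      rng = np.random.default_rng(6)
      X = rng.normal(size=(n, 4))              # block of 4 starting vectors
      evals, _ = lobpcg(A, X, B=B, M=M, largest=False,
                        tol=1e-6, maxiter=500)
      print("four smallest eigenvalues:", np.sort(evals))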

  6. A duality framework for stochastic optimal control of complex systems

    DOE PAGES

    Malikopoulos, Andreas A.

    2016-01-01

    In this study, we address the problem of minimizing the long-run expected average cost of a complex system consisting of interactive subsystems. We formulate a multiobjective optimization problem of the one-stage expected costs of the subsystems and provide a duality framework to prove that the control policy yielding the Pareto optimal solution minimizes the average cost criterion of the system. We provide the conditions of existence and a geometric interpretation of the solution. For practical situations having constraints consistent with those studied here, our results imply that the Pareto control policy may be of value when we seek to derive online the optimal control policy in complex systems.

  7. New reflective symmetry design capability in the JPL-IDEAS Structure Optimization Program

    NASA Technical Reports Server (NTRS)

    Strain, D.; Levy, R.

    1986-01-01

    The JPL-IDEAS antenna structure analysis and design optimization computer program was modified to process half structure models of symmetric structures subjected to arbitrary external static loads, synthesize the performance, and optimize the design of the full structure. Significant savings in computation time and cost (more than 50%) were achieved compared to the cost of full model computer runs. The addition of the new reflective symmetry analysis design capabilities to the IDEAS program allows processing of structure models whose size would otherwise prevent automated design optimization. The new program produced synthesized full model iterative design results identical to those of actual full model program executions at substantially reduced cost, time, and computer storage.

  8. Optimizing sterilization logistics in hospitals.

    PubMed

    van de Klundert, Joris; Muls, Philippe; Schadd, Maarten

    2008-03-01

    This paper deals with the optimization of the flow of sterile instruments in hospitals which takes place between the sterilization department and the operating theatre. This topic is especially of interest in view of the current attempts of hospitals to cut cost by outsourcing sterilization tasks. Oftentimes, outsourcing implies placing the sterilization unit at a larger distance, hence introducing a longer logistic loop, which may result in lower instrument availability, and higher cost. This paper discusses the optimization problems that have to be solved when redesigning processes so as to improve material availability and reduce cost. We consider changing the logistic management principles, use of visibility information, and optimizing the composition of the nets of sterile materials.

  9. Optimal Design and Operation of Permanent Irrigation Systems

    NASA Astrophysics Data System (ADS)

    Oron, Gideon; Walker, Wynn R.

    1981-01-01

    Solid-set pressurized irrigation system design and operation are studied with optimization techniques to determine the minimum cost distribution system. The principle of the analysis is to divide the irrigation system into subunits in such a manner that the trade-offs among energy, piping, and equipment costs are selected at the minimum cost point. The optimization procedure involves a nonlinear, mixed integer approach capable of achieving a variety of optimal solutions leading to significant conclusions with regard to the design and operation of the system. Factors investigated include field geometry, the effect of the pressure head, consumptive use rates, a smaller flow rate in the pipe system, and outlet (sprinkler or emitter) discharge.

  10. PrEP as a feature in the optimal landscape of combination HIV prevention in sub-Saharan Africa

    PubMed Central

    McGillen, Jessica B; Anderson, Sarah-Jane; Hallett, Timothy B

    2016-01-01

    Introduction The new WHO guidelines recommend offering pre-exposure prophylaxis (PrEP) to people who are at substantial risk of HIV infection. However, where PrEP should be prioritised, and for which population groups, remains an open question. The HIV landscape in sub-Saharan Africa features limited prevention resources, multiple options for achieving cost saving, and epidemic heterogeneity. This paper examines what role PrEP should play in optimal prevention in this complex and dynamic landscape. Methods We use a model that was previously developed to capture subnational HIV transmission in sub-Saharan Africa. With this model, we can consider how prevention funds could be distributed across and within countries throughout sub-Saharan Africa to enable optimal HIV prevention (that is, avert the greatest number of infections for the lowest cost). Here, we focus on PrEP to elucidate where, and to whom, it would optimally be offered in portfolios of interventions (alongside voluntary medical male circumcision, treatment as prevention, and behaviour change communication). Over a range of continental expenditure levels, we use our model to explore prevention patterns that incorporate PrEP, exclude PrEP, or implement PrEP according to a fixed incidence threshold. Results At low-to-moderate levels of total prevention expenditure, we find that the optimal intervention portfolios would include PrEP in only a few regions and primarily for female sex workers (FSW). Prioritisation of PrEP would expand with increasing total expenditure, such that the optimal prevention portfolios would offer PrEP in more subnational regions and increasingly for men who have sex with men (MSM) and the lower incidence general population. The marginal benefit of including PrEP among the available interventions increases with overall expenditure by up to 14% (relative to excluding PrEP). The minimum baseline incidence for the optimal offer of PrEP declines for all population groups as expenditure increases. We find that using a fixed incidence benchmark to guide PrEP decisions would incur considerable losses in impact (up to 7%) compared with an approach that uses PrEP more flexibly in light of prevailing budget conditions. Conclusions Our findings suggest that, for an optimal distribution of prevention resources, choices of whether to implement PrEP in subnational regions should depend on the scope for impact of other possible interventions, local incidence in population groups, and total resources available. If prevention funding were to become restricted in the future, it may be suboptimal to use PrEP according to a fixed incidence benchmark, and other prevention modalities may be more cost-effective. In contrast, expansions in funding could permit PrEP to be used to its full potential in epidemiologically driven prevention portfolios and thereby enable a more cost-effective HIV response across Africa. PMID:27760682

  11. Route optimization as an instrument to improve animal welfare and economics in pre-slaughter logistics.

    PubMed

    Frisk, Mikael; Jonsson, Annie; Sellman, Stefan; Flisberg, Patrik; Rönnqvist, Mikael; Wennergren, Uno

    2018-01-01

    Each year, more than three million animals are transported from farms to abattoirs in Sweden. Animal transport carries economic and environmental costs and a negative impact on animal welfare. Transport time and the number of pick-up stops between farms and abattoirs are two key parameters for animal welfare. Both depend heavily on efficient, high-quality transportation planning, which may be difficult to achieve manually. We have examined the benefits of using route optimization in cattle transportation planning. To simulate the effects of various planning time windows, transportation time regulations and numbers of pick-up stops along each route, we have used data representing one year of cattle transport. Our optimization model is a development of a model used in forestry transport that solves a general pick-up and delivery vehicle routing problem. The objective is to minimize transportation costs. We have shown that the length of the planning time window has a significant impact on the animal transport time, the total driving time and the total distance driven; these parameters affect not only animal welfare but also the economy and the environment in the pre-slaughter logistic chain. In addition, we have shown that changes in animal transportation regulations, such as reducing the number of allowed pick-up stops on each route or limiting animal transportation time, will have positive effects on animal welfare measured in transportation hours and number of pick-up stops. However, this leads to an increase in working time and distance driven, and hence to higher transportation costs and a larger negative environmental impact.

  12. An enhanced artificial bee colony algorithm (EABC) for solving dispatching of hydro-thermal system (DHTS) problem

    PubMed Central

    Yu, Yi; Wu, Yonggang; Hu, Binqi; Liu, Xinglong

    2018-01-01

    The dispatching of a hydro-thermal system is a nonlinear programming problem with multiple constraints and high dimensionality, and solution techniques for the model remain a research hotspot. Building on the ability of the artificial bee colony (ABC) algorithm to solve high-dimensional problems efficiently, this paper proposes an enhanced ABC algorithm for the DHTS problem. The improvements are twofold. First, the local search is guided in each generation by the global best solution and its gradient: the global best improves search efficiency but reduces diversity, while the gradient information counteracts that loss of diversity. Second, inspired by genetic algorithms, any food source that has not been improved within a set limit of generations is replaced by a new one generated through selection, crossover, and mutation, which preserves individual diversity and exploits prior information to improve the algorithm's global search ability. Both improvements are shown to be effective on a classical numerical example, with the genetic operator contributing most to the performance gain. Compared with other state-of-the-art algorithms, the enhanced ABC algorithm achieves better minimum, average, and maximum costs, demonstrating its usability and effectiveness. This work provides a new method for solving DHTS problems and offers a reference for further algorithmic improvements and applications. PMID:29324743
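
    The following is a compressed sketch of the scout-phase modification described above, assuming a standard ABC skeleton on a toy sphere objective (the onlooker phase and the gradient-guided search are omitted for brevity): an exhausted food source is replaced by a child of two tournament-selected parents via uniform crossover and mutation, rather than by a purely random restart. All parameter values are illustrative, not those of the paper.

```python
import random

def sphere(x):  # toy objective to minimise
    return sum(v * v for v in x)

DIM, POP, LIMIT, GENS = 10, 20, 15, 200
LO, HI = -5.0, 5.0

sources = [[random.uniform(LO, HI) for _ in range(DIM)] for _ in range(POP)]
trials = [0] * POP

def tournament(pool):
    """Binary tournament selection: the fitter of two random sources."""
    a, b = random.sample(pool, 2)
    return a if sphere(a) < sphere(b) else b

def genetic_scout(pool):
    """Replace an exhausted source with a child of two tournament-selected
    parents (uniform crossover + one-gene mutation), not a random restart."""
    p1, p2 = tournament(pool), tournament(pool)
    child = [random.choice(pair) for pair in zip(p1, p2)]
    i = random.randrange(DIM)
    child[i] = min(HI, max(LO, child[i] + random.gauss(0.0, 0.5)))
    return child

for _ in range(GENS):
    for i in range(POP):
        # employed-bee move: perturb one dimension relative to a random peer
        j = random.randrange(DIM)
        k = random.choice([p for p in range(POP) if p != i])
        cand = sources[i][:]
        cand[j] = min(HI, max(LO, cand[j] + random.uniform(-1, 1)
                              * (sources[i][j] - sources[k][j])))
        if sphere(cand) < sphere(sources[i]):
            sources[i], trials[i] = cand, 0
        else:
            trials[i] += 1
        if trials[i] > LIMIT:  # scout phase with the genetic operator
            sources[i], trials[i] = genetic_scout(sources), 0

print("best objective:", min(sphere(s) for s in sources))
```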

  13. Waste collection multi objective model with real time traceability data.

    PubMed

    Faccio, Maurizio; Persona, Alessandro; Zanin, Giorgia

    2011-12-01

    Waste collection is a highly visible municipal service that involves large expenditures and difficult operational problems, and it is costly in terms of investment (the vehicle fleet), operations (fuel, maintenance), and environmental impact (emissions, noise, and traffic congestion). Modern traceability devices, such as volumetric sensors, RFID (Radio Frequency Identification) systems, GPRS (General Packet Radio Service), and GPS (Global Positioning System) technology, make it possible to obtain data in real time, which is fundamental to implementing an efficient and innovative waste collection routing model. The basic idea is that knowing the real-time data from each vehicle and the real-time fill level of each bin makes it possible to decide, as a function of the waste generation pattern, which bins should be emptied and which should not, optimizing aspects such as the total distance covered, the number of vehicles needed, and the environmental impact. This paper surveys the traceability technology available for optimizing solid waste collection and introduces an innovative vehicle routing model integrated with real-time traceability data, with a first application in an Italian city of about 100,000 inhabitants. The model is tested and validated using simulation, and an economic feasibility study is reported at the end of the paper.
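
    A minimal sketch of the bin-selection idea, assuming hypothetical fill readings and fill rates (not the paper's data or model): a bin is scheduled for emptying only if its projected level would exceed capacity before the next visit, so vehicles skip bins that do not need service.

```python
bins = {  # bin id: (current fill fraction, fill-rate fraction per day) - invented
    "B01": (0.82, 0.15),
    "B02": (0.40, 0.05),
    "B03": (0.65, 0.30),
    "B04": (0.20, 0.10),
}

def bins_to_empty(bins, days_to_next_visit=2, capacity=1.0):
    """Select bins whose projected fill level reaches capacity before the
    next scheduled visit; the rest can safely be skipped."""
    return [bid for bid, (fill, rate) in bins.items()
            if fill + rate * days_to_next_visit >= capacity]

print(bins_to_empty(bins))  # -> ['B01', 'B03']
```

    The selected bins would then feed a routing model like the one in the paper, shrinking the tour to only the bins that actually need emptying.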

  14. Application of a territorial-based filtering algorithm in turbomachinery blade design optimization

    NASA Astrophysics Data System (ADS)

    Bahrami, Salman; Khelghatibana, Maryam; Tribes, Christophe; Yi Lo, Suk; von Fellenberg, Sven; Trépanier, Jean-Yves; Guibault, François

    2017-02-01

    A territorial-based filtering algorithm (TBFA) is proposed as an integration tool in a multi-level design optimization methodology. The design evaluation burden is split between low- and high-cost levels so as to balance computational cost against the accuracy required at each design stage, based on the characteristics and requirements of the case at hand. TBFA connects these levels by selecting a given number of geometrically distinct, promising solutions from the low-cost level for evaluation at the high-cost level. Two case studies, a Francis runner and a transonic fan rotor, demonstrate the robustness and functionality of TBFA in real industrial optimization problems.
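
    The core filtering step can be sketched as a greedy selection that walks candidates in order of fitness and rejects any candidate falling inside the "territory" (a radius in design space) of one already kept, which is one simple way to obtain geometrically distinct promising solutions. The radius, designs, and scores below are placeholders, not the paper's settings.

```python
import math

def territorial_filter(candidates, scores, radius, n_keep):
    """Keep up to n_keep best candidates (lower score is better), skipping
    any candidate within `radius` of an already-kept design."""
    order = sorted(range(len(candidates)), key=lambda i: scores[i])
    kept = []
    for i in order:
        if len(kept) == n_keep:
            break
        if all(math.dist(candidates[i], candidates[j]) > radius for j in kept):
            kept.append(i)
    return [candidates[i] for i in kept]

designs = [(0.1, 0.2), (0.12, 0.21), (0.8, 0.3), (0.5, 0.9)]  # invented
fitness = [1.0, 1.1, 1.3, 1.2]                                # invented
print(territorial_filter(designs, fitness, radius=0.1, n_keep=3))
```

    Here the second design is dropped despite its good score because it sits in the territory of the best one, so the high-cost level receives geometrically diverse candidates.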

  15. Artificial Intelligence Based Selection of Optimal Cutting Tool and Process Parameters for Effective Turning and Milling Operations

    NASA Astrophysics Data System (ADS)

    Saranya, Kunaparaju; John Rozario Jegaraj, J.; Ramesh Kumar, Katta; Venkateshwara Rao, Ghanta

    2016-06-01

    With the increasing automation of modern manufacturing, human intervention in routine, repetitive, and data-specific activities is greatly reduced. In this paper, an attempt is made to reduce human intervention in the selection of the optimal cutting tool and process parameters for metal cutting applications using Artificial Intelligence techniques. Conventionally, the selection of an appropriate cutting tool and parameters is carried out by an experienced technician or cutting-tool expert, based on personal knowledge or an extensive search of large cutting-tool databases. The proposed approach replaces this physical search through databooks and tool catalogues with an intelligent knowledge-based selection system. The system employs artificial-intelligence techniques such as artificial neural networks, fuzzy logic, and genetic algorithms for decision making and optimization. This intelligence-based optimal tool selection strategy was developed and implemented in MathWorks MATLAB version 7.11.0, with the cutting-tool database obtained from the catalogues of different tool manufacturers. The paper discusses in detail the methodology and strategies employed for selecting an appropriate cutting tool and optimizing process parameters against multi-objective criteria that consider material removal rate, tool life, and tool cost.
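
    One simple way to realize the multi-objective criterion mentioned above is a normalized weighted score over material removal rate, tool life, and tool cost, as sketched below with an invented mini-catalogue; the paper's actual system uses neural networks, fuzzy logic, and genetic algorithms rather than this fixed weighting.

```python
tools = {  # tool id: (MRR cm^3/min, tool life min, cost USD) - invented values
    "T-coated-carbide": (45.0, 60.0, 25.0),
    "T-cermet":         (35.0, 90.0, 30.0),
    "T-hss":            (15.0, 45.0, 8.0),
}

def rank_tools(tools, w_mrr=0.4, w_life=0.4, w_cost=0.2):
    """Normalise each criterion to [0, 1] and combine with weights;
    cost is inverted so cheaper tools score higher."""
    mrrs  = [v[0] for v in tools.values()]
    lives = [v[1] for v in tools.values()]
    costs = [v[2] for v in tools.values()]
    def norm(x, xs):
        return (x - min(xs)) / (max(xs) - min(xs))
    scores = {
        t: w_mrr * norm(m, mrrs) + w_life * norm(l, lives)
           + w_cost * (1 - norm(c, costs))
        for t, (m, l, c) in tools.items()
    }
    return max(scores, key=scores.get), scores

best, scores = rank_tools(tools)
print(best, scores)
```

    Changing the weights shifts the recommendation between productivity-oriented and cost-oriented tools, which is the trade-off any such selection system must expose.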

  16. Optimizing Aircraft Trajectories with Multiple Cruise Altitudes in the Presence of Winds

    NASA Technical Reports Server (NTRS)

    Ng, Hok K.; Sridhar, Banavar; Grabbe, Shon

    2014-01-01

    This study develops a trajectory optimization algorithm that approximately minimizes aircraft travel time and fuel burn by combining a method for computing minimum-time routes in winds on multiple horizontal planes with an aircraft fuel burn model for generating fuel-optimal vertical profiles. It is applied to assess the potential benefits of flying user-preferred routes for commercial cargo flights operating between Anchorage, Alaska, and major airports in Asia and the contiguous United States. Flying wind-optimal trajectories with a fuel-optimal vertical profile reduces the average fuel burn of international flights cruising at a single altitude by 1-3 percent. The potential fuel savings from en-route step climbs are not significant for many shorter domestic cargo flights, which have only one step climb. Wind-optimal trajectories reduce fuel burn and travel time relative to the flight-plan route by up to 3 percent for domestic cargo flights; for trans-oceanic traffic, the fuel burn savings can be as much as 10 percent. Actual savings in operations will vary from the simulation results owing to differences in aircraft models and user-defined cost indices. In general, the savings are proportional to trip length and depend on en-route wind conditions and aircraft types.
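
    A toy version of minimum-time routing in winds, assuming a tiny hand-made waypoint graph: each edge's travel time uses a ground speed equal to airspeed plus the along-track wind component, and Dijkstra's algorithm picks the quickest path. Coordinates, winds, and speeds are invented; the study's method operates on multiple horizontal planes with real wind fields.

```python
import heapq, math

AIRSPEED = 450.0  # knots, assumed constant

# toy waypoint graph: positions in nautical miles on a flat plane
nodes = {"ANC": (0, 0), "A": (300, 50), "B": (300, -60), "NRT": (600, 0)}
edges = [("ANC", "A"), ("ANC", "B"), ("A", "NRT"), ("B", "NRT")]
wind = {"ANC": (40, 0), "A": (60, 10), "B": (-20, 0), "NRT": (50, 0)}  # (u, v) kt

def travel_time(p, q):
    """Edge time from ground speed = airspeed + along-track wind component,
    with the wind averaged over the edge's two endpoints."""
    (x1, y1), (x2, y2) = nodes[p], nodes[q]
    d = math.hypot(x2 - x1, y2 - y1)
    ux, uy = (x2 - x1) / d, (y2 - y1) / d
    wx = (wind[p][0] + wind[q][0]) / 2
    wy = (wind[p][1] + wind[q][1]) / 2
    return d / (AIRSPEED + wx * ux + wy * uy)

def quickest(src, dst):
    """Dijkstra over the directed waypoint graph."""
    best = {src: 0.0}
    heap = [(0.0, src, [src])]
    while heap:
        t, n, path = heapq.heappop(heap)
        if n == dst:
            return t, path
        for a, b in edges:
            if a == n:
                nt = t + travel_time(a, b)
                if nt < best.get(b, float("inf")):
                    best[b] = nt
                    heapq.heappush(heap, (nt, b, path + [b]))
    return None

hours, route = quickest("ANC", "NRT")
print(f"{' -> '.join(route)}: {hours:.2f} h")
```

    In this example the northern waypoint wins because the tailwind raises ground speed enough to offset a slightly longer track, which is exactly the effect wind-optimal routing exploits.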

  17. A New Distributed Optimization for Community Microgrids Scheduling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Starke, Michael R; Tomsovic, Kevin

    This paper proposes a distributed optimization model for community microgrids that accounts for building thermal dynamics and customer comfort preferences. The microgrid central controller (MCC) minimizes the total cost of operating the community microgrid, including fuel cost, purchasing cost, battery degradation cost, and voluntary load-shedding cost, based on the customers' consumption, while the building energy management systems (BEMSs) minimize their electricity bills together with the cost of customer discomfort from room-temperature deviations from the set point. The BEMSs and the MCC exchange information on energy consumption and prices. When the optimization converges, the distributed generation schedule, energy storage charging/discharging, customers' consumption, and energy prices are determined. In particular, the detailed thermal dynamics of the buildings are integrated into the proposed model, so the heating, ventilation and air-conditioning (HVAC) systems can be scheduled intelligently to reduce electricity cost while keeping indoor temperature within the comfort range set by customers. Numerical simulation results show the effectiveness of the proposed model.
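
    The price-exchange loop between the MCC and the BEMSs can be caricatured as follows, under invented parameters: the controller raises the price whenever aggregate demand exceeds the cheap local capacity, and each building responds by trading its bill off against a quadratic comfort penalty. This is a stylized, dual-decomposition-style sketch, not the paper's model.

```python
CAPACITY = 50.0  # kW of cheap local generation (invented)
buildings = [
    {"desired": 22.0, "w": 2.0},  # kW wanted for comfort, discomfort weight
    {"desired": 18.0, "w": 1.0},
    {"desired": 25.0, "w": 4.0},
]

def bems_response(b, price):
    """Each BEMS minimises price*load + w*(load - desired)**2, whose
    optimum is load = desired - price/(2w), floored at zero."""
    return max(0.0, b["desired"] - price / (2 * b["w"]))

price = 0.0
for _ in range(500):  # MCC price-update loop
    total = sum(bems_response(b, price) for b in buildings)
    excess = total - CAPACITY
    if abs(excess) < 1e-3:
        break
    price = max(0.0, price + 0.1 * excess)  # raise price when over capacity

print(f"price {price:.2f}, loads",
      [round(bems_response(b, price), 2) for b in buildings])
```

    At convergence, total consumption matches capacity and buildings with a higher comfort weight deviate least from their set points, which is the qualitative behaviour the abstract describes.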

  18. Cost versus life cycle assessment-based environmental impact optimization of drinking water production plants.

    PubMed

    Capitanescu, F; Rege, S; Marvuglia, A; Benetto, E; Ahmadi, A; Gutiérrez, T Navarrete; Tiruta-Barna, L

    2016-07-15

    Empowering decision makers with cost-effective solutions for reducing the environmental burden of industrial processes, at both the design and operation stages, is a major worldwide concern. The paper addresses this issue for drinking water production plants (DWPPs), seeking optimal solutions that trade off operating cost against life cycle assessment (LCA)-based environmental impact while satisfying outlet water quality criteria. This leads to a challenging bi-objective constrained optimization problem that relies on a computationally expensive process-modelling simulator of the DWPP and must be solved within a limited computational budget. Since mathematical programming methods are unusable in this case, the paper examines how six off-the-shelf, state-of-the-art global meta-heuristic algorithms suited to such simulation-based optimization cope with these challenges, namely the Strength Pareto Evolutionary Algorithm (SPEA2), Non-dominated Sorting Genetic Algorithm (NSGA-II), Indicator-Based Evolutionary Algorithm (IBEA), Multi-Objective Evolutionary Algorithm based on Decomposition (MOEA/D), Differential Evolution (DE), and Particle Swarm Optimization (PSO). The optimization results reveal that substantial reductions in both the operating cost and the environmental impact of the DWPP can be obtained. Furthermore, NSGA-II outperforms the competing algorithms, while MOEA/D and DE perform unexpectedly poorly.
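
    Because the problem is bi-objective, the output of interest is a Pareto front of cost-impact trade-offs. The snippet below shows the basic non-dominated filtering step on invented candidate operating points; the evolutionary algorithms compared in the paper build their ranking and selection on this dominance test.

```python
def pareto_front(points):
    """Return the non-dominated points (minimisation in every objective)."""
    front = []
    for p in points:
        dominated = any(
            all(qi <= pi for qi, pi in zip(q, p)) and q != p
            for q in points
        )
        if not dominated:
            front.append(p)
    return front

# (operating cost EUR/m^3, aggregated LCA impact score) - invented values
candidates = [(0.30, 9.1), (0.34, 7.2), (0.41, 6.8), (0.33, 8.0), (0.45, 6.9)]
print(pareto_front(candidates))  # drops (0.45, 6.9), dominated by (0.41, 6.8)
```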
