NASA Technical Reports Server (NTRS)
1991-01-01
Seagull Technology, Inc., Sunnyvale, CA, produced a computer program under a Langley Research Center Small Business Innovation Research (SBIR) grant called STAFPLAN (Seagull Technology Advanced Flight Plan) that plans optimal trajectory routes for small- to medium-sized airlines to minimize direct operating costs while complying with various airline operating constraints. STAFPLAN incorporates four input databases (weather, route data, aircraft performance, and flight-specific information such as times, payload, crew, and fuel cost) to provide the correct amount of fuel, the optimal cruise altitude, climb and descent points, optimal cruise speed, and flight path.
Kleczkowski, Adam; Oleś, Katarzyna; Gudowska-Nowak, Ewa; Gilligan, Christopher A.
2012-01-01
We present a combined epidemiological and economic model for control of diseases spreading on local and small-world networks. The disease is characterized by a pre-symptomatic infectious stage that makes detection and control of cases more difficult. The effectiveness of local (ring-vaccination or culling) and global control strategies is analysed by comparing the net present values of the combined cost of preventive treatment and illness. The optimal strategy is then selected by minimizing the total cost of the epidemic. We show that three main strategies emerge: treating a large number of individuals (global strategy, GS), treating a small number of individuals in a well-defined neighbourhood of a detected case (local strategy), and allowing the disease to spread unchecked (null strategy, NS). The choice of the optimal strategy is governed mainly by the relative cost of palliative and preventive treatments. If the disease spreads within the well-defined neighbourhood, the local strategy is optimal unless the cost of a single vaccine is much higher than the cost associated with hospitalization. In the latter case, it is most cost-effective to refrain from prevention. Destruction of local correlations, either by long-range (small-world) links or by inclusion of many initial foci, expands the range of costs for which the NS is most cost-effective. The GS emerges for the case when the cost of prevention is much lower than the cost of treatment and there is a substantial non-local component in the disease spread. We also show that local treatment is only desirable if the disease spreads on a small-world network with sufficiently few long-range links; otherwise it is optimal to treat globally. In the mean-field case, there are only two optimal solutions: to treat all if the cost of the vaccine is low and to treat nobody if it is high. The basic reproduction ratio, R0, does not depend on the rate of responsive treatment in this case and the disease always invades (but might be stopped afterwards). The details of the local control strategy, and in particular the optimal size of the control neighbourhood, are determined by the epidemiology of the disease. The properties of the pathogen might not be known in advance for emerging diseases, but the broad choice of the strategy can be made based on economic analysis only. PMID:21653570
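The strategy comparison described above can be made concrete with a toy simulation. The following sketch contrasts the null, local, and global strategies by total cost of treatment plus illness on a ring lattice; all parameters (infection and detection probabilities, unit costs) are invented for illustration, not taken from the paper.

```python
import random

# Toy comparison of null / local / global control on a ring lattice.
# All parameters are illustrative and are not taken from the paper.
N = 500                      # nodes on the ring
BETA = 0.5                   # per-neighbour infection probability per step
DETECT = 0.3                 # per-step detection probability for a case
C_VACC, C_ILL = 1.0, 20.0    # cost of one preventive treatment vs one illness

def total_cost(control_radius, seed):
    """Simulate one outbreak; treat every node within control_radius of a
    detected case (None = null strategy, N = global strategy)."""
    rng = random.Random(seed)
    state = ["S"] * N        # susceptible / infectious / removed-or-treated
    state[0] = "I"
    treated, ill = 0, 1
    for _ in range(200):
        if control_radius is not None:
            detected = [i for i, s in enumerate(state)
                        if s == "I" and rng.random() < DETECT]
            for i in detected:
                for d in range(-control_radius, control_radius + 1):
                    j = (i + d) % N
                    if state[j] != "R":
                        state[j] = "R"
                        treated += 1
        for i in [k for k, s in enumerate(state) if s == "I"]:
            for j in ((i - 1) % N, (i + 1) % N):
                if state[j] == "S" and rng.random() < BETA:
                    state[j] = "I"
                    ill += 1
    return C_VACC * treated + C_ILL * ill

for radius, name in [(None, "null"), (3, "local"), (N, "global")]:
    mean = sum(total_cost(radius, s) for s in range(20)) / 20
    print(f"{name:6s} strategy: mean total cost = {mean:8.1f}")
```

With these placeholder costs the local strategy wins, mirroring the paper's finding that the cheapest option shifts among the three strategies as the treatment-to-illness cost ratio changes.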
Small, Low Cost, Launch Capability Development
NASA Technical Reports Server (NTRS)
Brown, Thomas
2014-01-01
A recent explosion in nano-sat, small-sat, and university-class payloads has been driven by low-cost electronics and sensors, wide component availability, as well as low-cost, miniature computational capability and open source code. Increasing numbers of these very small spacecraft are being launched as secondary payloads, dramatically decreasing costs, and allowing greater access to operations and experimentation using actual space flight systems. While manifesting as a secondary payload provides inexpensive rides to orbit, these arrangements also have certain limitations. Small, secondary payloads are typically included with very limited payload accommodations, supported on a non-interference basis (to the prime payload), and are delivered to orbital conditions driven by the primary launch customer. Integration of propulsion systems or other hazardous capabilities will further complicate secondary launch arrangements and accommodation requirements. The National Aeronautics and Space Administration's Marshall Space Flight Center has begun work on the development of small, low-cost launch system concepts that could provide dedicated, affordable launch alternatives for small, high-risk, university-type payloads and spacecraft. These efforts include development of small propulsion systems and highly optimized structural efficiency, utilizing modern advanced manufacturing techniques. This paper outlines the plans and accomplishments of these efforts and investigates opportunities for truly revolutionary reductions in launch and operations costs. Both evolution of existing sounding rocket systems to orbital delivery and the development of clean-sheet, optimized small launch systems are addressed.
Small worlds in space: Synchronization, spatial and relational modularity
NASA Astrophysics Data System (ADS)
Brede, M.
2010-06-01
In this letter we investigate networks that have been optimized to realize a trade-off between enhanced synchronization and the cost of wire needed to connect the nodes in space. Analyzing the evolved arrangement of nodes in space and their corresponding network topology, a class of small-world networks characterized by spatial and network modularity is found. More precisely, for low cost of wire, optimal configurations are characterized by a division of nodes into two spatial groups with maximum distance from each other, whereas network modularity is low. For high cost of wire, the nodes organize into several distinct groups in space that correspond to network modules connected on a ring. In between, spatially and relationally modular small-world networks are found.
Coordinated and uncoordinated optimization of networks
NASA Astrophysics Data System (ADS)
Brede, Markus
2010-06-01
In this paper, we consider spatial networks that realize a balance between an infrastructure cost (the cost of wire needed to connect the network in space) and communication efficiency, measured by average shortest path length. A global optimization procedure yields network topologies in which this balance is optimized. These are compared with network topologies generated by a competitive process in which each node strives to optimize its own cost-communication balance. Three phases are observed in globally optimal configurations for different cost-communication trade-offs: (i) regular small worlds, (ii) starlike networks, and (iii) trees with a center of interconnected hubs. In the latter regime, i.e., for very expensive wire, power laws in the link length distributions P(w) ∝ w^-α are found, which can be explained by a hierarchical organization of the networks. In contrast, in the local optimization process the presence of sharp transitions between different network regimes depends on the dimension of the underlying space. Whereas for d=∞ sharp transitions between fully connected networks, regular small worlds, and highly cliquish periphery-core networks are found, for d=1 sharp transitions are absent and the power law behavior in the link length distribution persists over a much wider range of link cost parameters. The measured power law exponents are in agreement with the hypothesis that the locally optimized networks consist of multiple overlapping suboptimal hierarchical trees.
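A minimal sketch of the kind of cost-communication optimization both of these entries describe, assuming a simple greedy edge-rewiring scheme (the papers' actual optimization procedures are not specified in these abstracts); the weighting LAM and all sizes are arbitrary.

```python
import itertools, math, random
import networkx as nx

# Minimise  E = (1 - LAM) * <average shortest path> + LAM * total wire length
# over connected graphs on fixed random node positions, by greedy rewiring.
random.seed(1)
N, LAM, STEPS = 30, 0.3, 2000
pos = {i: (random.random(), random.random()) for i in range(N)}
dist = {frozenset(p): math.dist(pos[p[0]], pos[p[1]])
        for p in itertools.combinations(range(N), 2)}

def energy(G):
    wire = sum(dist[frozenset(e)] for e in G.edges)
    return (1 - LAM) * nx.average_shortest_path_length(G) + LAM * wire

G = nx.cycle_graph(N)                          # connected starting topology
E = energy(G)
for _ in range(STEPS):
    H = G.copy()
    H.remove_edge(*random.choice(list(H.edges)))
    u, v = random.sample(range(N), 2)
    if H.has_edge(u, v):
        continue
    H.add_edge(u, v)
    if nx.is_connected(H):
        E_new = energy(H)
        if E_new < E:                          # accept improving moves only
            G, E = H, E_new
print(f"final energy {E:.3f} with {G.number_of_edges()} edges")
```

Sweeping LAM from 0 toward 1 trades communication efficiency against wire cost, which is the axis along which both papers report their distinct phases.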
NASA Astrophysics Data System (ADS)
Ghasemi, Nahid; Aghayari, Reza; Maddah, Heydar
2018-07-01
The present study aims at optimizing the heat transmission parameters such as the Nusselt number and friction factor in a small double pipe heat exchanger equipped with rotating spiral tapes cut as triangles and filled with aluminum oxide nanofluid. The effects of Reynolds number, twist ratio (y/w), rotating twisted tape and concentration (w%) on the Nusselt number and friction factor are also investigated. The central composite design and the response surface methodology are used for evaluating the responses necessary for optimization. According to the optimal curves, the optimal values obtained for the Nusselt number and friction factor were 146.6675 and 0.06020, respectively. Finally, an appropriate correlation is provided to achieve the optimal minimum-cost model. Optimization results showed that the cost decreased in the best case.
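As a hedged illustration of the response-surface step (the data below are synthetic, not the study's measurements), one can fit a quadratic model over the three factors and locate its optimum on a grid, which is the essence of central composite design analysis.

```python
import numpy as np

# Fit a quadratic response surface Nu(Re, twist ratio, wt%) by least squares
# on synthetic data, then locate the predicted optimum on a grid.
rng = np.random.default_rng(0)
X = rng.uniform([5000, 2.0, 0.1], [20000, 6.0, 0.5], size=(20, 3))
nu = 0.02 * X[:, 0] ** 0.8 * (1 + 0.3 / X[:, 1]) * (1 + X[:, 2])  # fake data

def design(X):
    re, y, w = X.T
    return np.column_stack([np.ones(len(X)), re, y, w,
                            re * y, re * w, y * w, re**2, y**2, w**2])

beta, *_ = np.linalg.lstsq(design(X), nu, rcond=None)
grid = np.array(np.meshgrid(np.linspace(5000, 20000, 30),
                            np.linspace(2, 6, 30),
                            np.linspace(0.1, 0.5, 30))).reshape(3, -1).T
pred = design(grid) @ beta
best = grid[pred.argmax()]
print(f"max predicted Nu = {pred.max():.1f} at Re={best[0]:.0f}, "
      f"y/w={best[1]:.2f}, wt%={best[2]:.2f}")
```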
NASA Astrophysics Data System (ADS)
Yang, Y.; Chui, T. F. M.
2016-12-01
Green infrastructure (GI) has been identified as a sustainable and environmentally friendly alternative to conventional grey stormwater infrastructure. Commonly used GI (e.g. green roofs, bioretention, porous pavement) can provide multifunctional benefits, e.g. mitigation of urban heat island effects and improvements in air quality. Therefore, to optimize the design of GI and grey drainage infrastructure, it is essential to account for their benefits together with the costs. In this study, a comprehensive simulation-optimization modelling framework that considers the economic and hydro-environmental aspects of GI and grey infrastructure for small urban catchment applications is developed. Several modelling tools (i.e., the EPA SWMM model and the WERF BMP and LID Whole Life Cycle Cost Modelling Tools) and optimization solvers are coupled together to assess the life-cycle cost-effectiveness of GI and grey infrastructure, and to further develop optimal stormwater drainage solutions. A typical residential lot in New York City is examined as a case study. The life-cycle cost-effectiveness of various GI and grey infrastructure is first examined at different investment levels. The results, together with the catchment parameters, are then provided to the optimization solvers to derive the optimal investment and contributing area for each type of stormwater control. The relationship between the investment and the optimized environmental benefit is found to be nonlinear. The optimized drainage solutions demonstrate that grey infrastructure is preferred at low total investments, while more GI should be adopted at high investments. The sensitivity of the optimized solutions to the prices of the stormwater controls is evaluated and is found to be highly associated with their utilization in the base optimization case. The overall simulation-optimization framework can be easily applied to other sites worldwide and further developed into powerful decision support systems.
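A stripped-down sketch of the optimization core follows. The unit costs, capture effectiveness values, and budget are invented placeholders, and the real framework couples SWMM simulations rather than fixed per-unit coefficients, so this linear program only illustrates the structure of the investment trade-off.

```python
from scipy.optimize import linprog

# Decision variables: m2 of green roof, bioretention, porous pavement, and
# m3 of grey storage. Maximise annual runoff volume captured within a budget.
capture = [0.4, 0.9, 0.6, 1.0]      # m3 captured per unit per year (invented)
cost    = [120, 90, 70, 300]        # $ per unit installed, life-cycle (invented)
budget  = 50_000
area    = [200, 150, 400, 1e9]      # available area/volume per practice

res = linprog(c=[-c for c in capture],              # maximise => negate
              A_ub=[cost], b_ub=[budget],
              bounds=list(zip([0] * 4, area)))
print("optimal mix:", [round(x, 1) for x in res.x],
      "capture =", round(-res.fun, 1), "m3/yr")
```

Replacing the fixed coefficients with simulated performance curves at several investment levels reproduces the nonlinear investment-benefit relationship the abstract reports.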
Heliostat cost optimization study
NASA Astrophysics Data System (ADS)
von Reeken, Finn; Weinrebe, Gerhard; Keck, Thomas; Balz, Markus
2016-05-01
This paper presents a methodology for a heliostat cost optimization study. First different variants of small, medium sized and large heliostats are designed. Then the respective costs, tracking and optical quality are determined. For the calculation of optical quality a structural model of the heliostat is programmed and analyzed using finite element software. The costs are determined based on inquiries and from experience with similar structures. Eventually the levelised electricity costs for a reference power tower plant are calculated. Before each annual simulation run the heliostat field is optimized. Calculated LCOEs are then used to identify the most suitable option(s). Finally, the conclusions and findings of this extensive cost study are used to define the concept of a new cost-efficient heliostat called 'Stellio'.
Technical and cost advantages of silicon carbide telescopes for small-satellite imaging applications
NASA Astrophysics Data System (ADS)
Kasunic, Keith J.; Aikens, Dave; Szwabowski, Dean; Ragan, Chip; Tinker, Flemming
2017-09-01
Small satellites ("SmallSats") are a growing segment of the Earth imaging and remote sensing market. Designed to be relatively low cost and with performance tailored to specific end-use applications, they are driving changes in optical telescope assembly (OTA) requirements. OTAs implemented in silicon carbide (SiC) provide performance advantages for space applications but have been predominately limited to large programs. A new generation of lightweight and thermally-stable designs is becoming commercially available, expanding the application of SiC to small satellites. This paper reviews the cost and technical advantages of an OTA designed using SiC for small satellite platforms. Taking into account faceplate fabrication quilting and surface distortion after gravity release, an optimized open-back SiC design with a lightweighting of 70% for a 125-mm SmallSat-class primary mirror has an estimated mass area density of 2.8 kg/m2 and an aspect ratio of 40:1. In addition, the thermally-induced surface error of such optimized designs is estimated at λ/150 RMS per watt of absorbed power. Cost advantages of SiC include reductions in launch mass, thermal-management infrastructure, and manufacturing time based on allowable assembly tolerances.
Optimization study of small-scale solar membrane distillation desalination systems (s-SMDDS).
Chang, Hsuan; Chang, Cheng-Liang; Hung, Chen-Yu; Cheng, Tung-Wen; Ho, Chii-Dong
2014-11-24
Membrane distillation (MD), which can utilize low-grade thermal energy, has been extensively studied for desalination. By incorporating solar thermal energy, the solar membrane distillation desalination system (SMDDS) is a potential technology for resolving energy and water resource problems. Small-scale SMDDS (s-SMDDS) is an attractive and viable option for the production of fresh water for small communities in remote arid areas. The minimum cost design and operation of s-SMDDS are determined by a systematic method, which involves a pseudo-steady-state approach for equipment sizing and dynamic optimization using overall system mathematical models. Two s-SMDDS employing an air gap membrane distillation module with membrane areas of 11.5 m2 and 23 m2 are analyzed. The lowest water production costs are $5.92/m3 and $5.16/m3 for water production rates of 500 kg/day and 1000 kg/day, respectively. For these two optimal cases, the performance ratios are 0.85 and 0.91; the recovery ratios are 4.07% and 4.57%. The effect of membrane characteristics on the production cost is investigated. For the commercial membrane employed in this study, the increase of the membrane mass transfer coefficient up to two times is beneficial for cost reduction.
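For orientation, a $/m3 figure of this kind follows from a standard levelized-cost calculation. The capital and operating numbers below are invented and chosen only to land in the same order of magnitude as the study's results.

```python
# Back-of-envelope levelized water cost, showing how a $/m3 figure arises.
capex, annual_opex = 10_000.0, 300.0        # $ and $/yr, hypothetical
rate, years = 0.05, 20                      # discount rate, plant lifetime
production_m3_per_year = 0.5 * 365          # 500 kg/day of water ~ 0.5 m3/day

crf = rate * (1 + rate) ** years / ((1 + rate) ** years - 1)  # capital recovery
lcow = (capex * crf + annual_opex) / production_m3_per_year
print(f"levelized cost of water ~ ${lcow:.2f}/m3")
```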
Lee, Vivian W Y; Schwander, Bjoern; Lee, Victor H F
2014-06-01
OBJECTIVE. To compare the effectiveness and cost-effectiveness of erlotinib versus gefitinib as first-line treatment of epidermal growth factor receptor-activating mutation-positive non-small-cell lung cancer patients. DESIGN. Indirect treatment comparison and a cost-effectiveness assessment. SETTING. Hong Kong. PATIENTS. Those having epidermal growth factor receptor-activating mutation-positive non-small-cell lung cancer. Erlotinib versus gefitinib use was compared on the basis of four relevant Asian phase-III randomised controlled trials: one for erlotinib (OPTIMAL) and three for gefitinib (IPASS; NEJGSG; WJTOG). The cost-effectiveness assessment model simulates the transition between the health states: progression-free survival, progression, and death over a lifetime horizon. The World Health Organization criterion (incremental cost-effectiveness ratio <3 times of gross domestic product/capita:
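The WHO decision rule quoted (and truncated) above reduces to a one-line computation. The cost and QALY inputs below are placeholders, not the study's results.

```python
# Minimal ICER sketch matching the WHO rule of 3 x GDP per capita.
def icer(cost_new, cost_old, qaly_new, qaly_old):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

gdp_per_capita = 46_000          # hypothetical, USD
threshold = 3 * gdp_per_capita

ratio = icer(cost_new=85_000, cost_old=60_000, qaly_new=1.9, qaly_old=1.5)
verdict = "cost-effective" if ratio < threshold else "not cost-effective"
print(f"ICER = ${ratio:,.0f}/QALY -> {verdict} at ${threshold:,} threshold")
```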
Performance Management and Optimization of Semiconductor Design Projects
NASA Astrophysics Data System (ADS)
Hinrichs, Neele; Olbrich, Markus; Barke, Erich
2010-06-01
The semiconductor industry is characterized by fast technological changes and small time-to-market windows. Improving productivity is the key factor in standing up to competitors and thus successfully persisting in the market. In this paper a Performance Management System for analyzing, optimizing and evaluating chip design projects is presented. A task graph representation is used to optimize the design process regarding time, cost and workload of resources. Key Performance Indicators are defined in the main areas of cost, profit, resources, process and technical output to appraise the project.
Watershed Management Optimization Support Tool v3
The Watershed Management Optimization Support Tool (WMOST) is a decision support tool that facilitates integrated water management at the local or small watershed scale. WMOST models the environmental effects and costs of management decisions in a watershed context that is, accou...
Optimizing cost-efficiency in mean exposure assessment - cost functions reconsidered
2011-01-01
Background Reliable exposure data is a vital concern in medical epidemiology and intervention studies. The present study addresses the needs of the medical researcher to spend monetary resources devoted to exposure assessment with an optimal cost-efficiency, i.e. obtain the best possible statistical performance at a specified budget. A few previous studies have suggested mathematical optimization procedures based on very simple cost models; this study extends the methodology to cover even non-linear cost scenarios. Methods Statistical performance, i.e. efficiency, was assessed in terms of the precision of an exposure mean value, as determined in a hierarchical, nested measurement model with three stages. Total costs were assessed using a corresponding three-stage cost model, allowing costs at each stage to vary non-linearly with the number of measurements according to a power function. Using these models, procedures for identifying the optimally cost-efficient allocation of measurements under a constrained budget were developed, and applied on 225 scenarios combining different sizes of unit costs, cost function exponents, and exposure variance components. Results Explicit mathematical rules for identifying optimal allocation could be developed when cost functions were linear, while non-linear cost functions implied that parts of or the entire optimization procedure had to be carried out using numerical methods. For many of the 225 scenarios, the optimal strategy consisted in measuring on only one occasion from each of as many subjects as allowed by the budget. Significant deviations from this principle occurred if costs for recruiting subjects were large compared to costs for setting up measurement occasions, and, at the same time, the between-subjects to within-subject variance ratio was small. In these cases, non-linearities had a profound influence on the optimal allocation and on the eventual size of the exposure data set. Conclusions The analysis procedures developed in the present study can be used for informed design of exposure assessment strategies, provided that data are available on exposure variability and the costs of collecting and processing data. The present shortage of empirical evidence on costs and appropriate cost functions however impedes general conclusions on optimal exposure measurement strategies in different epidemiologic scenarios. PMID:21600023
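A numeric miniature of the allocation problem follows, simplified to two stages (subjects, and occasions per subject) with power-function costs; the variances, unit costs, exponents, and budget are illustrative, not taken from the paper.

```python
# Minimise the variance of the exposure mean under a budget constraint,
# with a nested two-stage variance model and non-linear (power) costs.
SB2, SW2 = 1.0, 3.0          # between- and within-subject variance
c1, c2 = 50.0, 10.0          # unit costs: recruit a subject, one occasion
a, b = 1.0, 0.8              # cost-function exponents (non-linear if != 1)
BUDGET = 2_000.0

best = None
for n in range(2, 200):                      # subjects
    for m in range(1, 50):                   # occasions per subject
        cost = c1 * n ** a + c2 * (n * m) ** b
        if cost > BUDGET:
            break                            # cost grows with m; stop early
        var_mean = SB2 / n + SW2 / (n * m)   # precision of the exposure mean
        if best is None or var_mean < best[0]:
            best = (var_mean, n, m, cost)

v, n, m, cost = best
print(f"measure {n} subjects x {m} occasion(s): var(mean)={v:.4f}, cost={cost:.0f}")
```

With many parameter choices the search lands on m = 1, echoing the paper's finding that measuring each of as many subjects as possible once is often optimal.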
Watershed Management Optimization Support Tool (WMOST) v3: User Guide
The Watershed Management Optimization Support Tool (WMOST) is a decision support tool that facilitates integrated water management at the local or small watershed scale. WMOST models the environmental effects and costs of management decisions in a watershed context that is, accou...
Watershed Management Optimization Support Tool (WMOST) v3: Theoretical Documentation
The Watershed Management Optimization Support Tool (WMOST) is a decision support tool that facilitates integrated water management at the local or small watershed scale. WMOST models the environmental effects and costs of management decisions in a watershed context, accounting fo...
NASA Astrophysics Data System (ADS)
Chaudhuri, Anirban
Global optimization based on expensive and time consuming simulations or experiments usually cannot be carried out to convergence, but must be stopped because of time constraints, or because the cost of the additional function evaluations exceeds the benefits of improving the objective(s). This dissertation sets out to explore the implications of such budget and time constraints on the balance between exploration and exploitation and the decision of when to stop. Three different aspects are considered in terms of their effects on the balance between exploration and exploitation: 1) history of optimization, 2) fixed evaluation budget, and 3) cost as part of the objective function. To this end, this research develops modifications to the surrogate-based optimization technique, the Efficient Global Optimization algorithm, that better control the balance between exploration and exploitation, and stopping criteria facilitated by these modifications. Then the focus shifts to examining experimental optimization, which shares the issues of cost and time constraints. Through a study on optimization of thrust and power for a small flapping wing for micro air vehicles, important differences and similarities between experimental and simulation-based optimization are identified. The most important difference is that reduction of noise in experiments becomes a major time and cost issue, and a second difference is that parallelism as a way to cut cost is more challenging. The experimental optimization reveals the tendency of the surrogate to display optimistic bias near the surrogate optimum, and this tendency is then verified to also occur in simulation-based optimization.
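The Efficient Global Optimization algorithm mentioned above balances exploration and exploitation through an acquisition function, classically expected improvement. A minimal sketch in minimization form follows; the mu and sigma arrays stand in for a fitted surrogate's predictions and are dummies here.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best, xi=0.0):
    """EI(x) = E[max(f_best - Y(x) - xi, 0)] with Y(x) ~ N(mu, sigma^2).
    Larger xi shifts the balance from exploitation toward exploration."""
    sigma = np.maximum(sigma, 1e-12)         # avoid division by zero
    z = (f_best - mu - xi) / sigma
    return (f_best - mu - xi) * norm.cdf(z) + sigma * norm.pdf(z)

mu = np.array([0.2, -0.1, 0.05])       # surrogate predictions (dummy)
sigma = np.array([0.05, 0.3, 0.01])    # surrogate uncertainty (dummy)
print(expected_improvement(mu, sigma, f_best=0.0))
```

A stopping rule of the kind the dissertation studies can be phrased directly on this quantity, for example: stop when the largest EI falls below the cost of one more evaluation.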
Environmental tipping points significantly affect the cost-benefit assessment of climate policies.
Cai, Yongyang; Judd, Kenneth L; Lenton, Timothy M; Lontzek, Thomas S; Narita, Daiju
2015-04-14
Most current cost-benefit analyses of climate change policies suggest an optimal global climate policy that is significantly less stringent than the level required to meet the internationally agreed 2 °C target. This is partly because the sum of estimated economic damage of climate change across various sectors, such as energy use and changes in agricultural production, results in only a small economic loss or even a small economic gain in the gross world product under predicted levels of climate change. However, those cost-benefit analyses rarely take account of environmental tipping points leading to abrupt and irreversible impacts on market and nonmarket goods and services, including those provided by the climate and by ecosystems. Here we show that including environmental tipping point impacts in a stochastic dynamic integrated assessment model profoundly alters cost-benefit assessment of global climate policy. The risk of a tipping point, even if it only has nonmarket impacts, could substantially increase the present optimal carbon tax. For example, a risk of only 5% loss in nonmarket goods that occurs with a 5% annual probability at 4 °C increase of the global surface temperature causes an immediate two-thirds increase in optimal carbon tax. If the tipping point also has a 5% impact on market goods, the optimal carbon tax increases by more than a factor of 3. Hence existing cost-benefit assessments of global climate policy may be significantly underestimating the needs for controlling climate change.
Optimal Portfolio Selection Under Concave Price Impact
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma Jin, E-mail: jinma@usc.edu; Song Qingshuo, E-mail: songe.qingshuo@cityu.edu.hk; Xu Jing, E-mail: xujing8023@yahoo.com.cn
2013-06-15
In this paper we study an optimal portfolio selection problem under instantaneous price impact. Based on some empirical analysis in the literature, we model such impact as a concave function of the trading size when the trading size is small. The price impact can be thought of as either a liquidity cost or a transaction cost, but the concavity nature of the cost leads to some fundamental difference from those in the existing literature. We show that the problem can be reduced to an impulse control problem, but without fixed cost, and that the value function is a viscosity solution to a special type of Quasi-Variational Inequality (QVI). We also prove directly (without using the solution to the QVI) that the optimal strategy exists and more importantly, despite the absence of a fixed cost, it is still in a 'piecewise constant' form, reflecting a more practical perspective.
Asset Prices and Trading Volume under Fixed Transactions Costs.
ERIC Educational Resources Information Center
Lo, Andrew W.; Mamaysky, Harry; Wang, Jiang
2004-01-01
We propose a dynamic equilibrium model of asset prices and trading volume when agents face fixed transactions costs. We show that even small fixed costs can give rise to large "no-trade" regions for each agent's optimal trading policy. The inability to trade more frequently reduces the agents' asset demand and in equilibrium gives rise to a…
Optimal short-range trajectories for helicopters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Slater, G.L.; Erzberger, H.
1982-12-01
An optimal flight path algorithm using a simplified altitude state model and an a priori climb-cruise-descent flight profile was developed and applied to determine minimum-fuel and minimum-cost trajectories for a helicopter flying a fixed-range trajectory. In addition, a method was developed for obtaining a performance model in simplified form which is based on standard flight manual data and which is applicable to the computation of optimal trajectories. The entire performance optimization algorithm is simple enough that on-line trajectory optimization is feasible with a relatively small computer. The helicopter model used is the Sikorsky S-61N. The results show that for this vehicle the optimal flight path and optimal cruise altitude can represent a 10% fuel saving on a minimum-fuel trajectory. The optimal trajectories show considerable variability because of helicopter weight, ambient winds, and the relative cost trade-off between time and fuel. In general, reasonable variations from the optimal velocities and cruise altitudes do not significantly degrade the optimal cost. For fuel-optimal trajectories, the optimum cruise altitude varies from the maximum (12,000 ft) to the minimum (0 ft) depending on helicopter weight.
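A toy version of the cruise-condition search sweeps altitude and speed to minimize direct operating cost over a fixed range. The fuel-flow model and prices below are made-up placeholders, not S-61N flight-manual data.

```python
# Sweep cruise altitude and speed; minimise DOC = fuel cost + time cost.
RANGE_NM = 200.0
FUEL_PRICE, TIME_COST = 6.0, 900.0      # $/gal, $/hr (illustrative)

def fuel_flow_gph(alt_ft, v_kt):
    # Invented placeholder: flow rises away from a best-range speed and
    # falls slightly with altitude.
    return 160 + 0.04 * (v_kt - 110) ** 2 - 0.002 * alt_ft

best = min(
    ((alt, v,
      RANGE_NM / v * (fuel_flow_gph(alt, v) * FUEL_PRICE + TIME_COST))
     for alt in range(0, 12001, 1000) for v in range(80, 141, 5)),
    key=lambda t: t[2])
print(f"cruise at {best[0]} ft, {best[1]} kt -> direct cost ${best[2]:.0f}")
```

Changing the TIME_COST to FUEL_PRICE ratio moves the optimum, which is the time-fuel trade-off the abstract describes.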
NASA Astrophysics Data System (ADS)
Li, Peng; Wu, Di
2018-01-01
Two competing approaches have been developed over the years for multi-echelon inventory system optimization: the stochastic-service approach (SSA) and the guaranteed-service approach (GSA). Although they solve the same inventory policy optimization problem at their core, they make different assumptions with regard to the role of safety stock. This paper provides a detailed comparison of the two approaches by considering operating flexibility costs in the optimization of (R, Q) policies for a continuous-review serial inventory system. The results indicate that the GSA model is more efficient at solving this complicated inventory problem in terms of computation time, and that the cost difference between the two approaches is quite small.
System design optimization for stand-alone photovoltaic systems sizing by using superstructure model
NASA Astrophysics Data System (ADS)
Azau, M. A. M.; Jaafar, S.; Samsudin, K.
2013-06-01
Although photovoltaic (PV) systems have been increasingly installed as an alternative and renewable green power generation technology, the initial setup cost, maintenance cost and equipment mismatch are some of the key issues that slow down installation in small households. This paper presents the design optimization of stand-alone photovoltaic systems using a superstructure model in which all possible types of equipment technology are captured, and life cycle cost analysis is formulated as a mixed integer programming (MIP) problem. A model for investment planning of power generation and a long-term decision model are developed to help the system engineer build a cost-effective system.
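A hedged sketch of a superstructure MIP of this kind, using PuLP: the component catalogue, load figures, and the one-technology-per-class rule are invented for illustration, not taken from the paper.

```python
import pulp

# Pick one panel type and one battery type, plus integer counts of each,
# minimising a simple cost subject to meeting a daily load (all data invented).
panels = {"mono": (320, 180), "poly": (280, 140)}     # Wp, $ per panel
batts  = {"liion": (2.4, 900), "lead": (1.8, 400)}    # kWh, $ per battery
LOAD_KWH, SUN_H = 8.0, 4.5                            # daily load, sun hours

prob = pulp.LpProblem("pv_sizing", pulp.LpMinimize)
use_p = pulp.LpVariable.dicts("use_p", panels, cat="Binary")
use_b = pulp.LpVariable.dicts("use_b", batts, cat="Binary")
n_p = pulp.LpVariable.dicts("n_p", panels, lowBound=0, upBound=40, cat="Integer")
n_b = pulp.LpVariable.dicts("n_b", batts, lowBound=0, upBound=20, cat="Integer")

prob += pulp.lpSum(panels[k][1] * n_p[k] for k in panels) + \
        pulp.lpSum(batts[k][1] * n_b[k] for k in batts)
prob += pulp.lpSum(panels[k][0] / 1000 * SUN_H * n_p[k] for k in panels) >= LOAD_KWH
prob += pulp.lpSum(batts[k][0] * n_b[k] for k in batts) >= LOAD_KWH  # 1-day autonomy
for k in panels: prob += n_p[k] <= 40 * use_p[k]      # count only if selected
for k in batts:  prob += n_b[k] <= 20 * use_b[k]
prob += pulp.lpSum(use_p.values()) == 1               # one panel technology
prob += pulp.lpSum(use_b.values()) == 1               # one battery technology

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({v.name: v.value() for v in prob.variables() if v.value()})
```

The binary selection variables are what make this a superstructure: every candidate technology is present in the model, and the solver decides which branch survives.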
NASA Astrophysics Data System (ADS)
Hori, Toshikazu; Mohri, Yoshiyuki; Matsushima, Kenichi; Ariyoshi, Mitsuru
In recent years, the increasing frequency of heavy rainfall events such as unpredictable cloudbursts has made it necessary to improve the safety of the embankments of small earth dams. However, the severe financial condition of the government and local autonomous bodies requires the cost of improving them to be reduced. This study concerns the development of a method for evaluating the life cycle cost of small earth dams considered to pose a risk, so as to improve the safety of the areas downstream of small earth dams at minimal cost. Use of a safety evaluation method based on a combination of runoff analysis, saturated and unsaturated seepage analysis, and slope stability analysis enables the probability of a dam breach, and hence its life cycle cost with the risk of heavy rainfall taken into account, to be calculated. Moreover, use of the life cycle cost evaluation method will lead to the development of a technique for selecting the optimal improvement or countermeasures against heavy rainfall.
Enabling Dedicated, Affordable Space Access Through Aggressive Technology Maturation
NASA Technical Reports Server (NTRS)
Jones, Jonathan; Kibbey, Tim; Lampton, Pat; Brown, Thomas
2014-01-01
A recent explosion in nano-sat, small-sat, and university-class payloads has been driven by low-cost electronics and sensors, wide component availability, as well as low-cost, miniature computational capability and open source code. Increasing numbers of these very small spacecraft are being launched as secondary payloads, dramatically decreasing costs, and allowing greater access to operations and experimentation using actual space flight systems. While manifesting as a secondary payload provides inexpensive rides to orbit, these arrangements also have certain limitations. Small, secondary payloads are typically included with very limited payload accommodations, supported on a non-interference basis (to the prime payload), and are delivered to orbital conditions driven by the primary launch customer. Integration of propulsion systems or other hazardous capabilities will further complicate secondary launch arrangements and accommodation requirements. The National Aeronautics and Space Administration's Marshall Space Flight Center has begun work on the development of small, low-cost launch system concepts that could provide dedicated, affordable launch alternatives for small, risk-tolerant, university-type payloads and spacecraft. These efforts include development of small propulsion systems and highly optimized structural efficiency, utilizing modern advanced manufacturing techniques. This paper outlines the plans and accomplishments of these efforts and investigates opportunities for truly revolutionary reductions in launch and operations costs. Both evolution of existing sounding rocket systems to orbital delivery and the development of clean-sheet, optimized small launch systems are addressed. A launch vehicle at the scale and price point which allows developers to take reasonable risks with new propulsion and avionics hardware solutions does not exist today. Establishing this service provides a ride through the proverbial "valley of death" that lies between demonstration in laboratory and flight environments. This effort will provide the framework to mature both on-orbit and earth-to-orbit avionics and propulsion technologies while also providing dedicated, affordable access to LEO for cubesat class payloads.
Distributed Wind Competitiveness Improvement Project
DOE Office of Scientific and Technical Information (OSTI.GOV)
The Competitiveness Improvement Project (CIP) is a periodic solicitation through the U.S. Department of Energy and its National Renewable Energy Laboratory. Manufacturers of small and medium wind turbines are awarded cost-shared grants via a competitive process to optimize their designs, develop advanced manufacturing processes, and perform turbine testing. The goals of the CIP are to make wind energy cost competitive with other distributed generation technology and increase the number of wind turbine designs certified to national testing standards. This fact sheet describes the CIP and funding awarded as part of the project.
Optimal structural design of the midship of a VLCC based on the strategy integrating SVM and GA
NASA Astrophysics Data System (ADS)
Sun, Li; Wang, Deyu
2012-03-01
In this paper a hybrid process of modeling and optimization, which integrates a support vector machine (SVM) and genetic algorithm (GA), was introduced to reduce the high time cost in structural optimization of ships. SVM, which is rooted in statistical learning theory and an approximate implementation of the method of structural risk minimization, can provide a good generalization performance in metamodeling the input-output relationship of real problems and consequently cuts down on high time cost in the analysis of real problems, such as FEM analysis. The GA, as a powerful optimization technique, possesses remarkable advantages for the problems that can hardly be optimized with common gradient-based optimization methods, which makes it suitable for optimizing models built by SVM. Based on the SVM-GA strategy, optimization of structural scantlings in the midship of a very large crude carrier (VLCC) ship was carried out according to the direct strength assessment method in common structural rules (CSR), which eventually demonstrates the high efficiency of SVM-GA in optimizing the ship structural scantlings under heavy computational complexity. The time cost of this optimization with SVM-GA has been sharply reduced, many more loops have been processed within a small amount of time and the design has been improved remarkably.
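The SVM-GA strategy can be sketched as follows, with scikit-learn's SVR as the metamodel and a hand-rolled GA on top; the "expensive" function here is a stand-in for the FEM-based strength assessment, and all sizes and hyperparameters are illustrative.

```python
import numpy as np
from sklearn.svm import SVR

# Train an SVR surrogate on a handful of expensive evaluations, then run a
# small genetic algorithm on the cheap surrogate instead of the real solver.
rng = np.random.default_rng(0)
expensive = lambda X: np.sum((X - 0.3) ** 2, axis=1)   # placeholder for FEA

X_train = rng.random((60, 4))                          # sampled designs
surrogate = SVR(C=100.0, gamma="scale").fit(X_train, expensive(X_train))

pop = rng.random((40, 4))                              # initial GA population
for gen in range(50):
    fit = surrogate.predict(pop)
    parents = pop[np.argsort(fit)[:20]]                # truncation selection
    mates = parents[rng.integers(0, 20, size=(20, 2))]
    children = mates.mean(axis=1)                      # arithmetic crossover
    children += rng.normal(0, 0.05, children.shape)    # Gaussian mutation
    pop = np.clip(np.vstack([parents, children]), 0, 1)

best = pop[np.argmin(surrogate.predict(pop))]
print("surrogate optimum:", best, "true value:", expensive(best[None])[0])
```

The time saving the paper reports comes from this substitution: the GA's thousands of fitness calls hit the surrogate, while the expensive solver is queried only for the training sample (and, in practice, to verify candidate optima).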
Optimal spacecraft attitude control using collocation and nonlinear programming
NASA Astrophysics Data System (ADS)
Herman, A. L.; Conway, B. A.
1992-10-01
Direct collocation with nonlinear programming (DCNLP) is employed to find the optimal open-loop control histories for detumbling a disabled satellite. The controls are torques and forces applied to the docking arm and joint and torques applied about the body axes of the OMV. Solutions are obtained for cases in which various constraints are placed on the controls and in which the number of controls is reduced or increased from that considered in Conway and Widhalm (1986). DCNLP works well when applied to the optimal control problem of satellite attitude control. The formulation is straightforward and produces good results in a relatively small amount of time on a Cray X/MP with no a priori information about the optimal solution. The addition of joint acceleration to the controls significantly reduces the control magnitudes and the optimal cost. In all cases, the torques and accelerations are modest and the optimal cost is very modest.
An overview of the Douglas Aircraft Company Aeroelastic Design Optimization Program (ADOP)
NASA Technical Reports Server (NTRS)
Dodd, Alan J.
1989-01-01
From a program manager's viewpoint, the history, scope and architecture of a major structural design program at Douglas Aircraft Company called Aeroelastic Design Optimization Program (ADOP) are described. ADOP was originally intended for the rapid, accurate, cost-effective evaluation of relatively small structural models at the advanced design level, resulting in improved proposal competitiveness and avoiding many costly changes later in the design cycle. Before release of the initial version in November 1987, however, the program was expanded to handle very large production-type analyses.
Alarcón, J A; Immink, M D; Méndez, L F
1989-12-01
The present study was conducted as part of an evaluation of the economic and nutritional effects of a crop diversification program for small-scale farmers in the Western highlands of Guatemala. Linear programming models are employed in order to obtain optimal combinations of traditional and non-traditional food crops under different ecological conditions that: a) provide minimum cost diets for auto-consumption, and b) maximize net income and market availability of dietary energy. Data used were generated by means of an agroeconomic survey conducted in 1983 among 726 farming households. Food prices were obtained from the Institute of Agrarian Marketing; data on production costs, from the National Bank of Agricultural Development in Guatemala. The gestation periods for each crop were obtained from three different sources, and then averaged. The results indicated that the optimal cropping pattern for the minimum-cost diets for auto-consumption includes traditional foods (corn, beans, broad bean, wheat, potato), non-traditional foods (carrots, broccoli, beets) and foods of animal origin (milk, eggs). A significant number of farmers included in the sample did not have sufficient land availability to produce all foods included in the minimum-cost diet. Cropping patterns which maximize net income include only non-traditional foods: onions, carrots, broccoli and beets for farmers in the low highland areas, and radish, broccoli, cauliflower and carrots for farmers in the higher parts. Optimal cropping patterns which maximize market availability of dietary energy include traditional and non-traditional foods; for farmers in the lower areas: wheat, corn, beets, carrots and onions; for farmers in the higher areas: potato, wheat, radish, carrots and cabbage.
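The minimum-cost diet component is a classic linear program. A stylized version follows; the crop prices and nutrient contents are invented for illustration and are not the 1983 survey data.

```python
from scipy.optimize import linprog

# Least-cost diet LP: minimise crop cost subject to annual energy and
# protein requirements (all coefficients are illustrative placeholders).
#           corn  beans potato carrot broccoli
price   = [0.30, 0.90, 0.25, 0.40, 0.80]     # $ per kg
energy  = [3650, 3400,  770,  410,  340]     # kcal per kg
protein = [  94,  210,   20,    9,   28]     # g per kg

res = linprog(c=price,
              A_ub=[[-e for e in energy], [-p for p in protein]],
              b_ub=[-2200 * 365, -50 * 365],  # annual kcal and protein needs
              bounds=[(0, 400)] * 5)          # at most 400 kg/yr of any crop
print("kg per year of each crop:", [round(x) for x in res.x],
      "annual cost: $%.0f" % res.fun)
```

The study's richer versions add land, labor, and gestation-period constraints, which is what pushes the optimal baskets toward the mixed traditional/non-traditional patterns reported above.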
Approaches to eliminate waste and reduce cost for recycling glass.
Chao, Chien-Wen; Liao, Ching-Jong
2011-12-01
In recent years, the issue of environmental protection has received considerable attention. This paper adds to the literature by investigating a scheduling problem in the manufacturing of a glass recycling factory in Taiwan. The objective is to minimize the sum of the total holding cost and loss cost. We first represent the problem as an integer programming (IP) model, and then develop two heuristics based on the IP model to find near-optimal solutions for the problem. To validate the proposed heuristics, comparisons between optimal solutions from the IP model and solutions from the current method are conducted. The comparisons involve two problem sizes, small and large, where the small problems range from 15 to 45 jobs, and the large problems from 50 to 100 jobs. Finally, a genetic algorithm is applied to evaluate the proposed heuristics. Computational experiments show that the proposed heuristics can find good solutions in a reasonable time for the considered problem.
Cost effective design and operation of Granular Activated Carbon (GAC) facilities requires the selection of GAC that is optimal for a specific site. Rapid small-scale column tests (RSSCTs) are widely used for GAC assessment due to several advantages, including the ability to simu...
The Watershed Management Optimization Support Tool (WMOST) is a decision support tool that facilitates integrated water management at the local or small watershed scale. WMOST models the environmental effects and costs of management.
COTSAT Small Spacecraft Cost Optimization for Government and Commercial Use
NASA Technical Reports Server (NTRS)
Swank, Aaron J.; Bui, David; Dallara, Christopher; Ghassemieh, Shakib; Hanratty, James; Jackson, Evan; Klupar, Pete; Lindsay, Michael; Ling, Kuok; Mattei, Nicholas;
2009-01-01
Cost Optimized Test of Spacecraft Avionics and Technologies (COTSAT-1) is an ongoing spacecraft research and development project at NASA Ames Research Center (ARC). The prototype spacecraft, also known as CheapSat, is the first of what could potentially be a series of rapidly produced low-cost spacecraft. The COTSAT-1 team is committed to realizing the challenging goal of building a fully functional spacecraft for $500K parts and $2.0M labor. The project's efforts have resulted in significant accomplishments within the scope of a limited budget and schedule. Completion and delivery of the flight hardware to the Engineering Directorate at NASA Ames occurred in February 2009 and a cost effective qualification program is currently under study. The COTSAT-1 spacecraft is now located at NASA Ames Research Center and is awaiting a cost effective launch opportunity. This paper highlights the advancements of the COTSAT-1 spacecraft cost reduction techniques.
Low, slow, small target recognition based on spatial vision network
NASA Astrophysics Data System (ADS)
Cheng, Zhao; Guo, Pei; Qi, Xin
2018-03-01
Traditional photoelectric monitoring uses a large number of identical cameras. To ensure full coverage of the monitoring area, this monitoring method requires many cameras, which leads to overlapping and repeated coverage and higher costs, resulting in considerable waste. To reduce the monitoring cost and address the difficult problem of finding, identifying and tracking low-altitude, slow-speed, small targets, this paper presents a spatial vision network for low-slow-small target recognition. Based on the camera imaging principle and a monitoring model, the spatial vision network is modeled and optimized. Simulation experiment results demonstrate that the proposed method has good performance.
Assessment of regional management strategies for controlling seawater intrusion
Reichard, E.G.; Johnson, T.A.
2005-01-01
Simulation-optimization methods, applied with adequate sensitivity tests, can provide useful quantitative guidance for controlling seawater intrusion. This is demonstrated in an application to the West Coast Basin of coastal Los Angeles that considers two management options for improving hydraulic control of seawater intrusion: increased injection into barrier wells and in lieu delivery of surface water to replace current pumpage. For the base-case optimization analysis, assuming constant groundwater demand, in lieu delivery was determined to be most cost effective. Reduced-cost information from the optimization provided guidance for prioritizing locations for in lieu delivery. Model sensitivity to a suite of hydrologic, economic, and policy factors was tested. Raising the imposed average water-level constraint at the hydraulic-control locations resulted in nonlinear increases in cost. Systematically varying the relative costs of injection and in lieu water yielded a trade-off curve between relative costs and injection/in lieu amounts. Changing the assumed future scenario to one of increasing pumpage in the adjacent Central Basin caused a small increase in the computed costs of seawater intrusion control. Changing the assumed boundary condition representing interaction with an adjacent basin did not affect the optimization results. Reducing the assumed hydraulic conductivity of the main productive aquifer resulted in a large increase in the model-computed cost. Journal of Water Resources Planning and Management © ASCE.
The HDAC Inhibitor TSA Ameliorates a Zebrafish Model of Duchenne Muscular Dystrophy.
Johnson, Nathan M; Farr, Gist H; Maves, Lisa
2013-09-17
Zebrafish are an excellent model for Duchenne muscular dystrophy. In particular, zebrafish provide a system for rapid, easy, and low-cost screening of small molecules that can ameliorate muscle damage in dystrophic larvae. Here we identify an optimal anti-sense morpholino cocktail that robustly knocks down zebrafish Dystrophin (dmd-MO). We use two approaches, muscle birefringence and muscle actin expression, to quantify muscle damage and show that the dmd-MO dystrophic phenotype closely resembles the zebrafish dmd mutant phenotype. We then show that the histone deacetylase (HDAC) inhibitor TSA, which has been shown to ameliorate the mdx mouse Duchenne model, can rescue muscle fiber damage in both dmd-MO and dmd mutant larvae. Our study identifies optimal morpholino and phenotypic scoring approaches for dystrophic zebrafish, further enhancing the zebrafish dmd model for rapid and cost-effective small molecule screening.
Optimally stopped variational quantum algorithms
NASA Astrophysics Data System (ADS)
Vinci, Walter; Shabani, Alireza
2018-04-01
Quantum processors promise a paradigm shift in high-performance computing which needs to be assessed by accurate benchmarking measures. In this article, we introduce a benchmark for the variational quantum algorithm (VQA), recently proposed as a heuristic algorithm for small-scale quantum processors. In VQA, a classical optimization algorithm guides the processor's quantum dynamics to yield the best solution for a given problem. A complete assessment of the scalability and competitiveness of VQA should take into account both the quality and the time of dynamics optimization. The method of optimal stopping, employed here, provides such an assessment by explicitly including time as a cost factor. Here, we showcase this measure for benchmarking VQA as a solver for some quadratic unconstrained binary optimization problems. Moreover, we show that a better choice for the cost function of the classical routine can significantly improve the performance of the VQA algorithm and even improve its scaling properties.
Mixed H2/H-infinity control with output feedback compensators using parameter optimization
NASA Technical Reports Server (NTRS)
Schoemig, Ewald; Ly, Uy-Loi
1992-01-01
Among the many possible norm-based optimization methods, the concept of H-infinity optimal control has gained enormous attention in the past few years. Here the H-infinity framework, based on the Small Gain Theorem and the Youla Parameterization, effectively treats system uncertainties in the control law synthesis. A design approach involving a mixed H2/H-infinity norm strives to combine the advantages of both methods. This advantage motivates researchers toward finding solutions to the mixed H2/H-infinity control problem. The approach developed in this research is based on a finite-time cost functional that depicts an H-infinity-bound control problem in an H2-optimization setting. The goal is to define a time-domain cost function that optimizes the H2-norm of a system with an H-infinity constraint function.
Itagaki, Michael W
2015-01-01
Three-dimensional (3D) printing applications in medicine have been limited due to high cost and technical difficulty of creating 3D printed objects. It is not known whether patient-specific, hollow, small-caliber vascular models can be manufactured with 3D printing, and used for small vessel endoluminal testing of devices. Manufacture of anatomically accurate, patient-specific, small-caliber arterial models was attempted using data from a patient's CT scan, free open-source software, and low-cost Internet 3D printing services. Prior to endovascular treatment of a patient with multiple splenic artery aneurysms, a 3D printed model was used preoperatively to test catheter equipment and practice the procedure. A second model was used intraoperatively as a reference. Full-scale plastic models were successfully produced. Testing determined the optimal puncture site for catheter positioning. A guide catheter, base catheter, and microcatheter combination selected during testing was used intraoperatively with success, and the need for repeat angiograms to optimize image orientation was minimized. A difficult and unconventional procedure was successful in treating the aneurysms while preserving splenic function. We conclude that creation of small-caliber vascular models with 3D printing is possible. Free software and low-cost printing services make creation of these models affordable and practical. Models are useful in preoperative planning and intraoperative guidance.
Field-design optimization with triangular heliostat pods
NASA Astrophysics Data System (ADS)
Domínguez-Bravo, Carmen-Ana; Bode, Sebastian-James; Heiming, Gregor; Richter, Pascal; Carrizosa, Emilio; Fernández-Cara, Enrique; Frank, Martin; Gauché, Paul
2016-05-01
In this paper the optimization of a heliostat field with triangular heliostat pods is addressed. The use of structures that combine several heliostats into a common pod system aims to reduce the high costs associated with the heliostat field and therefore the Levelized Cost of Electricity. A pattern-based algorithm and two pattern-free algorithms are adapted to handle the field layout problem with triangular heliostat pods. Under the Helio100 project in South Africa, a new small-scale Solar Power Tower plant has recently been constructed. The Helio100 plant has 20 triangular pods (each with 6 heliostats) whose positions follow a linear pattern. The optimized field layouts are compared against the Helio100 reference field.
Tanyimboh, Tiku T; Seyoum, Alemtsehay G
2016-12-01
This article investigates the computational efficiency of constraint handling in multi-objective evolutionary optimization algorithms for water distribution systems. The methodology investigated here encourages the co-existence and simultaneous development, including crossbreeding, of subpopulations of cost-effective feasible and infeasible solutions based on Pareto dominance. This yields a boundary search approach that also promotes diversity in the gene pool throughout the progress of the optimization by exploiting the full spectrum of non-dominated infeasible solutions. The relative effectiveness of small and moderate population sizes with respect to the number of decision variables is also investigated. The results reveal the optimization algorithm to be efficient, stable and robust: it found optimal and near-optimal solutions reliably and efficiently. The optimization problem, based on a real-world system, involved multiple variable-head supply nodes, 29 fire-fighting flows, extended period simulation and multiple demand categories including water loss. The least cost solutions found satisfied the flow and pressure requirements consistently. The best solutions achieved indicative savings of 48.1% and 48.2%, based on the cost of the pipes in the existing network, for populations of 200 and 1000, respectively. The population of 1000 achieved slightly better results overall. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
Small Habitat Commonality Reduces Cost for Human Mars Missions
NASA Technical Reports Server (NTRS)
Griffin, Brand N.; Lepsch, Roger; Martin, John; Howard, Robert; Rucker, Michelle; Zapata, Edgar; McCleskey, Carey; Howe, Scott; Mary, Natalie; Nerren, Philip (Inventor)
2015-01-01
Most view the Apollo Program as expensive. It was. But a human mission to Mars will be orders of magnitude more difficult and costly. Recently, NASA's Evolvable Mars Campaign (EMC) mapped out a step-wise approach for exploring Mars and the Mars-moon system. It is early in the planning process, but because approximately 80% of the total life cycle cost is committed during preliminary design, there is an effort to emphasize cost reduction methods up front. Amongst the options, commonality across small habitat elements shows promise for consolidating the high bow-wave costs of Design, Development, Test and Evaluation (DDT&E) while still accommodating each end-item's functionality. In addition to DDT&E, there are other cost and operations benefits to commonality, such as reduced logistics, simplified infrastructure integration and, through inter-operability, improved safety and simplified training. These benefits are not without a cost. Some habitats are sub-optimized, giving up unique attributes for the benefit of the overall architecture, and because the first item sets the course for those to follow, rapidly developing technology may be excluded. The small habitats within the EMC include the pressurized crew cabins for the ascent vehicle,
On the Convergence Analysis of the Optimized Gradient Method.
Kim, Donghwan; Fessler, Jeffrey A
2017-01-01
This paper considers the problem of unconstrained minimization of smooth convex functions having Lipschitz continuous gradients with known Lipschitz constant. We recently proposed the optimized gradient method for this problem and showed that it has a worst-case convergence bound for the cost function decrease that is twice as small as that of Nesterov's fast gradient method, yet has a similarly efficient practical implementation. Drori showed recently that the optimized gradient method has optimal complexity for the cost function decrease over the general class of first-order methods. This optimality makes it important to study fully the convergence properties of the optimized gradient method. The previous worst-case convergence bound for the optimized gradient method was derived for only the last iterate of a secondary sequence. This paper provides an analytic convergence bound for the primary sequence generated by the optimized gradient method. We then discuss additional convergence properties of the optimized gradient method, including the interesting fact that the optimized gradient method has two types of worst-case functions: a piecewise affine-quadratic function and a quadratic function. These results help complete the theory of an optimal first-order method for smooth convex minimization.
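The OGM iteration itself is compact; the sketch below applies it to a random convex quadratic. The special momentum factor that the paper uses for the final iterate is omitted here for brevity.

```python
import numpy as np

# Minimal OGM sketch for f(x) = 0.5 x' Q x, whose gradient is L-Lipschitz with
# L = largest eigenvalue of Q. The final-iteration momentum tweak is omitted.
rng = np.random.default_rng(0)
n = 20
M = rng.standard_normal((n, n))
Q = M.T @ M + np.eye(n)            # symmetric positive definite
L = np.linalg.eigvalsh(Q).max()    # Lipschitz constant of the gradient

grad = lambda x: Q @ x
x = y = rng.standard_normal(n)
theta = 1.0
for _ in range(100):
    y_next = x - grad(x) / L                          # gradient step (secondary sequence)
    theta_next = 0.5 * (1 + np.sqrt(1 + 4 * theta**2))
    x = y_next + ((theta - 1) / theta_next) * (y_next - y) \
               + (theta / theta_next) * (y_next - x)  # OGM momentum (primary sequence)
    y, theta = y_next, theta_next

print("final cost:", 0.5 * x @ Q @ x)
```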
On the Convergence Analysis of the Optimized Gradient Method
Kim, Donghwan; Fessler, Jeffrey A.
2016-01-01
This paper considers the problem of unconstrained minimization of smooth convex functions having Lipschitz continuous gradients with known Lipschitz constant. We recently proposed the optimized gradient method for this problem and showed that it has a worst-case convergence bound for the cost function decrease that is twice as small as that of Nesterov’s fast gradient method, yet has a similarly efficient practical implementation. Drori showed recently that the optimized gradient method has optimal complexity for the cost function decrease over the general class of first-order methods. This optimality makes it important to study fully the convergence properties of the optimized gradient method. The previous worst-case convergence bound for the optimized gradient method was derived for only the last iterate of a secondary sequence. This paper provides an analytic convergence bound for the primary sequence generated by the optimized gradient method. We then discuss additional convergence properties of the optimized gradient method, including the interesting fact that the optimized gradient method has two types of worst-case functions: a piecewise affine-quadratic function and a quadratic function. These results help complete the theory of an optimal first-order method for smooth convex minimization. PMID:28461707
NASA Technical Reports Server (NTRS)
Polzien, R. E.; Rodriguez, D.
1981-01-01
Aspects of incorporating a thermal energy transport system (ETS) into a field of parabolic dish collectors for industrial process heat (IPH) applications were investigated. The specific objectives were to: (1) verify the mathematical optimization of pipe diameters and insulation thicknesses calculated by a computer code; (2) verify the cost model for pipe network costs using conventional pipe network construction; (3) develop a design and the associated production costs for incorporating risers and downcomers on a low cost concentrator (LCC); and (4) investigate the cost reduction available from unconventional pipe construction technology. The pipe network design and costs for a particular IPH application, specifically solar thermally enhanced oil recovery (STEOR), are analyzed. The application involves the hybrid operation of a solar powered steam generator in conjunction with a fossil-fueled steam generator to produce STEOR steam for wells. It is concluded that the STEOR application provides a baseline pipe network geometry for optimization studies of pipe diameter and insulation thickness, for the development of comparative cost data, and for operating parameters guiding the design of riser/downcomer modifications to the low cost concentrator.
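A toy version of the underlying trade-off can be written in a few lines: capital cost grows with pipe diameter and insulation thickness, pumping cost falls steeply with diameter, and heat-loss cost falls with insulation. All coefficients below are invented for illustration; the study's computer code optimizes a far more detailed model.

```python
import numpy as np

# Joint optimization of pipe diameter D and insulation thickness t per metre
# of pipe. Cost coefficients are illustrative assumptions only.
D = np.linspace(0.05, 0.30, 60)[:, None]   # pipe diameter, m (column)
t = np.linspace(0.01, 0.15, 60)[None, :]   # insulation thickness, m (row)

capital = 200 * D + 400 * t                # $/m for pipe and insulation
pumping = 0.5e-4 / D**5                    # friction loss grows ~ 1/D^5
heat    = 12 * D / t                       # conductive loss ~ area / thickness

total = capital + pumping + heat
i, j = np.unravel_index(np.argmin(total), total.shape)
print(f"optimal diameter {D[i, 0] * 100:.0f} cm, insulation {t[0, j] * 100:.0f} cm, "
      f"cost ${total[i, j]:.0f}/m")
```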
Optimal estimation and scheduling in aquifer management using the rapid feedback control method
NASA Astrophysics Data System (ADS)
Ghorbanidehno, Hojat; Kokkinaki, Amalia; Kitanidis, Peter K.; Darve, Eric
2017-12-01
Management of water resources systems often involves a large number of parameters, as in the case of large, spatially heterogeneous aquifers, and a large number of "noisy" observations, as in the case of pressure observations in wells. Optimizing the operation of such systems requires both searching among many possible solutions and utilizing new information as it becomes available. However, the computational cost of this task increases rapidly with the size of the problem, to the extent that textbook optimization methods are practically impossible to apply. In this paper, we present a new computationally efficient technique as a practical alternative for optimally operating large-scale dynamical systems. The proposed method, which we term the Rapid Feedback Controller (RFC), provides a practical approach for combined monitoring, parameter estimation, uncertainty quantification, and optimal control for linear and nonlinear systems with a quadratic cost function. For illustration, we consider the case of a weakly nonlinear uncertain dynamical system with a quadratic objective function, specifically a two-dimensional heterogeneous aquifer management problem. To validate our method, we compare our results with the linear quadratic Gaussian (LQG) method, which is the basic approach for feedback control. We show that the computational cost of the RFC scales only linearly with the number of unknowns, a great improvement over basic LQG control, whose computational cost scales quadratically. We demonstrate that the RFC method can obtain the optimal control values at a greatly reduced computational cost compared to the conventional LQG algorithm, with small and controllable losses in the accuracy of the state and parameter estimation.
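For orientation, here is the standard discrete-time LQG baseline that the RFC is compared against: a steady-state Kalman filter combined with an LQR gain. The system matrices and noise covariances are made up; the paper's contribution is a controller that scales to problems far larger than this loop could handle.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Generic discrete-time LQG loop (the comparison point, not the RFC itself).
A = np.array([[1.0, 0.1], [0.0, 0.95]])
B = np.array([[0.0], [0.1]])
C = np.array([[1.0, 0.0]])
Qx, Ru = np.eye(2), np.array([[0.1]])          # state/control weights
Qw, Rv = 0.01 * np.eye(2), np.array([[0.05]])  # process/measurement noise covariances

P = solve_discrete_are(A, B, Qx, Ru)                       # LQR Riccati solution
K = np.linalg.solve(Ru + B.T @ P @ B, B.T @ P @ A)         # feedback gain
S = solve_discrete_are(A.T, C.T, Qw, Rv)                   # a priori error covariance
Lk = S @ C.T @ np.linalg.inv(C @ S @ C.T + Rv)             # steady-state Kalman gain

rng = np.random.default_rng(2)
x, xhat = np.array([1.0, 0.0]), np.zeros(2)
for t in range(50):
    y = C @ x + rng.multivariate_normal([0.0], Rv)         # noisy measurement
    xhat = xhat + Lk @ (y - C @ xhat)                      # measurement update
    u = -K @ xhat                                          # certainty-equivalent control
    x = A @ x + B @ u + rng.multivariate_normal([0.0, 0.0], Qw)
    xhat = A @ xhat + B @ u                                # time update (prediction)

print("final state estimate:", xhat)
```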
DOE Office of Scientific and Technical Information (OSTI.GOV)
Canavan, G.H.
Optimizations of missile allocation based on linearized exchange equations produce accurate allocations, but the limits of validity of the linearization are not known. These limits are explored in the context of the upload of weapons by one side to initially small, equal forces of vulnerable and survivable weapons. The analysis compares analytic and numerical optimizations and stability indices based on aggregated interactions of the two missile forces, the first and second strikes they could deliver, and the resulting costs. This note discusses the costs and stability indices induced by unilateral uploading of weapons to an initially symmetrical low force configuration. These limits are quantified for forces with a few hundred missiles by comparing analytic and numerical optimizations of first strike costs. For forces of 100 vulnerable and 100 survivable missiles on each side, the analytic optimization agrees closely with the numerical solution. For 200 vulnerable and 200 survivable missiles on each side, the analytic optimization agrees with the indices to within about 10%, but disagrees with the allocation of the side with more weapons by about 50%. The disagreement comes from the interaction between the possession of more weapons and the shift of allocation from missiles to value that it induces.
Medical therapy v. PCI in stable coronary artery disease: a cost-effectiveness analysis.
Wijeysundera, Harindra C; Tomlinson, George; Ko, Dennis T; Dzavik, Vladimir; Krahn, Murray D
2013-10-01
Percutaneous coronary intervention (PCI) with either drug-eluting stents (DES) or bare metal stents (BMS) reduces angina and repeat procedures compared with optimal medical therapy alone. It remains unclear if these benefits are sufficient to offset their increased costs and small increase in adverse events. Objective. Cost utility analysis of initial medical therapy v. PCI with either BMS or DES. Design. Markov cohort decision model. Data Sources. Propensity-matched observational data from Ontario, Canada, for baseline event rates. Effectiveness and utility data obtained from the published literature, with costs from the Ontario Case Costing Initiative. Patients. Patients with stable coronary artery disease, confirmed after angiography, stratified by risk of restenosis based on diabetic status, lesion size, and lesion length. Time Horizon. Lifetime. Perspective. Ontario Ministry of Health and Long Term Care. Interventions. Optimal medical therapy, PCI with BMS or DES. Outcome Measures. Lifetime costs, quality-adjusted life years (QALYs), and the incremental cost-effectiveness ratio (ICER). Results of Base Case Analysis. In the overall population, medical therapy had the lowest lifetime costs at $22,952 v. $25,081 and $25,536 for BMS and DES, respectively. Medical therapy had a quality-adjusted life expectancy of 10.1 v. 10.26 QALYs for BMS, producing an ICER of $13,271/QALY. The DES strategy had a quality-adjusted life expectancy of only 10.20 QALYs and was dominated by the BMS strategy. This ranking was consistent in all groups stratified by restenosis risk, except diabetic patients with long lesions in small arteries, in whom DES was cost-effective compared with medical therapy (ICER of $18,826/QALY). Limitations. There is the possibility of residual unobserved confounding. Conclusions. In patients with stable coronary artery disease, an initial BMS strategy is cost-effective.
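The headline numbers can be reproduced with one line of arithmetic; the small gap to the reported $13,271/QALY reflects rounding of the published costs and QALYs.

```python
# Worked ICER arithmetic from the figures quoted above: difference in lifetime
# cost divided by difference in QALYs.
cost_med, cost_bms, cost_des = 22952, 25081, 25536
qaly_med, qaly_bms, qaly_des = 10.10, 10.26, 10.20

icer_bms = (cost_bms - cost_med) / (qaly_bms - qaly_med)
print(f"BMS vs medical therapy: ${icer_bms:,.0f}/QALY")   # ~ $13,306/QALY

# DES is "dominated": it costs more than BMS yet yields fewer QALYs.
assert cost_des > cost_bms and qaly_des < qaly_bms
```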
Longin, C Friedrich H; Utz, H Friedrich; Reif, Jochen C; Schipprack, Wolfgang; Melchinger, Albrecht E
2006-03-01
Optimum allocation of resources is of fundamental importance for the efficiency of breeding programs. The objectives of our study were to (1) determine the optimum allocation for the number of lines and test locations in hybrid maize breeding with doubled haploids (DHs) regarding two optimization criteria, the selection gain deltaG(k) and the probability P(k) of identifying superior genotypes, (2) compare both optimization criteria including their standard deviations (SDs), and (3) investigate the influence of production costs of DHs on the optimum allocation. For different budgets, number of finally selected lines, ratios of variance components, and production costs of DHs, the optimum allocation of test resources under one- and two-stage selection for testcross performance with a given tester was determined by using Monte Carlo simulations. In one-stage selection, lines are tested in field trials in a single year. In two-stage selection, optimum allocation of resources involves evaluation of (1) a large number of lines in a small number of test locations in the first year and (2) a small number of the selected superior lines in a large number of test locations in the second year, thereby maximizing both optimization criteria. Furthermore, to have a realistic chance of identifying a superior genotype, the probability P(k) of identifying superior genotypes should be greater than 75%. For budgets between 200 and 5,000 field plot equivalents, P(k) > 75% was reached only for genotypes belonging to the best 5% of the population. As the optimum allocation for P(k)(5%) was similar to that for deltaG(k), the choice of the optimization criterion was not crucial. The production costs of DHs had only a minor effect on the optimum number of locations and on values of the optimization criteria.
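A stripped-down Monte Carlo version of the two-stage allocation question looks like the sketch below: simulate genetic and plot-error variation, select across two stages under a fixed plot budget, and estimate the probability of catching a top-5% genotype. The variance ratio and the 200/2 versus 20/10 allocation are illustrative assumptions, not the study's settings.

```python
import numpy as np

# Monte Carlo sketch of two-stage testcross selection under a fixed plot budget.
rng = np.random.default_rng(3)
sigma_g2, sigma_e2 = 1.0, 4.0        # genetic and plot-error variances (assumed)
n_lines, loc1 = 200, 2               # stage 1: many lines, few locations
n_keep, loc2 = 20, 10                # stage 2: few lines, many locations
reps, final_k = 2000, 5

hits = 0
for _ in range(reps):
    g = rng.normal(0, np.sqrt(sigma_g2), n_lines)
    top5 = g >= np.quantile(g, 0.95)                       # "superior" genotypes
    p1 = g + rng.normal(0, np.sqrt(sigma_e2 / loc1), n_lines)
    stage2 = np.argsort(p1)[-n_keep:]                      # keep best on stage-1 means
    p2 = g[stage2] + rng.normal(0, np.sqrt(sigma_e2 / loc2), n_keep)
    chosen = stage2[np.argsort(p2)[-final_k:]]
    hits += top5[chosen].any()                             # found a top-5% genotype?

print(f"P(k) ≈ {hits / reps:.2f} with budget {n_lines * loc1 + n_keep * loc2} plots")
```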
HOMER® Micropower Optimization Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lilienthal, P.
2005-01-01
NREL has developed the HOMER micropower optimization model. The model can analyze all of the available small power technologies individually and in hybrid configurations to identify least-cost solutions to energy requirements. This capability is valuable to a diverse set of energy professionals and applications. NREL has actively supported its growing user base and developed training programs around the model. These activities are helping to grow the global market for solar technologies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ross, R.S.
1989-06-01
For a vehicle operating across arbitrarily-contoured terrain, finding the most fuel-efficient route between two points can be viewed as a high-level global path-planning problem with traversal costs and stability dependent on the direction of travel (anisotropic). The problem assumes a two-dimensional polygonal map of homogeneous cost regions for terrain representation constructed from elevation information. The anisotropic energy cost of vehicle motion has a non-braking component dependent on horizontal distance, a braking component dependent on vertical distance, and a constant path-independent component. The behavior of minimum-energy paths is then proved to be restricted to a small but optimal set of traversal types. An optimal-path-planning algorithm, using a heuristic search technique, reduces the infinite number of paths between the start and goal points to a finite number by generating sequences of goal-feasible window lists from analyzing the polygonal map and applying pruning criteria. The pruning criteria consist of visibility analysis, heading analysis, and region-boundary constraints. Each goal-feasible window list specifies an associated convex optimization problem, and the best of all locally-optimal paths through the goal-feasible window lists is the globally-optimal path. These ideas have been implemented in a computer program, with results showing considerably better performance than the predicted exponential average-case behavior.
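A grid-based toy version of anisotropic energy-optimal search is sketched below: the edge cost depends on travel direction through separate climbing and braking components, and A* with an admissible heuristic finds the minimum-energy route. The terrain and cost coefficients are invented; the study works with polygonal homogeneous-cost regions rather than a grid.

```python
import heapq

# Direction-dependent traversal costs on a small elevation grid.
elev = [[0, 1, 2, 3],
        [0, 1, 2, 3],
        [0, 0, 1, 2],
        [0, 0, 0, 1]]                  # made-up elevation map

def edge_cost(a, b):
    dz = elev[b[0]][b[1]] - elev[a[0]][a[1]]
    # rolling (non-braking) + climbing + braking components, all assumed values
    return 1.0 + 2.0 * max(dz, 0) + 0.5 * max(-dz, 0)

def astar(start, goal):
    h = lambda n: abs(goal[0] - n[0]) + abs(goal[1] - n[1])   # admissible: min edge = 1
    frontier = [(h(start), 0.0, start, [start])]
    best_g = {start: 0.0}
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        r, c = node
        for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nb[0] < 4 and 0 <= nb[1] < 4:
                g2 = g + edge_cost(node, nb)
                if g2 < best_g.get(nb, float("inf")):
                    best_g[nb] = g2
                    heapq.heappush(frontier, (g2 + h(nb), g2, nb, path + [nb]))
    return None

print(astar((3, 0), (0, 3)))           # minimum-energy route across the slope
```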
NASA Astrophysics Data System (ADS)
Shoemaker, Christine; Wan, Ying
2016-04-01
Optimization of nonlinear water resources management problems that have a mixture of fixed (e.g. construction cost for a well) and variable (e.g. cost per gallon of water pumped) costs has not been well addressed, because prior algorithms for the resulting nonlinear mixed integer problems have required many groundwater simulations (with different configurations of the decision variables), especially when the solution space is multimodal. In particular, heuristic methods like genetic algorithms have often been used in the water resources area, but they require so many groundwater simulations that only small systems have been solved. Hence there is a need for a method that reduces the number of expensive groundwater simulations. A recently published algorithm for nonlinear mixed integer programming using surrogates was shown in this study to greatly reduce the computational effort for obtaining accurate answers to problems involving fixed costs for well construction as well as variable costs for pumping, because of a substantial reduction in the number of groundwater simulations required. Results are presented for a US EPA hazardous waste site. The nonlinear mixed integer surrogate algorithm is general and can be used on other problems arising in hydrology, with open source codes in Matlab and Python ("pySOT" on Bitbucket).
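The sketch below illustrates the surrogate idea in miniature: a stub "simulator" with fixed well-construction and variable pumping costs, a handful of expensive samples per integer well pattern, and a cheap quadratic surrogate whose minimizer is then verified on the simulator. The real study uses RBF surrogates via pySOT; every number here is an illustrative assumption.

```python
import numpy as np

def expensive_sim(build, rate):                 # stand-in groundwater simulator
    fixed = 50.0 * sum(build)                   # fixed construction costs
    capture = sum(b * r for b, r in zip(build, rate))
    return fixed + 2.0 * sum(rate) + 500.0 / (1.0 + capture)

configs = [(0, 1), (1, 0), (1, 1)]              # candidate well patterns (binary)
grid = np.linspace(0.0, 10.0, 200)              # candidate shared pumping rates
best = (np.inf, None)
for build in configs:
    # 5 expensive samples per pattern, then a cheap quadratic surrogate fit
    samples = np.linspace(0.0, 10.0, 5)
    costs = [expensive_sim(build, (s, s)) for s in samples]
    coef = np.polyfit(samples, costs, 2)
    r_star = grid[np.argmin(np.polyval(coef, grid))]       # surrogate minimizer
    true_cost = expensive_sim(build, (r_star, r_star))     # verify on the simulator
    if true_cost < best[0]:
        best = (true_cost, (build, r_star))

print("best configuration and rate:", best[1], "cost:", round(best[0], 1))
```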
The latest developments and outlook for hydrogen liquefaction technology
NASA Astrophysics Data System (ADS)
Ohlig, K.; Decker, L.
2014-01-01
Liquefied hydrogen is presently used mainly for space applications and the semiconductor industry. While clean energy applications, e.g. in the automotive sector, currently contribute only a small share of this demand, their demand may see a significant boost in the coming years, with the need for large scale liquefaction plants exceeding current plant sizes by far. Hydrogen liquefaction for small scale plants with a maximum capacity of 3 tons per day (tpd) is accomplished with a Brayton refrigeration cycle using helium as refrigerant. This technology is characterized by low investment costs but lower process efficiency and hence higher operating costs. For larger plants, a hydrogen Claude cycle is used, characterized by higher investment but lower operating costs. However, liquefaction plants meeting the potentially high demand in the clean energy sector will need further optimization with regard to energy efficiency and hence operating costs. The present paper gives an overview of the currently applied technologies, including their thermodynamic and technical background. Areas of improvement are identified to derive process concepts for future large scale hydrogen liquefaction plants meeting the needs of clean energy applications with optimized energy efficiency and hence minimized operating costs. Compared to other studies in this field, this paper focuses on the application of new technology and innovative concepts which are either readily available or will require only short qualification procedures, and which will hence allow implementation in plants in the near future.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pardon, D.V.; Faeth, M.T.; Curth, O.
1981-01-01
At International Marine Terminals' Plaquemines Parish Terminal, design optimization was accomplished by optimizing the dock pile bent spacing and designing the superstructure to distribute berthing impact forces and bollard pulls over a large number of pile bents. Also, by resisting all longitudinal forces acting on the dock at a single location near the center of the structure, the number of longitudinal batter piles was minimized and the need for costly expansion joints was eliminated. Computer techniques were utilized to analyze and optimize the design of the new dock. Pile driving procedures were evaluated utilizing a wave equation technique. Tripod dolphins with a resilient fender system were provided. The resilient fender system, a combination of rubber shear type and wing type fenders, adds only a small percentage to the total cost of the dolphins but greatly increases their energy absorption capability.
Watershed Management Optimization Support Tool (WMOST) ...
EPA's Watershed Management Optimization Support Tool (WMOST) version 2 is a decision support tool designed to facilitate integrated water management by communities at the small watershed scale. WMOST allows users to look across management options in stormwater (including green infrastructure), wastewater, drinking water, and land conservation programs to find the least cost solutions. The pdf version of these presentations accompanies the recorded webinar with closed captions, to be posted on the WMOST web page. The webinar was recorded at the time a training workshop took place for EPA's Watershed Management Optimization Support Tool (WMOST, v2).
Modeling energetic and theoretical costs of thermoregulatory strategy.
Alford, John G; Lutterschmidt, William I
2012-01-01
Poikilothermic ectotherms have evolved behaviours that help them maintain or regulate their body temperature (T(b)) around a preferred or 'set point' temperature (T(set)). Thermoregulatory behaviours may range from body positioning to optimize heat gain to shuttling among preferred microhabitats to find appropriate environmental temperatures. We have modelled movement patterns between an active and non-active shuttling behaviour within a habitat (as a biased random walk) to investigate the potential cost of two thermoregulatory strategies. Generally, small-bodied ectotherms actively thermoregulate while large-bodied ectotherms may passively thermoconform to their environment. We were interested in the potential energetic cost for a large-bodied ectotherm if it were forced to actively thermoregulate rather than thermoconform. We therefore modelled movements and the resulting comparative energetic costs of precisely maintaining a T(set) for a small-bodied versus large-bodied ectotherm to study and evaluate the thermoregulatory strategy.
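A minimal biased-random-walk version of the shuttling model can be written directly: the walk's bias follows the sign of T(b) − T(set), and each shuttle costs locomotion energy that scales with body mass. All rate constants are invented; the point is only that the same strategy prices out differently for small and large bodies.

```python
import numpy as np

rng = np.random.default_rng(5)

def shuttling_cost(mass, steps=2000):
    t_env = {"cool": 25.0, "warm": 35.0}      # two microhabitats (assumed temps)
    t_set, t_b, patch, cost = 30.0, 30.0, "warm", 0.0
    for _ in range(steps):
        # thermal inertia: larger bodies equilibrate more slowly
        t_b += (t_env[patch] - t_b) * 0.2 / mass**0.25
        want = "cool" if t_b > t_set else "warm"      # bias of the random walk
        p_move = 0.9 if want != patch else 0.1        # mostly follow the bias
        if rng.random() < p_move:
            patch = "warm" if patch == "cool" else "cool"
            cost += 0.05 * mass**0.75                 # locomotion cost per shuttle
    return cost

for m in (0.1, 10.0):                                 # small vs large ectotherm
    print(f"mass {m:5.1f} kg: energetic cost ≈ {shuttling_cost(m):.1f}")
```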
Pigovian taxes which work in the small-number case
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wittman, D.
1985-06-01
An appropriately conceived pollution tax can achieve a Pareto optimal equilibrium which is (1) stable in the presence of myopia, (2) not subject to strategic manipulation even in the small-number case, and (3) resistant to inefficient cost shifting by the participants when transaction costs are low. A considerable amount of confusion exists in the literature because different authors use different tax formulas (often implicitly) and different assumptions regarding conjectural behavior. Some of this confusion is cleared up by formally presenting various Pigovian tax formulas, explicitly considering whether there is Cournot or Stackelberg behavior, and comparing the properties of the various configurations. The author argues that charging for mitigated marginal cost rather than for actual damage avoids many pitfalls typically associated with Pigovian taxes. 15 references, 1 table.
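The core mechanism is easy to exhibit with quadratic benefit and damage curves: the Pareto-optimal emission level equates marginal benefit and marginal damage, and a tax set to marginal damage at that point makes the polluter's private optimum coincide with it. The functional forms are illustrative, not the article's.

```python
import numpy as np

# Benefit B(e) = 10e - e^2, damage D(e) = e^2 (illustrative quadratics).
e = np.linspace(0, 10, 1001)
B = 10 * e - e**2                 # polluter's private benefit
D = e**2                          # external damage

e_star = e[np.argmax(B - D)]      # social optimum: maximize net benefit
tax = 2 * e_star                  # Pigovian tax = marginal damage D'(e*) = 2e*
e_private = e[np.argmax(B - tax * e)]   # polluter's choice facing the tax

print(f"social optimum e* = {e_star:.2f}, tax = {tax:.2f}, "
      f"private choice under tax = {e_private:.2f}")
```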
Liu, Lu; Masfary, Osama; Antonopoulos, Nick
2012-01-01
The increasing trends of electrical consumption within data centres are a growing concern for business owners as they are quickly becoming a large fraction of the total cost of ownership. Ultra small sensors could be deployed within a data centre to monitor environmental factors, lower the electrical costs and improve the energy efficiency. Since servers and air conditioners represent the top users of electrical power in the data centre, this research sets out to explore methods from each subsystem of the data centre as part of an overall energy efficient solution. In this paper, we investigate the current trends of Green IT awareness and how the deployment of small environmental sensors and Site Infrastructure equipment optimization techniques can offer a solution to a global issue by reducing carbon emissions.
Surrogate based wind farm layout optimization using manifold mapping
NASA Astrophysics Data System (ADS)
Kaja Kamaludeen, Shaafi M.; van Zuijle, Alexander; Bijl, Hester
2016-09-01
High computational cost associated with high fidelity wake models such as RANS or LES is the primary bottleneck to performing direct high fidelity wind farm layout optimization (WFLO) using accurate CFD based wake models. Therefore, a surrogate based multi-fidelity WFLO methodology (SWFLO) is proposed. The surrogate model is built using an SBO method referred to as manifold mapping (MM). As a verification, optimization of the spacing between two staggered wind turbines was performed using the proposed surrogate based methodology, and the performance was compared with that of direct optimization using the high fidelity model. Significant reduction in computational cost was achieved using MM: a maximum computational cost reduction of 65%, while arriving at the same optimum as direct high fidelity optimization. The similarity between the responses of the models, and the number and position of the mapping points, strongly influence the computational efficiency of the proposed method. As a proof of concept, realistic WFLO of a small 7-turbine wind farm is performed using the proposed surrogate based methodology. Two variants of the Jensen wake model with different decay coefficients were used as the fine and coarse models. The proposed SWFLO method arrived at the same optimum as the fine model with very few fine model simulations.
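A simplified, first-order version of the mapping idea is sketched below: a "fine" and a "coarse" two-turbine Jensen wake model differ only in decay coefficient, and each iteration aligns the cheap model's value and slope with the fine model at the current spacing before re-optimizing the corrected surrogate. The cable-cost term and all coefficients are illustrative assumptions, and this is a simplification of manifold mapping rather than the full MM algorithm.

```python
import numpy as np

CT, D, c_len = 0.8, 80.0, 4e-4                 # thrust coeff., rotor dia., cost per m

def objective(s, k):                           # farm power minus spacing cost
    deficit = (1 - np.sqrt(1 - CT)) / (1 + 2 * k * s / D) ** 2
    return 1.0 + (1.0 - deficit) ** 3 - c_len * s   # upstream + waked turbine (P ~ u^3)

fine = lambda s: objective(s, 0.04)            # stand-in for the high fidelity model
coarse = lambda s: objective(s, 0.075)         # cheap model with biased wake decay
grid = np.linspace(2 * D, 12 * D, 600)

s_k, fine_calls = 4 * D, 0
for _ in range(8):
    h = 1.0                                    # three fine calls per iteration
    f0, fp, fm = fine(s_k), fine(s_k + h), fine(s_k - h); fine_calls += 3
    slope_gap = (fp - fm) / (2 * h) - (coarse(s_k + h) - coarse(s_k - h)) / (2 * h)
    surrogate = coarse(grid) + (f0 - coarse(s_k)) + slope_gap * (grid - s_k)
    s_k = grid[np.argmax(surrogate)]           # optimize the corrected coarse model

print(f"optimal spacing ≈ {s_k / D:.1f}D using {fine_calls} fine-model evaluations")
```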
Wagner, Bridget K.; Clemons, Paul A.
2009-01-01
Discovering small-molecule modulators for thousands of gene products requires multiple stages of biological testing, specificity evaluation, and chemical optimization. Many cellular profiling methods, including cellular sensitivity, gene-expression, and cellular imaging, have emerged as methods to assess the functional consequences of biological perturbations. Cellular profiling methods applied to small-molecule science provide opportunities to use complex phenotypic information to prioritize and optimize small-molecule structures simultaneously against multiple biological endpoints. As throughput increases and cost decreases for such technologies, we see an emerging paradigm of using more information earlier in probe- and drug-discovery efforts. Moreover, increasing access to public datasets makes possible the construction of “virtual” profiles of small-molecule performance, even when multiplexed measurements were not performed or when multidimensional profiling was not the original intent. We review some key conceptual advances in small-molecule phenotypic profiling, emphasizing connections to other information, such as protein-binding measurements, genetic perturbations, and cell states. We argue that to maximally leverage these measurements in probe and drug discovery requires a fundamental connection to synthetic chemistry, allowing the consequences of synthetic decisions to be described in terms of changes in small-molecule profiles. Mining such data in the context of chemical structure and synthesis strategies can inform decisions about chemistry procurement and library development, leading to optimal small-molecule screening collections. PMID:19825513
Hiwasa-Tanase, Kyoko; Ezura, Hiroshi
2016-01-01
Crop cultivation in controlled environment plant factories offers great potential to stabilize the yield and quality of agricultural products. However, many crops are currently unsuited to these environments, particularly closed cultivation systems, due to space limitations, low light intensity, high implementation costs, and high energy requirements. A major barrier to closed system cultivation is the high running cost, which necessitates the use of high-margin crops for economic viability. High-value crops include those with enhanced nutritional value or containing additional functional components for pharmaceutical production or with the aim of providing health benefits. In addition, it is important to develop cultivars equipped with growth parameters that are suitable for closed cultivation. Small plant size is of particular importance due to the limited cultivation space. Other advantageous traits are short production cycle, the ability to grow under low light, and high nutriculture availability. Cost-effectiveness is improved from the use of cultivars that are specifically optimized for closed system cultivation. This review describes the features of closed cultivation systems and the potential application of molecular breeding to create crops that are optimized for cost-effectiveness and productivity in closed cultivation systems.
Hiwasa-Tanase, Kyoko; Ezura, Hiroshi
2016-01-01
Crop cultivation in controlled environment plant factories offers great potential to stabilize the yield and quality of agricultural products. However, many crops are currently unsuited to these environments, particularly closed cultivation systems, due to space limitations, low light intensity, high implementation costs, and high energy requirements. A major barrier to closed system cultivation is the high running cost, which necessitates the use of high-margin crops for economic viability. High-value crops include those with enhanced nutritional value or containing additional functional components for pharmaceutical production or with the aim of providing health benefits. In addition, it is important to develop cultivars equipped with growth parameters that are suitable for closed cultivation. Small plant size is of particular importance due to the limited cultivation space. Other advantageous traits are short production cycle, the ability to grow under low light, and high nutriculture availability. Cost-effectiveness is improved from the use of cultivars that are specifically optimized for closed system cultivation. This review describes the features of closed cultivation systems and the potential application of molecular breeding to create crops that are optimized for cost-effectiveness and productivity in closed cultivation systems. PMID:27200016
Optimal physiological structure of small neurons to guarantee stable information processing
NASA Astrophysics Data System (ADS)
Zeng, S. Y.; Zhang, Z. Z.; Wei, D. Q.; Luo, X. S.; Tang, W. Y.; Zeng, S. W.; Wang, R. F.
2013-02-01
The spike is the basic element of neuronal information processing, and the spontaneous spiking frequency should be less than 1 Hz for stable information processing. If the neuronal membrane area is small, the frequency of spontaneous spiking caused by ion channel noise may be high. Therefore, it is important to suppress the deleterious spontaneous spiking of small neurons. We find by simulation of stochastic neurons with Hodgkin-Huxley-type channels that the leakage system is critical and extremely efficient for suppressing spontaneous spiking and guaranteeing stable information processing in small neurons, whereas within the physiological limit the potassium system cannot do so. The suppression effect of the leakage system is super-exponential, but that of the potassium system is quasi-linear. With minor physiological cost and minimal consumption of metabolic energy, a slightly lower reversal potential and a relatively larger conductance of the leakage system give the optimal physiological structure to suppress deleterious spontaneous spiking and guarantee stable information processing of small neurons, dendrites and axons.
Energetic constraints, size gradients, and size limits in benthic marine invertebrates.
Sebens, Kenneth P
2002-08-01
Populations of marine benthic organisms occupy habitats with a range of physical and biological characteristics. In the intertidal zone, energetic costs increase with temperature and aerial exposure, and prey intake increases with immersion time, generating size gradients with small individuals often found at upper limits of distribution. Wave action can have similar effects, limiting feeding time or success, although certain species benefit from wave dislodgment of their prey; this also results in gradients of size and morphology. The difference between energy intake and metabolic (and/or behavioral) costs can be used to determine an energetic optimal size for individuals in such populations. Comparisons of the energetic optimal size to the maximum predicted size based on mechanical constraints, and the ensuing mortality schedule, provides a mechanism to study and explain organism size gradients in intertidal and subtidal habitats. For species where the energetic optimal size is well below the maximum size that could persist under a certain set of wave/flow conditions, it is probable that energetic constraints dominate. When the opposite is true, populations of small individuals can dominate habitats with strong dislodgment or damage probability. When the maximum size of individuals is far below either energetic optima or mechanical limits, other sources of mortality (e.g., predation) may favor energy allocation to early reproduction rather than to continued growth. Predictions based on optimal size models have been tested for a variety of intertidal and subtidal invertebrates including sea anemones, corals, and octocorals. This paper provides a review of the optimal size concept, and employs a combination of the optimal energetic size model and life history modeling approach to explore energy allocation to growth or reproduction as the optimal size is approached.
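The energetic-optimum argument reduces to a one-line calculation if intake scales with feeding surface (roughly L²) and costs with body volume (roughly L³): net energy aL² − bL³ peaks at L = 2a/(3b), and comparing that optimum with a mechanical dislodgment limit indicates which constraint governs. The coefficients below are illustrative.

```python
import numpy as np

a, b, L_max = 3.0, 0.5, 2.5           # intake coeff., cost coeff., wave-imposed limit

L = np.linspace(0.01, 6, 1000)
net = a * L**2 - b * L**3             # net energy as a function of body size
L_energetic = L[np.argmax(net)]       # analytic optimum: 2a/(3b) = 4.0

if L_energetic < L_max:
    print(f"energetics rule: optimum {L_energetic:.2f} < mechanical limit {L_max}")
else:
    print(f"dislodgment rules: optimum {L_energetic:.2f} is capped at {L_max}")
```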
Messiah College Biodiesel Fuel Generation Project Final Technical Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zummo, Michael M; Munson, J; Derr, A
Many obvious and significant concerns arise when considering the concept of small-scale biodiesel production. Does the fuel produced meet the stringent requirements set by the commercial biodiesel industry? Is the process safe? How are small-scale producers collecting and transporting waste vegetable oil? How is waste from the biodiesel production process handled by small-scale producers? These concerns and many others were the focus of the research performed in the Messiah College Biodiesel Fuel Generation project over the last three years. This project was a unique research program in which undergraduate engineering students at Messiah College set out to research the feasibility of small-scale biodiesel production for application on a campus of approximately 3000 students. This Department of Energy (DOE) funded research program developed out of almost a decade of small-scale biodiesel research and development work performed by students at Messiah College. Over the course of the last three years the research team focused on four key areas related to small-scale biodiesel production: Quality Testing and Assurance, Process and Processor Research, Process and Processor Development, and Community Education. The objectives for the Messiah College Biodiesel Fuel Generation Project included the following: 1. Preparing a laboratory facility for the development and optimization of processors and processes, ASTM quality assurance, and performance testing of biodiesel fuels. 2. Developing scalable processor and process designs suitable for ASTM certifiable small-scale biodiesel production, with the goals of cost reduction and increased quality. 3. Conducting research into biodiesel process improvement and cost optimization using various biodiesel feedstocks and production ingredients.
Application of ATAD technology for digesting sewage sludge in small towns: Operation and costs.
Martín, M A; Gutiérrez, M C; Dios, M; Siles, J A; Chica, A F
2018-06-01
In an economic context marked by increasing energy costs and stricter legislation regarding the landfill disposal of wastewater treatment plant (WWTP) sewage sludge, and where biomethanization is difficult to implement in small WWTPs, an efficient alternative is required to manage this polluting waste. This study shows that autothermal thermophilic aerobic digestion (ATAD) is a feasible technique for treating sewage sludge in small- and medium-sized towns. The experiments were carried out at pilot scale on a cyclical basis and in continuous mode for nine months. The main results showed an optimal hydraulic retention time of 7 days, which led to an organic matter removal of 34%. The sanitized sludge meets the microbial quality standards for agronomic application set out in the proposed European sewage sludge directive. An economic assessment for the operation of ATAD technology was carried out, showing a treatment cost of €6.5/ton for dewatered sludge. Copyright © 2018 Elsevier Ltd. All rights reserved.
Combined control-structure optimization
NASA Technical Reports Server (NTRS)
Salama, M.; Milman, M.; Bruno, R.; Scheid, R.; Gibson, S.
1989-01-01
An approach for combined control-structure optimization keyed to enhancing early design trade-offs is outlined and illustrated by numerical examples. The approach employs a homotopic strategy and appears to be effective for generating families of designs that can be used in these early trade studies. Analytical results were obtained for classes of structure/control objectives with linear quadratic Gaussian (LQG) and linear quadratic regulator (LQR) costs. For these, the researchers demonstrated that global optima can be computed for small values of the homotopy parameter. Conditions for local optima along the homotopy path were also given. Details of two numerical examples employing the LQR control cost were given, showing variations of the optimal design variables along the homotopy path. The results of the second example suggest that introducing a second homotopy parameter relating the two parts of the control index in the LQG/LQR formulation might serve to enlarge the family of Pareto optima, but its effect on modifying the optimal structural shapes may be analogous to that of the original parameter lambda.
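The homotopy idea can be miniaturized as below: sweep a parameter lambda that blends an LQR control cost with a structural mass penalty and track the optimal stiffness along the path. The one-mass model and weights are invented and stand in for the paper's structure/control objectives.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def design_cost(stiffness, lam):
    # one-mass structure: x'' = -k x - 0.05 x' + u, in first-order form
    A = np.array([[0.0, 1.0], [-stiffness, -0.05]])
    B = np.array([[0.0], [1.0]])
    Q, R = np.eye(2), np.array([[1.0]])
    P = solve_continuous_are(A, B, Q, R)
    J_control = np.trace(P)              # LQR cost proxy (unit-covariance initial state)
    J_structure = 0.3 * stiffness        # "mass" penalty grows with stiffness
    return (1 - lam) * J_control + lam * J_structure

ks = np.linspace(0.2, 5.0, 200)
for lam in (0.0, 0.25, 0.5, 0.75):       # walk along the homotopy path
    k_opt = ks[np.argmin([design_cost(k, lam) for k in ks])]
    print(f"lambda = {lam:.2f}: optimal stiffness ≈ {k_opt:.2f}")
```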
The latest developments and outlook for hydrogen liquefaction technology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ohlig, K.; Decker, L.
2014-01-29
Liquefied hydrogen is presently used mainly for space applications and the semiconductor industry. While clean energy applications, e.g. in the automotive sector, currently contribute only a small share of this demand, their demand may see a significant boost in the coming years, with the need for large scale liquefaction plants exceeding current plant sizes by far. Hydrogen liquefaction for small scale plants with a maximum capacity of 3 tons per day (tpd) is accomplished with a Brayton refrigeration cycle using helium as refrigerant. This technology is characterized by low investment costs but lower process efficiency and hence higher operating costs. For larger plants, a hydrogen Claude cycle is used, characterized by higher investment but lower operating costs. However, liquefaction plants meeting the potentially high demand in the clean energy sector will need further optimization with regard to energy efficiency and hence operating costs. The present paper gives an overview of the currently applied technologies, including their thermodynamic and technical background. Areas of improvement are identified to derive process concepts for future large scale hydrogen liquefaction plants meeting the needs of clean energy applications with optimized energy efficiency and hence minimized operating costs. Compared to other studies in this field, this paper focuses on the application of new technology and innovative concepts which are either readily available or will require only short qualification procedures, and which will hence allow implementation in plants in the near future.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wood, E.; Wang, L.; Gonder, J.
2013-10-01
Battery electric vehicles possess great potential for decreasing lifecycle costs in medium-duty applications, a market segment currently dominated by internal combustion technology. Characterized by frequent repetition of similar routes and daily return to a central depot, medium-duty vocations are well positioned to leverage the low operating costs of battery electric vehicles. Unfortunately, the range limitation of commercially available battery electric vehicles acts as a barrier to widespread adoption. This paper describes the National Renewable Energy Laboratory's collaboration with the U.S. Department of Energy and industry partners to analyze the use of small hydrogen fuel-cell stacks to extend the range of battery electric vehicles as a means of improving utility, and presumably, increasing market adoption. This analysis employs real-world vocational data and near-term economic assumptions to (1) identify optimal component configurations for minimizing lifecycle costs, (2) benchmark economic performance relative to both battery electric and conventional powertrains, and (3) understand how the optimal design and its competitiveness change with respect to duty cycle and economic climate. It is found that small fuel-cell power units provide extended range at significantly lower capital and lifecycle costs than additional battery capacity alone. And while fuel-cell range-extended vehicles are not deemed economically competitive with conventional vehicles given present-day economic conditions, this paper identifies potential future scenarios where cost equivalency is achieved.
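The design question reduces to a small search over battery and fuel-cell sizes once a duty cycle and prices are fixed; the brute-force sketch below shows the structure. Every price, efficiency, and duty-cycle figure is an invented assumption, and the printed optimum depends entirely on them.

```python
import numpy as np
from itertools import product

# Size battery (kWh) and fuel-cell stack (kW) to cover a daily route at
# minimum lifecycle cost. All figures are illustrative assumptions.
route_kwh, route_hours, years = 120.0, 8.0, 10        # daily energy need, drive time
batt_cost, fc_cost, h2_cost = 600.0, 100.0, 4.0       # $/kWh, $/kW, $/kg
kwh_per_kg_h2 = 20.0                                  # usable electricity per kg H2

best = (np.inf, None)
for batt_kwh, fc_kw in product(range(20, 201, 10), range(0, 41, 5)):
    fc_kwh_day = fc_kw * route_hours                  # range-extender contribution
    if batt_kwh + fc_kwh_day < route_kwh:             # must cover the duty cycle
        continue
    h2_kg_day = max(route_kwh - batt_kwh, 0) / kwh_per_kg_h2
    capital = batt_cost * batt_kwh + fc_cost * fc_kw
    fuel = h2_cost * h2_kg_day * 250 * years          # 250 operating days/year
    if capital + fuel < best[0]:
        best = (capital + fuel, (batt_kwh, fc_kw))

print(f"lifecycle-optimal design: {best[1]} (battery kWh, fuel cell kW), "
      f"cost ${best[0]:,.0f}")
```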
Efficient distribution of toy products using ant colony optimization algorithm
NASA Astrophysics Data System (ADS)
Hidayat, S.; Nurpraja, C. A.
2017-12-01
CV Atham Toys (CVAT), which produces wooden toys and furniture, comprises 13 small and medium industries. CVAT always attempts to deliver customer orders on time, but delivery costs are high. This is because of inadequate infrastructure: delivery routes are long, car maintenance costs are high, and the fuel subsidy provided by the government is only temporary. This study seeks to minimize the cost of product distribution based on the shortest route, using one of five Ant Colony Optimization (ACO) algorithms to solve the Vehicle Routing Problem (VRP). This study concludes that the best of the five is the Ant Colony System (ACS) algorithm. The best route in the 1st week gave a total distance of 124.11 km at a cost of Rp 66,703.75. The 2nd week route gave a total distance of 132.27 km at a cost of Rp 71,095.13. The 3rd week best route gave a total distance of 122.70 km at a cost of Rp 65,951.25. The 4th week route gave a total distance of 132.27 km at a cost of Rp 74,083.63. Prior to this study there was no effort to calculate these figures.
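A compact ant-colony sketch for a single delivery round trip is shown below (global-best pheromone reinforcement; the full ACS used in the study adds a greedy exploitation rule and local pheromone updates). Coordinates and parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(7)
pts = rng.uniform(0, 10, (8, 2))                      # depot (index 0) + 7 customers
dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1) + np.eye(8)

tau = np.ones((8, 8))                                 # pheromone trails
alpha, beta, rho, Q = 1.0, 2.0, 0.5, 1.0

best_len, best_tour = np.inf, None
for _ in range(100):                                  # iterations
    for _ in range(10):                               # ants per iteration
        tour, unvisited = [0], set(range(1, 8))
        while unvisited:
            i, cand = tour[-1], list(unvisited)
            w = tau[i, cand] ** alpha * (1 / dist[i, cand]) ** beta
            tour.append(cand[rng.choice(len(cand), p=w / w.sum())])
            unvisited.discard(tour[-1])
        tour.append(0)                                # return to depot
        length = sum(dist[a, b] for a, b in zip(tour, tour[1:]))
        if length < best_len:
            best_len, best_tour = length, tour
    tau *= 1 - rho                                    # evaporation
    for a, b in zip(best_tour, best_tour[1:]):        # reinforce best-so-far tour
        tau[a, b] += Q / best_len

print(f"best route {best_tour}, length {best_len:.2f} km")
```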
Design optimization for cost and quality: The robust design approach
NASA Technical Reports Server (NTRS)
Unal, Resit
1990-01-01
Designing reliable, low cost, and operable space systems has become the key to future space operations. Designing high quality space systems at low cost is an economic and technological challenge to the designer. A systematic and efficient way to meet this challenge is a new method of design optimization for performance, quality, and cost called Robust Design. Robust Design consists of: making system performance insensitive to material and subsystem variation, thus allowing the use of less costly materials and components; making designs less sensitive to variations in the operating environment, thus improving reliability and reducing operating costs; and using a new structured development process so that engineering time is used most productively. The objective in Robust Design is to select the best combination of controllable design parameters so that the system is most robust to uncontrollable noise factors. The Robust Design methodology uses a mathematical tool called an orthogonal array, from design-of-experiments theory, to study a large number of decision variables with a significantly small number of experiments. It also uses a statistical measure of performance, called a signal-to-noise ratio, from electrical control theory, to evaluate the level of performance and the effect of noise factors. The purpose here is to investigate the Robust Design methodology for improving quality and cost, demonstrate its application with an example, and suggest its use as an integral part of the space system design process.
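The orthogonal-array mechanics are easy to demonstrate: an L4(2³) array covers three two-level parameters in four runs instead of eight, and a larger-is-better signal-to-noise ratio scores each run across replicated noise samples. The response function standing in for "system performance" is made up.

```python
import numpy as np

L4 = np.array([[0, 0, 0],
               [0, 1, 1],
               [1, 0, 1],
               [1, 1, 0]])                 # orthogonal array: 3 factors at 2 levels

rng = np.random.default_rng(8)

def performance(a, b, c, noise):
    # made-up response: factor c controls sensitivity to the noise factor
    return 10 + 2 * a + 1.5 * b - (1 + 2 * c) * noise

for run, (a, b, c) in enumerate(L4, 1):
    y = np.array([performance(a, b, c, n) for n in rng.normal(0, 0.5, 50)])
    sn = -10 * np.log10(np.mean(1.0 / y**2))    # larger-is-better S/N ratio
    print(f"run {run}: levels {a}{b}{c}, S/N = {sn:5.1f} dB")
```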
NASA Astrophysics Data System (ADS)
Osei, Richard
There are many problems associated with operating a data center. Some of these problems include data security, system performance, increasing infrastructure complexity, increasing storage utilization, keeping up with data growth, and increasing energy costs. Energy cost differs by location, and at most locations fluctuates over time. The rising cost of energy makes it harder for data centers to function properly and provide a good quality of service. With reduced energy cost, data centers will have longer lasting servers/equipment, higher availability of resources, better quality of service, a greener environment, and reduced service and software costs for consumers. Some of the ways that data centers have tried to reduce energy costs include dynamically switching servers on and off based on the number of users and some predefined conditions, the use of environmental monitoring sensors, and the use of dynamic voltage and frequency scaling (DVFS), which enables processors to run at different combinations of frequencies and voltages to reduce energy cost. This thesis presents another method by which energy cost at data centers could be reduced: the use of Ant Colony Optimization (ACO) on a Quadratic Assignment Problem (QAP) in assigning user requests to servers in geo-distributed data centers. In this work, front portals, which handle users' requests, were used as ants to find cost-effective ways to assign user requests to servers in heterogeneous geo-distributed data centers. The simulation results indicate that the ACO for Optimal Server Activation and Task Placement algorithm reduces energy cost for both small and large numbers of user requests in a geo-distributed data center, and its performance increases as the input data grows. In a simulation with 3 geo-distributed data centers and users' resource requests ranging from 25,000 to 25,000,000, the ACO algorithm was able to reduce energy cost by an average of $0.70 per second. The ACO for Optimal Server Activation and Task Placement algorithm has proven to work as an alternative or improvement in reducing energy cost in geo-distributed data centers.
Liu, Lu; Masfary, Osama; Antonopoulos, Nick
2012-01-01
The increasing trends of electrical consumption within data centres are a growing concern for business owners as they are quickly becoming a large fraction of the total cost of ownership. Ultra small sensors could be deployed within a data centre to monitor environmental factors, lower the electrical costs and improve the energy efficiency. Since servers and air conditioners represent the top users of electrical power in the data centre, this research sets out to explore methods from each subsystem of the data centre as part of an overall energy efficient solution. In this paper, we investigate the current trends of Green IT awareness and how the deployment of small environmental sensors and Site Infrastructure equipment optimization techniques can offer a solution to a global issue by reducing carbon emissions. PMID:22778660
NASA Astrophysics Data System (ADS)
L'Heureux, Zara E.
This thesis proposes that internal combustion piston engines can help clear the way for a transformation in the energy, chemical, and refining industries that is akin to the transition computer technology experienced with the shift from large mainframes to small personal computers and large farms of individually small, modular processing units. This thesis provides a mathematical foundation, multi-dimensional optimizations, experimental results, an engine model, and a techno-economic assessment, all working towards quantifying the value of repurposing internal combustion piston engines for new applications in modular, small-scale technologies, particularly for energy and chemical engineering systems. Many chemical engineering and power generation industries have focused on increasing individual unit sizes and centralizing production. This "bigger is better" concept makes it difficult to evolve and incorporate change. Large systems are often designed with long lifetimes, incorporate innovation slowly, and necessitate high upfront investment costs. Breaking away from this cycle is essential for promoting change, especially change happening quickly in the energy and chemical engineering industries. The ability to evolve during a system's lifetime provides a competitive advantage in a field dominated by large and often very old equipment that cannot respond to technology change. This thesis specifically highlights the value of small, mass-manufactured internal combustion piston engines retrofitted to participate in non-automotive system designs. The applications are unconventional and stem first from the observation that, when normalized by power output, internal combustion engines are one hundred times less expensive than conventional, large power plants. This cost disparity motivated a look at scaling laws to determine if scaling across both individual unit size and number of units produced would predict the two order of magnitude difference seen here. For the first time, this thesis provides a mathematical analysis of scaling with a combination of both changing individual unit size and varying the total number of units produced. Different paths to meet a particular cumulative capacity are analyzed and show that total costs are path dependent and vary as a function of the unit size and number of units produced. The path dependence identified is fairly weak, however, and for all practical applications, the underlying scaling laws seem unaffected. This analysis continues to support the interest in pursuing designs built around small, modular infrastructure. Building on the observation that internal combustion engines are an inexpensive power-producing unit, the first optimization in this thesis focuses on quantifying the value of engine capacity committing to deliver power in the day-ahead electricity and reserve markets, specifically based on pricing from the New York Independent System Operator (NYISO). An optimization was written in Python to determine, based on engine cost, fuel cost, engine wear, engine lifetime, and electricity prices, when and how much of an engine's power should be committed to a particular energy market. The optimization aimed to maximize profit for the engine and generator (engine genset) system acting as a price-taker. The result is an annual profit on the order of $30 per kilowatt. The most value in the engine genset is in its commitments to the spinning reserve market, where power is often committed but not always called on to deliver.
This analysis highlights the benefits of modularity in energy generation and provides one example where the system is so inexpensive and short-lived, that the optimization views the engine replacement cost as a consumable operating expense rather than a capital cost. Having the opportunity to incorporate incremental technological improvements in a system's infrastructure throughout its lifetime allows introduction of new technology with higher efficiencies and better designs. An alternative to traditionally large infrastructure that locks in a design and today's state-of-the-art technology for the next 50 - 70 years, is a system designed to incorporate new technology in a modular fashion. The modular engine genset system used for power generation is one example of how this works in practice. The largest single component of this thesis is modeling, designing, retrofitting, and testing a reciprocating piston engine used as a compressor. Motivated again by the low cost of an internal combustion engine, this work looks at how an engine (which is, in its conventional form, essentially a reciprocating compressor) can be cost-effectively retrofitted to perform as a small-scale gas compressor. In the laboratory, an engine compressor was built by retrofitting a one-cylinder, 79 cc engine. Various retrofitting techniques were incorporated into the system design, and the engine compressor performance was quantified in each iteration. Because the retrofitted engine is now a power consumer rather than a power-producing unit, the engine compressor is driven in the laboratory with an electric motor. Experimentally, compressed air engine exhaust (starting at elevated inlet pressures) surpassed 650 psia (about 45 bar), which makes this system very attractive for many applications in chemical engineering and refining industries. A model of the engine compressor system was written in Python and incorporates experimentally-derived parameters to quantify gas leakage, engine friction, and flow (including backflow) through valves. The model as a whole was calibrated and verified with experimental data and is used to explore engine retrofits beyond what was tested in the laboratory. Along with the experimental and modeling work, a techno-economic assessment is included to compare the engine compressor system with state-of-the-art, commercially-available compressors. Included in the financial analysis is a case study where an engine compressor system is modeled to achieve specific compression needs. The result of the assessment is that, indeed, the low engine cost, even with the necessary retrofits, provides a cost advantage over incumbent compression technologies. Lastly, this thesis provides an algorithm and case study for another application of small-scale units in energy infrastructure, specifically in energy storage. This study focuses on quantifying the value of small-scale, onsite energy storage in shaving peak power demands. This case study focuses on university-level power demands. The analysis finds that, because peak power is so costly, even small amounts of energy storage, when dispatched optimally, can provide significant cost reductions. This provides another example of the value of small-scale implementations, particularly in energy infrastructure. While the study focuses on flywheels and batteries as the energy storage medium, engine gensets could also be used to deliver power and shave peak power demands. 
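The peak-shaving dispatch described above can be captured with a simple threshold rule: discharge whenever demand exceeds a target cap, recharge below it, and value the result against the demand charge. The load profile, battery ratings, and charge are invented.

```python
import numpy as np

hours = np.arange(24)
load = 8 + 4 * np.exp(-((hours - 14) ** 2) / 8.0)     # MW, afternoon peak ~12 MW

cap, e_max, p_max, soc = 10.5, 6.0, 2.0, 6.0          # MW cap, MWh, MW, initial MWh
shaved = load.copy()
for t in hours:
    if load[t] > cap and soc > 0:                     # discharge to hold the cap
        d = min(load[t] - cap, p_max, soc)
        shaved[t] -= d; soc -= d
    elif load[t] < cap and soc < e_max:               # recharge during off-peak hours
        c = min(cap - load[t], p_max, e_max - soc)
        shaved[t] += c; soc += c

demand_charge = 15000.0                               # $/MW-month on the monthly peak
print(f"peak {load.max():.2f} -> {shaved.max():.2f} MW, "
      f"monthly saving ≈ ${demand_charge * (load.max() - shaved.max()):,.0f}")
```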
The overarching goal of this thesis is to introduce small-scale, modular infrastructure, with a particular focus on the opportunity to retrofit and repurpose inexpensive, mass-manufactured internal combustion engines in new and unconventional applications. The modeling and experimental work presented in this dissertation show very compelling results for engines incorporated into both energy generation infrastructure and chemical engineering industries via compression technologies. The low engine cost provides an opportunity to add retrofits whilst remaining cost competitive with the incumbent technology. This work supports the claim that modular infrastructure, built on the indivisible unit of an internal combustion engine, can revolutionize many industries by providing a low-cost mechanism for rapid change and promoting small-scale designs.
Vortex generator design for aircraft inlet distortion as a numerical optimization problem
NASA Technical Reports Server (NTRS)
Anderson, Bernhard H.; Levy, Ralph
1991-01-01
Aerodynamic compatibility of aircraft/inlet/engine systems is a difficult design problem for aircraft that must operate in many different flight regimes. Takeoff, subsonic cruise, supersonic cruise, transonic maneuvering, and high-altitude loiter each place different constraints on inlet design. Vortex generators, small wing-like sections mounted on the inside surfaces of the inlet duct, are used to control flow separation and engine face distortion. The design of vortex generator installations in an inlet is defined as a problem addressable by numerical optimization techniques. A performance parameter is suggested to account for both inlet distortion and total pressure loss at a series of design flight conditions. The resulting optimization problem is difficult since some of the design parameters take on integer values. If numerical procedures could be used to reduce multimillion-dollar development test programs to a small set of verification tests, numerical optimization could have a significant impact on both the cost and the elapsed time to design new aircraft.
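As a toy illustration of why the integer-valued design variables matter, the sketch below enumerates an integer vortex-generator count and a small discrete height set against a weighted multi-condition objective. The performance functions are invented placeholders, not the paper's model:

# Hedged toy model of an integer design search of this kind. The
# performance functions are invented, not the paper's inlet model.
import itertools

conditions = {"takeoff": 0.3, "cruise": 0.5, "maneuver": 0.2}  # weights

def distortion(n_vg, h_mm, cond):
    # Placeholder: more/taller generators reduce distortion, saturating.
    base = {"takeoff": 0.30, "cruise": 0.10, "maneuver": 0.40}[cond]
    return base / (1.0 + 0.05 * n_vg * h_mm)

def pressure_loss(n_vg, h_mm):
    # Placeholder: each generator adds a small parasitic loss.
    return 0.002 * n_vg * h_mm

def performance(n_vg, h_mm):
    return sum(w * (distortion(n_vg, h_mm, c) + pressure_loss(n_vg, h_mm))
               for c, w in conditions.items())

# Integer count plus a discrete height set makes this a small
# mixed-integer problem, solvable here by brute force.
best = min(itertools.product(range(0, 41), [2, 4, 6]),
           key=lambda d: performance(*d))
print(best, performance(*best))

Brute force works at this scale; the abstract's point is that gradient-based optimizers cannot step smoothly through such integer parameters.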
Castelán-Ortega, Octavio Alonso; Martínez-García, Carlos Galdino; Mould, Fergus L; Dorward, Peter; Rehman, Tahir; Rayas-Amor, Adolfo Armando
2016-06-01
This study evaluates the available on-farm resources of five case studies typified as small-scale dairy systems in central Mexico. A comprehensive mixed-integer linear programming model was developed and applied to two case studies. The optimal plan suggested the following: (1) introduction and utilization of maize silage; (2) alfalfa hay making, which added US$140/ha/cut to the total net income; (3) allocation of land to cultivated pastures and maize crop in a ratio of 27:41 rather than the current 14:69, with dairy cattle grazing 12 h/day; (4) avoidance of grazing on communal pastures, because this activity represented an opportunity cost of family labor that reduced the farm net income; and (5) that the highest farm net income was obtained when liquid milk and yogurt sales were included in the optimal plan. In the context of small-scale dairy systems of central Mexico, the optimal plan would need to be implemented gradually to enable farmers to develop the required skills and to change management strategies from reliance on forage and purchased concentrate to pasture-based and conserved-forage systems.
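For readers unfamiliar with the technique, a minimal land-allocation linear program in the same spirit can be assembled with scipy. All coefficients here are invented, and the paper's actual model is mixed-integer with a much richer activity set:

# Hedged sketch of a land-allocation LP. Net incomes, feed yields, and
# the herd requirement are invented for illustration only. scipy's
# linprog minimizes, so income is negated.
from scipy.optimize import linprog

# Decision variables: hectares of [cultivated pasture, maize for silage,
# alfalfa for hay].
income = [-600.0, -450.0, -740.0]      # negative of net income, US$/ha
res = linprog(
    c=income,
    A_ub=[[-8.0, -14.0, -10.0]],       # feed supply (t DM/ha) must cover
    b_ub=[-60.0],                      # a 60 t DM herd requirement
    A_eq=[[1.0, 1.0, 1.0]],            # fixed farm area
    b_eq=[6.0],                        # 6 ha available
    bounds=[(0, None)] * 3,
)
print(res.x, -res.fun)                 # optimal plan and net income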
Energy aware path planning in complex four dimensional environments
NASA Astrophysics Data System (ADS)
Chakrabarty, Anjan
This dissertation addresses the problem of energy-aware path planning for small autonomous vehicles. While small autonomous vehicles can perform missions that are too risky (or infeasible) for larger vehicles, the missions are limited by the amount of energy that can be carried on board the vehicle. Path planning techniques that either minimize energy consumption or exploit energy available in the environment can thus increase range and endurance. Path planning is complicated by significant spatial (and potentially temporal) variations in the environment. While the main focus is on autonomous aircraft, this research also addresses autonomous ground vehicles. Range and endurance of small unmanned aerial vehicles (UAVs) can be greatly improved by utilizing energy from the atmosphere. Wind can be exploited to minimize energy consumption of a small UAV. But wind, like any other atmospheric component, is a space- and time-varying phenomenon. To effectively use wind for long-range missions, both exploration and exploitation of wind are critical. This research presents a kinematics-based tree algorithm which efficiently handles the four-dimensional (three spatial dimensions and time) path planning problem. The Kinematic Tree algorithm provides a sequence of waypoints, airspeeds, headings, and bank angle commands for each segment of the path. The planner is shown to be resolution complete and computationally efficient. Global optimality of the cost function cannot be claimed, as energy is gained from the atmosphere, making the cost function inadmissible. However, the Kinematic Tree is shown to be optimal up to resolution if the cost function is admissible. Simulation results show the efficacy of this planning method for a glider in complex real wind data. Simulation results verify that the planner is able to extract energy from the atmosphere, enabling long-range missions. The Kinematic Tree planning framework, developed to minimize energy consumption of UAVs, is also applied to path planning for ground robots. In the traditional path planning problem, the focus is on obstacle avoidance and navigation. The optimal variant of the algorithm, named Kinematic Tree*, is shown to find optimal paths to the destination while avoiding obstacles. A more challenging scenario arises for planning in complex terrain. This research shows how the Kinematic Tree* algorithm can be extended to find minimum-energy paths for a ground vehicle in difficult mountainous terrain.
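A miniature of the energy-aware idea, though not the Kinematic Tree algorithm itself: plan on a grid with Dijkstra, pricing each move by the energy to fly it against the local wind. Harvested tailwind energy is clamped at a small positive floor so edge costs stay non-negative, sidestepping the admissibility issue the abstract raises. Names and the wind field are illustrative assumptions:

import heapq

def plan(grid_wind, start, goal, airspeed=10.0):
    """Dijkstra over a grid; edge cost = energy to traverse against the
    local wind component. Tailwind gains are clamped at a small positive
    floor so Dijkstra's non-negativity assumption holds."""
    n, m = len(grid_wind), len(grid_wind[0])
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, (i, j) = heapq.heappop(pq)
        if (i, j) == goal:
            break
        if d > dist.get((i, j), float("inf")):
            continue                      # stale queue entry
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < n and 0 <= nj < m:
                wind_along = grid_wind[ni][nj][0] * di + grid_wind[ni][nj][1] * dj
                cost = max(0.01, airspeed - wind_along)   # tailwind helps
                nd = d + cost
                if nd < dist.get((ni, nj), float("inf")):
                    dist[(ni, nj)] = nd
                    prev[(ni, nj)] = (i, j)
                    heapq.heappush(pq, (nd, (ni, nj)))
    return dist.get(goal), prev

wind = [[(0.0, 3.0)] * 10 for _ in range(10)]   # uniform wind, illustrative
print(plan(wind, (0, 0), (9, 9))[0])            # cheaper along the wind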
Sparsely-synchronized brain rhythm in a small-world neural network
NASA Astrophysics Data System (ADS)
Kim, Sang-Yoon; Lim, Woochang
2013-07-01
Sparsely-synchronized cortical rhythms, associated with diverse cognitive functions, have been observed in electrical recordings of brain activity. At the population level, cortical rhythms exhibit small-amplitude fast oscillations, while at the cellular level, individual neurons fire stochastically and sparsely at a much lower rate than the population rate. We study the effect of network architecture on sparse synchronization in an inhibitory population of subthreshold Morris-Lecar neurons (which cannot fire spontaneously without noise). Previously, sparse synchronization was found to occur for cases of both global coupling (i.e., regular all-to-all coupling) and random coupling. However, a real neural network is known to be neither regular nor random. Here, we consider sparse Watts-Strogatz small-world networks, which interpolate between a regular lattice and a random graph via rewiring. We start from a regular lattice with only short-range connections and then investigate the emergence of sparse synchronization by increasing the rewiring probability p for the short-range connections. For p = 0, the average synaptic path length between pairs of neurons is long; hence, only an unsynchronized population state exists because the global efficiency of information transfer is low. However, as p is increased, long-range connections begin to appear, and globally effective communication between distant neurons becomes available via shorter synaptic paths. Consequently, as p passes a threshold p_th (≈ 0.044), sparsely-synchronized population rhythms emerge. However, with increasing p, longer axon wirings become expensive because of their material and energy costs. At an optimal value p*_DE (≈ 0.24) of the rewiring probability, the ratio of the synchrony degree to the wiring cost becomes maximal. In this way, optimal sparse synchronization occurs at a minimal wiring cost in an economic small-world network, through a trade-off between synchrony and wiring cost.
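The structural half of this trade-off is easy to reproduce with networkx: in a Watts-Strogatz graph, average path length (used here as a crude stand-in for achievable synchrony) falls quickly with the rewiring probability while ring-metric wiring cost rises. The proxy is an assumption for illustration; the paper simulates Morris-Lecar dynamics:

# Hedged structural sketch: path length vs wiring cost in Watts-Strogatz
# networks. Inverse path length is only a crude synchrony proxy.
import networkx as nx

N, k = 500, 10
for p in [0.0, 0.01, 0.044, 0.1, 0.24, 0.5, 1.0]:
    G = nx.connected_watts_strogatz_graph(N, k, p, seed=1)
    L = nx.average_shortest_path_length(G)
    # Wiring cost: total edge "length" on the ring-lattice metric.
    cost = sum(min(abs(u - v), N - abs(u - v)) for u, v in G.edges())
    print(f"p={p:5.3f}  path length={L:5.2f}  cost={cost:7d}  "
          f"ratio={(1.0 / L) / cost:.2e}")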
NASA Astrophysics Data System (ADS)
Motta, Mario; Zhang, Shiwei
2018-05-01
We propose an algorithm for accurate, systematic, and scalable computation of interatomic forces within the auxiliary-field quantum Monte Carlo (AFQMC) method. The algorithm relies on the Hellmann-Feynman theorem and incorporates Pulay corrections in the presence of atomic orbital basis sets. We benchmark the method for small molecules by comparing the computed forces with the derivatives of the AFQMC potential energy surface and by direct comparison with other quantum chemistry methods. We then perform geometry optimizations using the steepest descent algorithm in larger molecules. With realistic basis sets, we obtain equilibrium geometries in agreement, within statistical error bars, with experimental values. The increase in computational cost for computing forces in this approach is only a small prefactor over that of calculating the total energy. This paves the way for a general and efficient approach for geometry optimization and molecular dynamics within AFQMC.
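Schematically, the geometry step involved looks like steepest descent driven by noisy forces. The sketch below uses a toy harmonic surrogate with Gaussian noise standing in for the statistical error bars on QMC force estimates; the surrogate and every parameter are invented:

# Hedged sketch: steepest-descent geometry optimization with noisy
# forces. The harmonic "potential" is a toy surrogate, not AFQMC.
import numpy as np

rng = np.random.default_rng(0)
x = np.array([1.8])          # bond length guess (angstrom), illustrative
x_eq, k_force = 1.50, 4.0    # toy equilibrium geometry and force constant

for step in range(30):
    force = -k_force * (x - x_eq) + rng.normal(0.0, 0.05, x.shape)  # noisy
    x = x + 0.1 * force      # steepest descent with a fixed step size
print(x)                      # hovers near x_eq within the noise floor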
Risk-based planning analysis for a single levee
NASA Astrophysics Data System (ADS)
Hui, Rui; Jachens, Elizabeth; Lund, Jay
2016-04-01
Traditional risk-based analysis for levee planning focuses primarily on overtopping failure. Although many levees fail before overtopping, few planning studies explicitly include intermediate geotechnical failures in flood risk analysis. This study develops a risk-based model for two simplified levee failure modes: overtopping failure and overall intermediate geotechnical failure from through-seepage, determined by the levee cross section represented by levee height and crown width. Overtopping failure is based only on water level and levee height, while through-seepage failure depends on many geotechnical factors as well, mathematically represented here as a function of levee crown width using levee fragility curves developed from professional judgment or analysis. These levee planning decisions are optimized to minimize the annual expected total cost, which sums expected (residual) annual flood damage and annualized construction costs. Applicability of this optimization approach to planning new levees or upgrading existing levees is demonstrated preliminarily for a levee on a small river protecting agricultural land, and a major levee on a large river protecting a more valuable urban area. Optimized results show higher likelihood of intermediate geotechnical failure than overtopping failure. The effects of uncertainty in levee fragility curves, economic damage potential, construction costs, and hydrology (changing climate) are explored. Optimal levee crown width is more sensitive to these uncertainties than height, while the derived general principles and guidelines for risk-based optimal levee planning remain the same.
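A stripped-down numerical version shows the structure of the optimization, with invented exceedance, fragility, and cost functions in place of the paper's calibrated inputs:

# Hedged sketch of risk-based levee design: grid-search height H and
# crown width W to minimize expected annual damage plus annualized
# construction cost. All functions and constants are toy placeholders.
import math

def annual_cost(H, W, damage=5e7, rate=0.05):
    p_overtop = math.exp(-H / 1.5)            # toy exceedance curve
    p_seep = 0.2 * math.exp(-W / 4.0)         # toy fragility vs crown width
    expected_damage = damage * (p_overtop + (1 - p_overtop) * p_seep)
    build = 1e6 * (H * W) ** 0.7              # toy construction cost
    return expected_damage + rate * build     # damage + annualized capital

best = min(((H / 2, W / 2) for H in range(2, 21) for W in range(2, 41)),
           key=lambda hw: annual_cost(*hw))
print(best, annual_cost(*best))

Grid search suffices here because the design space is two-dimensional; sensitivity runs amount to re-running the search with perturbed inputs.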
The optimal imaging strategy for patients with stable chest pain: a cost-effectiveness analysis.
Genders, Tessa S S; Petersen, Steffen E; Pugliese, Francesca; Dastidar, Amardeep G; Fleischmann, Kirsten E; Nieman, Koen; Hunink, M G Myriam
2015-04-07
Background: The optimal imaging strategy for patients with stable chest pain is uncertain. Objective: To determine the cost-effectiveness of different imaging strategies for patients with stable chest pain. Design: Microsimulation state-transition model. Data sources: Published literature. Patients: 60-year-old patients with a low to intermediate probability of coronary artery disease (CAD). Time horizon: Lifetime. Setting: The United States, the United Kingdom, and the Netherlands. Interventions: Coronary computed tomography (CT) angiography, cardiac stress magnetic resonance imaging, stress single-photon emission CT, and stress echocardiography. Outcome measures: Lifetime costs, quality-adjusted life-years (QALYs), and incremental cost-effectiveness ratios. Results: The strategy that maximized QALYs and was cost-effective in the United States and the Netherlands began with coronary CT angiography, continued with cardiac stress imaging if angiography found at least 50% stenosis in at least 1 coronary artery, and ended with catheter-based coronary angiography if stress imaging induced ischemia of any severity. For U.K. men, the preferred strategy was optimal medical therapy without catheter-based coronary angiography if coronary CT angiography found only moderate CAD or stress imaging induced only mild ischemia. In these strategies, stress echocardiography was consistently more effective and less expensive than other stress imaging tests. For U.K. women, the optimal strategy was stress echocardiography followed by catheter-based coronary angiography if echocardiography induced mild or moderate ischemia. Results were sensitive to changes in the probability of CAD and assumptions about false-positive results. Limitations: All cardiac stress imaging tests were assumed to be available. Exercise electrocardiography was included only in a sensitivity analysis. Differences in QALYs among strategies were small. Conclusion: Coronary CT angiography is a cost-effective triage test for 60-year-old patients who have nonacute chest pain and a low to intermediate probability of CAD. Funding: Erasmus University Medical Center.
RFID Technology for Continuous Monitoring of Physiological Signals in Small Animals.
Volk, Tobias; Gorbey, Stefan; Bhattacharyya, Mayukh; Gruenwald, Waldemar; Lemmer, Björn; Reindl, Leonhard M; Stieglitz, Thomas; Jansen, Dirk
2015-02-01
Telemetry systems enable researchers to continuously monitor physiological signals in unrestrained, freely moving small rodents. Drawbacks of common systems are limited operation time, the need to house the animals separately, and the necessity of a stable communication link. Furthermore, the cost of the typically proprietary telemetry systems limits their acceptance. The aim of this paper is to introduce a low-cost telemetry system based on common radio frequency identification technology, optimized for battery-independent operational time, good reusability, and flexibility. The presented implant is equipped with sensors to measure electrocardiogram, arterial blood pressure, and body temperature. The biological signals are transmitted as digital data streams. The device is capable of monitoring several freely moving animals housed in groups with a single reader station. The modular concept of the system significantly reduces the cost of monitoring multiple physiological functions and supports the refinement of procedures in preclinical research.
Avoiding Braess' Paradox Through Collective Intelligence
NASA Technical Reports Server (NTRS)
Wolpert, David H.; Tumer, Kagan
1999-01-01
In an Ideal Shortest Path Algorithm (ISPA), at each moment each router in a network sends all of its traffic down the path that will incur the lowest cost to that traffic. In the limit of an infinitesimally small amount of traffic for a particular router, routing that traffic via an ISPA is optimal as far as the cost incurred by that traffic is concerned. We demonstrate, though, that in many cases, due to the side-effects of one router's actions on another router's performance, having routers use ISPAs is suboptimal as far as global aggregate cost is concerned, even when only used to route infinitesimally small amounts of traffic. As a particular example of this we present an instance of Braess' paradox for ISPAs, in which adding new links to a network decreases overall throughput. We also demonstrate that load-balancing, in which the routing decisions are made to optimize the global cost incurred by all traffic currently being routed, is suboptimal as far as global cost averaged across time is concerned. This is also due to "side-effects", in this case of current routing decisions on future traffic. The theory of COllective INtelligence (COIN) is concerned precisely with the issue of avoiding such deleterious side-effects. We present key concepts from that theory and use them to derive an idealized algorithm whose performance is better than that of the ISPA, even in the infinitesimal limit. We present experiments verifying this, and also showing that a machine-learning-based version of this COIN algorithm, in which costs are only imprecisely estimated (a version potentially applicable in the real world), also outperforms the ISPA, despite having access to less information than the ISPA. In particular, this COIN algorithm avoids Braess' paradox.
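The flavor of the paradox is captured by the classic four-node example with standard textbook parameters (not the paper's router network):

# Worked instance of Braess' paradox on the classic four-node network:
# adding a free link raises every user's cost at selfish equilibrium.
N = 4000.0                                   # total traffic, S -> T

# Without the shortcut: by symmetry, flow splits evenly over S-A-T and
# S-B-T, where S-A and B-T cost x/100 and A-T, S-B cost 45.
cost_before = (N / 2) / 100 + 45             # 20 + 45 = 65

# With a zero-cost A-B link, S-A-B-T dominates both pure routes for every
# individual user, so all traffic piles onto it.
cost_after = N / 100 + 0 + N / 100           # 40 + 0 + 40 = 80

print(cost_before, cost_after)               # 65.0 vs 80.0: worse for all

The added link is individually irresistible but collectively harmful, exactly the kind of side-effect the COIN approach aims to avoid.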
Surface effects on water storage under dryland summer fallow, a lysimeter study
USDA-ARS?s Scientific Manuscript database
Small changes in short and long term soil water storage can have large effects on crop productivity in semi-arid climates. To optimize tillage and residue management, we need to measure evaporation from a range of treatments on contrasting soil types. Sixty low-cost, low-maintenance lysimeters were ...
Effects of cost metric on cost-effectiveness of protected-area network design in urban landscapes.
Burkhalter, J C; Lockwood, J L; Maslo, B; Fenn, K H; Leu, K
2016-04-01
A common goal in conservation planning is to acquire areas that are critical to realizing biodiversity goals in the most cost-effective manner. The way monetary acquisition costs are represented in such planning is an understudied but vital component of realizing cost efficiencies. We sought to design a protected-area network within a forested urban region that would protect 17 birds of conservation concern. We compared the total costs and spatial structure of the optimal protected-area networks produced using three acquisition-cost surrogates (area, agricultural land value, and tax-assessed land value). Using tax-assessed land values yielded cost savings of 73% and 78% relative to networks derived using area or agricultural land value, respectively. This cost reduction was due to the considerable heterogeneity in acquisition costs revealed in tax-assessed land values, especially for small land parcels, and the corresponding ability of the optimization algorithm to identify lower-cost parcels for inclusion that had equal value to our target species. Tax-assessed land values also reflected the strong spatial differences in acquisition costs (US$0.33/m² to $55/m²) and thus allowed the algorithm to avoid inclusion of high-cost parcels when possible. Our results add to a nascent but growing literature suggesting that conservation planners must consider the cost surrogate they use when designing protected-area networks. We suggest that choosing cost surrogates that capture spatial- and size-dependent heterogeneity in acquisition costs may be relevant to establishing protected areas in urbanizing ecosystems.
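The mechanism generalizes beyond this study: with heterogeneous parcel prices, a cost-aware selection rule buys coverage where it is cheap. The greedy sketch below uses invented parcels and prices, and real reserve-design tools use stronger algorithms than this heuristic:

# Hedged sketch of cost-sensitive reserve selection: repeatedly buy the
# parcel covering the most still-unprotected species per dollar.
parcels = {                      # parcel: (cost US$, species covered)
    "A": (120_000, {1, 2, 3}),
    "B": (15_000, {2, 4}),
    "C": (30_000, {5, 6, 7}),
    "D": (300_000, {1, 4, 5, 6, 7}),
}
target = set(range(1, 8))

chosen, covered, spent = [], set(), 0
while covered < target:          # loop until all target species covered
    name, (cost, spp) = max(
        ((n, p) for n, p in parcels.items() if n not in chosen),
        key=lambda item: len(item[1][1] - covered) / item[1][0],
    )
    chosen.append(name)
    covered |= spp
    spent += cost
print(chosen, spent)             # heterogeneous prices steer the choice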
Life cycle costing with a discount rate
NASA Technical Reports Server (NTRS)
Posner, E. C.
1978-01-01
This article studies life cycle costing for a capability needed for the indefinite future, and specifically investigates the dependence of optimal policies on the discount rate chosen. The two costs considered are reprocurement cost and maintenance and operations (M and O) cost. The procurement price is assumed known, and the M and O costs are assumed to be a known function, in fact, a non-decreasing function, of the time since last reprocurement. The problem is to choose the optimum reprocurement time so as to minimize the quotient of the total cost over a reprocurement period divided by the period. Or one could assume a discount rate and try to minimize the total discounted costs into the indefinite future. It is shown that the optimum policy in the presence of a small discount rate hardly depends on the discount rate at all, and leads to essentially the same policy as in the case in which discounting is not considered.
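The claim is easy to check numerically. With an invented procurement price and a nondecreasing M&O cost rate, the period minimizing average cost per unit time and the period minimizing total discounted cost at a small discount rate nearly coincide:

# Hedged numeric illustration: P and M(t) are invented placeholders.
import math

P = 100.0                                  # procurement price
M = lambda t: 2.0 + 0.5 * t                # M&O cost rate, nondecreasing

def avg_cost(T, n=2000):                   # (P + integral of M) / T
    dt = T / n
    return (P + sum(M((i + 0.5) * dt) * dt for i in range(n))) / T

def discounted(T, r, n=2000):              # renewal cost, infinite horizon
    dt = T / n
    cycle = P + sum(math.exp(-r * (i + 0.5) * dt) * M((i + 0.5) * dt) * dt
                    for i in range(n))
    return cycle / (1.0 - math.exp(-r * T))

grid = [T / 10 for T in range(10, 600)]
T0 = min(grid, key=avg_cost)
Tr = min(grid, key=lambda T: discounted(T, r=0.02))
print(T0, Tr)                              # nearly identical optima

With M(t) = 2 + 0.5t and P = 100, the undiscounted optimum is T = sqrt(P/0.25) = 20, and the discounted optimum at r = 0.02 lands within a few percent of it, as the article's result suggests.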
DOE Office of Scientific and Technical Information (OSTI.GOV)
Langenfeld, Julie K.; Bielicki, Jeffrey M.; Tao, Zhiyuan
2017-08-18
Fractured shale formations are new potential target reservoirs for CO2 capture and storage (CCS) and provide several potential advantages over storage in saline aquifers in terms of storage capacity, leakage risk, and cost savings from brownfield development. Here, we used a geospatial-optimization, engineering-economic model to investigate the sensitivity of integrated CCS networks in Ohio, Pennsylvania, and West Virginia to reductions in CO2 capture costs. The modeled reductions in CO2 capture costs were based on hypothetical cases in which technological innovation reduced those costs. There were also small differences in the spatial organization of the CCS deployment when the capture costs were reduced. We also found that the percent reduction in the average cost of CCS systems became smaller as the CO2 capture costs were decreased.
Zhang, Pengfei; Hutton, David; Li, Qiu
2018-01-01
Objectives: Erlotinib, the first-generation epidermal growth factor receptor tyrosine kinase inhibitor (EGFR-TKI), has been recommended as an essential treatment in patients with non-small-cell lung cancer (NSCLC) with EGFR mutation. Although it has improved progression-free survival (PFS), overall survival (OS) gains were limited, and erlotinib can be expensive. This cost-effectiveness analysis compares erlotinib monotherapy with gemcitabine-based doublet chemotherapy. Setting: First-line treatment of Asian patients with NSCLC with EGFR mutation. Methods: A Markov model was created based on the results of the ENSURE (NCT01342965) and OPTIMAL (CTONG-0802) trials, which evaluated erlotinib and chemotherapy. The model simulates cancer progression and all causes of death. All medical costs were calculated from the perspective of the Chinese healthcare system. Main outcome measures: The primary outcomes are costs, quality-adjusted life years (QALYs) and incremental cost-effectiveness ratios (ICERs). Results: The combined PFS was 11.81 months for erlotinib and 5.1 months for chemotherapy, while the OS was reversed at 24.68 months for erlotinib and 26.16 months for chemotherapy. The chemotherapy arm gained 0.13 QALYs compared with erlotinib monotherapy (1.17 QALYs vs 1.04 QALYs), while erlotinib had lower costs ($55,230 vs $77,669), resulting in an ICER of $174,808 per QALY for the chemotherapy arm, which exceeds three times the Chinese GDP per capita. The most influential factors were the health utility of PFS, the cost of erlotinib and the health utility of progressed disease. Conclusion: Erlotinib monotherapy may be acceptable as a cost-effective first-line treatment for NSCLC compared with gemcitabine-based chemotherapy. The results were robust to changes in assumptions. Trial registration numbers: NCT01342965 and CTONG-0802. PMID:29654023
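The headline ICER follows directly from the reported costs and QALYs; the small gap between the computed value and the quoted $174,808/QALY reflects rounding in the published figures:

# The abstract's ICER arithmetic, using its own reported values.
cost_chemo, cost_erlotinib = 77_669, 55_230   # US$
qaly_chemo, qaly_erlotinib = 1.17, 1.04

icer = (cost_chemo - cost_erlotinib) / (qaly_chemo - qaly_erlotinib)
print(f"ICER of chemotherapy vs erlotinib: ${icer:,.0f}/QALY")  # ~$172,608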
Tolerance allocation for an electronic system using neural network/Monte Carlo approach
NASA Astrophysics Data System (ADS)
Al-Mohammed, Mohammed; Esteve, Daniel; Boucher, Jaque
2001-12-01
The intense global competition to produce quality products at low cost has led many industrial nations to treat tolerances as a key factor in reducing cost while remaining competitive. In practice, tolerance allocation has so far been applied mostly to mechanical systems. Tolerance studies in the electronic domain typically rely on the Monte Carlo method, but that method is computationally expensive. This paper reviews several methods (worst-case analysis, statistical methods, and least-cost allocation by optimization methods) that can be used to treat the tolerancing problem for an electronic system, and explains their advantages and limitations. It then proposes an efficient method based on a neural network, with Monte Carlo results used as the basis data. The network is trained using the error back-propagation algorithm to predict the individual part tolerances, minimizing the total cost of the system by an optimization method. The proposed approach has been applied to a small-signal amplifier circuit as an example, and can easily be extended to a complex system of n components.
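The Monte Carlo baseline the paper accelerates is simple to state: sample part values within tolerance, evaluate the circuit, estimate yield. The sketch below uses an ideal inverting amplifier (gain magnitude Rf/Rin) as an invented stand-in for the paper's small-signal amplifier:

# Hedged Monte Carlo tolerance analysis: sample resistor values within
# tolerance and estimate the fraction of circuits meeting a gain spec.
import random

def yield_estimate(tol_rf, tol_rin, n=100_000, spec=(9.5, 10.5)):
    ok = 0
    for _ in range(n):
        rf = 100e3 * (1 + random.uniform(-tol_rf, tol_rf))
        rin = 10e3 * (1 + random.uniform(-tol_rin, tol_rin))
        if spec[0] <= rf / rin <= spec[1]:   # |gain| within spec
            ok += 1
    return ok / n

# Tighter tolerances raise yield but cost more; an allocator (in the
# paper, a neural network plus optimizer) searches this trade-off.
for tol in (0.01, 0.02, 0.05):
    print(f"±{tol:.0%} parts -> yield {yield_estimate(tol, tol):.2%}")

The expense the paper targets comes from re-running such loops inside an optimizer; training a neural network on a batch of Monte Carlo results replaces the inner loop with a cheap surrogate.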
Optimization study for the experimental configuration of CMB-S4
NASA Astrophysics Data System (ADS)
Barron, Darcy; Chinone, Yuji; Kusaka, Akito; Borril, Julian; Errard, Josquin; Feeney, Stephen; Ferraro, Simone; Keskitalo, Reijo; Lee, Adrian T.; Roe, Natalie A.; Sherwin, Blake D.; Suzuki, Aritoki
2018-02-01
The CMB Stage 4 (CMB-S4) experiment is a next-generation, ground-based experiment that will measure the cosmic microwave background (CMB) polarization to unprecedented accuracy, probing the signature of inflation, the nature of cosmic neutrinos, relativistic thermal relics in the early universe, and the evolution of the universe. CMB-S4 will consist of O(500,000) photon-noise-limited detectors that cover a wide range of angular scales in order to probe the cosmological signatures from both the early and late universe. It will measure a wide range of microwave frequencies to cleanly separate the CMB signals from galactic and extra-galactic foregrounds. To advance the progress towards designing the instrument for CMB-S4, we have established a framework to optimize the instrumental configuration to maximize its scientific output. The framework combines cost and instrumental models with a cosmology forecasting tool, and evaluates the scientific sensitivity as a function of various instrumental parameters. The cost model also allows us to perform the analysis under a fixed-cost constraint, optimizing for the scientific output of the experiment given finite resources. In this paper, we report our first results from this framework, using simplified instrumental and cost models. We have primarily studied two classes of instrumental configurations: arrays of large-aperture telescopes with diameters ranging from 2–10 m, and hybrid arrays that combine small-aperture telescopes (0.5-m diameter) with large-aperture telescopes. We explore performance as a function of telescope aperture size, distribution of the detectors into different microwave frequencies, survey strategy and survey area, low-frequency noise performance, and balance between small and large aperture telescopes for hybrid configurations. Both types of configurations must cover both large (~ degree) and small (~ arcmin) angular scales, and the performance depends on assumptions for performance vs. angular scale. The configurations with large-aperture telescopes have a shallow optimum around 4–6 m in aperture diameter, assuming that large telescopes can achieve good performance for low-frequency noise. We explore some of the uncertainties of the instrumental model and cost parameters, and we find that the optimum has a weak dependence on these parameters. The hybrid configuration shows an even broader optimum, spanning a range of 4–10 m in aperture for the large telescopes. We also present two strawperson configurations as an outcome of this optimization study, and we discuss some ideas for improving the simple cost and instrumental models used here. There are several areas of this analysis that deserve further improvement. In our forecasting framework, we adopt a simple two-component foreground model with spatially varying power-law spectral indices. We estimate de-lensing performance statistically and ignore non-idealities such as anisotropic mode coverage, boundary effects, and possible foreground residuals. Instrumental systematics, which are not accounted for in our analyses, may also influence the conceptual design. Further study of the instrumental and cost models will be one of the main areas of study by the entire CMB-S4 community. We hope that our framework will be useful for estimating the influence of these improvements in the future, and we will incorporate them in order to further improve the optimization.
Flow range enhancement by secondary flow effect in low solidity circular cascade diffusers
NASA Astrophysics Data System (ADS)
Sakaguchi, Daisaku; Tun, Min Thaw; Mizokoshi, Kanata; Kishikawa, Daiki
2014-08-01
A high pressure ratio and a wide operating range are strongly demanded of compressors and blowers. The key technical issue in design is suppressing flow separation at small flow rates without deteriorating efficiency at the design flow rate. Numerical simulation is very effective in the design procedure; however, its cost is generally high in the practical design process, and it is difficult to confirm an optimal design that combines many parameters. Multi-objective optimization has been proposed as an approach to this problem in the practical design process. In this study, a Low Solidity circular cascade Diffuser (LSD) in a centrifugal blower is successfully designed by means of a multi-objective optimization technique. An optimization code with a meta-model-assisted evolutionary algorithm is used with the commercial CFD code ANSYS-CFX. The optimization aims at improving the static pressure coefficient at the design point and at the low flow rate condition, while constraining the slope of the lift coefficient curve. Moreover, a small tip clearance of the LSD blade was applied in order to activate and stabilize the secondary flow effect at the small flow rate condition. The optimized LSD blade has an operating range extended towards smaller flow rates by 114% as compared to the baseline design, without deteriorating the diffuser pressure recovery at the design point. The diffuser pressure rise and operating flow range of the optimized LSD blade are experimentally verified by an overall performance test. The detailed flow in the diffuser is also confirmed by means of particle image velocimetry (PIV). The secondary flow is clearly captured by PIV, and it spreads across the whole LSD blade pitch. It is found that the optimized LSD blade shows good improvement of the blade loading over the whole operating range, while at small flow rates the flow separation on the LSD blade is successfully suppressed by the secondary flow effect.
Optimizing the selection of small-town wastewater treatment processes
NASA Astrophysics Data System (ADS)
Huang, Jianping; Zhang, Siqi
2018-04-01
Municipal wastewater treatment is energy-intensive. This high energy consumption drives up sewage treatment plant operating costs and increases the energy burden. To mitigate these adverse impacts as China develops, sewage treatment plants should adopt effective energy-saving technologies. Artificially fortified natural water treatment and the use of activated sludge and biofilm are all suitable technologies for small-town sewage treatment. This study features an analysis of the characteristics of sewage in small and medium-sized townships, an overview of current technologies, and a discussion of recent progress in sewage treatment. On this basis, an analysis of existing problems in municipal wastewater treatment is presented, and countermeasures to improve sewage treatment in small and medium-sized towns are proposed.
Economic evaluation of genomic selection in small ruminants: a sheep meat breeding program.
Shumbusho, F; Raoul, J; Astruc, J M; Palhiere, I; Lemarié, S; Fugeray-Scarbel, A; Elsen, J M
2016-06-01
Recent genomic evaluation studies using real data and predicting genetic gain by modeling breeding programs have reported moderate expected benefits from the replacement of classic selection schemes by genomic selection (GS) in small ruminants. The objectives of this study were to compare the cost, monetary genetic gain and economic efficiency of classic selection and GS schemes in the meat sheep industry. Deterministic methods were used to model selection based on multi-trait indices from a sheep meat breeding program. Decisional variables related to male selection candidates and progeny testing were optimized to maximize the annual monetary genetic gain (AMGG), that is, a weighted sum of the annual genetic gains in meat and maternal traits. For GS, a reference population of 2000 individuals was assumed and genomic information was available for evaluation of male candidates only. In the classic selection scheme, males' breeding values were estimated from their own and offspring phenotypes. In GS, different scenarios were considered, differing in the information used to select males (genomic only, genomic + own performance, genomic + offspring phenotypes). The results showed that all GS scenarios were associated with higher total variable costs than classic selection (if the cost of genotyping was 123 euros/animal). In terms of AMGG and economic returns, GS scenarios were found to be superior to classic selection only if genomic information was combined with males' own meat phenotypes (GS-Pheno) or with their progeny test information. The predicted economic efficiency, defined as returns (proportional to the number of expressions of AMGG in the nucleus and commercial flocks) minus total variable costs, showed that the best GS scenario (GS-Pheno) was up to 15% more efficient than classic selection. For all selection scenarios, optimization increased the overall AMGG, returns and economic efficiency. In conclusion, our study shows that some forms of GS strategies are more advantageous than classic selection, provided that GS is already initiated (i.e. the initial reference population is available). Optimizing decisional variables of the classic selection scheme could be of greater benefit than including genomic information in optimized designs.
Optimal regulatory strategies for metabolic pathways in Escherichia coli depending on protein costs
Wessely, Frank; Bartl, Martin; Guthke, Reinhard; Li, Pu; Schuster, Stefan; Kaleta, Christoph
2011-01-01
While previous studies have shed light on the link between the structure of metabolism and its transcriptional regulation, the extent to which transcriptional regulation controls metabolism has not yet been fully explored. In this work, we address this problem by integrating a large number of experimental data sets with a model of the metabolism of Escherichia coli. Using a combination of computational tools, including the concept of elementary flux patterns and methods from network inference and dynamic optimization, we find that transcriptional regulation of pathways reflects the protein investment into these pathways. While pathways that are associated with a high protein cost are controlled by fine-tuned transcriptional programs, pathways that only require a small protein cost are transcriptionally controlled in a few key reactions. As a reason for the occurrence of these different regulatory strategies, we identify an evolutionary trade-off between the conflicting requirements of reducing protein investment and of being able to respond rapidly to changes in environmental conditions. PMID:21772263
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aswad, Z.A.R.; Al-Hadad, S.M.S.
1983-03-01
The powerful Rosenbrock search technique, which optimizes both the search directions, using the Gram-Schmidt procedure, and the step size, using the Fibonacci line search method, has been used to optimize the drilling program of an oil well drilled in the Bai-Hassan oil field in Kirkuk, Iraq, using the two-dimensional drilling model of Galle and Woods. This model shows the effect of the two major controllable variables, weight on bit and rotary speed, on the drilling rate, while considering other controllable variables such as the mud properties, hydrostatic pressure, hydraulic design, and bit selection. The effect of tooth dullness on the drilling rate is also considered. Increasing the weight on the drill bit, with a small increase or decrease in rotary speed, resulted in a significant decrease in the drilling cost for most bit runs. It was found that a 48% reduction in this cost and a 97-hour savings in the total drilling time were possible under certain conditions.
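As a sketch of one ingredient, the Fibonacci line search shrinks a bracket on a unimodal function using Fibonacci ratios, spending one function evaluation per iteration. The cost function below is a toy stand-in for cost-per-foot versus weight on bit, not the Galle and Woods model:

# Hedged sketch of the Fibonacci line search used inside the Rosenbrock
# procedure. Each iteration reuses one interior point, so only one new
# evaluation is needed per bracket reduction.
def fibonacci_search(f, a, b, n=25):
    fib = [1, 1]
    while len(fib) <= n:
        fib.append(fib[-1] + fib[-2])
    m = n
    x1 = a + fib[m - 2] / fib[m] * (b - a)
    x2 = a + fib[m - 1] / fib[m] * (b - a)
    f1, f2 = f(x1), f(x2)
    while m > 2:
        m -= 1
        if f1 < f2:              # minimum lies in [a, x2]
            b, x2, f2 = x2, x1, f1
            x1 = a + fib[m - 2] / fib[m] * (b - a)
            f1 = f(x1)
        else:                    # minimum lies in [x1, b]
            a, x1, f1 = x1, x2, f2
            x2 = a + fib[m - 1] / fib[m] * (b - a)
            f2 = f(x2)
    return 0.5 * (a + b)

cost = lambda w: (w - 32.0) ** 2 / 50 + 8.0   # toy cost-per-foot vs WOB
print(fibonacci_search(cost, 10.0, 60.0))      # ~32, the toy optimum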
Distributed Wind Competitiveness Improvement Project
DOE Office of Scientific and Technical Information (OSTI.GOV)
2016-05-01
The Competitiveness Improvement Project (CIP) is a periodic solicitation through the U.S. Department of Energy and its National Renewable Energy Laboratory. Manufacturers of small and medium wind turbines are awarded cost-shared grants via a competitive process to optimize their designs, develop advanced manufacturing processes, and perform turbine testing. The goals of the CIP are to make wind energy cost competitive with other distributed generation technology and increase the number of wind turbine designs certified to national testing standards. This fact sheet describes the CIP and funding awarded as part of the project.
Information Switching Processor (ISP) contention analysis and control
NASA Technical Reports Server (NTRS)
Shyy, D.; Inukai, T.
1993-01-01
Future satellite communications, as a viable means of communications and an alternative to terrestrial networks, demand flexibility and low end-user cost. On-board switching/processing satellites potentially provide these features, allowing flexible interconnection among multiple spot beams, direct-to-the-user communications services using very small aperture terminals (VSATs), independent uplink and downlink access/transmission system designs optimized to users' traffic requirements, efficient TDM downlink transmission, and better link performance. A flexible switching system on the satellite, in conjunction with low-cost user terminals, will likely benefit future satellite network users.
Optimal control strategy for electricity production at an isolated site (Strategie de commande optimale de la production electrique dans un site isole)
NASA Astrophysics Data System (ADS)
Barris, Nicolas
Hydro-Quebec manages more than 20 isolated power grids all over the province. The grids are located in small villages where the electricity demand is rather small. Those villages being far away from each other and from the main electricity production facilities, energy is produced locally using diesel generators. Electricity production costs at the isolated power grids are very high due to elevated diesel prices and transportation costs. However, the price of electricity is the same for the entire province, with no regard to the production costs of the electricity consumed. These two factors combined result in yearly operating losses for Hydro-Quebec. For any given village, several diesel generators are required to satisfy the demand. When the load increases, it becomes necessary to increase the capacity, either by adding a generator to the production or by switching to a more powerful generator. The same thing happens when the load decreases. Every decision regarding changes in the production is included in the control strategy, which is based on predetermined parameters. These parameters were specified according to empirical studies and the knowledge base of the engineers managing the isolated power grids, but without any optimization approach. The objective of the presented work is to minimize diesel consumption by optimizing the parameters included in the control strategy. Its impact would be to limit the operating losses generated by the isolated power grids, and the CO2-equivalent emissions, without adding new equipment or completely changing the nature of the strategy. To meet this objective, the isolated power grid simulator OPERA is used along with the optimization library NOMAD and the data of three villages in northern Quebec. A preliminary optimization instance for the first village showed that some modifications to the existing control strategy must be made to better achieve the minimization objective. The main optimization processes consist of three different approaches: the optimization of one set of parameters for all the villages, the optimization of one set of parameters per village, and the optimization of one set of parameters per diesel generator configuration per village. In the first scenario, the optimization of one set of parameters for all the villages leads to compromises for all three villages without allowing the full potential reduction for any village. Therefore, it is shown that applying one set of parameters to all the villages is not suitable for finding an optimal solution. In the second scenario, the optimization of one set of parameters per village allows an improvement over the previous results. At this point, it is shown that it is crucial to remove from production the less efficient configurations when more efficient configurations are available alongside them. In the third scenario, the optimization of one set of parameters per configuration per village requires a very large number of function evaluations but does not produce any satisfactory solution. To improve the performance of the optimization, the structure of the problem is exploited. Two different approaches are considered: optimizing one set of parameters at a time, and optimizing the different rules included in the control strategy one at a time. In both cases, results are similar but calculation costs differ, the second method being much more cost efficient.
The optimal values of the final rules' parameters can be directly linked to the transition points that favor efficient operation of the isolated power grids. Indeed, these transition points are defined in such a way that the high-efficiency zone of every configuration is used. Therefore, it seems possible to identify these optimal transition points directly on the graphs and to define the parameters in the control strategy without even having to run any optimization process. The diesel consumption reduction for all three villages is about 1.9%. Considering elevated diesel costs and the existence of about 20 other isolated power grids, the use of the developed methods, together with a calibration of OPERA, would allow a substantial reduction of Hydro-Quebec's annual deficit. Also, since one of the developed methods is very cost effective and produces equivalent results, it could be used during other processes; for example, when buying new equipment for a grid, it would be possible to assess its full potential under an optimized control strategy and improve the net present value.
Design of shared unit-dose drug distribution network using multi-level particle swarm optimization.
Chen, Linjie; Monteiro, Thibaud; Wang, Tao; Marcon, Eric
2018-03-01
Unit-dose drug distribution systems provide optimal choices in terms of medication security and efficiency for organizing the drug-use process in large hospitals. As small hospitals have to share such automatic systems for economic reasons, the structure of their logistic organization becomes a very sensitive issue. In the research reported here, we develop a generalized multi-level optimization method, multi-level particle swarm optimization (MLPSO), to design a shared unit-dose drug distribution network. Structurally, the problem studied can be considered a type of capacitated location-routing problem (CLRP) with new constraints related to specific production planning. This kind of problem implies that a multi-level optimization should be performed in order to minimize logistic operating costs. Our results show that the proposed algorithm yields a more suitable modeling framework, computational time savings, and better optimization performance than reported in the literature on this subject.
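For orientation, the building block underneath MLPSO is plain particle swarm optimization. The sketch below minimizes an invented two-dimensional cost function and omits the paper's multi-level nesting and routing constraints entirely:

# Hedged sketch of plain PSO, the mechanism the paper layers across its
# decision levels. The cost function is an invented toy.
import random

random.seed(1)

def cost(x, y):                     # toy stand-in for a logistic cost
    return (x - 3.0) ** 2 + (y + 1.0) ** 2 + 2.0

n, w, c1, c2 = 30, 0.7, 1.5, 1.5    # swarm size and PSO coefficients
pos = [[random.uniform(-10, 10) for _ in range(2)] for _ in range(n)]
vel = [[0.0, 0.0] for _ in range(n)]
pbest = [p[:] for p in pos]
gbest = min(pbest, key=lambda p: cost(*p))[:]

for _ in range(200):
    for i in range(n):
        for d in range(2):          # velocity: inertia + cognitive + social
            vel[i][d] = (w * vel[i][d]
                         + c1 * random.random() * (pbest[i][d] - pos[i][d])
                         + c2 * random.random() * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if cost(*pos[i]) < cost(*pbest[i]):
            pbest[i] = pos[i][:]
    gbest = min(pbest, key=lambda p: cost(*p))[:]

print(gbest, cost(*gbest))          # converges near (3, -1), cost ~2

Each particle blends inertia with attraction to its own best position and the swarm's best; the multi-level variant runs this search at several coupled decision levels.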
Eckermann, Simon; Karnon, Jon; Willan, Andrew R
2010-01-01
Value of information (VOI) methods have been proposed as a systematic approach to inform optimal research design and prioritization. Four related questions arise that VOI methods could address. (i) Is further research for a health technology assessment (HTA) potentially worthwhile? (ii) Is the cost of a given research design less than its expected value? (iii) What is the optimal research design for an HTA? (iv) How can research funding be best prioritized across alternative HTAs? Following Occam's razor, we consider the usefulness of VOI methods in informing questions 1-4 relative to their simplicity of use. Expected value of perfect information (EVPI) with current information, while simple to calculate, is shown to provide neither a necessary nor a sufficient condition to address question 1, given that what EVPI needs to exceed varies with the cost of research design, which can vary from very large down to negligible. Hence, for any given HTA, EVPI does not discriminate, as it can be large and further research not worthwhile or small and further research worthwhile. In contrast, each of questions 1-4 are shown to be fully addressed (necessary and sufficient) where VOI methods are applied to maximize expected value of sample information (EVSI) minus expected costs across designs. In comparing complexity in use of VOI methods, applying the central limit theorem (CLT) simplifies analysis to enable easy estimation of EVSI and optimal overall research design, and has been shown to outperform bootstrapping, particularly with small samples. Consequently, VOI methods applying the CLT to inform optimal overall research design satisfy Occam's razor in both improving decision making and reducing complexity. Furthermore, they enable consideration of relevant decision contexts, including option value and opportunity cost of delay, time, imperfect implementation and optimal design across jurisdictions. More complex VOI methods such as bootstrapping of the expected value of partial EVPI may have potential value in refining overall research design. However, Occam's razor must be seriously considered in application of these VOI methods, given their increased complexity and current limitations in informing decision making, with restriction to EVPI rather than EVSI and not allowing for important decision-making contexts. Initial use of CLT methods to focus these more complex partial VOI methods towards where they may be useful in refining optimal overall trial design is suggested. Integrating CLT methods with such partial VOI methods to allow estimation of partial EVSI is suggested in future research to add value to the current VOI toolkit.
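The quantity at the center of the discussion is straightforward to compute by simulation: EVPI is the expected value of deciding after uncertainty resolves minus the value of the best decision under current information. The two-treatment net-benefit model below is an invented toy example:

# Hedged Monte Carlo EVPI: E[max over decisions] - max over decisions[E],
# per patient. The net-benefit model for two treatments is invented.
import numpy as np

rng = np.random.default_rng(42)
K = 50_000                              # willingness to pay per QALY
n = 100_000

# Uncertain incremental effectiveness and cost of treatment B vs A.
d_qaly = rng.normal(0.10, 0.08, n)      # QALY gain, uncertain
d_cost = rng.normal(3_000, 1_000, n)    # extra cost, uncertain

nb = np.column_stack([np.zeros(n), K * d_qaly - d_cost])  # NB vs A
evpi = nb.max(axis=1).mean() - nb.mean(axis=0).max()
print(f"per-patient EVPI: ${evpi:,.0f}")

As the text argues, a large per-patient EVPI alone says nothing about whether a trial is worthwhile; that judgment requires EVSI net of the cost of the specific design.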
Nonlinear optimization simplified by hypersurface deformation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stillinger, F.H.; Weber, T.A.
1988-09-01
A general strategy is advanced for simplifying nonlinear optimization problems, the ant-lion method. This approach exploits shape modifications of the cost-function hypersurface which distend basins surrounding low-lying minima (including global minima). By intertwining hypersurface deformations with steepest-descent displacements, the search is concentrated on a small relevant subset of all minima. Specific calculations demonstrating the value of this method are reported for the partitioning of two classes of irregular but nonrandom graphs, the prime-factor graphs and the pi graphs. We also indicate how this approach can be applied to the traveling salesman problem and to design layout optimization, and that it may be useful in combination with simulated annealing strategies.
The economics of bladder cancer: costs and considerations of caring for this disease.
Svatek, Robert S; Hollenbeck, Brent K; Holmäng, Sten; Lee, Richard; Kim, Simon P; Stenzl, Arnulf; Lotan, Yair
2014-08-01
Due to high recurrence rates, intensive surveillance strategies, and expensive treatment costs, the management of bladder cancer contributes significantly to medical costs. To provide a concise evaluation of contemporary cost-related challenges in the care of patients with bladder cancer. An emphasis is placed on the initial diagnosis of bladder cancer and therapy considerations for both non-muscle-invasive bladder cancer (NMIBC) and more advanced disease. A systematic review of the literature was performed using Medline (1966 to February 2011). Medical Subject Headings (MeSH) terms for search criteria included "bladder cancer, neoplasms" OR "carcinoma, transitional cell" AND all cost-related MeSH search terms. Studies evaluating the costs associated with various diagnostic or treatment approaches were reviewed. Routine use of perioperative chemotherapy following complete transurethral resection of bladder tumor has been estimated to provide a cost savings. Routine office-based fulguration of small low-grade recurrences could decrease costs. Another potentially important target for decreasing variation and cost lies in risk-modified surveillance strategies after initial bladder tumor removal, to reduce the cost associated with frequent cystoscopic and radiographic procedures. Optimizing postoperative care after radical cystectomy has the potential to decrease length of stay and perioperative morbidity, with substantial decreases in perioperative care expenses. The gemcitabine-cisplatin regimen has been estimated to result in a modest increase in cost effectiveness over methotrexate, vinblastine, doxorubicin, and cisplatin. Additional costs of therapies need to be balanced with effectiveness, and there are significant gaps in knowledge regarding optimal surveillance and treatment of both early and advanced bladder cancer. Regardless of disease severity, improvements in the efficiency of bladder cancer care to limit unnecessary interventions and optimize effective cancer treatment can reduce overall health care costs. Two scenarios where economic and comparative-effectiveness research is limited but would be most beneficial are (1) the management of NMIBC patients, where excessive costs are due to vigilant surveillance strategies, and (2) patients with metastatic disease, due to the enormous cost associated with late-stage and end-of-life care.
An approach to modeling and optimization of integrated renewable energy system (IRES)
NASA Astrophysics Data System (ADS)
Maheshwari, Zeel
The purpose of this study was to cost-optimize the electrical part of IRES (Integrated Renewable Energy Systems) using HOMER and to maximize the utilization of resources using MATLAB programming. IRES is an effective and viable strategy that can be employed to harness renewable energy resources to energize remote rural areas of developing countries. Resource-need matching, which is the basis for IRES, makes it possible to provide energy in an efficient and cost-effective manner. Modeling and optimization of IRES for a selected study area makes IRES more advantageous when compared to hybrid concepts. A remote rural area with a population of 700 in 120 households and 450 cattle is considered as an example for cost analysis and optimization. Mathematical models for key components of IRES, such as the biogas generator, hydropower generator, wind turbine, PV system, and battery banks, are developed. A discussion of the size of the water reservoir required is also presented. Modeling of IRES on the basis of need-to-resource and resource-to-need matching is pursued to help in the optimal use of resources for the needs. Fixed resources such as biogas and water are used in prioritized order, whereas movable resources such as wind and solar can be used simultaneously for different priorities. IRES is cost optimized for electricity demand using the HOMER software developed by NREL (the National Renewable Energy Laboratory). HOMER optimizes the configuration for electrical demand only and does not consider other demands, such as biogas for cooking and water for domestic and irrigation purposes. Hence an optimization program based on the need-resource modeling of IRES is implemented in MATLAB. Optimization of the utilization of resources for several needs is performed. Results obtained from MATLAB clearly show that the available resources can fulfill the demand of the rural areas. Introduction of IRES in rural communities has many socio-economic implications. It brings about improvement in the living environment and community welfare by supplying the basic needs, such as biogas for cooking, water for domestic and irrigation purposes, and electrical energy for lighting, communication, cold storage, educational, and small-scale industrial purposes.
Application of advanced technologies to small, short-haul aircraft
NASA Technical Reports Server (NTRS)
Andrews, D. G.; Brubaker, P. W.; Bryant, S. L.; Clay, C. W.; Giridharadas, B.; Hamamoto, M.; Kelly, T. J.; Proctor, D. K.; Myron, C. E.; Sullivan, R. L.
1978-01-01
The results of a preliminary design study investigating the use of selected advanced technologies to achieve a low-cost design for small (50-passenger), short-haul (50 to 1000 mile) transports are reported. The largest single item in the cost of manufacturing an airplane of this type is labor. A careful examination of applying advanced technology to the airframe structure was performed, since structure is one of the most labor-intensive parts of the airplane. Preliminary investigations of advanced aerodynamics, flight controls, ride control and gust load alleviation systems, aircraft systems, and turboprop propulsion systems were also performed. The most beneficial advanced technology examined was bonded aluminum primary structure. The use of this structure in large wing panels and body sections resulted in a greatly reduced number of parts and fasteners and, therefore, labor hours. The resulting cost of assembled airplane structure was reduced by 40% and the total airplane manufacturing cost by 16%, a major cost reduction. With further development, test verification, and optimization, appreciable weight saving is also achievable. Other advanced technology items which showed significant gains are as follows: (1) advanced turboprop: reduced block fuel by 15-30%, depending on range; (2) configuration revisions (vee-tail): empennage cost reduction of 25%; (3) leading-edge flap addition: weight reduction of 2500 pounds.
The magnitude and colour of noise in genetic negative feedback systems.
Voliotis, Margaritis; Bowsher, Clive G
2012-08-01
The comparative ability of transcriptional and small RNA-mediated negative feedback to control fluctuations or 'noise' in gene expression remains unexplored. Both autoregulatory mechanisms usually suppress the average (mean) of the protein level and its variability across cells. The variance of the number of proteins per molecule of mean expression is also typically reduced compared with the unregulated system, but is almost never below the value of one. This relative variance often substantially exceeds a recently obtained theoretical lower limit for biochemical feedback systems. Adding transcriptional or small RNA-mediated control has different effects. Transcriptional autorepression robustly reduces both the relative variance and the persistence (lifetime) of fluctuations. Both benefits combine to reduce noise in downstream gene expression. Autorepression via small RNA can achieve more extreme noise reduction and typically has less effect on the mean expression level. However, it is often more costly to implement and is more sensitive to rate parameters. Theoretical lower limits on the relative variance are known to decrease slowly as a measure of the cost per molecule of mean expression increases. However, the proportional increase in cost needed to achieve substantial noise suppression can be different away from the optimal frontier; for transcriptional autorepression, it is frequently negligible.
Simultaneous Aerodynamic Analysis and Design Optimization (SAADO) for a 3-D Flexible Wing
NASA Technical Reports Server (NTRS)
Gumbert, Clyde R.; Hou, Gene J.-W.
2001-01-01
The formulation and implementation of an optimization method called Simultaneous Aerodynamic Analysis and Design Optimization (SAADO) are extended from single-discipline analysis (aerodynamics only) to multidisciplinary analysis - in this case, static aero-structural analysis - and applied to a simple 3-D wing problem. The method aims to reduce the computational expense incurred in performing shape optimization using state-of-the-art Computational Fluid Dynamics (CFD) flow analysis, Finite Element Method (FEM) structural analysis, and sensitivity analysis tools. Results for this small problem show that the method reaches the same local optimum as conventional optimization. However, unlike its application to the wing (single-discipline analysis), the method, as implemented here, may not show a significant reduction in the computational cost. Similar reductions were seen in the two-design-variable (DV) problem results but not in the 8-DV results given here.
Haghighi Mood, Kaveh; Lüchow, Arne
2017-08-17
Diffusion quantum Monte Carlo calculations with partial and full optimization of the guide function are carried out for the dissociation of the FeS molecule. For the first time, quantum Monte Carlo orbital optimization for transition metal compounds is performed. It is demonstrated that energy optimization of the orbitals of a complete active space wave function in the presence of a Jastrow correlation function is required to obtain agreement with the experimental dissociation energy. Furthermore, it is shown that orbital optimization leads to a 5Δ ground state, in agreement with experiments but in disagreement with other high-level ab initio wave function calculations, which all predict a 5Σ+ ground state. The role of the Jastrow factor in DMC calculations with pseudopotentials is investigated. The results suggest that a large Jastrow factor may improve the DMC accuracy substantially at small additional cost.
Efficient and equitable spatial allocation of renewable power plants at the country scale
NASA Astrophysics Data System (ADS)
Drechsler, Martin; Egerer, Jonas; Lange, Martin; Masurowski, Frank; Meyerhoff, Jürgen; Oehlmann, Malte
2017-09-01
Globally, the production of renewable energy is undergoing rapid growth. One of the most pressing issues is the appropriate allocation of renewable power plants, as the question of where to produce renewable electricity is highly controversial. Here we explore this issue through analysis of the efficient and equitable spatial allocation of wind turbines and photovoltaic power plants in Germany. We combine multiple methods, including legal analysis, economic and energy modelling, monetary valuation and numerical optimization. We find that minimum distances between renewable power plants and human settlements should be as small as is legally possible. Even small reductions in efficiency lead to large increases in equity. By considering electricity grid expansion costs, we find a more even allocation of power plants across the country than is the case when grid expansion costs are neglected.
NASA Astrophysics Data System (ADS)
Twelve small businesses that are developing equipment and computer programs for geophysics have won Small Business Innovation Research (SBIR) grants from the National Science Foundation for their 1989 proposals. The SBIR program was set up to encourage the private sector to undertake costly, advanced experimental work that has potential for great benefit. The geophysical research projects are a long-path intracavity laser spectrometer for measuring atmospheric trace gases, optimizing a local weather forecast model, a new platform for high-altitude atmospheric science, an advanced density logging tool, a deep-Earth sampling system, superconducting seismometers, a phased-array Doppler current profiler, monitoring mesoscale surface features of the ocean through automated analysis, krypton-81 dating in polar ice samples, discrete stochastic modeling of thunderstorm winds, a layered soil-synthetic liner base system to isolate buildings from earthquakes, and a low-cost continuous on-line organic-content monitor for water-quality determination.
A model of optimal voluntary muscular control.
FitzHugh, R
1977-07-19
In the absence of detailed knowledge of how the CNS controls a muscle through its motor fibers, a reasonable hypothesis is that of optimal control. This hypothesis is studied using a simplified mathematical model of a single muscle, based on A.V. Hill's equations, with series elastic element omitted, and with the motor signal represented by a single input variable. Two cost functions were used. The first was total energy expended by the muscle (work plus heat). If the load is a constant force, with no inertia, Hill's optimal velocity of shortening results. If the load includes a mass, analysis by optimal control theory shows that the motor signal to the muscle consists of three phases: (1) maximal stimulation to accelerate the mass to the optimal velocity as quickly as possible, (2) an intermediate level of stimulation to hold the velocity at its optimal value, once reached, and (3) zero stimulation, to permit the mass to slow down, as quickly as possible, to zero velocity at the specified distance shortened. If the latter distance is too small, or the mass too large, the optimal velocity is not reached, and phase (2) is absent. For lengthening, there is no optimal velocity; there are only two phases, zero stimulation followed by maximal stimulation. The second cost function was total time. The optimal control for shortening consists of only phases (1) and (3) above, and is identical to the minimal energy control whenever phase (2) is absent from the latter. Generalizations of this model to include viscous loads and a series elastic element are discussed.
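As a small worked example of the first cost function (work plus heat), the sketch below evaluates a dimensionless Hill force-velocity curve and locates the energy-optimal shortening velocity numerically. The parameter values, including the constant maintenance-heat term that produces an interior optimum, are assumptions for illustration rather than FitzHugh's values.

```python
import numpy as np

# Hill force-velocity relation for shortening: (F + a)(v + b) = (F0 + a) b.
# Dimensionless toy parameters (assumed): a = 0.25 F0, b chosen so v_max = 1.
F0, a, b = 1.0, 0.25, 0.25
m_heat = 0.05                                # maintenance heat rate (assumed)

v = np.linspace(1e-4, b * F0 / a, 2000)      # shortening velocities up to v_max
F = (F0 + a) * b / (v + b) - a               # force available at velocity v
work_rate = F * v                            # useful power delivered to load
heat_rate = a * v + m_heat                   # shortening heat + maintenance
eta = work_rate / (work_rate + heat_rate)    # fraction of energy doing work

i = eta.argmax()
print(f"energy-optimal shortening velocity ~ {v[i]:.2f} (of v_max = 1), "
      f"efficiency {eta[i]:.2f}")
```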
NASA Technical Reports Server (NTRS)
Rosenberg, L. S.; Revere, W. R.; Selcuk, M. K.
1981-01-01
Small solar thermal power systems (up to 10 MWe in size) were tested. The solar thermal power plant ranking study was performed to aid in experiment activity and support decisions for the selection of the most appropriate technological approach. The cost and performance were determined for insolation conditions by utilizing the Solar Energy Simulation computer code (SESII). This model optimizes the size of the collector field and energy storage subsystem for given engine generator and energy transport characteristics. The development of the simulation tool, its operation, and the results achieved from the analysis are discussed.
NASA Astrophysics Data System (ADS)
Noor-E-Alam, Md.; Doucette, John
2015-08-01
Grid-based location problems (GBLPs) can be used to solve location problems in business, engineering, resource exploitation, and even in the field of medical sciences. To solve these decision problems, an integer linear programming (ILP) model is designed and developed to provide the optimal solution for GBLPs considering fixed cost criteria. Preliminary results show that the ILP model is efficient in solving small to moderate-sized problems. However, this ILP model becomes intractable when solving large-scale instances. Therefore, a decomposition heuristic is proposed to solve these large-scale GBLPs, which demonstrates a significant reduction in solution runtimes. To benchmark the proposed heuristic, results are compared with the exact solution via ILP. The experimental results show that the proposed method significantly outperforms the exact method in runtime with minimal (and in most cases, no) loss of optimality.
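A toy instance makes the model concrete. The sketch below is not the authors' ILP but an exact brute-force search over a tiny grid-based location problem with fixed opening costs and Manhattan-distance service costs; the same objective, handed to an ILP solver, is what becomes intractable at scale and motivates their decomposition heuristic. All data are invented.

```python
import itertools
import numpy as np

# Demand points on a 6x6 grid; candidate facility sites on a coarser subgrid.
demand = np.array([(i, j) for i in range(6) for j in range(6)])
candidates = np.array([(i, j) for i in range(0, 6, 2) for j in range(0, 6, 2)])
fixed_cost = 20.0                                  # cost to open one site
dist = np.abs(demand[:, None, :] - candidates[None, :, :]).sum(axis=2)

best = (np.inf, None)
for k in range(1, len(candidates) + 1):            # enumerate all site subsets
    for subset in itertools.combinations(range(len(candidates)), k):
        cost = fixed_cost * k + dist[:, list(subset)].min(axis=1).sum()
        if cost < best[0]:
            best = (cost, subset)

print("optimal cost:", best[0])
print("open sites  :", candidates[list(best[1])].tolist())
```

With 9 candidate sites this is only 511 subsets; the exponential growth of that enumeration is exactly why an ILP formulation, and eventually a decomposition heuristic, is needed for realistic grids.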
Robust design of microchannel cooler
NASA Astrophysics Data System (ADS)
He, Ye; Yang, Tao; Hu, Li; Li, Leimin
2005-12-01
The microchannel cooler offers a new method for cooling high-power diode lasers, with the advantages of small volume, high thermal-dissipation efficiency, and low cost when mass-produced. In order to reduce the sensitivity of the design to manufacturing errors and other disturbances, the Taguchi method, a robust design technique, was chosen to optimize three parameters important to the cooling performance of a roof-like microchannel cooler. The hydromechanical and thermal model of the varying-section microchannel was solved with the finite volume method in FLUENT. A special program was written to automate the design process and improve efficiency. The resulting optimal design compromises between cooling performance and robustness. The design method proves to be practical.
Streamflow variability and optimal capacity of run-of-river hydropower plants
NASA Astrophysics Data System (ADS)
Basso, S.; Botter, G.
2012-10-01
The identification of the capacity of a run-of-river plant which allows for the optimal utilization of the available water resources is a challenging task, mainly because of the inherent temporal variability of river flows. This paper proposes an analytical framework to describe the energy production and the economic profitability of small run-of-river power plants on the basis of the underlying streamflow regime. We provide analytical expressions for the capacity which maximizes the produced energy as a function of the underlying flow duration curve and minimum environmental flow requirements downstream of the plant intake. Similar analytical expressions are derived for the capacity which maximizes the economic return deriving from construction and operation of a new plant. The analytical approach is applied to a minihydro plant recently proposed in a small Alpine catchment in northeastern Italy, demonstrating the potential of the method as a flexible and simple design tool for practical application. The analytical model provides useful insight on the major hydrologic and economic controls (e.g., streamflow variability, energy price, costs) on the optimal plant capacity and helps in identifying policy strategies to reduce the current gap between the economic and energy optimizations of run-of-river plants.
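The following sketch illustrates the paper's central distinction between the energy-maximizing and profit-maximizing capacities using a synthetic flow duration curve; the head, efficiency, price, and capital-cost scaling are invented placeholders, not values from the study, and O&M is ignored.

```python
import numpy as np

# Synthetic flow duration curve (assumed): exceedance probability vs. flow.
p = np.linspace(0.0, 1.0, 1000)             # fraction of time flow is exceeded
Q = 8.0 * np.exp(-3.0 * p)                  # river flow [m^3/s]
q_env = 0.5                                 # minimum environmental flow [m^3/s]

mw_per_flow = 1000 * 9.81 * 25 * 0.85 / 1e6   # 25 m head, 85% efficiency
hours, price = 8760.0, 80.0                 # h/yr, assumed euros/MWh

def annual_energy(c):                       # c = plant capacity [m^3/s]
    usable = np.clip(Q - q_env, 0.0, c)     # turbine flow limited by capacity
    return mw_per_flow * usable.mean() * hours          # MWh/yr

caps = np.linspace(0.1, 8.0, 400)
energy = np.array([annual_energy(c) for c in caps])
capex = 0.6e6 * caps ** 0.8                 # assumed cost scaling with capacity
profit = price * energy - 0.08 * capex      # revenue minus annualized capital

print("energy-optimal capacity: %.2f m^3/s" % caps[energy.argmax()])
print("profit-optimal capacity: %.2f m^3/s" % caps[profit.argmax()])
```

Energy keeps rising until the plant swallows the entire regulated flow, while profit peaks at a distinctly smaller capacity, which is the gap between the energy and economic optima the paper quantifies analytically.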
Søgaard, Rikke; Fischer, Barbara Malene B; Mortensen, Jann; Rasmussen, Torben R; Lassen, Ulrik
2013-01-01
To assess the expected costs and outcomes of alternative strategies for staging of lung cancer to inform a Danish National Health Service perspective about the most cost-effective strategy. A decision tree was specified for patients with a confirmed diagnosis of non-small-cell lung cancer. Six strategies were defined from relevant combinations of mediastinoscopy, endoscopic or endobronchial ultrasound with needle aspiration, and combined positron emission tomography-computed tomography with F18-fluorodeoxyglucose. Patients without distant metastases and central or contralateral nodal involvement (N2/N3) were considered to be candidates for surgical resection. Diagnostic accuracies were informed from literature reviews, prevalence and survival from the Danish Lung Cancer Registry, and procedure costs from national average tariffs. All parameters were specified probabilistically to determine the joint decision uncertainty. The cost-effectiveness analysis was based on the net present value of expected costs and life years accrued over a time horizon of 5 years. At threshold values of around €30,000 for cost-effectiveness, it was found to be cost-effective to send all patients to positron emission tomography-computed tomography with confirmation of positive findings on nodal involvement by endobronchial ultrasound. This result appeared robust in deterministic sensitivity analysis. The expected value of perfect information was estimated at €52 per patient, indicating that further research might be worthwhile. The policy recommendation is to make combined positron emission tomography-computed tomography and endobronchial ultrasound available for supplemental staging of patients with non-small-cell lung cancer. The effects of alternative strategies on patients' quality of life, however, should be examined in future studies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Soer, Wouter
LED luminaires have seen dramatic changes in cost breakdown over the past few years. The LED component cost, which until recently was the dominant portion of luminaire cost, has fallen to a level of the same order as the other luminaire components, such as the driver, housing, optics etc. With the current state of the technology, further luminaire performance improvement and cost reduction is realized most effectively by optimization of the whole system, rather than a single component. This project focuses on improving the integration between LEDs and drivers. Lumileds has developed a light engine platform based on low-cost high-power LEDs and driver topologies optimized for integration with these LEDs on a single substrate. The integration of driver and LEDs enables an estimated luminaire cost reduction of about 25% for targeted applications, mostly due to significant reductions in driver and housing cost. The high-power LEDs are based on Lumileds’ patterned sapphire substrate flip-chip (PSS-FC) technology, affording reduced die fabrication and packaging cost compared to existing technology. Two general versions of PSS-FC die were developed in order to create the desired voltage and flux increments for driver integration: (i) small single-junction die (0.5 mm²), optimal for distributed lighting applications, and (ii) larger multi-junction die (2 mm² and 4 mm²) for high-power directional applications. Two driver topologies were developed: a tapped linear driver topology and a single-stage switch-mode topology, taking advantage of the flexible voltage configurations of the new PSS-FC die and the simplification opportunities enabled by integration of LEDs and driver on the same board. A prototype light engine was developed for an outdoor “core module” application based on the multi-junction PSS-FC die and the single-stage switch-mode driver. The light engine meets the project efficacy target of 128 lm/W at a luminous flux greater than 4100 lm, a correlated color temperature (CCT) of 4000K and a color rendering index (CRI) greater than 70.
Low-Cost Propellant Launch From a Tethered Balloon
NASA Technical Reports Server (NTRS)
Wilcox, Brian
2006-01-01
A document presents a concept for relatively inexpensive delivery of propellant to a large fuel depot in low orbit around the Earth, for use in rockets destined for higher orbits, the Moon, and for remote planets. The propellant is expected to be at least 85 percent of the mass needed in low Earth orbit to support the NASA Exploration Vision. The concept calls for the use of many small (~10-ton) spin-stabilized, multistage, solid-fuel rockets to each deliver 250 kg of propellant. Each rocket would be winched up to a balloon tethered above most of the atmospheric mass (optimal altitude 26 ± 2 km). There, the rocket would be aimed slightly above the horizon, spun, dropped, and fired at a time chosen so that the rocket would arrive in orbit near the depot. Small thrusters on the payload (powered, for example, by boil-off gases from cryogenic propellants that make up the payload) would precess the spinning rocket, using data from a low-cost inertial sensor to correct for small aerodynamic and solid rocket nozzle misalignment torques on the spinning rocket; would manage the angle of attack and the final orbit insertion burn; and would be fired on command from the depot in response to observations of the trajectory of the payload so as to make small corrections to bring the payload into a rendezvous orbit and despin it for capture by the depot. The system is low-cost because the small rockets can be mass-produced using the same techniques as those to produce automobiles and low-cost munitions, and one or more can be launched from a U.S. territory on the equator (Baker or Jarvis Islands in the mid-Pacific) to the fuel depot on each orbit (every 90 minutes, e.g., any multiple of 6,000 per year).
Development of Miniaturized Optimized Smart Sensors (MOSS) for space plasmas
NASA Technical Reports Server (NTRS)
Young, D. T.
1993-01-01
The cost of space plasma sensors is high for several reasons: (1) Most are one-of-a-kind and state-of-the-art, (2) the cost of launch to orbit is high, (3) ruggedness and reliability requirements lead to costly development and test programs, and (4) overhead is added by overly elaborate or generalized spacecraft interface requirements. Possible approaches to reducing costs include development of small 'sensors' (defined as including all necessary optics, detectors, and related electronics) that will ultimately lead to cheaper missions by reducing (2), improving (3), and, through work with spacecraft designers, reducing (4). Despite this logical approach, there is no guarantee that smaller sensors are necessarily either better or cheaper. We have previously advocated applying analytical 'quality factors' to plasma sensors (and spacecraft) and have begun to develop miniaturized particle optical systems by applying quantitative optimization criteria. We are currently designing a Miniaturized Optimized Smart Sensor (MOSS) in which miniaturized electronics (e.g., employing new power supply topology and extensive use of gate arrays and hybrid circuits) are fully integrated with newly developed particle optics to give significant savings in volume and mass. The goal of the SwRI MOSS program is development of a fully self-contained and functional plasma sensor weighing 1 lb and requiring 1 W. MOSS will require only a typical spacecraft DC power source (e.g., 30 V) and command/data interfaces in order to be fully functional, and will provide measurement capabilities comparable in most ways to current sensors.
NASA Astrophysics Data System (ADS)
Ning, A.; Dykes, K.
2014-06-01
For utility-scale wind turbines, the maximum rotor rotation speed is generally constrained by noise considerations. Innovations in acoustics and/or siting in remote locations may enable future wind turbine designs to operate with higher tip speeds. Wind turbines designed to take advantage of higher tip speeds are expected to be able to capture more energy and utilize lighter drivetrains because of their decreased maximum torque loads. However, the magnitude of the potential cost savings is unclear, and the potential trade-offs with rotor and tower sizing are not well understood. A multidisciplinary, system-level framework was developed to facilitate wind turbine and wind plant analysis and optimization. The rotors, nacelles, and towers of wind turbines are optimized for minimum cost of energy subject to a large number of structural, manufacturing, and transportation constraints. These optimization studies suggest that allowing for higher maximum tip speeds could result in a decrease in the cost of energy of up to 5% for land-based sites and 2% for offshore sites when using current technology. Almost all of the cost savings are attributed to the decrease in gearbox mass as a consequence of the reduced maximum rotor torque. Although there is some increased energy capture, it is very minimal (less than 0.5%). Extreme increases in tip speed are unnecessary; benefits for maximum tip speeds greater than 100-110 m/s are small to nonexistent.
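The torque mechanism behind the gearbox savings is simple to check: at fixed rated power, rated torque scales inversely with rotor speed, and rotor speed scales with the allowable tip speed. A minimal sketch with an assumed 5 MW reference turbine:

```python
# Rated torque scales as P / omega, and omega = v_tip / R, so raising the
# allowable tip speed at fixed rated power directly cuts drivetrain torque.
P = 5.0e6          # rated power [W] (assumed reference turbine)
R = 63.0           # rotor radius [m]

for v_tip in (80.0, 90.0, 100.0, 110.0):
    omega = v_tip / R                  # rated rotor speed [rad/s]
    torque = P / omega                 # rated rotor torque [N*m]
    print(f"tip speed {v_tip:5.1f} m/s -> torque {torque/1e6:5.2f} MN*m")
```

Going from 80 to 100 m/s cuts rated torque by 20%, which is the lever behind the gearbox-mass reduction the study identifies as the main source of cost-of-energy savings.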
Progress toward Modular UAS for Geoscience Applications
NASA Astrophysics Data System (ADS)
Dahlgren, R. P.; Clark, M. A.; Comstock, R. J.; Fladeland, M.; Gascot, H., III; Haig, T. H.; Lam, S. J.; Mazhari, A. A.; Palomares, R. R.; Pinsker, E. A.; Prathipati, R. T.; Sagaga, J.; Thurling, J. S.; Travers, S. V.
2017-12-01
Small Unmanned Aerial Systems (UAS) have become accepted tools for geoscience, ecology, agriculture, disaster response, land management, and industry. A variety of consumer UAS options exist as science and engineering payload platforms, but their incompatibilities with one another contribute to high operational costs compared with those of piloted aircraft. This research explores the concept of modular UAS, demonstrating airframes that can be reconfigured in the field for experimental optimization, to enable multi-mission support, facilitate rapid repair, or respond to changing field conditions. Modular UAS is revolutionary in allowing aircraft to be optimized around the payload, reversing the conventional wisdom of designing the payload to accommodate an unmodifiable aircraft. UAS that are reconfigurable like Legos™ are ideal for airborne science service providers, system integrators, instrument designers and end users to fulfill a wide range of geoscience experiments. Modular UAS facilitate the adoption of open-source software and rapid prototyping technology where design reuse is important in the context of a highly regulated industry like aerospace. The industry is now at a stage where consolidation, acquisition, and attrition will reduce the number of small manufacturers, with a reduction of innovation and motivation to reduce costs. Modularity leads to interface specifications, which can evolve into de facto or formal standards which contain minimum (but sufficient) details such that multiple vendors can then design to those standards and demonstrate interoperability. At that stage, vendor coopetition leads to robust interface standards, interoperability standards and multi-source agreements which in turn drive costs down significantly.
NASA Astrophysics Data System (ADS)
Edjlali, Ehsan; Bérubé-Lauzière, Yves
2018-01-01
We present the first Lq-Lp optimization scheme for fluorescence tomographic imaging. This is then applied to small animal imaging. Fluorescence tomography is an ill-posed, and in full generality, a nonlinear problem that seeks to image the 3D concentration distribution of a fluorescent agent inside a biological tissue. Standard candidates for regularization to deal with the ill-posedness of the image reconstruction problem include L1 and L2 regularization. In this work, a general Lq-Lp regularization framework (Lq discrepancy function, Lp regularization term) is introduced for fluorescence tomographic imaging. A method to calculate the gradient for this general framework is developed which allows evaluating the performance of different cost functions/regularization schemes in solving the fluorescence tomographic problem. The simplified spherical harmonics approximation is used to accurately model light propagation inside the tissue. Furthermore, a multigrid mesh is utilized to decrease the dimension of the inverse problem and reduce the computational cost of the solution. The inverse problem is solved iteratively using an lm-BFGS quasi-Newton optimization method. The simulations are performed under different scenarios of noisy measurements. These are carried out on the Digimouse numerical mouse model with the kidney being the target organ. The evaluation of the reconstructed images is performed both qualitatively and quantitatively using several metrics including QR, RMSE, CNR, and TVE under rigorous conditions. The best reconstruction results under different scenarios are obtained with an L1.5-L1 scheme with premature termination of the optimization process. This is in contrast to approaches commonly found in the literature relying on L2-L2 schemes.
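A minimal numerical sketch of the general Lq-Lp idea (not the authors' code): a smoothed Lq data term and Lp penalty on a small random linear problem, minimized by plain gradient descent. The forward matrix, parameters, and smoothing constant are assumptions; the paper models light propagation with simplified spherical harmonics and minimizes with lm-BFGS instead.

```python
import numpy as np

rng = np.random.default_rng(3)

# L1.5 data term, L1 penalty (the paper's best-performing combination),
# both smoothed via |t| ~ sqrt(t^2 + eps) so the gradient exists at zero.
m, n, q, p, lam, eps = 60, 120, 1.5, 1.0, 0.05, 1e-4
A = rng.normal(size=(m, n)) / np.sqrt(m)     # stand-in forward model
x_true = np.zeros(n)
x_true[rng.choice(n, 6, replace=False)] = 1.0
b = A @ x_true + 0.01 * rng.normal(size=m)

def grad(x):
    r = A @ x - b
    # d/dt (1/q)(t^2+eps)^(q/2) = t (t^2+eps)^((q-2)/2); same form for x
    return (A.T @ (r * (r * r + eps) ** ((q - 2) / 2))
            + lam * x * (x * x + eps) ** ((p - 2) / 2))

x = np.zeros(n)
for _ in range(5000):          # fixed-step gradient descent; the paper uses
    x -= 0.02 * grad(x)        # the much faster lm-BFGS quasi-Newton method
print("largest |x| entries:", sorted(np.argsort(-np.abs(x))[:6].tolist()))
print("true support       :", sorted(np.flatnonzero(x_true).tolist()))
```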
The scope of additive manufacturing in cryogenics, component design, and applications
NASA Astrophysics Data System (ADS)
Stautner, W.; Vanapalli, S.; Weiss, K.-P.; Chen, R.; Amm, K.; Budesheim, E.; Ricci, J.
2017-12-01
Additive manufacturing techniques using composites or metals are rapidly gaining momentum in cryogenic applications. Small or large, complex structural components are now no longer limited to mere design studies but can now move into the production stream thanks to new machines on the market that allow for light-weight, cost optimized designs with short turnaround times. The potential for cost reductions from bulk materials machined to tight tolerances has become obvious. Furthermore, additive manufacturing opens doors and design space for cryogenic components that to date did not exist or were not possible in the past, using bulk materials along with elaborate and expensive machining processes, e.g. micromachining. The cryogenic engineer now faces the challenge to design toward those new additive manufacturing capabilities. Additionally, re-thinking designs toward cost optimization and fast implementation also requires detailed knowledge of mechanical and thermal properties at cryogenic temperatures. In the following we compile the information available to date and show a possible roadmap for additive manufacturing applications of parts and components typically used in cryogenic engineering designs.
Optimal sequence of tests for the mediastinal staging of non-small cell lung cancer.
Luque, Manuel; Díez, Francisco Javier; Disdier, Carlos
2016-01-26
Non-small cell lung cancer (NSCLC) is the most prevalent type of lung cancer and the most difficult to predict. When there are no distant metastases, the optimal therapy depends mainly on whether there are malignant lymph nodes in the mediastinum. Given the vigorous debate among specialists about which tests should be used, our goal was to determine the optimal sequence of tests for each patient. We have built an influence diagram (ID) that represents the possible tests, their costs, and their outcomes. This model is equivalent to a decision tree containing millions of branches. In the first evaluation, we only took into account the clinical outcomes (effectiveness). In the second, we used a willingness-to-pay of €30,000 per quality-adjusted life year (QALY) to convert economic costs into effectiveness. We assigned a second-order probability distribution to each parameter in order to conduct several types of sensitivity analysis. Two strategies were obtained using two different criteria. When considering only effectiveness, a positive computed tomography (CT) scan must be followed by a transbronchial needle aspiration (TBNA), an endobronchial ultrasound (EBUS), and an endoscopic ultrasound (EUS). When the CT scan is negative, a positron emission tomography (PET), EBUS, and EUS are performed. If the TBNA or the PET is positive, then a mediastinoscopy is performed only if the EBUS and EUS are negative. If the TBNA or the PET is negative, then a mediastinoscopy is performed only if the EBUS and the EUS give contradictory results. When taking into account economic costs, a positive CT scan is followed by a TBNA; an EBUS is done only when the CT scan or the TBNA is negative. This recommendation of performing a TBNA in certain cases should be discussed by the pneumology community because TBNA is a cheap technique that could avoid an EBUS, an expensive test, for many patients. We have determined the optimal sequence of tests for the mediastinal staging of NSCLC by considering sensitivity, specificity, and the economic cost of each test. The main novelty of our study is the recommendation of performing TBNA whenever the CT scan is positive. Our model is publicly available so that different experts can populate it with their own parameters and re-examine its conclusions. It is therefore proposed as an evidence-based instrument for reaching a consensus.
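The kind of arithmetic an influence diagram automates can be sketched for simple confirm-if-negative test chains. Everything below (prevalence, test accuracies, euro costs, QALY payoffs) is an illustrative assumption, not the paper's calibrated inputs, and real strategies branch in more complex ways.

```python
# Toy expected-cost evaluation of sequential mediastinal staging strategies.
WTP = 30_000          # euros per QALY
PREV = 0.30           # prevalence of mediastinal involvement (N2/N3)

TESTS = {             # name: (sensitivity, specificity, cost in euros)
    "TBNA": (0.65, 0.99, 300),
    "EBUS": (0.90, 0.97, 1200),
    "MED":  (0.80, 1.00, 5000),   # mediastinoscopy
}
QALY = {(True, True): 1.2,   # involvement, detected -> chemoradiotherapy
        (True, False): 0.9,  # involvement, missed   -> futile surgery
        (False, True): 1.8,  # no involvement, false positive
        (False, False): 2.5} # no involvement, surgery offered

def evaluate(chain):
    """Tests applied until one is positive (positive -> stop, call positive;
    all negative -> call negative). Returns (E[cost], E[QALY], net benefit)."""
    e_cost = e_qaly = 0.0
    for disease, p_d in ((True, PREV), (False, 1 - PREV)):
        p_reach = 1.0                    # probability of reaching the next test
        for se, sp, c in (TESTS[t] for t in chain):
            e_cost += p_d * p_reach * c
            p_pos = se if disease else 1 - sp
            e_qaly += p_d * p_reach * p_pos * QALY[(disease, True)]
            p_reach *= 1 - p_pos
        e_qaly += p_d * p_reach * QALY[(disease, False)]
    return e_cost, e_qaly, WTP * e_qaly - e_cost

for chain in (["EBUS"], ["TBNA", "EBUS"], ["TBNA", "EBUS", "MED"]):
    c, q, nmb = evaluate(chain)
    print(f"{'+'.join(chain):15s} cost €{c:7.0f}  QALY {q:.3f}  NMB €{nmb:8.0f}")
```

Ranking chains by net monetary benefit (WTP × QALYs minus cost) is how the two evaluation criteria in the abstract, effectiveness-only versus cost-adjusted, can flip the recommended sequence.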
Computing the Feasible Spaces of Optimal Power Flow Problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Molzahn, Daniel K.
The solution to an optimal power flow (OPF) problem provides a minimum cost operating point for an electric power system. The performance of OPF solution techniques strongly depends on the problem’s feasible space. This paper presents an algorithm that is guaranteed to compute the entire feasible spaces of small OPF problems to within a specified discretization tolerance. Specifically, the feasible space is computed by discretizing certain of the OPF problem’s inequality constraints to obtain a set of power flow equations. All solutions to the power flow equations at each discretization point are obtained using the Numerical Polynomial Homotopy Continuation (NPHC) algorithm. To improve computational tractability, “bound tightening” and “grid pruning” algorithms use convex relaxations to preclude consideration of many discretization points that are infeasible for the OPF problem. Here, the proposed algorithm is used to generate the feasible spaces of two small test cases.
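The discretize-and-solve idea can be seen on a system small enough that the power flow solutions come from a closed-form quadratic rather than the NPHC solver. The sketch below sweeps a discretized demand space for an assumed 2-bus network and keeps points whose high-voltage solution respects assumed voltage limits; it illustrates the feasibility sweep only, not the paper's algorithm or its bound-tightening and grid-pruning steps.

```python
import numpy as np

# Toy 2-bus system: slack bus (V1 = 1 p.u.) feeds a load bus through a
# lossless line of reactance X.  The load-bus voltage satisfies
#   u^2 + (2 Q X - 1) u + (P X)^2 + (Q X)^2 = 0,   with u = |V2|^2.
X = 0.2
v_min, v_max = 0.95, 1.05              # operating voltage limits (assumed)

feasible = []
for P in np.linspace(0.0, 4.0, 81):        # discretized active power demand
    for Q in np.linspace(-1.0, 1.0, 41):   # discretized reactive demand
        bq = 2 * Q * X - 1.0
        disc = bq * bq - 4 * ((P * X) ** 2 + (Q * X) ** 2)
        if disc < 0:                        # power flow has no real solution
            continue
        u = (-bq + np.sqrt(disc)) / 2       # high-voltage solution branch
        if v_min <= np.sqrt(u) <= v_max:
            feasible.append((P, Q))

print(f"{len(feasible)} feasible grid points; max deliverable P within the "
      f"swept window = {max(pt[0] for pt in feasible):.2f} p.u.")
```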
Immunohistochemistry for predictive biomarkers in non-small cell lung cancer.
Mino-Kenudson, Mari
2017-10-01
In the era of targeted therapy, predictive biomarker testing has become increasingly important for non-small cell lung cancer. Of multiple predictive biomarker testing methods, immunohistochemistry (IHC) is widely available and technically less challenging, can provide clinically meaningful results with a rapid turn-around-time and is more cost efficient than molecular platforms. In fact, several IHC assays for predictive biomarkers have already been implemented in routine pathology practice. In this review, we will discuss: (I) the details of anaplastic lymphoma kinase (ALK) and proto-oncogene tyrosine-protein kinase ROS (ROS1) IHC assays including the performance of multiple antibody clones, pros and cons of IHC platforms and various scoring systems to design an optimal algorithm for predictive biomarker testing; (II) issues associated with programmed death-ligand 1 (PD-L1) IHC assays; (III) appropriate pre-analytical tissue handling and selection of optimal tissue samples for predictive biomarker IHC.
Immunohistochemistry for predictive biomarkers in non-small cell lung cancer
2017-01-01
In the era of targeted therapy, predictive biomarker testing has become increasingly important for non-small cell lung cancer. Of multiple predictive biomarker testing methods, immunohistochemistry (IHC) is widely available and technically less challenging, can provide clinically meaningful results with a rapid turn-around-time and is more cost efficient than molecular platforms. In fact, several IHC assays for predictive biomarkers have already been implemented in routine pathology practice. In this review, we will discuss: (I) the details of anaplastic lymphoma kinase (ALK) and proto-oncogene tyrosine-protein kinase ROS (ROS1) IHC assays including the performance of multiple antibody clones, pros and cons of IHC platforms and various scoring systems to design an optimal algorithm for predictive biomarker testing; (II) issues associated with programmed death-ligand 1 (PD-L1) IHC assays; (III) appropriate pre-analytical tissue handling and selection of optimal tissue samples for predictive biomarker IHC. PMID:29114473
Optimal variable-grid finite-difference modeling for porous media
NASA Astrophysics Data System (ADS)
Liu, Xinxin; Yin, Xingyao; Li, Haishan
2014-12-01
Numerical modeling of poroelastic waves by the finite-difference (FD) method is more expensive than that of acoustic or elastic waves. To improve the accuracy and computational efficiency of seismic modeling, variable-grid FD methods have been developed. In this paper, we derived optimal staggered-grid finite difference schemes with variable grid-spacing and time-step for seismic modeling in porous media. FD operators with small grid-spacing and time-step are adopted for low-velocity or small-scale geological bodies, while FD operators with large grid-spacing and time-step are adopted for high-velocity or large-scale regions. The dispersion relations of FD schemes were derived based on the plane wave theory, then the FD coefficients were obtained using the Taylor expansion. Dispersion analysis and modeling results demonstrated that the proposed method has higher accuracy with lower computational cost for poroelastic wave simulation in heterogeneous reservoirs.
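For the coefficient-derivation step, the sketch below reproduces the standard Taylor-matching construction for staggered-grid first-derivative weights; it yields the classic 9/8, -1/24 fourth-order pair at M = 2. The paper's optimal coefficients additionally account for the dispersion relation rather than matching Taylor terms alone, so this is the baseline construction, not their final scheme.

```python
import numpy as np
from math import factorial

def staggered_coeffs(M):
    """Staggered-grid FD weights for f'(0) from samples at +/-(m - 1/2) h,
    m = 1..M, derived by matching Taylor-series terms (Vandermonde solve)."""
    A = np.array([[2 * (m - 0.5) ** (2 * k - 1) / factorial(2 * k - 1)
                   for m in range(1, M + 1)] for k in range(1, M + 1)])
    rhs = np.zeros(M)
    rhs[0] = 1.0                    # match f' exactly; kill higher odd terms
    return np.linalg.solve(A, rhs)

for M in (1, 2, 3):
    print(f"M={M}:", staggered_coeffs(M))
# M=2 reproduces the classic 4th-order weights [9/8, -1/24].
```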
Computing the Feasible Spaces of Optimal Power Flow Problems
Molzahn, Daniel K.
2017-03-15
The solution to an optimal power flow (OPF) problem provides a minimum cost operating point for an electric power system. The performance of OPF solution techniques strongly depends on the problem’s feasible space. This paper presents an algorithm that is guaranteed to compute the entire feasible spaces of small OPF problems to within a specified discretization tolerance. Specifically, the feasible space is computed by discretizing certain of the OPF problem’s inequality constraints to obtain a set of power flow equations. All solutions to the power flow equations at each discretization point are obtained using the Numerical Polynomial Homotopy Continuation (NPHC) algorithm. To improve computational tractability, “bound tightening” and “grid pruning” algorithms use convex relaxations to preclude consideration of many discretization points that are infeasible for the OPF problem. Here, the proposed algorithm is used to generate the feasible spaces of two small test cases.
Utilization of Optimization for Design of Morphing Wing Structures for Enhanced Flight
NASA Astrophysics Data System (ADS)
Detrick, Matthew Scott
Conventional aircraft control surfaces constrain maneuverability. This work is a comprehensive study of both smart-material and conventional actuation methods for achieving wing twist, aiming to improve flight capability with minimal actuation energy while allowing minimal wing deformation under aerodynamic loading. A continuous wing is used in order to reduce drag while allowing the aircraft to more closely approximate the wing deformation used by birds while loitering. The morphing wing for this work consists of a skin supported by an underlying truss structure whose goal is to achieve a given roll moment using less actuation energy than conventional control surfaces. A structural optimization code has been written in order to achieve minimal wing deformation under aerodynamic loading while allowing wing twist under actuation. The multi-objective cost function for the optimization consists of terms that ensure small deformation under aerodynamic loading, small change in airfoil shape during wing twist, a linear variation of wing twist along the length of the wing, small deviation from the desired wing twist, minimal number of truss members, minimal wing weight, and minimal actuation energy. Hydraulic cylinders and a two member linkage driven by a DC motor are tested separately to provide actuation. Since the goal of the current work is simply to provide a roll moment, only one actuator is implemented along the wing span. Optimization is also used to find the best location within the truss structure for the actuator. The active structure produced by optimization is then compared to simulated and experimental results from other researchers as well as characteristics of conventional aircraft.
Jitrwung, Rujira; Yargeau, Viviane
2015-01-01
Crude glycerol from the biodiesel manufacturing process is being produced in increasing quantities due to the expanding number of biodiesel plants. It has been previously shown that, in batch mode, semi-anaerobic fermentation of crude glycerol by Enterobacter aerogenes can produce biohydrogen and bioethanol simultaneously. The present study demonstrated the possible scaling-up of this process from small batches performed in small bottles to a 3.6-L continuous stirred-tank reactor (CSTR). Fresh feed rate, liquid recycling, pH, mixing speed, glycerol concentration, and waste recycling were optimized for biohydrogen and bioethanol production. Results confirmed that E. aerogenes uses small amounts of oxygen under semi-anaerobic conditions for growth before using oxygen from decomposable salts, mainly NH4NO3, under anaerobic condition to produce hydrogen and ethanol. The optimal conditions were determined to be 500 rpm, pH 6.4, 18.5 g/L crude glycerol (15 g/L glycerol) and 33% liquid recycling for a fresh feed rate of 0.44 mL/min. Using these optimized conditions, the process ran at a lower media cost than previous studies, was stable after 7 days without further inoculation and resulted in yields of 0.86 mol H2/mol glycerol and 0.75 mol ethanol/mol glycerol. PMID:25970750
Jitrwung, Rujira; Yargeau, Viviane
2015-05-11
Crude glycerol from the biodiesel manufacturing process is being produced in increasing quantities due to the expanding number of biodiesel plants. It has been previously shown that, in batch mode, semi-anaerobic fermentation of crude glycerol by Enterobacter aerogenes can produce biohydrogen and bioethanol simultaneously. The present study demonstrated the possible scaling-up of this process from small batches performed in small bottles to a 3.6-L continuous stirred-tank reactor (CSTR). Fresh feed rate, liquid recycling, pH, mixing speed, glycerol concentration, and waste recycling were optimized for biohydrogen and bioethanol production. Results confirmed that E. aerogenes uses small amounts of oxygen under semi-anaerobic conditions for growth before using oxygen from decomposable salts, mainly NH4NO3, under anaerobic condition to produce hydrogen and ethanol. The optimal conditions were determined to be 500 rpm, pH 6.4, 18.5 g/L crude glycerol (15 g/L glycerol) and 33% liquid recycling for a fresh feed rate of 0.44 mL/min. Using these optimized conditions, the process ran at a lower media cost than previous studies, was stable after 7 days without further inoculation and resulted in yields of 0.86 mol H2/mol glycerol and 0.75 mol ethanol/mol glycerol.
Optimal Sparse Upstream Sensor Placement for Hydrokinetic Turbines
NASA Astrophysics Data System (ADS)
Cavagnaro, Robert; Strom, Benjamin; Ross, Hannah; Hill, Craig; Polagye, Brian
2016-11-01
Accurate measurement of the flow field incident upon a hydrokinetic turbine is critical for performance evaluation during testing and setting boundary conditions in simulation. Additionally, turbine controllers may leverage real-time flow measurements. Particle image velocimetry (PIV) is capable of rendering a flow field over a wide spatial domain in a controlled, laboratory environment. However, PIV's lack of suitability for natural marine environments, high cost, and intensive post-processing diminish its potential for control applications. Conversely, sensors such as acoustic Doppler velocimeters (ADVs), are designed for field deployment and real-time measurement, but over a small spatial domain. Sparsity-promoting regression analysis such as LASSO is utilized to improve the efficacy of point measurements for real-time applications by determining optimal spatial placement for a small number of ADVs using a training set of PIV velocity fields and turbine data. The study is conducted in a flume (0.8 m² cross-sectional area, 1 m/s flow) with laboratory-scale axial and cross-flow turbines.
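A compact illustration of the sparsity-promoting selection step, using synthetic stand-ins for the PIV training snapshots (the real study trains on measured flow fields and turbine data; scikit-learn's Lasso is one possible implementation of the L1-penalized regression):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(4)

# Stand-in for PIV training data: velocity snapshots sampled at n_points
# candidate ADV locations, plus turbine power for each snapshot.
n_snapshots, n_points = 200, 400
modes = rng.normal(size=(3, n_points))            # three synthetic flow modes
amps = rng.normal(size=(n_snapshots, 3))
U = amps @ modes + 0.05 * rng.normal(size=(n_snapshots, n_points))
power = amps @ np.array([2.0, -1.0, 0.5])         # turbine response to modes

# The L1 penalty drives most weights to zero, so the surviving nonzero
# coefficients mark the few grid locations worth instrumenting with ADVs.
model = Lasso(alpha=0.1, max_iter=5000).fit(U, power)
sensors = np.flatnonzero(model.coef_)
print(f"{len(sensors)} of {n_points} locations selected, e.g.:", sensors[:10])
```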
Hitting the Optimal Vaccination Percentage and the Risks of Error: Why to Miss Right.
Harvey, Michael J; Prosser, Lisa A; Messonnier, Mark L; Hutton, David W
2016-01-01
To determine the optimal level of vaccination coverage defined as the level that minimizes total costs and explore how economic results change with marginal changes to this level of coverage. A susceptible-infected-recovered-vaccinated model designed to represent theoretical infectious diseases was created to simulate disease spread. Parameter inputs were defined to include ranges that could represent a variety of possible vaccine-preventable conditions. Costs included vaccine costs and disease costs. Health benefits were quantified as monetized quality adjusted life years lost from disease. Primary outcomes were the number of infected people and the total costs of vaccination. Optimization methods were used to determine population vaccination coverage that achieved a minimum cost given disease and vaccine characteristics. Sensitivity analyses explored the effects of changes in reproductive rates, costs and vaccine efficacies on primary outcomes. Further analysis examined the additional cost incurred if the optimal coverage levels were not achieved. Results indicate that the relationship between vaccine and disease cost is the main driver of the optimal vaccination level. Under a wide range of assumptions, vaccination beyond the optimal level is less expensive compared to vaccination below the optimal level. This observation did not hold when the cost of the vaccine became approximately equal to the cost of disease. These results suggest that vaccination below the optimal level of coverage is more costly than vaccinating beyond the optimal level. This work helps provide information for assessing the impact of changes in vaccination coverage at a societal level.
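The asymmetry argument is easy to reproduce with a minimal SIR-with-vaccination model; all rates and costs below are illustrative assumptions, not the paper's calibrated inputs.

```python
import numpy as np

# SIR model with pre-outbreak vaccination coverage v; total cost is vaccine
# spending plus monetized illness.
N, R0 = 1_000_000, 2.5
c_vax, c_ill = 40.0, 3000.0          # cost per vaccination / per infection

def total_infections(v, days=500):
    beta, gamma = R0 / 5.0, 1.0 / 5.0      # 5-day infectious period
    S, I, R = N * (1 - v) - 10, 10.0, N * v
    for _ in range(days):                   # daily Euler steps
        new_inf = beta * S * I / N
        S, I, R = S - new_inf, I + new_inf - gamma * I, R + gamma * I
    return N * (1 - v) - S                  # cumulative infections

coverage = np.linspace(0.0, 1.0, 101)
cost = np.array([c_vax * v * N + c_ill * total_infections(v)
                 for v in coverage])
v_star = coverage[cost.argmin()]
print(f"optimal coverage ~ {v_star:.0%}, total cost ~ ${cost.min()/1e6:.1f}M")
# Overshooting v* wastes roughly c_vax per extra dose, while undershooting
# incurs c_ill per extra case; with c_ill >> c_vax, missing high is cheaper.
```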
High-power CO laser with RF discharge for isotope separation employing condensation repression
NASA Astrophysics Data System (ADS)
Baranov, I. Ya.; Koptev, A. V.
2008-10-01
The high-power CO laser can be an effective tool in applications such as isotope separation using the free-jet CRISLA method. A path from small-scale experimental CO installations to industrial high-power CO lasers is proposed through the use of a low-current radio-frequency (RF) electric discharge in a supersonic stream without an electron gun. A model for scaling CO lasers with an RF discharge in a supersonic stream was developed. The model allows the parameters of the laser installation to be calculated and optimized for high efficiency and low overall cost. A technical design of an industrial CO laser for isotope separation employing condensation repression is considered. The estimated cost of the laser is a few hundred thousand US dollars, and the small size of the laser head allows it to be installed almost anywhere.
Thin sheets achieve optimal wrapping of liquids
NASA Astrophysics Data System (ADS)
Paulsen, Joseph; Démery, Vincent; Davidovitch, Benny; Santangelo, Christian; Russell, Thomas; Menon, Narayanan
2015-03-01
A liquid drop can wrap itself in a sheet using capillary forces [Py et al., PRL 98, 2007]. However, the efficiency of “capillary origami” at covering the surface of a drop is hampered by the mechanical cost of bending the sheet. Thinner sheets deform more readily by forming small-scale wrinkles and stress-focussing patterns, but it is unclear how coverage efficiency competes with mechanical cost as thickness is decreased, and what wrapping shapes will emerge. We place a thin (~100 nm) polymer film on a drop whose volume is gradually decreased so that the sheet covers an increasing fraction of its surface. The sheet exhibits a complex sequence of axisymmetric and polygonal partially- and fully-wrapped shapes. Remarkably, the progression appears independent of mechanical properties. The gross shape, which neglects small-scale features, is correctly predicted by a simple geometric approach wherein the exposed area is minimized. Thus, simply using a thin enough sheet results in maximal coverage.
The cost-effectiveness of diagnostic management strategies for adults with minor head injury.
Holmes, M W; Goodacre, S; Stevenson, M D; Pandor, A; Pickering, A
2012-09-01
To estimate the cost-effectiveness of diagnostic management strategies for adults with minor head injury. A mathematical model was constructed to evaluate the incremental costs and effectiveness (quality-adjusted life years gained, QALYs) of ten diagnostic management strategies for adults with minor head injuries. Secondary analyses were undertaken to determine the cost-effectiveness of hospital admission compared to discharge home and to explore the cost-effectiveness of strategies when no responsible adult was available to observe the patient after discharge. The apparent optimal strategy was based on the high and medium risk Canadian CT Head Rule (CCHRhm), although the costs and outcomes associated with each strategy were broadly similar. Hospital admission for patients with non-neurosurgical injury on CT dominated discharge home, whilst hospital admission for clinically normal patients with a normal CT was not cost-effective compared to discharge home with or without a responsible adult at £39 and £2.5 million per QALY, respectively. A selective CT strategy with discharge home if the CT scan was normal remained optimal compared to not investigating or CT scanning all patients when there was no responsible adult available to observe them after discharge. Our economic analysis confirms that the recent extension of access to CT scanning for minor head injury is appropriate. Liberal use of CT scanning based on a high sensitivity decision rule is not only effective but also cost-saving. The cost of CT scanning is very small compared to the estimated cost of caring for patients with brain injury worsened by delayed treatment. It is recommended therefore that all hospitals receiving patients with minor head injury should have unrestricted access to CT scanning for use in conjunction with evidence based guidelines. Provisionally the CCHRhm decision rule appears to be the best strategy although there is considerable uncertainty around the optimal decision rule. However, the CCHRhm rule appears to be the most widely validated and it therefore seems appropriate to conclude that the CCHRhm rule has the best evidence to support its use.
Optimal Energy Management for Microgrids
NASA Astrophysics Data System (ADS)
Zhao, Zheng
The microgrid is a novel concept in the development of the smart grid. A microgrid is a low-voltage, small-scale network containing both distributed energy resources (DERs) and load demands. Clean energy is encouraged in a microgrid for economic and sustainability reasons. A microgrid can operate in two modes, stand-alone and grid-connected. In this research, day-ahead optimal energy management for a microgrid under both operational modes is studied. The objective of the optimization model is to minimize fuel cost, improve energy utilization efficiency and reduce gas emissions by scheduling the generation of DERs for each hour of the next day. Considering the dynamic performance of the battery as the Energy Storage System (ESS), the model is a multi-objective, multi-parametric program with dynamic-programming constraints, which is solved using the Advanced Dynamic Programming (ADP) method. Then, factors influencing battery life are studied and included in the model in order to obtain an optimal battery usage pattern and reduce the associated cost. Moreover, since wind and solar generation is a stochastic process affected by weather changes, the proposed optimization model is performed hourly to track the weather changes. Simulation results are compared with the day-ahead energy management model. Finally, conclusions are presented and future research in microgrid energy management is discussed.
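A stripped-down version of the day-ahead problem shows the dynamic-programming structure: backward induction over a discretized battery state of charge against an hourly fuel price, ignoring emissions objectives, conversion losses, and stochastic renewables. All profiles and limits below are invented for illustration.

```python
import numpy as np

hours = 24
t_idx = np.arange(hours)
net_load = 2.0 + 1.5 * np.sin(2 * np.pi * t_idx / 24 - 2.0)       # MW
fuel_cost = 60.0 + 25.0 * (net_load - net_load.min())             # $/MWh

soc = np.linspace(0.0, 4.0, 41)      # battery state-of-charge grid [MWh]
p_max = 1.0                          # charge/discharge limit [MW]

cost_to_go = np.zeros(len(soc))      # terminal condition: any final SOC
policy = np.zeros((hours, len(soc)), dtype=int)

for t in range(hours - 1, -1, -1):   # backward induction over hours
    new = np.full(len(soc), np.inf)
    for i in range(len(soc)):
        for j in range(len(soc)):
            p_batt = soc[i] - soc[j]                 # >0 means discharging
            gen = net_load[t] - p_batt               # generator output
            if abs(p_batt) > p_max or gen < 0:
                continue                             # infeasible transition
            c = fuel_cost[t] * gen + cost_to_go[j]
            if c < new[i]:
                new[i], policy[t, i] = c, j
    cost_to_go = new

i = 20                               # start half full (2.0 MWh)
print(f"optimal fuel cost: ${cost_to_go[i]:,.0f}")
path = [soc[i]]
for t in range(hours):
    i = policy[t, i]
    path.append(soc[i])
print("SOC trajectory:", " ".join(f"{s:.1f}" for s in path))
```

The optimal policy charges in cheap off-peak hours and discharges at the price peak; battery-life terms and emissions would enter as extra stage costs in the same recursion.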
Tradeoffs between costs and greenhouse gas emissions in the design of urban transit systems
NASA Astrophysics Data System (ADS)
Griswold, Julia B.; Madanat, Samer; Horvath, Arpad
2013-12-01
Recent investments in the transit sector to address greenhouse gas emissions have concentrated on purchasing efficient replacement vehicles and inducing mode shift from the private automobile. There has been little focus on the potential of network and operational improvements, such as changes in headways, route spacing, and stop spacing, to reduce transit emissions. Most models of transit system design consider user and agency cost while ignoring emissions and the potential environmental benefit of operational improvements. We use a model to evaluate the user and agency costs as well as greenhouse gas benefit of design and operational improvements to transit systems. We examine how the operational characteristics of urban transit systems affect both costs and greenhouse gas emissions. The research identifies the Pareto frontier for designing an idealized transit network. Modes considered include bus, bus rapid transit (BRT), light rail transit (LRT), and metro (heavy) rail, with cost and emissions parameters appropriate for the United States. Passenger demand follows a many-to-many travel pattern with uniformly distributed origins and destinations. The approaches described could be used to optimize the network design of existing bus service or help to select a mode and design attributes for a new transit system. The results show that BRT provides the lowest cost but not the lowest emissions for our large city scenarios. Bus and LRT systems have low costs and the lowest emissions for our small city scenarios. Relatively large reductions in emissions from the cost-optimal system can be achieved with only minor increases in user travel time.
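The Pareto-frontier computation can be sketched for a single idealized bus corridor: sweep two design variables (headway and stop spacing), score each design on total cost and emissions, and filter out dominated designs. The demand, speed, cost, and emission coefficients below are rough placeholders, and the model is far cruder than the paper's many-to-many network treatment.

```python
import numpy as np

L, demand, vot = 10.0, 2000.0, 15.0        # km corridor, trips/h, $/h of time
headways = np.linspace(2, 20, 30)          # minutes
spacings = np.linspace(0.2, 1.0, 30)       # km between stops

designs = []
for h in headways:
    for s in spacings:
        wait = h / 2 / 60                                  # h
        access = (s / 4) / 5                               # walk at 5 km/h
        speed = 1 / (1 / 30 + 30 / 3600 / s)               # 30 s dwell per stop
        ride = (L / 2) / speed
        user_cost = demand * vot * (wait + access + ride)  # $/h
        buses = np.ceil(2 * L / speed / (h / 60))          # fleet for headway
        agency_cost = 80.0 * buses                         # $/bus-hour
        emissions = 1.3 * buses * speed                    # kg CO2/h
        designs.append((user_cost + agency_cost, emissions, h, s))

pareto = [d for d in designs
          if not any(o[0] <= d[0] and o[1] <= d[1] and o[:2] != d[:2]
                     for o in designs)]
print(f"{len(pareto)} Pareto-efficient designs of {len(designs)}")
for c, e, h, s in sorted(pareto)[:5]:
    print(f"cost ${c:7.0f}/h  CO2 {e:5.0f} kg/h  "
          f"headway {h:4.1f} min  stop spacing {s:.2f} km")
```

Walking down this frontier from the cost-optimal end shows the paper's headline effect: the first emission reductions are bought with only minor increases in user travel time.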
When to Wait for More Evidence? Real Options Analysis in Proton Therapy
Abrams, Keith R.; de Ruysscher, Dirk; Pijls-Johannesma, Madelon; Peters, Hans J.M.; Beutner, Eric; Lambin, Philippe; Joore, Manuela A.
2011-01-01
Purpose. Trends suggest that cancer spending growth will accelerate. One method for controlling costs is to examine whether the benefits of new technologies are worth the extra costs. However, especially new and emerging technologies are often more costly, while limited clinical evidence of superiority is available. In that situation it is often unclear whether to adopt the new technology now, with the risk of investing in a suboptimal therapy, or to wait for more evidence, with the risk of withholding patients their optimal treatment. This trade-off is especially difficult when it is costly to reverse the decision to adopt a technology, as is the case for proton therapy. Real options analysis, a technique originating from financial economics, assists in making this trade-off. Methods. We examined whether to adopt proton therapy, as compared to stereotactic body radiotherapy, in the treatment of inoperable stage I non-small cell lung cancer. Three options are available: adopt without further research; adopt and undertake a trial; or delay adoption and undertake a trial. The decision depends on the expected net gain of each option, calculated by subtracting its total costs from its expected benefits. Results. In The Netherlands, adopt and trial was found to be the preferred option, with an optimal sample size of 200 patients. Increase of treatment costs abroad and costs of reversal altered the preferred option. Conclusion. We have shown that real options analysis provides a transparent method of weighing the costs and benefits of adopting and/or further researching new and expensive technologies. PMID:22147003
Downside Risk Optimization of the Thrift Savings Plan Lifecycle Fund Portfolios
2010-03-01
ETF funds follow indices like the TSP individual funds but are valued by investors due to their “stock-like” features and low administrative costs...investors worldwide. According to US News and World Report, actively managed stock funds lost nearly 41% on average in 2008 (Mardquardt, 2009)...TSP funds: the Government Securities Investment (G) Fund, Fixed Income Index Investment (F) Fund, Common Stock Index Investment (C) Fund, Small
Optimizing conceptual aircraft designs for minimum life cycle cost
NASA Technical Reports Server (NTRS)
Johnson, Vicki S.
1989-01-01
A life cycle cost (LCC) module has been added to the FLight Optimization System (FLOPS), allowing the additional optimization variables of life cycle cost, direct operating cost, and acquisition cost. Extensive use of the methodology on short-, medium-, and medium-to-long-range aircraft has demonstrated that the system works well. Results from the study show that the optimization parameter has a definite effect on the aircraft, and that optimizing an aircraft for minimum LCC results in a different airplane than when optimizing for minimum take-off gross weight (TOGW), fuel burned, direct operating cost (DOC), or acquisition cost. Additionally, the economic assumptions can have a strong impact on the configurations optimized for minimum LCC or DOC. Also, results show that advanced technology can be worthwhile, even if it results in higher manufacturing and operating costs. Examining the number of engines a configuration should have demonstrated a real payoff of including life cycle cost in the conceptual design process: the minimum-TOGW or minimum-fuel aircraft did not always have the lowest life cycle cost when considering the number of engines.
The magnitude and colour of noise in genetic negative feedback systems
Voliotis, Margaritis; Bowsher, Clive G.
2012-01-01
The comparative ability of transcriptional and small RNA-mediated negative feedback to control fluctuations or ‘noise’ in gene expression remains unexplored. Both autoregulatory mechanisms usually suppress the average (mean) of the protein level and its variability across cells. The variance of the number of proteins per molecule of mean expression is also typically reduced compared with the unregulated system, but is almost never below the value of one. This relative variance often substantially exceeds a recently obtained, theoretical lower limit for biochemical feedback systems. Adding the transcriptional or small RNA-mediated control has different effects. Transcriptional autorepression robustly reduces both the relative variance and persistence (lifetime) of fluctuations. Both benefits combine to reduce noise in downstream gene expression. Autorepression via small RNA can achieve more extreme noise reduction and typically has less effect on the mean expression level. However, it is often more costly to implement and is more sensitive to rate parameters. Theoretical lower limits on the relative variance are known to decrease slowly as a measure of the cost per molecule of mean expression increases. However, the proportional increase in cost to achieve substantial noise suppression can be different away from the optimal frontier—for transcriptional autorepression, it is frequently negligible. PMID:22581772
Effect of heliostat size on the levelized cost of electricity for power towers
NASA Astrophysics Data System (ADS)
Pidaparthi, Arvind; Hoffmann, Jaap
2017-06-01
The objective of this study is to investigate the effects of heliostat size on the levelized cost of electricity (LCOE) for power tower plants. These effects are analyzed in a power tower with a net capacity of 100 MWe, 8 hours of thermal energy storage and a solar multiple of 1.8 in Upington, South Africa. Large, medium, and small heliostats with areas of 115.56 m², 43.3 m², and 15.67 m², respectively, are considered for comparison. A radial-staggered pattern and an external cylindrical receiver are considered for the heliostat field layouts. The optical performance of the optimized heliostat field layouts has been evaluated by the Hermite (analytical) method using SolarPILOT, a tool used for the generation and optimization of the heliostat field layout. The heliostat cost per unit is calculated separately for the three different heliostat sizes, including the effects of size scaling, learning curve benefits and the price index. The annual operation and maintenance (O&M) costs are estimated separately for the three heliostat fields, where the number of personnel required in the field is determined by the number of heliostats in the field. The LCOE values are used as a figure of merit to compare the different heliostat sizes. The results, which include the economic and the optical performance along with the annual O&M costs, indicate that the lowest LCOE values are achieved by the medium-size heliostat with an area of 43.3 m² for this configuration. This study will help power tower developers determine the optimal heliostat size for power tower plants currently in the development stage.
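The size trade-off the study quantifies (per-area cost falling with unit size, O&M headcount rising with unit count) can be caricatured in a few lines. Every coefficient below is an invented placeholder and the optical-performance differences between the fields are ignored, so only the structure of the comparison carries over, not the numbers.

```python
# Back-of-envelope LCOE comparison across heliostat sizes.
field_area = 8.0e5                     # total mirror area needed [m^2]
sizes = {"large": 115.56, "medium": 43.3, "small": 15.67}   # m^2 per heliostat
base_cost = 120.0                      # $/m^2 for the large heliostat
annual_energy = 480_000.0              # MWh/yr, same field output assumed
crf = 0.08                             # capital recovery factor

for name, a in sizes.items():
    n = field_area / a
    # smaller units: cheaper structure per m^2 (wind-load scaling) but a
    # fixed drive/controller cost per unit; crude net effect assumed here
    unit_cost_m2 = base_cost * (a / 115.56) ** 0.25 + 450.0 / a
    capex = unit_cost_m2 * field_area
    om = 40_000.0 * (2 + n / 4000.0)   # field staff scales with unit count
    lcoe = (crf * capex + om) / annual_energy
    print(f"{name:6s} ({a:6.2f} m^2): {n:7.0f} units, "
          f"field LCOE contribution {lcoe:5.2f} $/MWh")
```

With these assumed scalings the medium heliostat wins: the small one pays for too many drives and too much field staff, the large one for heavier structure per square meter, which mirrors the qualitative conclusion of the study.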
Valuation of plug-in vehicle life-cycle air emissions and oil displacement benefits
Michalek, Jeremy J.; Chester, Mikhail; Jaramillo, Paulina; Samaras, Constantine; Shiau, Ching-Shin Norman; Lave, Lester B.
2011-01-01
We assess the economic value of life-cycle air emissions and oil consumption from conventional vehicles, hybrid-electric vehicles (HEVs), plug-in hybrid-electric vehicles (PHEVs), and battery electric vehicles in the US. We find that plug-in vehicles may reduce or increase externality costs relative to grid-independent HEVs, depending largely on greenhouse gas and SO2 emissions produced during vehicle charging and battery manufacturing. However, even if future marginal damages from emissions of battery and electricity production drop dramatically, the damage reduction potential of plug-in vehicles remains small compared to ownership cost. As such, to offer a socially efficient approach to emissions and oil consumption reduction, lifetime cost of plug-in vehicles must be competitive with HEVs. Current subsidies intended to encourage sales of plug-in vehicles with large capacity battery packs exceed our externality estimates considerably, and taxes that optimally correct for externality damages would not close the gap in ownership cost. In contrast, HEVs and PHEVs with small battery packs reduce externality damages at low (or no) additional cost over their lifetime. Although large battery packs allow vehicles to travel longer distances using electricity instead of gasoline, large packs are more expensive, heavier, and more emissions intensive to produce, with lower utilization factors, greater charging infrastructure requirements, and life-cycle implications that are more sensitive to uncertain, time-sensitive, and location-specific factors. To reduce air emission and oil dependency impacts from passenger vehicles, strategies to promote adoption of HEVs and PHEVs with small battery packs offer more social benefits per dollar spent. PMID:21949359
Valuation of plug-in vehicle life-cycle air emissions and oil displacement benefits.
Michalek, Jeremy J; Chester, Mikhail; Jaramillo, Paulina; Samaras, Constantine; Shiau, Ching-Shin Norman; Lave, Lester B
2011-10-04
We assess the economic value of life-cycle air emissions and oil consumption from conventional vehicles, hybrid-electric vehicles (HEVs), plug-in hybrid-electric vehicles (PHEVs), and battery electric vehicles in the US. We find that plug-in vehicles may reduce or increase externality costs relative to grid-independent HEVs, depending largely on greenhouse gas and SO(2) emissions produced during vehicle charging and battery manufacturing. However, even if future marginal damages from emissions of battery and electricity production drop dramatically, the damage reduction potential of plug-in vehicles remains small compared to ownership cost. As such, to offer a socially efficient approach to emissions and oil consumption reduction, lifetime cost of plug-in vehicles must be competitive with HEVs. Current subsidies intended to encourage sales of plug-in vehicles with large capacity battery packs exceed our externality estimates considerably, and taxes that optimally correct for externality damages would not close the gap in ownership cost. In contrast, HEVs and PHEVs with small battery packs reduce externality damages at low (or no) additional cost over their lifetime. Although large battery packs allow vehicles to travel longer distances using electricity instead of gasoline, large packs are more expensive, heavier, and more emissions intensive to produce, with lower utilization factors, greater charging infrastructure requirements, and life-cycle implications that are more sensitive to uncertain, time-sensitive, and location-specific factors. To reduce air emission and oil dependency impacts from passenger vehicles, strategies to promote adoption of HEVs and PHEVs with small battery packs offer more social benefits per dollar spent.
Solar pond power plant feasibility study for Davis, California
NASA Technical Reports Server (NTRS)
Wu, Y. C.; Singer, M. J.; Marsh, H. E.; Harris, J.; Walton, A. L.
1982-01-01
The feasibility of constructing a solar pond power plant at Davis, California, was studied. The work included site visits, weather data compilation, soil and water analyses, conceptual system design and analysis, a material and equipment market survey, a conceptual site layout, and a preliminary cost estimate. It was concluded that a solar pond power plant is technically feasible but economically unattractive. The relatively small scale of the proposed plant and the high cost of importing salt resulted in a disproportionately high capital investment with respect to the annual energy production capacity of the plant. Cycle optimization and increased plant size would improve the economic attractiveness of the proposed concept.
Open-source meteor detection software for low-cost single-board computers
NASA Astrophysics Data System (ADS)
Vida, D.; Zubović, D.; Šegon, D.; Gural, P.; Cupec, R.
2016-01-01
This work aims to overcome the current price threshold of meteor stations, which can deter meteor enthusiasts from owning one. In recent years, small card-sized computers have become widely available and are used for numerous applications. To utilize such computers for meteor work, software that can run on them is needed. In this paper we present a detailed description of newly developed open-source software for fireball and meteor detection, optimized to run on low-cost single-board computers. Furthermore, an update is given on the development of automated open-source software that will handle video capture, fireball and meteor detection, astrometry and photometry.
Optimal periodic proof test based on cost-effective and reliability criteria
NASA Technical Reports Server (NTRS)
Yang, J.-N.
1976-01-01
An exploratory study for the optimization of periodic proof tests for fatigue-critical structures is presented. The optimal proof load level and the optimal number of periodic proof tests are determined by minimizing the total expected (statistical average) cost, while the constraint on the allowable level of structural reliability is satisfied. The total expected cost consists of the expected cost of proof tests, the expected cost of structures destroyed by proof tests, and the expected cost of structural failure in service. It is demonstrated by numerical examples that significant cost saving and reliability improvement for fatigue-critical structures can be achieved by the application of the optimal periodic proof test. The present study is relevant to the establishment of optimal maintenance procedures for fatigue-critical structures.
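The cost trade-off described above lends itself to a direct numerical search. The following sketch (in Python, with invented probability models and cost constants, and omitting the paper's reliability constraint) illustrates minimizing total expected cost over proof load level and number of periodic tests:

import numpy as np

def expected_total_cost(proof_load, n_tests,
                        c_test=1.0, c_structure=100.0, c_failure=10000.0):
    # Assumed risk of destroying the article during each proof test
    p_destroy = 1 - np.exp(-0.05 * proof_load)
    # Assumed residual in-service failure risk, reduced by load level and test count
    p_fail_service = 0.02 * np.exp(-0.5 * proof_load * n_tests)
    return (n_tests * c_test
            + n_tests * p_destroy * c_structure
            + p_fail_service * c_failure)

# Brute-force search over a small grid of proof levels and test counts
loads = np.linspace(0.5, 3.0, 26)
tests = np.arange(1, 11)
costs = np.array([[expected_total_cost(q, n) for n in tests] for q in loads])
i, j = np.unravel_index(costs.argmin(), costs.shape)
print(f"optimal proof load ~{loads[i]:.2f}, optimal number of tests = {tests[j]}")

The three cost terms mirror the decomposition in the abstract: test cost grows with the number of tests, the destruction term grows with proof load, and the in-service failure term shrinks with both, so an interior optimum exists.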
Offshore wind farm layout optimization
NASA Astrophysics Data System (ADS)
Elkinton, Christopher Neil
Offshore wind energy technology is maturing in Europe and is poised to make a significant contribution to the U.S. energy production portfolio. Building on the knowledge the wind industry has gained to date, this dissertation investigates the influences of different site conditions on offshore wind farm micrositing---the layout of individual turbines within the boundaries of a wind farm. For offshore wind farms, these conditions include, among others, the wind and wave climates, water depths, and soil conditions at the site. An analysis tool has been developed that is capable of estimating the cost of energy (COE) from offshore wind farms. For this analysis, the COE has been divided into several modeled components: major costs (e.g. turbines, electrical interconnection, maintenance, etc.), energy production, and energy losses. By treating these component models as functions of site-dependent parameters, the analysis tool can investigate the influence of these parameters on the COE. Some parameters result in simultaneous increases of both energy and cost. In these cases, the analysis tool was used to determine the value of the parameter that yielded the lowest COE and, thus, the best balance of cost and energy. The models have been validated and generally compare favorably with existing offshore wind farm data. The analysis technique was then paired with optimization algorithms to form a tool with which to design offshore wind farm layouts for which the COE was minimized. Greedy heuristic and genetic optimization algorithms have been tuned and implemented. The use of these two algorithms in series has been shown to produce the best, most consistent solutions. The influences of site conditions on the COE have been studied further by applying the analysis and optimization tools to the initial design of a small offshore wind farm near the town of Hull, Massachusetts. The results of an initial full-site analysis and optimization were used to constrain the boundaries of the farm. A more thorough optimization highlighted the features of the area that would result in a minimized COE. The results showed reasonable layout designs and COE estimates that are consistent with existing offshore wind farms.
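The greedy heuristic stage of such a layout optimizer is easy to sketch. The toy Python below (with an invented COE function standing in for the dissertation's detailed cost, energy, and wake models) places turbines one at a time at whichever candidate location yields the lowest cost of energy:

import math

# Candidate turbine positions on a 2 km x 2 km grid (illustrative)
CANDIDATES = [(x, y) for x in range(0, 2000, 250) for y in range(0, 2000, 250)]

def coe(layout):
    cost = 2.0e6 * len(layout)  # assumed fixed capital cost per turbine, $
    energy = 0.0
    for t in layout:
        # assumed wake penalty: yield falls as neighbours get closer
        penalty = sum(math.exp(-math.dist(t, o) / 300.0) for o in layout if o is not t)
        energy += 8.0e6 * max(0.1, 1.0 - 0.3 * penalty)  # kWh/yr per turbine
    return cost / energy  # $/kWh

layout = []
for _ in range(10):  # place 10 turbines greedily
    best = min((c for c in CANDIDATES if c not in layout),
               key=lambda c: coe(layout + [c]))
    layout.append(best)
print(f"greedy layout COE: {coe(layout):.4f} $/kWh")

A genetic algorithm pass, as in the dissertation, would then perturb and recombine such greedy layouts to escape the local minima the one-at-a-time placement can leave behind.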
Ergon, Torbjørn; Speakman, John R; Scantlebury, Michael; Cavanagh, Rachel; Lambin, Xavier
2004-03-01
Winter is energetically challenging for small herbivores because of greater energy requirements for thermogenesis at a time when little energy is available. We formulated a model predicting optimal wintering body size, accounting for the scaling of both energy expenditure and assimilation to body size, and the trade-off between survival benefits of a large size and avoiding survival costs of foraging. The model predicts that if the energy cost of maintaining a given body mass differs between environments, animals should be smaller in the more demanding environments, and there should be a negative correlation between body mass and daily energy expenditure (DEE) across environments. In contrast, if animals adjust their energy intake according to variation in survival costs of foraging, there should be a positive correlation between body mass and DEE. Decreasing temperature always increases equilibrium DEE, but optimal body mass may either increase or decrease in colder climates depending on the exact effects of temperature on mass-specific survival and energy demands. Measuring DEE with doubly labeled water on wintering Microtus agrestis at four field sites, we found that DEE was highest at the sites where voles were smallest despite a positive correlation between DEE and body mass within sites. This suggests that variation in wintering body mass between sites was due to variation in food quality/availability and not adjustments in foraging activity to varying risks of predation.
Optimizing luminescent solar concentrator design
Hernandez-Noyola, Hermilo; Potterveld, David H.; Holt, Roy J.; ...
2011-12-21
Luminescent Solar Concentrators (LSCs) use fluorescent materials and light guides to convert direct and diffuse sunlight into concentrated, wavelength-shifted light that produces electrical power in small photovoltaic (PV) cells, with the goal of significantly reducing the cost of solar energy utilization. In this paper we present an optimization analysis based on the implementation of a genetic algorithm (GA) subroutine in a numerical ray-tracing Monte Carlo model of an LSC, SIMSOLAR-P. The initial use of the GA implementation in SIMSOLAR-P is to find the optimal parameters of a hypothetical "perfect luminescent material" that obeys the Kennard-Stepanov (K-S) thermodynamic relationship between emission and absorption. The optimization balances the efficiency losses in the wavelength shift and PV conversion against the efficiency losses due to re-scattering of light out of the collector. The theoretical limits of efficiency are provided for one-, two- and three-layer configurations; the results show that a single-layer configuration is far from optimal and that adding a second layer in the LSC with wavelength-shifted material in the near-infrared region significantly increases the power output, while the gain in power from adding a third layer is relatively small. The results of this study provide a theoretical upper limit to the performance of an LSC and give guidance on the properties required for luminescent materials, such as quantum nanocrystals, to operate efficiently in planar LSC configurations.
Optimal cost design of water distribution networks using a decomposition approach
NASA Astrophysics Data System (ADS)
Lee, Ho Min; Yoo, Do Guen; Sadollah, Ali; Kim, Joong Hoon
2016-12-01
Water distribution network decomposition, which is an engineering approach, is adopted to increase the efficiency of obtaining the optimal cost design of a water distribution network using an optimization algorithm. This study applied the source tracing tool in EPANET, which is a hydraulic and water quality analysis model, to the decomposition of a network to improve the efficiency of the optimal design process. The proposed approach was tested by carrying out the optimal cost design of two water distribution networks, and the results were compared with other optimal cost designs derived from previously proposed optimization algorithms. The proposed decomposition approach using the source tracing technique enables the efficient decomposition of an actual large-scale network, and the results can be combined with the optimal cost design process using an optimization algorithm. This proves that the final design in this study is better than those obtained with other previously proposed optimization algorithms.
Concepts for 18/30 GHz satellite communication system study. Executive summary
NASA Technical Reports Server (NTRS)
Baker, M.; Davies, R.; Cuccia, L.; Mitchell, C.
1979-01-01
An examination of a multiplicity of interconnected parameters, ranging from specific technology details to total system economic costs, is presented for satellite communication systems in the 18/30 GHz transmission bands. It was determined that Ka-band systems can incur a small communications outage during very heavy rainfall periods and that reducing the outage to zero would lead to prohibitive system costs. On the other hand, economies of scale, i.e., one spacecraft accommodating 2.5 GHz of bandwidth coupled with multiple-beam frequency reuse, lead to very low costs for those users who can tolerate the 5 to 50 hours per year of downtime. A multiple-frequency-band satellite network can provide the ultimate optimized match to consumer performance/economics demands.
Robust guaranteed cost tracking control of quadrotor UAV with uncertainties.
Xu, Zhiwei; Nian, Xiaohong; Wang, Haibo; Chen, Yinsheng
2017-07-01
In this paper, a robust guaranteed cost controller (RGCC) is proposed for a quadrotor UAV system with uncertainties to address the set-point tracking problem. A sufficient condition for the existence of the RGCC is derived by the Lyapunov stability theorem. The designed RGCC not only guarantees that the whole closed-loop system is asymptotically stable but also gives the quadratic performance level of the closed-loop system an upper bound irrespective of all admissible parameter uncertainties. Then, an optimal robust guaranteed cost controller is developed to minimize the upper bound of the performance level. Simulation results verify that the presented control algorithms achieve small overshoot and short settling time, with which the quadrotor can perform set-point tracking tasks well.
[A program for optimizing the use of antimicrobials (PROA): experience in a regional hospital].
Ugalde-Espiñeira, J; Bilbao-Aguirregomezcorta, J; Sanjuan-López, A Z; Floristán-Imízcoz, C; Elorduy-Otazua, L; Viciola-García, M
2016-08-01
Programs for optimizing the use of antibiotics (PROA), or antimicrobial stewardship programs, are multidisciplinary programs developed in response to the increase in antibiotic-resistant bacteria; their objectives are to improve clinical results, to minimize adverse events and to reduce costs associated with the use of antimicrobials. The implementation of a PROA program in a 128-bed general hospital and the results obtained at 6 months are reported here. A quasi-experimental intervention study with a historical control group was designed to assess the impact of a PROA program based on a non-restrictive, prescription-support intervention model with direct and bidirectional intervention. The basis of the program is an audit of antimicrobial use with non-imposed personalized recommendations, supported by information technologies applied to this setting. The impact on pharmaceutical consumption and costs, cost per process, mean hospital stay, and percentage of hospital readmissions is described. A total of 307 audits were performed. In 65.8% of cases, treatment was discontinued between the 7th and the 10th day. The main reasons for treatment discontinuation were completion of treatment (43.6%) and lack of indication (14.7%). Pharmaceutical expenditure fell by 8.59% (P = 0.049) and consumption by 5.61% in DDD/100 stays (P = 0.180). The cost per process in general surgery decreased by 3.14% (P = 0.000). The results obtained support the efficiency of these programs in small hospitals with limited resources.
The cost of noise reduction in commercial tilt rotor aircraft
NASA Technical Reports Server (NTRS)
Faulkner, H. B.
1974-01-01
The relationship between direct operating cost (DOC) and departure noise annoyance was developed for commercial tilt rotor aircraft. This was accomplished by generating a series of tilt rotor aircraft designs to meet various noise goals at minimum DOC. These vehicles were spaced across the spectrum of possible noise levels from completely unconstrained to the quietest vehicle that could be designed within the study ground rules. A group of optimization parameters were varied to find the minimum DOC while other inputs were held constant and some external constraints were met. This basic variation was then extended to different aircraft sizes and technology time frames. It was concluded that reducing noise annoyance by designing for lower rotor tip speeds is a very promising avenue for future research and development. It appears that the cost of halving the annoyance compared to an unconstrained design is insignificant and the cost of halving the annoyance again is small.
Dexter, Franklin; Abouleish, Amr E; Epstein, Richard H; Whitten, Charles W; Lubarsky, David A
2003-10-01
Potential benefits to reducing turnover times are both quantitative (e.g., complete more cases and reduce staffing costs) and qualitative (e.g., improve professional satisfaction). Analyses have shown the quantitative arguments to be unsound except for reducing staffing costs. We describe a methodology by which each surgical suite can use its own numbers to calculate its individual potential reduction in staffing costs from reducing its turnover times. Calculations estimate optimal allocated operating room (OR) time (based on maximizing OR efficiency) before and after reducing the maximum and average turnover times. At four academic tertiary hospitals, reductions in average turnover times of 3 to 9 min would result in 0.8% to 1.8% reductions in staffing cost. Reductions in average turnover times of 10 to 19 min would result in 2.5% to 4.0% reductions in staffing costs. These reductions in staffing cost are achieved predominantly by reducing allocated OR time, not by reducing the hours that staff work late. Heads of anesthesiology groups often serve on OR committees that are fixated on turnover times. Rather than having to argue based on scientific studies, this methodology provides the ability to show the specific quantitative effects (small decreases in staffing costs and allocated OR time) of reducing turnover time using a surgical suite's own data. Many anesthesiologists work at hospitals where surgeons and/or operating room (OR) committees focus repeatedly on turnover time reduction. We developed a methodology by which the reductions in staffing cost as a result of turnover time reduction can be calculated for each facility using its own data. Staffing cost reductions are generally very small and would be achieved predominantly by reducing allocated OR time to the surgeons.
Algorithms for Automated DNA Assembly
2010-01-01
... if a correct theoretical construction scheme is developed manually, it is likely to be suboptimal by any number of cost metrics. Modular, robust and ... to an exhaustive search on a small synthetic dataset, and our results show that our algorithms can quickly find an optimal solution. Comparison with ...
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
Ongoing or planned hydro research, results of recent studies, and reviews of new books, publications, and software. Items covered this month include: (1) a recommendation that dam designers give more consideration to earthquake resistance, (2) the development of a new wave rotor design, (3) the development of a small hydro database in China, and (4) an ICOLD bulletin on the optimization of construction costs.
NASA Technical Reports Server (NTRS)
Rivera, J. M.; Simpson, R. W.
1980-01-01
The aerial relay system network design problem is discussed. A generalized branch-and-bound based algorithm is developed which can consider a variety of optimization criteria, such as minimum passenger travel time and minimum liner and feeder operating costs. The algorithm, although efficient, is practical mainly for small networks, because its computation time increases exponentially with the number of variables.
Optimal water management and conflict resolution: The Middle East Water Project
NASA Astrophysics Data System (ADS)
Fisher, Franklin M.; Arlosoroff, Shaul; Eckstein, Zvi; Haddadin, Munther; Hamati, Salem G.; Huber-Lee, Annette; Jarrar, Ammar; Jayyousi, Anan; Shamir, Uri; Wesseling, Hans
2002-11-01
In many situations, actual water markets will not allocate water resources optimally, largely because of the perceived social value of water. It is possible, however, to build optimizing models which, taking account of demand as well as supply considerations, can substitute for actual markets. Such models can assist the formation of water policies, taking into account user-supplied values and constraints. They provide powerful tools for the system-wide cost-benefit analysis of infrastructure; this is illustrated by an analysis of the need for desalination in Israel and the cost and benefits of adding a conveyance line. Further, the use of such models can facilitate cooperation in water, yielding gains that can be considerably greater than the value of the disputed water itself. This can turn what appear to be zero-sum games into win-win situations. The Middle East Water Project has built such a model for the Israeli-Jordanian-Palestinian region. We find that the value of the water in dispute in the region is very small and the possible gains from cooperation are relatively large. Analysis of the scarcity value of water is a crucial feature.
Designing the X-Ray Microcalorimeter Spectrometer for Optimal Science Return
NASA Technical Reports Server (NTRS)
Ptak, Andrew; Bandler, Simon R.; Bookbinder, Jay; Kelley, Richard L.; Petre, Robert; Smith, Randall K.; Smith, Stephen
2013-01-01
Recent advances in X-ray microcalorimeters enable a wide range of possible focal plane designs for the X-ray Microcalorimeter Spectrometer (XMS) instrument on the future Advanced X-ray Spectroscopic Imaging Observatory (AXSIO) or X-ray Astrophysics Probe (XAP). Small pixel designs (75 microns) oversample a 5-10" PSF by a factor of 3-6 for a 10 m focal length, enabling observations at both high count rates and high energy resolution. Pixel designs utilizing multiple absorbers attached to single transition-edge sensors can extend the focal plane to cover a significantly larger field of view, albeit at a cost in maximum count rate and energy resolution. Optimizing the science return for a given cost and/or complexity is therefore a non-trivial calculation that includes consideration of issues such as the mission science drivers, likely targets, mirror size, and observing efficiency. We present a range of possible designs taking these factors into account and their impacts on the science return of future large effective-area X-ray spectroscopic missions.
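The quoted oversampling factor can be sanity-checked from the small-angle plate-scale relation, using only numbers given in the abstract (a quick back-of-envelope, not part of the instrument study):

import math

pixel = 75e-6            # m, small-pixel design quoted above
focal_length = 10.0      # m, quoted focal length
arcsec_per_rad = 180.0 / math.pi * 3600.0   # ~206265

pixel_angle = pixel / focal_length * arcsec_per_rad   # arcsec per pixel (~1.5")
for psf in (5.0, 10.0):  # quoted PSF sizes in arcsec
    print(f'{psf}" PSF / {pixel_angle:.2f}"/pixel = oversampling x{psf / pixel_angle:.1f}')

This reproduces the stated factor of roughly 3 to 6 across the 5-10 arcsecond PSF range.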
NASA Astrophysics Data System (ADS)
Ba, Seydou N.; Waheed, Khurram; Zhou, G. Tong
2010-12-01
Digital predistortion is an effective means to compensate for the nonlinear effects of a memoryless system. In the case of a cellular transmitter, a digital baseband predistorter can mitigate the undesirable nonlinear effects along the signal chain, particularly the nonlinear impairments in the radio-frequency (RF) amplifiers. To be practically feasible, the implementation complexity of the predistorter must be minimized so that it becomes a cost-effective solution for the resource-limited wireless handset. This paper proposes optimizations that facilitate the design of a low-cost, high-performance adaptive digital baseband predistorter for memoryless systems. A comparative performance analysis of the amplitude and power lookup table (LUT) indexing schemes is presented. An optimized low-complexity amplitude approximation and its hardware synthesis results are also studied. An efficient LUT predistorter training algorithm that combines the fast convergence speed of the normalized least mean squares (NLMS) algorithm with a small hardware footprint is proposed. Results of fixed-point simulations based on the measured nonlinear characteristics of an RF amplifier are presented.
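As a rough illustration of amplitude-indexed LUT predistortion with an NLMS-style update (the power-amplifier model, table size, and step size below are invented for the sketch, and the paper's fixed-point design details are not reproduced):

import numpy as np

N_BINS, MU, EPS = 64, 0.5, 1e-6
lut = np.ones(N_BINS, dtype=complex)        # complex gain per amplitude bin

def pa(x):
    # Assumed memoryless PA: mild third-order compression
    return x * (1.0 - 0.2 * np.abs(x) ** 2)

rng = np.random.default_rng(0)
for _ in range(20000):
    x = (rng.standard_normal() + 1j * rng.standard_normal()) * 0.3
    k = min(int(np.abs(x) * N_BINS), N_BINS - 1)   # amplitude indexing
    z = lut[k] * x                          # predistorted sample
    e = x - pa(z)                           # error vs. desired linear output
    lut[k] += MU * np.conj(x) * e / (np.abs(x) ** 2 + EPS)   # normalized update

print("LUT gain in a high-amplitude bin:", lut[40])

At convergence each bin holds the complex gain that pre-inverts the PA compression at that amplitude, which is the behavior the trained LUT approximates.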
SRM-Assisted Trajectory for the GTX Reference Vehicle
NASA Technical Reports Server (NTRS)
Riehl, John; Trefny, Charles; Kosareo, Daniel
2002-01-01
A goal of the GTX effort has been to demonstrate the feasibility of a single-stage-to-orbit (SSTO) vehicle that delivers a small payload to low earth orbit. The small payload class was chosen in order to minimize the risk and cost of development of this revolutionary system. A preliminary design study by the GTX team has resulted in the current configuration that offers considerable promise for meeting the stated goal. The size and gross lift-off weight resulting from scaling the current design to closure, however, may be considered impractical for the small payload. In lieu of evolving the project's reference vehicle to a large-payload class, this paper offers the alternative of using solid-rocket motors in order to close the vehicle at a practical scale. This approach offers a near-term, quasi-reusable system that easily evolves to a reusable SSTO following subsequent development and optimization. This paper presents an overview of the impact of the addition of SRMs on the GTX reference vehicle's performance and trajectory. The overall methods of vehicle modeling and trajectory optimization will also be presented. A key element in the trajectory optimization is the use of the program OTIS 3.10, which provides rapid convergence and a great deal of flexibility to the user. This paper will also present the methods used to implement GTX requirements into OTIS modeling.
SRM-Assisted Trajectory for the GTX Reference Vehicle
NASA Technical Reports Server (NTRS)
Riehl, John; Trefny, Charles; Kosareo, Daniel (Technical Monitor)
2002-01-01
A goal of the GTX effort has been to demonstrate the feasibility of a single-stage-to-orbit (SSTO) vehicle that delivers a small payload to low earth orbit. The small payload class was chosen in order to minimize the risk and cost of development of this revolutionary system. A preliminary design study by the GTX team has resulted in the current configuration that offers considerable promise for meeting the stated goal. The size and gross lift-off weight resulting from scaling the current design to closure, however, may be considered impractical for the small payload. In lieu of evolving the project's reference vehicle to a large-payload class, this paper offers the alternative of using solid-rocket motors in order to close the vehicle at a practical scale. This approach offers a near-term, quasi-reusable system that easily evolves to a reusable SSTO following subsequent development and optimization. This paper presents an overview of the impact of the addition of SRMs on the GTX reference vehicle's performance and trajectory. The overall methods of vehicle modeling and trajectory optimization will also be presented. A key element in the trajectory optimization is the use of the program OTIS 3.10, which provides rapid convergence and a great deal of flexibility to the user. This paper will also present the methods used to implement GTX requirements into OTIS modeling.
Lockheed L-1011 Test Station on-board in support of the Adaptive Performance Optimization flight res
NASA Technical Reports Server (NTRS)
1997-01-01
This console and its complement of computers, monitors and communications equipment make up the Research Engineering Test Station, the nerve center for a new aerodynamics experiment being conducted by NASA's Dryden Flight Research Center, Edwards, California. The equipment is installed on a modified Lockheed L-1011 Tristar jetliner operated by Orbital Sciences Corp., of Dulles, Va., for Dryden's Adaptive Performance Optimization project. The experiment seeks to improve the efficiency of long-range jetliners by using small movements of the ailerons to improve the aerodynamics of the wing at cruise conditions. About a dozen research flights in the Adaptive Performance Optimization project are planned over the next two to three years. Improving the aerodynamic efficiency should result in equivalent reductions in fuel usage and costs for airlines operating large, wide-bodied jetliners.
Energy Efficiency Challenges of 5G Small Cell Networks.
Ge, Xiaohu; Yang, Jing; Gharavi, Hamid; Sun, Yang
2017-05-01
The deployment of a large number of small cells poses new challenges to energy efficiency, which has often been ignored in fifth generation (5G) cellular networks. While massive multiple-input multiple-output (MIMO) will reduce the transmission power at the expense of higher computational cost, the question remains as to whether computation or transmission power is more important to the energy efficiency of 5G small cell networks. Thus, the main objective of this paper is to investigate the computation power based on the Landauer principle. Simulation results reveal that more than 50% of the energy is consumed by the computation power at 5G small cell base stations (BSs). Moreover, the computation power of a 5G small cell BS can approach 800 watts when massive MIMO (e.g., 128 antennas) is deployed to transmit high-volume traffic. This clearly indicates that computation power optimization can play a major role in the energy efficiency of small cell networks.
Energy Efficiency Challenges of 5G Small Cell Networks
Ge, Xiaohu; Yang, Jing; Gharavi, Hamid; Sun, Yang
2017-01-01
The deployment of a large number of small cells poses new challenges to energy efficiency, which has often been ignored in fifth generation (5G) cellular networks. While massive multiple-input multiple-output (MIMO) will reduce the transmission power at the expense of higher computational cost, the question remains as to whether computation or transmission power is more important to the energy efficiency of 5G small cell networks. Thus, the main objective of this paper is to investigate the computation power based on the Landauer principle. Simulation results reveal that more than 50% of the energy is consumed by the computation power at 5G small cell base stations (BSs). Moreover, the computation power of a 5G small cell BS can approach 800 watts when massive MIMO (e.g., 128 antennas) is deployed to transmit high-volume traffic. This clearly indicates that computation power optimization can play a major role in the energy efficiency of small cell networks. PMID:28757670
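For context, the Landauer principle invoked above sets the thermodynamic floor on computation energy, E = k_B * T * ln 2 per bit operation. A quick evaluation (the operation rate below is an assumed, illustrative figure) shows how far real base-station hardware sits above that floor:

import math

k_B = 1.380649e-23   # J/K, Boltzmann constant
T = 300.0            # K, assumed operating temperature
e_bit = k_B * T * math.log(2)   # Landauer limit per bit operation
print(f"Landauer limit: {e_bit:.3e} J/bit")

ops_per_s = 1e15     # assumed bit-operation rate for a 128-antenna baseband
print(f"thermodynamic floor at that rate: {e_bit * ops_per_s * 1e6:.2f} microwatts")

Even at a generous 10^15 bit operations per second the floor is a few microwatts, some eight orders of magnitude below the ~800 W figure reported above, so the computation power at issue is entirely a property of the hardware implementation, not of physics.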
Integrated controls design optimization
Lou, Xinsheng; Neuschaefer, Carl H.
2015-09-01
A control system (207) for optimizing a chemical looping process of a power plant includes an optimizer (420), an income algorithm (230), a cost algorithm (225), and chemical looping process models. The process models are used to predict the process outputs from process input variables. Some of the process input and output variables are related to the income of the plant; others are related to the cost of the plant operations. The income algorithm (230) provides an income input to the optimizer (420) based on a plurality of input parameters (215) of the power plant. The cost algorithm (225) provides a cost input to the optimizer (420) based on a plurality of output parameters (220) of the power plant. The optimizer (420) determines an optimized operating parameter solution based on at least one of the income input and the cost input, and supplies the optimized operating parameter solution to the power plant.
A fast Chebyshev method for simulating flexible-wing propulsion
NASA Astrophysics Data System (ADS)
Moore, M. Nicholas J.
2017-09-01
We develop a highly efficient numerical method to simulate small-amplitude flapping propulsion by a flexible wing in a nearly inviscid fluid. We allow the wing's elastic modulus and mass density to vary arbitrarily, with an eye towards optimizing these distributions for propulsive performance. The method to determine the wing kinematics is based on Chebyshev collocation of the 1D beam equation as coupled to the surrounding 2D fluid flow. Through small-amplitude analysis of the Euler equations (with trailing-edge vortex shedding), the complete hydrodynamics can be represented by a nonlocal operator that acts on the 1D wing kinematics. A class of semi-analytical solutions permits fast evaluation of this operator with O(N log N) operations, where N is the number of collocation points on the wing. This is in contrast to the minimum O(N^2) cost of a direct 2D fluid solver. The coupled wing-fluid problem is thus recast as a PDE with nonlocal operator, which we solve using a preconditioned iterative method. These techniques yield a solver of near-optimal complexity, O(N log N), allowing one to rapidly search the infinite-dimensional parameter space of all possible material distributions and even perform optimization over this space.
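The Chebyshev collocation building block referenced above is standard; a minimal sketch (the classic differentiation-matrix construction, not the paper's coupled wing-fluid solver) demonstrates the spectral accuracy that makes the O(N log N) formulation worthwhile:

import numpy as np

def cheb(N):
    """Chebyshev points and differentiation matrix on [-1, 1] (Trefethen-style)."""
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))   # negative-sum trick fixes the diagonal
    return D, x

D, x = cheb(16)
u = np.sin(np.pi * x)
err = np.max(np.abs(D @ u - np.pi * np.cos(np.pi * x)))
print(f"max derivative error with 17 points: {err:.2e}")   # spectral accuracy

With only 17 collocation points the derivative of a smooth function is already accurate to many digits, which is why so few points suffice on the wing.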
Unbiased multi-fidelity estimate of failure probability of a free plane jet
NASA Astrophysics Data System (ADS)
Marques, Alexandre; Kramer, Boris; Willcox, Karen; Peherstorfer, Benjamin
2017-11-01
Estimating failure probability related to fluid flows is a challenge because it requires a large number of evaluations of expensive models. We address this challenge by leveraging multiple low fidelity models of the flow dynamics to create an optimal unbiased estimator. In particular, we investigate the effects of uncertain inlet conditions in the width of a free plane jet. We classify a condition as failure when the corresponding jet width is below a small threshold, such that failure is a rare event (failure probability is smaller than 0.001). We estimate failure probability by combining the frameworks of multi-fidelity importance sampling and optimal fusion of estimators. Multi-fidelity importance sampling uses a low fidelity model to explore the parameter space and create a biasing distribution. An unbiased estimate is then computed with a relatively small number of evaluations of the high fidelity model. In the presence of multiple low fidelity models, this framework offers multiple competing estimators. Optimal fusion combines all competing estimators into a single estimator with minimal variance. We show that this combined framework can significantly reduce the cost of estimating failure probabilities, and thus can have a large impact in fluid flow applications. This work was funded by DARPA.
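A minimal sketch of the multi-fidelity importance-sampling idea (1-D toy models with a Gaussian input; the real study uses jet-flow models of differing fidelity) shows how the cheap model builds the biasing density while the likelihood ratio keeps the estimator unbiased:

import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(1)
THRESH = 3.2   # failure when the (toy) output exceeds this, a rare event

def high_fidelity(u):   # stand-in for the expensive model
    return u

def low_fidelity(u):    # cheap, slightly biased surrogate
    return 0.95 * u

# Step 1: explore with the cheap model to centre a biasing density
# on the failure region it predicts.
u_lf = rng.standard_normal(200_000)
mu_bias = u_lf[low_fidelity(u_lf) > THRESH].mean()

# Step 2: a few expensive evaluations under the biasing density N(mu, 1),
# reweighted by the likelihood ratio N(0,1)/N(mu,1) to stay unbiased.
u = rng.normal(mu_bias, 1.0, 500)
w = np.exp(-0.5 * u**2 + 0.5 * (u - mu_bias) ** 2)
p_fail = np.mean(w * (high_fidelity(u) > THRESH))

exact = 0.5 * (1 - erf(THRESH / sqrt(2)))   # P(N(0,1) > 3.2)
print(f"estimate {p_fail:.2e} vs exact {exact:.2e}")

Here 500 high-fidelity evaluations resolve a probability below 10^-3 that plain Monte Carlo would need hundreds of thousands of samples to see, which is the cost saving the abstract describes.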
Minimization of bovine tuberculosis control costs in US dairy herds
Smith, Rebecca L.; Tauer, Loren W.; Schukken, Ynte H.; Lu, Zhao; Grohn, Yrjo T.
2013-01-01
The objective of this study was to minimize the cost of controlling an isolated bovine tuberculosis (bTB) outbreak in a US dairy herd, using a stochastic simulation model of bTB with economic and biological layers. A model optimizer produced a control program that required 2-month testing intervals (TI) with 2 negative whole-herd tests to leave quarantine. This control program minimized both farm and government costs. In all cases, test-and-removal costs were lower than depopulation costs, although the variability in costs increased for farms with high holding costs or small herd sizes. Increasing herd size significantly increased costs for both the farm and the government, while increasing indemnity payments significantly decreased farm costs and increasing testing costs significantly increased government costs. Based on the results of this model, we recommend 2-month testing intervals for herds after an outbreak of bovine tuberculosis, with 2 negative whole-herd tests being sufficient to lift quarantine. A prolonged test-and-cull program may cause a state to lose its bTB-free status during the testing period. When the cost of losing the bTB-free status is greater than $1.4 million, depopulation of farms could be preferred over a test-and-cull program. PMID:23953679
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, J.E.; Weathers, P.J.; McConville, F.X.
Apple pomace (the pulp residue from pressing apple juice) is an abundant waste product and presents an expensive disposal problem. A typical (50,000 gal. juice/day) apple juice company in central Massachusetts produces 100 tons of pomace per day. Some of it is used as pig feed, but it is poor quality feed because of its low protein content. Most of the pomace is hauled away (at a cost of $4/ton) and landfilled (at a cost of $10/ton). If 5% (w/w) conversion of pomace to ethanol could be achieved, the need for this company to purchase No. 6 fuel oil (1000 gal/day) for cooking during processing would be eliminated. Our approach was to saccharify the pomace enzymatically, and then to carry out a yeast fermentation on the hydrolysate. We chose to use enzymatic hydrolysis instead of dilute acid hydrolysis in order to minimize pH control problems both in the fermentation phase and in the residue. The only chemical studies have concerned small subfractions of apple material: for example, cell walls have been analyzed, but they constitute only 1 to 2% of the fresh weight of the apple (about 15 to 30% of the pomace fraction). Therefore, our major problems were: (1) to optimize hydrolysis by enzyme mixtures, using weight loss and ultimate ethanol production as optimization criteria; (2) to optimize ethanol production from the hydrolysate by judicious choice of yeast strains and fermentation conditions; and (3) to achieve these optimizations consistent with minimum processing cost and energy input. We have obtained up to 5.1% (w/w) of ethanol without saccharification. We show here that hydrolysis with high levels of enzyme can enhance ethanol yield by up to 27%, to a maximum level of 6% (w/w); however, enzyme treatment may be cost-effective only at low levels, for improvement of residue compaction. 3 figures, 4 tables.
NASA Astrophysics Data System (ADS)
Braun, Robert Joseph
The advent of maturing fuel cell technologies presents an opportunity to achieve significant improvements in energy conversion efficiencies at many scales, thereby simultaneously extending our finite resources and reducing "harmful" energy-related emissions to levels well below those of near-future regulatory standards. However, before the advantages of fuel cells can be realized, systems-level design issues regarding their application must be addressed. Using modeling and simulation, the present work offers optimal system design and operation strategies for stationary solid oxide fuel cell systems applied to single-family detached dwellings. A one-dimensional, steady-state finite-difference model of a solid oxide fuel cell (SOFC) is generated and verified against other mathematical SOFC models in the literature. Fuel cell system balance-of-plant components and costs are also modeled and used to provide an estimate of system capital and life cycle costs. The models are used to evaluate optimal cell-stack power output, the impact of cell operating and design parameters, fuel type, thermal energy recovery, system process design, and operating strategy on overall system energetic and economic performance. Optimal cell design voltage, fuel utilization, and operating temperature parameters are found by minimizing life cycle costs. System design evaluations reveal that hydrogen-fueled SOFC systems demonstrate lower system efficiencies than methane-fueled systems. The use of recycled cell exhaust gases in process design in the stack periphery is found to produce the highest system electric and cogeneration efficiencies while achieving the lowest capital costs. Annual simulations reveal that efficiencies of 45% electric (LHV basis), 85% cogenerative, and simple economic paybacks of 5-8 years are feasible for 1-2 kW SOFC systems in residential-scale applications. Design guidelines that offer additional suggestions related to fuel cell-stack sizing and operating strategy (base-load or load-following, and cogeneration or electric-only) are also presented.
Course Keeping Control of an Autonomous Boat using Low Cost Sensors
NASA Astrophysics Data System (ADS)
Yu, Zhenyu; Bao, Xinping; Nonami, Kenzo
This paper discusses the course-keeping control problem for a small autonomous boat using low-cost sensors. Compared with full-scale ships, a small boat is more sensitive to environmental disturbances because of its small size and low inertia. The sensors available on the boat are a low-cost GPS and a rate gyro, while the compass commonly used in ship control is absent. The combined effect of disturbances, poor accuracy and significant delay in the GPS measurements makes it a challenging task to achieve good performance. In this paper, we propose a simple dynamic model for the boat's horizontal motion. The model is based on Nomoto's model and can be seen as an extension of it. The model describes the dynamics between rudder deflection and the boat's velocity vector angle, while Nomoto's model describes that between rudder deflection and the boat's yaw angle. With the proposed model there is no need for a yaw sensor if the boat's moving direction can be measured, and GPS is a convenient device for that job. Based on the derived model, we apply the mixed H2/H∞ control method to design the controller. It guarantees robust stability and at the same time optimizes performance in the sense of the H2 norm. The experimental data show that the proposed approach is effective and useful.
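For reference, the classic first-order Nomoto model that the proposed model extends, T r' + r = K delta, can be simulated in a few lines (K, T, and the rudder input below are illustrative values, not the boat's identified parameters):

import numpy as np

K, T, dt = 0.2, 3.0, 0.05        # gain [1/s], time constant [s], time step [s]
r, psi = 0.0, 0.0                # turn rate [rad/s], heading/course angle [rad]
delta = np.deg2rad(10.0)         # fixed rudder deflection

for _ in range(int(60 / dt)):
    r += dt * (K * delta - r) / T    # Nomoto dynamics: T r' + r = K delta
    psi += dt * r                    # angle integrates the turn rate

print(f"course change after 60 s at 10 deg rudder: {np.rad2deg(psi):.1f} deg")

In the paper's extension, psi is the velocity vector angle measured by GPS rather than the compass-measured yaw angle, which is what removes the need for a yaw sensor.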
Investigation of Cost and Energy Optimization of Drinking Water Distribution Systems.
Cherchi, Carla; Badruzzaman, Mohammad; Gordon, Matthew; Bunn, Simon; Jacangelo, Joseph G
2015-11-17
Holistic management of water and energy resources through energy and water quality management systems (EWQMSs) has traditionally aimed at energy cost reduction, with limited or no emphasis on energy efficiency or greenhouse gas minimization. This study expanded the existing EWQMS framework and determined the impact of different management strategies for energy cost and energy consumption (e.g., carbon footprint) reduction on system performance at two drinking water utilities in California (United States). The results showed that optimizing for cost led to cost reductions of 4% (Utility B, summer) to 48% (Utility A, winter). The energy optimization strategy successfully found the lowest-energy operation and achieved energy usage reductions of 3% (Utility B, summer) to 10% (Utility A, winter). The findings of this study revealed that there may be a trade-off between cost optimization (dollars) and energy use (kilowatt-hours), particularly in the summer, when optimizing the system to minimize energy use incurred cost increases of 64% and 184% compared with the cost optimization scenario. Water age simulations through hydraulic modeling did not reveal any adverse effects of pump schedule optimization targeting either cost or energy minimization on water quality in the distribution system or in tanks.
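The cost-versus-energy tension reported above can be illustrated with a toy schedule comparison (the tariff, demand, and quadratic-in-flow friction penalty are invented for the sketch): pumping slowly all day minimizes energy, while concentrating pumping into off-peak hours minimizes cost.

TARIFF = [0.08] * 8 + [0.20] * 12 + [0.08] * 4   # $/kWh for each hour of the day
DEMAND = 1200.0                                   # m3 to deliver per day
BASE_KWH_PER_M3 = 0.45                            # assumed specific energy at low flow

def schedule_metrics(run_hours):
    flow = DEMAND / len(run_hours)                   # m3/h while running
    specific = BASE_KWH_PER_M3 * (1 + 0.002 * flow)  # assumed friction penalty
    energy = DEMAND * specific                       # kWh/day
    cost = sum(flow * specific * TARIFF[h] for h in run_hours)
    return energy, cost

all_day = list(range(24))                            # slow, steady pumping
off_peak = [h for h in all_day if TARIFF[h] == 0.08]  # pump only in cheap hours

for name, hours in [("energy-optimal", all_day), ("cost-optimal", off_peak)]:
    e, c = schedule_metrics(hours)
    print(f"{name}: {e:.0f} kWh/day, ${c:.2f}/day")

With these invented numbers the steady schedule uses about 9% less energy but costs about 60% more per day, the same qualitative trade-off the study quantifies on real distribution systems.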
Future ultra-speed tube-flight
NASA Astrophysics Data System (ADS)
Salter, Robert M.
1994-05-01
Future long-link, ultra-speed, surface transport systems will require electromagnetically (EM) driven and restrained vehicles operating under reduced atmosphere in very straight tubes. Such tube-flight trains will be safe, energy conservative, pollution-free, and in a protected environment. Hypersonic (and even hyperballistic) speeds are theoretically achievable. Ultimate system choices will represent tradeoffs between amortized capital costs (ACC) and operating costs. For example, long coasting links might employ aerodynamic lift coupled with EM restraint and drag make-up. Optimized, combined EM lift and thrust vectors could reduce energy costs but at increased ACC. (Repulsive levitation can produce lift-over-drag l/d ratios a decade greater than aerodynamic.) Alternatively, vehicle-emanated, induced-mirror fields in a conducting (aluminum sheet) road bed could reduce ACC but at substantial energy costs. Ultra-speed tube flight will demand fast-acting, high-precision sensors and computerized magnetic shimming. This same control system can maintain a magnetic 'guide way' invariant in inertial space, with inertial detectors embedded in tube structures to sense and correct for earth tremors. Ultra-speed tube flight can compete with aircraft for transit time and can provide even greater passenger convenience by single-model connections with local subways and feeder lines. Although cargo transport generally will not need to be performed at ultra speeds, such speeds may well be desirable for high throughput to optimize channel costs. Thus, a large and expensive pipeline might be replaced with small EM-driven pallets at high speeds.
Future ultra-speed tube-flight
NASA Technical Reports Server (NTRS)
Salter, Robert M.
1994-01-01
Future long-link, ultra-speed, surface transport systems will require electromagnetically (EM) driven and restrained vehicles operating under reduced atmosphere in very straight tubes. Such tube-flight trains will be safe, energy conservative, pollution-free, and in a protected environment. Hypersonic (and even hyperballistic) speeds are theoretically achievable. Ultimate system choices will represent tradeoffs between amortized capital costs (ACC) and operating costs. For example, long coasting links might employ aerodynamic lift coupled with EM restraint and drag make-up. Optimized, combined EM lift and thrust vectors could reduce energy costs but at increased ACC. (Repulsive levitation can produce lift-over-drag l/d ratios a decade greater than aerodynamic.) Alternatively, vehicle-emanated, induced-mirror fields in a conducting (aluminum sheet) road bed could reduce ACC but at substantial energy costs. Ultra-speed tube flight will demand fast-acting, high-precision sensors and computerized magnetic shimming. This same control system can maintain a magnetic 'guide way' invariant in inertial space, with inertial detectors embedded in tube structures to sense and correct for earth tremors. Ultra-speed tube flight can compete with aircraft for transit time and can provide even greater passenger convenience by single-model connections with local subways and feeder lines. Although cargo transport generally will not need to be performed at ultra speeds, such speeds may well be desirable for high throughput to optimize channel costs. Thus, a large and expensive pipeline might be replaced with small EM-driven pallets at high speeds.
Energy Production from Biogas: Competitiveness and Support Instruments in Latvia
NASA Astrophysics Data System (ADS)
Klāvs, G.; Kundziņa, A.; Kudrenickis, I.
2016-10-01
Use of renewable energy sources (RES) might be one of the key factors for a triple win-win: improving energy supply security, promoting local economic development, and reducing greenhouse gas emissions. The authors ex-post evaluate the impact of the two main support instruments applied in 2010-2014 - the investment support (IS) and the feed-in tariff (FIT) - on the economic viability of a small-scale (up to 2 MWel) biogas unit. The results indicate that the electricity production cost of a biogas utility roughly corresponds to the historical FIT for electricity production using RES. However, if the IS is provided in addition to the FIT, the analysis shows that the practice of combining both instruments is not optimal, because the total support provided to a biogas utility developer is too high (overcompensation). In a long-term perspective, the latter gives wrong signals for investments in new technologies and also creates unequal competition in the RES electricity market. To provide optimal biogas utilisation, it is necessary to consider several options. Both on-site production of electricity and upgrading to biomethane for use in a low-pressure gas distribution network are simulated by the cost estimation model. The authors' estimates show that upgrading for use in a gas distribution network should be particularly considered, taking into account the already existing infrastructure and technologies. This option requires lower support compared to the support for electricity production in small-scale biogas utilities.
NASA Technical Reports Server (NTRS)
Bao, Han P.; Samareh, J. A.
2000-01-01
The primary objective of this paper is to demonstrate the use of process-based manufacturing and assembly cost models in a traditional performance-focused multidisciplinary design and optimization process. The use of automated cost-performance analysis is an enabling technology that could bring realistic process-based manufacturing and assembly costs into multidisciplinary design and optimization. In this paper, we present a new methodology for incorporating process costing into a standard multidisciplinary design optimization process. Material, manufacturing process, and assembly process costs could then be used as the objective function for the optimization method. A case study involving forty-six different configurations of a simple wing is presented, indicating that a design based on performance criteria alone may not necessarily be the most affordable as far as manufacturing and assembly cost is concerned.
NASA Astrophysics Data System (ADS)
Latief, Yusuf; Berawi, Mohammed Ali; Basten, Van; Riswanto; Budiman, Rachmat
2017-07-01
The green building concept has become important in the current building life cycle to mitigate environmental issues. The purpose of this paper is to optimize building construction performance with respect to the green building premium cost, achieving the green building rating target while optimizing life cycle cost. This study therefore helps building stakeholders determine the building fixtures needed to achieve a green building certification target. Empirically, the paper collects data on green buildings in the Indonesian construction industry, such as green building fixtures, initial cost, operational and maintenance cost, and certification score achievement. The value engineering method is then used to optimize the green building fixtures with respect to building function and cost. Findings indicate that construction performance optimization improved green building achievement by increasing energy and water efficiency factors and managing life cycle cost effectively, especially through the chosen green building fixtures.
NASA Astrophysics Data System (ADS)
Chen, Yen-Sheng; Zhou, Huang-Cheng
2017-05-01
This paper presents a multiple-input multiple-output (MIMO) antenna with four unit elements enabled by an isolation technique for long-term evolution (LTE) small-cell base stations. While earlier studies on MIMO base-station antennas cope with either the lower LTE band (698-960 MHz) or the upper LTE band (1710-2690 MHz), the proposed antenna meets the full LTE specification, yet it uses the maximum number of unit elements to increase channel capacity. The antenna configuration is optimized for good impedance matching and high radiation efficiency. In particular, as the spacing between unit elements is so small that severe mutual coupling occurs, we propose a simple structure with extremely low cost to enhance the isolation. By using suspended solid wires interconnecting the positions of strong coupled current on two adjacent elements, an isolation enhancement of 37 dB is achieved. Although solid wires inherently aim at direct-current applications, this work successfully employs such a low-cost technique in microwave antenna development. Experimental results have validated the design guidelines and the proposed configuration, showing that antenna performances including impedance matching, isolation, radiation features, signal correlation, and channel capacity gain are highly desirable for LTE small-cell base stations.
Milt, Austin W; Diebel, Matthew W; Doran, Patrick J; Ferris, Michael C; Herbert, Matthew; Khoury, Mary L; Moody, Allison T; Neeson, Thomas M; Ross, Jared; Treska, Ted; O'Hanley, Jesse R; Walter, Lisa; Wangen, Steven R; Yacobson, Eugene; McIntyre, Peter B
2018-03-08
Controlling invasive species is critical for conservation but can have unintended consequences for native species and divert resources away from other efforts. This dilemma occurs on a grand scale in the North American Great Lakes, where dams and culverts block tributary access to habitat of desirable fish species and are a lynchpin of long-standing efforts to limit ecological damage inflicted by the invasive, parasitic sea lamprey (Petromyzon marinus). Habitat restoration and sea-lamprey control create conflicting goals for managing aging infrastructure. We used optimization to minimize opportunity costs of habitat gains for 37 desirable migratory fishes that arose from restricting sea lamprey access (0-25% increase) when selecting barriers for removal under a limited budget (US$1-105 million). Imposing limits on sea lamprey habitat reduced gains in tributary access for desirable species by 15-50% relative to an unconstrained scenario. Additional investment to offset the effect of limiting sea-lamprey access resulted in high opportunity costs for 30 of 37 species (e.g., an additional US$20-80 million for lake sturgeon [Acipenser fulvescens]) and often required ≥5% increase in sea-lamprey access to identify barrier-removal solutions adhering to the budget and limiting access. Narrowly distributed species exhibited the highest opportunity costs but benefited more at less cost when small increases in sea-lamprey access were allowed. Our results illustrate the value of optimization in limiting opportunity costs when balancing invasion control against restoration benefits for diverse desirable species. Such trade-off analyses are essential to the restoration of connectivity within fragmented rivers without unleashing invaders. © 2018 Society for Conservation Biology.
Langhans, Simone D; Hermoso, Virgilio; Linke, Simon; Bunn, Stuart E; Possingham, Hugh P
2014-01-01
River rehabilitation aims to protect biodiversity or restore key ecosystem services, but the success rate is often low. This is seldom because of insufficient funding for rehabilitation works but because trade-offs between costs and ecological benefits of management actions are rarely incorporated in the planning, and because monitoring is often inadequate for managers to learn by doing. In this study, we demonstrate a new approach to plan cost-effective river rehabilitation at large scales. The framework is based on the use of cost functions (the relationship between costs of rehabilitation and the expected ecological benefit) to optimize the spatial allocation of rehabilitation actions needed to achieve given rehabilitation goals (in our case established by the Swiss water act). To demonstrate the approach with a simple example, we link costs of the three types of management actions that are most commonly used in Switzerland (culvert removal, widening of one riverside buffer and widening of both riversides) to the improvement in riparian zone quality. We then use Marxan, a widely applied conservation planning software, to identify priority areas in which to implement these rehabilitation measures in two neighbouring Swiss cantons (Aargau, AG and Zürich, ZH). The best rehabilitation plans identified for the two cantons met all the targets (i.e., restoring different types of morphological deficits with different actions), rehabilitating 80,786 m (AG) and 106,036 m (ZH) of the river network at a total cost of 106.1 million CHF (AG) and 129.3 million CHF (ZH). The best rehabilitation plan for the canton of AG consisted of more and better connected sub-catchments that were generally less expensive, compared to its neighbouring canton. The framework developed in this study can be used to inform river managers how and where best to spend their rehabilitation budget for a given set of actions, ensures the cost-effective achievement of desired rehabilitation outcomes, and helps towards estimating the total costs of long-term rehabilitation activities. Rehabilitation plans ready to be implemented may be based on additional aspects to the ones considered here, e.g., specific cost functions for rural and urban areas and/or for large and small rivers, which can simply be added to our approach. Optimizing investments in this way will ultimately increase the likelihood of on-ground success of rehabilitation activities.
Development and optimization of a stove-powered thermoelectric generator
NASA Astrophysics Data System (ADS)
Mastbergen, Dan
Almost a third of the world's population still lacks access to electricity. Most of these people use biomass stoves for cooking which produce significant amounts of wasted thermal energy, but no electricity. Less than 1% of this energy in the form of electricity would be adequate for basic tasks such as lighting and communications. However, an affordable and reliable means of accomplishing this is currently nonexistent. The goal of this work is to develop a thermoelectric generator to convert a small amount of wasted heat into electricity. Although this concept has been around for decades, previous attempts have failed due to insufficient analysis of the system as a whole, leading to ineffective and costly designs. In this work, a complete design process is undertaken including concept generation, prototype testing, field testing, and redesign/optimization. Detailed component models are constructed and integrated to create a full system model. The model encompasses the stove operation, thermoelectric module, heat sinks, charging system and battery. A 3000 cycle endurance test was also conducted to evaluate the effects of operating temperature, module quality, and thermal interface quality on the generator's reliability, lifetime and cost effectiveness. The results from this testing are integrated into the system model to determine the lowest system cost in $/Watt over a five year period. Through this work the concept of a stove-based thermoelectric generator is shown to be technologically and economically feasible. In addition, a methodology is developed for optimizing the system for specific regional stove usage habits.
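The basic sizing arithmetic for such a generator follows from the matched-load relation P_max = (S * dT)^2 / (4 * R_int). The values below are assumed, illustrative module parameters, not measurements from this work:

S = 0.05        # V/K, effective module Seebeck coefficient (assumed)
R_int = 2.0     # ohm, module internal resistance (assumed)
T_hot, T_cold = 250.0, 60.0   # degC on the hot and cold faces (assumed)

dT = T_hot - T_cold
v_oc = S * dT                    # open-circuit voltage
p_max = v_oc**2 / (4 * R_int)    # maximum power at matched load, R_load = R_int
print(f"open-circuit: {v_oc:.1f} V, matched-load power: {p_max:.1f} W")

With these assumed figures the module delivers on the order of 10 W, roughly the "small amount of wasted heat" converted to electricity that suffices for lighting and communications.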
Energy neutral and low power wireless communications
NASA Astrophysics Data System (ADS)
Orhan, Oner
Wireless sensor nodes are typically designed to have low cost and small size. These design objectives impose restrictions on the capacity and efficiency of the transceiver components and energy storage units that can be used. As a result, energy becomes a bottleneck and continuous operation of the sensor network requires frequent battery replacements, increasing the maintenance cost. Energy harvesting and energy efficient transceiver architectures are able to overcome these challenges by collecting energy from the environment and utilizing the energy in an intelligent manner. However, due to the nature of the ambient energy sources, the amount of useful energy that can be harvested is limited and unreliable. Consequently, optimal management of the harvested energy and design of low power transceivers pose new challenges for wireless network design and operation. The first part of this dissertation is on energy neutral wireless networking, where optimal transmission schemes under different system setups and objectives are investigated. First, throughput maximization for energy harvesting two-hop networks with decode-and-forward half-duplex relays is studied. For a system with two parallel relays, various combinations of the following four transmission modes are considered: Broadcast from the source, multi-access from the relays, and successive relaying phases I and II. Next, the energy cost of the processing circuitry as well as the transmission energy are taken into account for communication over a broadband fading channel powered by an energy harvesting transmitter. Under this setup, throughput maximization, energy maximization, and transmission completion time minimization problems are studied. Finally, source and channel coding for an energy-limited wireless sensor node is investigated under various energy constraints including energy harvesting, processing and sampling costs. For each objective, optimal transmission policies are formulated as the solutions of a convex optimization problem, and the properties of these optimal policies are identified. In the second part of this thesis, low power transceiver design is considered for millimeter wave communication systems. In particular, using an additive quantization noise model, the effect of analog-digital conversion (ADC) resolution and bandwidth on the achievable rate is investigated for a multi-antenna system under a receiver power constraint. Two receiver architectures, analog and digital combining, are compared in terms of performance.
Maximizing algebraic connectivity in air transportation networks
NASA Astrophysics Data System (ADS)
Wei, Peng
In air transportation networks the robustness of a network regarding node and link failures is a key factor for its design. An experiment based on a real air transportation network is performed to show that the algebraic connectivity is a good measure of network robustness. Three optimization problems of algebraic connectivity maximization are then formulated in order to find the most robust network design under different constraints. The algebraic connectivity maximization problem with flight route addition or deletion is formulated first. Three methods to optimize and analyze the network algebraic connectivity are proposed. The Modified Greedy Perturbation Algorithm (MGP) provides a sub-optimal solution in a fast iterative manner. The Weighted Tabu Search (WTS) is designed to offer a near-optimal solution with longer running time. The relaxed semi-definite programming (SDP) is used to set a performance upper bound, and three rounding techniques are discussed to find the feasible solution. The simulation results present the trade-off among the three methods. The case study on the two air transportation networks of Virgin America and Southwest Airlines shows that the developed methods can be applied to real-world large-scale networks. The algebraic connectivity maximization problem is extended by adding the leg number constraint, which considers the traveler's tolerance for the total connecting stops. The Binary Semi-Definite Programming (BSDP) with cutting plane method provides the optimal solution. The tabu search and 2-opt search heuristics can find the optimal solution in small-scale networks and a near-optimal solution in large-scale networks. The third algebraic connectivity maximization problem, with an operating cost constraint, is then formulated. When the total operating cost budget is given, the number of edges to be added is not fixed, and each edge weight needs to be calculated instead of being pre-determined. It is illustrated that the edge addition and the weight assignment cannot be studied separately for the problem with the operating cost constraint. Therefore a relaxed SDP method with golden section search is developed to solve both at the same time. Cluster decomposition is utilized to solve large-scale networks.
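As background for the methods above, the algebraic connectivity is the second-smallest eigenvalue of the graph Laplacian, and the simplest greedy heuristic adds, at each step, the route whose addition lifts it most. A small sketch (a naive greedy, not the paper's MGP, WTS, or SDP methods; the example network is made up):

```python
import numpy as np
import networkx as nx

def lambda2(G):
    """Algebraic connectivity: second-smallest eigenvalue of the Laplacian."""
    L = nx.laplacian_matrix(G).toarray().astype(float)
    return np.sort(np.linalg.eigvalsh(L))[1]

def greedy_route_addition(G, k):
    """Add k routes, each time picking the non-edge that raises lambda2 most."""
    G = G.copy()
    for _ in range(k):
        def gain(e):
            H = G.copy()
            H.add_edge(*e)
            return lambda2(H)
        G.add_edge(*max(nx.non_edges(G), key=gain))
    return G

G = nx.path_graph(8)                # a thin, fragile toy network
G2 = greedy_route_addition(G, 2)
print(lambda2(G), lambda2(G2))      # robustness measure before and after
```

Each greedy step costs one eigendecomposition per candidate edge, which is exactly the expense that perturbation-based shortcuts such as MGP are designed to avoid.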
Langevin, Stanley A; Bent, Zachary W; Solberg, Owen D; Curtis, Deanna J; Lane, Pamela D; Williams, Kelly P; Schoeniger, Joseph S; Sinha, Anupama; Lane, Todd W; Branda, Steven S
2013-04-01
Use of second generation sequencing (SGS) technologies for transcriptional profiling (RNA-Seq) has revolutionized transcriptomics, enabling measurement of RNA abundances with unprecedented specificity and sensitivity and the discovery of novel RNA species. Preparation of RNA-Seq libraries requires conversion of the RNA starting material into cDNA flanked by platform-specific adaptor sequences. Each of the published methods and commercial kits currently available for RNA-Seq library preparation suffers from at least one major drawback, including long processing times, large starting material requirements, uneven coverage, loss of strand information and high cost. We report the development of a new RNA-Seq library preparation technique that produces representative, strand-specific RNA-Seq libraries from small amounts of starting material in a fast, simple and cost-effective manner. Additionally, we have developed a new quantitative PCR-based assay for precisely determining the number of PCR cycles to perform for optimal enrichment of the final library, a key step in all SGS library preparation workflows.
Soenksen, L R; Kassis, T; Noh, M; Griffith, L G; Trumper, D L
2018-03-13
Precise fluid height sensing in open-channel microfluidics has long been a desirable feature for a wide range of applications. However, performing accurate measurements of the fluid level in small-scale reservoirs (<1 mL) has proven to be an elusive goal, especially if direct fluid-sensor contact needs to be avoided. In particular, gravity-driven systems used in several microfluidic applications to establish pressure gradients and impose flow remain open-loop and largely unmonitored due to these sensing limitations. Here we present an optimized self-shielded coplanar capacitive sensor design and automated control system to provide submillimeter fluid-height resolution (∼250 μm) and control of small-scale open reservoirs without the need for direct fluid contact. Results from testing and validation of our optimized sensor and system also suggest that accurate fluid height information can be used to robustly characterize, calibrate and dynamically control a range of microfluidic systems with complex pumping mechanisms, even in cell culture conditions. Capacitive sensing technology provides a scalable and cost-effective way to enable continuous monitoring and closed-loop feedback control of fluid volumes in small-scale gravity-dominated wells in a variety of microfluidic applications.
Dong, Shufang; Lu, Ke-Qian; Sun, Jian Qiao; Rudolph, Katherine
2006-03-01
In rehabilitation from neuromuscular trauma or injury, strengthening exercises are often prescribed by physical therapists to recover as much function as possible. Strengthening equipment used in clinical settings ranges from low-cost devices, such as sandbag weights or elastic bands, to large and expensive isotonic and isokinetic devices. The low-cost devices are incapable of measuring strength gains and apply resistance based on the lowest level of torque that is produced by a muscle group. Resistance that varies with joint angle can be achieved with isokinetic devices, in which angular velocity is held constant and variable torque is generated when the patient attempts to move faster than the device; these devices, however, are ineffective if a patient cannot generate torque rapidly. In this paper, we report the development of a versatile rehabilitation device that can be used to strengthen different muscle groups based on the torque-generating capability of the muscle, which changes with joint angle. The device is low cost, is smaller than other commercially available machines, and can be programmed to apply resistance that is unique to a particular patient and that will optimize strengthening. The core of the device, a damper with smart magnetorheological fluids, provides passive exercise force. A digital adaptive control is capable of regulating exercise force precisely, following the muscle strengthening profile prescribed by a physical therapist. The device could be programmed with artificial intelligence to dynamically adjust the target force profile to optimize rehabilitation effects. The device provides both isometric and isokinetic strength training and can be developed into a small, low-cost device that may be capable of providing optimal strengthening in the home.
Development of a residual waste collection structure based on the Constructal Theory
NASA Astrophysics Data System (ADS)
Al-Maalouf, George
Currently, more than 80% of waste management costs are attributed to the waste collection phase. In order to reduce these costs, one current solution resides in the implementation of waste transfer stations. In these stations, at least three collection vehicles transfer their loads into a larger hauling truck. This cost reduction is based on the principle of economy of scale applied to the transportation sector. This solution improves the efficiency of the system; nevertheless, it does not optimize it. Recent studies show that the compactor trucks used in the collection phase generate significant economic losses, mainly due to the frequent stops and the transportation to transfer stations often far from the collection area. This study suggests restructuring the waste collection process by dividing it into two phases: the collection phase, and the transportation to the transfer station. To achieve this, a deterministic theory, the Constructal Theory (CT), is used. The results show that, above a certain density threshold, the application of the CT minimizes energy losses in the system. In fact, the collection is optimal if it is done using a combination of low-capacity vehicles that collect door to door and transfer their loads into high-capacity trucks; these trucks then transport their loads to the transfer station. To minimize labor costs, this study proposes the use of a Cybernetic Transport System (CTS) as an automated collection vehicle to collect small amounts of waste. Finally, the proposed optimization method is part of a decentralized approach to the collection and treatment of waste. This allows the implementation of multi-process waste treatment facilities at territorial scale.
Wavelength band selection method for multispectral target detection.
Karlholm, Jörgen; Renhorn, Ingmar
2002-11-10
A framework is proposed for the selection of wavelength bands for multispectral sensors by use of hyperspectral reference data. Using results from detection theory, we derive a cost function that is minimized by a set of spectral bands optimal in terms of detection performance for discriminating between a class of small rare targets and clutter with a known spectral distribution. The method may be used, e.g., in the design of multispectral infrared search-and-track and electro-optical missile warning sensors, where a low false-alarm rate and a high detection probability for small targets against a clutter background are of critical importance, but where the required high frame rate prevents the use of hyperspectral sensors.
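A common proxy for the detection performance such a cost function encodes is the Mahalanobis distance between the target signature and the clutter distribution in the selected bands. A brute-force sketch under that proxy (the synthetic spectra and the exhaustive search are stand-ins for the paper's hyperspectral reference data and actual cost function):

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n_bands = 8
clutter = rng.normal(size=(500, n_bands))                   # synthetic clutter spectra
target = clutter.mean(0) + 0.8 * rng.normal(size=n_bands)   # synthetic target signature

def detectability(bands):
    idx = list(bands)
    X = clutter[:, idx]
    d = target[idx] - X.mean(0)
    # Mahalanobis distance of the target from the clutter mean in these bands
    return d @ np.linalg.inv(np.cov(X, rowvar=False)) @ d

best = max(itertools.combinations(range(n_bands), 3), key=detectability)
print(best)   # best 3-band subset under this criterion
```

For realistic band counts the exhaustive search would be replaced by a greedy or branch-and-bound selection, but the scoring idea is the same.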
Data on cost-optimal Nearly Zero Energy Buildings (NZEBs) across Europe.
D'Agostino, Delia; Parker, Danny
2018-04-01
This data article refers to the research paper "A model for the cost-optimal design of Nearly Zero Energy Buildings (NZEBs) in representative climates across Europe" [1]. The reported data deal with the design optimization of a residential building prototype located in representative European locations. The study focuses on the search for cost-optimal choices and efficiency measures in new buildings depending on the climate. The data linked within this article relate to the modelled building energy consumption, renewable production, potential energy savings, and costs. The data allow visualization of energy consumption before and after the optimization, the selected efficiency measures, costs, and renewable production. The reduction of electricity and natural gas consumption towards the NZEB target can be visualized together with incremental and cumulative costs in each location. Further data are available about building geometry, costs, CO2 emissions, envelope, materials, lighting, appliances and systems.
Stiffness optimization of non-linear elastic structures
Wallin, Mathias; Ivarsson, Niklas; Tortorelli, Daniel
2017-11-13
Our paper revisits stiffness optimization of non-linear elastic structures. Due to the non-linearity, several possible stiffness measures can be identified, and in this work conventional compliance, i.e. secant stiffness, designs are compared to tangent stiffness designs. The optimization problem is solved by the method of moving asymptotes, and the sensitivities are calculated using the adjoint method. For the tangent cost function it is shown that, although the objective involves the third derivative of the strain energy, an efficient formulation for calculating the sensitivity can be obtained. Loss of convergence due to large deformations in void regions is addressed by using a fictitious strain energy such that small-strain linear elasticity is approached in the void regions. We formulate a well-posed topology optimization problem by using restriction, which is achieved via a Helmholtz-type filter. The numerical examples provided show that for low load levels the designs obtained from the different stiffness measures coincide, whereas for large deformations significant differences are observed.
Stiffness optimization of non-linear elastic structures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wallin, Mathias; Ivarsson, Niklas; Tortorelli, Daniel
Our paper revisits stiffness optimization of non-linear elastic structures. Due to the non-linearity, several possible stiffness measures can be identified, and in this work conventional compliance, i.e. secant stiffness, designs are compared to tangent stiffness designs. The optimization problem is solved by the method of moving asymptotes, and the sensitivities are calculated using the adjoint method. For the tangent cost function it is shown that, although the objective involves the third derivative of the strain energy, an efficient formulation for calculating the sensitivity can be obtained. Loss of convergence due to large deformations in void regions is addressed by using a fictitious strain energy such that small-strain linear elasticity is approached in the void regions. We formulate a well-posed topology optimization problem by using restriction, which is achieved via a Helmholtz-type filter. The numerical examples provided show that for low load levels the designs obtained from the different stiffness measures coincide, whereas for large deformations significant differences are observed.
NASA Astrophysics Data System (ADS)
Ribeiro, André S.; Almeida, Miguel
2003-11-01
We propose a model of structural organization and intercommunication between all elements of every team involved in the development of a space probe, with the aim of improving efficiency. The structure is built to minimize the path length between any two elements, allowing fast information flow through the structure. Structures are usually very clustered inside each task team, but the links between teams are usually assured only by the heads of departments or by occasional meetings. This is responsible for a lack of information exchange between staff members of different teams. We propose the establishment of permanent small working groups of staff members from different teams, on a random but permanent basis. The elements chosen to establish such connections can be selected on a temporary basis, but the connections themselves must exist permanently, because only with permanent connections can information flow when needed. A few such random connections between staff members will diminish the average path length, between any two elements of any team, for information exchange. A small-world structure will emerge with low internal energy costs, which is the structure used by biological neuronal systems.
NASA Astrophysics Data System (ADS)
Ribeiro, André S.; Almeida, Miguel
2006-10-01
We propose a model of structural organization and intercommunication between all elements of every team involved in the development of a space probe, with the aim of improving efficiency. The structure is built to minimize the path length between any two elements, allowing fast information flow through the structure. Structures are usually very clustered inside each task team, but the links between teams are usually assured only by the heads of departments or by occasional meetings. This is responsible for a lack of information exchange between staff members of different teams. We propose the establishment of permanent small working groups of staff members from different teams, on a random but permanent basis. The elements chosen to establish such connections can be selected on a temporary basis, but the connections themselves must exist permanently, because only with permanent connections can information flow when needed. A few such random connections between staff members will diminish the average path length, between any two elements of any team, for information exchange. A small-world structure will emerge with low internal energy costs, which is the structure used by biological neuronal systems.
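The quantitative effect claimed in both versions of this abstract is easy to reproduce with a Watts-Strogatz model: rewiring even a few percent of the links of a clustered "teams-only" graph collapses the mean path length. A small illustration (the sizes and rewiring probability are arbitrary):

```python
import networkx as nx

n, k = 100, 6
# connected_watts_strogatz_graph guarantees the rewired graph stays connected
teams_only = nx.connected_watts_strogatz_graph(n, k, p=0.0, seed=1)   # clustered, no shortcuts
with_links = nx.connected_watts_strogatz_graph(n, k, p=0.05, seed=1)  # a few random cross-team links

print(nx.average_shortest_path_length(teams_only))  # grows roughly like n/(2k)
print(nx.average_shortest_path_length(with_links))  # drops sharply with few shortcuts
```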
NASA Technical Reports Server (NTRS)
Hinely, J. T., Jr.; Boyles, R. Q., Jr.
1979-01-01
Several candidate aircraft configurations were defined over the range of 1000 to 10,000 pounds payload and evaluated over a broad spectrum of agricultural missions. From these studies, baseline design points were selected at 3200 pounds payload for the small aircraft and 7500 pounds for the large aircraft. The small baseline aircraft utilizes a single turboprop powerplant while the large aircraft utilizes two turboprop powerplants. These configurations were optimized for wing loading, aspect ratio, and power loading to provide the best mission economics in representative missions. Wing loading of 20 lb/sq ft was selected for the small aircraft and 25 lb/sq ft for the large aircraft. Aspect ratio of 8 was selected for both aircraft. It was found that a 10% reduction in engine power from the original configurations provided improved mission economics for both aircraft by reducing the cost of the turboprop. Refined configurations incorporate a 675 HP engine in the small aircraft and two 688 HP engines in the large aircraft.
Impact of Airspace Charges on Transatlantic Aircraft Trajectories
NASA Technical Reports Server (NTRS)
Sridhar, Banavar; Ng, Hok K.; Linke, Florian; Chen, Neil Y.
2015-01-01
Aircraft flying over the airspace of different countries are subject to over-flight charges. These charges vary from country to country. Airspace charges, while necessary to support communication, navigation and surveillance services, may lead to aircraft flying routes longer than wind-optimal routes and producing additional carbon dioxide and other gaseous emissions. This paper develops an optimal route between city pairs by modifying the cost function to include an airspace cost whenever an aircraft flies through a controlled airspace without landing in or departing from that airspace. It is assumed that the aircraft flies the trajectory at a constant cruise altitude and constant speed. The computationally efficient optimal trajectory is derived by solving a non-linear optimal control problem. The operational strategies investigated in this study for minimizing aircraft fuel burn and emissions include flying fuel-optimal routes and flying cost-optimal routes that completely or partially avoid airspace charges en route. The results in this paper use traffic data for transatlantic flights during July 2012. The mean daily savings in over-flight charges, fuel cost and total operating cost during the period are 17.6 percent, 1.6 percent, and 2.4 percent respectively, along the cost-optimal trajectories. The transatlantic flights can potentially save $600,000 in fuel cost plus $360,000 in over-flight charges daily by flying the cost-optimal trajectories. In addition, aircraft emissions can potentially be reduced by 2,070 metric tons each day. The airport pairs and airspace regions that have the highest potential impacts due to airspace charges are identified for possible reduction of fuel burn and aircraft emissions for the transatlantic flights. The results also show that, as the fuel price increases, the difference between wind-optimal and cost-optimal routes shrinks. The additional fuel consumption is quantified using the 30 percent variation in fuel prices between March 2014 and March 2015.
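The core modification is simple to state: the route cost integrates fuel plus a charge term that switches on inside charging airspaces. On a discretized route graph, the same idea reduces to a shortest-path problem with augmented edge weights, as in this toy sketch (waypoints, distances, and charges are invented; the paper solves a continuous optimal control problem instead):

```python
import networkx as nx

def edge_cost(dist_nm, fuel_cost_per_nm, overflight_charge):
    """Leg cost = fuel for the leg + charge levied by the airspace it crosses."""
    return dist_nm * fuel_cost_per_nm + overflight_charge

G = nx.DiGraph()
# (from, to, distance in nm, over-flight charge for the leg) -- all invented
legs = [("JFK", "WPT_N", 600, 0.0), ("WPT_N", "LHR", 2500, 1500.0),   # shorter, charged
        ("JFK", "WPT_S", 750, 0.0), ("WPT_S", "LHR", 2550, 0.0)]      # longer, charge-free
for u, v, d, c in legs:
    G.add_edge(u, v, w=edge_cost(d, fuel_cost_per_nm=6.0, overflight_charge=c))

print(nx.shortest_path(G, "JFK", "LHR", weight="w"))  # trades fuel against charges
```

Raising the fuel-cost parameter shifts the optimum back toward the shorter, charged route, which is the qualitative effect the paper reports.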
van der Poel, C L; de Boer, J E; Reesink, H W; Sibinga, C T
1998-02-07
An invitational conference was held on September 11, 1996 by the Medical Advisory Commission to the Blood Transfusion Council of the Netherlands Red Cross, addressing the issue of 'maximal' versus 'optimal' safety measures for the blood supply. Invited were blood transfusion specialists, clinicians, representatives of patient interest groups, the Ministry and Inspectorate of Health, and members of parliament. Transfusion experts and clinicians were found to advocate an optimal course, following strategies of evidence-based medicine, cost-benefit analyses and medical technology assessment. Patient groups depending on blood products, such as haemophilia patients, would rather opt for maximal safety. Insurance companies would choose likewise, to exclude any risk if possible. Health care legal advisers would advise choosing optimal safety, but reserving funds to cover the difference with 'maximal' safety in case of litigation. Politicians and the general public would sooner choose maximal rather than optimal safety. The overall impression persists that, however small the statistical risk may be, in the eyes of many it is unacceptable. This view is very stubborn.
Electric Propulsion System Selection Process for Interplanetary Missions
NASA Technical Reports Server (NTRS)
Landau, Damon; Chase, James; Kowalkowski, Theresa; Oh, David; Randolph, Thomas; Sims, Jon; Timmerman, Paul
2008-01-01
The disparate design problems of selecting an electric propulsion system, launch vehicle, and flight time all have a significant impact on the cost and robustness of a mission. The effects of these system choices combine into a single optimization of the total mission cost, where the design constraint is a required spacecraft neutral (non-electric propulsion) mass. Cost-optimal systems are designed for a range of mass margins to examine how the optimal design varies with mass growth. The resulting cost-optimal designs are compared with results generated via mass optimization methods. Additional optimizations with continuous system parameters address the impact on mission cost due to discrete sets of launch vehicle, power, and specific impulse. The examined mission set comprises a near-Earth asteroid sample return, multiple main belt asteroid rendezvous, comet rendezvous, comet sample return, and a mission to Saturn.
Solution for a bipartite Euclidean traveling-salesman problem in one dimension
NASA Astrophysics Data System (ADS)
Caracciolo, Sergio; Di Gioacchino, Andrea; Gherardi, Marco; Malatesta, Enrico M.
2018-05-01
The traveling-salesman problem is one of the most studied combinatorial optimization problems, because of the simplicity in its statement and the difficulty in its solution. We characterize the optimal cycle for every convex and increasing cost function when the points are thrown independently and with an identical probability distribution in a compact interval. We compute the average optimal cost for every number of points when the distance function is the square of the Euclidean distance. We also show that the average optimal cost is not a self-averaging quantity by explicitly computing the variance of its distribution in the thermodynamic limit. Moreover, we prove that the cost of the optimal cycle is not smaller than twice the cost of the optimal assignment of the same set of points. Interestingly, this bound is saturated in the thermodynamic limit.
Solution for a bipartite Euclidean traveling-salesman problem in one dimension.
Caracciolo, Sergio; Di Gioacchino, Andrea; Gherardi, Marco; Malatesta, Enrico M
2018-05-01
The traveling-salesman problem is one of the most studied combinatorial optimization problems, because of the simplicity in its statement and the difficulty in its solution. We characterize the optimal cycle for every convex and increasing cost function when the points are thrown independently and with an identical probability distribution in a compact interval. We compute the average optimal cost for every number of points when the distance function is the square of the Euclidean distance. We also show that the average optimal cost is not a self-averaging quantity by explicitly computing the variance of its distribution in the thermodynamic limit. Moreover, we prove that the cost of the optimal cycle is not smaller than twice the cost of the optimal assignment of the same set of points. Interestingly, this bound is saturated in the thermodynamic limit.
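Both the ordered-matching structure of the optimal assignment and the factor-two bound between cycle and assignment costs can be checked numerically on small instances. A brute-force sketch (n kept tiny so enumerating alternating cycles is feasible; the sorted-to-sorted matching is optimal here because the squared distance is convex and increasing):

```python
import itertools
import random

random.seed(0)
n = 5
red = sorted(random.random() for _ in range(n))
blue = sorted(random.random() for _ in range(n))

def assignment_cost():
    # Ordered matching is optimal for convex increasing costs in one dimension.
    return sum((r - b) ** 2 for r, b in zip(red, blue))

def optimal_cycle_cost():
    best = float("inf")
    for bp in itertools.permutations(range(n)):          # blue visiting order
        for rp in itertools.permutations(range(1, n)):   # red order, first red fixed
            order_r = (0,) + rp
            c = sum((red[order_r[i]] - blue[bp[i]]) ** 2 +
                    (blue[bp[i]] - red[order_r[(i + 1) % n]]) ** 2
                    for i in range(n))
            best = min(best, c)
    return best

print(optimal_cycle_cost(), 2 * assignment_cost())   # first >= second, per the paper
```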
Optimal Path Determination for Flying Vehicle to Search an Object
NASA Astrophysics Data System (ADS)
Heru Tjahjana, R.; Heri Soelistyo U, R.; Ratnasari, L.; Irawanto, B.
2018-01-01
In this paper, a method to determine the optimal path for a flying vehicle searching for an object is proposed. The background of the paper is the control of an air vehicle that must locate an object; optimal path determination is one of the most popular problems in optimization. This paper describes a control design model for a flying vehicle searching for an object, and focuses on the optimal path used in the search. An optimal control model is used to make the vehicle move along the optimal path; if the vehicle moves along the optimal path, then the path to reach the searched object is also optimal. The cost functional is one of the most important elements of optimal control design; here, the cost functional is chosen so that the air vehicle reaches the object as soon as possible. The axis reference of the flying vehicle uses the N-E-D (North-East-Down) coordinate system. The results of this paper are theorems, proved analytically, stating that the chosen cost functional makes the control optimal and makes the vehicle move along the optimal path. It is also shown that the cost functional used is convex; this convexity guarantees the existence of the optimal control. The paper also presents simulations showing an optimal path for a flying vehicle searching for an object. The optimization method used to find the optimal control and the optimal vehicle path is the Pontryagin Minimum Principle.
Optimal Cooling of High Purity Germanium Spectrometers for Missions to Planets and Moons
NASA Astrophysics Data System (ADS)
Chernenko, A.; Kostenko, V.; Konev, S.; Rybkin, B.; Paschin, A.; Prokopenko, I.
2004-04-01
Gamma-ray spectrometers based on high purity germanium (HPGe) detectors are ultimately sensitive instruments for composition studies of the surfaces of planets and moons. However, they require deep cooling, well below 120 K, for the entire duration of a space mission, and this challenges the feasibility of such instruments in the era of small and cost-efficient missions. In this paper we summarise our experience in theoretical and experimental studies of optimal cryogenic cooling of gamma-ray spectrometers based on HPGe detectors, in order to find out how efficient, light and compact these instruments could be, provided that technologies such as cryogenic heat pipe diodes (HPDs), efficient thermal insulation and efficient miniature cryocoolers are used.
Seller's dilemma due to social interactions between customers
NASA Astrophysics Data System (ADS)
Gordon, Mirta B.; Nadal, Jean-Pierre; Phan, Denis; Vannimenus, Jean
2005-10-01
In this paper, we consider a discrete choice model where heterogeneous agents are subject to mutual influences. We explore some consequences on the market's behaviour, in the simplest case of a uniform willingness to pay distribution. We exhibit a first-order phase transition in the profit optimization by the monopolist: if the social influence is strong enough, there is a regime where, if the mean willingness to pay increases, or if the production costs decrease, the optimal solution for the monopolist jumps from a solution with a high price and a small number of buyers, to a solution with a low price and a large number of buyers. Depending on the path of prices adjustments by the monopolist, simulations show hysteretic effects on the fraction of buyers.
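The first-order transition can be seen in a few lines: solve the buyers' fraction self-consistently for each posted price, then let the monopolist pick the profit-maximizing price. A minimal sketch assuming uniform idiosyncratic willingness to pay on [0, 1], social coupling J, and fixed-point iteration from a pessimistic start (all parameter values are illustrative, not the paper's):

```python
import numpy as np

def buyer_fraction(price, J, iters=200):
    eta = 0.0                                    # fraction of buyers, pessimistic start
    for _ in range(iters):
        # an agent buys if (own willingness) + J * eta >= price
        eta = np.clip(1.0 - (price - J * eta), 0.0, 1.0)
    return eta

def profit(price, J, unit_cost=0.1):
    return (price - unit_cost) * buyer_fraction(price, J)

prices = np.linspace(0.0, 2.0, 401)
for J in (0.2, 1.5):                             # weak vs strong social influence
    best = prices[np.argmax([profit(p, J) for p in prices])]
    print(J, best, buyer_fraction(best, J))
```

For strong enough coupling the map has multiple equilibria, and sweeping the unit cost or the mean willingness to pay makes the optimal (price, buyer-fraction) pair jump discontinuously, which is the hysteresis the paper's simulations exhibit.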
NASA Astrophysics Data System (ADS)
Wahyuda; Santosa, Budi; Rusdiansyah, Ahmad
2018-04-01
Deregulation of the electricity market requires coordination between parties to synchronize the optimization of the production side (power stations) and the transport side (transmission). The electricity supply chain presented in this article is designed to facilitate this coordination. Generally, the production side is optimized with a price-based dynamic economic dispatch (PBDED) model, while the transmission side is optimized with a multi-echelon distribution model, and the two optimizations are done separately. This article proposes a joint model of PBDED and multi-echelon distribution for the combined optimization of production and transmission. This combined optimization is important because changes in electricity demand on the customer side cause changes on the production side that automatically alter the transmission path as well. Transmission gives rise to two cost components: first, the cost of losses; second, the cost of using the transmission network (wheeling transactions). Costs due to losses are calculated based on ohmic losses, while the cost of using transmission lines is calculated with the MW-mile method. As a result, this method is able to provide a best-allocation analysis for electricity transactions, as well as emission levels in power generation and cost analysis. For the calculation of transmission costs, the Reverse MW-mile method produces a lower cost than the Absolute MW-mile method.
Cost-effectiveness Analysis with Influence Diagrams.
Arias, M; Díez, F J
2015-01-01
Cost-effectiveness analysis (CEA) is used increasingly in medicine to determine whether the health benefit of an intervention is worth the economic cost. Decision trees, the standard decision modeling technique for non-temporal domains, can only perform CEA for very small problems. Our objective is to develop a method for CEA in problems involving several dozen variables. We explain how to build influence diagrams (IDs) that explicitly represent cost and effectiveness, and we propose an algorithm for evaluating cost-effectiveness IDs directly, i.e., without expanding an equivalent decision tree. The evaluation of an ID returns a set of intervals for the willingness to pay - separated by cost-effectiveness thresholds - and, for each interval, the cost, the effectiveness, and the optimal intervention. The algorithm that evaluates the ID directly is in general much more efficient than the brute-force method, which is in turn more efficient than the expansion of an equivalent decision tree. Using OpenMarkov, an open-source software tool that implements this algorithm, we have been able to perform CEAs on several IDs whose equivalent decision trees contain millions of branches. IDs can thus perform CEA on large problems that cannot be analyzed with decision trees.
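The decision rule returned by such an evaluation can be mimicked on a toy problem: for each willingness-to-pay value lambda, the optimal intervention maximizes the net monetary benefit NMB = lambda * effectiveness - cost, and the interval boundaries fall at incremental cost-effectiveness ratios. A sketch with invented (cost, effectiveness) pairs:

```python
# (cost in $, effectiveness in QALYs) -- illustrative numbers only
interventions = {
    "no treatment": (0.0, 10.0),
    "therapy A":    (5000.0, 10.8),
    "therapy B":    (12000.0, 11.1),
}

def optimal_intervention(lam):
    """Pick the intervention maximizing net monetary benefit at willingness-to-pay lam."""
    return max(interventions,
               key=lambda name: lam * interventions[name][1] - interventions[name][0])

for lam in (1_000, 10_000, 30_000, 60_000):   # $ per QALY
    print(lam, optimal_intervention(lam))
```

What the ID-based algorithm adds over this enumeration is the ability to compute the same intervals when the interventions and outcomes are embedded in a network of several dozen chance and decision variables.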
Low-Cost, Portable, Multi-Wall Virtual Reality
NASA Technical Reports Server (NTRS)
Miller, Samuel A.; Misch, Noah J.; Dalton, Aaron J.
2005-01-01
Virtual reality systems make compelling outreach displays, but some such systems, like the CAVE, have design features that make their use for that purpose inconvenient. In the case of the CAVE, the equipment is difficult to disassemble, transport, and reassemble, and typically CAVEs can only be afforded by large-budget research facilities. We implemented a system like the CAVE that costs less than $30,000, weighs about 500 pounds, and fits into a fifteen-passenger van. A team of six people have unpacked, assembled, and calibrated the system in less than two hours. This cost reduction versus similar virtual-reality systems stems from the unique approach we took to stereoscopic projection. We used an assembly of optical chopper wheels and commodity LCD projectors to create true active stereo at less than a fifth of the cost of comparable active-stereo technologies. The screen and frame design also optimized portability; the frame assembles in minutes with only two fasteners, and both it and the screen pack into small bundles for easy and secure shipment.
Fuzzy control of battery chargers
NASA Astrophysics Data System (ADS)
Aldridge, Jack
1996-03-01
The increasing reliance on battery power for portable terrestrial purposes, such as portable tools, portable computers, and telecommunications, provides motivation to optimize the battery charging process with respect to speed of charging and charging cycle lifetime of the battery. Fuzzy control, implemented on a small microcomputer, optimizes charging in the presence of nonlinear effects and large uncertainty in the voltage vs. charge state characteristics for the battery. Use of a small microcontroller makes possible a small, capable, and affordable package for the charger. Microcontroller-based chargers provide improved performance by adjusting both charging voltage and charging current during the entire charging process depending on a current estimate of the state of charge of the battery. The estimate is derived from the zero-current voltage of the battery and the temperature and their rates of change. All of these quantities are uncertain due to the variation in condition between the individual cells in a battery, the rapid and nonlinear dependence of the fundamental electrochemistry on the internal temperature, and the placement of a single temperature sensor within the battery package. While monitoring the individual cell voltages and temperatures would be desirable, cost and complexity considerations preclude the practice. NASA has developed considerable technology in batteries for supplying significant amounts of power for spacecraft and in fuzzy control techniques for the space applications. In this paper, we describe how we are using both technologies to build an optimal charger prototype as a precursor to a commercial version.
Laber, Eric B; Zhao, Ying-Qi; Regh, Todd; Davidian, Marie; Tsiatis, Anastasios; Stanford, Joseph B; Zeng, Donglin; Song, Rui; Kosorok, Michael R
2016-04-15
A personalized treatment strategy formalizes evidence-based treatment selection by mapping patient information to a recommended treatment. Personalized treatment strategies can produce better patient outcomes while reducing cost and treatment burden. Thus, among clinical and intervention scientists, there is a growing interest in conducting randomized clinical trials when one of the primary aims is estimation of a personalized treatment strategy. However, at present, there are no appropriate sample size formulae to assist in the design of such a trial. Furthermore, because the sampling distribution of the estimated outcome under an estimated optimal treatment strategy can be highly sensitive to small perturbations in the underlying generative model, sample size calculations based on standard (uncorrected) asymptotic approximations or computer simulations may not be reliable. We offer a simple and robust method for powering a single stage, two-armed randomized clinical trial when the primary aim is estimating the optimal single stage personalized treatment strategy. The proposed method is based on inverting a plugin projection confidence interval and is thereby regular and robust to small perturbations of the underlying generative model. The proposed method requires elicitation of two clinically meaningful parameters from clinical scientists and uses data from a small pilot study to estimate nuisance parameters, which are not easily elicited. The method performs well in simulated experiments and is illustrated using data from a pilot study of time to conception and fertility awareness.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pope, W.L.; Pines, H.S.; Silvester, L.F.
1978-03-01
A new heat exchanger program, SIZEHX, is described. This program allows single-step multiparameter cost optimizations on single-phase or supercritical exchanger arrays with variable properties and arbitrary fouling for a multitude of matrix configurations and fluids. SIZEHX uses a simplified form of Tinker's method for characterization of shell-side performance; the Starling-modified BWR equation for thermodynamic properties of hydrocarbons; and transport properties developed by NBS. Results of four-parameter cost optimizations on exchangers for specific geothermal applications are included. The relative mix of capital cost, pumping cost, and brine cost ($/Btu) is determined for geothermal exchangers, illustrating the invariant nature of the optimal cost distribution for fixed unit costs.
Optimizing Barrier Removal to Restore Connectivity in Utah's Weber Basin
NASA Astrophysics Data System (ADS)
Kraft, M.; Null, S. E.
2016-12-01
Instream barriers, such as dams, culverts and diversions are economically important for water supply, but negatively affect river ecosystems and disrupt hydrologic processes. Removal of uneconomical and aging in-stream barriers to improve habitat connectivity is increasingly used to restore river connectivity. Most past barrier removal projects focused on individual barriers using a score-and-rank technique, ignoring cumulative change from multiple, spatially-connected barrier removals. Similarly, most water supply models optimize either human water use or aquatic connectivity, failing to holistically represent human and environmental benefits. In this study, a dual objective optimization model identified in-stream barriers that impede aquatic habitat connectivity for trout, using streamflow, temperature, and channel gradient as indicators of aquatic habitat suitability. Water scarcity costs are minimized using agricultural and urban economic penalty functions to incorporate water supply benefits and a budget monetizes costs of removing small barriers like culverts and road crossings. The optimization model developed is applied to a case study in Utah's Weber basin to prioritize removal of the most environmentally harmful barriers, while maintaining human water uses. The dual objective solution basis was developed to quantify and graphically visualize tradeoffs between connected quality-weighted habitat for Bonneville cutthroat trout and economic water uses. Modeled results include a spectrum of barrier removal alternatives based on budget and quality-weighted reconnected habitat that can be communicated with local stakeholders. This research will help prioritize barrier removals and future restoration decisions. The modeling approach expands current barrier removal optimization methods by explicitly including economic and environmental water uses.
Designing optimal greenhouse gas monitoring networks for Australia
NASA Astrophysics Data System (ADS)
Ziehn, T.; Law, R. M.; Rayner, P. J.; Roff, G.
2016-01-01
Atmospheric transport inversion is commonly used to infer greenhouse gas (GHG) flux estimates from concentration measurements. The optimal location of ground-based observing stations that supply these measurements can be determined by network design. Here, we use a Lagrangian particle dispersion model (LPDM) in reverse mode together with a Bayesian inverse modelling framework to derive optimal GHG observing networks for Australia. This extends the network design for carbon dioxide (CO2) performed by Ziehn et al. (2014) to also minimise the uncertainty on the flux estimates for methane (CH4) and nitrous oxide (N2O), both individually and in a combined network using multiple objectives. Optimal networks are generated by adding up to five new stations to the base network, which is defined as two existing stations, Cape Grim and Gunn Point, in southern and northern Australia respectively. The individual networks for CO2, CH4 and N2O and the combined observing network show large similarities because the flux uncertainties for each GHG are dominated by regions of biologically productive land. There is little penalty, in terms of flux uncertainty reduction, for the combined network compared to individually designed networks. The location of the stations in the combined network is sensitive to variations in the assumed data uncertainty across locations. A simple assessment of economic costs has been included in our network design approach, considering both establishment and maintenance costs. Our results suggest that, while site logistics change the optimal network, there is only a small impact on the flux uncertainty reductions achieved with increasing network size.
Development of Fully Automated Low-Cost Immunoassay System for Research Applications.
Wang, Guochun; Das, Champak; Ledden, Bradley; Sun, Qian; Nguyen, Chien
2017-10-01
Enzyme-linked immunosorbent assay (ELISA) automation for routine operation in a small research environment would be very attractive. A portable fully automated low-cost immunoassay system was designed, developed, and evaluated with several protein analytes. It features disposable capillary columns as the reaction sites and uses real-time calibration for improved accuracy. It reduces the overall assay time to less than 75 min with the ability of easy adaptation of new testing targets. The running cost is extremely low due to the nature of automation, as well as reduced material requirements. Details about system configuration, components selection, disposable fabrication, system assembly, and operation are reported. The performance of the system was initially established with a rabbit immunoglobulin G (IgG) assay, and an example of assay adaptation with an interleukin 6 (IL6) assay is shown. This system is ideal for research use, but could work for broader testing applications with further optimization.
LSSA large area silicon sheet task continuous Czochralski process development
NASA Technical Reports Server (NTRS)
Rea, S. N.
1978-01-01
A Czochralski crystal growing furnace was converted to a continuous-growth facility by installation of a premelter to provide molten silicon flow into the primary crucible. The basic furnace is operational, and several trial crystals were grown in the batch mode. Numerous premelter configurations were tested, both in laboratory-scale equipment and in the actual furnace. The best arrangement tested to date is a vertical, cylindrical graphite heater containing a small fused-silica test-tube liner in which the incoming silicon is melted and flows into the primary crucible. Economic modeling of the continuous Czochralski process indicates that, for 10 cm diameter crystal, 100 kg furnace runs of four or five crystals each are near-optimal. Costs tend to asymptote at the 100 kg level, so little additional cost improvement occurs for larger runs. For these conditions, a crystal cost in equivalent wafer area of around $20/sq m, exclusive of polysilicon and slicing, was obtained.
NASA Astrophysics Data System (ADS)
Dharmaseelan, Anoop; Adistambha, Keyne D.
2015-05-01
Fuel cost accounts for 40 percent of the operating cost of an airline. Fuel cost can be minimized by planning a flight on optimized routes. The routes can be optimized by searching for the best connections based on the cost function defined by the airline. The most common algorithm used to optimize route search is Dijkstra's. Dijkstra's algorithm produces a static result, and the time taken for the search is relatively long. This paper experiments with a new algorithm to optimize route search that combines the principles of simulated annealing and the genetic algorithm. The experimental route-search results presented are shown to be computationally fast and accurate compared with timings from the genetic algorithm. The new algorithm is well suited to the random routing feature that is highly sought by many regional operators.
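The annealing half of such a hybrid is compact enough to sketch: perturb the current route, always accept improvements, and accept worse routes with probability exp(-delta/T) under a cooling schedule (the 2-opt move, cooling rate, and cost-matrix interface below are assumptions, not the paper's exact operators):

```python
import math
import random

def route_cost(route, cost):
    """Total cost of a route; cost is a dict-of-dicts of leg costs."""
    return sum(cost[a][b] for a, b in zip(route, route[1:]))

def anneal(route, cost, T=1.0, alpha=0.995, steps=20000):
    """Simulated annealing over routes; assumes at least 4 waypoints."""
    cur = best = route[:]
    for _ in range(steps):
        i, j = sorted(random.sample(range(1, len(cur) - 1), 2))  # keep endpoints fixed
        cand = cur[:i] + cur[i:j + 1][::-1] + cur[j + 1:]        # 2-opt style reversal
        delta = route_cost(cand, cost) - route_cost(cur, cost)
        if delta < 0 or random.random() < math.exp(-delta / T):
            cur = cand
            if route_cost(cur, cost) < route_cost(best, cost):
                best = cur
        T *= alpha                                               # geometric cooling
    return best
```

In the hybrid scheme, routes evolved by a genetic population would periodically be refined by this kind of annealing loop, combining global recombination with local stochastic descent.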
Optimal Sizing of Energy Storage for Community Microgrids Considering Building Thermal Dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Guodong; Li, Zhi; Starke, Michael R.
This paper proposes an optimization model for the optimal sizing of energy storage in community microgrids considering the building thermal dynamics and customer comfort preference. The proposed model minimizes the annualized cost of the community microgrid, including energy storage investment, purchased energy cost, demand charge, energy storage degradation cost, voluntary load shedding cost and the cost associated with customer discomfort due to room temperature deviation. The decision variables are the power and energy capacity of invested energy storage. In particular, we assume the heating, ventilation and air-conditioning (HVAC) systems can be scheduled intelligently by the microgrid central controller while maintaining the indoor temperature in the comfort range set by customers. For this purpose, the detailed thermal dynamic characteristics of buildings have been integrated into the optimization model. Numerical simulation shows significant cost reduction by the proposed model. The impacts of various costs on the optimal solution are investigated by sensitivity analysis.
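Stripped of the HVAC and comfort terms, the core sizing trade-off can be written as a small convex program: choose power and energy capacity to balance annualized investment against avoided energy purchases. A toy sketch over one representative day (load shape, tariff, and annualized unit costs are all invented; the paper's model adds degradation, demand charges, load shedding, and building thermal dynamics):

```python
import numpy as np
import cvxpy as cp

T = 24
load = 50 + 20 * np.sin(np.linspace(0, 2 * np.pi, T))   # kW, illustrative profile
price = 0.10 + 0.08 * (load > 60)                        # $/kWh, peak-priced hours

P = cp.Variable(T)                 # storage discharge (+) / charge (-), kW
soc = cp.Variable(T + 1)           # state of charge, kWh
Pcap = cp.Variable(nonneg=True)    # power rating to buy, kW
Ecap = cp.Variable(nonneg=True)    # energy rating to buy, kWh

constraints = [soc[1:] == soc[:-1] - P,    # ideal storage, 1-hour steps
               soc[0] == soc[T],           # the daily cycle closes
               soc >= 0, soc <= Ecap,
               cp.abs(P) <= Pcap,
               load - P >= 0]              # no export to the grid

daily_energy_cost = price @ (load - P)
capacity_cost = 30 * Pcap / 365 + 10 * Ecap / 365   # assumed $/kW-yr, $/kWh-yr per day
cp.Problem(cp.Minimize(daily_energy_cost + capacity_cost), constraints).solve()
print(Pcap.value, Ecap.value)
```

Whether any storage is bought at all hinges on the spread between peak and off-peak prices versus the annualized capacity costs, which is the sensitivity the paper probes.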
Discrete-time Markovian-jump linear quadratic optimal control
NASA Technical Reports Server (NTRS)
Chizeck, H. J.; Willsky, A. S.; Castanon, D.
1986-01-01
This paper is concerned with the optimal control of discrete-time linear systems that possess randomly jumping parameters described by finite-state Markov processes. For problems having quadratic costs and perfect observations, the optimal control laws and expected costs-to-go can be precomputed from a set of coupled Riccati-like matrix difference equations. Necessary and sufficient conditions are derived for the existence of optimal constant control laws which stabilize the controlled system as the time horizon becomes infinite, with finite optimal expected cost.
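The precomputation this abstract describes is a backward iteration of coupled Riccati-like equations, one per Markov mode, coupled through the transition-probability-weighted expectation of the cost-to-go matrices. A sketch for two modes (the system matrices and transition probabilities are illustrative):

```python
import numpy as np

A = [np.array([[1.0, 0.1], [0.0, 1.0]]),
     np.array([[0.9, 0.2], [0.0, 1.1]])]     # mode-dependent dynamics
B = [np.array([[0.0], [1.0]])] * 2
Q, R = np.eye(2), np.eye(1)
Pi = np.array([[0.9, 0.1], [0.3, 0.7]])      # mode transition probabilities

P = [np.zeros((2, 2)), np.zeros((2, 2))]     # cost-to-go matrices, one per mode
for _ in range(500):                         # iterate toward steady state
    E = [Pi[i, 0] * P[0] + Pi[i, 1] * P[1] for i in range(2)]   # coupling term
    P = [Q + A[i].T @ E[i] @ A[i]
         - A[i].T @ E[i] @ B[i]
           @ np.linalg.inv(R + B[i].T @ E[i] @ B[i])
           @ B[i].T @ E[i] @ A[i]
         for i in range(2)]

E = [Pi[i, 0] * P[0] + Pi[i, 1] * P[1] for i in range(2)]
K = [np.linalg.inv(R + B[i].T @ E[i] @ B[i]) @ B[i].T @ E[i] @ A[i] for i in range(2)]
print(K[0], K[1])   # mode-dependent feedback gains, u = -K[mode] @ x
```

Convergence of this iteration as the horizon grows is exactly the existence question that the paper's necessary and sufficient conditions settle.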
Application of synthetic biology for production of chemicals in yeast Saccharomyces cerevisiae.
Li, Mingji; Borodina, Irina
2015-02-01
Synthetic biology and metabolic engineering enable the generation of novel cell factories that efficiently convert renewable feedstocks into biofuels and bulk and fine chemicals, thus creating the basis for a biosustainable economy independent of fossil resources. While over a hundred proof-of-concept chemicals have been made in yeast, only a very small fraction of those has reached commercial-scale production so far. The limiting factor is the high research cost associated with the development of a robust cell factory that can produce the desired chemical at high titer, rate, and yield. Synthetic biology has the potential to bring down this cost by improving our ability to predictably engineer biological systems. This review highlights synthetic biology applications for the design, assembly, and optimization of non-native biochemical pathways in the baker's yeast Saccharomyces cerevisiae. We describe computational tools for the prediction of biochemical pathways, molecular biology methods for the assembly of DNA parts into pathways and for introducing the pathways into the host, and finally approaches for optimizing the performance of the introduced pathways.
Multicategory nets of single-layer perceptrons: complexity and sample-size issues.
Raudys, Sarunas; Kybartas, Rimantas; Zavadskas, Edmundas Kazimieras
2010-05-01
The standard cost function of multicategory single-layer perceptrons (SLPs) does not minimize the classification error rate. In order to reduce classification error, it is necessary to: 1) reject the traditional cost function, 2) obtain near-optimal pairwise linear classifiers by specially organized SLP training and optimal stopping, and 3) fuse their decisions properly. To obtain better classification in unbalanced training-set situations, we introduce an unbalance-correcting term. It was found that fusion based on the Kullback-Leibler (K-L) distance and the Wu-Lin-Weng (WLW) method result in approximately the same performance in situations where sample sizes are relatively small. The explanation for this observation is the theoretically known fact that excessive minimization of inexact criteria becomes harmful at times. Comprehensive comparative investigations of six real-world pattern recognition (PR) problems demonstrated that SLP-based pairwise classifiers are comparable to, and as often as not outperform, linear support vector (SV) classifiers in moderate-dimensional situations. Colored noise injection, used to design pseudovalidation sets, proves to be a powerful tool for facilitating finite-sample problems in moderate-dimensional PR tasks.
Dynamic cellular manufacturing system considering machine failure and workload balance
NASA Astrophysics Data System (ADS)
Rabbani, Masoud; Farrokhi-Asl, Hamed; Ravanbakhsh, Mohammad
2018-02-01
Machines are a key element in the production system, and their failure causes irreparable effects in terms of cost and time. In this paper, a new multi-objective mathematical model for a dynamic cellular manufacturing system (DCMS) is provided with consideration of machine reliability and alternative process routes. In this dynamic model, we attempt to resolve the problem of integrated family (part/machine cell) formation as well as the operators' assignment to the cells. The first objective minimizes the costs associated with the DCMS. The second objective optimizes labor utilization and, finally, a minimum value of the variance of workload between different cells is obtained by the third objective function. Due to the NP-hard nature of the cellular manufacturing problem, the model is initially validated with the GAMS software on small-sized problems, and then solved by two well-known meta-heuristic methods, the non-dominated sorting genetic algorithm and multi-objective particle swarm optimization, on large-scale problems. Finally, the results of the two algorithms are compared with respect to five different comparison metrics.
Imani, Somaye; Niksokhan, Mohammad Hossein; Jamshidi, Shervin; Abbaspour, Karim C
2017-07-01
The economic concerns of low-income farmers are barriers to nutrient abatement policies for eutrophication control in surface waters. This study brings up a perspective that focuses on integrating multiple-pollutant discharge permit markets with farm management practices. This aims to identify a more economically motivated waste load allocation (WLA) for non-point sources (NPS). For this purpose, we chose the small basin of Zrebar Lake in western Iran and used the soil and water assessment tool (SWAT) for modeling. The export coefficients (ECs), effectiveness of best management practices (BMPs), and crop yields were calculated by using this software. These variables show that low-income farmers can hardly afford to invest in BMPs in a typical WLA. Conversely, a discharge permit market presents a more cost-effective solution. This method saves 64% in total abatement costs and motivates farmers by offering economic benefits. A market analysis revealed that nitrogen permits mostly cover the trades with the optimal price ranging from $6 to $30 per kilogram. However, phosphorous permits are limited for trading, and their price exceeds $60 per kilogram. This approach also emphasizes the establishment of a regional institution for market monitoring, dynamic pricing, fair fund reallocation, giving information to participants, and ensuring their income. By these sets of strategies, a WLA on the brink of failure can turn into a cost-effective and sustainable policy for eutrophication control in small basins.
NASA Astrophysics Data System (ADS)
Zhu, Kai-Jian; Li, Jun-Feng; Baoyin, He-Xi
2010-01-01
In the case of an emergency like the Wenchuan earthquake, it is impossible to observe a given target on Earth by immediately launching new satellites. There is an urgent need for efficient satellite scheduling within a limited time period, so we must find a way to reasonably utilize the existing satellites to rapidly image the affected area during a short time period. Generally, the main consideration in orbit design is satellite coverage, with the subsatellite nadir point as a standard of reference. Two factors must be taken into consideration simultaneously in orbit design: the maximum observation coverage time and the minimum orbital transfer fuel cost. The local time of visiting the given observation sites must satisfy the solar radiation requirement. When calculating the operational orbit elements as the optimal parameters to be evaluated, we obtain the minimum objective function by comparing the results derived from primer vector theory with those derived from the Hohmann transfer, because an operational orbit for observing the disaster area with impulsive maneuvers is considered in this paper. Primer vector theory is utilized to optimize the transfer trajectory with three impulses, and the Hohmann transfer is utilized for the coplanar and small-inclination non-coplanar cases. Finally, we applied this method in a simulation of the rescue mission at Wenchuan city. The results of optimizing the orbit design with a hybrid PSO and DE algorithm show that primer vector and Hohmann transfer theory are effective methods for multi-objective orbit optimization.
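The Hohmann transfer used as the comparison baseline is a two-impulse maneuver whose delta-v has a closed form for coplanar circular orbits. A sketch of that baseline (Earth's gravitational parameter; the radii are an arbitrary example, not values from the Wenchuan mission study):

```python
import math

MU = 398600.4418   # Earth's gravitational parameter, km^3/s^2

def hohmann_dv(r1, r2):
    """Total delta-v (km/s) for a Hohmann transfer between circular orbits r1 -> r2."""
    dv1 = math.sqrt(MU / r1) * (math.sqrt(2 * r2 / (r1 + r2)) - 1)   # enter transfer ellipse
    dv2 = math.sqrt(MU / r2) * (1 - math.sqrt(2 * r1 / (r1 + r2)))   # circularize at target
    return abs(dv1) + abs(dv2)

print(hohmann_dv(6778.0, 7178.0))   # e.g. roughly 400 km to 800 km altitude
```

The paper's optimizer keeps whichever is cheaper between this two-impulse solution and the three-impulse primer-vector solution for each candidate operational orbit.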
Moyle, Richard L.; Carvalhais, Lilia C.; Pretorius, Lara-Simone; Nowak, Ekaterina; Subramaniam, Gayathery; Dalton-Morgan, Jessica; Schenk, Peer M.
2017-01-01
Studies investigating the action of small RNAs on computationally predicted target genes require some form of experimental validation. Classical molecular methods of validating microRNA action on target genes are laborious, while approaches that tag predicted target sequences to qualitative reporter genes encounter technical limitations. The aim of this study was to address the challenge of experimentally validating large numbers of computationally predicted microRNA-target transcript interactions using an optimized, quantitative, cost-effective, and scalable approach. The presented method combines transient expression via agroinfiltration of Nicotiana benthamiana leaves with a quantitative dual luciferase reporter system, where firefly luciferase is used to report the microRNA-target sequence interaction and Renilla luciferase is used as an internal standard to normalize expression between replicates. We report the appropriate concentration of N. benthamiana leaf extracts and dilution factor to apply in order to avoid inhibition of firefly LUC activity. Furthermore, the optimal ratio of microRNA precursor expression construct to reporter construct and duration of the incubation period post-agroinfiltration were determined. The optimized dual luciferase assay provides an efficient, repeatable and scalable method to validate and quantify microRNA action on predicted target sequences. The optimized assay was used to validate five predicted targets of rice microRNA miR529b, with as few as six technical replicates. The assay can be extended to assess other small RNA-target sequence interactions, including assessing the functionality of an artificial miRNA or an RNAi construct on a targeted sequence. PMID:28979287
High Efficiency Turbine Generator for Instream Electric Power Production
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kelecy, Patrick M.
Concerns over global warming due to carbon emissions have spurred interest in renewable energy alternatives such as hydroelectric, wind, solar, geothermal, and biomass. Of all of these, hydroelectric power offers perhaps the greatest potential for supplying a significant portion of our nation's energy needs. To realize this potential, however, the technology needs to expand beyond traditional dam-based installations (for which there are relatively few suitable remaining sites) into the vast number of open-flow installations potentially available in rivers, canals, tidal streams and open-ocean sites. To help promote this expansion, this project focused on the development of an advanced, vertical-axis, hydrokinetic power generator (HPG) technology for open-flow applications. Two key features investigated for this were (1) an active blade pitch control system that provides independent control of the turbine blades, and (2) a low-profile, low-speed, high-torque electric generator suitable for direct coupling to the turbine (no gearbox). Both systems are based on a unique, disk-shaped, high-performance electromechanical design that is potentially low cost, compact, lightweight, and efficient. Blade actuator and generator designs were developed and optimized for this application. They were then incorporated into several HPG designs based on an optimized H-Darrieus turbine structure that was also developed. Three HPG sizes were explored (10 kW, 25 kW and 50 kW) to assess scalability. For each size, two HPG versions were developed: one with the electric generator mounted above the turbine and one with it integrated into the turbine body. Each provided certain benefits and illustrated the versatility of this technology. Design and performance specifications were calculated and comparisons were made with commercial hydrokinetic turbine products. Based on these comparisons, this technology was smaller and significantly lighter (by up to 50%) in the higher power ratings. A preliminary cost analysis was performed for these designs. Costs were determined at prototyping (1-10), small (100), and medium (1000) production volumes. Installed costs were then estimated and compared to wind and solar energy products of similar rating. Based on that comparison, the installed cost of this technology is expected to be similar in small production volumes and lower in medium (or greater) production volumes. Finally, the levelized cost of energy (LCOE) was calculated for the 50 kW HPG and compared to other renewables (solar, wind, small- and large-scale hydro) based on published data. The LCOE estimated for this system ($31-$48/MWh) was found to be quite competitive with other renewables, especially if higher production volumes can be achieved. Based on these findings, this technology should be successful if commercialized and should promote the expansion of river-based power generation.
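The LCOE comparison at the end follows the standard levelization recipe: annualize capital with a capital recovery factor, add O&M, and divide by annual energy. A sketch with hypothetical inputs (the report's $31-$48/MWh range rests on its own cost and production assumptions, not these numbers):

```python
def lcoe(capex, om_per_year, rated_kw, capacity_factor, rate=0.07, years=20):
    """Levelized cost of energy in $/MWh."""
    crf = rate * (1 + rate) ** years / ((1 + rate) ** years - 1)  # capital recovery factor
    annual_mwh = rated_kw * capacity_factor * 8760 / 1000.0
    return (capex * crf + om_per_year) / annual_mwh

# Hypothetical 50 kW unit at a good river or tidal site
print(lcoe(capex=120_000.0, om_per_year=2_000.0, rated_kw=50.0, capacity_factor=0.45))
```

Volume production moves the capex term, which is why the report's LCOE tightens toward the low end of its range at medium or greater production volumes.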
Position reporting system using small satellites
NASA Technical Reports Server (NTRS)
Pavesi, B.; Rondinelli, G.; Graziani, F.
1990-01-01
A system able to provide position reporting and monitoring services for mobile applications represents a natural complement to the Global Positioning System (GPS) navigation system. The system architecture is defined on the basis of the communications requirements derived from user needs, allowing maximum flexibility in the use of channel capacity and a very simple, low-cost terminal. The payload is sketched, outlining the block modularity and the use of qualified hardware. The global system capacity is also derived. The spacecraft characteristics are defined on the basis of the payload requirements. A small bus optimized for the Ariane IV and Delta II launch vehicles and based on the modularity concept is presented. The design takes full advantage of each launcher with a common basic bus, or bus elements, suitable for mass production.
Cost and fuel consumption per nautical mile for two engine jet transports using OPTIM and TRAGEN
NASA Technical Reports Server (NTRS)
Wiggs, J. F.
1982-01-01
The cost and fuel consumption per nautical mile for two engine jet transports are computed using OPTIM and TRAGEN. The savings in fuel and direct operating costs per nautical mile for each of the different types of optimal trajectories over a standard profile are shown.
Liu, Derong; Wang, Ding; Wang, Fei-Yue; Li, Hongliang; Yang, Xiong
2014-12-01
In this paper, the infinite horizon optimal robust guaranteed cost control of continuous-time uncertain nonlinear systems is investigated using neural-network-based online solution of Hamilton-Jacobi-Bellman (HJB) equation. By establishing an appropriate bounded function and defining a modified cost function, the optimal robust guaranteed cost control problem is transformed into an optimal control problem. It can be observed that the optimal cost function of the nominal system is nothing but the optimal guaranteed cost of the original uncertain system. A critic neural network is constructed to facilitate the solution of the modified HJB equation corresponding to the nominal system. More importantly, an additional stabilizing term is introduced for helping to verify the stability, which reinforces the updating process of the weight vector and reduces the requirement of an initial stabilizing control. The uniform ultimate boundedness of the closed-loop system is analyzed by using the Lyapunov approach as well. Two simulation examples are provided to verify the effectiveness of the present control approach.
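For orientation, a generic statement of the equation class involved (a hedged sketch in standard ADP notation, not the authors' exact formulation): for nominal dynamics \( \dot{x} = f(x) + g(x)u \) and a modified cost integrand \( r(x,u) = Q(x) + u^{\top}Ru + \rho(x) \), where \( \rho(x) \) absorbs the uncertainty bound, the critic network approximates the value function \( V \) solving

$$ 0 = \min_{u}\Big[ r(x,u) + \nabla V(x)^{\top}\big(f(x) + g(x)u\big) \Big], \qquad u^{*}(x) = -\tfrac{1}{2}\,R^{-1} g(x)^{\top}\nabla V(x). $$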
NASA Astrophysics Data System (ADS)
Widhiarso, Wahyu; Rosyidi, Cucuk Nur
2018-02-01
Minimizing production cost in a manufacturing company will increase the profit of the company. The cutting parameters affect the total processing time, which in turn affects the production cost of the machining process. Besides affecting the production cost and processing time, the cutting parameters also affect the environment. An optimization model is therefore needed to determine the optimum cutting parameters. In this paper, we develop a multi-objective optimization model to minimize the production cost and the environmental impact in the CNC turning process. Cutting speed and feed rate serve as the decision variables. The constraints considered are cutting speed, feed rate, cutting force, output power, and surface roughness. The environmental impact is converted from the environmental burden by using eco-indicator 99. A numerical example is given to show the implementation of the model, which is solved using the OptQuest tool of Oracle Crystal Ball software. The results of the optimization indicate that the model can be used to optimize the cutting parameters to minimize both the production cost and the environmental impact.
Manju, Md Abu; Candel, Math J J M; Berger, Martijn P F
2014-07-10
In this paper, the optimal sample sizes at the cluster and person levels for each of two treatment arms are obtained for cluster randomized trials where the cost-effectiveness of treatments on a continuous scale is studied. The optimal sample sizes maximize the efficiency or power for a given budget or minimize the budget for a given efficiency or power. Optimal sample sizes require information on the intra-cluster correlations (ICCs) for effects and costs, the correlations between costs and effects at individual and cluster levels, the ratio of the variance of effects translated into costs to the variance of the costs (the variance ratio), sampling and measuring costs, and the budget. When planning a study, information on the model parameters is usually not available. To overcome this local optimality problem, the current paper also presents maximin sample sizes. The maximin sample sizes turn out to be rather robust against misspecifying the correlation between costs and effects at the cluster and individual levels but may lose much efficiency when misspecifying the variance ratio. The robustness of the maximin sample sizes against misspecifying the ICCs depends on the variance ratio. The maximin sample sizes are robust under misspecification of the ICC for costs for realistic values of the variance ratio greater than one but not robust under misspecification of the ICC for effects. Finally, we show how to calculate optimal or maximin sample sizes that yield sufficient power for a test on the cost-effectiveness of an intervention.
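A hedged sketch of the budget structure such designs typically assume (generic symbols, not necessarily the paper's notation): with \( k_t \) clusters of \( n_t \) persons in arm \( t \), cluster-level sampling cost \( c_t \) and person-level measurement cost \( s_t \), the designs are constrained by

$$ B \;=\; \sum_{t=1}^{2} k_t\,\big(c_t + n_t\,s_t\big), $$

and the optimal \( (k_t, n_t) \) maximize the efficiency of the cost-effectiveness estimator subject to this budget, or minimize \( B \) subject to an efficiency or power target.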
Lagorce, David; Pencheva, Tania; Villoutreix, Bruno O; Miteva, Maria A
2009-11-13
Discovery of new bioactive molecules that could enter drug discovery programs or that could serve as chemical probes is a very complex and costly endeavor. Structure-based and ligand-based in silico screening approaches are nowadays extensively used to complement experimental screening approaches in order to increase the effectiveness of the process and to facilitate the screening of thousands or millions of small molecules against a biomolecular target. Both in silico screening methods require as input a suitable chemical compound collection, and most often the 3D structures of the small molecules have to be generated since compounds are usually delivered in 1D SMILES, CANSMILES or in 2D SDF formats. Here, we describe the new open source program DG-AMMOS which allows the generation of the 3D conformation of small molecules using Distance Geometry and their energy minimization via Automated Molecular Mechanics Optimization. The program is validated on the Astex dataset, the ChemBridge Diversity database and on a number of small molecules with known crystal structures extracted from the Cambridge Structural Database. A comparison with the free program Balloon and the well-known commercial program Omega, both of which also generate 3D structures of small molecules, is carried out. The results show that the new free program DG-AMMOS is a very efficient 3D structure generator engine. DG-AMMOS provides fast, automated and reliable access to the generation of 3D conformations of small molecules and facilitates the preparation of a compound collection prior to high-throughput virtual screening computations. The validation of DG-AMMOS on several different datasets proves that the generated structures are generally of equal quality or sometimes better than structures obtained by the other tested methods.
Tasnim, Farah; Phan, Derek; Toh, Yi-Chin; Yu, Hanry
2015-11-01
Significant efforts have been invested into the differentiation of stem cells into functional hepatocyte-like cells that can be used for cell therapy, disease modeling and drug screening. Most of these efforts have been concentrated on the use of growth factors to recapitulate developmental signals under in vitro conditions. Using small molecules instead of growth factors would provide an attractive alternative since small molecules are cell-permeable and cheaper than growth factors. We have developed a protocol for the differentiation of human embryonic stem cells into hepatocyte-like cells using a predominantly small molecule-based approach (SM-Hep). This 3-step differentiation strategy involves the use of optimized concentrations of LY294002 and bromo-indirubin-3'-oxime (BIO) for the generation of definitive endoderm; sodium butyrate and dimethyl sulfoxide (DMSO) for the generation of hepatoblasts; and SB431542 for differentiation into hepatocyte-like cells. Activin A is the only growth factor required in this protocol. Our results showed that SM-Hep were morphologically and functionally similar to or better than the hepatocytes derived from growth factor-induced differentiation (GF-Hep) in terms of expression of hepatic markers, urea and albumin production and cytochrome P450 (CYP1A2 and CYP3A4) activities. Cell viability assays following treatment with the paradigm hepatotoxicants Acetaminophen, Chlorpromazine, Diclofenac, Digoxin, Quinidine and Troglitazone showed that their sensitivity to these drugs was similar to human primary hepatocytes (PHHs). Using SM-Hep would result in 67% and 81% cost reductions compared to GF-Hep and PHHs, respectively. Therefore, SM-Hep can serve as a robust and cost effective replacement for PHHs for drug screening and development. Copyright © 2015 Elsevier Ltd. All rights reserved.
Feasibility of a low-cost sounding rockoon platform
NASA Astrophysics Data System (ADS)
Okninski, Adam; Raurell, Daniel Sors; Mitre, Alberto Rodriguez
2016-10-01
This paper presents the results of analyses and simulations for the design of a small sounding platform, dedicated to conducting scientific atmospheric research and capable of reaching the von Kármán line by means of a rocket launched from it. While recent private initiatives have opted for the air launch concept to send small payloads to Low Earth Orbit, several historical projects considered the use of balloons as the first stage of orbital and suborbital platforms, known as rockoons. Both of these approaches enable the minimization of drag losses. This paper addresses the issue of utilizing stratospheric balloons as launch platforms to conduct sub-orbital rocket flights. Research and simulations have been conducted to demonstrate the capabilities and feasibility of this approach. A small sounding solid-propulsion rocket using commercial off-the-shelf hardware is proposed. Its configuration and design are analyzed with special attention given to the propulsion system and its possible mission-oriented optimization. The cost effectiveness of this approach is discussed. Performance calculation outcomes are shown. Additionally, sensitivity study results for different design parameters are given. Minimum mass rocket configurations for various payload requirements are presented. The ultimate aim is to enhance low-cost experimentation while maintaining high system mobility and simplicity of operations. Easier and more affordable access to a space-like environment can be achieved with this system, thus allowing for widespread outreach of space science and technology knowledge. This project is based on earlier experience of the authors in the LEEM Association of the Technical University of Madrid and the Polish Small Sounding Rocket Program developed at the Institute of Aviation and Warsaw University of Technology in Poland.
Assessment of the availability of technology for trauma care in Nepal.
Shah, Mihir Tejanshu; Bhattarai, Suraj; Lamichhane, Norman; Joshi, Arpita; LaBarre, Paul; Joshipura, Manjul; Mock, Charles
2015-09-01
We sought to assess the availability of technology-related equipment for trauma care in Nepal and to identify factors leading to optimal availability as well as deficiencies. We also sought to identify potential solutions addressing the deficits in terms of health systems management and product development. Thirty-two items for large hospitals and sixteen items for small hospitals related to the technological aspect of trauma care were selected from the World Health Organization's Guidelines for Essential Trauma Care for the current study. Fifty-six small and 29 large hospitals were assessed for availability of these items in the study area. Site visits included direct inspection and interviews with administrative, clinical, and bioengineering staff. Deficiencies of many specific items were noted, including many that were inexpensive and which could have been easily supplied. Shortage of electricity was identified as a major infrastructural deficiency present in all parts of the country. Deficiencies of pulse oximetry and ventilators were observed in most hospitals, attributed in large part to frequent breakdowns and long downtimes because of a lack of vendor-based service contracts or in-house maintenance staff. Sub-optimal oxygen supply was identified as a major and frequent deficiency contributing to disruption of services. All equipment was imported except for a small percentage of suction machines and haemoglobinometers. The study identified a range of items which were deficient and whose availability could be improved cost-effectively and sustainably by better planning and organisation. The electricity deficit has been dealt with successfully in a few hospitals via direct feeder lines and installation of solar panels; wider implementation of these methods would help solve a large portion of the technological deficiencies. From a health systems management viewpoint, strengthening procurement and stocking of low cost items, especially in remote parts of the country, is needed. From a product development viewpoint, there is a need for robust pulse-oximeters and ventilators that are lower cost, have longer durability and need fewer repairs. Increasing capabilities for local manufacture is another potential method to increase availability of a range of equipment and spare parts. Copyright © 2015 Elsevier Ltd. All rights reserved.
Advanced Structural Optimization Under Consideration of Cost Tracking
NASA Astrophysics Data System (ADS)
Zell, D.; Link, T.; Bickelmaier, S.; Albinger, J.; Weikert, S.; Cremaschi, F.; Wiegand, A.
2014-06-01
In order to improve the design process of launcher configurations in the early development phase, the software Multidisciplinary Optimization (MDO) was developed. The tool combines efficient software packages such as Optimal Design Investigations (ODIN) for structural optimization and Aerospace Trajectory Optimization Software (ASTOS) for trajectory and vehicle design optimization for a defined payload and mission. The present paper focuses on the integration and validation of ODIN. ODIN enables the user to optimize typical axisymmetric structures by sizing the stiffening designs for strength and stability while minimizing the structural mass. In addition, a fully automatic finite element model (FEM) generator module creates ready-to-run FEM models of a complete stage or launcher assembly. Cost tracking and prospective improvements concerning cost optimization are indicated.
DESIGN OF A SIMPLE SLOW COOLING DEVICE FOR CRYOPRESERVATION OF SMALL BIOLOGICAL SAMPLES.
de Paz, Leonardo Juan; Robert, Maria Celeste; Graf, Daniel Adolfo; Guibert, Edgardo Elvio; Rodriguez, Joaquin Valentin
2015-01-01
Slow cooling is a cryopreservation methodology in which samples are cooled to their storage temperature at controlled cooling rates. We describe the design, construction and evaluation of a simple and low-cost device for slow cooling of small biological samples. The device was constructed based on Pye's freezer idea. A Dewar flask filled with liquid nitrogen was used as the heat sink, and a methanol bath containing the sample was cooled at constant rates using copper bars as heat conductors. The sample temperature can be lowered at a controlled cooling rate (ranging from 0.4°C/min to 6.0°C/min) down to approximately -60°C, after which the sample can be stored at lower temperatures. An example involving the cryopreservation of the Neuro-2A cell line showed a marked influence of cooling rate on post-preservation cell viability, with optimal values between 2.6 and 4.6°C/min. The cooling device proved to be a valuable alternative to more expensive systems, allowing the assessment of different cooling rates to determine the optimal condition for cryopreservation of such samples.
A Decision-making Model for a Two-stage Production-delivery System in SCM Environment
NASA Astrophysics Data System (ADS)
Feng, Ding-Zhong; Yamashiro, Mitsuo
A decision-making model is developed for an optimal production policy in a two-stage production-delivery system that incorporates a fixed-quantity supply of finished goods to a buyer at a fixed interval of time. First, a general cost model is formulated considering both the supplier (of raw materials) and the buyer (of finished products). Then an optimal solution to the problem is derived on the basis of the cost model. Using the proposed model and its optimal solution, one can determine the optimal production lot size for each stage, the optimal number of transport operations for semi-finished goods, and the optimal quantity of semi-finished goods transported each time to meet the lumpy demand of consumers. Also, we examine the sensitivity of raw materials ordering and production lot size to changes in ordering cost, transportation cost and manufacturing setup cost. A pragmatic computational approach is proposed to obtain integer-valued approximate solutions for operational situations. Finally, we give some numerical examples.
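As a hedged sketch of the cost structure such two-stage models typically minimize (generic symbols, not the authors' exact formulation): with demand rate \( D \), raw material ordering cost \( A_r \) per order of lot size \( Q_r \), manufacturing setup cost \( A_s \) per production lot \( Q_s \), a fixed charge \( F \) for each of the \( m \) shipments of semi-finished goods per lot, and average inventories \( \bar{I} \) held at unit costs \( h \),

$$ TC(Q_r, Q_s, m) \;=\; \frac{A_r D}{Q_r} \;+\; \frac{A_s D}{Q_s} \;+\; \frac{m F D}{Q_s} \;+\; h_r \bar{I}_r(Q_r) \;+\; h_s \bar{I}_s(Q_s, m), $$

with the integer \( m \) and the lot sizes chosen to minimize \( TC \).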
DOE Office of Scientific and Technical Information (OSTI.GOV)
Witt, Adam M; Smith, Brennan T
Small hydropower plants supply reliable renewable energy to the grid, though few new plants have been developed in the United States over the past few decades due to complex environmental challenges and poor project economics. This paper describes the current landscape of small hydropower development and introduces a new approach to facility design that co-optimizes the extraction of hydroelectric power from a stream with other important environmental functions such as fish, sediment, and recreational passage. The approach considers hydropower facilities as an integrated system of standardized interlocking modules, designed to sustain stream functions, generate power, and interface with the streambed. It is hypothesized that this modular eco-design approach, when guided by input from the broader small hydropower stakeholder community, can lead to cost savings across the facility, reduced licensing and approval timelines, and ultimately, to enhanced resiliency through improved environmental performance over the lifetime of the project.
NASA Astrophysics Data System (ADS)
Helbing, Dirk; Ammoser, Hendrik; Kühnert, Christian
2006-04-01
In this paper we discuss the problem of information losses in organizations and how they depend on the organization network structure. Hierarchical networks are an optimal organization structure only when the failure rate of nodes or links is negligible. Otherwise, redundant information links are important to reduce the risk of information losses and the related costs. However, as redundant information links are expensive, the optimal organization structure is not a fully connected one. It rather depends on the failure rate. We suggest that sidelinks and temporary, adaptive shortcuts can improve the information flows considerably by generating small-world effects. This calls for modified organization structures to cope with today's challenges of businesses and administrations, in particular, to successfully respond to crises or disasters.
Directed differentiation of embryonic stem cells using a bead-based combinatorial screening method.
Tarunina, Marina; Hernandez, Diana; Johnson, Christopher J; Rybtsov, Stanislav; Ramathas, Vidya; Jeyakumar, Mylvaganam; Watson, Thomas; Hook, Lilian; Medvinsky, Alexander; Mason, Chris; Choo, Yen
2014-01-01
We have developed a rapid, bead-based combinatorial screening method to determine optimal combinations of variables that direct stem cell differentiation to produce known or novel cell types having pre-determined characteristics. Here we describe three experiments comprising stepwise exposure of mouse or human embryonic cells to 10,000 combinations of serum-free differentiation media, through which we discovered multiple novel, efficient and robust protocols to generate a number of specific hematopoietic and neural lineages. We further demonstrate that the technology can be used to optimize existing protocols in order to substitute costly growth factors with bioactive small molecules and/or increase cell yield, and to identify in vitro conditions for the production of rare developmental intermediates such as an embryonic lymphoid progenitor cell that has not previously been reported.
An overview of molecular acceptors for organic solar cells
NASA Astrophysics Data System (ADS)
Hudhomme, Piétrick
2013-07-01
Organic solar cells (OSCs) have gained serious attention during the last decade and are now considered one of the future photovoltaic technologies for low-cost power production. The initial goal of attaining 10% power conversion efficiency has now become a reality thanks to the development of new materials and impressive work to understand, control and optimize the structure and morphology of the device. But most of the effort devoted to the development of new materials has concerned the optimization of the donor material, with less attention paid to acceptors, which to date remain dominated by fullerenes and their derivatives. This short review presents the progress in the use of non-fullerene small molecules and fullerene-based acceptors with the aim of evaluating the challenge for the next generation of acceptors in organic photovoltaics.
Optimal shielding design for minimum materials cost or mass
Woolley, Robert D.
2015-12-02
The mathematical underpinnings of cost-optimal radiation shielding designs based on an extension of optimal control theory are presented, a heuristic algorithm to iteratively solve the resulting optimal design equations is suggested, and computational results for a simple test case are discussed. A typical radiation shielding design problem can have infinitely many solutions, all satisfying the problem's specified set of radiation attenuation requirements. Each such design has its own total materials cost. For a design to be optimal, no admissible change in its deployment of shielding materials can result in a lower cost. This applies in particular to very small changes, which can be restated using the calculus of variations as the Euler-Lagrange equations. Furthermore, the associated Hamiltonian function and application of Pontryagin's theorem lead to conditions for a shield to be optimal.
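In outline (a hedged restatement of the standard machinery, not the paper's derivation): with shield depth \( x \) playing the role of time, radiation field variables \( \varphi(x) \) as states, and the local material deployment \( u(x) \) as the control, minimizing the total materials cost \( J = \int c\big(u(x)\big)\,dx \) subject to attenuation dynamics \( \varphi' = f(\varphi, u) \) yields the Hamiltonian conditions

$$ H(\varphi, \lambda, u) = c(u) + \lambda^{\top} f(\varphi, u), \qquad \lambda' = -\,\partial H / \partial \varphi, \qquad u^{*}(x) = \arg\min_{u\ \text{admissible}} H\big(\varphi(x), \lambda(x), u\big), $$

consistent with the Euler-Lagrange and Pontryagin conditions the abstract cites.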
Optimally Stopped Optimization
NASA Astrophysics Data System (ADS)
Vinci, Walter; Lidar, Daniel
We combine the fields of heuristic optimization and optimal stopping. We propose a strategy for benchmarking randomized optimization algorithms that minimizes the expected total cost for obtaining a good solution with an optimal number of calls to the solver. To do so, rather than letting the objective function alone define a cost to be minimized, we introduce a further cost-per-call of the algorithm. We show that this problem can be formulated using optimal stopping theory. The expected cost is a flexible figure of merit for benchmarking probabilistic solvers that can be computed when the optimal solution is not known, and that avoids the biases and arbitrariness that affect other measures. The optimal stopping formulation of benchmarking directly leads to a real-time, optimal-utilization strategy for probabilistic optimizers with practical impact. We apply our formulation to benchmark the performance of a D-Wave 2X quantum annealer and the HFS solver, a specialized classical heuristic algorithm designed for low tree-width graphs. On a set of frustrated-loop instances with planted solutions defined on up to N = 1098 variables, the D-Wave device is one to two orders of magnitude faster than the HFS solver.
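A minimal sketch of the figure of merit described above (illustrative stand-in solver and cost-per-call, not the benchmark code used in the work): the expected total cost of making n independent solver calls is the expected best objective found plus n times the cost per call, and the optimal stopping point minimizes that sum.

```python
# Hedged sketch: choose the number of independent solver calls n minimizing
#   E[best objective after n calls] + c_call * n,
# estimated empirically from repeated runs of a randomized solver.
import random

def solver_call():
    # Stand-in randomized solver: returns an objective value (lower is better).
    return random.gauss(10.0, 2.0) + abs(random.gauss(0.0, 3.0))

c_call = 0.05          # assumed cost per solver call, in objective units
runs, max_n = 2000, 60

# samples[r][k] = objective returned by call k+1 in run r
samples = [[solver_call() for _ in range(max_n)] for _ in range(runs)]
expected_total = []
for n in range(1, max_n + 1):
    best = [min(run[:n]) for run in samples]          # best-so-far after n calls
    expected_total.append(sum(best) / runs + c_call * n)

n_star = min(range(max_n), key=lambda i: expected_total[i]) + 1
print(f"optimal number of calls: {n_star}, "
      f"expected total cost: {expected_total[n_star - 1]:.3f}")
```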
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Guodong; Ollis, Thomas B.; Xiao, Bailu
2017-10-10
This paper proposes a Mixed Integer Conic Programming (MICP) model for community microgrids that considers the network operational constraints and building thermal dynamics. The proposed optimization model optimizes not only the operating cost, including fuel cost, purchasing cost, battery degradation cost, voluntary load shedding cost and the cost associated with customer discomfort due to room temperature deviation from the set point, but also several performance indices, including voltage deviation, network power loss and power factor at the Point of Common Coupling (PCC). In particular, the detailed thermal dynamic model of buildings is integrated into the distribution optimal power flow (D-OPF) model for the optimal operation of community microgrids. The heating, ventilation and air-conditioning (HVAC) systems can be scheduled intelligently to reduce the electricity cost while maintaining the indoor temperature in the comfort range set by customers. Numerical simulation results show the effectiveness of the proposed model; significant savings in electricity cost can be achieved while the network operational constraints are satisfied.
Chassin, David P.; Behboodi, Sahand; Djilali, Ned
2018-01-28
This article proposes a system-wide optimal resource dispatch strategy that enables a shift from a primarily energy cost-based approach, to a strategy using simultaneous price signals for energy, power and ramping behavior. A formal method to compute the optimal sub-hourly power trajectory is derived for a system when the price of energy and ramping are both significant. Optimal control functions are obtained in both time and frequency domains, and a discrete-time solution suitable for periodic feedback control systems is presented. The method is applied to North America Western Interconnection for the planning year 2024, and it is shown that an optimal dispatch strategy that simultaneously considers both the cost of energy and the cost of ramping leads to significant cost savings in systems with high levels of renewable generation: the savings exceed 25% of the total system operating cost for a 50% renewables scenario.
Synthesizing epidemiological and economic optima for control of immunizing infections.
Klepac, Petra; Laxminarayan, Ramanan; Grenfell, Bryan T
2011-08-23
Epidemic theory predicts that the vaccination threshold required to interrupt local transmission of an immunizing infection like measles depends only on the basic reproductive number and hence transmission rates. When the search for optimal strategies is expanded to incorporate economic constraints, the optimum for disease control in a single population is determined by relative costs of infection and control, rather than transmission rates. Adding a spatial dimension, which precludes local elimination unless it can be achieved globally, can reduce or increase optimal vaccination levels depending on the balance of costs and benefits. For weakly coupled populations, local optimal strategies agree with the global cost-effective strategy; however, asymmetries in costs can lead to divergent control optima in more strongly coupled systems--in particular, strong regional differences in costs of vaccination can preclude local elimination even when elimination is locally optimal. Under certain conditions, it is locally optimal to share vaccination resources with other populations.
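The purely epidemiological threshold contrasted above is the classical critical vaccination coverage (a standard result, quoted here for reference):

$$ p_c = 1 - \frac{1}{R_0}, $$

whereas the economic optimum instead balances the marginal cost of additional vaccination against the marginal cost of the infections it averts.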
Optimization of fixed-range trajectories for supersonic transport aircraft
NASA Astrophysics Data System (ADS)
Windhorst, Robert Dennis
1999-11-01
This thesis develops near-optimal guidance laws that generate minimum fuel, time, or direct operating cost fixed-range trajectories for supersonic transport aircraft. The approach uses singular perturbation techniques to time-scale de-couple the equations of motion into three sets of dynamics, two of which are analyzed in the main body of this thesis and one of which is analyzed in the Appendix. The two-point-boundary-value-problems obtained by application of the maximum principle to the dynamic systems are solved using the method of matched asymptotic expansions. Finally, the two solutions are combined using the matching principle and an additive composition rule to form a uniformly valid approximation of the full fixed-range trajectory. The approach is used on two different time-scale formulations. The first holds weight constant, and the second allows weight and range dynamics to propagate on the same time-scale. Solutions for the first formulation are only carried out to zero order in the small parameter, while solutions for the second formulation are carried out to first order. Calculations for a HSCT design were made to illustrate the method. Results show that the minimum fuel trajectory consists of three segments: a minimum fuel energy-climb, a cruise-climb, and a minimum drag glide. The minimum time trajectory also has three segments: a maximum dynamic pressure ascent, a constant altitude cruise, and a maximum dynamic pressure glide. The minimum direct operating cost trajectory is an optimal combination of the two. For realistic costs of fuel and flight time, the minimum direct operating cost trajectory is very similar to the minimum fuel trajectory. Moreover, the HSCT has three local optimum cruise speeds, with the globally optimum cruise point at the highest allowable speed, if range is sufficiently long. The final range of the trajectory determines which locally optimal speed is best. Ranges of 500 to 6,000 nautical miles, subsonic and supersonic mixed flight, and varying fuel efficiency cases are analyzed. Finally, the payload-range curve of the HSCT design is determined.
Analysis of efficiency of waste reverse logistics for recycling.
Veiga, Marcelo M
2013-10-01
Brazil is an agricultural country with the highest pesticide consumption in the world. Historically, pesticide packaging has not been disposed of properly. A federal law requires the chemical industry to provide proper waste management for pesticide-related products. A reverse logistics program was implemented, which has been hailed as a great success. This program was designed to target large rural communities, where economies of scale can take place. Over the last 10 years, the recovery rate has been very poor in most small rural communities. The objective of this study was to analyze the case of this compulsory reverse logistics program for pesticide packaging under the recent Brazilian Waste Management Policy, which enforces recycling as the main waste management solution. The results of this exploratory research indicate that despite its aggregate success, the reverse logistics program is not efficient for small rural communities. It is not possible to use the same logistic strategy for small and large communities. The results also indicate that recycling might not be the optimal solution, especially in developing countries with unsatisfactory recycling infrastructure and high transportation costs. Postponement and speculation strategies could be applied to improve reverse logistics performance. In most compulsory reverse logistics programs, there is no economical solution. Companies should comply with the law by ranking cost-effective alternatives.
CNV detection method optimized for high-resolution arrayCGH by normality test.
Ahn, Jaegyoon; Yoon, Youngmi; Park, Chihyun; Park, Sanghyun
2012-04-01
High-resolution arrayCGH platform makes it possible to detect small gains and losses which previously could not be measured. However, current CNV detection tools fitted to early low-resolution data are not applicable to larger high-resolution data. When CNV detection tools are applied to high-resolution data, they suffer from high false-positives, which increases validation cost. Existing CNV detection tools also require optimal parameter values. In most cases, obtaining these values is a difficult task. This study developed a CNV detection algorithm that is optimized for high-resolution arrayCGH data. This tool operates up to 1500 times faster than existing tools on a high-resolution arrayCGH of whole human chromosomes which has 42 million probes whose average length is 50 bases, while preserving false positive/negative rates. The algorithm also uses a normality test, thereby removing the need for optimal parameters. To our knowledge, this is the first formulation for CNV detecting problems that results in a near-linear empirical overall complexity for real high-resolution data. Copyright © 2012 Elsevier Ltd. All rights reserved.
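The abstract does not spell out the algorithm, but a hedged illustration of the role a normality test can play (not the published method): a window of log2-ratios that spans a copy-number breakpoint mixes two signal levels and therefore fails a normality test, so candidate breakpoints can be flagged without a user-tuned amplitude threshold.

```python
# Hedged illustration only (not the published algorithm): windows spanning a
# copy-number breakpoint mix two log2-ratio levels and fail a normality test.
import random
from scipy import stats

random.seed(1)
log2 = [random.gauss(0.0, 0.15) for _ in range(20_000)]
log2[8_100:8_700] = [random.gauss(0.8, 0.15) for _ in range(600)]  # planted gain

win = 400  # probes per window (illustrative choice)
for start in range(0, len(log2), win):
    w = log2[start:start + win]
    if len(w) < 20:
        continue
    _, p = stats.shapiro(w)  # single normal population? (Shapiro-Wilk)
    if p < 1e-4:             # mixture detected: window straddles a breakpoint
        print(f"candidate CNV boundary near probes {start}-{start + len(w) - 1} "
              f"(p = {p:.2e})")
```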
Dávid-Barrett, T.; Dunbar, R. I. M.
2013-01-01
Sociality is primarily a coordination problem. However, the social (or communication) complexity hypothesis suggests that the kinds of information that can be acquired and processed may limit the size and/or complexity of social groups that a species can maintain. We use an agent-based model to test the hypothesis that the complexity of information processed influences the computational demands involved. We show that successive increases in the kinds of information processed allow organisms to break through the glass ceilings that otherwise limit the size of social groups: larger groups can only be achieved at the cost of more sophisticated kinds of information processing that are disadvantageous when optimal group size is small. These results simultaneously support both the social brain and the social complexity hypotheses. PMID:23804623
Superconducting light generator for large offshore wind turbines
NASA Astrophysics Data System (ADS)
Sanz, S.; Arlaban, T.; Manzanas, R.; Tropeano, M.; Funke, R.; Kováč, P.; Yang, Y.; Neumann, H.; Mondesert, B.
2014-05-01
The offshore wind market demands higher power ratings and more reliable turbines in order to optimize capital and operational costs. These demands are difficult to meet with conventional generator technologies due to a significant weight and cost increase with scaling up. Superconducting materials therefore appear as a prominent solution for wind generators, based on their capacity to carry high current densities with very small losses, which permits copper conductors to be replaced efficiently, mainly in the rotor field coils. However, state-of-the-art superconducting generator concepts still seem to be expensive and technically challenging for the marine environment. This paper describes a 10 MW class novel direct-drive superconducting generator, based on MgB2 wires and a modular cryogen-free cooling system, which has been specifically designed for the needs of the offshore wind industry.
ABLE project: Development of an advanced lead-acid storage system for autonomous PV installations
NASA Astrophysics Data System (ADS)
Lemaire-Potteau, Elisabeth; Vallvé, Xavier; Pavlov, Detchko; Papazov, G.; Borg, Nico Van der; Sarrau, Jean-François
In the advanced battery for low-cost renewable energy (ABLE) project, the partners have developed an advanced storage system for small and medium-size PV systems. It is composed of an innovative valve-regulated lead-acid (VRLA) battery, optimised for reliability and manufacturing cost, and an integrated regulator, for optimal battery management and anti-fraudulent use. The ABLE battery performances are comparable to flooded tubular batteries, which are the reference in medium-size PV systems. The ABLE regulator has several innovative features regarding energy management and modular series/parallel association. The storage system has been validated by indoor, outdoor and field tests, and it is expected that this concept could be a major improvement for large-scale implementation of PV within the framework of national rural electrification schemes.
Moghri, Mehdi; Omidi, Mostafa; Farahnakian, Masoud
2014-01-01
During the past decade, polymer nanocomposites attracted considerable investment in research and development worldwide. One of the key factors that affect the quality of polymer nanocomposite products in machining is surface roughness. To obtain high-quality products and reduce machining costs, it is very important to determine the optimal machining conditions so as to achieve enhanced machining performance. The objective of this paper is to develop a predictive model using a combined design of experiments and artificial intelligence approach for optimization of surface roughness in milling of polyamide-6 (PA-6) nanocomposites. A surface roughness predictive model was developed in terms of milling parameters (spindle speed and feed rate) and nanoclay (NC) content using an artificial neural network (ANN). As the present study deals with a relatively small number of data points obtained from a full factorial design, application of a genetic algorithm (GA) for ANN training is thought to be an appropriate approach for developing an accurate and robust ANN model. In the optimization phase, a GA is used in conjunction with the explicit nonlinear function derived from the ANN to determine the optimal milling parameters that minimize surface roughness for each PA-6 nanocomposite. PMID:24578636
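A hedged sketch of the optimization phase only: a genetic algorithm searching for the spindle speed and feed rate that minimize predicted roughness. The quadratic surrogate, parameter ranges and GA settings below are illustrative stand-ins, not the paper's trained ANN or data.

```python
# GA minimizing a roughness surrogate; the surrogate merely stands in for the
# trained ANN(speed, feed, NC%) of the paper.
import random

def predicted_Ra(speed, feed):
    s, f = speed / 12000.0, feed / 600.0      # normalized inputs
    return 0.8 + 2.5 * f * f - 0.6 * s + 0.9 * s * s + 0.4 * s * f

BOUNDS = [(4000.0, 12000.0), (100.0, 600.0)]  # rpm, mm/min (assumed ranges)

def clamp(x, lo, hi):
    return max(lo, min(hi, x))

random.seed(0)
pop = [[random.uniform(lo, hi) for lo, hi in BOUNDS] for _ in range(40)]
for gen in range(100):
    pop.sort(key=lambda ind: predicted_Ra(*ind))   # rank by predicted roughness
    elite = pop[:10]                               # elitist selection
    children = []
    while len(children) < 30:
        a, b = random.sample(elite, 2)
        child = [(x + y) / 2 for x, y in zip(a, b)]                    # crossover
        child = [clamp(x + random.gauss(0, 0.05 * (hi - lo)), lo, hi)  # mutation
                 for x, (lo, hi) in zip(child, BOUNDS)]
        children.append(child)
    pop = elite + children

best = min(pop, key=lambda ind: predicted_Ra(*ind))
print(f"speed = {best[0]:.0f} rpm, feed = {best[1]:.0f} mm/min, "
      f"predicted Ra = {predicted_Ra(*best):.3f}")
```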
Progress in amorphous silicon based large-area multijunction modules
NASA Astrophysics Data System (ADS)
Carlson, D. E.; Arya, R. R.; Bennett, M.; Chen, L.-F.; Jansen, K.; Li, Y.-M.; Maley, N.; Morris, J.; Newton, J.; Oswald, R. S.; Rajan, K.; Vezzetti, D.; Willing, F.; Yang, L.
1996-01-01
Solarex, a business unit of Amoco/Enron Solar, is scaling up its a-Si:H/a-SiGe:H tandem device technology for the production of 8 ft2 modules. The current R&D effort is focused on improving the performance, reliability and cost-effectiveness of the tandem junction technology by systematically optimizing the materials and interfaces in small-area single- and tandem junction cells. Average initial conversion efficiencies of 8.8% at 85% yield have been obtained in pilot production runs with 4 ft2 tandem modules.
Optimization of Automobile Crush Characteristics: Technical Report
DOT National Transportation Integrated Search
1975-10-01
A methodology is developed for the evaluation and optimization of societal costs of two-vehicle automobile collisions. Costs considered in a Figure of Merit include costs of injury/mortality, occupant compartment penetration, collision damage repairs...
Two-step optimization of pressure and recovery of reverse osmosis desalination process.
Liang, Shuang; Liu, Cui; Song, Lianfa
2009-05-01
Driving pressure and recovery are two primary design variables of a reverse osmosis process that largely determine the total cost of seawater and brackish water desalination. A two-step optimization procedure was developed in this paper to determine the values of driving pressure and recovery that minimize the total cost of RO desalination. It was demonstrated that the optimal net driving pressure is solely determined by the electricity price and the membrane price index, which is a lumped parameter to collectively reflect membrane price, resistance, and service time. On the other hand, the optimal recovery is determined by the electricity price, initial osmotic pressure, and costs for pretreatment of raw water and handling of retentate. Concise equations were derived for the optimal net driving pressure and recovery. The dependences of the optimal net driving pressure and recovery on the electricity price, membrane price, and costs for raw water pretreatment and retentate handling were discussed.
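A stylized reconstruction of the first optimization step (hedged; the paper's actual equations are more detailed): per unit permeate, pumping energy cost grows roughly linearly with the net driving pressure (NDP) while membrane cost falls inversely with it, since flux is proportional to NDP,

$$ J(\mathrm{NDP}) \;=\; p_e\,\mathrm{NDP} \;+\; \frac{k_m}{\mathrm{NDP}} \;+\; \text{terms independent of NDP} \quad\Longrightarrow\quad \mathrm{NDP}^{*} = \sqrt{k_m / p_e}, $$

which reproduces the paper's qualitative conclusion that the optimal net driving pressure depends only on the electricity price \( p_e \) and a lumped membrane price index \( k_m \).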
Zhang, Huaguang; Qu, Qiuxia; Xiao, Geyang; Cui, Yang
2018-06-01
Based on integral sliding mode and approximate dynamic programming (ADP) theory, a novel optimal guaranteed cost sliding mode control is designed for constrained-input nonlinear systems with matched and unmatched disturbances. When the system moves on the sliding surface, the optimal guaranteed cost control problem of the sliding mode dynamics is transformed into the optimal control problem of a reformulated auxiliary system with a modified cost function. An ADP algorithm based on a single critic neural network (NN) is applied to obtain the approximate optimal control law for the auxiliary system. Lyapunov techniques are used to demonstrate the convergence of the NN weight errors. In addition, the derived approximate optimal control is shown to guarantee that the sliding mode dynamics are stable in the sense of uniform ultimate boundedness. Some simulation results are presented to verify the feasibility of the proposed control scheme.
Cost-Based Optimization of a Papermaking Wastewater Regeneration Recycling System
NASA Astrophysics Data System (ADS)
Huang, Long; Feng, Xiao; Chu, Khim H.
2010-11-01
Wastewater can be regenerated for recycling in an industrial process to reduce freshwater consumption and wastewater discharge. Such an environmentally friendly approach also leads to cost savings that accrue from reduced freshwater usage and wastewater discharge. However, the resulting cost savings are offset to varying degrees by the costs incurred for the regeneration of wastewater for recycling. Therefore, systematic procedures should be used to determine the true economic benefits for any water-using system involving wastewater regeneration recycling. In this paper, a total cost accounting procedure is employed to construct a comprehensive cost model for a paper mill. The resulting cost model is optimized by means of mathematical programming to determine the optimal regeneration flowrate and regeneration efficiency that yield the minimum total cost.
Renewable Energy Resources Portfolio Optimization in the Presence of Demand Response
DOE Office of Scientific and Technical Information (OSTI.GOV)
Behboodi, Sahand; Chassin, David P.; Crawford, Curran
In this paper we introduce a simple cost model of renewable integration and demand response that can be used to determine the optimal mix of generation and demand response resources. The model includes production cost, demand elasticity, uncertainty costs, capacity expansion costs, retirement and mothballing costs, and wind variability impacts to determine the hourly cost and revenue of electricity delivery. The model is tested on the 2024 planning case for British Columbia, and we find that cost is minimized with about 31% renewable generation. We also find that demand response does not have a significant impact on cost at the hourly level. The results suggest that the optimal level of renewable resources is not sensitive to a carbon tax or demand elasticity, but it is highly sensitive to the renewable resource installation cost.
NASA Astrophysics Data System (ADS)
Aydogdu, Ibrahim
2017-03-01
In this article, a new version of a biogeography-based optimization algorithm with Levy flight distribution (LFBBO) is introduced and used for the optimum design of reinforced concrete cantilever retaining walls under seismic loading. The cost of the wall is taken as the objective function, which is minimized under the constraints implemented by the American Concrete Institute (ACI 318-05) design code and geometric limitations. The influence of peak ground acceleration (PGA) on optimal cost is also investigated. The solution of the problem is attained by the LFBBO algorithm, which is developed by adding Levy flight distribution to the mutation part of the biogeography-based optimization (BBO) algorithm. Five design examples, two of which are taken from studies in the literature, are optimized in the study. The results are compared to test the performance of the LFBBO and BBO algorithms and to determine the influence of the seismic load and PGA on the optimal cost of the wall.
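A hedged sketch of the modification the article describes: replacing a Gaussian mutation step in BBO with a Levy-flight step, generated here with Mantegna's algorithm. Parameter names, bounds and rates are illustrative, not the authors' implementation.

```python
# BBO-style mutation with Levy flights: occasional long jumps help escape
# local optima, while most steps stay small.
import math
import random

def levy_step(beta=1.5):
    # Mantegna's algorithm for symmetric Levy-stable steps of index beta.
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma_u = (num / den) ** (1 / beta)
    u = random.gauss(0, sigma_u)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def mutate(habitat, bounds, rate=0.1, scale=0.01):
    # Apply a Levy-flight perturbation to each variable with probability `rate`,
    # then clamp back into the feasible (geometric) bounds.
    out = []
    for x, (lo, hi) in zip(habitat, bounds):
        if random.random() < rate:
            x = x + scale * (hi - lo) * levy_step()
        out.append(min(hi, max(lo, x)))
    return out

bounds = [(0.2, 0.6), (1.5, 4.0), (0.3, 1.2)]   # e.g. wall geometry variables (m), assumed
print(mutate([0.4, 2.5, 0.8], bounds))
```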
Photovoltaic design optimization for terrestrial applications
NASA Technical Reports Server (NTRS)
Ross, R. G., Jr.
1978-01-01
As part of the Jet Propulsion Laboratory's Low-Cost Solar Array Project, a comprehensive program of module cost-optimization has been carried out. The objective of these studies has been to define means of reducing the cost and improving the utility and reliability of photovoltaic modules for the broad spectrum of terrestrial applications. This paper describes one of the methods being used for module optimization, including the derivation of specific equations which allow the optimization of various module design features. The method is based on minimizing the life-cycle cost of energy for the complete system. Comparison of the life-cycle energy cost with the marginal cost of energy each year allows the logical plant lifetime to be determined. The equations derived allow the explicit inclusion of design parameters such as tracking, site variability, and module degradation with time. An example problem involving the selection of an optimum module glass substrate is presented.
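For reference, the life-cycle energy cost minimized by such methods is conventionally a discounted-cost-to-discounted-energy ratio (the standard definition, stated here as a sketch rather than the paper's exact equations):

$$ \mathrm{LEC} \;=\; \frac{\sum_{t=0}^{T} C_t\,(1+d)^{-t}}{\sum_{t=1}^{T} E_t\,(1+d)^{-t}}, $$

where \( C_t \) collects capital, operating and replacement costs in year \( t \), \( E_t \) is the energy delivered (adjusted for module degradation over time), and \( d \) is the discount rate; extending the plant lifetime \( T \) remains worthwhile while this ratio stays below the marginal cost of energy in the added year.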
Optimal synthesis and design of the number of cycles in the leaching process for surimi production.
Reinheimer, M Agustina; Scenna, Nicolás J; Mussati, Sergio F
2016-12-01
Water consumption during the leaching stage of the surimi manufacturing process strongly depends on the design, number and size of the stages connected in series for the soluble protein extraction target, and it is considered the main contributor to the operating costs. Therefore, the optimal synthesis and design of the leaching stage is essential to minimize the total annual cost. In this study, a mathematical optimization model for the optimal design of the leaching operation is presented. Precisely, a detailed Mixed Integer Nonlinear Programming (MINLP) model including operating and geometric constraints was developed based on our previous optimization model (an NLP model). Aspects of quality, water consumption and the main operating parameters were considered. The minimization of total annual costs, which trades off investment and operating costs, led to an optimal solution with fewer stages (two instead of three) and larger leaching tank volumes compared with previous results. An analysis was performed to investigate how the optimal solution is influenced by variations in the unit costs of fresh water, waste treatment and capital investment.
Optimization of Aerospace Structure Subject to Damage Tolerance Criteria
NASA Technical Reports Server (NTRS)
Akgun, Mehmet A.
1999-01-01
The objective of this cooperative agreement was to seek computationally efficient ways to optimize aerospace structures subject to damage tolerance criteria. Optimization was to involve sizing as well as topology optimization. The work was done in collaboration with Steve Scotti, Chauncey Wu and Joanne Walsh at the NASA Langley Research Center. Computation of constraint sensitivity is normally the most time-consuming step of an optimization procedure. The cooperative work first focused on this issue and implemented the adjoint method of sensitivity computation in an optimization code (runstream) written in Engineering Analysis Language (EAL). The method was implemented both for bar and plate elements, including buckling sensitivity for the latter. Lumping of constraints was investigated as a means to reduce the computational cost. Adjoint sensitivity computation was developed and implemented for lumped stress and buckling constraints. The costs of the direct and adjoint methods were compared for various structures with and without lumping. The results were reported in two papers. It is desirable to optimize the topology of an aerospace structure subject to a large number of damage scenarios so that a damage tolerant structure is obtained. Including damage scenarios in the design procedure is critical in order to avoid large mass penalties at later stages. A common method for topology optimization is that of compliance minimization, which has not been used for damage tolerant design. In the present work, topology optimization is treated as a conventional problem aiming to minimize the weight subject to stress constraints. Multiple damage configurations (scenarios) are considered. Each configuration has its own structural stiffness matrix and, normally, requires factoring of the matrix and solution of the system of equations. Damage that is expected to be tolerated is local and represents a small change in the stiffness matrix compared to the baseline (undamaged) structure. The exact solution to a slightly modified set of equations can be obtained from the baseline solution economically without actually solving the modified system. Sherman-Morrison-Woodbury (SMW) formulas are matrix update formulas that allow this. SMW formulas were therefore used here to compute adjoint displacements for sensitivity computation and structural displacements in damaged configurations.
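The update formula referred to above is standard: if a damage scenario perturbs the baseline stiffness matrix \( K \) to \( K + UCV^{\top} \) (a low-rank change for local damage), then

$$ \big(K + U C V^{\top}\big)^{-1} \;=\; K^{-1} \;-\; K^{-1} U \left( C^{-1} + V^{\top} K^{-1} U \right)^{-1} V^{\top} K^{-1}, $$

so displacements for each damaged configuration follow from the already-factored baseline \( K \) at the cost of a few extra solves whose number equals the rank of the damage perturbation.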
Geng, Runzhe; Wang, Xiaoyan; Sharpley, Andrew N; Meng, Fande
2015-01-01
Best management practices (BMPs) for agricultural diffuse pollution control are implemented at the field or small-watershed scale. However, quantifying the benefits of BMP implementation for receiving water quality at multiple spatial scales is an ongoing challenge. In this paper, we introduce an integrated approach that combines risk assessment (i.e., a phosphorus (P) index), model simulation techniques (Hydrological Simulation Program-FORTRAN), and a BMP placement tool at various scales to identify the optimal locations for implementing multiple BMPs and to estimate BMP effectiveness after implementation. A statistically significant decrease in nutrient discharge from watersheds is proposed as the measure of effectiveness of BMPs strategically targeted within watersheds. Specifically, we estimate two types of cost-effectiveness curves (total pollution reduction and proportion of watersheds improved) for four allocation approaches. Selection of a "best approach" depends on the relative importance of the two types of effectiveness, which involves a value judgment based on the random/aggregated degree of BMP distribution among and within sub-watersheds. A statistical optimization framework is developed and evaluated in the Chaohe River Watershed located in the northern mountain area of Beijing. Results show that BMP implementation significantly (p < 0.001) decreased P loss from the watershed. Remedial strategies in which BMPs were targeted to areas at high risk of P loss decreased P loads compared with strategies in which BMPs were randomly located across watersheds. Sensitivity analysis indicated that aggregated BMP placement within particular watersheds is the most cost-effective scenario for decreasing P loss. The optimization approach outlined in this paper is a spatially hierarchical method for targeting nonpoint source controls across a range of scales from field to farm, to watershed, to region. Further, model estimates showed that targeting at multiple scales is necessary to optimize program efficiency. The integrated modeling approach described here, which selects and places BMPs at varying levels of implementation, provides a new theoretical basis and technical guidance for diffuse pollution management in agricultural watersheds.
Trade-space Analysis for Constellations
NASA Astrophysics Data System (ADS)
Le Moigne, J.; Dabney, P.; de Weck, O. L.; Foreman, V.; Grogan, P.; Holland, M. P.; Hughes, S. P.; Nag, S.
2016-12-01
Traditionally, space missions have relied on relatively large and monolithic satellites, but in the past few years, under a changing technological and economic environment, including instrument and spacecraft miniaturization, scalable launchers, secondary launches as well as hosted payloads, there is growing interest in implementing future NASA missions as Distributed Spacecraft Missions (DSM). The objective of our project is to provide a framework that facilitates DSM Pre-Phase A investigations and optimizes DSM designs with respect to a-priori science goals. In this first version of our Trade-space Analysis Tool for Constellations (TAT-C), we are investigating questions such as: "How many spacecraft should be included in the constellation? Which design has the best cost/risk value?" The main goals of TAT-C are to: handle multiple spacecraft sharing a mission objective, from SmallSats up through flagships; explore the variables trade space for pre-defined science, cost and risk goals and pre-defined metrics; and optimize cost and performance across multiple instruments and platforms rather than one at a time. This paper describes the overall architecture of TAT-C, including: a User Interface (UI) interacting with multiple users - scientists, mission designers or program managers; and an Executive Driver gathering requirements from the UI, then formulating Trade-space Search Requests for the Trade-space Search Iterator, first with inputs from the Knowledge Base, then, in collaboration with the Orbit & Coverage, Reduction & Metrics, and Cost & Risk modules, generating multiple potential architectures and their associated characteristics. TAT-C leverages the General Mission Analysis Tool (GMAT) to compute coverage and ancillary data, streamlining the computations by modeling orbits in a way that balances accuracy and performance. The current version of TAT-C includes uniform Walker constellations as well as ad hoc constellations, and its cost model is an aggregate model consisting of Cost Estimating Relationships (CERs) from widely accepted models. The Knowledge Base supports both analysis and exploration, and the current GUI prototype automatically generates graphics representing metrics such as average revisit time or coverage as a function of cost.
NASA Astrophysics Data System (ADS)
Koziel, Slawomir; Bekasiewicz, Adrian
2018-02-01
In this article, a simple yet efficient and reliable technique for fully automated multi-objective design optimization of antenna structures using sequential domain patching (SDP) is discussed. The optimization procedure according to SDP is a two-step process: (i) obtaining the initial set of Pareto-optimal designs representing the best possible trade-offs between considered conflicting objectives, and (ii) Pareto set refinement for yielding the optimal designs at the high-fidelity electromagnetic (EM) simulation model level. For the sake of computational efficiency, the first step is realized at the level of a low-fidelity (coarse-discretization) EM model by sequential construction and relocation of small design space segments (patches) in order to create a path connecting the extreme Pareto front designs obtained beforehand. The second stage involves response correction techniques and local response surface approximation models constructed by reusing EM simulation data acquired in the first step. A major contribution of this work is an automated procedure for determining the patch dimensions. It allows for appropriate selection of the number of patches for each geometry variable so as to ensure reliability of the optimization process while maintaining its low cost. The importance of this procedure is demonstrated by comparing it with uniform patch dimensions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Loflin, Leonard
Through this grant, the U.S. Department of Energy (DOE) will review several functional areas within a nuclear power plant, including fire protection, operations and operations support, refueling, training, procurement, maintenance, site engineering, and others. Several functional areas need to be examined since there appears to be no single staffing area or approach that alone has the potential for significant staff optimization at new nuclear power plants. Several of the functional areas will require a review of technology options that may be applied to support optimization, such as automation, remote monitoring, fleet-wide monitoring, new and specialized instrumentation, human factors engineering, risk-informed analysis and PRAs, component and system condition monitoring and reporting, just-in-time training, electronic and automated procedures, and electronic tools for configuration management and license and design basis information. Additionally, the project will require a review of key regulatory issues that affect staffing and could be optimized with additional technology input. Opportunities to further optimize staffing levels and staffing functions through the selection of design attributes of physical systems and structures also need to be identified. A goal of this project is to develop a prioritized assessment of the functional areas, and of the R&D actions needed for those functional areas, to provide the best optimization.
Hu, Wenfa; He, Xinhua
2014-01-01
Time, quality, and cost are three important but conflicting objectives in a building construction project. Optimizing them simultaneously is a tough challenge for project managers because they are measured in different units. This paper presents a time-cost-quality optimization model that enables managers to optimize multiple objectives. The model is based on the project breakdown structure method, in which the task resources of a construction project are divided into a series of activities and further into construction labor, materials, equipment, and administration. The resources utilized in a construction activity ultimately determine its construction time, cost, and quality, and a complex time-cost-quality trade-off model is generated from the correlations between construction activities. A genetic algorithm tool is applied in the model to solve the comprehensive nonlinear time-cost-quality problems. The construction of a three-storey house is used as an example to illustrate the implementation of the model, demonstrate its advantages in optimizing the trade-off of construction time, cost, and quality, and help make winning decisions in construction practice. The computed time-cost-quality curves from the case study support traditional cost-time assumptions and demonstrate the soundness of this time-cost-quality trade-off model.
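A minimal sketch of a genetic algorithm for this kind of time-cost-quality trade-off, with invented per-activity resource options rather than the paper's data:

```python
# Minimal GA sketch for a time-cost-quality trade-off (illustrative numbers,
# not the paper's model). Each gene picks a resource level for one activity;
# fitness scalarizes the three objectives with user-chosen weights.
import random

# (time, cost, quality) for three resource levels of each of 3 activities
OPTIONS = [
    [(10, 100, 0.7), (8, 140, 0.8), (6, 200, 0.9)],
    [(12, 120, 0.6), (9, 160, 0.8), (7, 220, 0.95)],
    [(15, 90, 0.7), (11, 130, 0.85), (8, 190, 0.9)],
]
W_TIME, W_COST, W_QUAL = 1.0, 0.05, 50.0

def fitness(chrom):
    t = sum(OPTIONS[i][g][0] for i, g in enumerate(chrom))   # serial activities
    c = sum(OPTIONS[i][g][1] for i, g in enumerate(chrom))
    q = min(OPTIONS[i][g][2] for i, g in enumerate(chrom))   # weakest-link quality
    return W_TIME * t + W_COST * c - W_QUAL * q              # lower is better

def evolve(pop=30, gens=50, pm=0.2):
    population = [[random.randrange(3) for _ in OPTIONS] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness)
        parents = population[: pop // 2]                     # elitist selection
        children = []
        while len(children) < pop - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(OPTIONS))
            child = a[:cut] + b[cut:]                        # one-point crossover
            if random.random() < pm:                         # mutation
                child[random.randrange(len(child))] = random.randrange(3)
            children.append(child)
        population = parents + children
    return min(population, key=fitness)

best = evolve()
print("best resource levels:", best, "fitness:", round(fitness(best), 2))
```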
Parametric geometric model and shape optimization of an underwater glider with blended-wing-body
NASA Astrophysics Data System (ADS)
Sun, Chunya; Song, Baowei; Wang, Peng
2015-11-01
The underwater glider, a new kind of autonomous underwater vehicle, has many merits such as long range, extended duration, and low cost. The shape of an underwater glider is an important factor in determining its hydrodynamic efficiency. In this paper, a high lift-to-drag ratio configuration, the Blended-Wing-Body (BWB), is used to design a small civilian underwater glider. In the parametric geometric model of the BWB underwater glider, the planform is defined with a Bézier curve and straight-line segments, and the section is defined with the symmetrical airfoil NACA 0012. Computational investigations are carried out to study the hydrodynamic performance of the glider using the commercial Computational Fluid Dynamics (CFD) code Fluent. The Kriging-based genetic algorithm called Efficient Global Optimization (EGO) is applied to the hydrodynamic design optimization. The results demonstrate that the BWB underwater glider has excellent hydrodynamic performance, with the lift-to-drag ratio of the initial design increased by 7% in the EGO process.
Optimising the location of antenatal classes.
Tomintz, Melanie N; Clarke, Graham P; Rigby, Janette E; Green, Josephine M
2013-01-01
To combine microsimulation and location-allocation techniques to determine antenatal class locations that minimise the distance travelled from home by potential users. Design: microsimulation modelling and location-allocation modelling. Setting: the city of Leeds, UK. Participants: potential users of antenatal classes. An individual-level microsimulation model was built to estimate the number of births for small areas by combining data from the UK Census 2001 and the Health Survey for England 2006. Using this model as a proxy for service demand, we then used a location-allocation model to optimize locations. Different scenarios show the advantage of combining these methods to optimize the (re)location of antenatal classes and thereby reduce inequalities in access to services for pregnant women. Use of these techniques should lead to better use of resources by allowing planners to identify optimal locations of antenatal classes that minimise women's travel. These results are especially important for health-care planners tasked with the difficult issue of targeting scarce resources in a cost-efficient, but also effective and accessible, manner.
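A hedged sketch of the location-allocation step, using a greedy p-median heuristic over hypothetical demand points and candidate venues (the study's actual model and data differ):

```python
# Illustrative p-median location-allocation sketch (not the study's model):
# greedily choose k class sites that minimize total demand-weighted distance.
from math import hypot

demand = [((0, 0), 12), ((1, 5), 8), ((4, 2), 20), ((6, 6), 5), ((3, 9), 9)]
candidates = [(0, 1), (2, 3), (5, 5), (4, 8)]   # possible class venues

def total_cost(sites):
    # each demand point travels to its nearest open site
    return sum(w * min(hypot(x - sx, y - sy) for sx, sy in sites)
               for (x, y), w in demand)

def greedy_p_median(k):
    chosen, remaining = [], list(candidates)
    for _ in range(k):
        best = min(remaining, key=lambda s: total_cost(chosen + [s]))
        chosen.append(best)
        remaining.remove(best)
    return chosen

sites = greedy_p_median(2)
print("open sites:", sites, "weighted distance:", round(total_cost(sites), 1))
```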
Multiple-objective optimization in precision laser cutting of different thermoplastics
NASA Astrophysics Data System (ADS)
Tamrin, K. F.; Nukman, Y.; Choudhury, I. A.; Shirley, S.
2015-04-01
Thermoplastics are increasingly being used in the biomedical, automotive, and electronics industries due to their excellent physical and chemical properties. Because the process is localized and non-contact, laser cutting can produce precise cuts with a small heat-affected zone (HAZ). Precision laser cutting of various materials is important in high-volume manufacturing processes to minimize operational cost, reduce errors, and improve product quality. This study uses grey relational analysis to determine a single optimized set of cutting parameters for three different thermoplastics. The set of optimized processing parameters was determined from the highest relational grade and was found at low laser power (200 W), high cutting speed (0.4 m/min), and low compressed air pressure (2.5 bar). The result matches the objective set in the present study. Analysis of variance (ANOVA) is then carried out to ascertain the relative influence of process parameters on the cutting characteristics. It was found that the laser power has a dominant effect on the HAZ for all thermoplastics.
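The grey relational grade behind this kind of ranking can be computed as follows; the response values below are made up, and ζ = 0.5 is the conventional distinguishing coefficient:

```python
# Grey relational analysis sketch with made-up response data (not the paper's
# measurements). Responses are normalized, deviations from the ideal computed,
# and settings ranked by their mean grey relational coefficient (the grade).
import numpy as np

# rows = parameter settings, cols = responses (e.g. HAZ, kerf) -- smaller is better
Y = np.array([[0.30, 0.20],
              [0.25, 0.28],
              [0.40, 0.15]])

norm = (Y.max(axis=0) - Y) / (Y.max(axis=0) - Y.min(axis=0))  # smaller-the-better
delta = 1.0 - norm                                            # deviation from ideal
zeta = 0.5                                                    # distinguishing coefficient
xi = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
grade = xi.mean(axis=1)                                       # grey relational grade
print("grades:", grade.round(3), "best setting index:", grade.argmax())
```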
Small-Tip-Angle Spokes Pulse Design Using Interleaved Greedy and Local Optimization Methods
Grissom, William A.; Khalighi, Mohammad-Mehdi; Sacolick, Laura I.; Rutt, Brian K.; Vogel, Mika W.
2013-01-01
Current spokes pulse design methods can be grouped into methods based either on sparse approximation or on iterative local (gradient descent-based) optimization of the transverse-plane spatial frequency locations visited by the spokes. These two classes of methods have complementary strengths and weaknesses: sparse approximation-based methods perform an efficient search over a large swath of candidate spatial frequency locations but most are incompatible with off-resonance compensation, multifrequency designs, and target phase relaxation, while local methods can accommodate off-resonance and target phase relaxation but are sensitive to initialization and suboptimal local cost function minima. This article introduces a method that interleaves local iterations, which optimize the radiofrequency pulses, target phase patterns, and spatial frequency locations, with a greedy method to choose new locations. Simulations and experiments at 3 and 7 T show that the method consistently produces single- and multifrequency spokes pulses with lower flip angle inhomogeneity compared to current methods. PMID:22392822
Mairinger, Fabian D; Vollbrecht, Claudia; Streubel, Anna; Roth, Andreas; Landt, Olfert; Walter, Henry F R; Kollmeier, Jens; Mairinger, Thomas
2014-01-01
Activating epidermal growth factor receptor (EGFR) gene mutations can be successfully treated with EGFR tyrosine kinase inhibitors (EGFR-TKIs), but nearly 50% of all patients exhibit disease progression during treatment because of T790M mutations. It is proposed that this is mostly caused by therapy-resistant tumor clones harboring a T790M mutation. Until now, no cost-effective routine diagnostic method for EGFR resistance-mutation analysis has been available, leaving long-term response to TKI treatment to chance. Unambiguous identification of T790M EGFR mutations is mandatory to optimize initial treatment strategies. Artificial EGFR T790M mutations and human wild-type gDNA were prepared in several dilution series. Preferential amplification of the mutant sequence using coamplification at lower denaturation temperature PCR (COLD-PCR), with subsequent HybProbe melting curve detection or pyrosequencing, was performed in comparison with normal processing. COLD-PCR-based amplification allowed the detection of 0.125% T790M mutant DNA in a background of wild-type DNA, compared with 5% under normal processing. These results were reproducible. COLD-PCR is a powerful and cost-effective tool for routine diagnostics to detect underrepresented tumor clones in clinical samples. A diagnostic tool for unambiguous identification of T790M-mutated minor tumor clones is now available, enabling optimized therapy.
Eggimann, Sven; Truffer, Bernhard; Maurer, Max
2016-10-15
Determining the optimal connection rate (CR) for regional waste water treatment is a challenge that has recently gained the attention of academia and professional circles throughout the world. We contribute to this debate by proposing a framework for a total cost assessment of sanitation infrastructures in a given region for the whole range of possible CRs. The total costs comprise the treatment and transportation costs of centralised and on-site waste water management systems relative to specific CRs. We can then identify optimal CRs that either deliver waste water services at the lowest overall regional cost, or alternatively, CRs that result from households freely choosing whether they want to connect or not. We apply the framework to a Swiss region, derive a typology for regional cost curves and discuss whether and by how much the empirically observed CRs differ from the two optimal ones. Both optimal CRs may be reached by introducing specific regulatory incentive structures.
Optimal Operation System of the Integrated District Heating System with Multiple Regional Branches
NASA Astrophysics Data System (ADS)
Kim, Ui Sik; Park, Tae Chang; Kim, Lae-Hyun; Yeo, Yeong Koo
This paper presents an optimal production and distribution management approach for structural and operational optimization of an integrated district heating system (DHS) with multiple regional branches. A DHS consists of energy suppliers and consumers, a district heating pipeline network, and heat storage facilities in the covered region. In the optimal management system, production of heat and electric power, regional heat demand, electric power bidding and sales, and transport and storage of heat at each regional DHS are taken into account. The optimal management system is formulated as a mixed integer linear programming (MILP) problem whose objective is to minimize the overall cost of the integrated DHS while satisfying the operating constraints of heat units and networks as well as fulfilling heating demands from consumers. A piecewise linear formulation of the production cost function and a stairwise formulation of the start-up cost function are used to approximate the nonlinear cost functions. Evaluation of the total overall cost is based on weekly operations at each district heating branch. Numerical simulations show an increase in energy efficiency due to the introduction of the present optimal management system.
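A toy version of such a MILP dispatch, with hypothetical units and demand; it uses linear production costs plus start-up binaries rather than the paper's full piecewise/stairwise formulation, and requires the PuLP package:

```python
# Tiny MILP sketch in the spirit of a DHS dispatch model (hypothetical data,
# not the paper's formulation): two heat units meet hourly demand at minimum
# production-plus-start-up cost.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, value

T = range(4)                       # 4 one-hour periods
demand = [30, 55, 80, 40]          # MWth
units = {"A": dict(pmax=60, c=20, start=100),   # cheap base unit
         "B": dict(pmax=50, c=35, start=40)}    # expensive peaker

m = LpProblem("dhs_dispatch", LpMinimize)
p = {(u, t): LpVariable(f"p_{u}_{t}", 0, units[u]["pmax"]) for u in units for t in T}
on = {(u, t): LpVariable(f"on_{u}_{t}", cat=LpBinary) for u in units for t in T}
su = {(u, t): LpVariable(f"su_{u}_{t}", cat=LpBinary) for u in units for t in T}

m += lpSum(units[u]["c"] * p[u, t] + units[u]["start"] * su[u, t]
           for u in units for t in T)                        # total cost objective
for t in T:
    m += lpSum(p[u, t] for u in units) >= demand[t]          # meet heat demand
    for u in units:
        m += p[u, t] <= units[u]["pmax"] * on[u, t]          # capacity only if on
        prev = on[u, t - 1] if t > 0 else 0                  # assume all off at t=0
        m += su[u, t] >= on[u, t] - prev                     # start-up indicator

m.solve()
for t in T:
    print(t, {u: round(value(p[u, t]), 1) for u in units})
```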
NASA Astrophysics Data System (ADS)
Moraes, M. G. A.; Souza da Silva, G.
2016-12-01
Hydro-economic models can measure the economic effects of different operating rules, environmental restrictions, ecosystem services, technical constraints, and institutional constraints. Furthermore, water allocation can be improved by considering economic criteria, and climate and land-use change can be analyzed to provide resilience. We developed and applied a hydro-economic optimization model to determine the optimal water allocation among the main users in the Lower-middle São Francisco River Basin in Northeast (NE) Brazil. The model uses demand curves for the irrigation projects, small farmers, and human supply, rather than fixed requirements for water resources. This study analyzed various constraints and operating alternatives for the installed hydropower dams in economic terms. A seven-year period (2000-2006) with historical water scarcity was selected to analyze water availability and the associated optimal economic water allocation. The constraints used are technical, socioeconomic, and environmental. The economic impacts of scenarios such as prioritizing human consumption, implementing the São Francisco river transposition, human supply without high distribution losses, environmental hydrographs, forced reservoir level control, forced reduced reservoir capacity, and alteration of the low-flow restriction were analyzed. The results for this period show that scarcity costs related to ecosystem services and environmental constraints are significant, and have major impacts (increases in scarcity cost) for consumptive users such as irrigation projects. In addition, institutional constraints such as prioritizing human supply, minimum release limits downstream of the reservoirs, and the implementation of the transposition project affect the costs and benefits of the two main economic sectors (irrigation and power generation) in the Lower-middle São Francisco river basin. Scarcity costs for irrigation users generally increase more (in percentage terms) than those of other users under environmental and institutional constraints.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yaghoobpour Tari, S; Wachowicz, K; Fallone, B
2016-06-15
Purpose: A prototype rotating hybrid MR imaging system and linac has been developed to allow for simultaneous imaging and radiation delivery parallel to B0. However, the design of a compact magnet capable of rotation in a small vault with sufficient patient access and a typical clinical source-to-surface distance (SSD) is challenging. This work presents a novel superconducting magnet design that allows for a reduced SSD and ample patient access by moving the superconducting coils to the side of the yoke. The yoke and pole-plate structures are shaped to direct the magnetic flux appropriately. Methods: The surface of the pole plate for the magnet assembly is optimized. The magnetic field calculations required in this work are performed with the 3D finite element method software package Opera-3D. Each tentative design is virtually modeled in this software package and externally controlled by MATLAB, with its key geometries defined as variables. The particle swarm optimization algorithm is used to optimize the variables subject to the minimization of a cost function. At each iteration, Opera-3D solves the magnetic field over a field-of-view suitable for MR imaging, and the degree of field uniformity is assessed to calculate the value of the cost function associated with that iteration. Results: An optimized magnet assembly that generates a homogeneous 0.2T magnetic field over an ellipsoid with a major axis of 30 cm and minor axes of 20 cm is obtained. Conclusion: The distinct features of this model are the minimal distance between the top of the yoke and the isocentre and the improved patient access. Moreover, homogeneity over an ellipsoid gives a larger field-of-view, essential for the geometric accuracy of the MRI system. The increase of B0 from 0.2T in the present model to 0.5T is the subject of future work. Funding Sources: Alberta Innovates - Health Solutions (AIHS). Disclosure and Conflict of Interest: B. Gino Fallone is a co-founder and CEO of MagnetTx Oncology Solutions (under discussions to license Alberta biplanar linac MR for commercialization).
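A minimal particle swarm optimization sketch; the analytic cost below merely stands in for the expensive Opera-3D field-uniformity evaluation:

```python
# Minimal particle swarm optimization sketch. A cheap analytic function stands
# in for the finite-element field-uniformity cost; in the actual workflow each
# evaluation would be an Opera-3D solve driven from MATLAB.
import random

def cost(x):                      # stand-in for the field-inhomogeneity cost
    return sum((xi - 0.3) ** 2 for xi in x)

def pso(dim=4, n=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]                   # per-particle best positions
    gbest = min(pbest, key=cost)                  # swarm best
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if cost(pos[i]) < cost(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=cost)
    return gbest

best = pso()
print("best geometry variables:", [round(x, 3) for x in best],
      "cost:", round(cost(best), 6))
```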
Model predictive controller design for boost DC-DC converter using T-S fuzzy cost function
NASA Astrophysics Data System (ADS)
Seo, Sang-Wha; Kim, Yong; Choi, Han Ho
2017-11-01
This paper proposes a Takagi-Sugeno (T-S) fuzzy method to select the cost function weights of finite-control-set model predictive DC-DC converter control algorithms. The proposed method updates the cost function weights at every sample time by using T-S type fuzzy rules derived from the common optimal control engineering knowledge that a state or input variable with an excessively large magnitude can be penalised by increasing the weight corresponding to that variable. The best control input is determined via online optimisation of the T-S fuzzy cost function over all possible control input sequences. The proposed model predictive control algorithm is implemented in real time on a Texas Instruments TMS320F28335 floating-point Digital Signal Processor (DSP). Experimental results are given to illustrate the practicality and effectiveness of the proposed control system under several operating conditions. The results verify that our method yields not only good transient and steady-state responses (fast recovery time, small overshoot, zero steady-state error, etc.) but also insensitivity to abrupt load or input voltage variations.
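A rough sketch of finite-control-set MPC on an idealized boost converter; the parameters, the power-balance current reference, and the weight-update rule are all invented stand-ins for the paper's experimental setup and T-S fuzzy rules:

```python
# Finite-control-set MPC sketch for an ideal boost converter (Euler model,
# illustrative parameters -- not the paper's setup). The current reference
# comes from the lossless power balance; a crude T-S-style rule raises the
# over-current weight as the predicted current grows.
VIN, L, C, R, TS = 12.0, 1e-3, 470e-6, 20.0, 20e-6   # V, H, F, ohm, s
V_REF, I_MAX = 24.0, 6.0
I_REF = V_REF ** 2 / (R * VIN)                       # 2.4 A input current at 24 V

def predict(i, v, s):
    """One Euler step; s = 1 closes the switch, s = 0 conducts to the output."""
    i1 = i + TS * (VIN - (1 - s) * v) / L
    v1 = v + TS * ((1 - s) * i - v / R) / C
    return max(i1, 0.0), v1                          # diode blocks reverse current

def fuzzy_weight(i_pred):
    """Stand-in for a T-S fuzzy rule: penalty weight grows with current level."""
    mu = min(1.0, abs(i_pred) / I_MAX)               # membership of 'current is large'
    return 0.1 + 10.0 * mu ** 2

def cost(s, i, v):
    i1, _ = predict(i, v, s)
    over = max(0.0, abs(i1) - 0.8 * I_MAX)           # soft over-current penalty
    return (I_REF - i1) ** 2 + fuzzy_weight(i1) * over ** 2

i, v = 0.0, VIN
for _ in range(2000):                                # 40 ms of simulated time
    s = min((0, 1), key=lambda u: cost(u, i, v))     # pick best switch state
    i, v = predict(i, v, s)
print(f"v = {v:.2f} V (target {V_REF}), i = {i:.2f} A (ref {I_REF:.2f})")
```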
Graph theoretical analysis of complex networks in the brain
Stam, Cornelis J; Reijneveld, Jaap C
2007-01-01
Since the discovery of small-world and scale-free networks the study of complex systems from a network perspective has taken an enormous flight. In recent years many important properties of complex networks have been delineated. In particular, significant progress has been made in understanding the relationship between the structural properties of networks and the nature of dynamics taking place on these networks. For instance, the 'synchronizability' of complex networks of coupled oscillators can be determined by graph spectral analysis. These developments in the theory of complex networks have inspired new applications in the field of neuroscience. Graph analysis has been used in the study of models of neural networks, anatomical connectivity, and functional connectivity based upon fMRI, EEG and MEG. These studies suggest that the human brain can be modelled as a complex network, and may have a small-world structure both at the level of anatomical as well as functional connectivity. This small-world structure is hypothesized to reflect an optimal situation associated with rapid synchronization and information transfer, minimal wiring costs, as well as a balance between local processing and global integration. The topological structure of functional networks is probably restrained by genetic and anatomical factors, but can be modified during tasks. There is also increasing evidence that various types of brain disease such as Alzheimer's disease, schizophrenia, brain tumours and epilepsy may be associated with deviations of the functional network topology from the optimal small-world pattern. PMID:17908336
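As a quick illustration of these small-world measures, the snippet below (using the NetworkX library) compares a Watts-Strogatz graph with an edge-matched random graph:

```python
# Quick illustration of the small-world measures discussed above: a
# Watts-Strogatz graph shows a clustering coefficient C much larger than a
# comparable random graph's, at a similar characteristic path length L.
import networkx as nx

n, k, p = 100, 6, 0.1                      # nodes, neighbours, rewiring probability
sw = nx.watts_strogatz_graph(n, k, p, seed=1)
rnd = nx.gnm_random_graph(n, sw.number_of_edges(), seed=1)

for name, g in [("small-world", sw), ("random", rnd)]:
    if nx.is_connected(g):                 # path length is defined only if connected
        C = nx.average_clustering(g)
        Lp = nx.average_shortest_path_length(g)
        print(f"{name:12s} C = {C:.3f}, L = {Lp:.2f}")
```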
Game Theory and Risk-Based Levee System Design
NASA Astrophysics Data System (ADS)
Hui, R.; Lund, J. R.; Madani, K.
2014-12-01
Risk-based analysis has been developed for optimal levee design for economic efficiency. Along many rivers, two levees on opposite riverbanks act as a simple levee system. Being rational and self-interested, landowners on each river bank would tend to independently optimize their levees with risk-based analysis, resulting in a Pareto-inefficient levee system design from the social planner's perspective. Game theory is applied in this study to analyze the decision-making process in a simple levee system in which the landowners on each river bank develop their design strategies using risk-based economic optimization. For each landowner, the annual expected total cost includes the expected annual damage cost and the annualized construction cost. The non-cooperative Nash equilibrium is identified and compared to the social planner's optimal distribution of flood risk and damage cost throughout the system, which results in the minimum total flood cost for the system. The social planner's optimal solution is not feasible without an appropriate level of compensation for the transferred flood risk to guarantee and improve conditions for all parties. Therefore, cooperative game theory is then employed to develop an economically optimal design that can be implemented in practice. By examining the game in reversible and irreversible decision-making modes, the cost of decision-making myopia is calculated to underline the significance of considering the externalities and evolution path of dynamic water resource problems for optimal decision making.
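A toy best-response computation for the two-landowner game, with hypothetical cost numbers and a smooth risk-transfer term in place of the study's hydraulic model:

```python
# Toy best-response iteration for the two-landowner levee game (all numbers
# hypothetical, not the study's model). Each owner minimizes annualized
# construction cost plus expected annual damage; raising a levee smoothly
# pushes flood risk toward the opposite bank, and iterated best responses
# settle at the (Pareto-inefficient) Nash equilibrium.
import math

HEIGHTS = [0.5 * k for k in range(21)]     # candidate levee heights, 0-10 m
BUILD, DAMAGE, H0 = 40.0, 1000.0, 2.0      # cost/m, damage value, flood scale (m)

def expected_cost(h_own, h_other):
    p_flood = math.exp(-h_own / H0)                     # overtopping probability
    shift = 2.0 / (1.0 + math.exp(h_own - h_other))     # risk pushed across river
    return BUILD * h_own + DAMAGE * p_flood * shift

def best_response(h_other):
    return min(HEIGHTS, key=lambda h: expected_cost(h, h_other))

hA = hB = 0.0
for _ in range(30):                        # simultaneous best-response iteration
    hA, hB = best_response(hB), best_response(hA)
print(f"Nash equilibrium heights: {hA} m, {hB} m")
print("system-wide cost:", round(expected_cost(hA, hB) + expected_cost(hB, hA), 1))
```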
NASA Astrophysics Data System (ADS)
Maples, B. L.; Alvarez, L. V.; Moreno, H. A.; Chilson, P. B.; Segales, A.
2017-12-01
Classical in-situ direct surveying for geomorphological subsurface information in rivers is time-consuming, labor-intensive, costly, and often involves high-risk activities. Non-intrusive technologies such as UAS-based and LIDAR-based remote sensing therefore have promising potential for efficient and accurate measurement of channel topography over large areas within a short time, and a tremendous amount of attention has been paid to their development. Over the past two decades, efforts have been undertaken to develop specialized techniques that can penetrate the water body and detect the channel bed to derive river and coastal bathymetry. In this research, we develop a low-cost, effective technique for water body bathymetry. Using a sUAS and a lightweight sonar, the bathymetry and volume of a small reservoir have been surveyed. The sUAS surveying approach is conducted at low altitude (2 meters above the water), with the sUAS towing a small boat carrying the sonar. A cluster analysis is conducted to optimize the sUAS data collection and minimize the standard deviation created by under-sampling in areas of highly variable bathymetry, so measurements are densified in regions featuring steep slopes and drastic changes in the reservoir bed. This technique provides flexibility, efficiency, and risk-free operation for humans while obtaining high-quality information. The irregularly spaced bathymetric survey is then interpolated using unstructured Triangular Irregular Network (TIN)-based maps to avoid re-gridding or re-sampling issues.
Accelerating Dust Storm Simulation by Balancing Task Allocation in Parallel Computing Environment
NASA Astrophysics Data System (ADS)
Gui, Z.; Yang, C.; XIA, J.; Huang, Q.; YU, M.
2013-12-01
Dust storms have serious negative impacts on the environment, human health, and assets. Continuing global climate change has increased the frequency and intensity of dust storms in the past decades. To better understand and predict the distribution, intensity, and structure of dust storms, a series of dust storm models have been developed, such as the Dust Regional Atmospheric Model (DREAM), the NMM meteorological module (NMM-dust), and the Chinese Unified Atmospheric Chemistry Environment for Dust (CUACE/Dust). The development and application of these models have contributed significantly to both scientific research and our daily life. However, dust storm simulation is a data- and computing-intensive process. Normally, a simulation for a single dust storm event may take several hours or even days to run, which seriously impacts the timeliness of prediction and potential applications. To speed up the process, high performance computing is widely adopted. By partitioning a large study area into small subdomains according to their geographic location and executing them on different computing nodes in a parallel fashion, the computing performance can be significantly improved. Since spatiotemporal correlations exist in the geophysical process of dust storm simulation, each subdomain allocated to a node needs to communicate with other geographically adjacent subdomains to exchange data. Inappropriate allocations may introduce imbalanced task loads and unnecessary communications among computing nodes. Therefore, the task allocation method is the key factor, which may impact the feasibility of the parallelization. The allocation algorithm needs to carefully balance the computing cost and communication cost for each computing node to minimize total execution time and reduce overall communication cost for the entire system. This presentation introduces two algorithms for such allocation and compares them with an evenly distributed allocation method. Specifically: 1) In order to obtain optimized solutions, a quadratic programming based modeling method is proposed. This algorithm performs well with a small number of computing tasks, but its efficiency decreases significantly as the subdomain number and computing node number increase. 2) To compensate for this performance decrease on large-scale tasks, a K-Means clustering based algorithm is introduced. Instead of seeking exact optimized solutions, this method can obtain relatively good feasible solutions within acceptable time, but it may introduce imbalanced communication among nodes or node-isolated subdomains. This research shows that both algorithms have their own strengths and weaknesses for task allocation. A combination of the two algorithms is under study to obtain better performance. Keywords: Scheduling; Parallel Computing; Load Balance; Optimization; Cost Model
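A sketch of the K-Means idea applied to synthetic subdomain centroids; a real allocation would also weight computing and communication costs:

```python
# Sketch of the K-Means-style allocation idea: subdomain centroids are
# clustered so geographically adjacent subdomains land on the same computing
# node, keeping most halo exchanges node-local. Data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
subdomains = rng.uniform(0, 100, size=(60, 2))   # centroid coordinates
k = 4                                            # number of computing nodes

centers = subdomains[rng.choice(len(subdomains), k, replace=False)]
for _ in range(20):                              # Lloyd iterations
    d = np.linalg.norm(subdomains[:, None] - centers[None], axis=2)
    labels = d.argmin(axis=1)                    # assign each subdomain to a node
    centers = np.array([subdomains[labels == j].mean(axis=0)
                        if np.any(labels == j) else centers[j]
                        for j in range(k)])      # keep old center if cluster empties

counts = np.bincount(labels, minlength=k)
print("subdomains per node:", counts)            # locality is good, but the load
# may be uneven -- the trade-off the quadratic-programming variant addresses
```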
NASA Astrophysics Data System (ADS)
Saavedra, Juan Alejandro
Quality Control (QC) and Quality Assurance (QA) strategies vary significantly across industries in the manufacturing sector depending on the product being built. Such strategies range from simple statistical analysis and process controls to decision-making processes for reworking, repairing, or scrapping defective product. This study proposes an optimal QC methodology for including rework stations in the manufacturing process by identifying the number and location of these workstations. The factors considered in optimizing these stations are cost, cycle time, reworkability, and rework benefit. The goal is to minimize the cost and cycle time of the process while increasing reworkability and rework benefit. The specific objectives of this study are: (1) to propose a cost estimation model that includes energy consumption, and (2) to propose an optimal QC methodology to identify the quantity and location of rework workstations. The cost estimation model includes energy consumption as part of the product direct cost and allows the user to calculate product direct cost as the quality sigma level of the process changes. This provides a benefit because a complete cost estimation does not need to be performed every time the process yield changes. This cost estimation model is then used in the QC strategy optimization process. In order to propose a methodology that provides an optimal QC strategy, the possible factors that affect QC were evaluated. A screening Design of Experiments (DOE) performed on seven initial factors identified three significant factors and showed that one response variable was not required for the optimization process. A full factorial DOE was then performed to verify the significant factors obtained previously. The QC strategy optimization is performed with a Genetic Algorithm (GA), which evaluates many candidate solutions in order to obtain feasible optimal ones based on cost, cycle time, reworkability, and rework benefit. Because this is a multi-objective optimization problem, the GA provides several possible solutions, presented as chromosomes that clearly state the number and location of the rework stations. The user analyzes these solutions and selects one by deciding which of the four factors is most important for the product being manufactured or the company's objective. The major contribution of this study is to provide the user with a methodology to identify an effective and optimal QC strategy that incorporates the number and location of rework substations in order to minimize direct product cost and cycle time and maximize reworkability and rework benefit.
Optimized solar-wind-powered drip irrigation for farming in developing countries
NASA Astrophysics Data System (ADS)
Barreto, Carolina M.
Two billion people produce 80% of all food consumed in the developing world, and 1.3 billion people lack access to electricity. Agricultural production will have to increase by about 70% worldwide by 2050, and to achieve this, about 50% more primary energy must be made available by 2035. Energy-smart agri-food systems can improve productivity in the food sector, reduce energy poverty in rural areas, and contribute to achieving food security and sustainable development. Agriculture can help reduce poverty for 75% of the world's poor, who live in rural areas and work mainly in farming. The costs associated with irrigation pumping are directly affected by energy prices and have a strong impact on farmer income. Solar-wind (SW) powered drip irrigation (DI) is a sustainable method to meet these challenges. This dissertation uses on-site data to show the low cost of SW pumping technologies, correlating water consumption (evapotranspiration) with water production (SW pumping). The author designed, installed, and collected operating data from six SWDI systems in Peru and in the Tohono O'odham Nation in AZ. The author also developed and tested a simplified model for solar engineers to size SWDI systems, and developed a business concept to scale up the SWDI technology. The outcome was a simplified design approach for a DI system powered by low-cost SW pumping systems, optimized based on the logged on-site data. The optimization showed that the SWDI system is an income-generating technology: by increasing crop production per unit area, it allowed small farmers to pay for the system. The efficient system resulted in increased yields, sometimes three- to four-fold. The system is a model for smallholder agriculture in developing countries and can bring better nutrition and greater incomes to the world's poor.
NASA Technical Reports Server (NTRS)
Diaz-Aguado, Millan F.; VanOutryve, Cassandra; Ghassemiah, Shakib; Beasley, Christopher; Schooley, Aaron
2009-01-01
Small spacecraft have been increasing in popularity because of their low cost, short turnaround, and relative efficiency. In the past, small spacecraft were primarily used for technology demonstrations, but advances in technology have made the miniaturization of space science possible [1,2]. PharmaSat is a low-cost, small, three-cube-size spacecraft with a biological experiment on board, built at the NASA (National Aeronautics and Space Administration) Ames Research Center. The thermal design of small spacecraft presents challenges, as their smaller surface areas translate into power and thermal constraints. The spacecraft is thermally designed to run colder in the Low Earth Orbit space environment and is heated to reach the temperatures required by the science payload. The limited power supply obtained from solar panels on small surfaces constrains the power available to heat the payload to the required temperatures. The pressurized payload is isolated from large ambient temperature changes by low-thermal-conductance paths. The thermal design consists of different optical properties of section surfaces, Multi-Layer Insulation (MLI), low-thermal-conductance materials, flexible heaters, and thermal spreaders. The payload temperature is controlled with temperature sensors and flexible heaters. Finite Element Analysis (FEA) and testing were used to aid the thermal design of the spacecraft. Various tests were conducted to verify the thermal design. An infrared imager was used on the electronic boards to find large heat sources and eliminate any possible temperature runaways. The spacecraft was tested in a thermal vacuum chamber to optimize the thermal and power analysis and qualify the thermal design of the spacecraft for the mission.
Design and analysis of a flight plan optimization system for aircraft
NASA Astrophysics Data System (ADS)
Maazoun, Wissem
The main objective of this thesis is to develop an optimization method for the preparation of flight plans for aircraft. The flight plan minimizes all costs associated with the flight. We determine an optimal path for an airplane from a departure airport to a destination airport. The optimal path minimizes the sum of all costs, i.e. the cost of fuel added to the cost of time (wages, rental of the aircraft, arrival delays, etc.). The optimal trajectory is obtained by considering all possible trajectories on a 3D graph (longitude, latitude and altitude), where the altitude levels are separated by 2,000 feet, and by applying a shortest path algorithm. The main task was to accurately compute fuel consumption on each edge of the graph, making sure that each arc has a minimal cost and is covered in a realistic way from the point of view of control, i.e. in accordance with the rules of navigation. To compute the cost of an arc, we take into account weather conditions (temperature, pressure, wind components, etc.). The optimization of each arc is done via the evaluation of an optimum speed that takes all costs into account. Each arc of the graph typically includes several sub-phases of the flight, e.g. altitude change, speed change, and constant speed and altitude. In the initial climb and final descent phases, the costs are determined by considering altitude changes at constant CAS (Calibrated Air Speed) or constant Mach number. CAS and Mach number are adjusted to minimize cost. The aerodynamic model used is the one proposed by Eurocontrol, which uses the BADA (Base of Aircraft Data) tables. This model is based on the total energy equation that determines the instantaneous fuel consumption. Calculations on each arc are done by solving a system of differential equations that systematically takes all costs into account. To compute the cost of an arc, we must know the time to traverse it, which is generally unknown. To have well-posed boundary conditions, we use the horizontal displacement as the independent variable of the system of differential equations. We consider the velocity components of the wind in a 3D system of coordinates to compute the instantaneous ground speed of the aircraft. To account for the cost of time, we use the cost index. The cost of an arc depends on the aircraft mass at the beginning of that arc, and this mass depends on the path. As we consider all possible paths, the cost of an arc must be computed for each trajectory to which it belongs. For a long-distance flight, the number of arcs to be considered in the graph is large and therefore the cost of an arc is typically computed many times. Our algorithm computes the costs of one million arcs in seconds with high accuracy. The determination of the optimal trajectory can therefore be done in a short time. To obtain the optimal path, the mass of the aircraft at the departure point must also be optimal. It is therefore necessary to know the optimal amount of fuel for the journey. The aircraft mass is known only at the arrival point: it is the mass of the aircraft including passengers, cargo and the reserve fuel mass. The optimal path is therefore determined by calculating backwards, i.e. from the arrival point to the departure point. For the determination of the optimal trajectory, we use an elliptical grid whose focal points are the departure and arrival points. The use of this grid is essential for the construction of a directed acyclic graph. We use the Bellman-Ford algorithm on a DAG to determine the shortest path.
This algorithm is easy to implement and results in short computation times. Our algorithm computes an optimal trajectory with an optimal cost for each arc. Altitude changes are done optimally with respect to the mass of the aircraft and the cost of time. Our algorithm gives the mass, speed, altitude and total cost at any point of the trajectory as well as the optimal profiles of climb and descent. A prototype has been implemented in C. We made simulations of all types of possible arcs and of several complete trajectories to illustrate the behaviour of the algorithm.
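A minimal sketch of the shortest-path step, relaxing arcs in topological order over a toy layered graph standing in for the elliptical 3D grid (the arc costs here are arbitrary numbers):

```python
# Shortest path on a DAG in topological order -- the "Bellman-Ford on a DAG"
# idea described above. Node 0 is the departure, node 5 the arrival; each arc
# cost would, in the real system, be fuel plus cost-index-weighted time.
from collections import defaultdict

edges = defaultdict(list)
for u, v, c in [(0, 1, 4.0), (0, 2, 2.5), (1, 3, 1.0), (2, 3, 2.0),
                (1, 4, 3.0), (2, 4, 1.5), (3, 5, 2.2), (4, 5, 1.8)]:
    edges[u].append((v, c))

topo = [0, 1, 2, 3, 4, 5]                  # topological order of the layered grid
dist = {n: float("inf") for n in topo}
pred = {}
dist[0] = 0.0
for u in topo:                             # one relaxation pass suffices on a DAG
    for v, c in edges[u]:
        if dist[u] + c < dist[v]:
            dist[v] = dist[u] + c
            pred[v] = u

path, n = [], 5                            # walk predecessors back to the start
while n != 0:
    path.append(n)
    n = pred[n]
path.append(0)
print("optimal route:", path[::-1], "cost:", dist[5])
```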
Economic analysis of transmission line engineering based on industrial engineering
NASA Astrophysics Data System (ADS)
Li, Yixuan
2017-05-01
Modern industrial engineering is applied to the technical and cost analysis of power transmission and transformation engineering, which can effectively reduce investment cost. First, the power transmission project is analyzed economically: based on a feasibility study of power transmission and transformation project investment, a proposal for company-level cost management is put forward through economic analysis of the system's effect, and the cost management system is optimized. Then, through cost analysis of the power transmission and transformation project, new issues arising from construction costs are identified, which is of guiding significance for further improving the cost management of power transmission and transformation projects. Finally, given the present state of power transmission project cost management, concrete measures to reduce the cost of power transmission projects are presented from two aspects: system optimization and technology optimization.
NASA Astrophysics Data System (ADS)
Muratore-Ginanneschi, Paolo
2005-05-01
Investment strategies in multiplicative Markovian market models with transaction costs are defined using growth-optimal criteria. The optimal strategy is shown to consist in holding the amount of capital invested in stocks within an interval around an ideal optimal investment. The size of the holding interval is determined by the intensity of the transaction costs and the time horizon. The inclusion of financial derivatives in the models is also considered. All the results presented in this contribution were previously derived in collaboration with E. Aurell.
Estimation of optimal educational cost per medical student.
Yang, Eunbae B; Lee, Seunghee
2009-09-01
This study aims to estimate the optimal educational cost per medical student. A private medical college in Seoul was targeted by the study, and its 2006 learning environment and data from the 2003~2006 budget and settlement were carefully analyzed. Through interviews with 3 medical professors and 2 experts in the economics of education, the study established an educational cost estimation model, which yields an empirically computed estimate of the optimal cost per student in a medical college. The estimation model was based primarily upon the educational cost, which consisted of direct educational costs (47.25%), support costs (36.44%), fixed asset purchases (11.18%) and costs for student affairs (5.14%). These results indicate that the optimal cost per student is approximately 20,367,000 won per semester; thus, training a doctor costs 162,936,000 won over 4 years. Consequently, we inferred that the tuition levels of a local medical college or professional medical graduate school cover one quarter to one half of the per-student cost. The findings of this study do not necessarily imply an increase in medical college tuition; the estimation of the per-student cost of training a doctor is one matter, and the issue of who should bear this burden is another. For further study, the college type and its location should be considered for general application of the estimation method, in addition to living expenses and opportunity costs.
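A quick arithmetic check of the quoted figures:

```python
# Consistency check of the figures quoted above: the per-semester estimate
# times 8 semesters (4 years) should reproduce the quoted 4-year total, and
# the reported cost-component shares should sum to ~100%.
per_semester = 20_367_000              # won
total = per_semester * 8               # 4 years = 8 semesters
print(f"{total:,} won")                # 162,936,000 won, matching the abstract

shares = {"direct educational": 47.25, "support": 36.44,
          "fixed asset purchases": 11.18, "student affairs": 5.14}
print("shares sum:", round(sum(shares.values()), 2), "%")   # 100.01% (rounding)
```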
Algorithm For Optimal Control Of Large Structures
NASA Technical Reports Server (NTRS)
Salama, Moktar A.; Garba, John A..; Utku, Senol
1989-01-01
Cost of computation appears competitive with other methods. Problem to compute optimal control of forced response of structure with n degrees of freedom identified in terms of smaller number, r, of vibrational modes. Article begins with Hamilton-Jacobi formulation of mechanics and use of quadratic cost functional. Complexity reduced by alternative approach in which quadratic cost functional expressed in terms of control variables only. Leads to iterative solution of second-order time-integral matrix Volterra equation of second kind containing optimal control vector. Cost of algorithm, measured in terms of number of computations required, is of order of, or less than, cost of prior algorithms applied to similar problems.
Affordable CZT SPECT with dose-time minimization (Conference Presentation)
NASA Astrophysics Data System (ADS)
Hugg, James W.; Harris, Brian W.; Radley, Ian
2017-03-01
PURPOSE Pixelated CdZnTe (CZT) detector arrays are used in molecular imaging applications that can enable precision medicine, including small-animal SPECT, cardiac SPECT, molecular breast imaging (MBI), and general purpose SPECT. The interplay of gamma camera, collimator, gantry motion, and image reconstruction determines image quality and dose-time-FOV tradeoffs. Both dose and exam time can be minimized without compromising diagnostic content. METHODS Integration of pixelated CZT detectors with advanced ASICs and readout electronics improves system performance. Because historically CZT was expensive, the first clinical applications were limited to small FOV. Radiation doses were initially high and exam times long. Advances have significantly improved efficiency of CZT-based molecular imaging systems and the cost has steadily declined. We have built a general purpose SPECT system using our 40 cm x 53 cm CZT gamma camera with 2 mm pixel pitch and characterized system performance. RESULTS Compared to NaI scintillator gamma cameras: intrinsic spatial resolution improved from 3.8 mm to 2.0 mm; energy resolution improved from 9.8% to <4 % at 140 keV; maximum count rate is <1.5 times higher; non-detection camera edges are reduced 3-fold. Scattered photons are greatly reduced in the photopeak energy window; image contrast is improved; and the optimal FOV is increased to the entire camera area. CONCLUSION Continual improvements in CZT detector arrays for molecular imaging, coupled with optimal collimator and image reconstruction, result in minimized dose and exam time. With CZT cost improving, affordable whole-body CZT general purpose SPECT is expected to enable precision medicine applications.
A system-level cost-of-energy wind farm layout optimization with landowner modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Le; MacDonald, Erin
This work applies an enhanced levelized wind farm cost model, including landowner remittance fees, to determine optimal turbine placements under three landowner participation scenarios and two land-plot shapes. Instead of assuming that a continuous piece of land is available for the wind farm construction, as in most layout optimizations, the problem formulation represents landowner participation scenarios as a binary string variable, along with the number of turbines. The cost parameters and model are a combination of models from the National Renewable Energy Laboratory (NREL), Lawrence Berkeley National Laboratory, and Windustry. The system-level cost-of-energy (COE) optimization model is also tested under two land-plot shapes: equally-sized square land plots and unequal rectangular land plots. The optimal COE results are compared to actual COE data and found to be realistic. The results show that landowner remittances account for approximately 10% of farm operating costs across all cases. Irregular land-plot shapes are easily handled by the model. We find that larger land plots do not necessarily receive higher remittance fees. The model can help site developers identify the most crucial land plots for project success and the optimal positions of turbines, with realistic estimates of costs and profitability.
NASA Astrophysics Data System (ADS)
Wang, Wu; Huang, Wei; Zhang, Yongjun
2018-03-01
The grid integration of photovoltaic-storage systems introduces uncertain factors into the network. In order to make full use of the adjusting ability of the Photovoltaic-Storage System (PSS), this paper puts forward a reactive power optimization model in which the objective function is constructed from power loss and device adjusting cost, including the energy storage adjusting cost. A Cataclysmic Genetic Algorithm is used to solve the optimization problem. Comparison with other optimization methods shows that the proposed dynamic extended reactive power optimization enhances the effect of reactive power optimization, reducing both power loss and device adjusting cost while giving consideration to voltage safety.
NASA Astrophysics Data System (ADS)
Olivia, G.; Santoso, A.; Prayogo, D. N.
2017-11-01
Nowadays, competition between supply chains is getting tighter, and a good coordination system among supply chain members is crucial. This paper focuses on developing a coordination model between a single supplier and multiple buyers in a supply chain. The proposed optimization model determines the optimal number of deliveries from the supplier to the buyers in order to minimize the total cost over a planning horizon. The total supply chain cost consists of transportation costs, the handling costs of the supplier and buyers, and stock-out costs. In the proposed optimization model, the supplier can supply various types of items to retailers whose item demand patterns are probabilistic. A sensitivity analysis of the proposed model was conducted to test the effect of changes in transport costs, handling costs, and the production capacity of the supplier. The sensitivity analysis showed that changes in transportation cost, handling costs, and production capacity significantly influence the optimal number of deliveries for each item to the buyers.
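An illustrative enumeration of the delivery-count decision with invented cost terms (the paper's model, with probabilistic demands and multiple buyers, is richer):

```python
# Illustrative enumeration of the delivery-frequency decision (hypothetical
# cost data, not the paper's model): more deliveries raise transport cost but
# cut cycle stock and expected stock-outs; pick n minimizing the total.
DEMAND = {"item_A": 1200, "item_B": 500}   # units per planning horizon
TRANSPORT = 80.0                           # cost per delivery trip
HOLDING = 0.5                              # handling/holding cost per unit held
STOCKOUT = 2000.0                          # expected stock-out cost weight

def total_cost(n, demand):
    cycle_stock = demand / (2 * n)         # average inventory between deliveries
    return n * TRANSPORT + HOLDING * cycle_stock + STOCKOUT / (n + 1)

for item, d in DEMAND.items():
    best = min(range(1, 53), key=lambda n: total_cost(n, d))
    print(f"{item}: {best} deliveries, cost {total_cost(best, d):.1f}")
```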
Research in the design of high-performance reconfigurable systems
NASA Technical Reports Server (NTRS)
Mcewan, S. D.; Spry, A. J.
1985-01-01
Computer aided design and computer aided manufacturing have the potential for greatly reducing the cost and lead time in the development of VLSI components. This potential paves the way for the design and fabrication of a wide variety of economically feasible high level functional units. It was observed that current computer systems have only a limited capacity to absorb new VLSI component types other than memory, microprocessors, and a relatively small number of other parts. The first purpose is to explore a system design which is capable of effectively incorporating a considerable number of VLSI part types and will both increase the speed of computation and reduce the attendant programming effort. A second purpose is to explore design techniques for VLSI parts which when incorporated by such a system will result in speeds and costs which are optimal. The proposed work may lay the groundwork for future efforts in the extensive simulation and measurements of the system's cost effectiveness and lead to prototype development.
NASA Astrophysics Data System (ADS)
Kim, U.; Parker, J.
2016-12-01
Many dense non-aqueous phase liquid (DNAPL) contaminated sites in the U.S. are reported as "remediation in progress" (RIP). However, the cost to complete (CTC) remediation at these sites is highly uncertain, and in many cases the current remediation plan may need to be modified or replaced to achieve remediation objectives. This study evaluates the effectiveness of iterative stochastic cost optimization that incorporates new field data for periodic parameter recalibration to incrementally reduce prediction uncertainty and implement remediation design modifications as needed to minimize the life cycle cost (i.e., CTC). This systematic approach, using the Stochastic Cost Optimization Toolkit (SCOToolkit), enables early identification and correction of problems to stay on track for completion while minimizing the expected (i.e., probability-weighted average) CTC. This study considers a hypothetical site involving multiple DNAPL sources in an unconfined aquifer, using thermal treatment for source reduction and electron donor injection for dissolved plume control. The initial design is based on stochastic optimization using model parameters and their joint uncertainty derived from calibration to site characterization data. The model is periodically recalibrated using new monitoring data and performance data for the operating remediation systems. Projected future performance under the current remediation plan is assessed, and depending on the results, operational variables are re-optimized or alternative designs are considered. We compare remediation duration and cost for the stepwise re-optimization approach with single-stage optimization as well as with a non-optimized design based on typical engineering practice.
Predictive Optimal Control of Active and Passive Building Thermal Storage Inventory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gregor P. Henze; Moncef Krarti
2005-09-30
Cooling of commercial buildings contributes significantly to the peak demand placed on an electrical utility grid. Time-of-use electricity rates encourage shifting of electrical loads to off-peak periods at night and on weekends. Buildings can respond to these pricing signals by shifting cooling-related thermal loads, either by precooling the building's massive structure or by using active thermal energy storage systems such as ice storage. While these two thermal batteries have been engaged separately in the past, this project investigated the merits of harnessing both storage media concurrently in the context of predictive optimal control. To pursue the analysis, modeling, and simulation research of Phase 1, two separate simulation environments were developed. The first, based on the new dynamic building simulation program EnergyPlus, added a utility rate module and two thermal energy storage models, along with a sequential optimization approach to the cost minimization problem using direct search, gradient-based, and dynamic programming methods. The objective function was the total utility bill, including the cost of reheat and a time-of-use electricity rate with or without demand charges. An alternative simulation environment based on TRNSYS and Matlab was developed to allow for comparison and cross-validation with EnergyPlus. The initial evaluation of the theoretical potential of the combined optimal control assumed perfect weather prediction and a perfect match between the building model and its actual building counterpart. The analysis showed that the combined utilization leads to cost savings that are significantly greater than with either storage alone but less than the sum of the individual savings. The findings reveal that the cooling-related on-peak electrical demand of commercial buildings can be considerably reduced. A subsequent analysis of the impact of uncertainty in the required short-term weather forecasts determined that it takes only very simple short-term prediction models to realize almost all of the theoretical potential of this control strategy. Further work evaluated the impact of modeling accuracy on the model-based closed-loop predictive optimal controller used to minimize utility cost. The following guidelines were derived: For an internal-heat-gain dominated commercial building, reasonable geometry simplifications are acceptable without a loss of cost savings potential; in fact, zoning simplification may improve optimizer performance and save computation time. The mass of the internal structure did not show a strong effect on the optimization. Building construction characteristics were found to impact the building's passive thermal storage capacity, so care should be taken to model the construction materials well. Zone temperature setpoint profiles and TES performance are strongly affected by mismatches in internal heat gains, especially when they are underestimated; since internal gains are a key factor in determining the building cooling load, efforts should be made to keep this mismatch as small as possible. Efficiencies of the building energy systems affect both zone temperature setpoints and active TES operation because of the coupling of the base chiller for building precooling and the icemaking TES chiller. The relative efficiencies of the base and TES chillers will determine the balance of operation of the two chillers, and the impact of mismatch in this category may be significant.
Next, a parametric analysis was conducted to assess the effects of building mass, utility rate, building location and season, thermal comfort, central plant capacities, and an economizer on the cost-saving performance of optimal control of active and passive building thermal storage inventory. The key findings are: (1) Heavy-mass buildings, strong-incentive time-of-use electrical utility rates, and large on-peak cooling loads will likely lead to attractive savings from optimal combined thermal storage control. (2) By using an economizer to take advantage of cool fresh air during the night, building electrical cost can be reduced through less mechanical cooling. (3) Larger base chiller and active thermal storage capacities have the potential to shift more cooling load to off-peak hours, and thus higher savings can be achieved. (4) Optimal combined thermal storage control with a thermal comfort penalty included in the objective function can improve the thermal comfort of building occupants when compared to the non-optimized base case. Lab testing conducted in the Larson HVAC Laboratory during Phase 2 showed that the EnergyPlus-based simulation predicted the experiment surprisingly accurately. Therefore, actual savings in building energy costs can be expected from applying optimal controls derived from simulation results.
Watershed Controls on the Proper Scale of Economic Markets for Pollution Reduction
NASA Astrophysics Data System (ADS)
Rigby, J.; Doyle, M. W.; Yates, A.
2010-12-01
Markets for tradable discharge permits (TDPs) are an increasingly popular policy instrument for obtaining cost-effective nutrient reduction targets across watersheds. Such markets are also an emerging, dynamic coupling between economic institutions and stream hydrology/biogeochemistry as trading markets become explicit determinants for the spatial distribution of stream nutrient loads. A central problem in any environmental market program is setting the size of the market, as there are distinct trade-offs for large versus small markets. While the overall cost-effectiveness of permit trading increases with the size of the market, the potential for localized and highly damaging nutrient concentrations, or “hotspots”, also increases. Smaller market size reduces the potential for hot spots by dispersing the location of trades, but this may increase the net costs of water quality compliance significantly through both the restriction of possible trading partners and price manipulation by market participants. This project couples a microeconomic model for TDPs (based on possible configurations of mutually exclusive trading zones within the basin) with a semi-distributed water quality model to examine watershed controls on the configuration and scale of such markets. Our results show a wide variation in total annual cost of pollution abatement based on choice of market design -- often with large differences in cost between very similar configurations. This framework is also applied to a 10-member trading program among wastewater treatment plants in the Neuse River, NC, in order to assess (1) the optimum market design for the Upper Neuse basin and (2) how these costs compare with expected costs under alternative market structures (e.g., trading ratio system) and (3) the cost improvements over traditional command-and-control regulatory frameworks. We find that the optimal zone configuration is almost always a lower cost option when compared to a trading ratio scheme and that the optimal design depends largely on the range of plant sizes and their geographic distribution within the stream network. Leveraging this model, we can develop a heuristic understanding of how the shape or topography of watersheds, and/or the spatial distribution of polluters may constrain the utility of market mechanisms in water quality regulation.
Design optimization of embedded ultrasonic transducers for concrete structures assessment.
Dumoulin, Cédric; Deraemaeker, Arnaud
2017-08-01
In the last decades, the field of structural health monitoring and damage detection has been intensively explored. Active vibration techniques make it possible to excite structures with high-frequency vibrations, which are sensitive to small damage. Piezoelectric PZT transducers are perfect candidates for such testing due to their small size, low cost, and large bandwidth. Current ultrasonic systems are based on external piezoelectric transducers which need to be placed on two faces of the concrete specimen. The limited accessibility of in-service structures makes such an arrangement often impractical. An alternative is to permanently embed low-cost transducers inside the structure. Such transducers have been applied successfully for the in-situ estimation of the P-wave velocity in fresh concrete and for crack monitoring. Up to now, the design of such transducers was essentially based on trial and error or, in a few cases, on limiting the acoustic impedance mismatch between the PZT and concrete. In the present study, we explore the working principles of embedded piezoelectric transducers, which are found to be significantly different from those of external transducers. One of the major challenges concerning embedded transducers is to produce very low-cost transducers. We show that a practical way to achieve this imperative is to use the radial mode of actuation of bulk PZT elements. This is done by developing a simple finite element model of a piezoelectric transducer embedded in an infinite medium. The model is coupled with a multi-objective genetic algorithm which is used to design specific ultrasonic embedded transducers for both hard and fresh concrete monitoring. The results show the efficiency of the approach, and a few designs are proposed which are optimal for hard concrete, fresh concrete, or both, in a given frequency band of interest.
Directed Differentiation of Embryonic Stem Cells Using a Bead-Based Combinatorial Screening Method
Tarunina, Marina; Hernandez, Diana; Johnson, Christopher J.; Rybtsov, Stanislav; Ramathas, Vidya; Jeyakumar, Mylvaganam; Watson, Thomas; Hook, Lilian; Medvinsky, Alexander; Mason, Chris; Choo, Yen
2014-01-01
We have developed a rapid, bead-based combinatorial screening method to determine optimal combinations of variables that direct stem cell differentiation to produce known or novel cell types having pre-determined characteristics. Here we describe three experiments comprising stepwise exposure of mouse or human embryonic cells to 10,000 combinations of serum-free differentiation media, through which we discovered multiple novel, efficient and robust protocols to generate a number of specific hematopoietic and neural lineages. We further demonstrate that the technology can be used to optimize existing protocols in order to substitute costly growth factors with bioactive small molecules and/or increase cell yield, and to identify in vitro conditions for the production of rare developmental intermediates such as an embryonic lymphoid progenitor cell that has not previously been reported. PMID:25251366
1.25-3.125 Gb/s per user PON with RSOA as phase modulator for statistical wavelength ONU
NASA Astrophysics Data System (ADS)
Chu, Guang Yong; Polo, Victor; Lerín, Adolfo; Tabares, Jeison; Cano, Iván N.; Prat, Josep
2015-12-01
We report a new scheme to cost-efficiently support ultra-dense wavelength division multiplexing (UDWDM) for optical access networks. As a validating experiment, we apply phase modulation of a reflective semiconductor optical amplifier (RSOA) at the ONU with a single DFB, and a simplified coherent receiver at the OLT for upstream. We extend the limited 3-dB modulation bandwidth of an available uncooled TO-can packaged RSOA (~400 MHz) and operate it at 3.125 Gb/s, selecting the optimal operating point for phase modulation from small- and large-signal measurements. The optimal condition is an input power of 0 dBm and a bias current of 70 mA. The sensitivities at 3.125 Gb/s (at BER = 10^-3) for heterodyne and intradyne detection reach -34.3 dBm and -38.8 dBm, respectively.
Optimizing the scale of markets for water quality trading
NASA Astrophysics Data System (ADS)
Doyle, Martin W.; Patterson, Lauren A.; Chen, Yanyou; Schnier, Kurt E.; Yates, Andrew J.
2014-09-01
Applying market approaches to environmental regulations requires establishing a spatial scale for trading. Spatially large markets usually increase opportunities for abatement cost savings but increase the potential for pollution damages (hot spots), and vice versa for spatially small markets. We develop a coupled hydrologic-economic modeling approach for application to point source emissions trading by a large number of sources and apply this approach to the wastewater treatment plants (WWTPs) within the watershed of the second largest estuary in the U.S. We consider two different administrative structures that govern the trade of emission permits: one-for-one trading (the number of permits required for each unit of emissions is the same for every WWTP) and trading ratios (the number of permits required for each unit of emissions varies across WWTPs). Results show that water quality regulators should allow trading to occur at the river basin scale as an appropriate first-step policy, as is being done in a limited number of cases via compliance associations. Larger spatial scales may be needed under conditions of increased abatement costs. The optimal scale of the market is generally the same regardless of whether one-for-one trading or trading ratios are employed.
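The two administrative structures differ in how emissions are weighted against the cap, which a small linear program can make explicit. A hedged sketch with hypothetical plants (the delivery ratios, costs, and cap are invented, and the two caps are not calibrated to be welfare-equivalent):

```python
import numpy as np
from scipy.optimize import linprog

# Choose abatement x_i at each WWTP (cost a_i per unit) to meet a cap.
# Under trading ratios each unit of emission counts by its delivery
# ratio d_i; under one-for-one trading all units count equally.
e = np.array([100.0, 80.0, 60.0])   # baseline emissions (hypothetical)
a = np.array([2.0, 3.0, 5.0])       # marginal abatement costs
d = np.array([0.9, 0.5, 0.3])       # delivery ratios to the estuary
cap = 90.0                          # allowed load

for label, w in (("trading ratios", d), ("one-for-one", np.ones(3))):
    # minimize a.x subject to w.(e - x) <= cap, 0 <= x <= e
    res = linprog(a, A_ub=[-w], b_ub=[cap - w @ e],
                  bounds=list(zip([0.0] * 3, e)))
    print(f"{label:>14}: abatement {np.round(res.x, 1)}, cost {res.fun:.0f}")
```

Under trading ratios the cheapest abatement per delivered unit (a_i/d_i) is exploited first, which is why the two structures can allocate reductions very differently even when, as the abstract finds, the optimal market scale is similar.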
A Framework for Optimizing Phytosanitary Thresholds in Seed Systems.
Choudhury, Robin Alan; Garrett, Karen A; Klosterman, Steven J; Subbarao, Krishna V; McRoberts, Neil
2017-10-01
Seedborne pathogens and pests limit production in many agricultural systems. Quarantine programs help prevent the introduction of exotic pathogens into a country, but few regulations directly apply to reducing the reintroduction and spread of endemic pathogens. Use of phytosanitary thresholds helps limit the movement of pathogen inoculum through seed, but the costs associated with rejected seed lots can be prohibitive for voluntary implementation. In this paper, we outline a framework to optimize thresholds for seedborne pathogens, balancing the cost of rejected seed lots against the benefit of reduced inoculum levels. The method requires relatively small amounts of data, and the accuracy and robustness of the analysis improve over time as data accumulate from seed testing. We first demonstrate the method and then illustrate it with a case study of seedborne oospores of Peronospora effusa, the causal agent of spinach downy mildew. A seed lot threshold of 0.23 oospores per seed could reduce the overall number of oospores entering the production system by 90% while removing 8% of seed lots destined for distribution. Alternative mitigation strategies may result in lower economic losses to seed producers but have uncertain efficacy. We discuss future challenges and prospects for implementing this approach.
A PC program to optimize system configuration for desired reliability at minimum cost
NASA Technical Reports Server (NTRS)
Hills, Steven W.; Siahpush, Ali S.
1994-01-01
High reliability is desired in all engineered systems. One way to improve system reliability is to use redundant components. When redundant components are used, the problem becomes one of allocating them to achieve the best reliability without exceeding other design constraints such as cost, weight, or volume. Systems with few components can be optimized by simply examining every possible combination, but the number of combinations for most systems is prohibitive. A computerized iteration of the process is possible, but anything short of a supercomputer requires too much time to be practical. Many researchers have derived mathematical formulations for calculating the optimum configuration directly. However, most of the derivations are based on continuous functions, whereas the real system is composed of discrete entities. Therefore, these techniques are approximations of the true optimum solution. This paper describes a computer program that determines the optimum configuration of a system with multiple redundancy of both standard and optional components. The algorithm is a pair-wise comparative progression technique that can derive the true optimum by calculating only a small fraction of the total number of combinations. A designer can quickly analyze a system with this program on a personal computer.
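For contrast with the paper's pair-wise comparative technique, the brute-force enumeration it avoids is easy to state. A minimal sketch with hypothetical reliabilities and costs for a three-subsystem series system:

```python
from itertools import product

# Toy series system: three subsystem types, each with 1..4 redundant
# components in parallel. Exhaustive enumeration for illustration only;
# the paper's algorithm reaches the optimum without visiting every case.
rel = [0.90, 0.85, 0.95]   # single-component reliabilities (hypothetical)
cost = [4.0, 6.0, 3.0]     # per-component costs (hypothetical)
budget = 40.0

best = None
for counts in product(range(1, 5), repeat=3):
    total_cost = sum(n * c for n, c in zip(counts, cost))
    if total_cost > budget:
        continue
    # A subsystem with n parallel components fails only if all n fail.
    r_sys = 1.0
    for n, r in zip(counts, rel):
        r_sys *= 1.0 - (1.0 - r) ** n
    if best is None or r_sys > best[0]:
        best = (r_sys, counts, total_cost)

print("reliability %.6f with counts %s at cost %.1f" % best)
```

Even this tiny example visits 4^3 = 64 configurations; real systems make the combinatorial count prohibitive, which motivates the paper's selective search.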
Optimal Control of Induction Machines to Minimize Transient Energy Losses
NASA Astrophysics Data System (ADS)
Plathottam, Siby Jose
Induction machines are electromechanical energy conversion devices comprised of a stator and a rotor. Torque is generated by the interaction between the rotating magnetic field from the stator and the current induced in the rotor conductors. Their speed and torque output can be precisely controlled by manipulating the magnitude, frequency, and phase of the three input sinusoidal voltage waveforms. Their ruggedness, low cost, and high efficiency have made them a ubiquitous component of nearly every industrial application. Thus, even a small improvement in their energy efficiency tends to yield large electrical energy savings over the lifetime of the machine. Hence, increasing energy efficiency (reducing energy losses) in induction machines is a constrained optimization problem that has attracted attention from researchers. The energy conversion efficiency of induction machines depends on both the speed-torque operating point and the input voltage waveform. It also depends on whether the machine is in the transient or steady state. Maximizing energy efficiency during steady state is a static optimization problem that has been extensively studied, with commercial solutions available. On the other hand, improving energy efficiency during transients is a dynamic optimization problem that is sparsely studied. This dissertation focuses exclusively on improving energy efficiency during transients. It treats the transient energy loss minimization problem as an optimal control problem consisting of a dynamic model of the machine and a cost functional. The rotor-field-oriented, current-fed model of the induction machine is selected as the dynamic model. The rotor speed and rotor d-axis flux are the state variables in the dynamic model. The stator currents, referred to as d- and q-axis currents, are the control inputs. A cost functional is proposed that assigns a cost to both the energy losses in the induction machine and deviations from desired speed-torque-magnetic flux setpoints. Using Pontryagin's minimum principle, a set of necessary conditions that must be satisfied by the optimal control trajectories is derived. The conditions take the form of a two-point boundary value problem that can be solved numerically. A conjugate gradient method, modified using the Hestenes-Stiefel formula, was used to obtain the numerical solution for both the control and state trajectories. Using the distinctive shape of the numerical trajectories as inspiration, analytical expressions were derived for the state and control trajectories. It was shown that the trajectory could be fully described by solving a one-dimensional optimization problem. The sensitivity of both the optimal trajectory and the optimal energy efficiency to different induction machine parameters was analyzed. A non-iterative solution that can use feedback to generate optimal control trajectories in real time was explored. It was found that an artificial neural network could be trained on the numerical solutions to emulate the optimal control trajectories with a high degree of accuracy. Hence, a neural network along with supervisory logic was implemented and used in a real-time simulation to control a finite element method model of the induction machine. The results were compared with three other control regimes, and the optimal control system was found to have the highest energy efficiency for the same drive cycle.
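The two-point boundary value problem structure can be illustrated on a toy scalar analogue. A minimal sketch using scipy.integrate.solve_bvp, assuming a made-up first-order plant xdot = -x + u and a pure control-effort cost; this is not the dissertation's machine model:

```python
import numpy as np
from scipy.integrate import solve_bvp

# Toy analogue of the dissertation's TPBVP: minimize J = integral(u^2) dt
# for xdot = -x + u, driving x from 0 to 1 over T seconds. Pontryagin's
# minimum principle gives u* = -lam/2 and the costate equation lamdot = lam.
T = 1.0

def odes(t, y):
    x, lam = y
    return np.vstack([-x - lam / 2.0,   # state dynamics with u* = -lam/2
                      lam])             # costate dynamics

def bc(ya, yb):
    return np.array([ya[0] - 0.0,       # x(0) = 0
                     yb[0] - 1.0])      # x(T) = 1

t = np.linspace(0.0, T, 50)
sol = solve_bvp(odes, bc, t, np.zeros((2, t.size)))
print("converged:", sol.status == 0, " u(0) =", -sol.sol(0.0)[1] / 2.0)
```

The real problem replaces the scalar plant with the rotor speed and d-axis flux dynamics and adds loss terms to the Hamiltonian, but the solve-the-necessary-conditions workflow is the same.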
Improved mine blast algorithm for optimal cost design of water distribution systems
NASA Astrophysics Data System (ADS)
Sadollah, Ali; Guen Yoo, Do; Kim, Joong Hoon
2015-12-01
The design of water distribution systems is a large class of combinatorial, nonlinear optimization problems with complex constraints such as conservation of mass and energy equations. Since feasible solutions are often extremely complex, traditional optimization techniques are insufficient. Recently, metaheuristic algorithms have been applied to this class of problems because they are highly efficient. In this article, a recently developed optimizer called the mine blast algorithm (MBA) is considered. The MBA is improved and coupled with the hydraulic simulator EPANET to find the optimal cost design for water distribution systems. The performance of the improved mine blast algorithm (IMBA) is demonstrated using the well-known Hanoi, New York tunnels and Balerma benchmark networks. Optimization results obtained using IMBA are compared to those using MBA and other optimizers in terms of their minimum construction costs and convergence rates. For the complex Balerma network, IMBA offers the cheapest network design compared to other optimization algorithms.
Choi, Angelo Earvin Sy; Park, Hung Suck
2018-06-20
This paper presents the development and evaluation of fuzzy multi-objective optimization for decision-making applied to the process optimization of the anaerobic digestion (AD) process. The operating cost criterion, a fundamental research gap in previous AD analyses, was integrated for the case study in this research. In this study, the mixing ratio of food waste leachate (FWL) and piggery wastewater (PWW), and the calcium carbonate (CaCO3) and sodium chloride (NaCl) concentrations were optimized to enhance methane production while minimizing operating cost. The results indicated a maximum of 63.3% satisfaction for both methane production and operating cost under the following optimal conditions: mixing ratio (FWL:PWW) of 1.4, CaCO3 of 2970.5 mg/L, and NaCl of 2.7 g/L. In multi-objective optimization, the specific methane yield (SMY) was 239.0 mL CH4/g VS added, while 41.2% volatile solids reduction (VSR) was obtained at an operating cost of 56.9 US$/ton. In comparison with a previous optimization study that utilized response surface methodology, the SMY, VSR, and operating cost of the AD process were 310 mL/g, 54%, and 83.2 US$/ton, respectively. The results from multi-objective fuzzy optimization demonstrate the potential of this technique for practical decision-making in the process optimization of AD.
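The max-min logic behind fuzzy multi-objective optimization can be sketched compactly. All models and numbers below are hypothetical: each objective is mapped to a [0, 1] satisfaction, and the decision maximizing the minimum satisfaction is selected, mirroring the single overall-satisfaction figure reported above:

```python
import numpy as np

# Toy decision variable x in [0, 1] trading off methane yield (maximize)
# against operating cost (minimize); both response models are invented.
x = np.linspace(0.0, 1.0, 201)
methane = 150.0 + 160.0 * x - 80.0 * x**2   # mL CH4/g VS (toy model)
cost = 40.0 + 30.0 * x                      # US$/ton (toy model)

def sat(v, worst, best):
    # Linear membership: 0 at the worst value, 1 at the best value.
    return np.clip((v - worst) / (best - worst), 0.0, 1.0)

overall = np.minimum(sat(methane, methane.min(), methane.max()),
                     sat(cost, cost.max(), cost.min()))
i = overall.argmax()
print("x* = %.2f, overall satisfaction = %.2f" % (x[i], overall[i]))
```

The optimum sits where the two satisfaction curves cross, which is exactly the kind of balanced compromise the 63.3% figure expresses.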
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gustafson, F.W.; Todd, M.E.
1993-09-01
The release of large volumes of water to waste disposal cribs at the Hanford Site's 100-N Area caused contaminants, principally strontium-90, to be carried toward the Columbia River through the groundwater. Since shutdown of the N Reactor, these releases have been discontinued, although small water flows continue to be discharged to the 1325-N crib. Most of the contamination now transported to the river results from natural groundwater movement. The contaminated groundwater at N Springs flows into the river through seeps and springs along the river's edge. An expedited response action (ERA) has been proposed to eliminate or restrict the flux of strontium-90 into the river. A cost-benefit analysis of potential remedial alternatives was completed that recommends the alternative that best meets selection criteria prescribed by the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA). The methodology used for evaluation, cost analysis, and alternative recommendation is the engineering evaluation/cost analysis (EE/CA). Complete remediation of the contaminated groundwater beneath the 100-N Area was not a principal objective of the analysis. The objective of the cost-benefit analysis was to identify a remedial alternative that optimizes the degree of benefit produced for the costs incurred.
NASA Technical Reports Server (NTRS)
Wheeler, Ward C.
2003-01-01
The problem of determining the minimum cost hypothetical ancestral sequences for a given cladogram is known to be NP-complete (Wang and Jiang, 1994). Traditionally, point estimations of hypothetical ancestral sequences have been used to gain heuristic upper bounds on cladogram cost. These include procedures with such diverse approaches as non-additive optimization of multiple sequence alignment, direct optimization (Wheeler, 1996), and fixed-state character optimization (Wheeler, 1999). A method is proposed here which, by extending fixed-state character optimization, replaces the estimation process with a search. This form of optimization examines a diversity of potential state solutions for cost-efficient hypothetical ancestral sequences and can result in considerably more parsimonious cladograms. Additionally, such an approach can be applied to other NP-complete phylogenetic optimization problems such as genomic break-point analysis.
OPTIMAL AIRCRAFT TRAJECTORIES FOR SPECIFIED RANGE
NASA Technical Reports Server (NTRS)
Lee, H.
1994-01-01
For an aircraft operating over a fixed range, the operating costs are basically a sum of fuel cost and time cost. While minimum fuel and minimum time trajectories are relatively easy to calculate, the determination of a minimum cost trajectory can be a complex undertaking. This computer program was developed to optimize trajectories with respect to a cost function based on a weighted sum of fuel cost and time cost. As a research tool, the program could be used to study various characteristics of optimum trajectories and their comparison to standard trajectories. It might also be used to generate a model for the development of an airborne trajectory optimization system. The program could be incorporated into an airline flight planning system, with optimum flight plans determined at takeoff time for the prevailing flight conditions. The use of trajectory optimization could significantly reduce the cost for a given aircraft mission. The algorithm incorporated in the program assumes that a trajectory consists of climb, cruise, and descent segments. The optimization of each segment is not done independently, as in classical procedures, but is performed in a manner which accounts for interaction between the segments. This is accomplished by the application of optimal control theory. The climb and descent profiles are generated by integrating a set of kinematic and dynamic equations, where the total energy of the aircraft is the independent variable. At each energy level of the climb and descent profiles, the air speed and power setting necessary for an optimal trajectory are determined. The variational Hamiltonian of the problem consists of the rate of change of cost with respect to total energy and a term dependent on the adjoint variable, which is identical to the optimum cruise cost at a specified altitude. This variable uniquely specifies the optimal cruise energy, cruise altitude, cruise Mach number, and, indirectly, the climb and descent profiles. If the optimum cruise cost is specified, an optimum trajectory can easily be generated; however, the range obtained for a particular optimum cruise cost is not known a priori. For short range flights, the program iteratively varies the optimum cruise cost until the computed range converges to the specified range. For long-range flights, iteration is unnecessary since the specified range can be divided into a cruise segment distance and full climb and descent distances. The user must supply the program with engine fuel flow rate coefficients and an aircraft aerodynamic model. The program currently includes coefficients for the Pratt-Whitney JT8D-7 engine and an aerodynamic model for the Boeing 727. Input to the program consists of the flight range to be covered and the prevailing flight conditions including pressure, temperature, and wind profiles. Information output by the program includes: optimum cruise tables at selected weights, optimal cruise quantities as a function of cruise weight and cruise distance, climb and descent profiles, and a summary of the complete synthesized optimal trajectory. This program is written in FORTRAN IV for batch execution and has been implemented on a CDC 6000 series computer with a central memory requirement of approximately 100K (octal) of 60 bit words. This aircraft trajectory optimization program was developed in 1979.
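The short-range procedure described above amounts to a one-dimensional root-finding loop on the optimum cruise cost. A minimal sketch, with a hypothetical monotone stand-in for the full climb/cruise/descent synthesis:

```python
# The program iterates the optimum cruise cost until the synthesized
# trajectory's range matches the specified range. In the real program,
# trajectory_range() integrates the energy-state equations; here it is
# a hypothetical monotone-decreasing relation used purely to illustrate
# the iteration.

def trajectory_range(cruise_cost):
    return 2500.0 / (1.0 + cruise_cost)   # stand-in range model

def solve_cruise_cost(target_range, lo=0.0, hi=10.0, tol=1e-6):
    # Bisection, assuming range decreases monotonically with cruise cost.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if trajectory_range(mid) > target_range:
            lo = mid    # range too long -> raise the cruise cost
        else:
            hi = mid
    return 0.5 * (lo + hi)

cc = solve_cruise_cost(800.0)
print("cruise cost %.4f gives range %.1f" % (cc, trajectory_range(cc)))
```

For long-range flights no such iteration is needed, since the specified range splits directly into full climb and descent distances plus a cruise segment, as the abstract notes.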
Improvements in SMR Modular Construction through Supply Chain Optimization and Lessons Learned
DOE Office of Scientific and Technical Information (OSTI.GOV)
White III, Chelsea C.; Petrovic, Bojan
Affordable energy is a critical societal need. Capital construction cost is a significant portion of nuclear energy cost. By controlling and reducing cost, companies can build more competitive nuclear power plants and hence provide access to more affordable energy. Modular construction provides an opportunity to reduce the cost of construction, and as projects scale up in number, the cost of each unit can be further reduced. The objective of this project was to advance design and construction methods for manufacturing Small Modular Reactors (SMRs), and in particular to improve modular construction techniques and develop best practices for designing and operating supply chains that take advantage of these techniques. The overarching objectives were to accelerate the construction schedule and reduce its variability, reduce the cost of construction, reduce interest costs accrued during construction (IDC), and thus enhance the economic attractiveness of SMRs. Our fundamental measure of merit was total capital investment cost (TCIC). To achieve these objectives, this project developed a decision support system, EVAL, to support identifying, addressing, and resolving or ameliorating challenges and deficiencies in the current modular construction approach. The results of this effort were consistent with the fact that the cost of a construction activity is often smallest when accomplished in the factory, greatest when accomplished at the construction site, and at an intermediate level when accomplished at an assembly area close to the construction site. Further, EVAL can aid in providing insight into ways to reduce waste and improve quality, efficiency, and throughput, and reflects the fact that the more that is done early in the construction process, i.e., in the factory, the more upfront funding is required and hence the more IDC is accrued. The analysis has led to a better understanding of circumstances under which modular construction performed mainly in the factory will result in lower expected total cost, relative to more traditional, on-site construction procedures. Further, we anticipate that EVAL can be used to gain insight regarding the role standardization can play in defining modularization most effectively. Such results would ultimately benefit all (small and large) new nuclear construction.
2014-01-01
Time, quality, and cost are three important but conflicting objectives in a building construction project. Optimizing them simultaneously is a tough challenge for project managers since they are distinct and interdependent parameters. This paper presents a time-cost-quality optimization model that enables managers to optimize multiple objectives. The model is based on the project breakdown structure method, in which task resources in a construction project are divided into a series of activities and further into construction labor, materials, equipment, and administration. The resources utilized in a construction activity ultimately determine its construction time, cost, and quality, and a complex time-cost-quality trade-off model is finally generated based on correlations between construction activities. A genetic algorithm tool is applied in the model to solve the comprehensive nonlinear time-cost-quality problems. The construction of a three-storey house is used as an example to illustrate the implementation of the model, demonstrate its advantages in optimizing the trade-off among construction time, cost, and quality, and help make winning decisions in construction practice. The computed time-cost-quality curves from the case study support traditional time-cost assumptions and demonstrate the soundness of this time-cost-quality trade-off model. PMID:24672351
Simultaneous optimization of micro-heliostat geometry and field layout using a genetic algorithm
NASA Astrophysics Data System (ADS)
Lazardjani, Mani Yousefpour; Kronhardt, Valentina; Dikta, Gerhard; Göttsche, Joachim
2016-05-01
A new optimization tool for micro-heliostat (MH) geometry and field layout is presented. The method aims at simultaneous performance improvement and cost reduction by iterating over heliostat geometry and field layout parameters. This tool was developed primarily to optimize a novel micro-heliostat concept conceived at Solar-Institut Jülich (SIJ). However, the underlying optimization approach can be used for any heliostat type. During the optimization, the performance is calculated using the ray-tracing tool SolCal. The costs of the heliostats are calculated using a detailed cost function. A genetic algorithm is used to change heliostat geometry and field layout in an iterative process. Starting from an initial setup, the optimization tool generates several configurations of heliostat geometries and field layouts. For each configuration a cost-performance ratio is calculated. Based on that, the best geometry and field layout can be selected in each optimization step. To find the best configuration, this step is repeated until no significant improvement in the results is observed.
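The iterative loop described above can be skeletonized as follows. A hedged sketch in which both scoring functions are hypothetical one-line stand-ins for SolCal ray tracing and the detailed cost function:

```python
import numpy as np

rng = np.random.default_rng(3)

# A population of candidate (geometry, layout) parameter vectors is
# scored by a cost-performance ratio and evolved generation by
# generation; mutation-only for brevity (no crossover).
def performance(p):              # annual optical yield (toy, peaks at 0.6)
    return np.exp(-np.sum((p - 0.6) ** 2, axis=-1))

def cost(p):                     # manufacturing cost (toy, grows with size)
    return 1.0 + np.sum(p ** 2, axis=-1)

pop = rng.random((40, 3))        # 40 individuals, 3 parameters in [0, 1]
for generation in range(100):
    ratio = cost(pop) / performance(pop)      # lower is better
    parents = pop[np.argsort(ratio)[:20]]     # truncation selection
    idx = rng.integers(20, size=40)           # resample parents
    pop = np.clip(parents[idx] + rng.normal(scale=0.05, size=(40, 3)),
                  0.0, 1.0)

best = pop[np.argmin(cost(pop) / performance(pop))]
print("best parameters:", np.round(best, 3))
```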
The Potential of Small Satellites for Crop Monitoring in Emerging Economies
NASA Astrophysics Data System (ADS)
Bydekerke, L.; Meuleman, K.
2008-08-01
The use of low resolution data for monitoring the overall vegetation condition and crops is nowadays widespread in emerging economies. Various initiatives, global and local, have promoted the use of this type of imagery for assessing the progress of the growing season since the eighties. The normalized difference vegetation index (NDVI), from various sensors with 250 m to 8 km resolution, is used to identify potential anomalies in vegetation development which, in combination with other data, are used to identify emerging crisis situations in crop development and production before harvest time. Satellite data is analyzed by specialized centers and crop/vegetation assessments are summarized into bulletins, which are then used for communication with non-remote sensing specialists at the policy level. Satellite data is currently provided by large expensive space infrastructures and centrally distributed to the users. In this paper the current flow of information from satellite to information for agriculture is analyzed and the potential contribution of low-cost small satellites in addressing the needs of the users is discussed. Two scenarios are presented: (i) a centralized system, in which a few institutes have access to data generated by small satellites and process and analyze the data for use by analysts; (ii) a decentralized system, in which a variety of users have direct access to data generated by small satellites and are capable of extracting, processing, and analyzing information relevant for crop monitoring. The work shows that with affordable space infrastructure, such as small satellites, the second scenario may become possible, but the complexity and the cost of the ground segment service remain limiting factors. Expertise and knowledge for processing, analysis, and maintenance of IT infrastructure are currently insufficient, particularly in institutions whose mandate includes crop monitoring, such as ministries of agriculture. However, in the short term, a limited number of specialized centers can play a key role in gradually facilitating the integration of remote sensing information into the daily workflow and in optimizing costs and efforts. The potential use of future small satellite missions, such as the SPOT-Vegetation continuity mission (Proba-V), is also addressed.
Determination of the optimal sample size for a clinical trial accounting for the population size.
Stallard, Nigel; Miller, Frank; Day, Simon; Hee, Siew Wan; Madan, Jason; Zohar, Sarah; Posch, Martin
2017-07-01
The problem of choosing a sample size for a clinical trial is a very common one. In some settings, such as rare diseases or other small populations, the large sample sizes usually associated with the standard frequentist approach may be infeasible, suggesting that the sample size chosen should reflect the size of the population under consideration. Incorporation of the population size is possible in a decision-theoretic approach, either explicitly by assuming that the population size is fixed and known, or implicitly through geometric discounting of the gain from future patients, reflecting the expected population size. This paper develops such approaches. Building on previous work, an asymptotic expression is derived for the sample size for single- and two-arm clinical trials in the general case of a clinical trial with a primary endpoint whose distribution is of one-parameter exponential family form, optimizing a utility function that quantifies the cost and gain per patient as a continuous function of this parameter. It is shown that as the size of the population, N, or the expected size, N*, in the case of geometric discounting, becomes large, the optimal trial size is O(N^1/2) or O(N*^1/2). The sample size obtained from the asymptotic expression is also compared with the exact optimal sample size in examples with responses with Bernoulli and Poisson distributions, showing that the asymptotic approximations can also be reasonable for relatively small sample sizes.
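The O(N^1/2) scaling can be reproduced with a deliberately stylized model that is not the paper's exponential-family derivation: if the trial costs c per enrolled patient and each of the N future patients incurs an expected loss proportional to the estimator variance, total cost c*n + k*N/n is minimized at n* = sqrt(k*N/c). A hedged numeric check with hypothetical constants:

```python
import numpy as np

# Stylized model: trial cost grows linearly in n, while the per-patient
# loss for the N future patients shrinks like 1/n (estimator variance).
c, k = 1.0, 4.0   # per-patient trial cost; future-loss coefficient

for N in (100, 10_000, 1_000_000):
    ns = np.arange(1, N)
    total = c * ns + k * N / ns
    n_opt = ns[np.argmin(total)]
    print(f"N={N:>7}  n*={n_opt:>5}  sqrt(k*N/c)={np.sqrt(k * N / c):.0f}")
```

The brute-force minimizer matches the closed form 2*sqrt(N) here, illustrating why the optimal trial grows only with the square root of the population it serves.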
Optimally Stopped Optimization
NASA Astrophysics Data System (ADS)
Vinci, Walter; Lidar, Daniel A.
2016-11-01
We combine the fields of heuristic optimization and optimal stopping. We propose a strategy for benchmarking randomized optimization algorithms that minimizes the expected total cost for obtaining a good solution with an optimal number of calls to the solver. To do so, rather than letting the objective function alone define a cost to be minimized, we introduce a further cost-per-call of the algorithm. We show that this problem can be formulated using optimal stopping theory. The expected cost is a flexible figure of merit for benchmarking probabilistic solvers that can be computed when the optimal solution is not known and that avoids the biases and arbitrariness that affect other measures. The optimal stopping formulation of benchmarking directly leads to a real-time optimal-utilization strategy for probabilistic optimizers with practical impact. We apply our formulation to benchmark simulated annealing on a class of maximum-2-satisfiability (MAX2SAT) problems. We also compare the performance of a D-Wave 2X quantum annealer to the Hamze-Freitas-Selby (HFS) solver, a specialized classical heuristic algorithm designed for low-tree-width graphs. On a set of frustrated-loop instances with planted solutions defined on up to N =1098 variables, the D-Wave device is 2 orders of magnitude faster than the HFS solver, and, modulo known caveats related to suboptimal annealing times, exhibits identical scaling with problem size.
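The expected-cost figure of merit can be emulated with synthetic data: run the solver R times at a fixed cost per call, keep the best objective value found, and pick the R minimizing the sum. A hedged sketch in which the outcome distribution and cost per call are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "solver outcomes": each call returns an objective value drawn
# from a fixed distribution. A real benchmark would use recorded results.
outcomes = rng.normal(loc=10.0, scale=2.0, size=100_000)
cost_per_call = 0.05

def expected_total_cost(R, samples=20_000):
    # Monte Carlo estimate of E[min of R calls] + R * cost_per_call.
    draws = rng.choice(outcomes, size=(samples, R))
    return draws.min(axis=1).mean() + cost_per_call * R

costs = {R: expected_total_cost(R) for R in range(1, 60)}
R_star = min(costs, key=costs.get)
print(f"optimal number of calls R* = {R_star}, "
      f"expected cost = {costs[R_star]:.3f}")
```

Beyond R*, extra calls improve the best objective by less than they cost, which is the stopping trade-off the paper formalizes.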
NASA Astrophysics Data System (ADS)
Sanaye, Sepehr; Katebi, Arash
2014-02-01
Energy, exergy, economic, and environmental (4E) analysis and optimization of a hybrid solid oxide fuel cell and micro gas turbine (SOFC-MGT) system for combined heat and power (CHP) generation is investigated in this paper. The hybrid system is modeled and performance-related results are validated using available data in the literature. Then a multi-objective optimization approach based on a genetic algorithm is incorporated. Eight system design parameters are selected for the optimization procedure. System exergy efficiency and total cost rate (including capital or investment cost, operational cost, and the penalty cost of environmental emissions) are the two objectives. The effects of fuel unit cost, capital investment, and system power output on the optimum design parameters are also investigated. It is observed that the most sensitive and important design parameter in the hybrid system is the fuel cell current density, which has a significant effect on the balance between system cost and efficiency. The selected design point from the Pareto distribution of optimization results indicates a total system exergy efficiency of 60.7%, an estimated electrical energy cost of 0.057 kW^-1 h^-1, and a payback period of about 6.3 years for the investment.
Crystal structure prediction supported by incomplete experimental data
NASA Astrophysics Data System (ADS)
Tsujimoto, Naoto; Adachi, Daiki; Akashi, Ryosuke; Todo, Synge; Tsuneyuki, Shinji
2018-05-01
We propose an efficient theoretical scheme for structure prediction based on the idea of combining methods that optimize agreement with theoretical calculations and experimental data simultaneously. In this scheme, we formulate a cost function based on a weighted sum of interatomic potential energies and a penalty function defined with partial experimental data totally insufficient for conventional structure analysis. In particular, we define the cost function using "crystallinity" formulated with only peak positions within a small range of the x-ray-diffraction pattern. We apply this method to well-known polymorphs of SiO2 and C with up to 108 atoms in the simulation cell and show that it reproduces the correct structures efficiently with very limited information on diffraction peaks. This scheme opens a new avenue for determining and predicting structures that are difficult to determine by conventional methods.
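The weighted-sum construction can be written down directly. A hedged sketch in which both the potential energy and the peak penalty are hypothetical one-line stand-ins for the paper's interatomic potentials and its "crystallinity" term:

```python
import numpy as np

# Combined cost: a weighted sum of a (stand-in) interatomic energy and a
# penalty that grows when simulated diffraction peak positions miss the
# few experimentally observed ones.
observed_peaks = np.array([21.3, 26.6, 36.1])    # 2-theta, degrees (toy)

def energy(structure):
    return float(np.sum(structure**2))            # placeholder potential

def peak_penalty(simulated_peaks):
    # Distance from each observed peak to the nearest simulated one.
    d = np.abs(observed_peaks[:, None] - simulated_peaks[None, :])
    return float(d.min(axis=1).sum())

def cost(structure, simulated_peaks, w=0.7):
    return w * energy(structure) + (1.0 - w) * peak_penalty(simulated_peaks)

print(cost(np.array([0.1, -0.2]), np.array([21.0, 26.8, 35.9, 44.2])))
```

The point of the construction is that even a handful of peak positions, far too few for conventional structure analysis, still reshapes the cost landscape enough to steer the optimizer toward the correct polymorph.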
Zeindlhofer, Veronika; Schröder, Christian
2018-06-01
Owing to their tunable properties, ionic liquids have attracted significant interest as replacements for conventional organic solvents in biomolecular applications. Following a Gartner hype cycle, expectations for this new class of solvents dropped after the initial hype, due to high viscosity, hydrolysis, and toxicity problems as well as their high cost. Since not all possible combinations of cations and anions can be tested experimentally, fundamental knowledge of the interaction of the ionic liquid ions with water and with biomolecules is mandatory to optimize the solvation behavior, the biodegradability, and the costs of the ionic liquid. Here, we report on current computational approaches to characterize the impact of the ionic liquid ions on the structure and dynamics of the biomolecule and its solvation layer to explore the full potential of ionic liquids.
Optimized 4-bit Quantum Reversible Arithmetic Logic Unit
NASA Astrophysics Data System (ADS)
Ayyoub, Slimani; Achour, Benslama
2017-08-01
Reversible logic has received great attention in recent years due to its ability to reduce power dissipation. The main purposes of designing reversible logic are to decrease the quantum cost, the depth of the circuits, and the number of garbage outputs. The arithmetic logic unit (ALU) is an important part of the central processing unit (CPU) as the execution unit. This paper presents a complete design of a new reversible arithmetic logic unit (ALU) that can be part of a programmable reversible computing device such as a quantum computer. The proposed ALU is based on a reversible low-power control unit and full adders with small performance parameters, named double Peres gates. The presented ALU can produce the largest number (28) of arithmetic and logic functions and has the smallest quantum cost and delay compared with existing designs.
Integrated strategic and tactical biomass-biofuel supply chain optimization.
Lin, Tao; Rodríguez, Luis F; Shastri, Yogendra N; Hansen, Alan C; Ting, K C
2014-03-01
To ensure effective biomass feedstock provision for large-scale biofuel production, an integrated biomass supply chain optimization model was developed to minimize annual biomass-ethanol production costs by optimizing both strategic and tactical planning decisions simultaneously. The mixed integer linear programming model optimizes activities ranging from biomass harvesting, packing, in-field transportation, stacking, transportation, preprocessing, and storage to ethanol production and distribution. The numbers, locations, and capacities of facilities as well as biomass and ethanol distribution patterns are key strategic decisions, while biomass production, delivery, and operating schedules and inventory monitoring are key tactical decisions. The model was implemented to study the Miscanthus-ethanol supply chain in Illinois. The base case results showed unit Miscanthus-ethanol production costs of $0.72 L^-1 of ethanol. Biorefinery-related costs account for 62% of the total costs, followed by biomass procurement costs. Sensitivity analysis showed that a 50% reduction in biomass yield would increase unit production costs by 11%.
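The strategic/tactical coupling can be illustrated with a toy mixed integer linear program in which binary facility-opening decisions constrain continuous shipment decisions. A minimal sketch with hypothetical numbers, using scipy.optimize.milp rather than the authors' solver:

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Open a subset of two preprocessing depots (binary y) and choose
# shipment tonnages (continuous x) to meet the refinery demand at
# minimum fixed-plus-transport cost. All numbers are hypothetical.
D = 120.0                          # refinery demand, tons
cap = np.array([100.0, 150.0])     # depot capacities
fixed = np.array([400.0, 600.0])   # depot opening (strategic) costs
ship = np.array([3.0, 2.5])        # per-ton transport (tactical) costs

c = np.concatenate([ship, fixed])  # variables: [x1, x2, y1, y2]
demand = LinearConstraint([[1, 1, 0, 0]], D, D)
linking = LinearConstraint([[1, 0, -cap[0], 0],     # x_i <= cap_i * y_i
                            [0, 1, 0, -cap[1]]], -np.inf, 0.0)
res = milp(c, constraints=[demand, linking],
           integrality=[0, 0, 1, 1],                # y are binary
           bounds=Bounds([0, 0, 0, 0], [np.inf, np.inf, 1, 1]))
print("shipments and openings:", res.x, " total cost:", res.fun)
```

The linking constraints are what tie the two decision levels together: a depot can ship only if the strategic variable opening it is set, exactly as facility capacities constrain delivery schedules in the full model.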
Procedure for minimizing the cost per watt of photovoltaic systems
NASA Technical Reports Server (NTRS)
Redfield, D.
1977-01-01
A general analytic procedure is developed that provides a quantitative method for optimizing any element or process in the fabrication of a photovoltaic energy conversion system by minimizing its impact on the cost per watt of the complete system. By determining the effective value of any power loss associated with each element of the system, this procedure furnishes the design specifications that optimize the cost-performance trade-offs for each element. A general equation is derived that optimizes the properties of any part of the system in terms of appropriate cost and performance functions, although the power-handling components are found to have a different character from the cell and array steps. Another principal result is that a fractional performance loss occurring at any cell- or array-fabrication step produces that same fractional increase in the cost per watt of the complete array. It also follows that no element or process step can be optimized correctly by considering only its own cost and performance.
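The fractional-loss result can be checked in one line. A minimal derivation, assuming the fabrication step changes the array's power output but not its cost:

```latex
% With array cost C and output power P, a small fractional power loss
% \varepsilon at some fabrication step changes the cost per watt as
\[
\frac{C}{P(1-\varepsilon)} \;\approx\; \frac{C}{P}\,(1+\varepsilon),
\qquad \varepsilon \ll 1,
\]
% i.e. the cost per watt of the complete array increases by the same
% fraction \varepsilon lost at the cell- or array-fabrication step.
```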
Sousa, Vitor; Dias-Ferreira, Celia; Vaz, João M; Meireles, Inês
2018-05-01
Extensive research has been carried out on waste collection costs, mainly to differentiate the costs of distinct waste streams and to optimize waste collection services spatially (e.g. routes, number, and location of waste facilities). However, waste collection managers also face the challenge of optimizing assets in time, for instance deciding when to replace and how to maintain, or which technological solution to adopt. These issues require more detailed knowledge of the waste collection services' cost breakdown structure. The present research adjusts the methodology for buildings' life-cycle cost (LCC) analysis, detailed in ISO 15686-5:2008, to waste collection assets. The proposed methodology is then applied to the waste collection assets owned and operated by a real municipality in Portugal (Cascais Ambiente - EMAC). The goal is to highlight the potential of the LCC tool in providing a baseline for time optimization of the waste collection service and assets, namely assisting decisions regarding equipment operation and replacement.
Socially optimal electric driving range of plug-in hybrid electric vehicles
Kontou, Eleftheria; Yin, Yafeng; Lin, Zhenhong
2015-07-25
Our study determines the optimal electric driving range of plug-in hybrid electric vehicles (PHEVs) that minimizes the daily cost borne by society when using this technology. An optimization framework is developed and applied to datasets representing the US market. Results indicate that the optimal range is 16 miles with an average social cost of $3.19 per day when exclusively charging at home, compared to $3.27 per day of driving a conventional vehicle. The optimal range is found to be sensitive to the cost of battery packs and the price of gasoline. Moreover, when workplace charging is available, the optimal electric driving range surprisingly increases from 16 to 22 miles, as larger batteries would allow drivers to better take advantage of the charging opportunities to achieve longer electrified travel distances, yielding social cost savings. If workplace charging is available, the optimal deployment density is one workplace charger for every 3.66 vehicles. Finally, the diversification of the battery size, i.e., introducing a pair or a triple of electric driving ranges to the market, could further decrease the average societal cost per PHEV by 7.45% and 11.5%, respectively.
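The underlying optimization can be sketched as a one-dimensional search over the range R: amortized battery cost grows with R while fuel cost falls as more daily miles are electrified. All prices and the daily-distance sample below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic daily driving distances; a real study would use travel-survey
# data. Costs: amortized battery cost per mile of range, and per-mile
# energy costs for electric vs. gasoline operation (all invented).
daily_miles = rng.lognormal(mean=3.0, sigma=0.8, size=100_000)
batt_per_mile_day = 0.05   # $/day per mile of range
elec, gas = 0.04, 0.12     # $/mile electric vs. gasoline

def daily_cost(R):
    ev = np.minimum(daily_miles, R)     # miles electrified each day
    return batt_per_mile_day * R + (elec * ev
                                    + gas * (daily_miles - ev)).mean()

ranges = np.arange(1, 80)
R_star = ranges[np.argmin([daily_cost(R) for R in ranges])]
print("optimal range:", R_star, "miles, cost:", round(daily_cost(R_star), 2))
```

The first-order condition is intuitive: extend the range until the probability that a day's driving exceeds R, times the per-mile fuel-cost gap, no longer covers the marginal battery cost.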
Zatsiorsky, Vladimir M.
2011-01-01
One of the key problems of motor control is the redundancy problem, in particular how the central nervous system (CNS) chooses an action out of infinitely many possible ones. A promising way to address this question is to assume that the choice is made based on optimization of a certain cost function. A number of cost functions have been proposed in the literature to explain performance in different motor tasks: from force sharing in grasping to path planning in walking. However, the problem of uniqueness of the cost function(s) was not addressed until recently. In this article, we analyze two methods of finding additive cost functions in inverse optimization problems with linear constraints, so-called linear-additive inverse optimization problems. These methods are based on the Uniqueness Theorem for inverse optimization problems that we proved recently (Terekhov et al., J Math Biol 61(3):423–453, 2010). Using synthetic data, we show that both methods allow for determining the cost function. We analyze the influence of noise on both methods. Finally, we show how a violation of the conditions of the Uniqueness Theorem may lead to incorrect solutions of the inverse optimization problem. PMID:21311907
GMOtrack: generator of cost-effective GMO testing strategies.
Novak, Petra Krau; Gruden, Kristina; Morisset, Dany; Lavrac, Nada; Stebih, Dejan; Rotter, Ana; Zel, Jana
2009-01-01
Commercialization of numerous genetically modified organisms (GMOs) has already been approved worldwide, and several additional GMOs are in the approval process. Many countries have adopted legislation to deal with GMO-related issues such as food safety, environmental concerns, and consumers' right of choice, making GMO traceability a necessity. The growing extent of GMO testing makes it important to study optimal GMO detection and identification strategies. This paper formally defines the problem of routine laboratory-level GMO tracking as a cost optimization problem, thus proposing a shift from "the same strategy for all samples" to "sample-centered GMO testing strategies." An algorithm (GMOtrack) for finding optimal two-phase (screening-identification) testing strategies is proposed. The advantages of cost optimization with increasing GMO presence on the market are demonstrated, showing that optimization approaches to analytic GMO traceability can result in major cost reductions. The optimal testing strategies are laboratory-dependent, as the costs depend on prior probabilities of local GMO presence, which are exemplified on food and feed samples. The proposed GMOtrack approach, publicly available under the terms of the General Public License, can be extended to other domains where complex testing is involved, such as safety and quality assurance in the food supply chain.
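The two-phase cost structure being optimized can be written down directly. A hedged sketch with hypothetical prices and priors, assuming an error-free screening assay that covers any chosen subset of GMOs:

```python
from itertools import combinations

# A screening assay covers several GMOs at once; identification assays
# are run only for GMOs whose screening element tested positive. GMOs
# left out of the screen must always be identified individually.
screen_cost = 12.0            # one multi-target screening assay
id_cost = 25.0                # one event-specific identification assay
prior = {"event_A": 0.30, "event_B": 0.05, "event_C": 0.01}

def expected_cost(screened):
    total = screen_cost if screened else 0.0
    for gmo, p in prior.items():
        # Screened GMOs pay for identification only on a positive screen.
        total += p * id_cost if gmo in screened else id_cost
    return total

subsets = [set(c) for r in range(len(prior) + 1)
           for c in combinations(prior, r)]
best = min(subsets, key=expected_cost)
print(sorted(best), round(expected_cost(best), 2))
```

Because the expected cost depends on the prior probabilities of local GMO presence, the optimal screen differs between laboratories, which is exactly the laboratory dependence the abstract reports.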
Trade-offs between robustness and small-world effect in complex networks
Peng, Guan-Sheng; Tan, Suo-Yi; Wu, Jun; Holme, Petter
2016-01-01
Robustness and small-world effect are two crucial structural features of complex networks and have attracted increasing attention. However, little is known about the relation between them. Here we demonstrate that there is a conflicting relation between robustness and the small-world effect for a given degree sequence. We suggest that robustness-oriented optimization weakens the small-world effect, and vice versa. Then, we propose a multi-objective trade-off optimization model and develop a heuristic algorithm to obtain the optimal trade-off topology for robustness and small-world effect. We show that the optimal network topology exhibits a pronounced core-periphery structure and investigate the structural properties of the optimized networks in detail. PMID:27853301
A Q-Learning Approach to Flocking With UAVs in a Stochastic Environment.
Hung, Shao-Ming; Givigi, Sidney N
2017-01-01
In the past two decades, unmanned aerial vehicles (UAVs) have demonstrated their efficacy in supporting both military and civilian applications, where tasks can be dull, dirty, dangerous, or simply too costly with conventional methods. Many of the applications contain tasks that can be executed in parallel, hence the natural progression is to deploy multiple UAVs working together as a force multiplier. However, to do so requires autonomous coordination among the UAVs, similar to swarming behaviors seen in animals and insects. This paper looks at flocking with small fixed-wing UAVs in the context of a model-free reinforcement learning problem. In particular, Peng's Q(λ) with a variable learning rate is employed by the followers to learn a control policy that facilitates flocking in a leader-follower topology. The problem is structured as a Markov decision process, where the agents are modeled as small fixed-wing UAVs that experience stochasticity due to disturbances such as winds and control noises, as well as weight and balance issues. Learned policies are compared to ones solved using stochastic optimal control (i.e., dynamic programming) by evaluating the average cost incurred during flight according to a cost function. Simulation results demonstrate the feasibility of the proposed learning approach at enabling agents to learn how to flock in a leader-follower topology, while operating in a nonstationary stochastic environment.
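The learning rule can be illustrated with a tabular toy far simpler than the paper's setting. A minimal sketch of Q-learning with eligibility traces, a much-simplified relative of Peng's Q(λ) (no function approximation, no variable learning rate, and traces are not cut on exploratory actions), on a hypothetical chain MDP:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy chain MDP: move left/right over 6 states; reward 1 at the right end.
n_states, n_actions = 6, 2          # actions: 0 = left, 1 = right
alpha, gamma, lam, eps = 0.1, 0.95, 0.8, 0.1
Q = np.zeros((n_states, n_actions))

for episode in range(300):
    E = np.zeros_like(Q)            # eligibility traces
    s = 0
    while s != n_states - 1:        # terminal state at the right end
        greedy = rng.choice(np.flatnonzero(Q[s] == Q[s].max()))
        a = rng.integers(n_actions) if rng.random() < eps else greedy
        s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        r = 1.0 if s2 == n_states - 1 else 0.0
        delta = r + gamma * Q[s2].max() - Q[s, a]
        E[s, a] += 1.0
        Q += alpha * delta * E      # credit all recently visited pairs
        E *= gamma * lam            # decay traces toward zero
        s = s2

print(np.round(Q, 2))               # right action should dominate
```

In the paper, the state-action table is replaced by the follower UAV's observed relative states and control actions, and the learned policy is then compared against a dynamic-programming solution under the same cost function.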
Airfoil Design and Optimization by the One-Shot Method
NASA Technical Reports Server (NTRS)
Kuruvila, G.; Taasan, Shlomo; Salas, M. D.
1995-01-01
An efficient numerical approach for the design of optimal aerodynamic shapes is presented in this paper. The objective of any optimization problem is to find the optimum of a cost function subject to a certain state equation (governing equation of the flow field) and certain side constraints. As in classical optimal control methods, the present approach introduces a costate variable (Lagrange multiplier) to evaluate the gradient of the cost function. High efficiency in reaching the optimum solution is achieved by using a multigrid technique and updating the shape in a hierarchical manner such that smooth (low-frequency) changes are done separately from high-frequency changes. Thus, the design variables are changed on a grid where their changes produce nonsmooth (high-frequency) perturbations that can be damped efficiently by the multigrid. The cost of solving the optimization problem is approximately two to three times the cost of the equivalent analysis problem.
Airfoil optimization by the one-shot method
NASA Technical Reports Server (NTRS)
Kuruvila, G.; Taasan, Shlomo; Salas, M. D.
1994-01-01
An efficient numerical approach for the design of optimal aerodynamic shapes is presented in this paper. The objective of any optimization problem is to find the optimum of a cost function subject to a certain state equation (governing equation of the flow field) and certain side constraints. As in classical optimal control methods, the present approach introduces a costate variable (Lagrange multiplier) to evaluate the gradient of the cost function. High efficiency in reaching the optimum solution is achieved by using a multigrid technique and updating the shape in a hierarchical manner such that smooth (low-frequency) changes are done separately from high-frequency changes. Thus, the design variables are changed on a grid where their changes produce nonsmooth (high-frequency) perturbations that can be damped efficiently by the multigrid. The cost of solving the optimization problem is approximately two to three times the cost of the equivalent analysis problem.
Sequeira, Ana Filipa; Brás, Joana L A; Guerreiro, Catarina I P D; Vincentelli, Renaud; Fontes, Carlos M G A
2016-12-01
Gene synthesis is becoming an important tool in many fields of recombinant DNA technology, including recombinant protein production. De novo gene synthesis is quickly replacing classical cloning and mutagenesis procedures and allows generating nucleic acids for which no template is available. In addition, when coupled with efficient gene design algorithms that optimize codon usage, it leads to high levels of recombinant protein expression. Here, we describe the development of an optimized gene synthesis platform that was applied to the large-scale production of small genes encoding venom peptides. This improved gene synthesis method uses a PCR-based protocol to assemble synthetic DNA from pools of overlapping oligonucleotides and was developed to synthesise multiple genes simultaneously. The technology incorporates an accurate, automated, and cost-effective ligation-independent cloning step to directly integrate the synthetic genes into an effective Escherichia coli expression vector. The robustness of this technology to generate large libraries of dozens to thousands of synthetic nucleic acids was demonstrated through the parallel and simultaneous synthesis of 96 genes encoding animal toxins. An automated platform was developed for the large-scale synthesis of small genes encoding eukaryotic toxins. Large-scale recombinant expression of synthetic genes encoding eukaryotic toxins will allow exploring the extraordinary potency and pharmacological diversity of animal venoms, an increasingly valuable but unexplored source of lead molecules for drug discovery.
Cui, Borui; Gao, Dian-ce; Xiao, Fu; ...
2016-12-23
This article provides a method for the comprehensive evaluation of the cost-saving potential of active cool thermal energy storage (CTES) integrated with HVAC systems for demand management in non-residential buildings. Active storage is beneficial for shifting peak demand for peak load management (PLM) as well as for providing longer-duration and larger-capacity demand response (DR). In this research, a model-based optimal design method using a genetic algorithm is developed to optimize the capacity of active CTES, aiming to maximize the life-cycle cost saving, which accounts for the capital cost associated with storage capacity as well as incentives from both fast DR and PLM. In the method, the active CTES operates under a fast DR control strategy during DR events and under the storage-priority operation mode to shift peak demand during normal days. The optimal storage capacities, maximum annual net cost saving, and corresponding power reduction set-points during DR events are obtained using the proposed optimal design method. Lastly, this research provides guidance for comprehensively evaluating the cost-saving potential of CTES integrated with HVAC systems for building demand management, including both fast DR and PLM.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cui, Borui; Gao, Dian-ce; Xiao, Fu
This article provides a method for the comprehensive evaluation of the cost-saving potential of active cool thermal energy storage (CTES) integrated with HVAC systems for demand management in non-residential buildings. Active storage is beneficial for shifting peak demand for peak load management (PLM) as well as for providing longer-duration and larger-capacity demand response (DR). In this research, a model-based optimal design method using a genetic algorithm is developed to optimize the capacity of active CTES, aiming to maximize the life-cycle cost saving, which accounts for the capital cost associated with storage capacity as well as incentives from both fast DR and PLM. In the method, the active CTES operates under a fast DR control strategy during DR events and under the storage-priority operation mode to shift peak demand during normal days. The optimal storage capacities, maximum annual net cost saving, and corresponding power reduction set-points during DR events are obtained using the proposed optimal design method. Lastly, this research provides guidance for comprehensively evaluating the cost-saving potential of CTES integrated with HVAC systems for building demand management, including both fast DR and PLM.
Lewandowski, Iris; Clifton-Brown, John; Trindade, Luisa M; van der Linden, Gerard C; Schwarz, Kai-Uwe; Müller-Sämann, Karl; Anisimov, Alexander; Chen, C-L; Dolstra, Oene; Donnison, Iain S; Farrar, Kerrie; Fonteyne, Simon; Harding, Graham; Hastings, Astley; Huxley, Laurie M; Iqbal, Yasir; Khokhlov, Nikolay; Kiesel, Andreas; Lootens, Peter; Meyer, Heike; Mos, Michal; Muylle, Hilde; Nunn, Chris; Özgüven, Mensure; Roldán-Ruiz, Isabel; Schüle, Heinrich; Tarakanov, Ivan; van der Weijde, Tim; Wagner, Moritz; Xi, Qingguo; Kalinina, Olena
2016-01-01
This paper describes the complete findings of the EU-funded research project OPTIMISC, which investigated methods to optimize the production and use of miscanthus biomass. Miscanthus bioenergy and bioproduct chains were investigated by trialing 15 diverse germplasm types in a range of climatic and soil environments across central Europe, Ukraine, Russia, and China. The abiotic stress tolerances of a wider panel of 100 germplasm types to drought, salinity, and low temperatures were measured in the laboratory and a field trial in Belgium. A small selection of germplasm types was evaluated for performance in grasslands on marginal sites in Germany and the UK. The growth traits underlying biomass yield and quality were measured to improve regional estimates of feedstock availability. Several potential high-value bioproducts were identified. The combined results provide recommendations to policymakers, growers and industry. The major technical advances in miscanthus production achieved by OPTIMISC include: (1) demonstration that novel hybrids can out-yield the standard commercially grown genotype Miscanthus x giganteus; (2) characterization of the interactions of physiological growth responses with environmental variation within and between sites; (3) quantification of biomass-quality-relevant traits; (4) abiotic stress tolerances of miscanthus genotypes; (5) selections suitable for production on marginal land; (6) field establishment methods for seeds using plugs; (7) evaluation of harvesting methods; and (8) quantification of energy used in densification (pellet) technologies with a range of hybrids with differences in stem wall properties. End-user needs were addressed by demonstrating the potential of optimizing miscanthus biomass composition for the production of ethanol and biogas as well as for combustion. The costs and life-cycle assessment of seven miscanthus-based value chains, including small- and large-scale heat and power, ethanol, biogas, and insulation material production, revealed GHG-emission- and fossil-energy-saving potentials of up to 30.6 t CO2eq C ha−1 y−1 and 429 GJ ha−1 y−1, respectively. Transport distance was identified as an important cost factor. Negative carbon mitigation costs of −78 € t−1 CO2eq C were recorded for local biomass use. The OPTIMISC results demonstrate the potential of miscanthus as a crop for marginal sites and provide information and technologies for the commercial implementation of miscanthus-based value chains.
Lewandowski, Iris; Clifton-Brown, John; Trindade, Luisa M.; van der Linden, Gerard C.; Schwarz, Kai-Uwe; Müller-Sämann, Karl; Anisimov, Alexander; Chen, C.-L.; Dolstra, Oene; Donnison, Iain S.; Farrar, Kerrie; Fonteyne, Simon; Harding, Graham; Hastings, Astley; Huxley, Laurie M.; Iqbal, Yasir; Khokhlov, Nikolay; Kiesel, Andreas; Lootens, Peter; Meyer, Heike; Mos, Michal; Muylle, Hilde; Nunn, Chris; Özgüven, Mensure; Roldán-Ruiz, Isabel; Schüle, Heinrich; Tarakanov, Ivan; van der Weijde, Tim; Wagner, Moritz; Xi, Qingguo; Kalinina, Olena
2016-01-01
This paper describes the complete findings of the EU-funded research project OPTIMISC, which investigated methods to optimize the production and use of miscanthus biomass. Miscanthus bioenergy and bioproduct chains were investigated by trialing 15 diverse germplasm types in a range of climatic and soil environments across central Europe, Ukraine, Russia, and China. The abiotic stress tolerances of a wider panel of 100 germplasm types to drought, salinity, and low temperatures were measured in the laboratory and a field trial in Belgium. A small selection of germplasm types was evaluated for performance in grasslands on marginal sites in Germany and the UK. The growth traits underlying biomass yield and quality were measured to improve regional estimates of feedstock availability. Several potential high-value bioproducts were identified. The combined results provide recommendations to policymakers, growers and industry. The major technical advances in miscanthus production achieved by OPTIMISC include: (1) demonstration that novel hybrids can out-yield the standard commercially grown genotype Miscanthus x giganteus; (2) characterization of the interactions of physiological growth responses with environmental variation within and between sites; (3) quantification of biomass-quality-relevant traits; (4) abiotic stress tolerances of miscanthus genotypes; (5) selections suitable for production on marginal land; (6) field establishment methods for seeds using plugs; (7) evaluation of harvesting methods; and (8) quantification of energy used in densification (pellet) technologies with a range of hybrids with differences in stem wall properties. End-user needs were addressed by demonstrating the potential of optimizing miscanthus biomass composition for the production of ethanol and biogas as well as for combustion. The costs and life-cycle assessment of seven miscanthus-based value chains, including small- and large-scale heat and power, ethanol, biogas, and insulation material production, revealed GHG-emission- and fossil-energy-saving potentials of up to 30.6 t CO2eq C ha−1y−1 and 429 GJ ha−1y−1, respectively. Transport distance was identified as an important cost factor. Negative carbon mitigation costs of –78€ t−1 CO2eq C were recorded for local biomass use. The OPTIMISC results demonstrate the potential of miscanthus as a crop for marginal sites and provide information and technologies for the commercial implementation of miscanthus-based value chains. PMID:27917177
Zilinskas, Julius; Lančinskas, Algirdas; Guarracino, Mario Rosario
2014-01-01
In this paper we propose mathematical models to plan a Next Generation Sequencing (NGS) experiment to detect rare mutations in pools of patients. A mathematical optimization problem is formulated for optimal pooling, with respect to minimizing the experiment cost. Then, two different strategies for replicating patients across pools are proposed, which have the advantage of decreasing the overall costs. Finally, a multi-objective optimization formulation is proposed, where the trade-off between the probability of detecting a mutation and the overall costs is taken into account. The proposed solutions are devised to achieve the following advantages: (i) the solution guarantees mutations are detectable in the experimental setting, and (ii) the cost of the NGS experiment and its biological validation using Sanger sequencing is minimized. Simulations show that replicating pools can decrease overall experimental cost, making pooling an interesting option.
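A minimal sketch of the single-objective cost trade-off (the binomial pool model, caller threshold, and prices are assumptions for illustration, not the authors' formulation):

import math

N = 1000              # patients to screen (assumed)
F = 0.01              # carrier frequency of the rare mutation (assumed)
COST_POOL = 500.0     # NGS cost per pool (assumed)
COST_SANGER = 20.0    # Sanger validation cost per sample (assumed)

def screening_cost(pool_size):
    """Expected cost: sequence ceil(N/k) pools, then Sanger-validate every
    sample in each pool expected to contain at least one carrier."""
    pools = math.ceil(N / pool_size)
    p_pos = 1 - (1 - F) ** pool_size      # P(pool contains >= 1 carrier)
    return pools * COST_POOL + pools * p_pos * pool_size * COST_SANGER

for k in (1, 5, 10, 20, 50):
    # crude detectability check: the variant allele fraction 1/(2k) must
    # stay above an assumed caller threshold of 1% to be callable
    detectable = 1 / (2 * k) >= 0.01
    print(f"pool size {k:2d}: expected cost {screening_cost(k):9.0f}, detectable={detectable}")

Larger pools cut sequencing cost but dilute the variant below the caller threshold, which is exactly the tension the multi-objective formulation makes explicit.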
Optimization of joint energy micro-grid with cold storage
NASA Astrophysics Data System (ADS)
Xu, Bin; Luo, Simin; Tian, Yan; Chen, Xianda; Xiong, Botao; Zhou, Bowen
2018-02-01
To accommodate distributed photovoltaic (PV) curtailment, make full use of the joint energy micro-grid with cold storage, and reduce high operating costs, economic dispatch of the joint energy micro-grid load is particularly important. Considering the different prices during peak and valley periods, an optimization model is established that takes the minimum production costs and PV curtailment fluctuations as its objectives. The linear weighted sum method and a genetic-taboo Particle Swarm Optimization (PSO) algorithm are used to solve the model and obtain the optimal power supply output. Taking the garlic market in Henan as an example, simulation results show that, considering distributed PV and time-varying prices, the optimization strategies are able to reduce operating costs and accommodate PV power efficiently.
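The linear weighted-sum scalarization used here has, in generic form (symbols and weights below are assumed notation, not taken from the paper):

\min_x \; F(x) = w_1\, C_{prod}(x) + w_2\, \Phi_{PV}(x), \qquad w_1 + w_2 = 1,\ w_i \ge 0,

where C_{prod}(x) is the production cost of the dispatch x over the horizon and \Phi_{PV}(x) measures PV-curtailment fluctuation; the genetic-taboo PSO then searches dispatch vectors x subject to power-balance and unit-limit constraints.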
Cost optimization for buildings with hybrid ventilation systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ji, Kun; Lu, Yan
A method including: computing a total cost for a first zone in a building, wherein the total cost is equal to an actual energy cost of the first zone plus a thermal discomfort cost of the first zone; and heuristically optimizing the total cost to identify temperature setpoints for a mechanical heating/cooling system and a start time and an end time of the mechanical heating/cooling system, based on external weather data and occupancy data of the first zone.
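A minimal sketch of the stated objective, total cost = energy cost + discomfort cost, minimized by brute force over setpoints and on/off times (the zone response, prices, and comfort band are invented placeholders, not the patented method):

PRICE = 0.12             # $/kWh (assumed)
DISCOMFORT_WEIGHT = 0.5  # $ per degree-hour outside the comfort band (assumed)
COMFORT = (20.0, 24.0)   # comfort band, deg C (assumed)

def zone_cost(setpoint, start_h, end_h, outdoor):
    """Toy zone model: returns energy cost + discomfort cost over one day."""
    temp, kwh, dh = outdoor[0], 0.0, 0.0
    for h, t_out in enumerate(outdoor):
        if start_h <= h < end_h:                 # HVAC on: hold the setpoint
            kwh += 0.5 * abs(setpoint - t_out)   # crude load model (assumed)
            temp = setpoint
        else:                                    # HVAC off: drift toward outdoors
            temp += 0.3 * (t_out - temp)
        dh += max(0.0, COMFORT[0] - temp) + max(0.0, temp - COMFORT[1])
    return kwh * PRICE + dh * DISCOMFORT_WEIGHT

outdoor = [12.0] * 8 + [18.0] * 10 + [12.0] * 6    # toy hourly weather trace, deg C
candidates = [(sp, s, e) for sp in (20, 21, 22, 23)
              for s in range(0, 12) for e in range(12, 24)]
best = min(candidates, key=lambda c: zone_cost(*c, outdoor))
print("best (setpoint, start, end):", best,
      "total cost:", round(zone_cost(*best, outdoor), 2))

The claimed method optimizes this total heuristically rather than exhaustively, but the brute-force version makes the structure of the objective plain.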
Optimal remediation of unconfined aquifers: Numerical applications and derivative calculations
NASA Astrophysics Data System (ADS)
Mansfield, Christopher M.; Shoemaker, Christine A.
1999-05-01
This paper extends earlier work on derivative-based optimization for cost-effective remediation to unconfined aquifers, which have more complex, nonlinear flow dynamics than confined aquifers. Most previous derivative-based optimization of contaminant removal has been limited to consideration of confined aquifers; however, contamination is more common in unconfined aquifers. Exact derivative equations are presented, and two computationally efficient approximations, the quasi-confined (QC) and head independent from previous (HIP) unconfined-aquifer finite element equation derivative approximations, are presented and demonstrated to be highly accurate. The derivative approximations can be used with any nonlinear optimization method requiring derivatives for computation of either time-invariant or time-varying pumping rates. The QC and HIP approximations are combined with the nonlinear optimal control algorithm SALQR into the unconfined-aquifer algorithm, which is shown to compute solutions for unconfined aquifers in CPU times that were not significantly longer than those required by the confined-aquifer optimization model. Two of the three example unconfined-aquifer cases considered obtained pumping policies with substantially lower objective function values with the unconfined model than were obtained with the confined-aquifer optimization, even though the mean differences in hydraulic heads predicted by the unconfined- and confined-aquifer models were small (less than 0.1%). We suggest a possible geophysical index based on differences in drawdown predictions between unconfined- and confined-aquifer models to estimate which aquifers require unconfined-aquifer optimization and which can be adequately approximated by the simpler confined-aquifer analysis.
Replica Approach for Minimal Investment Risk with Cost
NASA Astrophysics Data System (ADS)
Shinzato, Takashi
2018-06-01
In the present work, the optimal portfolio minimizing the investment risk with cost is discussed analytically, where an objective function is constructed in terms of two negative aspects of investment, the risk and cost. We note the mathematical similarity between the Hamiltonian in the mean-variance model and the Hamiltonians in the Hopfield model and the Sherrington-Kirkpatrick model, show that we can analyze this portfolio optimization problem by using replica analysis, and derive the minimal investment risk with cost and the investment concentration of the optimal portfolio. Furthermore, we validate our proposed method through numerical simulations.
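In assumed notation (a sketch of the kind of objective described, not necessarily the author's exact Hamiltonian), the mean-variance model with cost can be written as

H(\vec{w}) = \frac{1}{2N} \sum_{\mu=1}^{p} \Big( \sum_{i=1}^{N} w_i x_{i\mu} \Big)^2 + \frac{\gamma}{2N} \sum_{i=1}^{N} c_i w_i^2, \qquad \text{subject to } \sum_{i=1}^{N} w_i = N,

where x_{i\mu} are asset returns, c_i per-asset cost coefficients, and \gamma weights cost against risk; replica analysis then evaluates the quenched average of \min_w H over the random returns, which is what makes the Hopfield/Sherrington-Kirkpatrick analogy useful.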
Trade-Space Analysis Tool for Constellations (TAT-C)
NASA Technical Reports Server (NTRS)
Le Moigne, Jacqueline; Dabney, Philip; de Weck, Olivier; Foreman, Veronica; Grogan, Paul; Holland, Matthew; Hughes, Steven; Nag, Sreeja
2016-01-01
Traditionally, space missions have relied on relatively large and monolithic satellites, but in the past few years, under a changing technological and economic environment, including instrument and spacecraft miniaturization, scalable launchers, secondary launches as well as hosted payloads, there is growing interest in implementing future NASA missions as Distributed Spacecraft Missions (DSM). The objective of our project is to provide a framework that facilitates DSM Pre-Phase A investigations and optimizes DSM designs with respect to a-priori science goals. In this first version of our Trade-space Analysis Tool for Constellations (TAT-C), we are investigating questions such as: How many spacecraft should be included in the constellation? Which design has the best cost/risk value? The main goals of TAT-C are to: handle multiple spacecraft sharing a mission objective, from SmallSats up through flagships; explore the variable trade space for pre-defined science, cost, and risk goals and pre-defined metrics; and optimize cost and performance across multiple instruments and platforms rather than one at a time. This paper describes the overall architecture of TAT-C, including: a User Interface (UI) interacting with multiple users - scientists, mission designers, or program managers; and an Executive Driver gathering requirements from the UI, then formulating Trade-space Search Requests for the Trade-space Search Iterator, which works first with inputs from the Knowledge Base and then, in collaboration with the Orbit Coverage, Reduction Metrics, and Cost Risk modules, generates multiple potential architectures and their associated characteristics. TAT-C leverages the Goddard Mission Analysis Tool (GMAT) to compute coverage and ancillary data, streamlining the computations by modeling orbits in a way that balances accuracy and performance. The current version of TAT-C includes uniform Walker constellations as well as ad-hoc constellations, and its cost model is an aggregate model consisting of Cost Estimating Relationships (CERs) from widely accepted models. The Knowledge Base supports both analysis and exploration, and the current GUI prototype automatically generates graphics representing metrics such as average revisit time or coverage as a function of cost.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, K.; Paramonov, D.
2002-07-01
IRIS (International Reactor Innovative and Secure) is a small-to-medium advanced light water cooled modular reactor being developed by an international consortium led by Westinghouse/BNFL. This reactor design is specifically aimed at utilities looking to install new (or replacement) nuclear capacity to match market demands, or at developing countries for their distributed power needs. To determine the optimal configuration for IRIS, analysis was undertaken to establish Generation Costs ($/MWh) and Internal Rate of Return (IRR %) to the utility at alternative power ratings. This was then combined with global market projections for electricity demand out to 2030, segmented into key geographical regions. Finally, this information was brought together to form insights, conclusions and recommendations regarding the optimal design. The resultant analysis reveals a single module sized at 335 MWe, with a construction period of 3 years and a 60-year plant life. Individual modules can be installed in a staggered fashion (3 equivalent to 1005 MWe) or built in pairs (2 sets of twin units equivalent to 1340 MWe). Uncertainty in the Market Clearing Price for electricity, Annual Operating Costs and Construction Costs primarily influences lifetime Net Present Values (NPV) and hence IRR % for utilities. Generation Costs are in addition influenced by Fuel Costs, Plant Output, Plant Availability and Plant Capacity Factor. Therefore, for a site based on 3 single modules located in North America, Generation Costs of 28.5 $/MWh are required to achieve an IRR of 20%, a level which enables IRIS to compete with all other forms of electricity production. Plant size is critical to commercial success. Sustained (lifetime) high factors for Plant Output, Availability and Capacity Factor are required to achieve a competitive advantage. Modularity offers utilities the option to match their investments with market conditions, adding additional capacity as and when the circumstances are right. The construction schedule needs to be controlled. There is a clear trade-off between reducing financing charges and optimising revenue streams. (authors)
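As an illustration of the NPV/IRR arithmetic behind such sizing studies (a sketch with invented cash flows, not the IRIS figures: 3 construction years, 60 operating years, and an assumed $30/MWh net margin):

def npv(rate, flows):
    """Net present value of a list of annual cash flows, t = 0, 1, 2, ..."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(flows))

def irr(flows, lo=0.0, hi=1.0, tol=1e-6):
    """Bisection on the NPV sign change; assumes one sign change in flows."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

capex_per_year = -300e6                  # construction outlay per year (assumed)
net_revenue = 335 * 8760 * 0.90 * 30.0   # MWe * h/yr * capacity factor * $/MWh margin
flows = [capex_per_year] * 3 + [net_revenue] * 60
print(f"IRR ~ {irr(flows):.1%}")

Staggered or twin-unit builds simply shift when each module's flows enter the list, which is how modularity lets a utility match investment to market conditions.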
NASA Technical Reports Server (NTRS)
Janich, Karl W.
2005-01-01
The At-Least version of the Generalized Minimum Spanning Tree Problem (L-GMST) is a problem in which the optimal solution connects all defined clusters of nodes in a given network at minimum cost. The L-GMST is NP-hard; therefore, metaheuristic algorithms have been used to find reasonable solutions, as opposed to computationally feasible exact algorithms, which many believe do not exist for such a problem. One such metaheuristic uses a swarm-intelligent Ant Colony System (ACS) algorithm, in which agents converge on a solution through the weighing of local heuristics, such as the shortest available path and the number of agents that recently used a given path. However, in a network using a solution derived from the ACS algorithm, some nodes may move to different clusters and cause small changes in the network makeup. Rerunning the algorithm from scratch would be inefficient given how small these changes are, so a genetic algorithm seeded with the top few solutions found by the ACS algorithm is proposed to quickly and efficiently adapt the network to these small changes.
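A minimal sketch of the ACS edge-selection rule referred to (the pheromone/heuristic weighting with standard exponents alpha and beta; the three-node graph is a toy stand-in for a clustered L-GMST instance):

import random

ALPHA, BETA = 1.0, 2.0   # standard ACS exponents for pheromone and heuristic

def choose_next(node, candidates, pheromone, length):
    """Pick the next node with probability proportional to
    pheromone^ALPHA * (1/edge_length)^BETA."""
    weights = [pheromone[(node, c)] ** ALPHA * (1.0 / length[(node, c)]) ** BETA
               for c in candidates]
    r, acc = random.random() * sum(weights), 0.0
    for c, w in zip(candidates, weights):
        acc += w
        if acc >= r:
            return c
    return candidates[-1]

# toy example: from node 0, edge to 1 is shorter than edge to 2
pher = {(0, 1): 1.0, (0, 2): 1.0}
dist = {(0, 1): 2.0, (0, 2): 4.0}
print(choose_next(0, [1, 2], pher, dist))   # usually returns 1

In the proposed hybrid, good tours found this way seed a genetic algorithm's initial population, so only the perturbed parts of the network need to be re-optimized.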
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matlin, R. W.
1979-07-10
Tens of millions of the world's poorest farmers currently subsist on small farms below two hectares in size. The increasing cost of animal irrigation coupled with decreasing farm size and the lack of a utility grid or acceptable alternate power sources is causing interest in the use of solar photovoltaics for these very small (subkilowatt) water pumping systems. The attractive combinations of system components (array, pump, motor, storage and controls) have been identified and their interactions characterized in order to optimize overall system efficiency. Computer simulations as well as component tests were made of systems utilizing flat-plate and low-concentration arrays, direct-coupled and electronic-impedance-matching controls, fixed and incremental (once or twice a day) tracking, dc and ac motors, and positive-displacement, centrifugal and vertical turbine pumps. The results of these analyses and tests are presented, including water volume pumped as a function of time of day and year, for the locations of Orissa, India and Cairo, Egypt. Finally, a description and operational data are given for a prototype unit that was developed as a result of the previous analyses and tests.
NASA Astrophysics Data System (ADS)
Murthy, Uday S.
A variety of Web-based low cost computer-mediated communication (CMC) tools are now available for use by small and medium-sized enterprises (SME). These tools invariably incorporate chat systems that facilitate simultaneous input in synchronous electronic meeting environments, allowing what is referred to as “electronic brainstorming.” Although prior research in information systems (IS) has established that electronic brainstorming can be superior to face-to-face brainstorming, there is a lack of detailed guidance regarding how CMC tools should be optimally configured to foster creativity in SMEs. This paper discusses factors to be considered in using CMC tools for creativity brainstorming and proposes recommendations for optimally configuring CMC tools to enhance creativity in SMEs. The recommendations are based on lessons learned from several recent experimental studies on the use of CMC tools for rich brainstorming tasks that require participants to invoke domain-specific knowledge. Based on a consideration of the advantages and disadvantages of the various configuration options, the recommendations provided can form the basis for selecting a CMC tool for creativity brainstorming or for creating an in-house CMC tool for the purpose.
Economic trade-offs between genetic improvement and longevity in dairy cattle.
De Vries, A
2017-05-01
Genetic improvement in sires used for artificial insemination (AI) is increasing faster than it did a decade ago. The genetic merit of replacement heifers is also increasing faster, and the genetic lag relative to older cows in the herd increases. This may trigger greater cow culling to capture this genetic improvement. On the other hand, lower culling rates are often viewed favorably because the costs and environmental effects of maintaining herd size are generally lower. Thus, there is an economic trade-off between genetic improvement and longevity in dairy cattle. The objective of this study was to investigate the principles, literature, and magnitude of these trade-offs. Data from the Council on Dairy Cattle Breeding show that the estimated breeding value of the trait productive life has increased for 50 yr, but the actual time cows spend in the herd has not increased. The average annual herd cull rate remains at approximately 36% and cow longevity is approximately 59 mo. The annual increase in average estimated breeding value of the economic index lifetime net merit of Holstein sires is accelerating, from $40/yr for sires that entered AI around 2002 to $171/yr for sires that entered AI around 2012. The expectation is therefore that heifers born in 2015 are approximately $50 more profitable per lactation than heifers born in 2014. Asset replacement theory shows that assets should be replaced sooner when the challenging asset is technically improved. Few studies have investigated the direct effects of genetic improvement on optimal cull rates. A 35-yr-old study found that the economically optimal cull rates were in the range of 25 to 27%, compared with the lowest possible involuntary cull rate of 20%. Only a small effect was observed of using the best surviving dams to generate the replacement heifer calves. Genetic improvement from sires had little effect on the optimal cull rate. Another study that optimized culling decisions for individual cows also showed that the effect of changes in genetic improvement of milk revenue minus feed cost on herd longevity was relatively small. Reduced involuntary cull rates improved profitability, but also increased optimal voluntary culling. Finally, an economically optimal culling model with prices from 2015 confirmed that optimal annual cull rates were insensitive to heifer prices and therefore insensitive to genetic improvement in heifers. In conclusion, genetic improvement is important but does not warrant short cow longevity. Economic cow longevity continues to depend more on cow depreciation than on accelerated genetic improvement in heifers. This is confirmed by old and new studies. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Code of Federal Regulations, 2012 CFR
2012-01-01
... § 107.855 Interest rate ceiling and limitations on fees charged to Small Businesses (“Cost of Money”). “Cost of Money” means the interest and other consideration that you receive from a Small Business. Subject to lower ceilings prescribed by local law, the Cost of Money to the Small Business must not exceed...
Code of Federal Regulations, 2014 CFR
2014-01-01
... § 107.855 Interest rate ceiling and limitations on fees charged to Small Businesses (“Cost of Money”). “Cost of Money” means the interest and other consideration that you receive from a Small Business. Subject to lower ceilings prescribed by local law, the Cost of Money to the Small Business must not exceed...
Code of Federal Regulations, 2013 CFR
2013-01-01
... § 107.855 Interest rate ceiling and limitations on fees charged to Small Businesses (“Cost of Money”). “Cost of Money” means the interest and other consideration that you receive from a Small Business. Subject to lower ceilings prescribed by local law, the Cost of Money to the Small Business must not exceed...
Code of Federal Regulations, 2011 CFR
2011-01-01
... § 107.855 Interest rate ceiling and limitations on fees charged to Small Businesses (“Cost of Money”). “Cost of Money” means the interest and other consideration that you receive from a Small Business. Subject to lower ceilings prescribed by local law, the Cost of Money to the Small Business must not exceed...
Benchmarking U.S. Small Wind Costs with the Distributed Wind Taxonomy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Orrell, Alice C.; Poehlman, Eric A.
The objective of this report is to benchmark costs for small wind projects installed in the United States using a distributed wind taxonomy. Consequently, this report is a starting point to help expand the U.S. distributed wind market by informing potential areas for small wind cost-reduction opportunities and providing a benchmark to track future small wind cost-reduction progress.
Unit bias. A new heuristic that helps explain the effect of portion size on food intake.
Geier, Andrew B; Rozin, Paul; Doros, Gheorghe
2006-06-01
People seem to think that a unit of some entity (with certain constraints) is the appropriate and optimal amount. We refer to this heuristic as unit bias. We illustrate unit bias by demonstrating large effects of unit segmentation, a form of portion control, on food intake. Thus, people choose, and presumably eat, much greater weights of Tootsie Rolls and pretzels when offered a large as opposed to a small unit size (and given the option of taking as many units as they choose at no monetary cost). Additionally, they consume substantially more M&M's when the candies are offered with a large as opposed to a small spoon (again with no limits as to the number of spoonfuls to be taken). We propose that unit bias explains why small portion sizes are effective in controlling consumption; in some cases, people served small portions would simply eat additional portions if it were not for unit bias. We argue that unit bias is a general feature in human choice and discuss possible origins of this bias, including consumption norms.
Aircraft Electric Propulsion Systems Applied Research at NASA
NASA Technical Reports Server (NTRS)
Clarke, Sean
2015-01-01
Researchers at NASA are investigating the potential for electric propulsion systems to revolutionize the design of aircraft from the small-scale general aviation sector to commuter and transport-class vehicles. Electric propulsion provides new degrees of design freedom that may enable opportunities for tightly coupled design and optimization of the propulsion system with the aircraft structure and control systems. This could lead to extraordinary reductions in ownership and operating costs, greenhouse gas emissions, and noise annoyance levels. We are building testbeds, high-fidelity aircraft simulations, and the first highly distributed electric inhabited flight test vehicle to begin to explore these opportunities.
Lhassani, A; Rumeau, M; Benjelloun, D; Pontie, M
2001-09-01
Nanofiltration is generally used to separate monovalent ions from divalent ions, but it is also possible to separate ions of the same valency by careful exploitation of the transfer mechanisms involved. Analysis of the retention of halide salts reveals that small ions like fluoride are the best retained, and that this is even more marked under reduced pressure when selectivity is greatest. Selective desalination of fluoride-rich brackish water is hence feasible, and drinking water can be produced directly at much lower cost than using reverse osmosis by optimizing the pressure for the type of water treated.
Optical tomograph optimized for tumor detection inside highly absorbent organs
NASA Astrophysics Data System (ADS)
Boutet, Jérôme; Koenig, Anne; Hervé, Lionel; Berger, Michel; Dinten, Jean-Marc; Josserand, Véronique; Coll, Jean-Luc
2011-05-01
This paper presents a tomograph for small animal fluorescence imaging. The compact and cost-effective system described in this article was designed to address the problem of tumor detection inside highly absorbent heterogeneous organs, such as lungs. To validate the tomograph's ability to detect cancerous nodules inside lungs, in vivo tumor growth was studied on seven cancerous mice bearing murine mammary tumors marked with Alexa Fluor 700. They were successively imaged 10, 12, and 14 days after the primary tumor implantation. The fluorescence maps were compared over this time period. As expected, the reconstructed fluorescence increases with the tumor growth stage.
Protein Complex Production from the Drug Discovery Standpoint.
Moarefi, Ismail
2016-01-01
Small molecule drug discovery critically depends on the availability of meaningful in vitro assays to guide medicinal chemistry programs that are aimed at optimizing drug potency and selectivity. As it becomes increasingly evident, most disease relevant drug targets do not act as a single protein. In the body, they are instead generally found in complex with protein cofactors that are highly relevant for their correct function and regulation. This review highlights selected examples of the increasing trend to use biologically relevant protein complexes for rational drug discovery to reduce costly late phase attritions due to lack of efficacy or toxicity.
A Small Particle Solar Receiver for High Temperature Brayton Power Cycles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, Fletcher John
The objective of this project is to design, construct, and test at the Sandia NSTTF a revolutionary high-temperature air-cooled solar receiver in the multi-MW range that can be used to drive a gas turbine, generating low-cost electricity at $0.06/kWh when considered as part of an optimized CSP combined-cycle system. The receiver being developed in this research uses a dilute suspension of selectively absorbing carbon nano-particles to absorb highly concentrated solar flux. The concept of a volumetric, selective, and continually replenishable absorber is unique in the solar field.
Silva, Marcos A R; Mater, Luciana; Souza-Sierra, Maria M; Corrêa, Albertina X R; Sperb, Rafael; Radetski, Claudemir M
2007-08-25
The aim of this study was to propose a profitable destination for an industrial sludge that can cover the wastewater treatment costs of small waste generators. Optimized stabilization/solidification technology was used to treat hazardous waste from an electroplating industry that is currently released untreated to the environment. The stabilized/solidified (S/S) waste product was used as a raw material to build concrete blocks, to be sold as pavement blocks or used in roadbeds and/or parking lots. The quality of the blocks containing a mixture of cement, lime, clay and waste was evaluated by means of leaching and solubility tests according to the current Brazilian waste regulations. Results showed very low metal leachability and solubility of the block constituents, indicating a low environmental impact. Concerning the economic benefits of the S/S process and reuse of the resultant product, the cost of disposing of untreated heavy-metal-containing sludge to landfill is usually on the order of US$ 150-200 per tonne of waste, while 1 tonne of concrete roadbed blocks (containing 25% S/S waste) has a value of around US$ 100. The results of this work showed that the cement-, clay- and lime-based process of stabilization/solidification of hazardous waste sludge is sufficiently effective and economically viable to stimulate the treatment of wastewater from small industrial waste generators.
Low cost Ku-band earth terminals for voice/data/facsimile
NASA Technical Reports Server (NTRS)
Kelley, R. L.
1977-01-01
A Ku-band satellite earth terminal capable of providing two-way voice/facsimile teleconferencing, 128 kbps data, telephone, and high-speed imagery services is proposed. Optimized terminal cost and configuration are presented as a function of FDMA and TDMA approaches to multiple access. The entire terminal, from the antenna to microphones, speakers, and facsimile equipment, is considered. Component cost versus performance has been projected as a function of procurement size and predicted hardware innovations and production techniques through 1985. The lowest-cost combinations of components have been determined with a computer optimization algorithm. The system requirements, including terminal EIRP and G/T, satellite size, power per spacecraft transponder, satellite antenna characteristics, and link propagation outage, were selected using a computerized system cost/performance optimization algorithm. System cost and terminal cost and performance requirements are presented as a function of the size of a nationwide U.S. network. Service costs are compared with typical conference travel costs to show the viability of the proposed terminal.
NASA Astrophysics Data System (ADS)
Wöffler, T.; Jensen, J.; Schüttrumpf, H.
2017-12-01
Low-lying small islands are among the regions most vulnerable worldwide to the consequences of climate change, owing to their concentration of infrastructure, their geographical features, and their small size. Worldwide, special forms and adaptations of coastal protection strategies and measures can be found on small islands. In the North Frisian part of the North Sea, unique strategies and measures have been developed over the last centuries due to the geographic location and the isolation during extreme events. One special feature of the coastal protection strategy there is the lack of dikes. For this reason, houses are built on artificial dwelling mounds to protect the inhabitants and their goods against the frequent inundations of the storm surge season (up to 30 times a year). The Hallig islands themselves benefit from these inundations, which accumulate sediment on the islands' surfaces; this sedimentation has enabled a natural adaptation to sea level rise in the past. Nevertheless, the construction methods of the coastal protection measures are mainly based on tradition and the knowledge of the inhabitants. No resilient design approaches or safety standards exist today for these special structures, such as dwelling mounds and elevated revetments. For this reason, neither cost-efficient construction nor a prioritization of measures is possible. The main part of this paper is the scientific investigation of the existing coastal protection measures, with the objective of developing design approaches and safety standards. The results will optimize the construction of the existing coastal protection measures and can be transferred to other small islands and low-lying areas worldwide.
NASA Astrophysics Data System (ADS)
Handford, Matthew L.; Srinivasan, Manoj
2016-02-01
Robotic lower limb prostheses can improve the quality of life for amputees. Development of such devices, currently dominated by long prototyping periods, could be sped up by predictive simulations. In contrast to some amputee simulations which track experimentally determined non-amputee walking kinematics, here, we explicitly model the human-prosthesis interaction to produce a prediction of the user’s walking kinematics. We obtain simulations of an amputee using an ankle-foot prosthesis by simultaneously optimizing human movements and prosthesis actuation, minimizing a weighted sum of human metabolic and prosthesis costs. The resulting Pareto optimal solutions predict that increasing prosthesis energy cost, decreasing prosthesis mass, and allowing asymmetric gaits all decrease human metabolic rate for a given speed and alter human kinematics. The metabolic rates increase monotonically with speed. Remarkably, by performing an analogous optimization for a non-amputee human, we predict that an amputee walking with an appropriately optimized robotic prosthesis can have a lower metabolic cost - even lower than assuming that the non-amputee’s ankle torques are cost-free.
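The weighted-sum construction behind such a Pareto front has the generic form (symbols assumed, not the authors' notation):

\min_{u_h, u_p} \; J = \lambda\, E_{met}(u_h) + (1 - \lambda)\, C_{pros}(u_p), \qquad 0 \le \lambda \le 1,

subject to the coupled human-prosthesis dynamics and periodic-gait constraints, where u_h and u_p are the human and prosthesis controls; sweeping \lambda from 0 to 1 traces the Pareto-optimal trade-off between human metabolic cost and prosthesis cost reported above.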
Optimizing the U.S. Electric System with a High Penetration of Renewables
NASA Astrophysics Data System (ADS)
Corcoran, B. A.; Jacobson, M. Z.
2012-12-01
As renewable energy generators are increasingly being installed throughout the U.S., there is growing interest in interconnecting diverse renewable generators (primarily wind and solar) across large geographic areas through an enhanced transmission system. This reduces variability in the aggregate power output, increases system reliability, and allows for the development of the best overall group of renewable technologies and sites to meet the load. Studies are therefore needed to determine the most efficient and economical plan to achieve large area interconnections in a future electric system with a high penetration of renewables. This research quantifies the effects of aggregating electric load and, separately, electric load together with diverse renewable generation throughout the ten Federal Energy Regulatory Commission (FERC) regions in the contiguous U.S. The effects of aggregating electric load alone -- including generator capacity capital cost savings, load energy shift operating cost savings, reserve requirement cost savings, and transmission costs -- were calculated for various groupings of FERC regions using 2006 data. Transmission costs outweighed cost savings due to aggregation in nearly all cases. East-west transmission layouts had the highest overall cost, and interconnecting ERCOT to adjacent FERC regions resulted in increased costs, both due to limited existing transmission capacity. Scenarios consisting of smaller aggregation groupings had the lowest overall cost. This analysis found no economic case for further aggregation of load alone within the U.S., except possibly in the West and Northwest. If aggregation of electric load is desired, then small, regional consolidations yield the lowest overall system cost. Next, the effects of aggregating electric load together with renewable electricity generation are being quantified through the development and use of an optimization tool in AMPL (A Mathematical Programming Language). This deterministic linear program solves for the least-cost organizational structure and system (generator, transmission, storage, and reserve requirements) for a highly renewable U.S. electric grid. The analysis will 1) examine a highly renewable 2006 electric system, and 2) create a "roadmap" from the existing 2006 system to a highly renewable system in 2030, accounting for projected price and demand changes and generator retirements based on age and environmental regulations. Ideally, results from this study will offer insight for a federal renewable energy policy (such as a renewable portfolio standard) and how to best organize regions for transmission planning.
Optimal management of a stochastically varying population when policy adjustment is costly.
Boettiger, Carl; Bode, Michael; Sanchirico, James N; Lariviere, Jacob; Hastings, Alan; Armsworth, Paul R
2016-04-01
Ecological systems are dynamic and policies to manage them need to respond to that variation. However, policy adjustments will sometimes be costly, which means that fine-tuning a policy to track variability in the environment very tightly will only sometimes be worthwhile. We use a classic fisheries management problem, how to manage a stochastically varying population using annually varying quotas in order to maximize profit, to examine how costs of policy adjustment change optimal management recommendations. Costs of policy adjustment (changes in fishing quotas through time) could take different forms. For example, these costs may respond to the size of the change being implemented, or there could be a fixed cost any time a quota change is made. We show how different forms of policy costs have contrasting implications for optimal policies. Though it is frequently assumed that costs to adjusting policies will dampen variation in the policy, we show that certain cost structures can actually increase variation through time. We further show that failing to account for adjustment costs has a consistently worse economic impact than would assuming these costs are present when they are not.
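In generic dynamic-programming form (notation assumed, not the authors'), the quota problem with a policy adjustment cost reads

V(N_t, q_{t-1}) = \max_{q_t} \Big[ \Pi(N_t, q_t) - c_{adj}(q_t - q_{t-1}) + \delta\, \mathbb{E}\, V(N_{t+1}, q_t) \Big],

where N_t is the stock, q_t the quota, \Pi the harvest profit, and \delta the discount factor. The contrasting behaviors described arise from the form of c_adj: a cost proportional to |q_t - q_{t-1}| smooths the optimal policy, while a fixed fee charged whenever q_t \ne q_{t-1} can make it optimal to change quotas rarely but by large amounts, increasing policy variation through time.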
Evidence for composite cost functions in arm movement planning: an inverse optimal control approach.
Berret, Bastien; Chiovetto, Enrico; Nori, Francesco; Pozzo, Thierry
2011-10-01
An important issue in motor control is understanding the basic principles underlying the accomplishment of natural movements. According to optimal control theory, the problem can be stated in these terms: what cost function do we optimize to coordinate the many more degrees of freedom than necessary to fulfill a specific motor goal? This question has not received a final answer yet, since what is optimized partly depends on the requirements of the task. Many cost functions were proposed in the past, and most of them were found to be in agreement with experimental data. Therefore, the actual principles on which the brain relies to achieve a certain motor behavior are still unclear. Existing results might suggest that movements are not the results of the minimization of single but rather of composite cost functions. In order to better clarify this last point, we consider an innovative experimental paradigm characterized by arm reaching with target redundancy. Within this framework, we make use of an inverse optimal control technique to automatically infer the (combination of) optimality criteria that best fit the experimental data. Results show that the subjects exhibited a consistent behavior during each experimental condition, even though the target point was not prescribed in advance. Inverse and direct optimal control together reveal that the average arm trajectories were best replicated when optimizing the combination of two cost functions, nominally a mix between the absolute work of torques and the integrated squared joint acceleration. Our results thus support the cost combination hypothesis and demonstrate that the recorded movements were closely linked to the combination of two complementary functions related to mechanical energy expenditure and joint-level smoothness.
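In standard notation, the best-fitting composite criterion reported corresponds to

J = \alpha \int_0^T \sum_i \big| \tau_i(t)\, \dot{q}_i(t) \big| \, dt + (1 - \alpha) \int_0^T \sum_i \ddot{q}_i(t)^2 \, dt,

where \tau_i and q_i are joint torques and angles and \alpha is the mixing weight recovered by the inverse optimal control step; the first term penalizes mechanical energy expenditure (absolute work) and the second penalizes non-smooth, jerky joint motion.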
Shimansky, Y P
2011-05-01
It is well known from numerous studies that perception can be significantly affected by intended action in many everyday situations, indicating that perception and related decision-making is not a simple, one-way sequence, but a complex iterative cognitive process. However, the underlying functional mechanisms are yet unclear. Based on an optimality approach, a quantitative computational model of one such mechanism has been developed in this study. It is assumed in the model that significant uncertainty about task-related parameters of the environment results in parameter estimation errors and an optimal control system should minimize the cost of such errors in terms of the optimality criterion. It is demonstrated that, if the cost of a parameter estimation error is significantly asymmetrical with respect to error direction, the tendency to minimize error cost creates a systematic deviation of the optimal parameter estimate from its maximum likelihood value. Consequently, optimization of parameter estimate and optimization of control action cannot be performed separately from each other under parameter uncertainty combined with asymmetry of estimation error cost, thus making the certainty equivalence principle non-applicable under those conditions. A hypothesis that not only the action, but also perception itself is biased by the above deviation of parameter estimate is supported by ample experimental evidence. The results provide important insights into the cognitive mechanisms of interaction between sensory perception and planning an action under realistic conditions. Implications for understanding related functional mechanisms of optimal control in the CNS are discussed.
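One standard concrete instance of this mechanism (offered as an illustrative assumption, not the paper's specific model) is the asymmetric LINEX loss L(\Delta) = b\,(e^{a\Delta} - a\Delta - 1) on the estimation error \Delta = \hat{\theta} - \theta. The cost-minimizing estimate is then

\hat{\theta}^* = -\frac{1}{a} \ln \mathbb{E}\big[ e^{-a\theta} \big],

which for a Gaussian belief \theta \sim \mathcal{N}(\mu, \sigma^2) gives \hat{\theta}^* = \mu - a\sigma^2/2: the estimate is shifted away from the costly error direction by an amount that grows with the uncertainty \sigma^2, precisely the systematic deviation from the maximum-likelihood value described above.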
NASA Astrophysics Data System (ADS)
Beck, Joakim; Dia, Ben Mansour; Espath, Luis F. R.; Long, Quan; Tempone, Raúl
2018-06-01
In calculating expected information gain in optimal Bayesian experimental design, the computation of the inner loop in the classical double-loop Monte Carlo requires a large number of samples and suffers from underflow if the number of samples is small. These drawbacks can be avoided by using an importance sampling approach. We present a computationally efficient method for optimal Bayesian experimental design that introduces importance sampling based on the Laplace method to the inner loop. We derive the optimal values for the method parameters in which the average computational cost is minimized according to the desired error tolerance. We use three numerical examples to demonstrate the computational efficiency of our method compared with the classical double-loop Monte Carlo, and a more recent single-loop Monte Carlo method that uses the Laplace method as an approximation of the return value of the inner loop. The first example is a scalar problem that is linear in the uncertain parameter. The second example is a nonlinear scalar problem. The third example deals with the optimal sensor placement for an electrical impedance tomography experiment to recover the fiber orientation in laminate composites.
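For reference, the expected information gain of a design \xi and its double-loop Monte Carlo estimator take the standard forms

EIG(\xi) = \iint \ln \frac{p(y \mid \theta, \xi)}{p(y \mid \xi)} \, p(y \mid \theta, \xi)\, p(\theta) \, dy \, d\theta \approx \frac{1}{N} \sum_{n=1}^{N} \ln \frac{p(y_n \mid \theta_n, \xi)}{\frac{1}{M} \sum_{m=1}^{M} p(y_n \mid \theta_{n,m}, \xi)},

with \theta_n, \theta_{n,m} \sim p(\theta) and y_n \sim p(y \mid \theta_n, \xi). The inner average underflows when M is small and every p(y_n \mid \theta_{n,m}, \xi) is tiny; drawing the inner samples instead from a Laplace-based importance distribution concentrated near the posterior mode keeps the summands at a sensible scale, which is the modification developed here.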
A Scheme to Optimize Flow Routing and Polling Switch Selection of Software Defined Networks.
Chen, Huan; Li, Lemin; Ren, Jing; Wang, Yang; Zhao, Yangming; Wang, Xiong; Wang, Sheng; Xu, Shizhong
2015-01-01
This paper aims at minimizing the communication cost for collecting flow information in Software Defined Networks (SDN). Since the flow-based information collection method incurs too much communication cost, and the recently proposed switch-based method cannot benefit from controlling flow routing, we propose jointly optimizing flow routing and polling switch selection to reduce the communication cost. To this end, the joint optimization problem is first formulated as an Integer Linear Programming (ILP) model. Since the ILP model is intractable in large networks, we also design an optimal algorithm for multi-rooted tree topologies and an efficient heuristic algorithm for general topologies. Extensive simulations show that our method can save up to 55.76% of communication cost compared with the state-of-the-art switch-based scheme.
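A generic form of such a joint ILP (assumed notation; the paper's model will differ in detail) routes each flow and assigns it to exactly one polled switch on its route:

minimize \sum_{f \in F} \sum_{s \in S} c_{fs}\, z_{fs}
subject to \sum_{s \in S} z_{fs} = 1 for all f \in F (each flow is collected exactly once), and z_{fs} \le r_{fs} for all f, s (a flow can only be polled at a switch its chosen route traverses),

with binary routing variables r_{fs} tied to standard flow-conservation constraints; the coupling between r and z is what makes joint optimization pay off over fixing the routing first.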
NASA Astrophysics Data System (ADS)
Amenda, Lisa; Pfurtscheller, Clemens
2013-04-01
Owing to increased settlement in hazardous areas and rising asset values, natural disasters such as floods, landslides, and rockfalls cause high economic losses in Alpine lateral valleys. Especially in small municipalities, indirect losses, mainly stemming from a breakdown of transport networks, and the costs of emergency response can reach critical levels. A quantification of these losses is necessary to estimate the worthiness of mitigation measures, to determine the appropriate level of disaster assistance, and to improve risk management strategies. Comprehensive approaches are available for assessing direct losses. However, indirect losses and emergency costs are widely unassessed, and the empirical basis for estimating them is weak. To address the resulting uncertainties in project appraisals, a standardized methodology has been developed dealing with local economic effects and the emergency efforts needed. In our approach, the cost-benefit analysis for technical mitigation of the Austrian Torrent and Avalanche Control (TAC) will be optimized and extended using the 2005 debris flow that struck a small town in the upper Inn valley in southwest Tyrol (Austria) as a design event. In that event, 84 buildings were affected, 430 people were evacuated, and the TAC subsequently implemented protection measures costing 3.75 million Euros. Updating the TAC method and analyzing to what extent the cost-benefit ratio changes is one of the main objectives of this study. To estimate short-run indirect effects and emergency costs at the local level, data were collected via questionnaires, field mapping, and guided interviews, as well as intensive literature research. On this basis, up-to-date calculation methods were developed, and the cost-benefit analysis of the TAC was recalculated with the new results. The cost-benefit ratio becomes more precise and specific, and hence so does the decision on which mitigation alternative to carry out. On this basis, the worthiness of mitigation measures can be determined in more detail and the proper level of emergency assistance can be calculated more adequately. This study creates a better data basis for evaluating technical and non-technical mitigation measures, which is useful for government agencies, insurance companies, and research.
Machine Learning Techniques in Optimal Design
NASA Technical Reports Server (NTRS)
Cerbone, Giuseppe
1992-01-01
Many important applications can be formalized as constrained optimization tasks. For example, we are studying the engineering domain of two-dimensional (2-D) structural design. In this task, the goal is to design a structure of minimum weight that bears a set of loads. A solution to a design problem in which there is a single load (L) and two stationary support points (S1 and S2), consisting of four members (E1, E2, E3, and E4) that connect the load to the support points, is discussed. In principle, optimal solutions to problems of this kind can be found by numerical optimization techniques. However, in practice [Vanderplaats, 1984] these methods are slow and they can produce different local solutions whose quality (ratio to the global optimum) varies with the choice of starting points. Hence, their applicability to real-world problems is severely restricted. To overcome these limitations, we propose to augment numerical optimization by first performing a symbolic compilation stage to produce: (a) objective functions that are faster to evaluate and that depend less on the choice of the starting point, and (b) selection rules that associate problem instances with a set of recommended solutions. These goals are accomplished by successive specializations of the problem class and of the associated objective functions. In the end, this process reduces the problem to a collection of independent functions that are fast to evaluate, that can be differentiated symbolically, and that represent smaller regions of the overall search space. However, the specialization process can produce a large number of sub-problems. This is overcome by inductively deriving selection rules which associate problems with small sets of specialized independent sub-problems. Each set of candidate solutions is chosen to minimize a cost function which expresses the tradeoff between the quality of the solution that can be obtained from the sub-problem and the time it takes to produce it. The overall solution to the problem is then obtained by solving each of the sub-problems in the set in parallel and selecting the one with the minimum cost. In addition to speeding up the optimization process, our use of learning methods also relieves the expert of the burden of identifying rules that exactly pinpoint optimal candidate sub-problems. In real engineering tasks it is usually too costly for the engineers to derive such rules. Therefore, this paper also contributes a further step towards the solution of the knowledge acquisition bottleneck [Feigenbaum, 1977], which has somewhat impaired the construction of rule-based expert systems.
Optical Design Using Small Dedicated Computers
NASA Astrophysics Data System (ADS)
Sinclair, Douglas C.
1980-09-01
Since the time of the 1975 International Lens Design Conference, we have developed a series of optical design programs for Hewlett-Packard desktop computers. The latest programs in the series, OSLO-25G and OSLO-45G, have most of the capabilities of general-purpose optical design programs, including optimization based on exact ray-trace data. The computational techniques used in the programs are similar to ones used in other programs, but the creative environment experienced by a designer working directly with these small dedicated systems is typically much different from that obtained with shared-computer systems. Some of the differences are due to the psychological factors associated with using a system having zero running cost, while others are due to the design of the program, which emphasizes graphical output and ease of use, as opposed to computational speed.
NASA Astrophysics Data System (ADS)
Castanier, Eric; Paterne, Loic; Louis, Céline
2017-09-01
In nuclear engineering, one must manage both time and precision. In shielding design especially, greater accuracy and efficiency are needed to reduce cost (shielding thickness optimization), and 3D codes are used for this. In this paper, we examine whether the CADIS method can easily be applied to the shielding design of small pipes that pass through large concrete walls. We assess the impact of the weight windows (WW) generated by the 3D deterministic code ATTILA versus WW generated directly by MCNP (an iterative, manual process). The comparison is based on the quality of the convergence (estimated relative error (σ), Variance of Variance (VOV) and Figure of Merit (FOM)), on time (computer time + modelling) and on the effort required of the engineer.
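The figure of merit referred to is the standard Monte Carlo convergence metric

FOM = \frac{1}{R^2\, T},

where R is the estimated relative error of the tally and T the computing time; since R^2 scales as 1/T for a well-behaved run, the FOM is roughly constant over a run, and a higher FOM means the weight windows (whether generated from the ATTILA deterministic solution or iterated manually in MCNP) reach a given precision in less time.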
Conformal and Spectrally Agile Ultra Wideband Phased Array Antenna for Communication and Sensing
NASA Technical Reports Server (NTRS)
Novak, M.; Alwan, Elias; Miranda, Felix; Volakis, John
2015-01-01
There is a continuing need to reduce the size and weight of satellite systems, and there is also strong interest in increasing the functional role of small- and nano-satellites (for instance SmallSats and CubeSats). To this end, a family of arrays is presented, demonstrating ultra-wideband operation across the numerous satellite communications and sensing frequencies up to the Ku-, Ka-, and millimeter-wave bands. An example design is demonstrated to operate from 3.5-18.5 GHz with VSWR < 2 at broadside, and validated through fabrication of an 8 x 8 prototype. This design is optimized for low cost, using Printed Circuit Board (PCB) fabrication. With the same fabrication technology, scaling is shown to be feasible up to a 9-49 GHz band. Further designs are discussed, which extend this wideband operation beyond the Ka-band, for instance from 20-80 GHz. Finally, we discuss recent efforts in the direct integration of such arrays with digital beamforming back-ends. It is shown that using a novel on-site coding architecture, orders-of-magnitude reductions in hardware size, power, and cost are accomplished in this transceiver.
Couto, Maria Claudia Lima; Lange, Liséte Celina; Rosa, Rodrigo de Alvarenga; Couto, Paula Rogeria Lima
2017-12-01
The implementation of reverse logistics systems (RLS) for post-consumer products provides environmental and economic benefits, since it increases recycling potential. However, RLS implementation and consolidation still face problems; the main shortcomings are high costs and low expectations of broad implementation worldwide. This paper presents two mathematical models to decide the number and location of screening centers (SCs) and valorization centers (VCs) for implementing reverse logistics of post-consumer packaging, defining optimum territorial arrangements (OTAs) that allow the inclusion of small and medium-sized municipalities. The paper aims to fill a gap in the literature on RLS facility location that considers not only revenue optimization but also the participation of the population, the involvement of waste pickers, and service universalization. The results showed that the implementation of VCs can lead to a revenue/cost ratio higher than 100%. The results of this study can supply companies and government agencies with a global view of the parameters that influence RLS sustainability and help them make decisions about the location of these facilities and the best reverse flows, with the social inclusion of waste pickers and service to the population of small and medium-sized municipalities.
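Models of this type typically build on the classical uncapacitated facility-location ILP; in a generic form (symbols assumed, not the paper's exact model):

minimize \sum_i f_i y_i + \sum_i \sum_j c_{ij} x_{ij}
subject to \sum_i x_{ij} = 1 for all municipalities j, x_{ij} \le y_i, and x_{ij}, y_i \in \{0, 1\},

where y_i = 1 opens an SC or VC at candidate site i at fixed cost f_i and x_{ij} assigns municipality j to site i at transport cost c_{ij}; revenues from recovered material enter the objective with negative sign, which is how a revenue/cost ratio above 100% can emerge.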
Cost Overrun Optimism: Fact or Fiction
2016-02-29
Christensen, David D., Maj, USAF
Program managers are advocates by …
Ten Berg, J M; Kelder, J C; Suttorp, M J; Mast, E G; Bal, E T; Ernst, J M P G; Plokker, H W M
2002-05-01
Coronary angioplasty frequently creates a thrombogenic surface with subsequent mural thrombosis that may lead to acute complications and possibly stimulates the development of restenosis. Whether coumarins can prevent these complications is unclear. In the Balloon Angioplasty and Anticoagulation Study (BAAS), the effect on early and late outcome of coumarins started before the procedure was studied. Patients were randomised to aspirin only or to aspirin plus coumarins. Half of the patients were randomised to undergo six-month angiographic follow-up. Study medication was started one week before coronary angioplasty and the target international normalised ratio (INR) was 2.1-4.8 during angioplasty and six-month follow-up. 'Optimal' anticoagulation was defined as an INR in the target range for at least 70% of the follow-up time. In addition, the cost-effectiveness of coumarin treatment was measured. At one year, death, myocardial infarction, target-lesion revascularisation and stroke were observed in 14.3% of the 530 patients randomised to aspirin plus coumarin versus 20.3% of the 528 patients randomised to aspirin alone (relative risk 0.71; 95% CI 0.54-0.93). The incidence of major bleedings and false aneurysms during hospitalisation was 3.2% and 1.0%, respectively (relative risk 3.39; 95% CI 1.26-9.11). Optimal anticoagulation was an independent predictor of late thrombotic events (relative risk 0.33; 95% CI 0.19-0.57). Quantitative coronary analysis was performed on 301 lesions in the ASA group and 297 lesions in the coumarin group. At six months, the minimal luminal diameter (MLD) was similar in the ASA and coumarin groups. However, optimal anticoagulation was an independent predictor of angiographic outcome at six months. Optimal anticoagulation led to a 0.21 mm (95% CI 0.05-0.37) larger MLD compared with suboptimal anticoagulation, whereas aspirin use led to a 0.12 mm (95% CI -0.28-0.04) smaller MLD. When including all costs, the savings associated with coumarin treatment were estimated at €235 per patient after one year. Coumarin pretreatment reduces early and late events in patients undergoing percutaneous coronary intervention at the expense of a small increase in nonfatal bleeding complications. Furthermore, an optimal level of anticoagulation is associated with a significantly better outcome compared with a suboptimal level of anticoagulation. In addition, coumarin treatment reduces costs.
Distribution-dependent robust linear optimization with applications to inventory control
Kang, Seong-Cheol; Brisimi, Theodora S.
2014-01-01
This paper tackles linear programming problems with data uncertainty and applies it to an important inventory control problem. Each element of the constraint matrix is subject to uncertainty and is modeled as a random variable with a bounded support. The classical robust optimization approach to this problem yields a solution with guaranteed feasibility. As this approach tends to be too conservative when applications can tolerate a small chance of infeasibility, one would be interested in obtaining a less conservative solution with a certain probabilistic guarantee of feasibility. A robust formulation in the literature produces such a solution, but it does not use any distributional information on the uncertain data. In this work, we show that the use of distributional information leads to an equally robust solution (i.e., under the same probabilistic guarantee of feasibility) but with a better objective value. In particular, by exploiting distributional information, we establish stronger upper bounds on the constraint violation probability of a solution. These bounds enable us to “inject” less conservatism into the formulation, which in turn yields a more cost-effective solution (by 50% or more in some numerical instances). To illustrate the effectiveness of our methodology, we consider a discrete-time stochastic inventory control problem with certain quality of service constraints. Numerical tests demonstrate that the use of distributional information in the robust optimization of the inventory control problem results in 36%–54% cost savings, compared to the case where such information is not used. PMID:26347579
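For context, a budgeted-uncertainty robust counterpart of the Bertsimas-Sim type (stated generically as background, not as the paper's formulation) protects a constraint \sum_j \tilde{a}_{ij} x_j \le b_i with \tilde{a}_{ij} \in [\bar{a}_{ij} - \hat{a}_{ij}, \bar{a}_{ij} + \hat{a}_{ij}] by requiring

\sum_j \bar{a}_{ij} x_j + \max_{S \subseteq J,\, |S| \le \Gamma} \sum_{j \in S} \hat{a}_{ij} |x_j| \le b_i,

and distribution-free arguments bound the violation probability by roughly \exp(-\Gamma^2 / 2n). Sharper, distribution-dependent bounds of the kind established here justify a smaller protection level \Gamma for the same probabilistic guarantee, which is where the reported cost savings come from.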
Bordoloi, Shreemoyee; Nath, Suresh K; Gogoi, Sweety; Dutta, Robin K
2013-09-15
A three-step treatment process involving (i) mild alkaline pH-conditioning by NaHCO₃; (ii) oxidation of arsenite and ferrous ions by KMnO₄, itself precipitating as insoluble MnO₂ under the pH condition; and (iii) coagulation by FeCl₃ has been used for simultaneous removal of arsenic and iron ions from water. The treated water is filtered after a residence time of 1-2 h. Laboratory batch experiments were performed to optimize the doses. A field trial was performed with an optimized recipe at 30 households and 5 schools in some highly arsenic-affected villages in Assam, India. Simultaneous removal of arsenic from an initial 0.1-0.5 mg/L to about 5 μg/L and of iron from an initial 0.3-5.0 mg/L to less than 0.1 mg/L has been achieved, with a final pH between 7.0 and 7.5, after a residence time of 1 h. The process also removes other heavy elements, if present, without leaving any additional toxic residue. The small quantity of solid sludge, containing mainly ferrihydrite with adsorbed arsenate, passes the toxicity characteristic leaching procedure (TCLP) test. The estimated recurring cost is approximately USD 0.16 per m³ of purified water. High efficiency, extremely low cost, safety, no power requirement and simplicity of operation make the technique a promising candidate for rural application. Copyright © 2013 Elsevier B.V. All rights reserved.
Koelewijn, Anne D; van den Bogert, Antonie J
2016-09-01
Despite having a fully functional knee and hip in both legs, asymmetries in joint moments of the knee and hip are often seen in gait of persons with a unilateral transtibial amputation (TTA), possibly resulting in excessive joint loading. We hypothesize that persons with a TTA can walk with more symmetric joint moments at the cost of increased effort or abnormal kinematics. The hypothesis was tested using predictive simulations of gait. Open loop controls of one gait cycle were found by solving an optimization problem that minimizes a combination of walking effort and tracking error in joint angles, ground reaction force and gait cycle duration. A second objective was added to penalize joint moment asymmetry, creating a multi-objective optimization problem. A Pareto front was constructed by changing the weights of the objectives and three solutions were analyzed to study the effect of increasing joint moment symmetry. When the optimization placed more weight on moment symmetry, walking effort increased and kinematics became less normal, confirming the hypothesis. TTA gait improved with a moderate increase in joint moment symmetry. At a small cost of effort and abnormal kinematics, the peak hip extension moment in the intact leg was decreased significantly, and so was the joint contact force in the knee and hip. Additional symmetry required a significant increase in walking effort and the joint contact forces in both hips became significantly higher than in able-bodied gait. Copyright © 2016 Elsevier B.V. All rights reserved.
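The Pareto front described above is built by sweeping the weight between competing objectives. A toy sketch of that weighted-sum construction follows, with stand-in quadratic objectives invented for illustration; the paper's musculoskeletal gait model is far richer.

```python
# Toy sketch of building a Pareto front by sweeping the weight between two
# competing objectives, as done (with far richer models) in the study above.
import numpy as np
from scipy.optimize import minimize

def effort(u):       # stand-in: effort is lowest at an asymmetric "natural" gait
    return (u[0] - 1.0) ** 2 + u[1] ** 2

def asymmetry(u):    # stand-in for joint-moment asymmetry between the two legs
    return (u[0] - u[1]) ** 2

print(" w   effort  asymmetry")
for w in np.linspace(0.0, 1.0, 11):   # weight on the symmetry objective
    res = minimize(lambda u: (1 - w) * effort(u) + w * asymmetry(u),
                   x0=np.array([1.0, 0.0]))
    print(f"{w:.1f}  {effort(res.x):6.3f}  {asymmetry(res.x):8.4f}")
```

As the weight on symmetry grows, asymmetry falls while effort rises, which is the trade-off the study's Pareto front makes explicit.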
Sizing a rainwater harvesting cistern by minimizing costs
NASA Astrophysics Data System (ADS)
Pelak, Norman; Porporato, Amilcare
2016-10-01
Rainwater harvesting (RWH) has the potential to reduce water-related costs by providing an alternate source of water, in addition to relieving pressure on public water sources and reducing stormwater runoff. Existing methods for determining the optimal size of the cistern component of a RWH system have various drawbacks, such as specificity to a particular region, dependence on numerical optimization, and/or failure to consider the costs of the system. In this paper a formulation is developed for the optimal cistern volume which incorporates the fixed and distributed costs of a RWH system while also taking into account the random nature of the depth and timing of rainfall, with a focus on RWH to supply domestic, nonpotable uses. With rainfall inputs modeled as a marked Poisson process, and by comparing the costs associated with building a cistern with the costs of externally supplied water, an expression for the optimal cistern volume is found which minimizes the water-related costs. The volume is a function of the roof area, water use rate, climate parameters, and costs of the cistern and of the external water source. This analytically tractable expression makes clear the dependence of the optimal volume on the input parameters. An analysis of the rainfall partitioning also characterizes the efficiency of a particular RWH system configuration and its potential for runoff reduction. The results are compared to the RWH system at the Duke Smart Home in Durham, NC, USA to show how the method could be used in practice.
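The paper derives the optimal volume analytically; the same trade-off can also be checked numerically. The sketch below simulates rainfall as a marked Poisson process (Bernoulli-approximated daily arrivals with exponential depths) and minimizes an assumed annualized storage-plus-external-water cost over a grid of cistern volumes; all parameter values are illustrative.

```python
# Monte Carlo sketch of choosing a cistern volume V that minimizes assumed
# annualized water-related costs, with rainfall as a marked Poisson process.
import numpy as np

rng = np.random.default_rng(0)
lam, alpha = 0.3, 10.0    # storm frequency (1/day) and mean depth (mm), assumed
roof = 100.0              # roof area (m^2); 1 mm over 100 m^2 = 0.1 m^3
demand = 0.15             # non-potable demand (m^3/day)
c_storage = 30.0          # annualized cistern cost per m^3 of capacity, assumed
c_water = 2.0             # price of externally supplied water (per m^3)
years, days = 10, 3650

def annual_cost(V, runs=10):
    totals = []
    for _ in range(runs):
        storage, external = 0.0, 0.0
        # Bernoulli approximation of Poisson storm arrivals, exponential marks
        rain = (rng.random(days) < lam) * rng.exponential(alpha, days) / 1000.0
        for r in rain:
            storage = min(V, storage + r * roof)   # overflow is lost
            take = min(storage, demand)
            storage -= take
            external += demand - take              # shortfall bought externally
        totals.append(c_storage * V + c_water * external / years)
    return np.mean(totals)

volumes = np.arange(0.5, 8.5, 0.5)
best = min(volumes, key=annual_cost)
print("approximately optimal cistern volume:", best, "m^3")
```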
Chavez, Hernan; Castillo-Villar, Krystel; Webb, Erin
2017-08-01
Variability in the physical characteristics of feedstock has a relevant effect on the reactor’s reliability and operating cost. Most of the models developed to optimize biomass supply chains have failed to quantify the effect of biomass quality, and of the preprocessing operations required to meet biomass specifications, on overall cost and performance. The Integrated Biomass Supply Analysis and Logistics (IBSAL) model estimates the harvesting, collection, transportation, and storage cost while considering the stochastic behavior of the field-to-biorefinery supply chain. This paper proposes an IBSAL-SimMOpt (Simulation-based Multi-Objective Optimization) method for optimizing the biomass quality and the costs associated with the efforts needed to meet conversion technology specifications. The method is developed in two phases. In the first phase, a SimMOpt tool that interacts with the extended IBSAL is developed. In the second phase, the baseline IBSAL model is extended so that the cost of meeting specifications and/or the penalization for failing to meet them are considered. The IBSAL-SimMOpt method is designed to optimize the quality characteristics of biomass, the cost of activities intended to improve feedstock quality, and the penalization cost. A case study based on 1916 farms in Ontario, Canada is used to test the proposed method. Analysis of the results demonstrates that this method is able to find a high-quality set of non-dominated solutions.
Optimal inventories for overhaul of repairable redundant systems - A Markov decision model
NASA Technical Reports Server (NTRS)
Schaefer, M. K.
1984-01-01
A Markovian decision model was developed to calculate the optimal inventory of repairable spare parts for an avionics control system for commercial aircraft. Total expected shortage costs, repair costs, and holding costs are minimized for a machine containing a single system of redundant parts. Transition probabilities are calculated for each repair state and repair rate, and optimal spare parts inventory and repair strategies are determined through linear programming. The linear programming solutions are given in a table.
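The linear-programming solution of such a Markov decision model can be sketched generically with occupancy measures: minimize the expected average cost subject to stationarity and normalization constraints. The transition probabilities, costs, and state/action meanings below are invented for illustration, not taken from the report.

```python
# Generic sketch of solving an average-cost Markov decision model by linear
# programming over state-action occupancy measures, echoing the approach above.
# States = number of serviceable spares (0..2); actions = slow/fast repair.
import numpy as np
from scipy.optimize import linprog

S, A = 3, 2
P = np.zeros((S, A, S))                      # P[s, a, s'] = transition probability
P[0, 0] = [0.7, 0.3, 0.0]; P[0, 1] = [0.4, 0.5, 0.1]
P[1, 0] = [0.3, 0.5, 0.2]; P[1, 1] = [0.1, 0.5, 0.4]
P[2, 0] = [0.1, 0.3, 0.6]; P[2, 1] = [0.1, 0.2, 0.7]
cost = np.array([[50.0, 70.0],               # shortage-prone state: high cost
                 [20.0, 35.0],
                 [ 5.0, 25.0]])              # holding cost when spares abound

# Variables y[s,a] >= 0; minimize sum(c*y) s.t. balance and normalization.
c = cost.ravel()
A_eq = np.zeros((S + 1, S * A))
for sp in range(S):          # sum_a y[sp,a] - sum_{s,a} P[s,a,sp] y[s,a] = 0
    for s in range(S):
        for a in range(A):
            A_eq[sp, s * A + a] -= P[s, a, sp]
    for a in range(A):
        A_eq[sp, sp * A + a] += 1.0
A_eq[S, :] = 1.0             # occupancies sum to one
b_eq = np.zeros(S + 1); b_eq[S] = 1.0

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * (S * A))
y = res.x.reshape(S, A)
print("average cost:", res.fun, "action per visited state:", y.argmax(axis=1))
```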
Rethinking FCV/BEV Vehicle Range: A Consumer Value Trade-off Perspective
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Zhenhong; Greene, David L
2010-01-01
The driving range of FCV and BEV is often analyzed by simple analogy to conventional vehicles without proper consideration of differences in energy storage technology, infrastructure, and market context. This study proposes a coherent framework to optimize the driving range by minimizing costs associated with range, including upfront storage cost, fuel availability cost for FCV and range anxiety cost for BEV. It is shown that the conventional assumption of FCV range can lead to overestimation of FCV market barrier by over $8000 per vehicle in the near-term market. Such exaggeration of FCV market barrier can be avoided with range optimization. Compared to the optimal BEV range, the 100-mile range chosen by automakers appears to be near optimal for modest drivers, but far less than optimal for frequent drivers. With range optimization, the probability that the BEV is unable to serve a long-trip day is generally less than 5%, depending on driving intensity. Range optimization can help diversify BEV products for different consumers. It is also demonstrated and argued that the FCV/BEV range should adapt to the technology and infrastructure developments.
Analysis and optimization of hybrid electric vehicle thermal management systems
NASA Astrophysics Data System (ADS)
Hamut, H. S.; Dincer, I.; Naterer, G. F.
2014-02-01
In this study, the thermal management system of a hybrid electric vehicle is optimized using single and multi-objective evolutionary algorithms in order to maximize the exergy efficiency and minimize the cost and environmental impact of the system. The objective functions are defined and decision variables, along with their respective system constraints, are selected for the analysis. In the multi-objective optimization, a Pareto frontier is obtained and a single desirable optimal solution is selected based on the LINMAP decision-making process. The corresponding solutions are compared against the exergetic, exergoeconomic and exergoenvironmental single-objective optimization results. The results show that the exergy efficiency, total cost rate and environmental impact rate for the baseline system are 0.29, ¢28 h⁻¹ and 77.3 mPts h⁻¹, respectively. Moreover, based on the exergoeconomic optimization, 14% higher exergy efficiency and 5% lower cost can be achieved, compared to baseline parameters, at the expense of a 14% increase in the environmental impact. Based on the exergoenvironmental optimization, a 13% higher exergy efficiency and 5% lower environmental impact can be achieved at the expense of a 27% increase in the total cost.
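LINMAP-style selection picks, from the Pareto front, the solution closest to the ideal point after normalizing each objective. A minimal sketch follows; the small front below is invented (loosely echoing the cost and impact units above), and all objectives are written so that smaller is better.

```python
# Sketch of LINMAP-style selection from a Pareto front: normalize each
# objective, then pick the solution closest to the non-attainable ideal point.
import numpy as np

# Columns: cost rate (cents/h), environmental impact (mPts/h), -exergy eff.
front = np.array([
    [26.5, 88.0, -0.33],
    [28.0, 77.3, -0.29],
    [29.5, 73.5, -0.31],
    [32.0, 70.1, -0.30],
])

lo, hi = front.min(axis=0), front.max(axis=0)
norm = (front - lo) / (hi - lo)           # scale every objective to [0, 1]
ideal = norm.min(axis=0)                  # best attainable value per objective
d = np.linalg.norm(norm - ideal, axis=1)  # Euclidean distance to the ideal
print("selected design:", front[d.argmin()])
```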
Li, Xuejun; Xu, Jia; Yang, Yun
2015-01-01
A cloud workflow system is a kind of platform service based on cloud computing that facilitates the automation of workflow applications. Among the factors that distinguish cloud workflow systems from their counterparts, the market-oriented business model is one of the most prominent. The optimization of task-level scheduling in cloud workflow systems is a hot topic. As the scheduling is an NP-hard problem, Ant Colony Optimization (ACO) and Particle Swarm Optimization (PSO) have been proposed to optimize the cost. However, they are prone to premature convergence during optimization and therefore cannot effectively reduce the cost. To solve these problems, a Chaotic Particle Swarm Optimization (CPSO) algorithm with a chaotic sequence and an adaptive inertia weight factor is applied to task-level scheduling. The chaotic sequence, with its high randomness, improves the diversity of solutions, and its regularity assures good global convergence. The adaptive inertia weight factor depends on the estimated value of the cost; it helps the scheduling avoid premature convergence by properly balancing global and local exploration. Experimental simulation shows that the cost obtained by our scheduling is always lower than that of the two representative counterparts. PMID:26357510
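A minimal sketch of the two ingredients named above, a logistic-map chaotic sequence in place of uniform random draws and an inertia weight adapted to each particle's cost, is shown below on a toy objective; the paper's task-scheduling cost model is not reproduced.

```python
# Minimal sketch of Chaotic PSO: logistic-map sequences drive the stochastic
# terms and the inertia weight adapts to each particle's cost. Toy objective.
import numpy as np

rng = np.random.default_rng(1)
def cost(x):                          # stand-in for a workflow execution cost
    return np.sum(x ** 2, axis=1)

n, dim, iters = 30, 5, 200
x = rng.uniform(-10, 10, (n, dim))
v = np.zeros((n, dim))
z = rng.uniform(0.05, 0.95, (n, dim))    # chaotic state per particle/dimension
pbest, pcost = x.copy(), cost(x)
gbest = pbest[pcost.argmin()].copy()

for _ in range(iters):
    z = 4.0 * z * (1.0 - z)              # logistic map, fully chaotic at r=4
    f = cost(x)
    # adaptive inertia: high-cost particles explore, low-cost ones exploit
    w = 0.4 + 0.5 * (f - f.min()) / (f.max() - f.min() + 1e-12)
    v = w[:, None] * v + 2.0 * z * (pbest - x) + 2.0 * (1 - z) * (gbest - x)
    x = x + v
    f = cost(x)
    improved = f < pcost
    pbest[improved], pcost[improved] = x[improved], f[improved]
    gbest = pbest[pcost.argmin()].copy()

print("best cost found:", pcost.min())
```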
Method for Household Refrigerators Efficiency Increasing
NASA Astrophysics Data System (ADS)
Lebedev, V. V.; Sumzina, L. V.; Maksimov, A. V.
2017-11-01
The relevance of optimizing working-process parameters in air conditioning systems is demonstrated in this work. The research is performed with the simulation modeling method. The parameter optimization criteria are considered, an analysis of the target functions is given, and the key factors of technical and economic optimization are discussed. The search for the optimal solution in the multi-purpose optimization of the system is made by finding the minimum of the dual-target vector, created by the Pareto method of linear and weight compromises from the target functions of total capital costs and total operating costs. The tasks are solved in the MathCAD environment. The research results show that the values of the technical and economic parameters of air conditioning systems outside the optimal areas deviate considerably from the minimum values, and that these deviations grow significantly as the technical parameters move away from the values that are optimal for both capital investments and operating costs. Producing and operating conditioners with parameters that deviate considerably from the optimal values will increase material and power costs. The research allows one to establish the borders of the area of optimal values for the technical and economic parameters in the design of air conditioning systems.
Recursive Optimization of Digital Circuits
1990-12-14
[Contents listing recovered from the report front matter: Appendix A includes non-MDS optimization of SAMPLE; Appendix B lists the BORIS Recursive Optimization System software files (DESIGN.S, PARSE.S, TABULAR.S, MDS.S, and COST.S).]
"Optimal" Size and Schooling: A Relative Concept.
ERIC Educational Resources Information Center
Swanson, Austin D.
Issues in economies of scale and optimal school size are discussed in this paper, which seeks to explain the curvilinear nature of the educational cost curve as a function of "transaction costs" and to establish "optimal size" as a relative concept. Based on the argument that educational consolidation has facilitated diseconomies of scale, the…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rosenhoover, W.A.; Stouffer, M.R.; Withum, J.A.
1994-12-01
The objective of this research project is to develop second-generation duct injection technology as a cost-effective SO{sub 2} control option for the 1990 Clean Air Act Amendments. Research is focused on the Advanced Coolside process, which has shown the potential for achieving the performance targets of 90% SO{sub 2} removal and 60% sorbent utilization. In Subtask 2.2, Design Optimization, process improvement was sought by optimizing sorbent recycle and by optimizing process equipment for reduced cost. The pilot plant recycle testing showed that 90% SO{sub 2} removal could be achieved at sorbent utilizations up to 75%. This testing also showed that the Advanced Coolside process has the potential to achieve very high removal efficiency (90 to greater than 99%). Two alternative contactor designs were developed, tested and optimized through pilot plant testing; the improved designs will reduce process costs significantly, while maintaining operability and performance essential to the process. Also, sorbent recycle handling equipment was optimized to reduce cost.
Optimal design of the satellite constellation arrangement reconfiguration process
NASA Astrophysics Data System (ADS)
Fakoor, Mahdi; Bakhtiari, Majid; Soleymani, Mahshid
2016-08-01
In this article, a novel approach is introduced for satellite constellation reconfiguration based on Lambert's theorem. Several critical problems arise in the reconfiguration phase, such as minimizing the overall fuel cost, avoiding collisions between the satellites in the final orbital pattern, and determining the maneuvers needed to deploy each satellite to its desired position in the target constellation. To implement the reconfiguration phase of the satellite constellation arrangement at minimal cost, the hybrid Invasive Weed Optimization/Particle Swarm Optimization (IWO/PSO) algorithm is used to design sub-optimal transfer orbits for the satellites existing in the constellation. Also, the dynamic model of the problem is formulated in such a way that the optimal assignment of satellites to the initial and target orbits and the optimal orbital transfer are combined in one step. Finally, we claim that our presented idea, i.e., coupled non-simultaneous flight of satellites from the initial orbital pattern, leads to minimal cost. The obtained results show that employing the presented method reduces the cost of the reconfiguration process considerably.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harrison, T. J.
2014-02-01
The cost of nuclear power is a straightforward yet complicated topic. It is straightforward in that the cost of nuclear power is a function of the cost to build the nuclear power plant, the cost to operate and maintain it, and the cost to provide fuel for it. It is complicated in that some of those costs are not necessarily known, introducing uncertainty into the analysis. For large light water reactor (LWR)-based nuclear power plants, the uncertainty is mainly contained within the cost of construction. The typical costs of operations and maintenance (O&M), as well as fuel, are well known based on the current fleet of LWRs. However, the last currently operating reactor to come online was Watts Bar 1 in May 1996; thus, the expected construction costs for gigawatt (GW)-class reactors in the United States are based on information nearly two decades old. Extrapolating construction, O&M, and fuel costs from GW-class LWRs to LWR-based small modular reactors (SMRs) introduces even more complication. The per-installed-kilowatt construction costs for SMRs are likely to be higher than those for the GW-class reactors based on the property of the economy of scale. Generally speaking, the economy of scale is the tendency for overall costs to increase slower than the overall production capacity. For power plants, this means that doubling the power production capacity would be expected to cost less than twice as much. Applying this property in the opposite direction, halving the power production capacity would be expected to cost more than half as much. This can potentially make the SMRs less competitive in the electricity market against the GW-class reactors, as well as against other power sources such as natural gas and subsidized renewables. One factor that can potentially aid the SMRs in achieving economic competitiveness is an economy of numbers, as opposed to the economy of scale, associated with learning curves. The basic concept of the learning curve is that the more a new process is repeated, the more efficient the process can be made. Assuming that efficiency directly relates to cost means that the more a new process is repeated successfully and efficiently, the less costly the process can be made. This factor ties directly into the factory fabrication and modularization aspect of the SMR paradigm: manufacturing serial, standardized, identical components for use in nuclear power plants can allow the SMR industry to use the learning curves to predict and optimize deployment costs.
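The two opposing effects described here are commonly quantified with the scaling law C = C_ref (P/P_ref)^n and the learning curve C_k = C_1 k^(log2 r). The sketch below uses assumed figures, not published SMR costs.

```python
# Back-of-envelope sketch of the economy of scale vs. economy of numbers.
# Reference cost, scaling exponent n, and learning rate r are illustrative.
import math

C_ref, P_ref = 6.0e9, 1000.0   # $6B for a 1000 MWe GW-class plant (assumed)
n = 0.6                        # economy-of-scale exponent (assumed)
P_smr = 180.0                  # SMR module size, MWe (assumed)

first_smr = C_ref * (P_smr / P_ref) ** n
print(f"first SMR module: ${first_smr/1e9:.2f}B "
      f"(${first_smr/(P_smr*1e3):,.0f}/kW vs ${C_ref/(P_ref*1e3):,.0f}/kW)")

r = 0.9                        # 90% learning: each doubling cuts cost by 10%
b = math.log2(r)
for k in (1, 2, 4, 8, 16):     # cost of the k-th serially produced module
    print(f"module {k:2d}: ${first_smr * k ** b / 1e9:.2f}B")
```

With these assumptions the first module is far costlier per kilowatt than the GW-class plant, but serial production steadily narrows the gap, which is the trade-off the report describes.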
Optimal timing for managed relocation of species faced with climate change
NASA Astrophysics Data System (ADS)
McDonald-Madden, Eve; Runge, Michael C.; Possingham, Hugh P.; Martin, Tara G.
2011-08-01
Managed relocation is a controversial climate-adaptation strategy to combat negative climate change impacts on biodiversity. While the scientific community debates the merits of managed relocation, species are already being moved to new areas predicted to be more suitable under climate change. To inform these moves, we construct a quantitative decision framework to evaluate the timing of relocation in the face of climate change. We find that the optimal timing depends on many factors, including the size of the population, the demographic costs of translocation and the expected carrying capacities over time in the source and destination habitats. In some settings, such as when a small population would benefit from time to grow before risking translocation losses, haste is ill advised. We also find that active adaptive management is valuable when the effect of climate change on source habitat is uncertain, and leads to delayed movement.
Design of a portable artificial heart drive system based on efficiency analysis.
Kitamura, T
1986-11-01
This paper discusses a computer simulation of a pneumatic portable piston-type artificial heart drive system with a linear d-c motor. The purpose of the design is to obtain an artificial heart drive system with high efficiency and small dimensions to enhance portability. The design considers two factors contributing to the total efficiency of the drive system. First, the dimensions of the pneumatic actuator were optimized under a cost function of the total efficiency. Second, the motor performance was studied in terms of efficiency. More than 50 percent of the input energy of the actuator under practical loads is consumed in the armature circuit in all linear d-c motors with brushes. An optimal design has a piston cross-sectional area of 10.5 cm² and a cylinder longitudinal length of 10 cm. The total efficiency could reach 25 percent by improving the gasket to reduce the frictional force.
Rapid and Facile Microwave-Assisted Surface Chemistry for Functionalized Microarray Slides
Lee, Jeong Heon; Hyun, Hoon; Cross, Conor J.; Henary, Maged; Nasr, Khaled A.; Oketokoun, Rafiou; Choi, Hak Soo; Frangioni, John V.
2011-01-01
We describe a rapid and facile method for surface functionalization and ligand patterning of glass slides based on microwave-assisted synthesis and a microarraying robot. Our optimized reaction enables surface modification 42 times faster than conventional techniques and includes a carboxylated self-assembled monolayer, polyethylene glycol linkers of varying length, and stable amide bonds to small molecule, peptide, or protein ligands to be screened for binding to living cells. We also describe customized slide racks that permit functionalization of 100 slides at a time to produce a cost-efficient, highly reproducible batch process. Ligand spots can be positioned on the glass slides precisely using a microarraying robot, and spot size adjusted for any desired application. Using this system, we demonstrate live cell binding to a variety of ligands and optimize PEG linker length. Taken together, the technology we describe should enable high-throughput screening of disease-specific ligands that bind to living cells. PMID:23467787
Optimal control of malaria: combining vector interventions and drug therapies.
Khamis, Doran; El Mouden, Claire; Kura, Klodeta; Bonsall, Michael B
2018-04-24
The sterile insect technique and transgenic equivalents are considered promising tools for controlling vector-borne disease in an age of increasing insecticide and drug-resistance. Combining vector interventions with artemisinin-based therapies may achieve the twin goals of suppressing malaria endemicity while managing artemisinin resistance. While the cost-effectiveness of these controls has been investigated independently, their combined usage has not been dynamically optimized in response to ecological and epidemiological processes. An optimal control framework based on coupled models of mosquito population dynamics and malaria epidemiology is used to investigate the cost-effectiveness of combining vector control with drug therapies in homogeneous environments with and without vector migration. The costs of endemic malaria are weighed against the costs of administering artemisinin therapies and releasing modified mosquitoes using various cost structures. Larval density dependence is shown to reduce the cost-effectiveness of conventional sterile insect releases compared with transgenic mosquitoes with a late-acting lethal gene. Using drug treatments can reduce the critical vector control release ratio necessary to cause disease fadeout. Combining vector control and drug therapies is the most effective and efficient use of resources, and using optimized implementation strategies can substantially reduce costs.
Optimal design criteria - prediction vs. parameter estimation
NASA Astrophysics Data System (ADS)
Waldl, Helmut
2014-05-01
G-optimality is a popular design criterion for optimal prediction; it tries to minimize the kriging variance over the whole design region. A G-optimal design minimizes the maximum variance of all predicted values. If we use kriging methods for prediction, it is self-evident to use the kriging variance as a measure of uncertainty for the estimates. However, computing the kriging variance, and even more so the empirical kriging variance, is computationally very costly, and finding the maximum kriging variance in high-dimensional regions can be so time-demanding that the G-optimal design cannot really be found in practice with currently available computer equipment. We cannot always avoid this problem by using space-filling designs, because small designs that minimize the empirical kriging variance are often non-space-filling. D-optimality is the design criterion related to parameter estimation. A D-optimal design maximizes the determinant of the information matrix of the estimates. D-optimality in terms of trend parameter estimation and D-optimality in terms of covariance parameter estimation yield basically different designs. The Pareto frontier of these two competing determinant criteria corresponds to designs that perform well under both criteria. Under certain conditions, searching for the G-optimal design on the above Pareto frontier yields almost as good results as searching for the G-optimal design in the whole design region, while the maximum of the empirical kriging variance has to be computed only a few times. The method is demonstrated by means of a computer simulation experiment based on data provided by the Belgian institute Management Unit of the North Sea Mathematical Models (MUMM) that describe the evolution of inorganic and organic carbon and nutrients, phytoplankton, bacteria and zooplankton in the Southern Bight of the North Sea.
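For the D-optimality side, a small sketch is easy to give: greedily grow a design by adding the candidate point that most increases det(XᵀX) for a linear trend model. The candidate grid and design size below are arbitrary, and the kriging-variance machinery of the study is not reproduced.

```python
# Sketch of greedy D-optimal design selection for a simple linear trend model:
# repeatedly add the candidate point that most increases det(X^T X).
import itertools
import numpy as np

rng = np.random.default_rng(2)
cand = np.array(list(itertools.product(np.linspace(0, 1, 6), repeat=2)))

def model_matrix(pts):              # intercept + linear trend in two inputs
    return np.column_stack([np.ones(len(pts)), pts])

design = [int(rng.integers(len(cand)))]      # random starting point
while len(design) < 8:
    best_gain, best_j = -np.inf, None
    for j in range(len(cand)):
        if j in design:
            continue
        X = model_matrix(cand[design + [j]])
        gain = np.linalg.det(X.T @ X)        # information determinant
        if gain > best_gain:
            best_gain, best_j = gain, j
    design.append(best_j)

print("greedy D-optimal design points:\n", cand[design])
```

Typically the greedy rule pushes points toward the corners of the region, illustrating why D-optimal designs for trend estimation are often non-space-filling.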
Reliability-based management of buried pipelines considering external corrosion defects
NASA Astrophysics Data System (ADS)
Miran, Seyedeh Azadeh
Corrosion is one of the main deterioration mechanisms that degrade energy pipeline integrity, as pipelines transport corrosive fluids or gas and interact with a corrosive environment. Corrosion defects are usually detected by periodic inspections using in-line inspection (ILI) methods. To ensure pipeline safety, this study develops a cost-effective maintenance strategy that consists of three aspects: corrosion growth model development using ILI data, time-dependent performance evaluation, and optimal inspection interval determination. In particular, the proposed study is applied to a cathodically protected buried steel pipeline located in Mexico. First, a time-dependent power-law formulation is adopted to probabilistically characterize the growth of the maximum depth and length of external corrosion defects. Dependency between defect depth and length is considered in the model development, and the generation of corrosion defects over time is characterized by a homogeneous Poisson process. The growth models' unknown parameters are evaluated from the ILI data through the Bayesian updating method with the Markov Chain Monte Carlo (MCMC) simulation technique. The proposed corrosion growth models can be used when either matched or non-matched defects are available, and they can account for defects newly generated since the last inspection. Results of this part of the study show that both the depth and length growth models predict damage quantities reasonably well, and a strong correlation between defect depth and length is found. Next, time-dependent system failure probabilities are evaluated using the developed corrosion growth models, considering the prevailing uncertainties; three failure modes, namely small leak, large leak and rupture, are considered. Pipeline performance is evaluated through the failure probability per km (called a sub-system), where each sub-system is considered a series system of the detected and newly generated defects within it. A sensitivity analysis is also performed to determine to which parameters of the growth models the reliability of the studied pipeline is most sensitive. The reliability analysis results suggest that newly generated defects should be considered when calculating the failure probability, especially for predicting the long-term performance of the pipeline, and that the statistical uncertainty in the model parameters has a significant impact that should be considered in the reliability analysis. Finally, with the evaluated time-dependent failure probabilities, a life-cycle cost analysis is conducted to determine the optimal inspection interval of the studied pipeline. The expected total life-cycle cost consists of the construction cost and the expected costs of inspections, repair, and failure. A repair is conducted when the failure probability for any of the described failure modes exceeds a pre-defined probability threshold after an inspection. Moreover, this study investigates the impact of the repair threshold values and of the unit costs of inspection and failure on the expected total life-cycle cost and the optimal inspection interval through a parametric study. The analysis suggests that a smaller inspection interval leads to higher inspection costs but can lower the failure cost, and that the repair cost is less significant compared with the inspection and failure costs.
Sabatini, Linda M; Mathews, Charles; Ptak, Devon; Doshi, Shivang; Tynan, Katherine; Hegde, Madhuri R; Burke, Tara L; Bossler, Aaron D
2016-05-01
The increasing use of advanced nucleic acid sequencing technologies for clinical diagnostics and therapeutics has made it vital to understand the costs of performing these procedures and their value to patients, providers, and payers. The Association for Molecular Pathology invested in a cost and value analysis of specific genomic sequencing procedures (GSPs) newly coded by the American Medical Association Current Procedural Terminology Editorial Panel. Cost data and work effort, including the development and use of data analysis pipelines, were gathered from representative laboratories currently performing these GSPs. Results were aggregated to generate representative cost ranges given the complexity and variability of performing the tests. Cost-impact models for three clinical scenarios were generated with assistance from key opinion leaders: impact of using a targeted gene panel in optimizing care for patients with advanced non-small-cell lung cancer, use of a targeted gene panel in the diagnosis and management of patients with sensorineural hearing loss, and exome sequencing in the diagnosis and management of children with neurodevelopmental disorders of unknown genetic etiology. Each model demonstrated value by either reducing health care costs or identifying appropriate care pathways. The templates generated will aid laboratories in assessing their individual costs, considering the value structure in their own patient populations, and contributing their data to the ongoing dialogue regarding the impact of GSPs on improving patient care. Copyright © 2016 American Society for Investigative Pathology and the Association for Molecular Pathology. Published by Elsevier Inc. All rights reserved.
Launch Vehicle Propulsion Parameter Design Multiple Selection Criteria
NASA Technical Reports Server (NTRS)
Shelton, Joey Dewayne
2004-01-01
The optimization tool described herein addresses and emphasizes the use of computer tools to model a system and focuses on a concept development approach for a liquid hydrogen/liquid oxygen single-stage-to-orbit system, but more particularly the development of the optimized system using new techniques. This methodology uses new and innovative tools to run Monte Carlo simulations, genetic algorithm solvers, and statistical models in order to optimize a design concept. The concept launch vehicle and propulsion system were modeled and optimized to determine the best design for weight and cost by varying design and technology parameters. Uncertainty levels were applied using Monte Carlo Simulations and the model output was compared to the National Aeronautics and Space Administration Space Shuttle Main Engine. Several key conclusions are summarized here for the model results. First, the Gross Liftoff Weight and Dry Weight were 67% higher for the design case for minimization of Design, Development, Test and Evaluation cost when compared to the weights determined by the minimization of Gross Liftoff Weight case. In turn, the Design, Development, Test and Evaluation cost was 53% higher for optimized Gross Liftoff Weight case when compared to the cost determined by case for minimization of Design, Development, Test and Evaluation cost. Therefore, a 53% increase in Design, Development, Test and Evaluation cost results in a 67% reduction in Gross Liftoff Weight. Secondly, the tool outputs define the sensitivity of propulsion parameters, technology and cost factors and how these parameters differ when cost and weight are optimized separately. A key finding was that for a Space Shuttle Main Engine thrust level the oxidizer/fuel ratio of 6.6 resulted in the lowest Gross Liftoff Weight rather than at 5.2 for the maximum specific impulse, demonstrating the relationships between specific impulse, engine weight, tank volume and tank weight. Lastly, the optimum chamber pressure for Gross Liftoff Weight minimization was 2713 pounds per square inch as compared to 3162 for the Design, Development, Test and Evaluation cost optimization case. This chamber pressure range is close to 3000 pounds per square inch for the Space Shuttle Main Engine.
Solving large scale unit dilemma in electricity system by applying commutative law
NASA Astrophysics Data System (ADS)
Legino, Supriadi; Arianto, Rakhmat
2018-03-01
The conventional system, pooling resources through large centralized power plants interconnected as a network, provides many advantages compared to isolated systems, including optimized efficiency and reliability. However, such large plants need huge capital, and more problems have emerged to hinder the construction of big power plants as well as their associated transmission lines. By applying the commutative law of mathematics, ab = ba for all a, b ∈ R, the problems associated with the conventional system depicted above can be reduced. The idea of having many small power plants rather than a few large units, namely “Listrik Kerakyatan,” abbreviated as LK, provides both social and environmental benefits that can be capitalized under proper assumptions. This study compares the cost and benefit of LK to those of the conventional system, using a simulation method to show that LK offers an alternative solution to many problems associated with the large system. The commutative law of algebra can be used as a simple mathematical model to analyze whether the LK system, as an eco-friendly form of distributed generation, can solve various problems associated with a large-scale conventional system. The simulation results show that LK provides more value if its plants operate for less than 11 hours a day as peaker or load-follower power plants to improve the load-curve balance of the power system. The results also indicate that the investment cost of an LK plant should be optimized in order to minimize the plant investment cost. This study indicates that the benefit of the economies-of-scale principle does not always apply to every condition, particularly when the portion of intangible costs and benefits is relatively high.
Alejo-Alvarez, Luz; Guzmán-Fierro, Víctor; Fernández, Katherina; Roeckel, Marlene
2016-11-01
A full-scale process for the treatment of 80 tons per day of poultry manure was designed and optimized. A total ammonia nitrogen (TAN) balance was performed at steady state, considering the stoichiometry and the kinetic data from the anaerobic digestion and the anaerobic ammonia oxidation. The equipment, reactor design, investment costs, and operational costs were considered. The volume and cost objective functions optimized the process in terms of three variables: the water recycle ratio, the protein conversion during AD, and the TAN conversion in the process. The processes were compared with and without water recycle; savings of 70% and 43% in the annual fresh water consumption and the heating costs, respectively, were achieved. The optimal process complies with the Chilean environmental legislation limit of 0.05 g total nitrogen/L.
A Scheme to Optimize Flow Routing and Polling Switch Selection of Software Defined Networks
Chen, Huan; Li, Lemin; Ren, Jing; Wang, Yang; Zhao, Yangming; Wang, Xiong; Wang, Sheng; Xu, Shizhong
2015-01-01
This paper aims at minimizing the communication cost of collecting flow information in Software Defined Networks (SDN). Since the flow-based information collection method requires too much communication cost, and the recently proposed switch-based method cannot benefit from controlling flow routing, we propose to jointly optimize flow routing and polling switch selection to reduce the communication cost. To this end, the joint optimization problem is first formulated as an Integer Linear Programming (ILP) model. Since the ILP model is intractable for large networks, we also design an optimal algorithm for multi-rooted tree topologies and an efficient heuristic algorithm for general topologies. Extensive simulations show that our method can save up to 55.76% of the communication cost compared with the state-of-the-art switch-based scheme. PMID:26690571
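The polling-switch selection part of the problem has the flavor of weighted set cover: every flow must traverse at least one polled switch, at minimum total polling cost. The sketch below applies the classic greedy ratio heuristic to an invented instance; it is not necessarily the paper's own heuristic.

```python
# Hedged sketch of polling-switch selection as weighted set cover: greedily
# pick the switch with the best (newly covered flows) / (polling cost) ratio.
flows_through = {                       # switch -> set of flow ids it sees
    "s1": {1, 2, 3}, "s2": {3, 4}, "s3": {4, 5, 6}, "s4": {1, 6},
}
poll_cost = {"s1": 3.0, "s2": 1.0, "s3": 2.5, "s4": 1.5}
all_flows = set().union(*flows_through.values())

covered, chosen = set(), []
while covered != all_flows:
    switch = max(flows_through,
                 key=lambda s: len(flows_through[s] - covered) / poll_cost[s])
    if not flows_through[switch] - covered:
        break                           # remaining flows are uncoverable
    chosen.append(switch)
    covered |= flows_through[switch]

print("polled switches:", chosen,
      "total cost:", sum(poll_cost[s] for s in chosen))
```

Controlling flow routing, as the paper does, effectively lets the optimizer reshape the `flows_through` sets themselves, which is why the joint problem can beat switch selection alone.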
Marseille, Elliot; Dandona, Lalit; Marshall, Nell; Gaist, Paul; Bautista-Arredondo, Sergio; Rollins, Brandi; Bertozzi, Stefano M; Coovadia, Jerry; Saba, Joseph; Lioznov, Dmitry; Du Plessis, Jo-Ann; Krupitsky, Evgeny; Stanley, Nicci; Over, Mead; Peryshkina, Alena; Kumar, S G Prem; Muyingo, Sowedi; Pitter, Christian; Lundberg, Mattias; Kahn, James G
2007-07-12
Economic theory and limited empirical data suggest that costs per unit of HIV prevention program output (unit costs) will initially decrease as small programs expand. Unit costs may then reach a nadir and start to increase if expansion continues beyond the economically optimal size. Information on the relationship between scale and unit costs is critical to project the cost of global HIV prevention efforts and to allocate prevention resources efficiently. The "Prevent AIDS: Network for Cost-Effectiveness Analysis" (PANCEA) project collected 2003 and 2004 cost and output data from 206 HIV prevention programs of six types in five countries. The association between scale and efficiency for each intervention type was examined for each country. Our team characterized the direction, shape, and strength of this association by fitting bivariate regression lines to scatter plots of output levels and unit costs. We chose the regression forms with the highest explanatory power (R2). Efficiency increased with scale, across all countries and interventions. This association varied within intervention and within country, in terms of the range in scale and efficiency, the best fitting regression form, and the slope of the regression. The fraction of variation in efficiency explained by scale ranged from 26-96%. Doubling in scale resulted in reductions in unit costs averaging 34.2% (ranging from 2.4% to 58.0%). Two regression trends, in India, suggested an inflection point beyond which unit costs increased. Unit costs decrease with scale across a wide range of service types and volumes. These country and intervention-specific findings can inform projections of the global cost of scaling up HIV prevention efforts.
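The reported percentage reduction per doubling corresponds to the slope of a log-log regression of unit cost on scale: if the fitted slope is b, each doubling multiplies unit cost by 2^b. The sketch below fits such a line to synthetic data generated to mimic the average 34.2% reduction.

```python
# Sketch of the scale-efficiency analysis: fit log(unit cost) on log(scale);
# slope b implies a (1 - 2**b) unit-cost reduction per doubling of scale.
import numpy as np

rng = np.random.default_rng(3)
scale = np.exp(rng.uniform(np.log(100), np.log(100000), 80))  # program output
true_b = np.log2(1 - 0.342)          # slope giving ~34.2% drop per doubling
unit_cost = 500 * scale ** true_b * np.exp(rng.normal(0, 0.25, 80))  # synthetic

b, log_a = np.polyfit(np.log(scale), np.log(unit_cost), 1)
r2 = np.corrcoef(np.log(scale), np.log(unit_cost))[0, 1] ** 2
print(f"fitted slope {b:.3f} (R^2={r2:.2f}) -> "
      f"{(1 - 2 ** b) * 100:.1f}% unit-cost reduction per doubling of scale")
```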
Conserving rare species can have high opportunity costs for common species.
Neeson, Thomas M; Doran, Patrick J; Ferris, Michael C; Fitzpatrick, Kimberly B; Herbert, Matthew; Khoury, Mary; Moody, Allison T; Ross, Jared; Yacobson, Eugene; McIntyre, Peter B
2018-04-13
Conservation practitioners face difficult choices in apportioning limited resources between rare species (to ensure their existence) and common species (to ensure their abundance and ecosystem contributions). We quantified the opportunity costs of conserving rare species of migratory fishes in the context of removing dams and retrofitting road culverts across 1,883 tributaries of the North American Great Lakes. Our optimization models show that maximizing total habitat gains across species can be very efficient in terms of benefits achieved per dollar spent, but disproportionately benefits common species. Conservation approaches that target rare species, or that ensure some benefits for every species (i.e., complementarity) enable strategic allocation of resources among species but reduce aggregate habitat gains. Thus, small habitat gains for the rarest species necessarily come at the expense of more than 20 times as much habitat for common ones. These opportunity costs are likely to occur in many ecosystems because range limits and conservation costs often vary widely among species. Given that common species worldwide are declining more rapidly than rare ones within major taxa, our findings provide incentive for triage among multiple worthy conservation targets. © 2018 John Wiley & Sons Ltd.
A consensus opinion model based on the evolutionary game
NASA Astrophysics Data System (ADS)
Yang, Han-Xin
2016-08-01
We propose a consensus opinion model based on the evolutionary game. In our model, two connected agents both receive a benefit if they hold the same opinion; otherwise they both pay a cost. Agents update their opinions by comparing payoffs with neighbors: the opinion of an agent with a higher payoff is more likely to be imitated. We apply this model to scale-free networks with tunable degree distribution. Interestingly, we find that there exists an optimal ratio of cost to benefit that leads to the shortest consensus time. A qualitative analysis is obtained by examining the evolution of the opinion clusters. Moreover, we find that the consensus time decreases as the average degree of the network increases, but increases with the noise introduced to permit irrational choices. The dependence of the consensus time on the network size is found to follow a power law. For small or large ratios of cost to benefit, the consensus time decreases as the degree exponent increases; however, for moderate ratios, the consensus time increases with the degree exponent. Our results may provide new insights into opinion dynamics driven by evolutionary game theory.
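A minimal sketch of this kind of dynamics is shown below: agents earn a benefit b per agreeing neighbour, pay a cost c per disagreeing one, and imitate neighbours through a noisy (Fermi) comparison of payoffs. A ring lattice stands in for the paper's tunable scale-free networks, and all parameter values are assumptions.

```python
# Minimal sketch of the consensus model: payoff b per agreeing neighbour,
# cost c per disagreeing one, imitation via a noisy Fermi rule.
import numpy as np

rng = np.random.default_rng(4)
N, k, b, c, K = 200, 4, 1.0, 0.4, 0.1   # K = noise permitting irrational moves
opinion = rng.integers(0, 2, N)
neigh = [[(i + d) % N for d in (-2, -1, 1, 2)] for i in range(N)]

def payoff(i):
    same = sum(opinion[j] == opinion[i] for j in neigh[i])
    return b * same - c * (k - same)

steps = 0
while len(set(opinion)) > 1 and steps < 500_000:
    i = rng.integers(N)
    j = neigh[i][rng.integers(k)]
    if opinion[i] != opinion[j]:
        # imitate j with probability increasing in the payoff difference
        p = 1.0 / (1.0 + np.exp((payoff(i) - payoff(j)) / K))
        if rng.random() < p:
            opinion[i] = opinion[j]
    steps += 1

print("consensus" if len(set(opinion)) == 1 else "no consensus",
      "after", steps, "updates")
```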
Setzler, Brian P; Zhuang, Zhongbin; Wittkopf, Jarrid A; Yan, Yushan
2016-12-06
Fuel cells are the zero-emission automotive power source that best preserves the advantages of gasoline automobiles: low upfront cost, long driving range and fast refuelling. To make fuel-cell cars a reality, the US Department of Energy has set a fuel cell system cost target of US$30 kW-1 in the long-term, which equates to US$2,400 per vehicle, excluding several major powertrain components (in comparison, a basic, but complete, internal combustion engine system costs approximately US$3,000). To date, most research for automotive applications has focused on proton exchange membrane fuel cells (PEMFCs), because these systems have demonstrated the highest power density. Recently, however, an alternative technology, hydroxide exchange membrane fuel cells (HEMFCs), has gained significant attention, because of the possibility to use stable platinum-group-metal-free catalysts, with inherent, long-term cost advantages. In this Perspective, we discuss the cost profile of PEMFCs and the advantages offered by HEMFCs. In particular, we discuss catalyst development needs for HEMFCs and set catalyst activity targets to achieve performance parity with state-of-the-art automotive PEMFCs. Meeting these targets requires careful optimization of nanostructures to pack high surface areas into a small volume, while maintaining high area-specific activity and favourable pore-transport properties.
NASA Astrophysics Data System (ADS)
Feyen, Luc; Gorelick, Steven M.
2005-03-01
We propose a framework that combines simulation optimization with Bayesian decision analysis to evaluate the worth of hydraulic conductivity data for optimal groundwater resources management in ecologically sensitive areas. A stochastic simulation optimization management model is employed to plan regionally distributed groundwater pumping while preserving the hydroecological balance in wetland areas. Because predictions made by an aquifer model are uncertain, groundwater supply systems operate below maximum yield. Collecting data from the groundwater system can potentially reduce predictive uncertainty and increase safe water production. The price paid for improvement in water management is the cost of collecting the additional data. Efficient data collection using Bayesian decision analysis proceeds in three stages: (1) The prior analysis determines the optimal pumping scheme and profit from water sales on the basis of known information. (2) The preposterior analysis estimates the optimal measurement locations and evaluates whether each sequential measurement will be cost-effective before it is taken. (3) The posterior analysis then revises the prior optimal pumping scheme and consequent profit, given the new information. Stochastic simulation optimization employing a multiple-realization approach is used to determine the optimal pumping scheme in each of the three stages. The cost of new data must not exceed the expected increase in benefit obtained in optimal groundwater exploitation. An example based on groundwater management practices in Florida aimed at wetland protection showed that the cost of data collection more than paid for itself by enabling a safe and reliable increase in production.
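The three-stage logic can be illustrated with a scalar toy problem: profit depends on an uncertain aquifer property, and a noisy measurement is worth collecting only if the preposterior gain in expected profit exceeds the data cost. The Gaussian prior, measurement noise, and profit function below are invented stand-ins for the full simulation-optimization model.

```python
# Scalar sketch of the three-stage data-worth logic: (1) prior-optimal pumping,
# (2) preposterior value averaged over possible measurement outcomes, and a
# worth-it test against the data cost. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(5)
mu, sd = 1.0, 0.4          # prior on the safe yield theta (uncertain property)
meas_sd = 0.2              # noise of the proposed measurement
price, penalty, data_cost = 10.0, 40.0, 1.5

def exp_profit(q, theta):  # revenue minus expected ecological penalty
    return price * q - penalty * np.mean(np.maximum(q - theta, 0.0))

def best_value(theta):     # optimal pumping over a grid
    qs = np.linspace(0.0, 2.5, 101)
    return max(exp_profit(q, theta) for q in qs)

# (1) prior analysis
v_prior = best_value(rng.normal(mu, sd, 20000))

# (2) preposterior analysis with conjugate Gaussian updating of theta given y
vals = []
for _ in range(150):
    y = rng.normal(rng.normal(mu, sd), meas_sd)   # simulated measurement
    post_var = 1.0 / (1.0 / sd**2 + 1.0 / meas_sd**2)
    post_mu = post_var * (mu / sd**2 + y / meas_sd**2)
    vals.append(best_value(rng.normal(post_mu, np.sqrt(post_var), 4000)))
v_post = np.mean(vals)

print(f"prior value {v_prior:.2f}, preposterior value {v_post:.2f}, "
      f"measurement worthwhile: {v_post - v_prior > data_cost}")
```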
Economic Evaluation of Manitoba Health Lines in the Management of Congestive Heart Failure
Cui, Yang; Doupe, Malcolm; Katz, Alan; Nyhof, Paul; Forget, Evelyn L.
2013-01-01
Objective: This one-year study investigated whether the Manitoba Provincial Health Contact program for congestive heart failure (CHF) is a cost-effective intervention relative to standard treatment. Design: An individual patient-level randomized clinical trial with a cost-effectiveness model, using data from the Health Research Data Repository at the Manitoba Centre for Health Policy, University of Manitoba. Methods: A total of 179 patients aged 40 and over with a diagnosis of CHF levels II to IV were recruited from Winnipeg and Central Manitoba and randomized into three treatment groups: one receiving standard care, a second receiving the Health Lines (HL) intervention, and a third receiving the Health Lines intervention plus in-house monitoring (HLM). A cost-effectiveness study was conducted in which outcomes were measured in terms of QALYs derived from the SF-36 and costs in 2005 Canadian dollars. Costs included the intervention and healthcare utilization. Bootstrap-resampled incremental cost-effectiveness ratios were computed to take into account the uncertainty related to the small sample size. Results: The total per-patient mean costs (including intervention cost) were not significantly different between study groups. Both interventions (HL and HLM) cost less and are more effective than standard care, with HL able to produce an additional QALY relative to HLM for $2,975. The sensitivity analysis revealed an 85.8% probability that HL is cost-effective if decision-makers are willing to pay $50,000. Conclusion: The findings demonstrate that the HL intervention from the Manitoba Provincial Health Contact program for CHF is an optimal intervention strategy for CHF management compared to standard care and HLM. PMID:24359716
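The bootstrap step can be sketched compactly: resample patients in each arm, recompute incremental costs and QALYs, and count how often the net monetary benefit is positive at a $50,000 willingness to pay. The data below are synthetic, not the trial's.

```python
# Sketch of bootstrap-resampled cost-effectiveness analysis for two arms.
import numpy as np

rng = np.random.default_rng(6)
n = 60                                     # patients per arm (synthetic)
cost_std = rng.normal(12000, 4000, n); qaly_std = rng.normal(0.70, 0.15, n)
cost_hl  = rng.normal(11000, 4000, n); qaly_hl  = rng.normal(0.74, 0.15, n)

wtp, B = 50000.0, 5000                     # willingness to pay, resamples
ce, icers = 0, []
for _ in range(B):
    i = rng.integers(0, n, n); j = rng.integers(0, n, n)   # resample each arm
    d_cost = cost_hl[i].mean() - cost_std[j].mean()
    d_qaly = qaly_hl[i].mean() - qaly_std[j].mean()
    icers.append(d_cost / d_qaly if d_qaly != 0 else np.inf)
    ce += (wtp * d_qaly - d_cost) > 0      # net monetary benefit positive?

print(f"median ICER: {np.median(icers):,.0f} $/QALY, "
      f"P(cost-effective at $50k): {ce / B:.1%}")
```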
Adjoint Techniques for Topology Optimization of Structures Under Damage Conditions
NASA Technical Reports Server (NTRS)
Akgun, Mehmet A.; Haftka, Raphael T.
2000-01-01
The objective of this cooperative agreement was to seek computationally efficient ways to optimize aerospace structures subject to damage tolerance criteria. Optimization was to involve sizing as well as topology optimization. The work was done in collaboration with Steve Scotti, Chauncey Wu and Joanne Walsh at the NASA Langley Research Center. Computation of constraint sensitivity is normally the most time-consuming step of an optimization procedure. The cooperative work first focused on this issue and implemented the adjoint method of sensitivity computation (Haftka and Gurdal, 1992) in an optimization code (runstream) written in Engineering Analysis Language (EAL). The method was implemented both for bar and plate elements, including buckling sensitivity for the latter. Lumping of constraints was investigated as a means to reduce the computational cost. Adjoint sensitivity computation was developed and implemented for lumped stress and buckling constraints. The cost of the direct method and the adjoint method was compared for various structures with and without lumping. The results were reported in two papers (Akgun et al., 1998a and 1999). It is desirable to optimize the topology of an aerospace structure subject to a large number of damage scenarios so that a damage-tolerant structure is obtained. Including damage scenarios in the design procedure is critical in order to avoid large mass penalties at later stages (Haftka et al., 1983). A common method for topology optimization is that of compliance minimization (Bendsoe, 1995), which has not been used for damage-tolerant design. In the present work, topology optimization is treated as a conventional problem aiming to minimize the weight subject to stress constraints. Multiple damage configurations (scenarios) are considered. Each configuration has its own structural stiffness matrix and, normally, requires factoring of the matrix and solution of the system of equations. Damage that is expected to be tolerated is local and represents a small change in the stiffness matrix compared to the baseline (undamaged) structure. The exact solution to a slightly modified set of equations can be obtained from the baseline solution economically without actually solving the modified system. Sherman-Morrison-Woodbury (SMW) formulas are matrix update formulas that allow this (Akgun et al., 1998b). SMW formulas were therefore used here to compute adjoint displacements for sensitivity computation and structural displacements in damaged configurations.
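The SMW reuse described above can be shown in a few lines: a local damage scenario perturbs the stiffness matrix by a low-rank term, so each damaged solution needs only a small dense solve against cached baseline quantities. The toy symmetric system below stands in for a real finite-element model.

```python
# Sketch of Sherman-Morrison-Woodbury reuse: damage changes the stiffness
# matrix by a low-rank term U @ V.T, so the damaged solution is recovered
# from the baseline inverse without refactoring the full system.
import numpy as np

rng = np.random.default_rng(7)
n, r = 200, 3                        # DOFs; rank of the local damage update
A = rng.normal(size=(n, n))
K = A @ A.T + n * np.eye(n)          # SPD toy "stiffness" matrix
f = rng.normal(size=n)

U = np.zeros((n, r)); U[10:13, :] = np.eye(r)   # damage touches DOFs 10..12
V = -0.3 * U                                     # 30% stiffness loss there

Kinv = np.linalg.inv(K)              # stand-in for a cached factorization
u_base = Kinv @ f

# SMW: (K + U V^T)^-1 f = K^-1 f - K^-1 U (I + V^T K^-1 U)^-1 V^T K^-1 f
KiU = Kinv @ U
small = np.eye(r) + V.T @ KiU        # only an r x r solve per damage scenario
u_dmg = u_base - KiU @ np.linalg.solve(small, V.T @ u_base)

check = np.linalg.solve(K + U @ V.T, f)
print("max error vs direct solve:", np.abs(u_dmg - check).max())
```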
Optimal joint management of a coastal aquifer and a substitute resource
NASA Astrophysics Data System (ADS)
Moreaux, M.; Reynaud, A.
2004-06-01
This article characterizes the optimal joint management of a coastal aquifer and a costly water substitute. For this purpose we use a mathematical representation of the aquifer that incorporates the displacement of the interface between the seawater and the freshwater of the aquifer. We identify the spatial cost externalities created by users on each other and we show that the optimal water supply depends on the location of users. Users located in the coastal zone exclusively use the costly substitute. Those located in the more upstream area are supplied from the aquifer. At the optimum their withdrawal must take into account the cost externalities they generate on users located downstream. Last, users located in a median zone use the aquifer with a surface transportation cost. We show that the optimum can be implemented in a decentralized economy through a very simple Pigouvian tax. Finally, the optimal and decentralized extraction policies are simulated on a very simple example.
NASA Astrophysics Data System (ADS)
Shah, Rahul H.
Production costs account for the largest share of the overall cost of manufacturing facilities. With the U.S. industrial sector becoming increasingly competitive, manufacturers are looking for more cost- and resource-efficient working practices. Operations management and production planning have shown their capability to dramatically reduce manufacturing costs and increase system robustness. When implementing operations-related decision making and planning, the two fields that have proven most effective are maintenance and energy. Unfortunately, the current research that integrates both is limited. Additionally, these studies fail to consider parameter domains and optimization in joint energy- and maintenance-driven production planning. Accordingly, a production planning methodology that considers maintenance and energy is investigated. Two models are presented to achieve a well-rounded operating strategy. The first is a joint energy and maintenance production scheduling model. The second is a cost-per-part model considering maintenance, energy, and production. The proposed methodology involves a Time-of-Use electricity demand response program, buffer and holding capacity, station reliability, production rate, station rated power, and more. In practice, the scheduling problem can be used to determine a joint energy, maintenance, and production schedule. Meanwhile, the cost-per-part model can be used to: (1) test the sensitivity of the obtained optimal production schedule and its corresponding savings by varying key production system parameters; and (2) determine optimal system parameter combinations when using the joint energy, maintenance, and production planning model. Additionally, a factor analysis of the system parameters is conducted, and the corresponding performance of the production schedule under variable parameter conditions is evaluated. Parameter optimization guidelines that incorporate maintenance and energy parameter decision making in the production planning framework are also discussed. A modified Particle Swarm Optimization solution technique is adopted to solve the proposed scheduling problem. The algorithm is described in detail and compared to a Genetic Algorithm. Case studies are presented to illustrate the benefits of using the proposed model and the effectiveness of the Particle Swarm Optimization approach. Numerical experiments are implemented and analyzed to test the effectiveness of the proposed model. The proposed scheduling strategy can achieve savings of around 19-27% in cost per part compared to the baseline scheduling scenarios. By optimizing key production system parameters from the cost-per-part model, the baseline scenarios can obtain around 20-35% savings in cost per part. These savings further increase by 42-55% when system parameter optimization is integrated with the proposed scheduling problem. Using this method, the most influential parameters on the cost per part are the rated power from production, the production rate, and the initial machine reliabilities. The modified Particle Swarm Optimization algorithm allows greater diversity and exploration than the Genetic Algorithm for the proposed joint model, making it more computationally efficient in determining the optimal schedule. While the Genetic Algorithm achieved a solution quality of 2,279.63 at an expense of 2,300 seconds of computational effort, the proposed Particle Swarm Optimization algorithm achieved a solution quality of 2,167.26 in less than half that computational effort.
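For readers unfamiliar with the baseline algorithm, a minimal sketch of a standard Particle Swarm Optimization loop follows; the objective `cost_per_part` is a hypothetical quadratic stand-in, not the thesis's joint maintenance-energy-production model, and the diversity-enhancing modification described above is not reproduced:

```python
# Minimal standard PSO loop; the real application optimizes a joint
# maintenance/energy/production schedule, which is not modelled here.
import random

def cost_per_part(x):
    # Hypothetical smooth surrogate for the scheduling cost.
    return sum((xi - 3.0) ** 2 for xi in x)

def pso(objective, dim=4, swarm=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.uniform(-10, 10) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(swarm), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:          # update personal best
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:         # update global best
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

print(pso(cost_per_part))
```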
NASA Astrophysics Data System (ADS)
Jolanta Walery, Maria
2017-12-01
The article describes optimization studies aimed at analysing the impact of changes in the capital and current costs of medical waste incineration on the cost of system management and its structure. The study was conducted on the example of the medical waste management system in the Podlaskie Province, in north-eastern Poland. The operational research carried out under the optimization study was divided into two stages of optimization calculations with assumed technical and economic parameters of the system. In the first stage, the lowest cost of operating the analysed system was generated, whereas in the second, the influence of the system's input parameter, i.e. the capital and current costs of medical waste incineration, on the economic efficiency index (E) and the spatial structure of the system was determined. Optimization studies were conducted for 25%, 50%, 75% and 100% increases in the capital and current costs of the incineration process. As a result of the calculations, the highest cost of system operation, 3143.70 PLN/t, was obtained under the assumption of a 100% increase in the capital and current costs of the incineration process, corresponding to an increase in the economic efficiency index (E) of about 97% relative to run 1.
NASA Astrophysics Data System (ADS)
Chintalapudi, V. S.; Sirigiri, Sivanagaraju
2017-04-01
In power system restructuring, pricing the electrical power plays a vital role in cost allocation between suppliers and consumers. In the optimal power dispatch problem, not only the cost of active power generation but also the cost of reactive power generated by the generators should be considered to increase the effectiveness of the formulation. As the characteristics of the reactive power cost curve are similar to those of the active power cost curve, a nonconvex reactive power cost function is formulated. In this paper, a more realistic multi-fuel total cost objective is formulated by considering the active and reactive power costs of generators. The formulated cost function is optimized subject to equality, inequality and practical constraints using the proposed uniform distributed two-stage particle swarm optimization. The proposed algorithm combines a uniform distribution of control variables (to start the iterative process from a good initial value) with a two-stage initialization process (to obtain the best final value in fewer iterations), which enhances the convergence characteristics. Results obtained for standard test functions and electrical systems indicate the effectiveness of the proposed algorithm, which obtains better solutions than existing methods. Hence, the proposed method is promising and can readily be applied to optimizing power system objectives.
Particle swarm optimization - Genetic algorithm (PSOGA) on linear transportation problem
NASA Astrophysics Data System (ADS)
Rahmalia, Dinita
2017-08-01
Linear Transportation Problem (LTP) is a case of constrained optimization in which we want to minimize cost subject to a balance between total supply and total demand. Classical constructive methods such as the northwest-corner, Vogel, Russell, and minimum-cost rules have been applied to approach the optimal solution. In this paper, we use a heuristic, Particle Swarm Optimization (PSO), to solve the linear transportation problem for any number of decision variables. In addition, we combine the mutation operator of the Genetic Algorithm (GA) with PSO to improve the solution. This method is called Particle Swarm Optimization - Genetic Algorithm (PSOGA). The simulations show that PSOGA can improve on the solutions found by PSO alone.
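As a point of reference for the constructive methods cited above, a sketch of the northwest-corner rule on an invented balanced instance (supplies, demands, and unit costs are hypothetical); PSOGA would then search for allocations cheaper than this baseline:

```python
# Northwest-corner rule: builds an initial basic feasible solution to the
# transportation problem by filling cells from the top-left corner.
def northwest_corner(supply, demand):
    supply, demand = supply[:], demand[:]
    alloc = [[0] * len(demand) for _ in supply]
    i = j = 0
    while i < len(supply) and j < len(demand):
        q = min(supply[i], demand[j])   # ship as much as possible here
        alloc[i][j] = q
        supply[i] -= q
        demand[j] -= q
        if supply[i] == 0:              # row exhausted: move down
            i += 1
        else:                           # column exhausted: move right
            j += 1
    return alloc

cost = [[4, 8, 8], [16, 24, 16], [8, 16, 24]]      # hypothetical unit costs
alloc = northwest_corner([76, 82, 77], [72, 102, 61])
total = sum(c * a for rc, ra in zip(cost, alloc) for c, a in zip(rc, ra))
print(alloc, total)
```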
Internally insulated thermal storage system development program
NASA Technical Reports Server (NTRS)
Scott, O. L.
1980-01-01
A cost-effective thermal storage system for a solar central receiver power system, using molten salt stored in internally insulated carbon steel tanks, is described. Factors discussed include: testing of internal insulation materials in molten salt; preliminary design of storage tanks, including insulation and liner installation; optimization of the storage configuration; and definition of a subsystem research experiment to demonstrate the system. A thermal analytical model of a thermocline tank was developed and an analysis performed; data from an existing thermocline test tank were compared with the model to gain confidence in the analytical approach. A computer analysis of the various storage system parameters (insulation thickness, number of tanks, tank geometry, etc.) showed that (1) the most cost-effective configuration was a small number of large cylindrical tanks, and (2) the optimum is set by the mechanical constraints of the system, such as soil bearing strength and tank hoop stress, not by the economics.
Internally insulated thermal storage system development program
NASA Astrophysics Data System (ADS)
Scott, O. L.
1980-03-01
A cost-effective thermal storage system for a solar central receiver power system, using molten salt stored in internally insulated carbon steel tanks, is described. Factors discussed include: testing of internal insulation materials in molten salt; preliminary design of storage tanks, including insulation and liner installation; optimization of the storage configuration; and definition of a subsystem research experiment to demonstrate the system. A thermal analytical model of a thermocline tank was developed and an analysis performed; data from an existing thermocline test tank were compared with the model to gain confidence in the analytical approach. A computer analysis of the various storage system parameters (insulation thickness, number of tanks, tank geometry, etc.) showed that (1) the most cost-effective configuration was a small number of large cylindrical tanks, and (2) the optimum is set by the mechanical constraints of the system, such as soil bearing strength and tank hoop stress, not by the economics.
Evolving Righteousness in a Corrupt World
Duéñez-Guzmán, Edgar A.; Sadedin, Suzanne
2012-01-01
Punishment offers a powerful mechanism for the maintenance of cooperation in human and animal societies, but the maintenance of costly punishment itself remains problematic. Game theory has shown that corruption, where punishers can defect without being punished themselves, may sustain cooperation. However, in many human societies and some insect ones, high levels of cooperation coexist with low levels of corruption, and such societies show greater wellbeing than societies with high corruption. Here we show that small payments from cooperators to punishers can destabilize corrupt societies and lead to the spread of punishment without corruption (righteousness). Righteousness can prevail even in the face of persistent power inequalities. The resultant righteous societies are highly stable and have higher wellbeing than corrupt ones. This result may help to explain the persistence of costly punishing behavior, and indicates that corruption is a sub-optimal tool for maintaining cooperation in human societies. PMID:22984510
Langevin, Stanley A.; Bent, Zachary W.; Solberg, Owen D.; Curtis, Deanna J.; Lane, Pamela D.; Williams, Kelly P.; Schoeniger, Joseph S.; Sinha, Anupama; Lane, Todd W.; Branda, Steven S.
2013-01-01
Use of second generation sequencing (SGS) technologies for transcriptional profiling (RNA-Seq) has revolutionized transcriptomics, enabling measurement of RNA abundances with unprecedented specificity and sensitivity and the discovery of novel RNA species. Preparation of RNA-Seq libraries requires conversion of the RNA starting material into cDNA flanked by platform-specific adaptor sequences. Each of the published methods and commercial kits currently available for RNA-Seq library preparation suffers from at least one major drawback, including long processing times, large starting material requirements, uneven coverage, loss of strand information and high cost. We report the development of a new RNA-Seq library preparation technique that produces representative, strand-specific RNA-Seq libraries from small amounts of starting material in a fast, simple and cost-effective manner. Additionally, we have developed a new quantitative PCR-based assay for precisely determining the number of PCR cycles to perform for optimal enrichment of the final library, a key step in all SGS library preparation workflows. PMID:23558773
Lead (Pb) Hohlraum: Target for Inertial Fusion Energy
Ross, J. S.; Amendt, P.; Atherton, L. J.; Dunne, M.; Glenzer, S. H.; Lindl, J. D.; Meeker, D.; Moses, E. I.; Nikroo, A.; Wallace, R.
2013-01-01
Recent progress towards demonstrating inertial confinement fusion (ICF) ignition at the National Ignition Facility (NIF) has sparked wide interest in Laser Inertial Fusion Energy (LIFE) for carbon-free large-scale power generation. A LIFE-based fleet of power plants promises clean energy generation with no greenhouse gas emissions and a virtually limitless, widely available thermonuclear fuel source. For the LIFE concept to be viable, target costs must be minimized while the target material efficiency or x-ray albedo is optimized. Current ICF targets on the NIF utilize a gold or depleted uranium cylindrical radiation cavity (hohlraum) with a plastic capsule at the center that contains the deuterium and tritium fuel. Here we show a direct comparison of gold and lead hohlraums in efficiently ablating deuterium-filled plastic capsules with soft x rays. We report on lead hohlraum performance that is indistinguishable from gold, yet costing only a small fraction. PMID:23486285
Lead (Pb) hohlraum: target for inertial fusion energy.
Ross, J S; Amendt, P; Atherton, L J; Dunne, M; Glenzer, S H; Lindl, J D; Meeker, D; Moses, E I; Nikroo, A; Wallace, R
2013-01-01
Recent progress towards demonstrating inertial confinement fusion (ICF) ignition at the National Ignition Facility (NIF) has sparked wide interest in Laser Inertial Fusion Energy (LIFE) for carbon-free large-scale power generation. A LIFE-based fleet of power plants promises clean energy generation with no greenhouse gas emissions and a virtually limitless, widely available thermonuclear fuel source. For the LIFE concept to be viable, target costs must be minimized while the target material efficiency or x-ray albedo is optimized. Current ICF targets on the NIF utilize a gold or depleted uranium cylindrical radiation cavity (hohlraum) with a plastic capsule at the center that contains the deuterium and tritium fuel. Here we show a direct comparison of gold and lead hohlraums in efficiently ablating deuterium-filled plastic capsules with soft x rays. We report on lead hohlraum performance that is indistinguishable from gold, yet costing only a small fraction.
NASA Astrophysics Data System (ADS)
Davidsen, Claus; Liu, Suxia; Mo, Xingguo; Rosbjerg, Dan; Bauer-Gottwein, Peter
2014-05-01
Optimal management of conjunctive use of surface water and groundwater has been attempted with different algorithms in the literature. In this study, a hydro-economic modelling approach to optimize conjunctive use of scarce surface water and groundwater resources under uncertainty is presented. A stochastic dynamic programming (SDP) approach is used to minimize the basin-wide total costs arising from water allocations and water curtailments. Dynamic allocation problems that include groundwater resources proved more complex to solve with SDP than pure surface water allocation problems due to head-dependent pumping costs. These dynamic pumping costs strongly affect the total costs and can lead to non-convexity of the future cost function. The water user groups (agriculture, industry, domestic) are characterized by inelastic demands and fixed water allocation and water supply curtailment costs. As in traditional SDP approaches, one-step-ahead sub-problems are solved to find the optimal management at any time, knowing the inflow scenario and reservoir/aquifer storage levels. These non-linear sub-problems are solved using a genetic algorithm (GA) that minimizes the sum of the immediate and future costs for given surface water reservoir and groundwater aquifer end storages. The immediate cost is found by solving a simple linear allocation sub-problem, and the future costs are assessed by interpolation in the total cost matrix from the following time step. Total costs for all stages, reservoir states, and inflow scenarios are used as future costs to drive a forward-moving simulation under uncertain water availability. The use of a GA to solve the sub-problems is computationally more costly than a traditional SDP approach with linearly interpolated future costs. However, in a two-reservoir system the future cost function would have to be represented by a set of planes, and strict convexity in both the surface water and groundwater dimensions cannot be maintained. The optimization framework based on the GA is still computationally feasible and represents a clean and customizable method. The method has been applied to the Ziya River basin, China. The basin is located on the North China Plain and is subject to severe water scarcity, including surface water droughts and groundwater over-pumping. The head-dependent groundwater pumping costs enable assessment of the long-term effects of increased electricity prices on groundwater pumping. The coupled optimization framework is used to assess realistic alternative development scenarios for the basin. In particular, the potential for using electricity pricing policies to reach sustainable groundwater pumping is investigated.
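A heavily simplified sketch of the backward SDP recursion described above, with one reservoir, a toy quadratic curtailment cost, and a grid search standing in for the paper's genetic algorithm in the stage sub-problem (all numbers are hypothetical):

```python
# Backward SDP recursion over a discretized storage state; the future cost
# is interpolated, as described above. This is a one-reservoir toy, not the
# Ziya River model, and grid search replaces the GA for brevity.
import numpy as np

storages = np.linspace(0, 100, 21)   # discretized reservoir states
inflows = [10.0, 30.0]               # two equally likely inflow scenarios
demand, stages = 40.0, 12

future = np.zeros(len(storages))     # terminal future-cost vector
for t in reversed(range(stages)):
    new_future = np.empty_like(future)
    for s_idx, s in enumerate(storages):
        exp_cost = 0.0
        for q in inflows:
            best = np.inf
            for release in np.linspace(0, s + q, 41):    # stage sub-problem
                end = min(s + q - release, 100.0)        # spill above capacity
                immediate = (demand - min(release, demand)) ** 2  # curtailment
                cost = immediate + np.interp(end, storages, future)
                best = min(best, cost)
            exp_cost += best / len(inflows)
        new_future[s_idx] = exp_cost
    future = new_future

print(future[::5])   # expected total cost as a function of initial storage
```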
Neutron optics concept for the materials engineering diffractometer at the ESS
NASA Astrophysics Data System (ADS)
Šaroun, J.; Fenske, J.; Rouijaa, M.; Beran, P.; Navrátil, J.; Lukáš, P.; Schreyer, A.; Strobl, M.
2016-09-01
The Beamline for European Materials Engineering Research (BEER) has recently been proposed to be built at the European Spallation Source (ESS). The presented concept of neutron delivery optics for this instrument addresses the problems of bi-spectral beam extraction from a small moderator, optimization of the neutron guide profile for long-range neutron transport, and focusing at the sample under various constraints. These constraints include free space before and after the guides, a narrow guide section with gaps for choppers, closing of the direct line of sight, and cost reduction by optimization of the guides' cross-section and coating. A system of slits and exchangeable focusing optics is proposed in order to match the various wavelength resolution options provided by the pulse-shaping and modulation choppers, which permits resolution to be traded efficiently for intensity over a wide range. Simulated performance characteristics such as the brilliance transfer ratio are complemented by an analysis of the histories of "useful" neutrons, obtained by back-tracing neutrons that hit the sample, which helps to optimize some of the neutron guide parameters such as the supermirror coating.
Microfluidics: a transformational tool for nanomedicine development and production.
Garg, Shyam; Heuck, Gesine; Ip, Shell; Ramsay, Euan
2016-11-01
Microfluidic devices are microscale fluidic circuits used to manipulate liquids at the nanoliter scale. The ability to control the mixing of fluids and the continuous nature of the process make microfluidics apt for solvent/antisolvent precipitation of drug-delivery nanoparticles. This review describes the use of numerous microfluidic designs for the formulation and production of lipid nanoparticles, liposomes and polymer nanoparticles to encapsulate and deliver small-molecule or genetic payloads. The advantages of microfluidics are illustrated through examples from the literature comparing conventional processes, such as beaker and T-tube mixing, to microfluidic approaches. Particular emphasis is placed on examples of microfluidic nanoparticle formulations that have been tested in vitro and in vivo. The fine control of process parameters afforded by microfluidics allows unprecedented optimization of nanoparticle quality and encapsulation efficiency, and automation improves the reproducibility and optimization of formulations. Furthermore, the continuous nature of the microfluidic process is inherently scalable, allowing optimization at low volumes, which is advantageous with scarce or costly materials, as well as scale-up through process parallelization. Given these advantages, microfluidics is poised to become the new paradigm for nanomedicine formulation and production.
Arterial cannula shape optimization by means of the rotational firefly algorithm
NASA Astrophysics Data System (ADS)
Tesch, K.; Kaczorowska, K.
2016-03-01
This article presents global optimization results for arterial cannula shapes obtained by means of the newly modified firefly algorithm. The search for the optimal arterial cannula shape is necessary in order to minimize losses and prepare the flow that leaves the circulatory support system of a ventricle (i.e. blood pump) before it reaches the heart. A modification of the standard firefly algorithm, the so-called rotational firefly algorithm, is introduced. It is shown that the rotational firefly algorithm allows for better exploration of search spaces, which results in faster convergence and better solutions in comparison with its standard version; this is particularly pronounced for smaller population sizes. Furthermore, it maintains greater diversity of populations for a longer time. A small population size and a low number of iterations are necessary to keep the computational cost of the objective function to a minimum, since each evaluation requires the numerical solution of nonlinear partial differential equations. Moreover, both versions of the firefly algorithm are compared to the state of the art, namely the differential evolution and covariance matrix adaptation evolution strategies.
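For orientation, a minimal sketch of the standard firefly algorithm on a generic test objective; the rotational modification introduced in the article, and the CFD-based cannula objective, are not reproduced here:

```python
# Standard firefly algorithm: each firefly moves toward brighter (lower-cost)
# fireflies with distance-attenuated attractiveness plus a random step.
import math, random

def sphere(x):
    return sum(xi * xi for xi in x)

def firefly(objective, dim=2, n=15, iters=100, beta0=1.0, gamma=1.0, alpha=0.2):
    X = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    F = [objective(x) for x in X]
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if F[j] < F[i]:   # move firefly i toward brighter firefly j
                    r2 = sum((a - b) ** 2 for a, b in zip(X[i], X[j]))
                    beta = beta0 * math.exp(-gamma * r2)
                    X[i] = [a + beta * (b - a) + alpha * (random.random() - 0.5)
                            for a, b in zip(X[i], X[j])]
                    F[i] = objective(X[i])
    best = min(range(n), key=lambda k: F[k])
    return X[best], F[best]

print(firefly(sphere))
```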
Daubechies wavelets for linear scaling density functional theory.
Mohr, Stephan; Ratcliff, Laura E; Boulanger, Paul; Genovese, Luigi; Caliste, Damien; Deutsch, Thierry; Goedecker, Stefan
2014-05-28
We demonstrate that Daubechies wavelets can be used to construct a minimal set of optimized localized adaptively contracted basis functions in which the Kohn-Sham orbitals can be represented with an arbitrarily high, controllable precision. Ground state energies and the forces acting on the ions can be calculated in this basis with the same accuracy as if they were calculated directly in a Daubechies wavelets basis, provided that the amplitude of these adaptively contracted basis functions is sufficiently small on the surface of the localization region, which is guaranteed by the optimization procedure described in this work. This approach reduces the computational costs of density functional theory calculations, and can be combined with sparse matrix algebra to obtain linear scaling with respect to the number of electrons in the system. Calculations on systems of 10,000 atoms or more thus become feasible in a systematic basis set with moderate computational resources. Further computational savings can be achieved by exploiting the similarity of the adaptively contracted basis functions for closely related environments, e.g., in geometry optimizations or combined calculations of neutral and charged systems.
Cell transmission model of dynamic assignment for urban rail transit networks.
Xu, Guangming; Zhao, Shuo; Shi, Feng; Zhang, Feilian
2017-01-01
For an urban rail transit network, the space-time flow distribution plays an important role in evaluating and optimizing the allocation of space-time resources. To obtain the space-time flow distribution without the restriction of schedules, a dynamic assignment problem is proposed based on the concept of continuous transmission. To solve the dynamic assignment problem, a cell transmission model is built for urban rail transit networks; the priority principle, queuing process, capacity constraints and congestion effects are considered in the cell transmission mechanism. An efficient method is then designed to solve the shortest path for an urban rail network, which decreases the computing cost of solving the cell transmission model. The instantaneous dynamic user-optimal state can be reached with the method of successive averages. Many evaluation indexes of passenger flow can be generated, providing effective support for the optimization of train schedules and the capacity evaluation of urban rail transit networks. Finally, the model and its potential application are demonstrated via two numerical experiments using a small-scale network and the Beijing Metro network.
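The equilibration idea behind the method of successive averages (MSA) can be sketched on a toy two-route network; the cell transmission dynamics, capacity constraints, and the Beijing Metro network are omitted, and the congestion cost functions are invented:

```python
# Method of successive averages (MSA): blend the current flow pattern with an
# all-or-nothing assignment to the currently cheapest route, step size 1/k.
def route_costs(f1, f2):
    # Congestion-dependent travel costs for two parallel routes.
    return 10 + 0.01 * f1 ** 2, 15 + 0.005 * f2 ** 2

total_demand = 100.0
f1 = f2 = total_demand / 2
for k in range(1, 200):
    c1, c2 = route_costs(f1, f2)
    aux1 = total_demand if c1 <= c2 else 0.0   # all-or-nothing assignment
    aux2 = total_demand - aux1
    step = 1.0 / k                             # MSA step size
    f1 += step * (aux1 - f1)
    f2 += step * (aux2 - f2)

# At user equilibrium the two route costs are (nearly) equal.
print(round(f1, 1), round(f2, 1), [round(c, 2) for c in route_costs(f1, f2)])
```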
Production of Low Cost Carbon-Fiber through Energy Optimization of Stabilization Process.
Golkarnarenji, Gelayol; Naebe, Minoo; Badii, Khashayar; Milani, Abbas S; Jazar, Reza N; Khayyam, Hamid
2018-03-05
To produce high-quality and low-cost carbon fiber-based composites, optimization of the carbon fiber production process and of the resulting fiber properties is one of the main keys. The stabilization process is the most important step in carbon fiber production: it consumes a large amount of energy, and its optimization can reduce the cost to a large extent. In this study, two intelligent optimization techniques, namely Support Vector Regression (SVR) and Artificial Neural Network (ANN), were studied and compared on a limited dataset, to predict a physical property (density) of oxidatively stabilized PAN fiber (OPF) in the second zone of a stabilization oven within a carbon fiber production line. The results were then used to optimize the energy consumption of the process. The case study can be beneficial to chemical industries involved in carbon fiber manufacturing, for assessing and optimizing different stabilization process conditions at large.
Production of Low Cost Carbon-Fiber through Energy Optimization of Stabilization Process
Golkarnarenji, Gelayol; Naebe, Minoo; Badii, Khashayar; Milani, Abbas S.; Jazar, Reza N.; Khayyam, Hamid
2018-01-01
To produce high-quality and low-cost carbon fiber-based composites, optimization of the carbon fiber production process and of the resulting fiber properties is one of the main keys. The stabilization process is the most important step in carbon fiber production: it consumes a large amount of energy, and its optimization can reduce the cost to a large extent. In this study, two intelligent optimization techniques, namely Support Vector Regression (SVR) and Artificial Neural Network (ANN), were studied and compared on a limited dataset, to predict a physical property (density) of oxidatively stabilized PAN fiber (OPF) in the second zone of a stabilization oven within a carbon fiber production line. The results were then used to optimize the energy consumption of the process. The case study can be beneficial to chemical industries involved in carbon fiber manufacturing, for assessing and optimizing different stabilization process conditions at large. PMID:29510592
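A sketch of an SVR regression of fiber density on process variables, in the spirit of the study above but using scikit-learn on synthetic stand-in data (the real process variables and measured OPF densities are not public):

```python
# SVR fit of density vs. two hypothetical process variables; the linear
# generating function plus noise is a stand-in for the real oven data.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.uniform([220, 5], [280, 30], size=(60, 2))   # temp (C), time (min)
y = 1.18 + 0.002 * (X[:, 0] - 220) + 0.001 * X[:, 1] + rng.normal(0, 0.005, 60)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.005))
model.fit(X[:45], y[:45])                # train on the first 45 samples
print("held-out R^2:", round(model.score(X[45:], y[45:]), 3))
```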
NASA Astrophysics Data System (ADS)
Apribowo, Chico Hermanu Brillianto; Ibrahim, Muhammad Hamka; Wicaksono, F. X. Rian
2018-02-01
The growing load burden and the complexity of the power system have increased the need for optimization of power system operation. Optimal power flow (OPF) with optimal placement and rating of a thyristor-controlled series capacitor (TCSC) is an effective solution used to determine the economic cost of operating the plant and to regulate the power flow in the power system. The purpose of this study is to minimize the total generation cost by choosing the location and the optimal rating of TCSCs using genetic algorithm-design of experiment techniques (GA-DOE). In a simulation on the Java-Bali 500 kV system with five TCSC compensators, the proposed method reduced the generation cost by 0.89% compared to OPF without TCSC.
Mlynek, Georg; Lehner, Anita; Neuhold, Jana; Leeb, Sarah; Kostan, Julius; Charnagalov, Alexej; Stolt-Bergner, Peggy; Djinović-Carugo, Kristina; Pinotsis, Nikos
2014-06-01
Expression in Escherichia coli represents the simplest and most cost-effective means for the production of recombinant proteins. This is a routine task in structural biology and biochemistry, where milligrams of the target protein are required in high purity and monodispersity. To achieve these criteria, the user often needs to screen several constructs in different expression and purification conditions in parallel. We describe a pipeline, implemented in the Center for Optimized Structural Studies, that enables the systematic screening of expression and purification conditions for recombinant proteins and relies on a series of logical decisions. We first use bioinformatics tools to design a series of protein fragments, which we clone in parallel and subsequently screen in small scale for optimal expression and purification conditions. Based on a scoring system that assesses soluble expression, we then select the top-ranking targets for large-scale purification. In establishing our pipeline, emphasis was put on streamlining the processes so that they can be easily, though not necessarily, automated. In a typical run of about 2 weeks, we are able to prepare and perform small-scale expression screens for 20-100 different constructs, followed by large-scale purification of at least 4-6 proteins. The major advantage of our approach is its flexibility, which allows for easy adoption, either partially or entirely, by any average hypothesis-driven laboratory in a manual or robot-assisted manner.
Asnoune, M; Abdelmalek, F; Djelloul, A; Mesghouni, K; Addou, A
2016-11-01
In household waste management, the objective is always to conceive an optimal integrated system, where the terms 'optimal' and 'integrated' refer generally to matching the waste with techniques of treatment, valorization and elimination, often at the lowest possible cost. The optimization of household waste management using operational research methodologies has not yet been applied in any Algerian district. We propose an optimization of the valorization of household waste in Tiaret city in order to lower the total management cost. The methodology is modelled by non-linear mathematical equations using 28 decision variables and aims to optimally assign the seven components of household waste (i.e. plastic, cardboard/paper, glass, metals, textiles, organic matter and others) among four treatment centres [i.e. waste-to-energy (WTE) or incineration, composting (CM), anaerobic digestion (ANB) or methanization, and landfilling (LF)]. The analysis of the results shows that the variation in total cost is mainly due to the assignment of waste among the treatment centres, and that certain treatments cannot be applied to household waste in Tiaret city. On the other hand, certain valorization techniques were favoured by the optimization. In this work, four scenarios were proposed to optimize the system cost; the modelling shows that the mixed scenario (the three treatment centres CM, ANB, LF) offers the best combination of waste treatment technologies, with an optimal solution for the system (cost and profit). © The Author(s) 2016.
Liu, Xing; Hou, Kun Mean; de Vaulx, Christophe; Xu, Jun; Yang, Jianfeng; Zhou, Haiying; Shi, Hongling; Zhou, Peng
2015-01-01
Memory and energy optimization strategies are essential for resource-constrained wireless sensor network (WSN) nodes. In this article, a new memory- and energy-optimized multithreaded WSN operating system (OS), LiveOS, is designed and implemented. The memory cost of LiveOS is optimized by using a stack-shifting hybrid scheduling approach. Unlike a traditional multithreaded OS, in which thread stacks are allocated statically by pre-reservation, thread stacks in LiveOS are allocated dynamically using the stack-shifting technique. As a result, the memory waste caused by static pre-reservation can be avoided. In addition to the stack-shifting dynamic allocation approach, a hybrid scheduling mechanism that decreases both the thread scheduling overhead and the number of thread stacks is also implemented in LiveOS. With these mechanisms, the stack memory cost of LiveOS can be reduced by more than 50% compared to that of a traditional multithreaded OS. Not only the memory cost but also the energy cost is optimized in LiveOS, achieved through the multi-core "context aware" and multi-core "power-off/wakeup" energy conservation approaches. Using these approaches, the energy cost of LiveOS can be reduced by more than 30% compared to a single-core WSN system. The memory and energy optimization strategies in LiveOS not only prolong the lifetime of WSN nodes, but also make a multithreaded OS feasible on memory-constrained WSN nodes. PMID:25545264
Fairness in optimizing bus-crew scheduling process.
Ma, Jihui; Song, Cuiying; Ceder, Avishai Avi; Liu, Tao; Guan, Wei
2017-01-01
This work proposes a model considering fairness in the crew scheduling problem for bus drivers (CSP-BD) and a hybrid ant-colony optimization (HACO) algorithm to solve it. The main contributions are: (a) a valid approach for cases with a special cost structure and constraints considering the fairness of working time and idle time; (b) an improved algorithm incorporating a Gamma heuristic function and selection rules. The relationships among the costs are examined using ten bus lines operated by the Beijing Public Transport Holdings (Group) Co., Ltd., one of the largest bus transit companies in the world. The results show that the unfairness cost is indirectly related to the common, fixed and extra costs, and that it approaches the common and fixed costs when its coefficient is twice the common cost coefficient. Furthermore, the longest computation time for the tested bus lines, with 1,108 pieces and 74 blocks, is less than 30 minutes. The results indicate that the HACO-based algorithm can be a feasible and efficient optimization technique for the CSP-BD, especially for large-scale problems.
Fung, Monica; Kim, Jane; Marty, Francisco M; Schwarzinger, Michaël; Koo, Sophia
2015-01-01
Invasive fungal disease (IFD) causes significant morbidity and mortality in hematologic malignancy patients with high-risk febrile neutropenia (FN). These patients therefore often receive empirical antifungal therapy. Diagnostic test-guided pre-emptive antifungal therapy has been evaluated as an alternative treatment strategy in these patients. We conducted an electronic search for literature comparing empirical versus pre-emptive antifungal strategies in FN among adult hematologic malignancy patients. We systematically reviewed 9 studies, including randomized-controlled trials, cohort studies, and feasibility studies. Random and fixed-effect models were used to generate pooled relative risk estimates of IFD detection, IFD-related mortality, overall mortality, and rates and duration of antifungal therapy. Heterogeneity was measured via Cochran's Q test, I2 statistic, and between study τ2. Incorporating these parameters and direct costs of drugs and diagnostic testing, we constructed a comparative costing model for the two strategies. We conducted probabilistic sensitivity analysis on pooled estimates and one-way sensitivity analyses on other key parameters with uncertain estimates. Nine published studies met inclusion criteria. Compared to empirical antifungal therapy, pre-emptive strategies were associated with significantly lower antifungal exposure (RR 0.48, 95% CI 0.27-0.85) and duration without an increase in IFD-related mortality (RR 0.82, 95% CI 0.36-1.87) or overall mortality (RR 0.95, 95% CI 0.46-1.99). The pre-emptive strategy cost $324 less (95% credible interval -$291.88 to $418.65 pre-emptive compared to empirical) than the empirical approach per FN episode. However, the cost difference was influenced by relatively small changes in costs of antifungal therapy and diagnostic testing. Compared to empirical antifungal therapy, pre-emptive antifungal therapy in patients with high-risk FN may decrease antifungal use without increasing mortality. We demonstrate a state of economic equipoise between empirical and diagnostic-directed pre-emptive antifungal treatment strategies, influenced by small changes in cost of antifungal therapy and diagnostic testing, in the current literature. This work emphasizes the need for optimization of existing fungal diagnostic strategies, development of more efficient diagnostic strategies, and less toxic and more cost-effective antifungals.
Wang, Siying; Peng, Liubao; Li, Jianhe; Zeng, Xiaohui; Ouyang, Lihui; Tan, Chongqing; Lu, Qiong
2013-01-01
Introduction: Lung cancer, the most prevalent malignant cancer in the world, remains a serious threat to public health. Recently, a large number of studies have shown that an epidermal growth factor receptor-tyrosine kinase inhibitor (EGFR TKI), erlotinib, has significantly better efficacy and is better tolerated in advanced non-small cell lung cancer (NSCLC) patients with a positive EGFR gene mutation. However, access to this drug is severely limited in China due to its high acquisition cost. We therefore conducted a study to compare the cost-effectiveness of erlotinib monotherapy and carboplatin-gemcitabine (CG) combination therapy in patients with advanced EGFR mutation-positive NSCLC. Methods: A Markov model was developed from the perspective of the Chinese health care system to evaluate the cost-effectiveness of the two treatment strategies; the model was based on data from the OPTIMAL trial, which was undertaken at 22 centres in China. The 10-year quality-adjusted life years (QALYs), direct costs, and incremental cost-effectiveness ratio (ICER) were estimated. To allow for uncertainties in the parameters and to assess the model's robustness, one-way and probabilistic sensitivity analyses were performed. Results: The median progression-free survival (PFS) obtained from the Markov model was 13.2 months (13.1 months reported in the trial) in the erlotinib group and 4.64 months (4.6 months reported in the trial) in the CG group. The QALYs were 1.4 years in the erlotinib group and 1.96 years in the CG group, a difference of 0.56 years. The ICER was most sensitive to the health utility of disease progression, ranging from $58,584.57 to $336,404.20. At a threshold of $96,884, erlotinib had a 50% probability of being cost-effective. Conclusions: Erlotinib monotherapy is more cost-effective than platinum-based doublet chemotherapy as a first-line therapy for advanced EGFR mutation-positive NSCLC patients within the Chinese health care system. PMID:23520448
Genetic Algorithm Optimization of a Cost Competitive Hybrid Rocket Booster
NASA Technical Reports Server (NTRS)
Story, George
2015-01-01
Performance, reliability and cost have always been drivers in the rocket business. Hybrid rockets have been late entries into the launch business due to substantial early development work on liquid rockets and solid rockets. Slowly the technology readiness level of hybrids has been increasing due to various large scale testing and flight tests of hybrid rockets. One remaining issue is the cost of hybrids versus the existing launch propulsion systems. This paper will review the known state-of-the-art hybrid development work to date and incorporate it into a genetic algorithm to optimize the configuration based on various parameters. A cost module will be incorporated to the code based on the weights of the components. The design will be optimized on meeting the performance requirements at the lowest cost.
Genetic Algorithm Optimization of a Cost Competitive Hybrid Rocket Booster
NASA Technical Reports Server (NTRS)
Story, George
2014-01-01
Performance, reliability and cost have always been drivers in the rocket business. Hybrid rockets have been late entries into the launch business due to substantial early development work on liquid rockets and, later, on solid rockets. Slowly the technology readiness level of hybrids has been increasing due to various large-scale testing and flight tests of hybrid rockets. A remaining issue is the cost of hybrids versus the existing launch propulsion systems. This paper will review the known state-of-the-art hybrid development work to date and incorporate it into a genetic algorithm to optimize the configuration based on various parameters. A cost module will be incorporated into the code based on the weights of the components. The design will be optimized on meeting the performance requirements at the lowest cost.
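A minimal genetic algorithm sketch in the spirit of the approach above: candidate boosters are encoded as parameter vectors and scored by a toy weight-based cost function with a performance-requirement penalty (all encodings and coefficients are invented, not the paper's cost module):

```python
# Simple GA: truncation selection, uniform crossover, Gaussian mutation.
import random

def cost(x):
    fuel, oxidizer, casing = x
    thrust = 120 * fuel ** 0.5 + 80 * oxidizer ** 0.5    # toy performance model
    penalty = 0.0 if thrust >= 400 else (400 - thrust) ** 2  # requirement
    return 5 * fuel + 3 * oxidizer + 8 * casing + penalty    # weight-based cost

def ga(pop_size=40, gens=100, mut=0.2):
    pop = [[random.uniform(1, 20) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=cost)
        parents = pop[: pop_size // 2]          # keep the cheaper half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = [random.choice(pair) for pair in zip(a, b)]  # crossover
            if random.random() < mut:                            # mutation
                k = random.randrange(3)
                child[k] = max(1.0, child[k] + random.gauss(0, 1))
            children.append(child)
        pop = parents + children
    best = min(pop, key=cost)
    return best, cost(best)

print(ga())
```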
A duality framework for stochastic optimal control of complex systems
Malikopoulos, Andreas A.
2016-01-01
In this study, we address the problem of minimizing the long-run expected average cost of a complex system consisting of interactive subsystems. We formulate a multiobjective optimization problem of the one-stage expected costs of the subsystems and provide a duality framework to prove that the control policy yielding the Pareto optimal solution minimizes the average cost criterion of the system. We provide the conditions of existence and a geometric interpretation of the solution. For practical situations having constraints consistent with those studied here, our results imply that the Pareto control policy may be of value when we seek to derive online the optimal control policy in complex systems.
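A toy illustration of trading off subsystem one-stage costs by weighted-sum scalarization over a small discrete policy set; this is far simpler than the duality framework above, and all cost pairs are invented:

```python
# Filter a discrete policy set to its Pareto front, then pick the
# weighted-sum minimizer for several weights on subsystem 1.
policies = {                  # hypothetical one-stage expected costs
    "A": (4.0, 9.0),          # (subsystem-1 cost, subsystem-2 cost)
    "B": (5.5, 6.0),
    "C": (8.0, 3.5),
    "D": (9.0, 8.5),          # dominated by B, never selected
}

def pareto(points):
    return {k: v for k, v in points.items()
            if not any(o[0] <= v[0] and o[1] <= v[1] and o != v
                       for o in points.values())}

front = pareto(policies)
for w in (0.2, 0.5, 0.8):     # weight on subsystem 1
    best = min(front, key=lambda k: w * front[k][0] + (1 - w) * front[k][1])
    print(f"w={w}: policy {best}")
```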
New reflective symmetry design capability in the JPL-IDEAS Structure Optimization Program
NASA Technical Reports Server (NTRS)
Strain, D.; Levy, R.
1986-01-01
The JPL-IDEAS antenna structure analysis and design optimization computer program was modified to process half structure models of symmetric structures subjected to arbitrary external static loads, synthesize the performance, and optimize the design of the full structure. Significant savings in computation time and cost (more than 50%) were achieved compared to the cost of full model computer runs. The addition of the new reflective symmetry analysis design capabilities to the IDEAS program allows processing of structure models whose size would otherwise prevent automated design optimization. The new program produced synthesized full model iterative design results identical to those of actual full model program executions at substantially reduced cost, time, and computer storage.
Optimizing sterilization logistics in hospitals.
van de Klundert, Joris; Muls, Philippe; Schadd, Maarten
2008-03-01
This paper deals with the optimization of the flow of sterile instruments in hospitals which takes place between the sterilization department and the operating theatre. This topic is especially of interest in view of the current attempts of hospitals to cut cost by outsourcing sterilization tasks. Oftentimes, outsourcing implies placing the sterilization unit at a larger distance, hence introducing a longer logistic loop, which may result in lower instrument availability, and higher cost. This paper discusses the optimization problems that have to be solved when redesigning processes so as to improve material availability and reduce cost. We consider changing the logistic management principles, use of visibility information, and optimizing the composition of the nets of sterile materials.
Optimal Design and Operation of Permanent Irrigation Systems
NASA Astrophysics Data System (ADS)
Oron, Gideon; Walker, Wynn R.
1981-01-01
Solid-set pressurized irrigation system design and operation are studied with optimization techniques to determine the minimum-cost distribution system. The principle of the analysis is to divide the irrigation system into subunits in such a manner that the trade-offs among energy, piping, and equipment costs are resolved at the minimum-cost point. The optimization procedure involves a nonlinear, mixed-integer approach capable of achieving a variety of optimal solutions, leading to significant conclusions with regard to the design and operation of the system. Factors investigated include field geometry, the effect of the pressure head, consumptive use rates, reduced flow rates in the pipe system, and outlet (sprinkler or emitter) discharge.
NASA Astrophysics Data System (ADS)
Rodriguez-Pretelin, A.; Nowak, W.
2017-12-01
For most groundwater protection management programs, Wellhead Protection Areas (WHPAs) have served as the primary protection measure. In their delineation, the influence of time-varying groundwater flow conditions is often underestimated because steady-state assumptions are commonly made. However, it has been demonstrated that temporal variations lead to significant changes in the required size and shape of WHPAs. Apart from natural transient groundwater drivers (e.g., changes in the regional flow direction and seasonal natural groundwater recharge), anthropogenic causes such as transient pumping rates are among the most influential factors requiring larger WHPAs. We hypothesize that WHPA programs that integrate adaptive and optimized pumping-injection management schemes can counter transient effects and thus reduce the additional areal demand of well protection under transient conditions. The main goal of this study is to present a novel management framework that optimizes pumping schemes dynamically, in order to minimize the impact of transient conditions on WHPA delineation. For optimizing pumping schemes, we consider three objectives: (1) to minimize the risk of pumping water from outside a given WHPA, (2) to maximize the groundwater supply and (3) to minimize the involved operating costs. We solve transient groundwater flow with an available transient groundwater model coupled to Lagrangian particle tracking, and formulate the optimization problem as a dynamic programming problem. Two optimization approaches are explored: the first aims at single-objective optimization under objective (1) only; the second performs multiobjective optimization under all three objectives, with compromise pumping rates selected from the current Pareto front. Finally, we look for WHPA outlines that are as small as possible, yet allow the optimization problem to find the most suitable solutions.
Application of a territorial-based filtering algorithm in turbomachinery blade design optimization
NASA Astrophysics Data System (ADS)
Bahrami, Salman; Khelghatibana, Maryam; Tribes, Christophe; Yi Lo, Suk; von Fellenberg, Sven; Trépanier, Jean-Yves; Guibault, François
2017-02-01
A territorial-based filtering algorithm (TBFA) is proposed as an integration tool in a multi-level design optimization methodology. The design evaluation burden is split between low- and high-cost levels in order to properly balance the cost and required accuracy in different design stages, based on the characteristics and requirements of the case at hand. TBFA is in charge of connecting those levels by selecting a given number of geometrically different promising solutions from the low-cost level to be evaluated in the high-cost level. Two test case studies, a Francis runner and a transonic fan rotor, have demonstrated the robustness and functionality of TBFA in real industrial optimization problems.
Fifty years of chasing lizards: new insights advance optimal escape theory.
Samia, Diogo S M; Blumstein, Daniel T; Stankowich, Theodore; Cooper, William E
2016-05-01
Systematic reviews and meta-analyses often examine data from diverse taxa to identify general patterns of effect sizes. Meta-analyses that focus on identifying generalisations in a single taxon are also valuable because species in a taxon are more likely to share similar unique constraints. We conducted a comprehensive phylogenetic meta-analysis of flight initiation distance in lizards. Flight initiation distance (FID) is a common metric used to quantify risk-taking and has previously been shown to reflect adaptive decision-making. The past decade has seen an explosion of studies focused on quantifying FID in lizards, and, because lizards occur in a wide range of habitats, are ecologically diverse, and are typically smaller and differ physiologically from the better studied mammals and birds, they are worthy of detailed examination. We found that variables that reflect the costs or benefits of flight (being engaged in social interactions, having food available) as well as certain predator effects (predator size and approach speed) had large effects on FID in the directions predicted by optimal escape theory. Variables that were associated with morphology (with the exception of crypsis) and physiology had relatively small effects, whereas habitat selection factors typically had moderate to large effect sizes. Lizards, like other taxa, are very sensitive to the costs of flight. © 2015 Cambridge Philosophical Society.
Economical and ecological comparison of granular activated carbon (GAC) adsorber refill strategies.
Bayer, Peter; Heuer, Edda; Karl, Ute; Finkel, Michael
2005-05-01
Technical constraints can leave considerable freedom in the design of a technology, production or service strategy. Choosing between economical and ecological decision criteria then characteristically leads to controversial solutions for ideal systems. For granular activated carbon (GAC) fixed beds, various technical factors determine the adsorber volume required to achieve a desired service life. In considering carbon replacement and recycling, a variety of refill strategies are available that differ in terms of refill interval, respective adsorber volume, and time-dependent use of virgin as well as recycled GAC. Focusing on the treatment of contaminated groundwater, we compare cost-optimal reactor configurations and refill strategies to the ecologically best alternatives. Costs and consumption of GAC are quantified within a technical-economical framework. The emissions from GAC production from hard coal, transport and recycling are likewise derived through a life cycle impact assessment. It is shown how high discount rates lead to a preference for small fixed-bed volumes and, accordingly, a high number of refills. For fixed discount rates, the investigation reveals that both the economical and the ecological assessment of refill strategies are especially sensitive to the relative valuation of virgin and recycled GAC. Since recycling yields economic and ecological benefits, optimized systems may thus differ only slightly.
Power density measurements to optimize AC plasma jet operation in blood coagulation.
Ahmed, Kamal M; Eldeighdye, Shaimaa M; Allam, Tarek M; Hassanin, Walaa F
2018-06-14
In this paper, the plasma power density and the corresponding plasma dose of a low-cost air non-thermal plasma jet (ANPJ) device are estimated at different axial distances from the nozzle. This estimation is achieved by measuring the voltage and current at the substrate using diagnostic techniques that can easily be implemented in the laboratory: a thin wire and a dielectric probe, respectively. The device uses compressed air as the input gas instead of the relatively expensive, large and heavy tanks of Ar or He gases. The calculated plasma dose is found to be very low, which allows the presented device to be used in biomedical applications (especially blood coagulation). While plasma active species and charged particles are found to be the most effective agents in blood coagulation, neither the air flow nor the UV emission alone has any effect. Moreover, optimal conditions for accelerating blood coagulation are studied. The results show that the power density at the substrate decreases with increasing distance from the nozzle, and that both the distance from the nozzle and the air flow rate play an important role in accelerating the blood coagulation process. Finally, this device is efficient, small, safe and low-cost, and hence has a good chance of becoming widespread as a first-aid and ambulance tool.
NASA Astrophysics Data System (ADS)
Dana, Aykutlu; Ayas, Sencer; Bakan, Gokhan; Ozgur, Erol; Guner, Hasan; Celebi, Kemal
2016-09-01
Infrared absorption spectroscopy has greatly benefited from the electromagnetic field enhancement offered by plasmonic surfaces. However, because of the localized nature of plasmonic fields, such field enhancements are limited to nm-scale volumes. Here, we demonstrate that a relatively small but spatially uniform field enhancement can yield superior infrared detection performance compared to the plasmonic field enhancement exhibited by optimized infrared nanoantennas. A specifically designed CaF2/Al thin-film surface is shown to enable observation of stronger vibrational signals from the probe material, with wider bandwidth and a deeper spatial extent of the field enhancement compared to optimized plasmonic surfaces. It is demonstrated that the surface structure presented here can enable chemically specific and label-free detection of organic monolayers using surface-enhanced infrared spectroscopy. A low-cost, handheld infrared absorption measurement setup is also demonstrated using a miniature bolometric sensor and a mobile phone: a specifically designed grating, in combination with an IR light source, yields an IR spectrometer covering the 7-12 µm range with about 100 cm⁻¹ resolution. Combining the enhancing substrates with the spectroscopy setup enables low-cost, high-sensitivity mobile infrared sensing. The results have implications for homeland security and environmental monitoring as well as chemical analysis.
Computer Assisted Design, Prediction, and Execution of Economical Organic Syntheses
NASA Astrophysics Data System (ADS)
Gothard, Nosheen Akber
The synthesis of useful organic molecules via simple and cost-effective routes is a core challenge in organic chemistry. In industry and academia, organic chemists use their chemical intuition, technical expertise and published procedures to determine an optimal pathway. This approach not only takes time and effort but is also cost prohibitive: many potentially optimal routes sketched on paper never get experimentally tested, and newly discovered methods are often overlooked in favour of established techniques. This thesis reports a computational technique that assists the discovery of economical synthetic routes to useful organic targets. Organic chemistry exists as a network in which chemicals are connected by reactions, analogous to cities connected by roads on a geographic map. This network topology of the network of organic chemistry (NOC) allows graph theory to be applied to devise algorithms for the synthetic optimization of organic targets. A computational approach comprising customizable algorithms, pre-screening filters, and existing chemoinformatic techniques is capable of answering complex questions and performing mechanistic tasks desired by chemists, such as the optimization of organic syntheses. One-pot reactions are central to modern synthesis since they save resources and time by avoiding isolation, purification, characterization, and production of chemical waste after each synthetic step. Sometimes such reactions are identified by chance or, more often, by careful inspection of the individual steps that are to be wired together. Here, algorithms are used to discover one-pot reactions, which are then validated experimentally, demonstrating that the computationally predicted sequences can indeed be carried out in good overall yields. The experimental examples are chosen from small networks of reactions around useful chemicals such as quinoline scaffolds, quinoline-based inhibitors of the phosphoinositide 3-kinase delta (PI3Kdelta) enzyme, and thiophene derivatives. In this way, individual synthetic connections are replaced with two-, three-, or even four-step one-pot sequences. Lastly, the computational method is used to devise hypothetical synthetic routes to popular pharmaceutical drugs such as Naproxen™ and Taxol™. The algorithmically generated optimal pathways are evaluated with chemical logic. Applying a labor/cost factor reveals that not all shorter synthesis routes are economical; sometimes "the longest way round is the shortest way home", and lengthier routes turn out to be more economical and environmentally friendly.
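A toy illustration of synthesis planning as cheapest-path search on a reaction graph, in the spirit of the NOC approach described above; the compounds and per-step costs are invented, and real searches use far richer cost models and filters:

```python
# Dijkstra-style search over a tiny reaction graph: nodes are compounds,
# edges are reactions weighted by a hypothetical per-step cost.
import heapq

graph = {   # substrate -> [(product, hypothetical step cost), ...]
    "aniline": [("quinoline", 3.0), ("acetanilide", 1.0)],
    "acetanilide": [("quinoline", 1.5)],
    "quinoline": [("target_inhibitor", 4.0)],
}

def cheapest_route(graph, start, goal):
    heap, seen = [(0.0, start, [start])], set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, c in graph.get(node, []):
            heapq.heappush(heap, (cost + c, nxt, path + [nxt]))
    return None

# The two-step detour via acetanilide beats the direct route on cost.
print(cheapest_route(graph, "aniline", "target_inhibitor"))
```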
A New Distributed Optimization for Community Microgrids Scheduling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Starke, Michael R; Tomsovic, Kevin
This paper proposes a distributed optimization model for community microgrids considering the building thermal dynamics and customer comfort preference. The microgrid central controller (MCC) minimizes the total cost of operating the community microgrid, including fuel cost, purchasing cost, battery degradation cost and voluntary load shedding cost based on the customers' consumption, while the building energy management systems (BEMS) minimize their electricity bills as well as the cost associated with customer discomfort due to room temperature deviation from the set point. The BEMSs and the MCC exchange information on energy consumption and prices. When the optimization converges, the distributed generation scheduling, energy storage charging/discharging and customers' consumption as well as the energy prices are determined. In particular, we integrate the detailed thermal dynamic characteristics of buildings into the proposed model. The heating, ventilation and air-conditioning (HVAC) systems can be scheduled intelligently to reduce the electricity cost while maintaining the indoor temperature in the comfort range set by customers. Numerical simulation results show the effectiveness of the proposed model.
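A sketch of the kind of price-exchange coordination described above: a central controller adjusts a price until the buildings' responses match available generation. The quadratic discomfort terms and all constants are hypothetical; the real model includes thermal dynamics, storage, and additional cost components:

```python
# Subgradient price coordination: the MCC raises the price while aggregate
# BEMS demand exceeds capacity, and each BEMS responds in closed form.
capacity = 90.0                                   # available generation (kW)
prefs = [(30.0, 0.5), (40.0, 0.8), (50.0, 0.4)]   # (preferred load, discomfort weight)

def bems_response(price, preferred, weight):
    # Each BEMS minimizes price*u + weight*(u - preferred)^2.
    return max(0.0, preferred - price / (2 * weight))

price = 0.0
for _ in range(500):
    demand = sum(bems_response(price, p, w) for p, w in prefs)
    price = max(0.0, price + 0.01 * (demand - capacity))  # subgradient update

loads = [round(bems_response(price, p, w), 1) for p, w in prefs]
print(round(price, 3), loads, round(sum(loads), 1))   # loads sum to capacity
```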
Capitanescu, F; Rege, S; Marvuglia, A; Benetto, E; Ahmadi, A; Gutiérrez, T Navarrete; Tiruta-Barna, L
2016-07-15
Empowering decision makers with cost-effective solutions for reducing the environmental burden of industrial processes, at both the design and operation stages, is nowadays a major worldwide concern. The paper addresses this issue for the sector of drinking water production plants (DWPPs), seeking optimal solutions that trade off operating cost and life cycle assessment (LCA)-based environmental impact while satisfying outlet water quality criteria. This leads to a challenging bi-objective constrained optimization problem, which relies on a computationally expensive, intricate process-modelling simulator of the DWPP and has to be solved with a limited computational budget. Since mathematical programming methods are unusable in this case, the paper examines the performance in tackling these challenges of six off-the-shelf state-of-the-art global meta-heuristic optimization algorithms suitable for such simulation-based optimization, namely the Strength Pareto Evolutionary Algorithm (SPEA2), Non-dominated Sorting Genetic Algorithm (NSGA-II), Indicator-Based Evolutionary Algorithm (IBEA), Multi-Objective Evolutionary Algorithm based on Decomposition (MOEA/D), Differential Evolution (DE), and Particle Swarm Optimization (PSO). The optimization results reveal that good reductions in both the operating cost and the environmental impact of the DWPP can be obtained. Furthermore, NSGA-II outperforms the other competing algorithms, while MOEA/D and DE perform unexpectedly poorly. Copyright © 2016 Elsevier Ltd. All rights reserved.
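For context, the ranking step inside NSGA-II (one of the algorithms compared above) is non-dominated sorting; a plain-Python sketch on made-up (cost, impact) pairs follows, whereas the real objective values come from the DWPP process simulator:

```python
# Non-dominated sorting: peel off successive Pareto fronts of a point set.
# (NSGA-II uses a bookkeeping-optimized version; this is the plain form.)
def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(points):
    fronts, remaining = [], list(range(len(points)))
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i]) for j in remaining)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

evals = [(1.2, 9.0), (2.0, 4.0), (3.5, 3.8), (2.5, 7.0), (4.0, 1.0)]
for rank, front in enumerate(non_dominated_sort(evals), start=1):
    print(f"front {rank}:", [evals[i] for i in front])
```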
Cha, E; Bar, D; Hertl, J A; Tauer, L W; Bennett, G; González, R N; Schukken, Y H; Welcome, F L; Gröhn, Y T
2011-09-01
The objective of this study was to estimate the cost of 3 different types of clinical mastitis (CM) (caused by gram-positive bacteria, gram-negative bacteria, and other organisms) at the individual cow level and thereby identify the economically optimal management decision for each type of mastitis. We made modifications to an existing dynamic optimization and simulation model, studying the effects of various factors (incidence of CM, milk loss, pregnancy rate, and treatment cost) on the cost of different types of CM. The average costs per case (US$) of gram-positive, gram-negative, and other CM were $133.73, $211.03, and $95.31, respectively. This model provided a more informed decision-making process in CM management for optimal economic profitability and determined that 93.1% of gram-positive CM cases, 93.1% of gram-negative CM cases, and 94.6% of other CM cases should be treated. The main contributor to the total cost per case was treatment cost for gram-positive CM (51.5% of the total cost per case), milk loss for gram-negative CM (72.4%), and treatment cost for other CM (49.2%). The model affords versatility as it allows for parameters such as production costs, economic values, and disease frequencies to be altered. Therefore, cost estimates are the direct outcome of the farm-specific parameters entered into the model. Thus, this model can provide farmers economically optimal guidelines specific to their individual cows suffering from different types of CM. Copyright © 2011 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Nair, Harish; Verma, Vasundhara R; Theodoratou, Evropi; Zgaga, Lina; Huda, Tanvir; Simões, Eric A F; Wright, Peter F; Rudan, Igor; Campbell, Harry
2011-04-13
Respiratory Syncytial Virus (RSV) is the leading cause of acute lower respiratory infections (ALRI) in children. It is estimated to cause approximately 33.8 million new episodes of ALRI in children annually, 96% of these occurring in developing countries. It is also estimated to result in about 53,000 to 199,000 deaths annually in young children. Currently there are several vaccine and immunoprophylaxis candidates against RSV in the developmental phase targeting active and passive immunization. We used a modified CHNRI methodology for setting priorities in health research investments. This was done in two stages. In Stage I, we systematically reviewed the literature related to emerging vaccines against RSV relevant to 12 criteria of interest. In Stage II, we conducted an expert opinion exercise by inviting 20 experts (leading basic scientists, international public health researchers, international policy makers and representatives of pharmaceutical companies). The policy makers and industry representatives accepted our invitation on the condition of anonymity, due to the sensitive nature of their involvement in such exercises. They answered questions from the CHNRI framework and their "collective optimism" towards each criterion was documented on a scale from 0 to 100%. In the case of candidate vaccines for active immunization of infants against RSV, the experts expressed very low levels of optimism for low product cost, affordability and low cost of development; moderate levels of optimism regarding the criteria of answerability, likelihood of efficacy, deliverability, sustainability and acceptance to end users for the interventions; and high levels of optimism regarding impact on equity and acceptance to health workers. While considering the candidate vaccines targeting pregnant women, the panel expressed low levels of optimism for low product cost, affordability, answerability and low development cost; moderate levels of optimism for likelihood of efficacy, deliverability, sustainability and impact on equity; high levels of optimism regarding acceptance to end users and health workers. The group also evaluated immunoprophylaxis against RSV using monoclonal antibodies and expressed no optimism towards low product cost; very low levels of optimism regarding deliverability, affordability, sustainability, low implementation cost and impact on equity; moderate levels of optimism regarding the criteria of answerability, likelihood of efficacy, acceptance to end-users and health workers; and high levels of optimism regarding low development cost. They felt that either of these vaccines would have a high impact on reducing the burden of childhood ALRI due to RSV and reduce the overall childhood ALRI burden by a maximum of about 10%. Although monoclonal antibodies have proven to be effective in providing protection to high-risk infants, their introduction in resource-poor settings might be limited by the high costs associated with them. Candidate vaccines for active immunization of infants against RSV hold the greatest promise. Introduction of a low-cost vaccine against RSV would reduce the inequitable distribution of the burden due to childhood ALRI and would most likely have a high impact on morbidity and mortality due to severe ALRI.
NASA Technical Reports Server (NTRS)
1974-01-01
Weight- and cost-optimized EOS communication links are determined for 2.25, 7.25, 14.5, 21, and 60 GHz systems and for a 10.6 micron homodyne detection laser system. EOS-to-ground links are examined for 556, 834, and 1112 km EOS orbits, with ground terminals at the Network Test and Tracking Facility and at Goldstone. Optimized 21 GHz and 10.6 micron links are also examined. For the EOS-to-Tracking and Data Relay Satellite-to-ground link, signal-to-noise ratios of the uplink and downlink are also optimized for minimum overall cost or spaceborne weight. Finally, the optimized 21 GHz EOS-to-ground link is determined for various precipitation rates. All system performance parameters and mission-dependent constraints are presented, as are the system cost and weight functional dependencies. The features and capabilities of the computer program used to perform the foregoing analyses are described.
Optimizing water purchases for an Environmental Water Account
NASA Astrophysics Data System (ADS)
Lund, J. R.; Hollinshead, S. P.
2005-12-01
State and federal agencies in California have established an Environmental Water Account (EWA) to buy water to protect endangered fish in the San Francisco Bay/Sacramento-San Joaquin Delta Estuary. This paper presents a three-stage probabilistic optimization model that identifies least-cost strategies for purchasing water for the EWA given hydrologic, operational, and biological uncertainties. This approach minimizes the expected cost of long-term, spot, and option water purchases to meet uncertain flow dedications for fish. The model prescribes the location, timing, and type of optimal water purchases and can illustrate how least-cost strategies change with hydrologic, operational, biological, and cost inputs. Details of the optimization model's application to California's EWA are provided with a discussion of its utility for strategic planning and policy purposes. Limitations in and sensitivity analysis of the model's representation of EWA operations are discussed, as are operational and research recommendations.
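A stripped-down version of the purchase problem, reduced to two decision stages (long-term and spot) over three hypothetical hydrologic scenarios; the option contracts and third stage of the paper's model are omitted, and all prices and demands are invented for illustration:

```python
from scipy.optimize import linprog
import numpy as np

probs = np.array([0.3, 0.4, 0.3])            # hydrologic scenario weights
demand = np.array([50.0, 80.0, 120.0])       # flow needed for fish per scenario
c_long = 60.0                                # $/unit, paid with certainty
c_spot = np.array([70.0, 90.0, 140.0])       # $/unit, paid only if scenario occurs

# decision vector: [x_long, x_spot_wet, x_spot_normal, x_spot_dry]
c = np.concatenate(([c_long], probs * c_spot))   # expected-cost objective
A_ub = np.zeros((3, 4))
A_ub[:, 0] = -1.0                                # long-term water serves all scenarios
A_ub[np.arange(3), np.arange(1, 4)] = -1.0       # spot water serves only its scenario
res = linprog(c, A_ub=A_ub, b_ub=-demand, bounds=[(0.0, None)] * 4)
print(f"long-term: {res.x[0]:.0f}, spot by scenario: {res.x[1:].round(0)}")
print(f"expected cost: ${res.fun:,.0f}")
```

With these numbers the cheap certain water covers the common base of demand (80 units long-term) and expensive spot purchases cover only the dry-scenario remainder, which is the qualitative structure the model is after.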
Brown, Zachary S.; Dickinson, Katherine L.; Kramer, Randall A.
2014-01-01
The evolutionary dynamics of insecticide resistance in harmful arthropods has economic implications, not only for the control of agricultural pests (as has been well studied), but also for the control of disease vectors, such as malaria-transmitting Anopheles mosquitoes. Previous economic work on insecticide resistance illustrates the policy relevance of knowing whether insecticide resistance mutations involve fitness costs. Using a theoretical model, this article investigates economically optimal strategies for controlling malaria-transmitting mosquitoes when there is the potential for mosquitoes to evolve resistance to insecticides. Consistent with previous literature, we find that fitness costs are a key element in the computation of economically optimal resistance management strategies. Additionally, our models indicate that different biological mechanisms underlying these fitness costs (e.g., increased adult mortality and/or decreased fecundity) can significantly alter economically optimal resistance management strategies. PMID:23448053
Design and optimization of all-optical networks
NASA Astrophysics Data System (ADS)
Xiao, Gaoxi
1999-10-01
In this thesis, we present our research results on the design and optimization of all-optical networks. We divide our results into the following four parts: 1. In the first part, we consider broadcast-and-select networks. In our research, we propose an alternative and cheaper network configuration to hide the tuning time. In addition, we derive lower bounds on the optimal schedule lengths and prove that they are tighter than the best existing bounds. 2. In the second part, we consider all-optical wide area networks. We propose a set of algorithms for allocating a given number of wavelength converters (WCs) to the nodes. We adopt a simulation-based optimization approach, in which we collect utilization statistics of WCs from computer simulation and then perform optimization to allocate the WCs. Therefore, our algorithms are widely applicable and they are not restricted to any particular model and assumption. We have conducted extensive computer simulation on regular and irregular networks under both uniform and non-uniform traffic. We see that our method can get nearly the same performance as that of full wavelength conversion by using a much smaller number of WCs. Compared with the best existing method, the results show that our algorithms can significantly reduce (1) the overall blocking probability (i.e., better mean quality of service) and (2) the maximum of the blocking probabilities experienced at all the source nodes (i.e., better fairness). Equivalently, for a given performance requirement on blocking probability, our algorithms can significantly reduce the number of WCs required. 3. In the third part, we design and optimize the physical topology of all-optical wide area networks. We show that the design problem is NP-complete and we propose a heuristic algorithm called the two-stage cut saturation algorithm for this problem. Simulation results show that (1) the proposed algorithm can efficiently design networks with low cost and high utilization, and (2) if wavelength converters are available to support full wavelength conversion, the cost of the links can be significantly reduced. 4. In the fourth part, we consider all-optical wide area networks with multiple fibers per link. We design a node configuration for all-optical networks. We exploit the flexibility that, to establish a lightpath across a node, we can select any one of the available channels in the incoming link and any one of the available channels in the outgoing link. As a result, the proposed node configuration requires a small number of small optical switches while it can achieve nearly the same performance as the existing one. And there is no additional crosstalk other than the intrinsic crosstalk within each single-chip optical switch. (Abstract shortened by UMI.)
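The allocation step in the second part can be sketched as a greedy marginal-gain loop; the blocking function below is a stand-in for the utilization statistics the thesis collects from network simulation, so treat it as an illustration of the mechanics rather than the thesis algorithm:

```python
import heapq

def simulated_blocking(traffic, n_wc):
    """Stand-in for 'run the network simulation and measure blocking at this
    node with n_wc converters'; any decreasing, convex curve shows the idea."""
    return traffic / (1.0 + n_wc) ** 2

def allocate_converters(traffics, total_wc):
    """Greedily give each converter to the node with the largest estimated
    drop in blocking probability (most negative delta)."""
    wcs = [0] * len(traffics)
    heap = [(simulated_blocking(t, 1) - simulated_blocking(t, 0), i)
            for i, t in enumerate(traffics)]
    heapq.heapify(heap)
    for _ in range(total_wc):
        _, i = heapq.heappop(heap)          # node with best marginal gain
        wcs[i] += 1
        delta = (simulated_blocking(traffics[i], wcs[i] + 1)
                 - simulated_blocking(traffics[i], wcs[i]))
        heapq.heappush(heap, (delta, i))
    return wcs

print(allocate_converters([0.9, 0.5, 0.2, 0.8], total_wc=6))  # -> [2, 1, 1, 2]
```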
Neilson, Aileen R; Bruhn, Hanne; Bond, Christine M; Elliott, Alison M; Smith, Blair H; Hannaford, Philip C; Holland, Richard; Lee, Amanda J; Watson, Margaret; Wright, David; McNamee, Paul
2015-04-01
To explore differences in mean costs (from a UK National Health Service perspective) and effects of pharmacist-led management of chronic pain in primary care evaluated in a pilot randomised controlled trial (RCT), and to estimate optimal sample size for a definitive RCT. Regression analysis of costs and effects, using intention-to-treat and expected value of sample information analysis (EVSI). Six general practices: Grampian (3); East Anglia (3). 125 patients with complete resource use and short form-six-dimension questionnaire (SF-6D) data at baseline, 3 months and 6 months. Patients were randomised to either pharmacist medication review with face-to-face pharmacist prescribing or pharmacist medication review with feedback to general practitioner or treatment as usual (TAU). Differences in mean total costs and effects measured as quality-adjusted life years (QALYs) at 6 months and EVSI for sample size calculation. Unadjusted total mean costs per patient were £452 for prescribing (SD: £466), £570 for review (SD: £527) and £668 for TAU (SD: £1333). After controlling for baseline costs, the adjusted mean cost differences per patient relative to TAU were £77 for prescribing (95% CI -82 to 237) and £54 for review (95% CI -103 to 212). Unadjusted mean QALYs were 0.3213 for prescribing (SD: 0.0659), 0.3161 for review (SD: 0.0684) and 0.3079 for TAU (SD: 0.0606). Relative to TAU, the adjusted mean differences were 0.0069 for prescribing (95% CI -0.0091 to 0.0229) and 0.0097 for review (95% CI -0.0054 to 0.0248). The EVSI suggested the optimal future trial size was between 460 and 690, and between 540 and 780 patients per arm using a threshold of £30,000 and £20,000 per QALY gained, respectively. Compared with TAU, pharmacist-led interventions for chronic pain appear more costly and provide similar QALYs. However, these estimates are imprecise due to the small size of the pilot trial. The EVSI indicates that a larger trial is necessary to obtain more precise estimates of differences in mean effects and costs between treatment groups. ISRCTN06131530.
Trajectory Optimization for Missions to Small Bodies with a Focus on Scientific Merit.
Englander, Jacob A; Vavrina, Matthew A; Lim, Lucy F; McFadden, Lucy A; Rhoden, Alyssa R; Noll, Keith S
2017-01-01
Trajectory design for missions to small bodies is tightly coupled both with the selection of targets for a mission and with the choice of spacecraft power, propulsion, and other hardware. Traditional methods of trajectory optimization have focused on finding the optimal trajectory for an a priori selection of destinations and spacecraft parameters. Recent research has expanded the field of trajectory optimization to multidisciplinary systems optimization that includes spacecraft parameters. The logical next step is to extend the optimization process to include target selection based not only on engineering figures of merit but also on scientific value. This paper presents a new technique to solve the multidisciplinary mission optimization problem for small-body missions, including classical trajectory design, the choice of spacecraft power and propulsion systems, and the scientific value of the targets. This technique, when combined with modern parallel computers, enables a holistic view of the small-body mission design process that previously required iteration among several different design processes.
Development of 90 kgf Class CAMUI Hybrid Rocket for a CanSat Experiment
NASA Astrophysics Data System (ADS)
Nagata, Harunori; Uematsu, Tsutomu; Ito, Mitsunori; Kakikura, Akihito; Kaneko, Yudai; Mori, Kazuhiro; Murai, Norikazu; Sato, Tatsuhiro; Mitsuhashi, Ryuichi; Totani, Tsuyoshi
A newly designed CAMUI hybrid rocket motor of 900 N (90 kgf) thrust class, CAMUI-90, was developed. It uses a combination of polyethylene and liquid oxygen as propellants. The CAMUI hybrid rocket is an explosive-free small rocket motor intended to realize a small launch system with low cost and flexibility. The motor produces a thrust of 900 N for four seconds, keeping the optimal characteristic exhaust velocity of the fuel-oxidizer combination (exceeding 1800 m/s). A main application of the CAMUI-90 motor is for a CanSat experiment. A launch vehicle employing the CAMUI-90 motor, 120 mm in diameter and 3.05 m in length, accelerates a payload of 500 g to 140 m/s in four seconds and reaches an altitude of about 1 km. The first launch of this vehicle was in December 2006.
NASA Astrophysics Data System (ADS)
Frotscher, M.; Kahleyss, F.; Simon, T.; Biermann, D.; Eggeler, G.
2011-07-01
NiTi shape memory alloys (SMA) are used for a variety of applications, including medical implants and tools as well as actuators, making use of their unique properties. However, the hardness and strength of the material, in combination with its high elasticity, can make the machining of components challenging. The most common machining techniques used today are laser cutting and electrical discharge machining (EDM). In this study, we report on the machining of small structures into binary NiTi sheets, applying alternative processing methods that are well established for other metallic materials. Our results indicate that water jet machining and micro milling can be used to machine delicate structures, even in very thin NiTi sheets. Further work is required to optimize the cut quality and the machining speed in order to increase the cost-effectiveness and to make both methods more competitive.
Modeling of plasma in a hybrid electric propulsion for small satellites
NASA Astrophysics Data System (ADS)
Jugroot, Manish; Christou, Alex
2016-09-01
As space flight becomes more available and reliable, space-based technology is allowing smaller and more cost-effective satellites to be produced. Working in large swarms, many small satellites can provide additional capabilities while reducing risk. These satellites require efficient, long-term propulsion for manoeuvres, orbit maintenance and de-orbiting. The high exhaust velocity and propellant efficiency of electric propulsion makes it ideally suited for low-thrust missions. The two dominant types of electric propulsion, namely ion thrusters and Hall thrusters, excel in different mission types. In this work, a novel hybrid electric propulsion design is modelled to enhance understanding of key phenomena and evaluate performance. Specifically, the modelled hybrid thruster seeks to overcome issues with existing ion and Hall thruster designs. Scaling issues and optimization of the design are discussed, and a conceptual design of a hybrid spacecraft plasma engine is investigated.
Waveguide-Mode Terahertz Free Electron Lasers Driven by Magnetron-Based Microtrons
NASA Astrophysics Data System (ADS)
Jeong, Young Uk; Miginsky, Sergey; Gudkov, Boris; Lee, Kitae; Mun, Jungho; Shim, Gyu Il; Bae, Sangyoon; Kim, Hyun Woo; Jang, Kyu-Ha; Park, Sunjeong; Park, Seong Hee; Vinokurov, Nikolay
2016-04-01
We have developed small-sized terahertz free-electron lasers (FELs) using low-cost, compact microtrons combined with magnetrons as high-power RF sources. We could stabilize the bunch repetition rate by optimizing a modulator for the magnetron and by coupling the magnetron with an accelerating cavity in the microtron. By developing high-performance undulators and low-loss waveguide-mode resonators with small cross-sectional areas, we could strengthen the interaction between the electron beam and the THz wave inside the FEL resonators to achieve lasing even with low-current electron beams from the microtron. We used a parallel-plate waveguide in a planar electromagnet undulator for our first THz FEL. We are trying to reduce the size of the FEL resonator by combining a dielectric-coated circular waveguide and a variable-period helical undulator to realize a table-top THz FEL for security inspection at airports.
Ward, Alexandra; Bozkaya, Duygu; Fleischmann, Jochen; Dubois, Dominique; Sabatowski, Rainer; Caro, J Jaime
2007-10-01
The Osmotic controlled-Release Oral delivery System (OROS) hydromorphone ensures continuous release of hydromorphone over 24 hours. It is anticipated that this will facilitate optimal pain relief and improve quality of sleep and compliance. This simulation compared managing chronic osteoarthritis pain with once-daily OROS hydromorphone with an equianalgesic dose of extended-release (ER) oxycodone administered two or three times a day. This discrete event simulation follows patients for a year after initiating opioid treatment. Pairs of identical patients are created; one receives OROS hydromorphone, the other ER oxycodone; they undergo dose adjustments and after titration can be dissatisfied or satisfied, suffer adverse events, pain recurrence, or discontinue the opioid. Each is assigned an initial sleep problems score, and an improved score from a treatment-dependent distribution at the end of titration; these are translated to a utility value. Utilities are assigned pre-treatment and updated until the patient reaches the optimal dose or is non-compliant or dissatisfied. The OROS hydromorphone and ER oxycodone doses are converted to equianalgesic morphine doses using the following ratios: hydromorphone to morphine, 1:5; oxycodone to morphine, 1:2. Sensitivity analyses explored uncertainty in the conversion ratios and other key parameters. Direct medical costs are in 2005 euros. Over 1 year on a mean daily morphine-equivalent dose of 90 mg, 14% were estimated to be dissatisfied with each opioid. OROS hydromorphone was predicted to yield 0.017 additional quality-adjusted life years (QALYs) per patient for a small additional annual cost (€141/patient), yielding an incremental cost-effectiveness ratio (ICER) of €8343/QALY gained. Changing the assumed oxycodone:morphine conversion ratio to 1:1.5 led to lower net costs of €68 per patient and €3979/QALY, and changing the hydromorphone ratio to 1:7.5 led to savings. Based on these analyses, OROS hydromorphone is expected to yield health benefits at reasonable cost in Germany.
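The headline ICER is just the incremental cost divided by the incremental QALYs; recomputing it from the rounded figures quoted above lands close to, but not exactly on, the published €8343/QALY, the gap reflecting rounding of the underlying inputs:

```python
delta_cost_eur = 141.0    # incremental annual cost per patient (OROS vs ER)
delta_qalys = 0.017       # incremental QALYs per patient
icer = delta_cost_eur / delta_qalys
print(f"ICER = {icer:,.0f} euro/QALY")   # ~8,294 from rounded inputs; reported 8,343
```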
Power management of remote microgrids considering battery lifetime
NASA Astrophysics Data System (ADS)
Chalise, Santosh
Currently, 20% (1.3 billion) of the world's population still lacks access to electricity, and many live in remote areas where connection to the grid is not economical or practical. Remote microgrids could be the solution to the problem because they are designed to provide power for small communities within clearly defined electrical boundaries. Reducing the cost of electricity for remote microgrids can help to increase access to electricity for populations in remote areas and developing countries. The integration of renewable energy and batteries in diesel-based microgrids has been shown to be effective in reducing fuel consumption. However, the operational cost remains high due to the short lifetime of batteries, which are heavily used to improve the system's efficiency. In microgrid operation, a battery can act as a source to augment the generator or as a load to ensure full-load operation. In addition, a battery increases the utilization of PV by storing extra energy. However, the battery has a limited energy throughput. Therefore, a balance between fuel consumption and battery lifetime throughput is required in order to lower the cost of operation. This work presents a two-layer power management system for remote microgrids. The first layer is day-ahead scheduling, in which the power set points of dispatchable resources are calculated. The second layer is real-time dispatch, which accepts the set points from the scheduling layer and dispatches resources accordingly. A novel scheduling algorithm that considers battery lifetime in the optimization is proposed and is expected to reduce the operational cost of the microgrid. The method is based on a goal programming approach that treats the fuel cost and the battery wear cost as two goals to achieve. The effectiveness of this method was evaluated through a simulation study of a PV-diesel hybrid microgrid using both deterministic and stochastic optimization approaches.
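A one-period toy version of the goal-programming idea, not the thesis implementation: fuel cost and battery wear cost each get an aspiration level, and the dispatch minimizes the weighted overshoot of the two goals. Linear costs keep it a small LP; all parameter values are invented:

```python
from scipy.optimize import linprog
import numpy as np

load, pv = 100.0, 30.0             # kW to serve and kW of free PV
c_fuel, c_wear = 0.30, 0.10        # $/kWh of generator fuel and battery wear
goal_fuel, goal_wear = 12.0, 2.0   # aspiration levels ($) for the two goals
w_fuel, w_wear = 1.0, 1.0          # goal weights

# decision vector: [gen_kw, bat_kw, fuel_overshoot, wear_overshoot]
c = np.array([0.0, 0.0, w_fuel, w_wear])           # minimize weighted overshoot
A_eq = np.array([[1.0, 1.0, 0.0, 0.0]])            # power balance: gen + bat = net load
b_eq = np.array([load - pv])
A_ub = np.array([[c_fuel, 0.0, -1.0, 0.0],         # fuel cost - overshoot <= goal
                 [0.0, c_wear, 0.0, -1.0]])        # wear cost - overshoot <= goal
b_ub = np.array([goal_fuel, goal_wear])
bounds = [(0.0, None), (0.0, 40.0), (0.0, None), (0.0, None)]  # 40 kW battery cap
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
gen, bat = res.x[:2]
print(f"generator {gen:.1f} kW, battery {bat:.1f} kW")   # -> 40.0 and 30.0
```

Because fuel overshoot costs 0.30 per shifted kW but wear overshoot only 0.10, the LP leans on the battery until the fuel goal is exactly met, which is the kind of fuel/wear balance the thesis is describing.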
A Rapid Aerodynamic Design Procedure Based on Artificial Neural Networks
NASA Technical Reports Server (NTRS)
Rai, Man Mohan
2001-01-01
An aerodynamic design procedure that uses neural networks to model the functional behavior of the objective function in design space has been developed. This method incorporates several improvements to an earlier method that employed a strategy called parameter-based partitioning of the design space in order to reduce the computational costs associated with design optimization. As with the earlier method, the current method uses a sequence of response surfaces to traverse the design space in search of the optimal solution. The new method yields significant reductions in computational costs by using composite response surfaces with better generalization capabilities and by exploiting synergies between the optimization method and the simulation codes used to generate the training data. These reductions in design optimization costs are demonstrated for a turbine airfoil design study where a generic shape is evolved into an optimal airfoil.
De Vilmorin, Philippe; Slocum, Ashley; Jaber, Tareq; Schaefer, Oliver; Ruppach, Horst; Genest, Paul
2015-01-01
This article describes a four virus panel validation of EMD Millipore's (Bedford, MA) small virus-retentive filter, Viresolve® Pro, using TrueSpike™ viruses for a Biogen Idec process intermediate. The study was performed at Charles River Labs in King of Prussia, PA. Greater than 900 L/m² filter throughput was achieved with the approximately 8 g/L monoclonal antibody feed. No viruses were detected in any filtrate samples. All virus log reduction values were between ≥3.66 and ≥5.60. The use of TrueSpike™ at Charles River Labs allowed Biogen Idec to achieve a more representative scaled-down model and potentially reduce the cost of its virus filtration step and the overall cost of goods. The body of data presented here is an example of the benefits of following the guidance from the PDA Technical Report 47, The Preparation of Virus Spikes Used for Viral Clearance Studies. The safety of biopharmaceuticals is assured through the use of multiple steps in the purification process that are capable of virus clearance, including filtration with virus-retentive filters. The amount of virus present at the downstream stages in the process is expected to be and is typically low. The viral clearance capability of the filtration step is assessed in a validation study. The study utilizes a small version of the larger manufacturing size filter, and a large, known amount of virus is added to the feed prior to filtration. Viral assay before and after filtration allows the virus log reduction value to be quantified. The representativeness of the small-scale model is supported by comparing large-scale filter performance to small-scale filter performance. The large-scale and small-scale filtration runs are performed using the same operating conditions. If the filter performance at both scales is comparable, it supports the applicability of the virus log reduction value obtained with the small-scale filter to the large-scale manufacturing process. However, the virus preparation used to spike the feed material often contains impurities that contribute adversely to virus filter performance in the small-scale model. The added impurities from the virus spike, which are not present at manufacturing scale, compromise the scale-down model and put into question the direct applicability of the virus clearance results. Another consequence of decreased filter performance due to virus spike impurities is the unnecessary over-sizing of the manufacturing system to match the low filter capacity observed in the scale-down model. This article describes how improvements in mammalian virus spike purity ensure the validity of the log reduction value obtained with the scale-down model and support economically optimized filter usage.
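For reference, a log reduction value is the base-10 logarithm of total virus challenging the filter over total virus in the filtrate; when no virus is detected, the filtrate term is set at the assay's detection limit, which is why the values above carry "≥" signs. The numbers below are illustrative only:

```python
import math

def log_reduction_value(titer_in, vol_in_ml, titer_out, vol_out_ml):
    """LRV = log10(total virus challenging the filter / total virus passing)."""
    return math.log10((titer_in * vol_in_ml) / (titer_out * vol_out_ml))

# e.g. a 1e7/mL spike in 100 mL of feed, filtrate assayed at a detection
# limit of ~31.6/mL in 100 mL -> a 'greater than or equal' LRV
print(f"LRV >= {log_reduction_value(1e7, 100.0, 31.6, 100.0):.2f}")   # 5.50
```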
Niu, Xun; Terekhov, Alexander V.; Latash, Mark L.; Zatsiorsky, Vladimir M.
2013-01-01
The goal of the research is to reconstruct the unknown cost (objective) function(s) presumably used by the neural controller for sharing the total force among individual fingers in multi-finger prehension. The cost function was determined from experimental data by applying the recently developed Analytical Inverse Optimization (ANIO) method (Terekhov et al., 2010). The core of the ANIO method is the Theorem of Uniqueness that specifies conditions for unique (with some restrictions) estimation of the objective functions. In the experiment, subjects (n=8) grasped an instrumented handle and maintained it at rest in the air with various external torques, loads, and target grasping forces applied to the object. The experimental data recorded from 80 trials showed a tendency to lie on a 2-dimensional hyperplane in the 4-dimensional finger-force space. Because the constraints in each trial were different, such a propensity is a manifestation of a neural mechanism (not the task mechanics). In agreement with the Lagrange principle for the inverse optimization, the plane of experimental observations was close to the plane resulting from the direct optimization. The latter plane was determined using the ANIO method. The unknown cost function was reconstructed successfully for each performer, as well as for the group data. The cost functions were found to be quadratic with non-zero linear terms. The cost functions obtained with the ANIO method yielded more accurate results than other optimization methods. The ANIO method has an evident potential for addressing the problem of optimization in motor control. PMID:22104742
ERIC Educational Resources Information Center
Liu, Xiaofeng
2003-01-01
This article considers optimal sample allocation between the treatment and control condition in multilevel designs when the costs per sampling unit vary due to treatment assignment. Optimal unequal allocation may reduce the cost from that of a balanced design without sacrificing any power. The optimum sample allocation ratio depends only on the…
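The abstract is truncated, so the closing rule below is stated as an assumption rather than quoted from the article: under the classical equal-variance analysis, the power-maximizing allocation for a fixed budget sets the sample ratio to the square root of the inverted per-unit cost ratio:

```python
import math

def optimal_allocation_ratio(cost_treatment, cost_control):
    """n_treatment / n_control maximizing power for a fixed budget,
    assuming equal outcome variances in the two arms (classical result)."""
    return math.sqrt(cost_control / cost_treatment)

# if treated units cost 4x control units, enroll half as many treated units
print(optimal_allocation_ratio(cost_treatment=4.0, cost_control=1.0))  # 0.5
```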
Contingency Contractor Optimization Phase 3 Sustainment Cost by JCA Implementation Guide
DOE Office of Scientific and Technical Information (OSTI.GOV)
Durfee, Justin David; Frazier, Christopher Rawls; Arguello, Bryan
This document provides guidance for implementing personnel-group FTE costs by JCA Tier 1 or 2 categories in the Contingency Contractor Optimization Tool – Engineering Prototype (CCOT-P). CCOT-P currently only allows FTE costs by personnel group to differ by mission. Changes will need to be made to the user-interface input pages and the database.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Al-Karaghouli, Ali; Kazmerski, L.L.
2010-04-15
This paper addresses the need for electricity in rural areas of southern Iraq and proposes a photovoltaic (PV) solar system to power a health clinic in that region. The total daily health clinic load is 31.6 kW h, and detailed loads are listed. The National Renewable Energy Laboratory (NREL) optimization computer model for distributed power, HOMER, is used to estimate the system size and its life-cycle cost. The analysis shows that the optimal system's initial cost, net present cost, and electricity cost are US$ 50,700, US$ 60,375, and US$ 0.238/kW h, respectively. These values for the PV system are compared with those of a generator used alone to supply the load. We found that the initial cost, net present cost, and electricity cost of the generator system are US$ 4500, US$ 352,303, and US$ 1.332/kW h, respectively. We conclude that using the PV system is justified on humanitarian, technical, and economic grounds.
Automated batch characterization of inkjet-printed elastomer lenses using a LEGO platform.
Sung, Yu-Lung; Garan, Jacob; Nguyen, Hoang; Hu, Zhenyu; Shih, Wei-Chuan
2017-09-10
Small, self-adhesive, inkjet-printed elastomer lenses have enabled smartphone cameras to image and resolve microscopic objects. However, the performance of different lenses within a batch is affected by hard-to-control environmental variables. We present a cost-effective platform to perform automated batch characterization of 300 lens units simultaneously for quality inspection. The system was designed and configured with LEGO bricks, 3D printed parts, and a digital camera. The scheme presented here may become the basis of a high-throughput, in-line inspection tool for quality control purposes and can also be employed for optimization of the manufacturing process.
Semiconductor laser insert with uniform illumination for use in photodynamic therapy
NASA Astrophysics Data System (ADS)
Charamisinau, Ivan; Happawana, Gemunu; Evans, Gary; Rosen, Arye; Hsi, Richard A.; Bour, David
2005-08-01
A low-cost semiconductor red laser light delivery system for esophageal cancer treatment is presented. The system is small enough for insertion into the patient's body. Scattering elements with nanoscale particles are used to achieve uniform illumination. Scattering-element optimization calculations based on Mie theory provide scattering and absorption efficiency factors for scattering particles composed of various materials. The possibility of using randomly deformed spheres and composite particles instead of perfect spheres is analyzed using an extension to Mie theory. The measured radiation pattern from a prototype light delivery system fabricated using these design criteria shows reasonable agreement with the theoretically predicted pattern.
Optimal control of nonlinear continuous-time systems in strict-feedback form.
Zargarzadeh, Hassan; Dierks, Travis; Jagannathan, Sarangapani
2015-10-01
This paper proposes a novel optimal tracking control scheme for nonlinear continuous-time systems in strict-feedback form with uncertain dynamics. The optimal tracking problem is transformed into an equivalent optimal regulation problem through a feedforward adaptive control input that is generated by modifying the standard backstepping technique. Subsequently, a neural network-based optimal control scheme is introduced to estimate the cost, or value function, over an infinite horizon for the resulting nonlinear continuous-time systems in affine form when the internal dynamics are unknown. The estimated cost function is then used to obtain the optimal feedback control input; therefore, the overall optimal control input for the nonlinear continuous-time system in strict-feedback form includes the feedforward plus the optimal feedback terms. It is shown that the estimated cost function minimizes the Hamilton-Jacobi-Bellman estimation error in a forward-in-time manner without using any value or policy iterations. Finally, optimal output feedback control is introduced through the design of a suitable observer. Lyapunov theory is utilized to show the overall stability of the proposed schemes without requiring an initial admissible controller. Simulation examples are provided to validate the theoretical results.
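For orientation, this is the standard infinite-horizon formulation that such value-function schemes approximate, written in generic notation (my paraphrase of the usual affine-form setup, not an equation quoted from the paper):

```latex
% Infinite-horizon cost and the HJB condition targeted by the NN approximator,
% for dynamics \dot{x} = f(x) + g(x)u after the tracking-to-regulation transform
V(x(t)) = \min_{u} \int_{t}^{\infty} \Big( Q\big(x(\tau)\big) + u(\tau)^{\top} R\, u(\tau) \Big)\, d\tau,
\qquad
0 = \min_{u} \Big[ Q(x) + u^{\top} R\, u + \nabla V(x)^{\top} \big( f(x) + g(x)\, u \big) \Big].
```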
NASA Astrophysics Data System (ADS)
Mahalakshmi; Murugesan, R.
2018-04-01
This paper addresses the minimization of the total greenhouse gas (GHG) emission cost in an Automated Storage and Retrieval System (AS/RS). A mathematical model is constructed based on the tax cost, penalty cost and discount cost of the GHG emissions of the AS/RS. A two-stage algorithm, namely the positive selection based clonal selection principle (PSBCSP), is used to find the optimal solution of the constructed model. In the first stage, the positive selection principle is used to reduce the search space by fixing a threshold value. In the second stage, the clonal selection principle is used to generate the best solutions. The obtained results are compared with other existing algorithms in the literature, showing that the proposed algorithm yields better results.
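A compact sketch of the two-stage scheme as I read it, with a stand-in cost function in place of the paper's tax/penalty/discount GHG model: a positive-selection pass discards candidates above a cost threshold, then a clonal-selection pass clones the best survivors with rank-dependent mutation sizes:

```python
import random

random.seed(0)

def cost(x):                          # stub for the GHG emission cost model
    return (x - 3.7) ** 2 + 1.0

def psbcsp(pop_size=40, threshold=5.0, generations=60, n_clones=5):
    pop = [random.uniform(0, 10) for _ in range(pop_size)]
    filtered = [x for x in pop if cost(x) < threshold]   # stage 1: positive selection
    pop = filtered or pop                                # keep all if none pass
    for _ in range(generations):                         # stage 2: clonal selection
        elite = sorted(pop, key=cost)[: max(2, len(pop) // 5)]
        clones = [x + random.gauss(0.0, 0.1 * (rank + 1))   # better rank -> smaller step
                  for rank, x in enumerate(elite) for _ in range(n_clones)]
        pop = elite + clones
    return min(pop, key=cost)

best = psbcsp()
print(f"best x = {best:.3f}, cost = {cost(best):.3f}")
```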
Renton, Michael
2011-01-01
Background and aims: Simulations that integrate sub-models of important biological processes can be used to ask questions about optimal management strategies in agricultural and ecological systems. Building sub-models with more detail and aiming for greater accuracy and realism may seem attractive, but is likely to be more expensive and time-consuming and result in more complicated models that lack transparency. This paper illustrates a general integrated approach for constructing models of agricultural and ecological systems that is based on the principle of starting simple and then directly testing for the need to add additional detail and complexity. Methodology: The approach is demonstrated using LUSO (Land Use Sequence Optimizer), an agricultural system analysis framework based on simulation and optimization. A simple sensitivity analysis and functional perturbation analysis is used to test to what extent LUSO's crop–weed competition sub-model affects the answers to a number of questions at the scale of the whole farming system regarding optimal land-use sequencing strategies and resulting profitability. Principal results: The need for accuracy in the crop–weed competition sub-model within LUSO depended to a small extent on the parameter being varied, but more importantly and interestingly on the type of question being addressed with the model. Only a small part of the crop–weed competition model actually affects the answers to these questions. Conclusions: This study illustrates an example application of the proposed integrated approach for constructing models of agricultural and ecological systems based on testing whether complexity needs to be added to address particular questions of interest. We conclude that this example clearly demonstrates the potential value of the general approach. Advantages of this approach include minimizing costs and resources required for model construction, keeping models transparent and easy to analyse, and ensuring the model is well suited to address the question of interest. PMID:22476477
Botwright, Siobhan; Holroyd, Taylor; Nanda, Shreya; Bloem, Paul; Griffiths, Ulla K; Sidibe, Anissa; Hutubessy, Raymond C W
2017-01-01
From 2012 to 2016, Gavi, the Vaccine Alliance, provided support for countries to conduct small-scale demonstration projects for the introduction of the human papillomavirus vaccine, with the aim of determining which human papillomavirus vaccine delivery strategies might be effective and sustainable upon national scale-up. This study reports on the operational costs and cost determinants of different vaccination delivery strategies within these projects across twelve countries using a standardized micro-costing tool. The World Health Organization Cervical Cancer Prevention and Control Costing Tool was used to collect costing data, which were then aggregated and analyzed to assess the costs and cost determinants of vaccination. Across the one-year demonstration projects, the average economic and financial costs per dose amounted to US$19.98 (standard deviation ±12.5) and US$8.74 (standard deviation ±5.8), respectively. The activities representing the greatest share of financial costs were social mobilization at approximately 30% (range, 6-67%) and service delivery at about 25% (range, 3-46%). Districts implemented varying combinations of school-based, facility-based, or outreach delivery strategies and experienced wide variation in vaccine coverage, drop-out rates, and service delivery costs, including transportation costs and per diems. Size of target population, number of students per school, and average length of time to reach an outreach post influenced cost per dose. Although the operational costs from demonstration projects are much higher than those of other routine vaccine immunization programs, findings from our analysis suggest that HPV vaccination operational costs will decrease substantially for national introduction. Vaccination costs may be decreased further by annual vaccination, high initial investment in social mobilization, or introducing/strengthening school health programs. Our analysis shows that drivers of cost are dependent on country and district characteristics. We therefore recommend that countries carry out detailed planning at the national and district levels to define a sustainable strategy for national HPV vaccine roll-out, in order to achieve the optimal balance between coverage and cost.
Dynamic optimal strategies in transboundary pollution game under learning by doing
NASA Astrophysics Data System (ADS)
Chang, Shuhua; Qin, Weihua; Wang, Xinyu
2018-01-01
In this paper, we present a transboundary pollution game, in which emission permits trading and pollution abatement costs under learning by doing are considered. In this model, the abatement cost mainly depends on the level of pollution abatement and the experience of using pollution abatement technology. We use optimal control theory to investigate the optimal emission paths and the optimal pollution abatement strategies under cooperative and noncooperative games, respectively. Additionally, the effects of parameters on the results have been examined.
ERIC Educational Resources Information Center
Congress of the U.S., Washington, DC. Senate Committee on Small Business.
The text of a Senate Committee on Small Business hearing on the cost and availability of liability insurance for small business is presented in this document. The crisis faced by small business with skyrocketing insurance rates is described in statements by Senators Lowell Weicker, Jr., Robert Kasten, Jr., Dale Bumpers, Paul Trible, Jr., James…
Periasamy, Rathinasamy; Palvannan, Thayumanavan
2010-12-01
Production of laccase using a submerged culture of Pleurotus ostreatus IMI 395545 was optimized by the Taguchi orthogonal array (OA) design of experiments (DOE) methodology. This approach facilitates the study of the interactions of a large number of variables spanned by factors and their settings, with a small number of experiments, leading to considerable savings in time and cost for process optimization. This methodology optimizes the number of impact factors and enables calculation of their interactions in the production of industrial enzymes. Eight factors, viz. glucose, yeast extract, malt extract, inoculum, mineral solution, inducer (1 mM CuSO₄) and amino acid (l-asparagine) at three levels and pH at two levels, with an OA layout of L18 (2¹ × 3⁷), were selected for the proposed experimental design. The laccase yield obtained from the 18 sets of fermentation experiments performed with the selected factors and levels was further processed with Qualitek-4 software. The optimized conditions showed an enhanced laccase expression of 86.8% (from 485.0 to 906.3 U). The combination of factors was further validated for laccase production and reactive blue 221 decolorization. The results revealed an enhanced laccase yield of 32.6% and dye decolorization up to 84.6%. This methodology allows the complete evaluation of main and interaction factors.
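The main-effects analysis behind such an orthogonal-array study reduces to averaging the response at each level of each factor and keeping the best level. The sketch below uses a tiny L4(2³) array and invented yields for clarity, rather than the L18(2¹ × 3⁷) layout of the study:

```python
import numpy as np

# L4(2^3) orthogonal array: 4 runs cover 3 two-level factors evenly
design = np.array([[0, 0, 0],
                   [0, 1, 1],
                   [1, 0, 1],
                   [1, 1, 0]])
activity = np.array([480.0, 610.0, 550.0, 905.0])   # illustrative laccase yields (U)

for f in range(design.shape[1]):
    level_means = [activity[design[:, f] == lvl].mean() for lvl in (0, 1)]
    best = int(np.argmax(level_means))
    print(f"factor {f}: level means {level_means}, pick level {best}")
```

Because every factor level appears equally often against every level of the other factors, each column can be analyzed independently, which is what makes 18 runs sufficient for eight factors in the study above.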
NASA Astrophysics Data System (ADS)
Li, You-Rong; Du, Mei-Tang; Wang, Jian-Ning
2012-12-01
This paper focuses on the research of an evaporator with a binary mixture of organic working fluids in the organic Rankine cycle. Exergoeconomic analysis and performance optimization were performed based on the first and second laws of thermodynamics and the exergoeconomic theory. The annual total cost per unit heat transfer rate was introduced as the objective function. In this model, the exergy loss cost caused by the heat transfer irreversibility and the capital cost were taken into account; however, the exergy losses due to the frictional pressure drops, heat dissipation to the surroundings, and the flow imbalance were neglected. The variation laws of the annual total cost with respect to the number of transfer units and the temperature ratios were presented. Optimal design parameters that minimize the objective function were obtained, and the effects of some important dimensionless parameters on the optimal performances were also discussed for three types of evaporator flow arrangements. In addition, the optimal design parameters of evaporators were compared with those of condensers.
Iterative pass optimization of sequence data
NASA Technical Reports Server (NTRS)
Wheeler, Ward C.
2003-01-01
The problem of determining the minimum-cost hypothetical ancestral sequences for a given cladogram is known to be NP-complete. This "tree alignment" problem has motivated the considerable effort placed in multiple sequence alignment procedures. Wheeler in 1996 proposed a heuristic method, direct optimization, to calculate cladogram costs without the intervention of multiple sequence alignment. This method, though more efficient in time and more effective in cladogram length than many alignment-based procedures, greedily optimizes nodes based on descendant information only. In their proposal of an exact multiple alignment solution, Sankoff et al. in 1976 described a heuristic procedure--the iterative improvement method--to create alignments at internal nodes by solving a series of median problems. The combination of a three-sequence direct optimization with iterative improvement and a branch-length-based cladogram cost procedure provides an algorithm that frequently results in superior (i.e., lower) cladogram costs. This iterative pass optimization is both computation and memory intensive, but economies can be made to reduce this burden. An example in arthropod systematics is discussed.
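A toy rendition of the iterative-improvement step: replace each internal node's sequence by the median of its three neighbors and repeat until the tree cost stops dropping. Real tree alignment takes medians under edit distance with alignment; equal-length sequences under Hamming distance (column-wise majority medians) keep the sketch short:

```python
from collections import Counter

def median3(a, b, c):
    """Column-wise majority: the Hamming median of three equal-length strings."""
    return "".join(Counter(col).most_common(1)[0][0] for col in zip(a, b, c))

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def tree_cost(nodes, edges):
    return sum(hamming(nodes[u], nodes[v]) for u, v in edges)

# unrooted 4-leaf tree: leaves 0-3, internal nodes 4 and 5
nodes = ["ACGT", "ACGG", "TCGA", "TGGA", "AAAA", "AAAA"]
edges = [(0, 4), (1, 4), (4, 5), (2, 5), (3, 5)]
neighbors = {4: [0, 1, 5], 5: [2, 3, 4]}

cost = tree_cost(nodes, edges)
while True:
    for node, (x, y, z) in neighbors.items():       # one 'pass' over internal nodes
        nodes[node] = median3(nodes[x], nodes[y], nodes[z])
    new_cost = tree_cost(nodes, edges)
    if new_cost >= cost:
        break
    cost = new_cost
print(nodes[4], nodes[5], "cost", cost)             # -> ACGT TCGA cost 4
```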
Launch Vehicle Propulsion Design with Multiple Selection Criteria
NASA Technical Reports Server (NTRS)
Shelton, Joey D.; Frederick, Robert A.; Wilhite, Alan W.
2005-01-01
The approach and techniques described herein define an optimization and evaluation approach for a liquid hydrogen/liquid oxygen single-stage-to-orbit system. The method uses Monte Carlo simulations, genetic algorithm solvers, a propulsion thermo-chemical code, power series regression curves for historical data, and statistical models in order to optimize a vehicle system. The system, including parameters for engine chamber pressure, area ratio, and oxidizer/fuel ratio, was modeled and optimized to determine the best design for seven separate design weight and cost cases by varying design and technology parameters. Significant model results show that a 53% increase in Design, Development, Test and Evaluation cost results in a 67% reduction in Gross Liftoff Weight. Other key findings show the sensitivity of propulsion parameters, technology factors, and cost factors and how these parameters differ when cost and weight are optimized separately. Each of the three key propulsion parameters (chamber pressure, area ratio, and oxidizer/fuel ratio) is optimized in the seven design cases, and results are plotted to show impacts to engine mass and overall vehicle mass.
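The optimization loop can be sketched as a plain genetic algorithm over the three propulsion parameters; the smooth objective below is a stand-in for the thermo-chemical code, regression curves, and cost models the paper couples together, and all bounds and numbers are invented:

```python
import random

random.seed(2)
BOUNDS = {"pc_mpa": (5.0, 25.0), "area_ratio": (20.0, 120.0), "of": (4.0, 7.0)}

def glow(x):
    """Stub vehicle-mass objective with an interior optimum; replaces the
    coupled propulsion/cost models of the paper."""
    return ((x["pc_mpa"] - 18.0) ** 2 / 50.0
            + (x["area_ratio"] - 75.0) ** 2 / 2000.0
            + (x["of"] - 6.0) ** 2)

def rand_design():
    return {k: random.uniform(*b) for k, b in BOUNDS.items()}

def mutate(x):
    k = random.choice(list(BOUNDS))
    lo, hi = BOUNDS[k]
    y = dict(x)
    y[k] = min(hi, max(lo, x[k] + random.gauss(0.0, (hi - lo) / 10.0)))
    return y

pop = [rand_design() for _ in range(30)]
for _ in range(100):
    pop.sort(key=glow)                                   # elitist selection
    pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(20)]
best = min(pop, key=glow)
print({k: round(v, 2) for k, v in best.items()}, round(glow(best), 4))
```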
Rabotyagov, Sergey; Campbell, Todd; Valcu, Adriana; Gassman, Philip; Jha, Manoj; Schilling, Keith; Wolter, Calvin; Kling, Catherine
2012-12-09
Finding the cost-efficient (i.e., lowest-cost) ways of targeting conservation practice investments for the achievement of specific water quality goals across the landscape is of primary importance in watershed management. Traditional economics methods of finding the lowest-cost solution in the watershed context (e.g., (5, 12, 20)) assume that off-site impacts can be accurately described as a proportion of on-site pollution generated. Such approaches are unlikely to be representative of the actual pollution process in a watershed, where the impacts of polluting sources are often determined by complex biophysical processes. The use of modern physically-based, spatially distributed hydrologic simulation models allows for a greater degree of realism in terms of process representation but requires a development of a simulation-optimization framework where the model becomes an integral part of optimization. Evolutionary algorithms appear to be a particularly useful optimization tool, able to deal with the combinatorial nature of a watershed simulation-optimization problem and allowing the use of the full water quality model. Evolutionary algorithms treat a particular spatial allocation of conservation practices in a watershed as a candidate solution and utilize sets (populations) of candidate solutions iteratively applying stochastic operators of selection, recombination, and mutation to find improvements with respect to the optimization objectives. The optimization objectives in this case are to minimize nonpoint-source pollution in the watershed, simultaneously minimizing the cost of conservation practices. A recent and expanding body of research is attempting to use similar methods and integrates water quality models with broadly defined evolutionary optimization methods (3, 4, 9, 10, 13-15, 17-19, 22, 23, 25). In this application, we demonstrate a program which follows Rabotyagov et al.'s approach and integrates a modern and commonly used SWAT water quality model (7) with a multiobjective evolutionary algorithm SPEA2 (26), and user-specified set of conservation practices and their costs to search for the complete tradeoff frontiers between costs of conservation practices and user-specified water quality objectives. The frontiers quantify the tradeoffs faced by the watershed managers by presenting the full range of costs associated with various water quality improvement goals. The program allows for a selection of watershed configurations achieving specified water quality improvement goals and a production of maps of optimized placement of conservation practices.
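A miniature version of the search such a program performs: candidate solutions are on/off practice assignments per field, the two objectives are practice cost and simulated pollution, and selection keeps the nondominated front, in the spirit of SPEA2-style algorithms. The pollution function below is a stub standing in for an actual SWAT run, and all per-field numbers are invented:

```python
import random

random.seed(1)
N_FIELDS = 12
COST = [random.uniform(1.0, 5.0) for _ in range(N_FIELDS)]   # practice cost per field
LOAD = [random.uniform(2.0, 9.0) for _ in range(N_FIELDS)]   # untreated pollution load

def objectives(x):
    """(total practice cost, remaining pollution); the pollution term stands
    in for a SWAT simulation of the candidate allocation x."""
    cost = sum(c for c, on in zip(COST, x) if on)
    pollution = sum(l for l, on in zip(LOAD, x) if not on)
    return cost, pollution

def nondominated(pop):
    objs = [objectives(x) for x in pop]
    return [x for x, fx in zip(pop, objs)
            if not any(all(fy[i] <= fx[i] for i in (0, 1)) and fy != fx
                       for fy in objs)]

pop = [[random.random() < 0.5 for _ in range(N_FIELDS)] for _ in range(60)]
for _ in range(50):
    front = nondominated(pop)
    mutants = [[(not g) if random.random() < 0.1 else g
                for g in random.choice(front)]
               for _ in range(60 - len(front))]                # refill by mutation
    pop = front + mutants
objs = sorted(objectives(x) for x in nondominated(pop))
print(f"{len(objs)} tradeoff points, from {objs[0]} to {objs[-1]}")
```

The sorted output is a crude cost/pollution tradeoff frontier, the toy analogue of the frontiers the program produces for watershed managers.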
Simulative design and process optimization of the two-stage stretch-blow molding process
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hopmann, Ch.; Rasche, S.; Windeck, C.
2015-05-22
The total production costs of PET bottles are significantly affected by the costs of raw material. Approximately 70 % of the total costs are spent for the raw material. Therefore, stretch-blow molding industry intends to reduce the total production costs by an optimized material efficiency. However, there is often a trade-off between an optimized material efficiency and required product properties. Due to a multitude of complex boundary conditions, the design process of new stretch-blow molded products is still a challenging task and is often based on empirical knowledge. Application of current CAE-tools supports the design process by reducing development time and costs. This paper describes an approach to determine optimized preform geometry and corresponding process parameters iteratively. The wall thickness distribution and the local stretch ratios of the blown bottle are calculated in a three-dimensional process simulation. Thereby, the wall thickness distribution is correlated with an objective function and preform geometry as well as process parameters are varied by an optimization algorithm. Taking into account the correlation between material usage, process history and resulting product properties, integrative coupled simulation steps, e.g. structural analyses or barrier simulations, are performed. The approach is applied on a 0.5 liter PET bottle of Krones AG, Neutraubling, Germany. The investigations point out that the design process can be supported by applying this simulative optimization approach. In an optimization study the total bottle weight is reduced from 18.5 g to 15.5 g. The validation of the computed results is in progress.
Optimizing Teleportation Cost in Distributed Quantum Circuits
NASA Astrophysics Data System (ADS)
Zomorodi-Moghadam, Mariam; Houshmand, Mahboobeh; Houshmand, Monireh
2018-03-01
The presented work provides a procedure for optimizing the communication cost of a distributed quantum circuit (DQC) in terms of the number of qubit teleportations. Because technology limitations do not allow large quantum computers to work as a single processing element, distributed quantum computation is an appropriate solution to overcome this difficulty. Previous studies have applied ad hoc solutions to distribute a quantum system for special cases and applications. In this study, a general approach is proposed to optimize the number of teleportations for a DQC consisting of two spatially separated and long-distance quantum subsystems. To this end, different configurations of locations for executing gates whose qubits are in distinct subsystems are considered, and for each of these configurations, the proposed algorithm is run to find the minimum number of required teleportations. Finally, the configuration which leads to the minimum number of teleportations is reported. The proposed method can be used as an automated procedure to find the configuration with the optimal communication cost for the DQC. This cost can be used as a basic measure of the communication cost for future works on distributed quantum circuits.
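The configuration search lends itself to a brute-force sketch: assign each gate whose qubits start in different partitions to side A or B, replay the circuit teleporting any qubit not at the executing side, and keep the cheapest assignment. The qubit-location model here is my simplification, not the paper's exact cost model:

```python
from itertools import product

home = {0: "A", 1: "A", 2: "B", 3: "B"}             # static qubit partition
circuit = [(0, 2), (1, 2), (0, 1), (2, 3), (0, 3)]  # two-qubit gates, in order

nonlocal_gates = [i for i, (q, r) in enumerate(circuit) if home[q] != home[r]]

def teleports(sides):
    """Count teleportations for one configuration: each nonlocal gate runs on
    its assigned side; a qubit not at that side is teleported and stays there
    until some later gate pulls it elsewhere."""
    loc, count = dict(home), 0
    side_of = dict(zip(nonlocal_gates, sides))
    for i, (q, r) in enumerate(circuit):
        side = side_of.get(i, loc[q])   # unassigned gates run where qubit q sits
        for qubit in (q, r):
            if loc[qubit] != side:
                loc[qubit] = side
                count += 1
    return count

best = min(product("AB", repeat=len(nonlocal_gates)), key=teleports)
print(f"best configuration {best}: {teleports(best)} teleportations")
```

Enumerating all 2^k configurations is exactly what makes the exhaustive version expensive as the number of nonlocal gates k grows, which motivates the optimization procedure the paper proposes.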